\section{Introduction} Cloud technologies have been evolving continuously in recent years. One important aspect of this evolution is the increasing decoupling of applications from the underlying infrastructure. Serverless computing is the most recent step on this journey, making applications independent even of the virtual infrastructure. In practice, this means that infrastructure scaling and maintenance tasks are all offloaded to the provider, and developers using the platform can focus on the application logic~\cite{berkeley}. The Function as a Service (FaaS) model is a realization of serverless computing that was first widely introduced by Amazon Web Services (AWS) through its Lambda service. Simply stated, using the FaaS development model, an application developer can write functions in any of the languages supported by the given platform and attach them to event sources, or triggers, and the platform automatically executes them on demand. Existing, especially commercial, platforms offer a wide range of triggers, including HTTP requests, database or other storage data updates, and various message queues. Functions in FaaS are in general required to be stateless; specifically, state should be provided as input or externalized to a database. After the introduction and initial success of AWS Lambda, other cloud players introduced similar services. Then, following the commercial trends, open source implementations also started to become available. While they are becoming more and more mature, the stability and supported features of these options are still behind the commercial alternatives. A notable exception is OpenWhisk, which is used in IBM's commercial offering. Other major open source implementations include Kubeless, OpenFaaS, Nuclio, Fission and the Fn project. 
As we will show in the following sections, the virtualization technologies used in today's FaaS solutions require at least a couple of hundred milliseconds to start. This extra startup latency is usually referred to as \textit{cold start} and directly impacts the service quality. To tackle this problem, FaaS systems use pre-warmed execution units, meaning that they keep environments up and running for a while in order to make subsequent \textit{warm starts} fast. While this approach makes cold starts rare, they still occur from time to time, and the user cannot predict when. A well performing FaaS system with only cold starts would therefore be an important advancement in this technology domain. On the one hand, keeping idle environments running wastes resources; on the other hand, a significant part of the complexity in existing platforms comes from the handling of warm environments, including per-function load monitoring, scaling and routing requests to the proper warm environments. If a platform were able to start functions fast enough for each individual incoming request, the system could be greatly simplified, as function scaling could simply be driven by the actual load. In this paper, we investigate whether it is possible to create such an FaaS framework. First, we dive into the recent advancements of container technology and show its current distance from a cold-only FaaS platform; then, we turn our focus to unikernels due to their lightweight characteristics. The idea of using unikernels in FaaS has been proposed in a few research papers during the last year \cite{nabla-paper, sand, berkeley}, but to the best of our knowledge we are the first to present a working prototype. The rest of the paper is structured as follows: in Section~\ref{sec:background} we discuss the important aspects of the technology domain, then in Section~\ref{sec:performance} we show our startup measurement results. 
In Section~\ref{sec:fn} we show how unikernels can be applied in the Fn FaaS platform. Finally, we discuss the related work in Section~\ref{sec:related}, then conclude the paper. \section{Technology Background}\label{sec:background} In theory, fundamentally different technologies can be used to build an FaaS system, ranging from starting or forking a process through different container technologies to unikernels or complete virtual machines. In the following we discuss the possible alternatives on this scale, then in the next section we compare their startup performance. \subsection{About using processes} \label{sec:processes} The most lightweight option for creating a new executor entity is using \textit{fork()} or \textit{clone()} in Linux. In our experience, forking can take between 55--500~$\mu$s depending on how much memory needs to be replicated, even with Linux's copy-on-write memory sharing. In order to have a process that can be forked for each incoming request, the process, with the function loaded, needs to be started first. As a result, the performance of starting a process draws a baseline for cold starts. The main limitation of processes is that while it is possible to restrict their access capabilities regarding the filesystem, networking, etc., once all needed features are enabled the system basically ends up using something like a Docker container. We measure the time required to start different types of processes in the next section, and we argue that this approach is a viable option for a single-tenant, performance-oriented FaaS platform. However, in this paper we seek options with stronger security isolation that can be confidently applied in multi-tenant environments. \subsection{Containers} \label{sec:containers} Since its introduction in 2013, Docker has become the de-facto solution for light-weight virtualization by combining several kernel components for the separation of container instances. 
Due to the light-weight and high-granularity nature of FaaS systems, Docker is in general a good fit, and it is used as an execution engine in all existing open source FaaS implementations. Docker is built in several layers, and starting a container requires gRPC based communication throughout its stack, which includes the CLI, the Docker Engine, containerd, a shim layer to decouple containers, and finally an Open Container Initiative (OCI) compatible runtime~\cite{oci}. OCI was established in 2015 to create open industry standards for container environments. It currently maintains two specifications: the Runtime Specification and the Image Specification. The former outlines how to run a filesystem bundle or image that is unpacked on disk. In reality, the definitions are mostly based on Docker components, and the runtime reference implementation, called runc, also comes from Docker. However, since the introduction of OCI, several other runtimes have appeared, including for example Kata Containers~\cite{kata} and gVisor~\cite{gvisor}. The true power of OCI is that the runtimes can be fundamentally different: while runc uses Linux namespaces and cgroups to create containers, gVisor is a user-space kernel built around syscall interception, and Kata Containers uses Qemu with KVM to launch lightweight general purpose virtual machines, blurring the distinction between containers and virtual machines. \subsection{Virtual machines} In theory it would be possible to use traditional virtual machines as FaaS execution units, but we ruled out this option as such a machine takes tens of seconds to start. AWS recently open sourced its light-weight hypervisor Firecracker, which is claimed to be the backend for its Lambda and Fargate services~\cite{firecracker}. Firecracker makes use of KVM to launch micro-virtual machines, combining the security benefits of virtual machines with the resource efficiency of containers. 
In the next section we will show that while Firecracker is faster than Qemu, it cannot beat runc and gVisor. The most light-weight available virtual machine options are unikernels. A unikernel is a single-purpose virtual machine: a single image including only the relevant drivers, operating system components and the application itself. Unikernels are usually single threaded and come with a single address space. They have been around for many years, and due to their single-purpose nature the related solutions are mostly designed for packet processing and high-performance computing. The key advantage of unikernels is the combination of high performance, hardware level separation and low footprint. These are made possible by their highly specialized nature and the internal simplicity that comes with it. The unikernel concept is not new and it has quite a few realizations. Most of the implementations use one of the generic, well-known hypervisors, like Xen or Qemu-KVM. In this paper we focus on IncludeOS~\cite{includeos-paper}, a single task system written in C++. IncludeOS builds on virtio drivers and comes with its own networking stack and standard library implementation. We use IncludeOS because, besides supporting Qemu-KVM, it can also be compiled for solo5. Solo5 is a sandboxed execution environment primarily intended for unikernels, providing extremely fast startup times~\cite{ukvm}. IncludeOS uses the \textit{hardware virtualized tender} (hvt) of solo5, previously known as ukvm, which builds on top of KVM. Solo5 was recently extended with a \textit{sandboxed process tender} (spt) that uses seccomp to separate processes~\cite{nabla-paper}. In an FaaS system that uses cold-only functions the image size is an important factor, as images must be transferred to and cached on many, in an extreme setting all, of the machines in the cluster. Looking at the different options, the solo5 example applications take only around 200~kB of disk space. 
A simple echo server built using IncludeOS is around 2.5~MB, while a base Alpine Linux container is around 6~MB. Finally, the base Firecracker kernel is around 20~MB and the rootfs we use is around 50~MB. \section{Comparing startup performance} \label{sec:performance} In this section we compare the startup performance of the different virtualization technologies to get a clear picture of the different options. \subsection{General FaaS architecture} FaaS systems are built from highly similar main components, namely a gateway, an HTTP or event router that is also called the dispatcher, a cluster manager and the function executor units. A request to run a function is received by the gateway, which passes it to the dispatcher; the dispatcher looks for available (warm) units to execute the request and may request a new, cold, unit from the cluster manager. In production ready FaaS frameworks the dispatcher also performs authentication and authorization before executing requests. \subsection{Our measurement system} In order to better understand the startup time overhead of the different runtime options, we created a benchmarking tool using the C++ based CppCMS web framework~\cite{cppcms}. In our setup CppCMS acts as an FaaS gateway, and our measurement application running on top of CppCMS acts as event router and dispatcher by exposing different URLs, like \textit{/docker\_runc} or \textit{/includeOS}. On receiving an HTTP request, the application executes a simple echo application using the given technology. For example, if \textit{/docker\_runc} is queried, the framework starts an Alpine Docker container with \textit{/bin/date} as the command. The scaling inside our physical measurement machine is automatically handled by the CppCMS framework, which is configured to have multiple processes for accepting connections and 20 worker threads. We use the hey HTTP load generator tool to generate requests and measure the latency~\cite{hey}. 
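The cold-only dispatch principle described above can be sketched in a few lines of Go; this is an illustrative sketch of the idea, not the C++/CppCMS tool used in our measurements, and the endpoint name and echoed command are our own choices:

```go
package main

import (
	"fmt"
	"net/http"
	"os/exec"
)

// runCold starts a brand-new executor process for a single request and
// returns its output: no warm pool and no per-function routing state.
func runCold(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).Output()
	return string(out), err
}

// echoHandler maps one URL to one runtime, mirroring endpoints such as
// /docker_runc in the measurement application.
func echoHandler(w http.ResponseWriter, r *http.Request) {
	out, err := runCold("echo", "cold start")
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	fmt.Fprint(w, out)
}

func main() {
	http.HandleFunc("/echo", echoHandler)
	// Blocks and serves; scaling is driven purely by the incoming load.
	http.ListenAndServe(":8080", nil)
}
```

In such a design, each incoming request pays exactly one executor startup, which is why the startup latency of the chosen virtualization technology dominates the end-to-end numbers reported below.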
Hey is an easy-to-use tool that sends a given number of HTTP requests with a defined parallelism. We run hey on a different machine, and the two servers are connected through a dedicated 40~Gbps Mellanox network. We use Ubuntu 18.04 with kernel version 4.18, and our servers are equipped with Intel Xeon E5-2670 CPUs, 64~GB of memory and Samsung PM1633a SSD drives. For the container related measurements we used Docker 18.09.3, runc 1.0.0-rc6, Kata Containers 1.4.3 and a gVisor commit from the 12th of March 2019. We used Firecracker 0.15 and version 0.4 of the solo5 hypervisor for the spt measurements. Finally, we used IncludeOS 0.14, which builds on a solo5 commit from July 2018. We used different parallelism configurations for the measurements; for example, \textit{10 parallel calls} in the figures implies that 10 requests are in flight at any given time. As our measurement machine has 24 cores, we show the behaviour under overload conditions by setting the highest load to 40 parallel requests. We also validated that the machine sending the requests is not a bottleneck in this range. For each measurement we used 10000 requests, and we use boxplots with whiskers reaching the 1st and 99th percentiles. \subsection{Baseline Docker results} According to our measurements, starting a single Alpine Linux Docker container via the CLI takes around 650~ms with the default runc OCI runtime in interactive mode, and 450~ms as a daemon. Starting runc directly with the most basic configuration and the exported Alpine image takes around 150~ms. The difference is the accumulated overhead of various factors. In order to run an OCI runtime, a \textit{rootfs} containing the filesystem and a \textit{configuration file} are required. Adding the namespace configurations used by Docker to the basic runc configuration file adds roughly 100~ms to the time required to start the environment. 
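For reference, namespaces are declared in the \textit{linux} section of the OCI bundle's \textit{config.json}; a minimal, abridged fragment of the kind we add to the basic runc configuration looks like the following (the full file also carries the process, root and mount definitions):

```json
{
  "linux": {
    "namespaces": [
      { "type": "pid" },
      { "type": "network" },
      { "type": "ipc" },
      { "type": "uts" },
      { "type": "mount" }
    ]
  }
}
```

Each additional entry asks the runtime to create and configure one more namespace at container start, which is where the extra latency discussed here comes from.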
The largest overhead comes from the networking configuration, followed by the mount and inter-process communication namespaces. Apart from the overhead of gRPC communication throughout the Docker software stack, most of the remaining difference comes from the Docker storage drivers that are needed to create a rootfs for runc. Docker uses the \textit{overlay2} storage driver by default, a union filesystem that makes it possible to place a container specific writeable layer on top of multiple read only container image layers while logically presenting them as a single filesystem. We compared the different available storage drivers and found that the default option performs best from the perspective of startup latency. \subsection{Comparing OCI runtimes} \begin{figure} \includegraphics[width=0.48\textwidth]{figures/OCI.png} \caption{Startup times with OCI runtimes and Firecracker. For better visibility Kata Containers is omitted under the overload condition, with median value at 2.2 seconds and 99th percentile at 3.3 seconds.} \label{fig:OCI} \end{figure} \begin{figure} \includegraphics[width=0.48\textwidth]{figures/docker_alt.png} \caption{Startup times with Docker} \label{fig:docker_alt} \end{figure} The most important advantage of OCI is that it makes it possible to use radically different runtimes under Docker. First, in Figure~\ref{fig:OCI} we show our measurement results under different levels of parallelism with three OCI runtimes, and we also include measurements with Firecracker. We use the same Alpine rootfs for all OCI runtimes, and an Alpine image for Firecracker. As can be seen, gVisor provides better results than runc, while Kata Containers is clearly slower than the other options due to the overhead of starting up Qemu-KVM each time. We also found the startup performance of Firecracker to be quite comparable to that of the OCI runtimes. 
While all options scale fairly well up to 20 parallel start requests, when we go over the number of cores available in the server the latency is impacted, especially for Kata Containers. Figure~\ref{fig:docker_alt} suggests that the overhead of the Docker layers above the OCI level hides most of the performance differences. Moreover, starting up a container takes over 10 seconds under the highest measured load, most probably due to limitations in accessing kernel resources and creating the union filesystems. In summary, we argue that while new OCI runtimes, such as gVisor, provide better startup characteristics, as long as Docker and all the required kernel configurations add hundreds of milliseconds of overhead it is not practical to build an FaaS system without warm starts using Docker. \subsection{Unikernels and processes} While containers are too slow in their current form to make an FaaS platform possible without warm starts, Figure~\ref{fig:unikernels} shows that processes and unikernels provide a good opportunity. As can be expected, starting a process (e.g. a compiled Go application) brings the best latency characteristics. An interpreted language, like Python, takes significantly more time to start even without libraries. Loading a module like \textit{scipy} adds another 80~ms to the startup in our experience. The figure shows that IncludeOS unikernel instances using solo5's hvt can start in around 8--15~ms under moderate load. We also included the basic test application of the new spt tender of solo5, and as can be seen it gives almost the same performance as processes. The example application lacks the libraries, dynamic memory management and other features that come with IncludeOS. Once the unikernel supports this tender, the related startup times are expected to be better than with hvt. Finally, we also measured the overhead of the CppCMS framework by adding a \textit{/noop} URL. 
The overhead is only around 0.7~ms for low-load scenarios, but grows considerably over 20 parallel requests. While this type of overhead is independent of the virtualization technologies, it exists in all FaaS implementations as requests need to go through the gateway and dispatcher components. \begin{figure}[b] \centerline{\includegraphics[width=0.48\textwidth]{figures/unikernels.png}} \caption{Measured startup times with processes and unikernels} \label{fig:unikernels} \end{figure} \section{FaaS with unikernels} \label{sec:fn} Based on the results, we determined that unikernels, and in our case IncludeOS specifically, can potentially be used as an FaaS runtime environment. As we will show in this section, an FaaS with only cold IncludeOS execution is a viable alternative to keeping warm execution units. One of the most important advantages of our proposed solution is that it does not use idle executors and thus eliminates resource waste. The idle timeout for warm executor units is obviously a configurable parameter, set either by the operator or by the developer; however, this configuration presents a trade-off between wasting resources and experiencing frequent cold starts. Wang et al.~\cite{curtains} analyzed FaaS platforms in public clouds and found that in the AWS cloud the Firecracker instances serving the same function are co-located on the same machine roughly as long as they fit into the physical memory. They measured that AWS keeps idle function executors up and running for nearly half an hour, effectively wasting a significant amount of memory and CPU resources. They also reported that co-location influences startup times when a sudden scale-out is required, similarly to what we have shown in Figure~\ref{fig:OCI}. Compared to AWS Lambda, in Fn the idle timeout is by default configurable per function, and the system keeps the executor containers in a paused state, still reserving resources. 
In relation to the presented alternatives, our unikernel based Fn extension essentially does not waste resources, as the unikernel exits immediately after executing the user's code. Furthermore, while in a traditional FaaS platform the user can never know whether a given request will experience an extended delay due to a cold start, with our solution the execution latency is predictable. \subsection{The architecture of Fn with unikernels} There are multiple open source FaaS solutions available; the ones with tight Kubernetes integration cannot be easily modified to run unikernels instead of Docker containers. Out of the few remaining options we chose the Go based Fn Project~\cite{fn}. The Fn server can be separated into three main components: the \textit{gateway}, the \textit{agent} and the \textit{driver}. The gateway is responsible for receiving the requests from the clients. The agent manages the life-cycle of function runtimes on the given host through the driver, which handles runtime specific commands. By default, Fn has only the Docker driver as a fully functional implementation. We added a new driver to provide the IncludeOS support. As our unikernels exit after execution, the lifecycle management functionality of the agent becomes unnecessary with our approach. Early versions of Fn were able to run both warm and cold-only functions, but the latter has been removed from recent releases. Currently an extra wrapper, called the FDK (Function Development Kit), is used to turn any user function into an executable container. The Docker driver communicates with the FDK running in a container via HTTP over a Unix socket, and the FDK calls the user function internally. In our current IncludeOS driver implementation we do not use an FDK, but use standard input and output for communication with the unikernel, as was done in Fn before the introduction of the FDK. Functions can be added to the system using the \textit{deploy} command of the Fn CLI. 
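The stdin/stdout contract used by our IncludeOS driver can be illustrated with a short Go sketch; here \textit{cat} stands in for the unikernel binary (a real invocation would run the solo5 tender with the deployed image), and the function and payload names are our own:

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

// invokeViaStdio starts an executor binary, writes the request payload to
// its stdin, and returns whatever the executor prints on stdout -- the
// same contract our driver uses instead of the HTTP-over-socket FDK.
func invokeViaStdio(binary string, args []string, payload string) (string, error) {
	cmd := exec.Command(binary, args...)
	cmd.Stdin = strings.NewReader(payload)
	var out bytes.Buffer
	cmd.Stdout = &out
	if err := cmd.Run(); err != nil {
		return "", err
	}
	return out.String(), nil
}

func main() {
	// `cat` echoes stdin to stdout, mimicking an echo function.
	resp, err := invokeViaStdio("cat", nil, `{"name":"world"}`)
	if err != nil {
		panic(err)
	}
	fmt.Println(resp)
}
```

Because the executor exits when it finishes writing its output, the driver needs no pause/unpause or idle-timeout logic: process termination is the lifecycle.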
With Docker containers, the user function and a \textit{yaml} configuration file are required, and the CLI tool automatically creates a Docker container by wrapping the function using the proper language specific FDK. Following this approach, we added an extra option to indicate that an IncludeOS build is required for the given function. The \textit{boot} IncludeOS build script is then used to create the solo5 specific image, which is placed in a specific directory on the host. When a function is called, the new driver starts the deployed IncludeOS image using the solo5 hypervisor, passes the received user input as a parameter and waits for output on stdout. After the execution of the function, the unikernel simply exits and, in parallel, the user gets back the result. \subsection{Measurement results} To present our solution in a realistic configuration, we deployed our modified Fn platform in the Stockholm region of the AWS cloud. The deployment contained both the original Docker and our IncludeOS based Fn versions. We used an \textit{m5.metal} instance for the deployment, as IncludeOS needs access to KVM, which is only available with \textit{metal} instances. We used PostgreSQL as the backend database for Fn, as it gave significant performance improvements compared to the default sqlite option. We also deployed a Go Lambda function in the same region and made it accessible through the AWS API Gateway. We performed measurements from Ericsson's lab in Stockholm and summarize the results in Table~\ref{tab:inaws}. As can be seen, the unikernel based approach gives an order of magnitude better cold start time than both the unmodified Fn and AWS Lambda. 
\begin{table}[ht] \centering \begin{tabular}{| l || c | c | c |} \hline \multicolumn{1}{|c||}{Environment} & Cold start & Warm start & Connection setup \\ \hline\hline Fn IncludeOS & 33.4 & - & \multirow{2}{*}{6.9} \\ \cline{1-3} Fn Docker & 288.3 & 13.6 & \\ \hline AWS Lambda & 449.7 & 78.0 & 50.1 \\ \hline \end{tabular} \caption{Median function execution latency measured from Stockholm with Fn and Lambda deployed in the AWS Stockholm region. The numbers are in ms.} \label{tab:inaws} \end{table} An important difference between the Fn and the Lambda deployments is that the API Gateway uses TLS, which adds considerable overhead to the connection setup time due to the required 3 round-trips and the computational costs. In the table the connection setup applies to both cold and warm starts, which also means that re-using the same TCP/TLS connection (if possible) is a powerful optimization option. Our solution with cold starts gives roughly the same latency as warm functions in AWS Lambda, considering the connection creation overhead. For comparison, using an EC2 instance in the same AWS region as a measurement point gives only slightly lower connection setup overhead. As expected, the overhead grows with distance, reaching around 200~ms when the Lambda function is called from our lab in Budapest. To evaluate the pure, worst-case difference between the Docker and IncludeOS based Fn solutions, we ran a set of measurements in our local lab environment. First, we observed a notable difference in the deployment time: the C++ compilation in the case of IncludeOS takes about 3.5 seconds, while Docker requires 9--10 seconds to create the image. Figure~\ref{fig:fn} shows that the startup and execution of our test function with IncludeOS takes around 10--20~ms. In comparison, the latency with a warm Go function is 3--5~ms, at the price of wasting the resources reserved by the continuously running Docker containers whenever they are idle, even if only for a few milliseconds. 
\begin{figure}[t] \centerline{\includegraphics[width=0.48\textwidth]{figures/fn.png}} \caption{Fn measurement results in our local lab environment} \label{fig:fn} \end{figure} Finally, we note that with more complex functions the overhead of Fn with IncludeOS becomes less and less significant compared to the execution time. This minimal overhead enables creating FaaS environments with cold starts only, without the need to continuously monitor, scale and run idle function executor units: everything can be scheduled on demand. \subsection{Limitations} In this section we showed that our unikernel based FaaS approach provides latency characteristics similar to those of traditional warm execution approaches. Our solution has a few limitations in its current form that require further work. Probably the most interesting question is the proper handling and distribution of function images. While Docker has a complete ecosystem for container image management, no standard solution is available for unikernels at the moment. Furthermore, we get the best performance if the image is already available on the machine that receives the request. As we discussed before, starting an interpreted language like Python with the complex modules needed for the function adds around 80~ms of overhead to the execution. Due to this limitation, the presented unikernel approach in its current form is better suited for compiled languages like C++ or Go. \section{Related Work} \label{sec:related} Serverless technologies, and especially FaaS, have received a lot of attention in recent years. Various papers have evaluated the performance, usability and security aspects of both commercial \cite{curtains, cold-factors, performance-factors} and open source \cite{open-serverless-eval, open_source_review} FaaS options. Jonas et al. highlighted in their technical report that performance, especially predictable performance, is a key limitation of today's solutions \cite{berkeley}. 
They also suggested that demand is increasing for fine grained security contexts and mentioned unikernels as a possible solution to minimize the attack surface. Besides deployment and security aspects, Wang et al. analyzed the cold start performance of FaaS implementations in the AWS, Azure and Google clouds~\cite{curtains}. They found that, depending on the configuration, cold starts take a few hundred milliseconds in the AWS and Google clouds and around 3.5~seconds in Azure, with high fluctuation in the latter. Manner et al. investigated multiple factors influencing cold starts in the AWS and Azure clouds~\cite{cold-factors}. They found that cold starting Java based functions takes 2--3 times longer than using JavaScript for the same purpose, with the latter taking around 600~ms in the best scenarios. Akkus et al. suggested in their paper that unikernels are a viable option for serverless, with concerns about limitations on flexibility~\cite{sand}. In their solution they addressed cold starts by using long running Docker containers to separate users and internally forking a worker process for each request. The development of solo5's spt was done through the work on Nabla containers~\cite{nabla-paper}. Nabla is an OCI compliant runtime that can only run special container images containing a binary for the solo5 spt. While our measurements show that solo5 spt provides extraordinary startup times, adding Docker on top of it essentially hides its advantages compared to our proposed solution with IncludeOS. \section{Conclusion} In this paper we showed that using unikernels as the runtime technology is a feasible option for FaaS systems. We demonstrated that while container technology advances quickly, startup still takes at least a few hundred milliseconds, making warm execution units necessary. 
We showed that unikernels provide a good alternative with startup times under 15~ms, enabling FaaS platforms without all the resource waste and complexity needed for keeping the environments warm.
\section{INTRODUCTION} The role of strangeness in low and medium energy nuclear physics is currently of considerable interest, as it has the potential to deepen our understanding of the relevant strong interaction mechanisms in the non-perturbative regime of QCD. For example, the system of a strange baryon (hyperon $Y$) and a nucleon ($N$) is in principle an ideal testing ground for investigating the importance of SU(3)$_{flavor}$ symmetry in hadronic interactions. Existing meson exchange models of the $YN$ force usually assume SU(3) flavor symmetry for the hadronic coupling constants, and in some cases \cite{Holz,Reu} even the SU(6) symmetry of the quark model. The symmetry requirements provide relations between couplings of mesons of a given multiplet to the baryon current, which greatly reduce the number of free model parameters. Specifically, coupling constants at the strange vertices are then connected to nucleon-nucleon-meson coupling constants, which in turn are constrained by the wealth of empirical information on $NN$ scattering. Essentially all these $YN$ interaction models can reproduce the existing $YN$ scattering data, so that at present the assumption of SU(3) symmetry for the coupling constants cannot be ruled out by experiment. One should note, however, that the various models differ dramatically in the treatment of the scalar-isoscalar meson sector, which describes the baryon-baryon interaction at intermediate ranges. For example, the Nijmegen group \cite{NijII,NijIII,NijIV} views this interaction as being generated by genuine scalar meson exchange. In their model D \cite{NijII} an $\epsilon(760)$ is exchanged as an SU(3)$_{\it flavor}$ singlet. In models F~\cite{NijIII} and NSC~\cite{NijIV}, a scalar SU(3) nonet is exchanged --- namely, two isospin-0 mesons (besides the $\epsilon(760)$, the $\epsilon '(1250)$ in model F and $S^*(975)$ in model NSC), an isospin-1 meson $\delta$ and an isospin-1/2 strange meson $\kappa$. 
The T\"ubingen model \cite{Tueb}, on the other hand, which is essentially a constituent quark model supplemented by $\pi$ and $\sigma$ exchange at intermediate and short ranges, treats the $\sigma$ meson as an SU(3) singlet with a mass of 520 MeV. In the (full) Bonn $NN$ potential~\cite{MHE} the intermediate range attraction is provided by uncorrelated and correlated $\pi\pi$ exchange processes (Figs.~\ref{fig1}(a)--(b) and Fig.~\ref{fig1}(c), respectively), with $NN$, $N\Delta$ and $\Delta\Delta$ intermediate states. {}From earlier studies of the $\pi\pi$ interaction it is known that $\pi\pi$ correlations are important mainly in the scalar-isoscalar and vector-isovector channels. In one-boson-exchange (OBE) potentials these are included effectively via the exchange of sharp mass $\sigma$ and $\rho$ mesons. One disadvantage of such a simplified treatment is that this parameterization cannot be transferred into the hyperon sector in a well-defined manner. Therefore in the earlier $YN$ interaction models of the J\"ulich group~\cite{Holz}, which start from the Bonn $NN$ potential, the coupling constants of the fictitious $\sigma$ meson at the strange vertices ($\Lambda\Lambda\sigma$, $\Sigma\Sigma\sigma$) are free parameters --- a rather unsatisfactory feature of the models. This is especially true for the extension to the strangeness $S=-2$ channels, interest in which was initiated by the prediction of the H-dibaryon by Jaffe~\cite{Jaffe}. Unfortunately, so far there is no empirical information about these channels. \begin{figure}[h] \vskip 4cm \special{psfile=sen1.ps hoffset=-10 voffset=-20 hscale = 70 vscale=70} \caption{Two-pion exchange in the $NN$ interaction: (a) uncorrelated iterative and (b) crossed boxes, (c) correlated two-pion exchange.} \label{fig1} \end{figure} These problems can be overcome by an explicit evaluation of correlated $\pi\pi$ exchange in the various baryon-baryon channels. 
A corresponding calculation was already done for the $NN$ case (Fig.~\ref{fig1}(c)) in Ref. \cite{Kim}. The starting point there was a field theoretic model for both the $N\anti{N}\to\pi\pi$ Born amplitudes and the $\pi\pi$ and $K\anti{K}$ elastic scattering~\cite{Lohse}. With the help of unitarity and dispersion relations the amplitude for the correlated $\pi\pi$ exchange in the $NN$ interaction was computed, showing characteristic discrepancies with the $\sigma$ and $\rho$ exchange in the (full) Bonn potential. In a recent paper \cite{REUBER} the J\"ulich group presented a microscopic derivation of correlated $\pi\pi$ exchange in various baryon-baryon ($BB'$) channels with strangeness $S=0, -1$ and $-2$. The $K\anti{K}$ channel is treated on an equal footing with the $\pi\pi$ channel in order to reliably determine the influence of $K\anti{K}$ correlations in the relevant $t$-channels. In this approach one can replace the phenomenological $\sigma$ and $\rho$ exchanges in the Bonn $NN$ \cite{MHE} and J\"ulich $YN$ \cite{Holz} models by correlated processes, and eliminate undetermined parameters such as the $BB'\sigma$ coupling constants \cite{we}. The resulting models thus have more predictive power and should allow a more reliable treatment of the $S=-2$ baryon-baryon channels. \begin{figure}[hb] \vskip 4.3cm \special{psfile=sen2.ps hoffset=-10 voffset=-20 hscale = 70 vscale=70} \caption{The present dynamical model for the $B\bar B \rightarrow \mu \bar \mu$ amplitude ($\mu \bar \mu$ = $\pi\pi$, $K\bar K$).} \label{fig5} \end{figure} In the next section we describe the basic ingredients used to derive correlated $\pi\pi$ and $K\anti{K}$ exchange potentials for the baryon--baryon amplitudes in the $\sigma$ and $\rho$ channels. We also give a short outline of the microscopic model for the required $B\anti{B'}\to\pi\pi,\,K\anti{K}$ amplitudes. 
The results for the potentials are presented in Section 3, where we focus on the $\pi\pi$ correlations in the $\sigma$ channel. These are compared with the results obtained from the exchange of a sharp mass $\sigma$ meson in the Bonn $NN$ \cite{MHE} and J\"ulich $YN$ \cite{Holz} potentials. Furthermore, we introduce and discuss results obtained with a parameterization of correlated $\pi\pi$ and $K\anti{K}$ exchange potentials by an effective $\sigma$ exchange for the $NN$ and $YN$ channels. Finally, some concluding remarks are made in Section 4. \section{MODEL FOR CORRELATED $2\pi$ EXCHANGE} Figure \ref{fig5} shows a graphic representation of our dynamical model for correlated $2\pi$ exchange. Here $B\anti{B'}$ stands for $N\anti{N}$, $\Lambda \anti{ \Lambda}$, $\Lambda \anti{ \Sigma}$/$\Sigma \anti{ \Lambda}$ or $\Sigma \anti{ \Sigma}$. The basic ingredients are the $B\anti{B'} \rightarrow \pi\pi$, $K\anti{K}$ Born amplitudes and the $\pi\pi$-$K\anti{K}$ interaction, which we outline below. Note that a microscopic model of correlated $\pi\pi$ exchange for the $NN$ case was already presented in Ref.~\cite{Kim}. Interestingly enough, the resulting strength turned out to be considerably larger than that from sharp mass $\sigma '$ and $\rho$ exchanges used in the (full) Bonn potential \cite{MHE}. \subsection{$\pi\pi \rightarrow \pi\pi$ Amplitude} The dynamical model used here is based on the meson exchange framework of Refs. \cite{Lohse,Janssen} involving the $\pi\pi$ and $K\anti{K}$ coupled channels. The driving terms for $\pi\pi \rightarrow \pi\pi$ consist of exchange and pole diagrams (Fig.~\ref{fig6}, first and last diagram, respectively) with $\epsilon \equiv f_0(1440)$, $\rho \equiv \rho (770)$ and $f_2 \equiv f_2(1274)$ intermediate states. The coupling $\pi\pi \rightarrow K\anti{K}$ is provided by $K^*(892)$ exchange, illustrated in the second diagram in Fig.~\ref{fig6}. 
The bare parameters (masses, coupling constants) in the pole diagrams are dressed by unitarizing the interaction terms in a relativistic Schr\"odinger equation. The $K\anti{K} \rightarrow K\anti{K}$ interaction (Fig.~\ref{fig6}, third diagram) is strongly isospin dependent: in the scalar-isoscalar channel all contributions ($\rho$, $\omega$, $\phi$) add up and provide a sizable attraction, which leads to a $K\anti{K}$ bound state at $f_0(975)$ (see Fig.~\ref{fig8} for the resulting phase shifts) --- the genuine scalar resonance $f_0(1440)$ sits at a higher energy, at about 1.4 GeV. On the other hand, in the vector-isovector channel there is strong cancellation between $\rho$ and $\omega$, $\phi$ exchange, since the former changes sign. Consequently, the influence of the $K\anti{K}$ channel here is negligible. The corresponding phase shift is dominated by the $\rho$-pole diagram, as illustrated in Fig.~\ref{fig8}. \begin{figure}[t] \vskip 4cm \special{psfile=sen3.ps hoffset=-10 voffset=-20 hscale = 70 vscale=70} \caption{Meson exchange diagrams included in the dynamical model for the $\pi\pi$, $K\anti{K}$ interaction \protect\cite{Lohse}.} \label{fig6} \end{figure} \subsection{$B\anti{B} \rightarrow 2\pi $ Helicity Amplitudes} Based on the $\pi\pi \rightarrow \pi\pi$ amplitude (which has a well defined off-shell behavior) the evaluation of diagrams such as in Fig.~\ref{fig1}(c) for any $BB'$ system can be done in two steps. Firstly the $N\anti{N} \ (\Lambda \anti{\Lambda}, \ \Sigma \anti{\Sigma}, \ {\rm etc}.) \rightarrow 2\pi$ amplitudes (illustrated in Fig.~\ref{fig5}) are determined in the pseudophysical region ($t \leq 4 m^2_\pi$). For the transition Born amplitude, $V$, both the $N$ and $\Delta$ (or $\Lambda$, $\Sigma$ and $Y^*$ in case of $Y\anti{Y'}$) exchanges are taken into account. Corresponding coupling constants in the transition interaction are taken from the Bonn $NN$ \cite{MHE} and the J\"ulich $YN$ potential models \cite{Holz}. 
Note that this cannot be done for the cutoff masses at the vertices since the form factors now act in quite a different kinematic regime, where the baryon is the essential off-shell particle. For the $NN$ case, the parameters can be fixed independently (to $\Lambda_{NN\pi} = 1.9$ GeV, $\Lambda_{N\Delta \pi} = 2.1$ GeV) by using quasi-empirical information obtained by H\"ohler et al.~\cite{Hoehler2} by analytically continuing the $\pi N$ and $\pi \pi$ scattering data. Such information is, however, not available for the $Y\anti{Y}$ ($Y = \Lambda, \Sigma$) channels, so that here we make the reasonable assumption that $\Lambda_{\Lambda\Sigma\pi} \simeq \Lambda_{\Sigma\Sigma\pi} \simeq \Lambda_{NN\pi}$ and $\Lambda_{\Lambda Y^*\pi} \simeq \Lambda_{\Sigma Y^* \pi} \simeq \Lambda_{N\Delta\pi}$. \section{POTENTIAL FROM CORRELATED $\pi\pi$ AND $K\anti{K}$ EXCHANGE} {}From the $B\anti{B'} \rightarrow 2\pi $ helicity amplitudes the spectral functions can be calculated (see Ref.~\cite{REUBER} for details), which are then inserted into dispersion integrals to obtain the (on-shell) baryon-baryon interaction: \begin{equation} V^{(0^+,1^-)}_{B_1',B_2';B_1,B_2}(t) \propto \int_{4m^2_\pi}^\infty dt' {\rho^{(0^+,1^-)}_{B_1',B_2';B_1,B_2}(t') \over t'-t}, \ \ t < 0 . \end{equation} We should note that the helicity amplitudes obtained according to Fig.~\ref{fig5} still generate the uncorrelated (first diagram on the r.h.s. of Fig.~\ref{fig5}), as well as the correlated pieces (second and third diagrams). Thus, in order to obtain the contribution of the truly correlated $\pi\pi$ and $K\anti{K}$ exchange we must eliminate the former from the spectral function. This can be done by calculating the spectral function generated by the Born term and subtracting it from the total spectral function: \begin{equation} \rho^{(0^+,1^-)} \longrightarrow \rho^{(0^+,1^-)} - \rho^{(0^+,1^-)}_{\rm Born} . 
\end{equation} Note that the spectral functions characterize both the strength and range of the interaction. Clearly, for sharp mass exchanges the spectral function becomes a $\delta$-function at the appropriate mass. It is convenient to present our results in terms of effective coupling strengths, by parameterizing the correlated processes by (sharp mass) $\sigma$ and $\rho$ exchanges. The interaction potential resulting from the exchange of a $\sigma$ meson with mass $m_\sigma$ between two $J^P=1/2^+$ baryons $A$ and $B$ has the structure: \begin{equation} V^{\sigma}_{A,B;A,B}(t) \ = \ g_{AA\sigma} g_{BB\sigma} {F^2_\sigma (t) \over t - m^2_\sigma} , \end{equation} where a form factor $F_\sigma(t)$ is applied at each vertex, taking into account the fact that the exchanged $\sigma$ meson is not on its mass shell. This form factor is parameterized in the conventional monopole form, \begin{equation} F_\sigma (t) = {\Lambda ^2_\sigma - m^2_\sigma \over \Lambda ^2_\sigma - t} \ , \end{equation} with a cutoff mass $\Lambda_\sigma$ assumed to be the same for both vertices. The correlated potential as given in Eq.~(1) can now be parameterized in terms of $t$-dependent strength functions $G_{B_1',B_2';B_1,B_2}(t)$, so that for the $\sigma$ case: \begin{equation} V^{(0^+)}_{A,A;B,B}(t) = G^{\sigma}_{A,A;B,B}(t) {1 \over t - m^2_\sigma}. \end{equation} The effective coupling constants are then defined as: \begin{equation} g_{AA\sigma}g_{BB\sigma} \quad\longrightarrow \quad G_{AB\to AB}^\sigma (t)= {(t-m_\sigma^2)\over\pi F^2_\sigma(t)} \int_{4m_\pi^2}^{\infty} { \rho^{(0+)}_{AB;AB}(t') \over t'-t} dt' . \label{eq:3_effccsig} \end{equation} Similar relations can be also derived for the correlated exchange in the isovector-vector channel \cite{REUBER}, which in this case will involve vector as well as tensor coupling pieces. 
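As a quick numerical illustration of the dispersion integral in Eq.~(1) (our own sketch with toy numbers, not part of the models discussed here), the script below checks that when the spectral function collapses to a $\delta$-function at $t' = m_\sigma^2$, the integral reduces to the sharp-mass propagator $1/(m_\sigma^2 - t)$ of OBE exchange --- the statement made above that for sharp mass exchanges the spectral function becomes a $\delta$-function at the appropriate mass.

```python
import numpy as np

m_pi, m_sigma = 0.138, 0.550   # GeV (sharp sigma mass of the Bonn/Juelich models)

def trapz(y, x):
    # simple trapezoidal rule, to keep the sketch self-contained
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def dispersion_potential(rho, t, t_min=4 * m_pi**2, t_max=1.0, n=100001):
    # V(t) ~ int_{4 m_pi^2}^{inf} dt' rho(t') / (t' - t) for t < 0 (units: GeV^2);
    # the upper limit is truncated where the toy spectral function has died off
    tp = np.linspace(t_min, t_max, n)
    return trapz(rho(tp) / (tp - t), tp)

def rho_sharp(tp, width=1e-3):
    # narrow normalised Gaussian standing in for delta(t' - m_sigma^2)
    return np.exp(-(tp - m_sigma**2) ** 2 / (2 * width**2)) / np.sqrt(2 * np.pi * width**2)

t = -0.1                                 # a spacelike momentum transfer, GeV^2
v_dispersion = dispersion_potential(rho_sharp, t)
v_sharp_obe = 1.0 / (m_sigma**2 - t)     # sharp-mass sigma propagator
# the two agree to well under a percent
```

The Born-term subtraction of Eq.~(2) acts on the spectral function before it enters such an integral, so the same quadrature applies unchanged to the truly correlated piece.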
\begin{figure}[htb] \vskip 12cm \special{psfile=sen4.ps hoffset= 20 voffset=-60 hscale = 60 vscale=60} \caption{$\pi\pi$ phase shifts in the $J^P=0^+$ ($\sigma$) and $J^P=1^-$ ($\rho$) channels and the corresponding inelasticity in the $\sigma$ channel.} \label{fig8} \end{figure} \vfill \eject We stress once more that this parameterization does not involve any approximations as long as the full $t$-dependence of the effective coupling strengths is taken into account. The parameters of $\sigma$ and $\rho$ exchange are chosen to have the same values in all particle channels. The masses $m_\sigma$ and $m_\rho$ of the exchanged particles have been set to the values used in the Bonn-J\"ulich models of the $NN$~\cite{MHE} and $YN$~\cite{Holz} interactions,\ $m_\sigma=550$ MeV, $m_\rho=770$ MeV. The cutoff masses $\Lambda_{\sigma}$ and $\Lambda_{\rho}$ have been chosen so that the coupling strengths in the $S=0, -1$ baryon-baryon channels vary only weakly with $t$. The resulting values ($\Lambda_\sigma=2.8$ GeV, $\Lambda_\rho=2.5$ GeV) are quite large compared to the values of the phenomenological parameterizations used in Refs.~\cite{Holz,MHE}, and thus represent rather hard form factors. Note that in the OBE framework the three reactions $NN\rightarrow NN$, $YN\rightarrow YN$, $YY \rightarrow YY$ are determined by two parameters (coupling constants) $g_{NN\sigma}$ and $g_{YY\sigma}$, whereas the correlated exchanges are characterized by three independent strength functions, so that vertex coupling constants cannot be determined uniquely. In the physical region the strength of the contributions is to a large extent governed by the value of $G$ at $t=0$. The values for the various channels are shown in Table \ref{tab:5_6_10}. Apart from the values for our full model, Table 1 contains results obtained when uncorrelated contributions involving spin-1/2 baryons only are subtracted from the spectral function of the invariant baryon-baryon amplitudes. 
These are the proper values to be used for constructing a $NN$ or $YN$ model based on simple OBE-exchange diagrams. For the full Bonn $NN$ model, contributions involving spin-3/2 baryons also need to be subtracted, since the corresponding contributions are already treated explicitly in this model, namely via box diagrams with intermediate $\Delta$-states as shown in Fig.~\ref{fig1}(a). Obviously processes involving spin-3/2 baryons increase the correlated contribution, in practice by about 30\% in all channels. Comparing the relative strengths of effective $\sigma$ exchange in the various baryon-baryon channels, one observes that the scalar-isoscalar part of correlated $\pi\pi$ and $K\anti{K}$ exchange in the $NN$ channel is about twice as large as in both $YN$ channels, and 3 -- 4 times larger than in the $S=-2$ channels. \renewcommand{\arraystretch}{1.3} \begin{table}[h] \[ \begin{array}{|r|r|rr|rrr|} \hline \multicolumn{7}{|c|}{G^\sigma_{AB\to AB}/4\pi } \\ \hline &NN&\Lambda N&\Sigma N&\Lambda\Lambda&\Sigma\Sigma&\Xi N \\ \hline \mbox{full model} &5.87 &2.82 &2.58 &1.52 &1.72 &1.19 \\ \mbox{subtractions for OBE model} &7.77 &3.81 &3.15 &2.00 &2.31 &1.52 \\ \hline \end{array} \] \caption{Effective $\sigma$ coupling strengths $G^\sigma_{AB\to AB}(t=0)$ for correlated $\pi\pi$ and $K\anti{K}$ exchange in the various baryon-baryon channels. (The meaning of the rows is given in the text.)} \label{tab:5_6_10} \end{table} \renewcommand{\arraystretch}{1.1} The average size of the effective coupling strengths is only an approximate measure of the strength of correlated $\pi\pi$ and $K\anti{K}$ exchange in the various particle channels. The precise energy dependence of the correlated exchange as well as its relative strength in the different partial waves of the $s$-channel reaction is determined by the spectrum of exchanged invariant masses, or spectral functions, leading to a different $t$-dependence of the effective coupling strengths. 
To demonstrate this we show in Fig.~\ref{fig:5_6_3} the on-shell $NN$ potentials in spin-singlet states with angular momentum $L=0, 2$ and 4, which are generated directly by the scalar-isoscalar part of the correlated $\pi\pi$ and $K\anti{K}$ exchange. As expected it is attractive throughout. \begin{figure}[ht] \vskip 5cm \special{psfile=nn0.ps hoffset=-10 voffset=-135 hscale = 40 vscale=40} \special{psfile=nn2.ps hoffset= 190 voffset=-135 hscale = 40 vscale=40} \vskip 5cm \special{psfile=nn4.ps hoffset= 95 voffset=-135 hscale = 40 vscale=40} \caption{{The $\sigma$-like part of the $NN$ on-shell potential in various partial waves. The solid lines are derived from our microscopic model of correlated $\pi\pi$ and $K\anti{K}$ exchange. The dotted lines are obtained if the dispersion-theoretic result is parameterized by $\sigma$ exchange, while the dashed correspond to the $\sigma$ exchange used in the Bonn OBEPT potential \protect\cite{MHE}. }} \label{fig:5_6_3} \end{figure} Figure \ref{fig:5_6_3} shows that the results evaluated from the microscopic correlated $2\pi$ exchange model (solid curves) are comparable to those of the $\sigma$ exchange in the Bonn OBEPT potential (dashed curves) in partial waves with $J \leq 2$. However, the correlated $2\pi$ exchange is significantly stronger in high partial waves because the $\sigma$ exchange, which corresponds to a spectral function proportional to $\delta(t'-m^2_\sigma)$, does not contain the long-range part of the correlated processes. Indeed, parameterizing the results derived from the microscopic model by $\sigma$ exchange as before, but using the effective coupling strength $G^\sigma_{NN\to NN}$ at $t=0$ (dotted curves), we obtain rough agreement with the exact result in the $^1S_0$ partial wave, but underestimate the magnitude considerably in the high partial waves. 
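The range argument behind this failure in high partial waves can be made concrete with a small numerical sketch (illustrative only; the toy spectral shape below is our assumption, not the model's). In coordinate space a sharp mass $m$ corresponds to a Yukawa potential $e^{-mr}/r$, whereas a spectral function extending down to the two-pion threshold yields a superposition of Yukawas whose large-$r$ tail is governed by the lightest exchanged masses:

```python
import numpy as np

hbar_c = 0.19733                 # GeV fm
m_pi, m_sigma = 0.138, 0.550     # GeV

def trapz(y, x):
    # simple trapezoidal rule, to keep the sketch self-contained
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def yukawa(m, r_fm):
    # coordinate-space potential of a sharp mass-m exchange (arbitrary norm)
    r = r_fm / hbar_c
    return np.exp(-m * r) / r

def spectral_yukawa(rho, r_fm, t_min=(2 * m_pi) ** 2, t_max=4.0, n=20001):
    # superposition of Yukawas weighted by a spectral function rho(t')
    tp = np.linspace(t_min, t_max, n)
    r = r_fm / hbar_c
    return trapz(rho(tp) * np.exp(-np.sqrt(tp) * r) / r, tp)

def rho_toy(tp):
    # assumed toy spectral shape: a broad bump above the two-pion threshold
    return np.exp(-(np.sqrt(tp) - m_sigma) ** 2 / (2 * 0.15**2))

# normalise both to the same strength at r = 1 fm, then compare the tails
norm = yukawa(m_sigma, 1.0) / spectral_yukawa(rho_toy, 1.0)
tail_ratio = norm * spectral_yukawa(rho_toy, 3.0) / yukawa(m_sigma, 3.0)
# tail_ratio > 1: the spread spectral function has the longer-range tail,
# which is exactly the part a sharp-mass sigma cannot reproduce
```

Since high partial waves are dominated by the long-range part of the potential through the centrifugal barrier, a longer tail translates directly into the stronger high-$L$ matrix elements seen in the figure.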
Obviously the replacement of correlated $\pi\pi$ and $K\anti{K}$ exchanges by an exchange of a sharp mass $\sigma$ meson with a $t$-independent coupling cannot provide a simultaneous description of both low and high partial waves. Note that the above curves are based on the spectral function with subtraction for OBE models. Corresponding results for the full model were presented in Fig.~22 of Ref.~\cite{REUBER}, and differ quantitatively from the ones shown here. In particular, the contribution of the correlated $\pi\pi$ and $K\anti{K}$ exchanges in the scalar-isoscalar channel is stronger than the $\sigma '$ exchange of the full Bonn potential in {\it all} partial waves --- in agreement with earlier results reported in Ref.\cite{Kim}. In Fig.~\ref{fig:5_6_4} we show the corresponding on-shell matrix elements for the $\Lambda N$ channel. Also here we see that the results generated by the scalar-isoscalar part of correlated $\pi\pi$ and $K\anti{K}$ exchange are comparable to the ones of the $\sigma$ exchange used in the J\"ulich $YN$ model~A. In fact, correlated $\pi\pi$ exchange is slightly stronger in the $^1S_0$ partial wave, but weaker in the $^1D_2$ partial wave. Once again we see that the parameterization by an effective $\sigma$ exchange works well for the lower partial waves ($J\leq 2$), but fails for the higher partial waves. Finally, Fig.~\ref{fig:5_6_5} shows the corresponding results for the on-shell $\Sigma N$ potentials. Here one can see that the $\sigma$ exchange used in the J\"ulich $YN$ model A is clearly much stronger than the results one obtains from the correlated $\pi\pi$ and $K\anti{K}$ exchange. These differences will have an impact on the properties of the new hyperon-nucleon interaction model currently being developed \cite{we}. 
Specifically, the weaker interaction in the scalar-isoscalar channel resulting from our microscopic model of correlated $\pi\pi$ exchange provides much less attraction in the $N\Sigma$ $S$-waves and, in turn, should also reduce the coupling between the $N\Lambda$ and $N\Sigma$ channels. The strong coupling between these two systems in the original J\"ulich $YN$ model is possibly one of the reasons why it does not lead to a bound state for the hypertriton \cite{Miya}. \begin{figure}[ht] \vskip 5cm \special{psfile=nl0.ps hoffset=-10 voffset=-135 hscale = 40 vscale=40} \special{psfile=nl2.ps hoffset= 190 voffset=-135 hscale = 40 vscale=40} \vskip 5cm \special{psfile=nl4.ps hoffset= 95 voffset=-135 hscale = 40 vscale=40} \caption{{The $\sigma$-like part of the $\Lambda N$ on-shell potential in various partial waves. The curves are as in Fig.~\protect\ref{fig:5_6_3}, except for the dashed lines which correspond to the $\sigma$ exchange used in the J\"ulich $YN$ potential $A$ \protect\cite{Holz}. }} \label{fig:5_6_4} \end{figure} \begin{figure}[ht] \vskip 5cm \special{psfile=ns0.ps hoffset=-10 voffset=-135 hscale = 40 vscale=40} \special{psfile=ns2.ps hoffset= 190 voffset=-135 hscale = 40 vscale=40} \vskip 5cm \special{psfile=ns4.ps hoffset= 95 voffset=-135 hscale = 40 vscale=40} \caption{{The $\sigma$-like part of the $\Sigma N$ on-shell potential in various partial waves. Same description of curves as in Fig.~\protect\ref{fig:5_6_4}. }} \label{fig:5_6_5} \end{figure} \section{SUMMARY} An essential part of baryon-baryon interactions is the strong medium-range attraction, which in one-boson-exchange models is parameterized by exchange of a fictitious scalar-isoscalar meson with mass around 500 MeV. In extended meson exchange models this part is naturally generated by two-pion exchange contributions. 
As well as uncorrelated two-pion exchange, correlated contributions must be included in which the exchanged pions interact during their exchange; these terms in fact provide the main contribution to the intermediate-range interaction. In the scalar-isoscalar channel of the $\pi\pi$ interaction the coupling to the $K\anti{K}$ channel plays a strong role, which has to be explicitly included in any realistic model for energies near and above the $K\anti{K}$ threshold. As kaon exchange is an essential part of hyperon-nucleon interactions a simultaneous investigation of correlated $\pi\pi$ and $K\anti{K}$ exchanges is clearly necessary. In Ref.~\cite{REUBER} the correlated $\pi\pi$ and $K\anti{K}$ exchange contributions in various baryon-baryon channels have therefore been investigated within a microscopic model for the transition amplitudes of the baryon-antibaryon system ($B\anti{B'}$) into $\pi\pi$ and $K\anti{K}$ for energies below the $B\anti{B'}$ threshold. The correlations between the two mesons have been taken into account by means of $\pi\pi-K\anti{K}$ amplitudes, determined in the field theoretic framework of Refs.~\cite{Lohse,Janssen}, which provide an excellent description of empirical $\pi\pi$ data up to 1.3 GeV. With the help of unitarity and dispersion-theoretic methods, the baryon-baryon amplitudes for correlated $\pi\pi$ and $K\anti{K}$ exchange in the $J^P=0^+$ ($\sigma$) and $J^P=1^-$ ($\rho$) $t$-channels have then been determined. In the $\sigma$ channel the strength of correlated $\pi\pi$ and $K\anti{K}$ exchange decreases as the strangeness of the baryon-baryon channel becomes more negative. In the $NN$ channel the scalar-isoscalar part of correlated exchanges is stronger by about a factor of 2 than in both hyperon-nucleon channels ($\Lambda N$, $\Sigma N$), and 3 to 4 times stronger than in the $S=-2$ channels ($\Lambda \Lambda$, $\Sigma\Sigma$, $N\Xi$). 
With the current model it is now possible to reliably take into account correlated $\pi\pi$ and $K\anti{K}$ exchange in both the $\sigma$ and $\rho$ channels for various baryon-baryon reactions. This will be especially important in processes where only scant empirical data exist, as the elimination of the phenomenological $\sigma$ and $\rho$ exchanges considerably enhances the predictive power of baryon-baryon interaction models. Given the strong constraints on $\sigma$ as well as $\rho$ exchange from correlated $\pi\pi$ exchange, a more sound microscopic model for the $YN$ interaction can hence now be constructed, which can be used to address open questions such as the role of SU(3) flavor symmetry, or the nature of the short range part of the $BB'$ interaction arising from $\omega$ exchange. \bigskip {\bf Acknowledgements} \medskip We would like to thank the Special Research Centre for the Subatomic Structure of Matter at the University of Adelaide for support during the completion of this work. J.H. was supported by the DFG project no. 477 AUS-113/3/0. 
\section*{Acknowledgements} We would like to thank Vattenfall AB for funding this research, along with Adrian Jackson at EPCC and Ian Chisholm at the Institute of Petroleum Engineering, Heriot Watt, for technical support that made this project possible. \section{Conclusions}\label{S:Conclusion} In this paper, we have presented the use of dynamically active models of wind turbines embedded in a high-resolution Large-Eddy Simulation CFD model of the environment, with appropriate extent and resolution to represent both the response of wind turbines to the atmospheric flow, and the effect of a large array of wind turbines on the flow. The main aim of this work was to validate this modelling approach as a tool to investigate the interaction between wind farms and the environment. Key requirements for this were: a) to describe a turbine based on its rotor design and key operating controls of blade pitch and rotational speed, b) to simulate the response of these turbines, in a way which follows the control mechanism of actual turbines when they are placed in a naturally fluctuating wind, and c) to describe the resulting wake and its recovery within and around the wind farm. The turbine validation in section~\ref{S:Turbine-validation} has demonstrated that a model of the turbine, based on best estimates of the rotor blades and the design rotor rotational speed, generates a power output and thrust coefficient in `clean' reference wind conditions in very good agreement with the manufacturer's data and observations. The Lillgrund model has shown that it is possible to simultaneously resolve flow features in three dimensions over a wide range of scales, from 5~m at the rotor, to large-scale atmospheric eddies and wind farm wakes several km in length. 
Through its coupling of LES to the dynamic turbine model, the performance of turbines has been shown to fluctuate in response to local flow conditions, and the turbulent flow generated by the turbines reflects that found in real turbines~\citep{HirthSchroeder13}. Our model has compared well to the SCADA data from the real wind farm; where it has not, the discrepancies can perhaps be attributed to four reasons: i) insufficient information regarding meteorological conditions at Lillgrund, such as temperature and humidity, ii) the assumption of a neutrally stable atmosphere, iii) too short a simulation period for accurate performance statistics, and iv) the limitations of the actuator disc in the near-wake. While the first could be addressed in principle by acquiring more data from the wind farm to ensure that observations and model represent the same flow conditions, the latter three are model constraints. In particular, the last reason demands considerably more computational resources. In their simulation of Lillgrund, \citet{Churchfield2012} employed actuator lines \citep{sorensen2002} to represent the turbine blades, which require much higher resolution meshes near the rotors. In contrast to the $40\,\textrm{km}^3$ simulation domain here, which contained 30 million elements running on 256 computer cores, Churchfield used 315 million cells running across 4096 cores, for a domain less than half the size at $16\,\textrm{km}^3$. Finer meshes and the actuator line approach have the undoubted ability to better resolve near-wake features than we do here, but to decide whether the increased fidelity in the near-wake is significant enough to merit the trade-off in computational effort, a detailed study of both actuator disc and actuator line wind farm models is required. The key next step, however, is to address the second point, the atmospheric condition. 
At the time this work was carried out, computing power allowed either a substantial horizontal extent, covering a significant part of the farm wake as chosen here, or a substantial vertical extent, covering a significant part of the atmospheric boundary layer as chosen by \citet{Archer2013}. Computing power has progressed to a degree where our methodology can be applied to a larger domain covering both the horizontal extent to resolve the farm wake, and the vertical extent to cover the unstable atmospheric boundary layer. While buoyancy effects were not considered here, following the choice to start with a neutrally stable atmosphere, the LES CFD software used here has already been used for convective flows \citep{mactavish2013}, and allows us to incorporate convectively unstable conditions as a refinement to the methodology rather than a step change. Now that the methodologies for wind farm characterisation are validated, with fully resolved atmospheric boundary layer and convective processes included, it is possible to apply this modelling methodology to both engineering and atmospheric sciences applications. In the engineering context, large scale simulations of wind farms are now practical using state-of-the-art computational fluid dynamics, and can be used to inform wind farm design. Once a turbine has been parameterised, it can be placed in a variety of real or imagined scenarios. The modelled turbines can be turned off to simulate failure, and the impact on surrounding turbines can be examined. Will the turbines downwind experience greater power output? Do they experience higher levels of turbulence? Such questions could be applied to control strategies, optimising the balance between turbine loading and maintenance costs on one side, and overall energy yield on the other. Alternatively, by adding, removing or moving turbines, we could alter array layouts, and observe the resulting change in power output and electricity yield of the simulated farm. 
In the atmospheric science context, this modelling methodology will allow for a full dynamic simulation of the interaction between a large wind farm and the atmospheric boundary layer at horizontal length scales of tens of kilometres, whilst resolving the key length scales of the fluid-rotor interactions without requiring excessive computing power. Through the multiscale wind farm modelling shown in this paper, we can investigate the transport and decay of turbulence induced by the turbines, the wind farm wake dynamics and decay, and the impacts of large wind farms on local weather, as well as their parameterisation in climate models. \section{Discussion} \label{S:Discussion} Many turbine models exist and have been applied to wind farms using computational fluid dynamics \citep{Churchfield2012, calaf2010, BaPretal09, MCJGM07}. The model presented here differs from most in one significant way, in that the lift and drag generated by the turbine blades simultaneously apply torques to the generator, the blades, and the air which flows through the actuator volumes. This dynamic, reactive model of the turbine allows us to study deep-array wake effects in wind farms using a more physically accurate model than commonly-used methods, which rely upon estimations of the upwind wind speed to directly calculate the backthrust \citep{prospath2009}, or to calculate the angle of attack for rotors turning at a calibrated rate of rotation \citep{calaf2010, meyers2010}. Through use of Large Eddy Simulation (LES) it also permits the study of unsteady, turbulent flow effects within the wind farm, which we have shown to be a key driver in the performance of the Lillgrund model. 
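For contrast, the simpler approach used by such methods can be sketched in a few lines: classical actuator-disc momentum theory takes an estimated upwind speed together with thrust and power coefficients and computes the backthrust directly. The rotor dimensions and coefficients below are assumed, representative values only, not parameters of the model used in this paper.

```python
import numpy as np

rho_air = 1.225    # kg/m^3, sea-level air density
R = 46.5           # m, rotor radius of a ~2.3 MW class turbine (assumed)
A = np.pi * R**2   # swept rotor area

def disc_thrust(U_inf, C_T):
    """Classical actuator-disc backthrust from an estimated upwind speed,
    the static approach the dynamic turbine model here avoids."""
    return 0.5 * rho_air * A * C_T * U_inf**2

def disc_power(U_inf, C_P):
    """Corresponding actuator-disc power estimate."""
    return 0.5 * rho_air * A * C_P * U_inf**3

U, C_T, C_P = 9.0, 0.8, 0.45   # representative below-rated values (assumed)
T = disc_thrust(U, C_T)        # of order 0.3 MN
P = disc_power(U, C_P)         # of order 1.4 MW
```

The key difference in the dynamic model is that $U_\infty$ is never estimated: lift and drag are evaluated from the locally resolved LES flow through the actuator volumes, and the thrust and torque follow from the turbine's own control response.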
The alternative of RANS, and especially unsteady RANS (URANS), has been utilised in wind turbine models elsewhere \citep{BaPretal09, MCJGM07, elkasmi2008}, but excessive wake diffusion is an issue \citep{sanderse2011, sumner2010}, and the applicability of techniques limiting this turbulent diffusion at the rotors \citep{elkasmi2008} for multiple turbines has been called into question by \citet{rethore2009}. This is problematic for wind farm modelling, as blade-generated turbulence plays an important role in deep-array wakes. The model here uses a previously validated technique \citep{Creech2009, CFC11} to effect blade-generated turbulence with LES CFD and, as can be seen in this paper, its effectiveness has been vindicated by the power recovery in downwind turbines. Particularly in the extreme case of the wind direction 223\ensuremath{^\circ}, we can see that turbines in the second row produce low levels of power, but in the third and fourth rows we see power recovery, due to increased wake mixing from rising levels of turbulence within the wind farm. The Synthetic Eddy Method (SEM) used here allowed the characteristics of the atmospheric turbulence being fed into the model at the inflow boundary to be finely tuned, and these were varied with height according to Danish turbulence standards \citep{windenergyhandbook}. The SEM boundary conditions turned out to be a secondary, but also important, source of turbulence for the model farm. In initial tentative simulations too little turbulence was fed into the model, which resulted in poor downwind wake recovery; only when the correct turbulence statistics were applied did the model produce deep-wake effects on turbine performance which matched the SCADA data. This suggests that while the blade-generated turbulence is important for wake mixing and dissipation, so too is the atmospheric turbulence at longer length scales (10-150~m as opposed to $<$~5~m). 
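The height variation of the imposed inflow turbulence can be illustrated with a minimal neutral-surface-layer sketch (our simplification, not the SEM implementation or the Danish standard itself): with a log-law mean profile and the common estimate $\sigma_u \approx 2.5\,u_*$, the turbulence intensity falls off as $1/\ln(z/z_0)$. The roughness length and friction velocity below are assumed values typical of open sea.

```python
import numpy as np

kappa = 0.4        # von Karman constant
z0 = 0.0002        # m, roughness length typical of open sea (assumed)
u_star = 0.3       # m/s, friction velocity (assumed)

def mean_wind(z):
    """Neutral surface-layer log law, U(z) = (u*/kappa) ln(z/z0)."""
    return (u_star / kappa) * np.log(z / z0)

def turbulence_intensity(z, sigma_ratio=2.5):
    """I(z) = sigma_u / U(z); with sigma_u ~ 2.5 u* this reduces to
    I(z) = 2.5 kappa / ln(z/z0), independent of u*."""
    return sigma_ratio * u_star / mean_wind(z)

heights = np.array([20.0, 65.0, 120.0])   # below, near, and above a typical hub height
I = turbulence_intensity(heights)
# intensity decreases with height: the inflow is more turbulent near the surface
```

This monotone decrease with height is the qualitative behaviour the height-varying SEM statistics impose at the inflow boundary.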
Indeed, it is the combination of these two that produces the levels of mixing and recovery within the simulation. Validation of the wind farm model against observations was challenging, as the model resolves time scales not accessible from available measurements. While SCADA data may be available at a high sampling rate, the same will never be true for the required boundary conditions. As a result it is not possible to truly reproduce a computer solution of an actual, observed situation, and one has to resort to modelling a set of typical cases and comparing these with as many appropriate observations as possible. Given the nature of atmospheric flows, it is possible that cases with similar wind conditions may lead to locally very different flows and turbine responses within the wind farm, as seen in Figure~\ref{fig:Validation_T1}(c). Due to the computational expense of CFD, however, it is impractical to explore all possible solutions, and the solution obtained from computer simulations must be evaluated according to how well it matches the distribution of possible solutions. To achieve this, we chose a small number of possible wind conditions, covering a set of wind directions which represent key geometric relationships between the upstream wind direction and the turbine positions relative to that direction. To capture a sufficient number of observations corresponding to the simulation, cases were selected from the wind speed range between the cut-in and rated wind speeds, over which the normalised power output appeared to be constant. Despite this careful selection, the challenge remained of comparing an ensemble of time-averaged observations with a single realisation of a flow sampled at a high temporal resolution. For example, 10-minute averages presented for Horns Rev~\citep{Gaumond13} showed that the wake effect was apparently much less pronounced than predicted by standard engineering wake models when analysed over a narrow 5\ensuremath{^\circ}~wind direction sector.
In contrast, the wake models gave extremely good agreement with the observations when the results were averaged over a 30\ensuremath{^\circ}~($\pm 12\ensuremath{^\circ}$) wind direction sector. The wake effect at Horns Rev is also apparently much less pronounced than in the observations presented here, with a power deficit for the second turbine at around 65\%, which gradually but continually decreased to around 55\% for the last turbine in the row \citep[Fig.4]{Gaumond13}. The mismatch between wake-model predictions and their observations was attributed to uncertainties in the wind direction due to bias in the sensors as well as spatio-temporal variation. Considering that the 10-minute averaging of the data is equal to the residence time $L/U$ (with $U$ the wind speed and $L$ the length of the wind farm), this averaging will smooth out any local features within the farm, and the results would indeed be expected to conform to a broader selection of inflow situations as represented by the wider 30\ensuremath{^\circ}-sector. In our case, the residence time is 4 to 5 minutes while the SCADA data have a 1-minute resolution. With that, some spatio-temporal averaging of individual flow features will be noticeable, but the larger features should still be visible in the data. The substantial fluctuations demonstrated by the simulations have been observed around wind turbines by~\citet{HirthSchroeder13}. These features include strong wake meandering, breaking-up of atmospheric eddies moving into the array, and jetting between turbines. As these features are resolved within the model, results at any time may vary considerably from the more uniform flows found in RANS CFD simulations or time-averaged observations. A comparison between the simulation results and the corresponding SCADA data confirms this, especially for the first three rows of turbines.
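The residence-time comparison above is simple to reproduce; the sketch below assumes a farm length of roughly 2.7~km (Lillgrund's east--west extent quoted later in the paper) and the 10~m/s hub-height wind speed used in the simulations.

```python
# Residence-time check for the averaging argument above.
# Assumed figures: farm length L ~ 2.7 km and hub-height wind
# speed U ~ 10 m/s (values taken from elsewhere in this paper).

def residence_time(L_m: float, U_ms: float) -> float:
    """Time for air to traverse the wind farm, T = L / U, in seconds."""
    return L_m / U_ms

T = residence_time(2700.0, 10.0)
print(f"residence time: {T:.0f} s = {T / 60:.1f} min")  # 270 s = 4.5 min
```

This falls in the "4 to 5 minutes" range quoted above, compared with the roughly 10-minute residence time implied for the longer Horns Rev farm.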
In the deep array, the agreement between LES CFD and time-averaged observations is much better, which suggests that the turbulent mixing provided by the turbines is very effective in destroying larger coherent flow structures, while enhancing the more isotropic smaller scale turbulence. It is also possible that the appropriate modelling of the three apparent sources of turbulence for such CFD modelling was an essential component of capturing the very strong power deficit of a turbine fully in the wake of the front turbine. In the model, the sources of turbulence are the drag from the water surface, the turbulence created at the turbine rotor, and the free-stream turbulence advected into the domain by the SEM inflow conditions. This means that the air flowing into the front turbine is relatively clean, with the upstream turbulence consisting mostly of larger eddies, and with only relatively weak turbulence generated at the surface and transported upwards. The flow structure behind that turbine is then a turbulent wake expanding in fairly quiescent air, apart from the surviving large atmospheric eddies. Therefore the wake recovery is relatively low, given the low drag coefficient of the sea surface~\citep{CFC11}, and the wake deficit is still substantial at the point of the second turbine. This second turbine generates another wake which is now located within the decaying wake of the upstream turbine, and that latter turbulence helps to mix the wake and tip vortices more rapidly, which then leads to a less deep wake at the location of the third turbine. Beyond those turbines, the flow becomes more and more uniformly mixed as the newly formed wakes mix with the existing turbulent wakes. \section{Empty domain} \label{S:Empty} Before modelling the wind farm, an empty domain without wind turbines was run for two hours of simulation time. This allowed fully turbulent flow to evolve across the entire volume, which would then be checked for correctness.
At the end of the run a checkpoint was created, acting as a starting point for the full wind farm simulations; here, the problem was remeshed to accommodate finer resolution near the modelled wind turbines. This was a relatively straightforward process due to Fluidity's hr-adaptive meshing techniques and check-pointing capability. As the present simulations concerned a neutrally stable atmosphere, buoyancy effects did not need to be included (e.g., \protect\citet{Wu:2013gb}) and a standard logarithmic velocity profile could be used for the inlet conditions with matching lower boundary conditions. \subsection{Simulation volume} The maximum extent of Lillgrund windfarm is 2.7 km from east to west. To ensure that no blockage effects would occur, the simulation domain was chosen to extend 8.1 km in both horizontal directions. This would ensure a large extent of open sea on each side of the wind farm, as well as sufficient space downwind for wake effects to be modelled. For the domain height, \protect\citet{fitch2013} presented depths of the atmospheric boundary layer ranging from around 100~m for stable conditions up to over 1000~m for unstable conditions. To ensure a sufficient domain height, while working within the constraints of the available computing resource, wind engineering reference guidelines \protect\citep{cabezon2011} appropriate for neutral conditions were used. \protect\citet{cabezon2011} suggested $5H$, where $H$ is the height of any obstacle obstructing flow. In the Lillgrund simulations, the obstacle height would be the height of the wind turbine hubs plus the rotor radius, so that $H=111.5\rm~m$. To leave an acceptable margin for error, a height of 600~m was chosen, which meant the simulation domain was 8.1 km x 8.1 km x 600 m, as shown in Figure \protect\ref{fig:lillgrund-empty-domain}.
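The $5H$ guideline above amounts to a one-line check; the figures below use the turbine dimensions quoted elsewhere in the paper (hub height 65~m, rotor diameter 93~m).

```python
# Check of the 5H domain-height guideline for the Lillgrund turbines.
hub_height = 65.0          # m
rotor_radius = 93.0 / 2.0  # m (rotor diameter D = 93 m)

H = hub_height + rotor_radius      # obstacle height: 111.5 m
min_height = 5.0 * H               # guideline minimum: 557.5 m
print(H, min_height)               # both below the chosen 600 m domain height
```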
While \protect\citet{calaf2010}, \protect\citet{Churchfield2012} and \protect\citet{Archer2013} adopted the compromise to resolve more of the unstable atmospheric boundary layer with domain heights of 1000~m at the expense of a much more constricted horizontal extent, one of our goals was to include more of the wind farm wake, which required a larger horizontal extent. Observations reported by \protect\citet{iungo2012}, as well as experiments by \protect\citet{chamorro2011}, simulated by \protect\citet{Wu:2013gb}, suggested that this compromise would be acceptable. \begin{figure} \begin{centering} \includegraphics[width=0.75\columnwidth]{Fig17} \caption{Empty simulation domain showing boundary conditions, measuring 8.1 km x 8.1 km x 600 m. Mesh cell dimensions were approximately 75 m x 75 m x 25 m.} \label{fig:lillgrund-empty-domain} \end{centering} \end{figure} \subsection{Boundary and initial conditions} \subsubsection{Sea surface} \label{S:lillgrund-sea-surface} The sea surface was specified as a rough wall boundary condition, with a roughness height $z_0$, which represented the drag induced by the surface's roughness. In reality this surface has waves, whose composition and frequency are affected by parameters such as mean wind speed, gusting, and wave age. This, in turn, has a reciprocal effect on the air flow over the waves. However, for the sake of simplicity a single time-independent value of $z_0$ was chosen, which was cross-checked against published data for similar wind speed regimes \citep{makin1995, mahrt1996}, as shown later in this section. \begin{figure} \centering \includegraphics[width=0.6\textwidth]{Fig18} \caption{Drag coefficients of the sea surface as a function of $u_{10}$, the wind speed at 10~m above sea level. The solid black line and the dotted line show the drag coefficients across long and short fetches, using data derived from \citet{mahrt1996}. The blue line represents the drag coefficients for a fully-developed sea, i.e.
with an extremely long fetch, from \citet{makin1995}.} \label{fig:drag_coeff} \end{figure} The waves were considered to be in relatively open sea, which given the long fetch (approx. 10~km or greater) towards coastlines shown in Figure~\ref{fig:lillgrund-location} is a reasonable assumption. This is an important choice as fetch, along with wind speed, has been shown to affect the surface drag \citep{mahrt1996} and, with it, $z_0$. From \citet{makin1995}, the surface drag coefficient $C_{D, sea}$ can be related to the roughness height by \begin{equation} C_{D, sea} = \left[ \frac{K}{\ln(z_{10}/z_0)} \right]^2 \label{eqn:cd-sea} \end{equation} using the standard reference height of $z_{10} = 10\rm~m$, where $K\approx 0.41$ is the von Karman constant. The information from \citet{mahrt1996} and \citet{makin1995} is collated in Figure~\ref{fig:drag_coeff}. To determine the correct equivalent 10~m reference wind speed, $u_{10}$, the log law for turbulent flow was used as a starting point, i.e. \begin{equation} u=\left(\frac{u_{\tau}}{K} \right) \, \operatorname{ln} \left( {\frac{z}{z_0}} \right) \label{eqn:u10} \end{equation} The frictional velocity, $u_{\tau}$, can be calculated by substituting in $\overline{u}_H$ and $z_H$: \begin{equation} u_{\tau} = \frac{\overline{u}_H K}{\ln \left(\frac{z_H}{z_0}\right)} \end{equation} where $z_H = 65\, \textrm{m}$ is the hub height, and $\overline{u}_H = 10\, \mathrm{m/s}$ was specified as the mean freestream wind-speed at hub height; $\overline{u}_H$ is discussed further in the next section. If a roughness height of $z_0=2 \times 10^{-4} \, \mathrm{m}$ is chosen, $u_{\tau}$ is defined and can be substituted into (\ref{eqn:u10}) to give the mean speed at 10~m as $u_{10}=8.524 \, \mathrm{m/s}$.
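The calculation above, together with equation (\ref{eqn:cd-sea}), can be cross-checked numerically; the sketch below reproduces the quoted values to within rounding.

```python
import math

# Parameters as given in the text.
K = 0.41        # von Karman constant
z0 = 2e-4       # m, sea-surface roughness height
z_H = 65.0      # m, hub height
u_H = 10.0      # m/s, mean freestream wind speed at hub height

# Friction velocity from the log law evaluated at hub height.
u_tau = u_H * K / math.log(z_H / z0)

# Mean speed at the 10 m reference height (text quotes 8.524 m/s).
u_10 = (u_tau / K) * math.log(10.0 / z0)

# Sea-surface drag coefficient (text quotes 1.436e-3).
C_D_sea = (K / math.log(10.0 / z0)) ** 2

print(f"u_tau = {u_tau:.3f} m/s, u_10 = {u_10:.3f} m/s, C_D = {C_D_sea:.3e}")
```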
Using equation (\protect\ref{eqn:cd-sea}) this leads to a surface drag coefficient of $C_{D,sea}=1.436 \times 10^{-3}$, which is in good agreement with \citet{mahrt1996} and \citet{makin1995}, as can be seen from Figure \protect\ref{fig:drag_coeff}. \subsubsection{Inflow wind conditions} \label{S:Inflow} At the start of each simulation, the wind velocity is set to $0~\mathrm{m/s}$ across the domain. The inflow conditions were specified as a mean velocity profile with a fluctuating component applied to it, as shown in Figure \ref{fig:lillgrund-sem-profile}. The mean velocity profile was specified as \begin{equation} \label{eqn:mean-profile} \overline{\mathbf{u}}(z) = \left[ \begin{array}{c} \left(\frac{u_{\tau}}{K} \right) \, \operatorname{ln} \left( {\frac{z}{z_0}} \right) \\ 0 \\ 0 \end{array} \right] \end{equation} To calculate the profile, the mean wind speed at hub height was taken as fixed at $\overline{u}_H=10\rm~m/s$ for each simulation. This value was chosen so that the turbines would operate at a substantial power output, but below the knee of the power curve at 12~m/s (cf.~Figures~\ref{fig:PerformanceCurve} and~\ref{fig:RelPerformance_U}). With $u_{\tau}$, $K$ and $z_0$ already known from \S~\ref{S:lillgrund-sea-surface}, the profile for $\overline{\mathbf{u}}(z)$ is now completely specified. For the fluctuating component, as the model used wall-adapted local eddy (WALE) LES \citep{nicoud1999} to model turbulence, the turbulence at the inlet had to be explicitly generated through the synthetic eddy method \citep{jarrin2006} at the inflow boundary, shown in Figure \ref{fig:lillgrund-sem-profile}. There were two main sets of parameters which controlled this turbulence generation, namely the turbulence length scales and the Reynolds stress profiles.
The Reynolds stress tensor profiles were based on \citet{pavlidis2010}, with the diagonal components $R_{xx}$, $R_{yy}$ and $R_{zz}$ specified; the normalised profile for $R_{xx}$ is shown in Figure \protect\ref{fig:reynolds-stress-profile}. According to the same paper, the non-diagonal components of the stress tensor are impractical to specify accurately, but only have a minor influence on flow far downstream and can be omitted. \begin{figure} \begin{centering} \includegraphics[width=0.55\columnwidth]{Fig19} \caption{A synthetic-eddy method (SEM) generated velocity profile at the inflow boundary. The dotted curved line represents the mean logarithmic profile; the irregular solid line represents the velocity profile with SEM fluctuations superimposed.} \label{fig:lillgrund-sem-profile} \end{centering} \end{figure} \begin{figure} \begin{centering} \includegraphics[width=0.5\columnwidth]{Fig20} \caption{The normalised Reynolds stress profile $R'_{xx}$, as a function of normalised height $z'$, derived from \citet{pavlidis2010}. The squares represent the specified data points, and linear interpolation was used to reconstruct a continuous profile.} \label{fig:reynolds-stress-profile} \end{centering} \end{figure} The mean lengthscale components, $L_{1u}$, $L_{1v}$ and $L_{1w}$, were taken from the Danish standard DS 472, as specified in \protect\citep[p.24]{windenergyhandbook}, which gave these as: \begin{equation} \begin{array}{lcl} L_{1u} & = & \left\{ \begin{array}{ll} 5 z & \mbox{for } z < 30\rm\,m\\ 150{\rm\, m} & \mbox{for } z \geq 30\rm\,m\\ \end{array}\right.\\ L_{1v} & = & 0.3\,L_{1u} \\ L_{1w} & = & 0.1\,L_{1u} \end{array} \label{eqn:turblengths} \end{equation} This gave the mean length scales as a function of height above the sea surface. \subsection{Domain validation} \label{s:domain-validation} Validating the empty domain represented a challenge; its main purpose was to provide realistic wind conditions at the site of the wind farm.
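The piecewise length-scale profile of equation (\ref{eqn:turblengths}) can be encoded directly; a minimal sketch:

```python
def ds472_length_scales(z: float):
    """Mean turbulence length scales (L1u, L1v, L1w) in metres at
    height z, following the DS 472 profile quoted in the text."""
    L1u = 5.0 * z if z < 30.0 else 150.0
    return L1u, 0.3 * L1u, 0.1 * L1u

print(ds472_length_scales(20.0))   # below 30 m, the scales grow with height
print(ds472_length_scales(65.0))   # at hub height, L1u is capped at 150 m
```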
Those conditions would be sensitive to sea surface boundary conditions, inflow conditions, mesh resolution and turbulence parameters, and arriving at the appropriate combination of these was a process of successive testing and refinement. Several criteria were formulated in order to demonstrate whether the empty domain simulation was working correctly, and that it had been run for long enough. These were limited by constraints on both time and computing resources, due to the volume of data involved. Firstly, to show that the flow was fully developed, the mean flow speed was calculated as an instantaneous spatial average at hub height by slicing through the velocity field every 10 time-steps, and this was plotted as a function of simulated time, along with its derivative and linear regression fits, in Figure \ref{fig:uH_convergence}. This graph is plotted from t=1000~s to t=2000~s of simulation time, and it can be seen that $\bar{u}_H$ has converged towards a constant value, since the linear regression of its temporal derivative $d\bar{u}_H / dt$ over this period is effectively 0. Moreover, the linear regression of $\bar{u}_H$ gives a value of $\bar{u}_H=9.825 \, \mathrm{ms}^{-1}$, which is within 2\% of the intended value of $10\, \mathrm{ms}^{-1}$. Further to this, calculations of the turbulence intensity near where the wind farm would be located showed a value at hub height of 8\%, which is close to that measured upwind of comparable offshore windfarms \citep{Hansen2012}. Lastly, there was a degree of overlap between the empty domain and full wind farm simulations. As a final test of the empty domain conditions, a preliminary full farm simulation at a wind direction of 223\ensuremath{^\circ} was run, where the rows are aligned with the wind and wake effects would be dominant.
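The convergence test described above (a near-zero linear trend in $\bar{u}_H$, with a mean close to the target value) can be sketched as follows; the synthetic time series here is a hypothetical stand-in for the actual model output.

```python
import random

# Synthetic stand-in for the sampled hub-height average: a constant
# 9.825 m/s plus small fluctuations, sampled every 10 s over 1000 s.
random.seed(1)
t = [1000.0 + 10.0 * i for i in range(101)]
u_H = [9.825 + random.uniform(-0.2, 0.2) for _ in t]

def linear_fit(x, y):
    """Least-squares slope and intercept of y against x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return slope, my - slope * mx

slope, _ = linear_fit(t, u_H)
mean_uH = sum(u_H) / len(u_H)

# Converged if the trend is negligible and the mean is near the target.
print(f"mean u_H = {mean_uH:.3f} m/s, trend = {slope:.2e} m/s^2")
```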
The turbulence lengthscale and Reynolds stress profiles were tuned in the precursory empty domain simulations, and the full farm simulation re-run until there was good agreement with SCADA data in Row D. This proved to be an important test, as too little upstream turbulence resulted in overly pronounced wake deficits and reduced wake recovery. \begin{figure} \centering \includegraphics[width=0.6\textwidth]{Fig21} \caption{Plots of spatial average $\bar{u}_H$ (blue line) and $d\bar{u}_H / dt$ (red line) from t=1000~s to 2000~s. The dashed lines are linear regression fits. Over this time period, $\bar{u}_H$ can be seen to fluctuate around $9.825 \mathrm{ms^{-1}}$, and the trend for $d\bar{u}_H / dt$ is close to $0 \, \mathrm{ms^{-2}} $.} \label{fig:uH_convergence} \end{figure} \section{Full farm model} \label{s:full-farm} Once the turbulent air flow across the empty domain had fully developed and was statistically stable, the 48 modelled Siemens wind turbines were placed within the simulation domain, with their RPM set to 0. For practical reasons, only one hub-height wind speed was considered for the eight different wind directions, 198\ensuremath{^\circ}, 202\ensuremath{^\circ}, 207\ensuremath{^\circ}, 212\ensuremath{^\circ}, 217\ensuremath{^\circ}, 223\ensuremath{^\circ}, 229\ensuremath{^\circ}, and $236\ensuremath{^\circ}$, as specified in Table~\ref{tab:Cases}. Each wind farm simulation was run for 20 minutes of simulation time beyond the empty-domain spin-up, with the first 10 minutes considered as a secondary spin-up period with the turbines in place. For the last 10 minutes the air flow across the domain had fully evolved, and the modelled turbines' diagnostics were statistically stationary, although their instantaneous values were continually fluctuating.
The actual process of putting in the turbines involved remeshing the domain, then changing some of the parameters of the simulation to accommodate the change in flow conditions due to the turbines' presence. These were, specifically: i) anisotropic mesh ranges set as a function of distance from the turbines, and ii) velocity interpolation errors changed to vary with distance from the turbines, so that the hr-adaptive meshing algorithm within the CFD code was more sensitive to steep velocity gradients closer to the turbines, and would resolve spatial velocity fluctuations in more detail. \subsection{Turbine positioning} Rather than rotate the domain to match the prevailing wind direction, it was decided that it was simpler to rotate the wind farm to effect the same change in oncoming flow relative to the turbines. The process was as follows. Before rotating the wind farm, its centre, $\mathbf{p}_c$, had to be determined from the spatial coordinates of the Lillgrund wind turbines, which were given in geographic Cartesian coordinates (Easting and Northing). This was calculated as \begin{equation} \mathbf{p}_c= \frac{1}{N} \sum_{j=1}^N \mathbf{p}_j \end{equation} where $\mathbf{p}_j$ is the position of turbine $j$, and $N$ is the number of turbines, in this case $N=48$. The coordinates of turbine $i$ relative to this centre are then \begin{equation} \mathbf{p}'_i = \mathbf{p}_i - \mathbf{p}_c \end{equation} Taking the inlet wind in the $x$-direction as specified in section~\ref{S:Inflow}, a westerly wind ($270\ensuremath{^\circ} = 3\pi/2\rm\,rad$) requires no rotation, a south-westerly wind (225\ensuremath{^\circ}) would require a clockwise rotation of the wind farm about its centroid by $45\ensuremath{^\circ}$, and so on.
The rotation can be written as \begin{equation} \mathbf{p}''_i = \mathbf{R}_w \, \mathbf{p}'_i \mbox{\hspace{3em}with\hspace{1em}} \mathbf{R}_w =\begin{bmatrix} \cos a & \sin a & 0 \\ -\sin a & \cos a & 0 \\ 0 & 0 & 1 \end{bmatrix} \end{equation} where $\mathbf{R}_w$ is the rotation matrix for wind direction $w$ (in radians) and $a=\frac{3 \pi}{2} - w$ as illustrated in Figure~\ref{fig:rotated-wind-farm}. \begin{figure} \begin{centering} \includegraphics[width=0.55\columnwidth]{Fig22} \caption{The translated and rotated turbine coordinates, with respect to domain origin and wind direction. $c$ represents the centroid of the wind turbines' coordinates; $L$ the minimum distance between the inflow boundary and the most upwind turbine; $a$ the rotation necessary to account for wind direction.} \label{fig:rotated-wind-farm} \end{centering} \end{figure} Lastly, the turbines' coordinates are translated so that there is at least $L=2\rm~km$ from the furthest upwind turbine to the inflow boundary, and so that the farm sits in the centre of the domain laterally, which is $W=8.1\rm~km$ across. Therefore, by finding $x_{min} = \min(p''_{x,i})$, one final translation gives the three-dimensional coordinates of the turbines' rotors as \begin{equation} {\mathbf p}'''_i = {\mathbf p}''_i + \begin{bmatrix} L - x_{min} \\ W/2 \\ z_H \end{bmatrix} \end{equation} where $z_H$ is the hub height of the turbine. By positioning and rotating them thus (see Figure \ref{fig:rotated-wind-farm}), the same empty domain could be used, while at the same time ensuring that enough space was left between the farm and the edges of the domain such that no unrealistic accelerative effects would occur at the sides of the domain, and that the wakes behind the farm would be given sufficient space to develop. This process would have to be undertaken for each different wind direction.
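The positioning procedure can be summarised in a few lines; this is an illustrative sketch (not the actual implementation) which translates to the centroid, rotates, and places the most upwind turbine a distance $L$ from the inflow boundary, using the $L$, $W$ and $z_H$ values from the text.

```python
import math

def position_turbines(positions, wind_dir_deg, L=2000.0, W=8100.0, z_H=65.0):
    """Translate the layout to its centroid, rotate it so the wind maps
    onto the +x axis (a = 3*pi/2 - w), then translate so that the most
    upwind turbine sits L metres from the inflow boundary, centred
    laterally at W/2."""
    n = len(positions)
    cx = sum(x for x, _ in positions) / n
    cy = sum(y for _, y in positions) / n
    a = 1.5 * math.pi - math.radians(wind_dir_deg)
    ca, sa = math.cos(a), math.sin(a)
    # Clockwise rotation about the centroid.
    rotated = [((x - cx) * ca + (y - cy) * sa,
                -(x - cx) * sa + (y - cy) * ca) for x, y in positions]
    x_min = min(x for x, _ in rotated)
    return [(x - x_min + L, y + W / 2.0, z_H) for x, y in rotated]

# A westerly wind (270 deg) needs no rotation: two turbines 400 m apart
# end up at x = 2000 m and x = 2400 m, centred laterally at y = 4050 m.
print(position_turbines([(0.0, 0.0), (400.0, 0.0)], 270.0))
```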
\subsection{Remeshing} With the turbine rotor positions within the simulation calculated, the finite element domain mesh was adapted (or remeshed), so that the mesh resolution was sufficient to resolve the flow through the rotors. Typically, this meant that resolution would have to increase from 75~m horizontally and 10~m vertically, to nearer 5~m isotropically in the vicinity of a turbine rotor and within the turbine volumes. This was done by creating a non-advective, non-diffusive field within Fluidity, to which Fluidity's hr-adaptive algorithms were sensitive; this field was a cubic function of distance, extending $2D$ from the nearest turbine. The hr-adaptivity would detect the gradient in this field, and increase the mesh resolution to resolve the solution, as Figure \ref{fig:mesh-adding-turbines} shows. \begin{figure} \begin{centering} \includegraphics[width=0.96\columnwidth]{Fig23} \caption{Two horizontal slices through the Lillgrund model mesh, showing how adding the turbines to the model increases the mesh resolution near the wind farm. The elements far away from the turbines are highly anisotropic and measure approximately 75~m horizontally; closer to the turbines, the mesh resolution becomes isotropic, with elements measuring 5~m across vertically and horizontally.} \label{fig:mesh-adding-turbines} \end{centering} \end{figure} \section{Introduction}\label{S:Intro} \subsection{Background} \label{s:introduction-background} Substantial offshore wind farms with many tens of turbines over 100~m tall are being built at an increasing pace, which leads to a number of challenging and interesting problems for engineering and atmospheric sciences, as well as for the electricity industry. In this article, we will investigate some of these by comparing operational data from Lillgrund offshore wind farm with a computational model of that wind farm.
Lillgrund wind farm consists of 48 turbines, each with a rated power output of 2.3~MW, in a compact array in the waters between Denmark and Sweden just south of the {\"O}resund bridge. Modern offshore wind turbines often have a rotor diameter in excess of 100~m, sampling the wind from typically 50~m to 150~m above the sea surface. They are therefore sampling a dynamically active part of the turbulent planetary boundary layer, with a typical wind shear profile of the mean wind increasing with height, turbulent eddies of length scales comparable with the turbine rotor blades, and time scales comparable with the typical inertial time scale of the rotor of a few seconds. For these reasons, considerable research is being carried out to characterise and understand the turbulence structures, the transport phenomena in the boundary layer, and their interactions with the turbines~\citep{Abkar:2013qa,Banta:2013mi,Kalvig:2014xq,Rajewski:2013ee}. These are of great importance to the design and performance of wind turbines. Conversely, while a single wind turbine would only affect the atmosphere locally in the form of a wake decaying over the length scale of around ten rotor diameters, the cumulative effect of a whole wind farm on the atmosphere is much greater. For example, the effect on vertical mixing through the turbulence generation by the rotor blades can lead to warming near the surface in stable atmospheric conditions, and cooling in unstable conditions~\citep{Roy:2004dz,FitchLund2013}. Satellite and airborne observations of winds in the lee of wind farms suggest that wind farm wakes modify the atmospheric flow for many tens of kilometres downstream of the turbine array~\citep{Christiansen2005,Christiansen2006,Hasager:2008dk}.
The effect of wind farms is not only noticeable behind the turbine array but also above, as the wind farm induces its own developing boundary layer~\citep{Wu:2013gb} with significant upwelling observed at heights well above the turbines. Even flow in the upper layer of the oceans is reported to be affected by large offshore wind farms~\citep{Brostrom2008585}. Both the horizontal and vertical scales of large wind farms have increased to the point that their presence can be expected to affect weather and climate~\citep{Keith:2004la,Wang:2011rw}, and should therefore be included in climate models through a suitable parameterisation. While early parameterisation approaches were based on modifying the surface roughness \citep{Barrie:2010kl,Ivanova:1998fy,WangPrinn2010,Wang:2011rw}, \citet{fitch2013} demonstrated that those approaches lead to a very different result when compared to a parameterisation which models the wind farm as a momentum sink not at the surface, but at the rotor height. Momentum and heat fluxes were significantly affected throughout the depth of the planetary boundary layer and at length scales of 100~km. This demonstrates that the momentum exchange and turbulent energy production within the wind farm must be well understood in order to develop wind farm parameterisation schemes for NWP and climate prediction. With wind farms easily reaching installed capacities of hundreds of megawatts, reliable estimation of their electricity production is becoming increasingly important for the electricity industry. A key factor affecting performance is that turbines in the array may lie in the path of the wakes generated by others, whereby they experience substantially lower wind speeds than their upwind neighbours~\citep{BPF10}. The result of this is that the farm as a whole produces less electricity than the same turbines would in isolation.
The wind farm effect is easily illustrated by comparing the power output from the entire wind farm investigated here with that from a single turbine in the front row. The blue shaded area in Figure~\ref{fig:PerformanceCurve} shows the power coefficient, that is the power output divided by the rated power, of the front turbine against the wind speed measured by that turbine's anemometer. This shows the typical features of power generation starting at a `cut-in' wind speed of around 3~m/s, increasing approximately $\propto U$ until the rated power is reached at the rated wind speed of around 11~m/s, above which the power output remains constant until the `cut-out' wind speed of around 25~m/s, at which point the turbine is switched off for safety reasons. Compared with that is the total power output from all normally operating turbines in that wind farm, at the same reference wind speed measured at the front turbine. The important point here is that the farm's power coefficient is significantly suppressed when compared to that of the front turbines, where the 90\% ranges do not overlap anywhere below the rated wind speed. Only when hub-height wind speeds exceed 15~m/s does the wind farm reach its full potential. Whilst Lillgrund is an extreme case due to its turbines being closely spaced, it nevertheless highlights the issue, and the resulting need to be able to predict wakes and wind farm performance in the planning of offshore wind farms. \begin{figure} \begin{centering} \includegraphics[width = 0.55\columnwidth]{Fig1} \caption{Power coefficients for a wind turbine (blue line and shaded area with diagonal hatching) and for the entire wind farm (red line and plain shading). The lines denote the median of the observed output within a $0.25$ m/s wind speed window and the extent of the shaded regions denotes 90\% of the observations (for details see section \ref{S:Lillgrund:FarmPerf}).
} \label{fig:PerformanceCurve} \end{centering} \end{figure} A great deal of research has also focussed upon modelling and parameterising wind turbines. Common approaches to modelling wind turbines use linear wake theory, such as Jensen's Park model \citep{BPF10,Ainslie88,Jensen83}, and it is recognised that these simple wake models lose accuracy when multiple wakes interact. Recent research has combined simple turbine models with computational fluid dynamics, with turbines often represented as simple porous discs~\citep{Espana2011}, actuator discs \citep{Creech2009}, actuator lines~\citep{Churchfield2012, Machefaux2012} or actuator surfaces~\citep{shen2007}. These can be embedded in RANS fluids solvers~\citep{cabezon2011}, pseudo-spectral solvers~\citep{calaf2010, wu2011}, fixed-mesh LES finite difference~\citep{jimenez2008} and finite volume codes~\citep{Churchfield2012}, or an LES finite element solver with an unstructured, hr-adaptive mesh~\citep{CFC11}. It should be mentioned that RANS and LES represent alternative approaches to the problem of modelling turbulence, and that each has its own benefits and shortcomings. In RANS, any temporal fluctuations in the fluid velocity are represented by an additional viscous term called the `eddy viscosity'. In LES the turbulence is treated explicitly, except for turbulent eddies smaller than the grid size of the CFD simulation, which are modelled as a `sub-grid eddy viscosity'. The main advantage of RANS is that it is computationally inexpensive and capable of being run on desktop computers; however, details of temporal fluctuations in the flow are lost, since they are treated implicitly. On the other hand, LES provides a greater level of fidelity by preserving both temporal and spatial fluctuations in the flow, down to grid resolution; it is also much more computationally intensive, and can require supercomputer-scale resources. One option here is the use of hr-adaptivity to reduce these demands.
This meshing strategy can move the computational mesh (r-adaptivity) and/or change the local mesh resolution (h-adaptivity) to minimise error in the solution, and also allows the mesh to track unsteady flow features \citep{piggott2004}. For a more detailed overview of RANS, LES, and their use within wind turbine modelling, see \citet{creechfruh2014}. Presently, detailed wind turbine and wind farm models are limited to a restricted domain around the turbines, while the interaction between wind farms and the environment requires much larger domains. Turbine scales are on the order of hundreds of metres in the horizontal and 100 to 200~m in the vertical, which extends to a few kilometres in the horizontal for wind farms. Yet atmospheric models need to resolve the planetary boundary layer of depth up to a kilometre, and tens to hundreds of kilometres in the horizontal. While one approach would be to link the two scales through nested models, computational resources are beginning to allow domain sizes in a single model which are substantially larger than the wind farm alone. This moves towards a situation where a full wind farm can be modelled in a single domain, which eventually will be able to include the full planetary boundary layer and a horizontal extent sufficient to investigate the wind farm wake. This study presents the methodology aimed at this goal: given the computing resources available at the time, it demonstrates the approach in a model which will lead to the full vertical and horizontal extent needed for the complete planetary boundary layer and wind farm wake. \subsection{Aims and outline} With the aim of demonstrating and validating time-dependent wind farm modelling, this study provides a detailed analysis of the observed wind farm performance, together with a high-resolution computational model of the wind farm for a selection of key wind conditions. This begins with section~\ref{S:Lillgrund}, which introduces Lillgrund wind farm.
Sections~\ref{S:Method} to \ref{s:full-farm} introduce the modelling approach and implementation, starting with the overall modelling methodology in section \ref{S:Method}, which describes in detail how the turbines and their response to the wind are represented. Section~\ref{S:turbine-parameterisations} describes how the model was configured for the Lillgrund turbines. Section~\ref{S:Empty} details the modelling of the domain without turbines, used to produce a realistic background flow structure, before section \ref{s:full-farm} turns to the full domain with the wind turbines positioned for different wind directions to simulate key wind conditions as identified from the results in section \ref{S:Lillgrund}. The results from the CFD model and the corresponding performance data from the SCADA record are described separately in section~\ref{S:Results}, which is then followed by a comparison and validation in section~\ref{S:Validation}. To conclude, some of the findings and issues are discussed and summarised in sections~\ref{S:Discussion} and~\ref{S:Conclusion}, respectively. \section{Lillgrund Wind Farm} \label{S:Lillgrund} \subsection{Description of wind farm} Lillgrund offshore wind farm is located 7~km south of the {\"O}resund bridge between Copenhagen in Denmark and Malm\"o in Sweden, as shown in Figure~\ref{fig:lillgrund-location} (55\ensuremath{^\circ} 31' N, 12\ensuremath{^\circ} 47' E). While it sits in a region fairly well enclosed by land, the prevailing south-westerly wind coincides with the longest wind fetch of between 25~km and 50~km, and the effects of land topography on air flow can be ignored. It has been operated by Vattenfall Vindkraft AB since December 2007 \citep{Lillgrund2}.
The array consists of 48 Siemens 2.3 MW Mk II wind turbines, each with a rotor diameter of $D=93\rm~m$ and a hub height of 65~m, in a regular lattice-type array as shown in Figure~\ref{fig:Map2}, where each turbine is given a number as well as a grid-name using column letters A to H and row numbers 1 to 8\@. There is a gap within the array where turbines D05 and E05 would have been, but the water there is too shallow for installation vessels to operate. The turbines are close to each other, with a spacing of $4.3D = 400\rm~m$ in the prevailing, SW -- NE, wind direction (43$^\circ$ / 223$^\circ$), and $3.3D= 307\rm~m$ in the NW -- SE direction (120$^\circ$ / 300$^\circ$). Originally, smaller turbines had been planned for, but by the time the turbines were being installed these larger turbines were available, and it was decided to opt for the larger turbines without changing the layout. Overall, the extent of the wind farm is up to 2.9~km in the prevailing wind direction and 2.25~km across, covering a total area of around $6\rm~km^2$. \begin{figure} \centering \includegraphics[width=0.75\linewidth]{Fig2} \caption{Location of Lillgrund wind farm. Sweden is to the right, and Denmark to the upper left. Courtesy of \citet{Lillgrund2}.} \label{fig:lillgrund-location} \end{figure} \begin{figure} \centering \includegraphics[width=.90\linewidth]{Fig3} \caption{Detailed plan view of Lillgrund. The turbines are labelled from 01\_A01 through to 48\_H04. The grid lines have a spacing of 500~m.
Courtesy of \citet{Lillgrund2}.} \label{fig:Map2} \end{figure} \subsection{Meteorological conditions}\label{S:Meteo} \begin{figure} a) \hspace*{0.48\textwidth} b) \\ \includegraphics[width = 0.48\textwidth]{Fig4a} \includegraphics[width = 0.48\textwidth]{Fig4b} \caption{Wind conditions: a) wind rose of the wind speed at hub height for the site of the wind farm covering the analysis period, using \cite{Carslaw2012}; b) Histogram of the distribution of the wind shear exponent within the wind bin investigated here. The legend indicates the heights of the pair of anemometers used.} \label{fig:Rose} \end{figure} The meteorological conditions at Lillgrund were monitored during the planning and construction with a meteorological mast south-west of the turbine array and are reported in \citet{Lillgrund16}. This analysis was repeated from available data covering part of the operational phase. The wind rose in Figure~\ref{fig:Rose}~(a), using the later operational data, shows the typical pattern of prevailing winds from the south-westerly direction. The met mast was equipped with anemometers at heights of 25, 40, 62.5 and 65~m, wind vanes for wind direction at 23 and 61~m, and temperature sensors at 8 and 61~m height. \citet{Lillgrund16} reported a correlation of the wind shear exponent, using a power-law profile, of $$ \frac{U_{65}}{U_{25}} = \left(\frac{65\rm m}{25\rm m}\right)^{\alpha} $$ with an exponent of $\alpha=0.108$ for the entire period of their analysis, covering the entire range of wind conditions experienced between September 2003 and February 2006\@. The analysis was repeated with the later operational data, using all possible pairs of anemometers on the met mast.
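The power-law profile above yields the shear exponent directly from any pair of anemometer readings. A minimal sketch of that calculation (the function name and the example speeds are illustrative, not measured values):

```python
import math

def shear_exponent(u_upper, z_upper, u_lower, z_lower):
    """Power-law wind shear exponent alpha from a pair of anemometer
    readings, i.e. u_upper / u_lower = (z_upper / z_lower) ** alpha."""
    return math.log(u_upper / u_lower) / math.log(z_upper / z_lower)

# Illustrative 10-minute mean speeds at 65 m and 25 m:
alpha = shear_exponent(8.0, 65.0, 7.2, 25.0)
```

Applying this to every valid pair of heights, per 10-minute record, gives the distributions of instantaneous exponents shown in Figure~\ref{fig:Rose}~(b).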
When calculated from the ratio of the 10-minute wind speed averages and the ratio of the heights of pairs of anemometers, this showed a large range in the wind shear exponents, with a slight preference for either an exponent significantly less than the mean or an exponent closer to neutral conditions ($\alpha\approx 0.14$). Considering the focus of our study, we only present the results for those data where the wind direction was between 180\ensuremath{^\circ} and 260\ensuremath{^\circ} and the wind speed at 65~m between 5~m/s and 12~m/s. The correlation between the results involving the upper level was very good (correlation coefficient $0.76 < r < 0.994$), but the correlation between the results of the pair 40~m and 25~m and all other pairs was poor ($r\approx 0.46$). For that reason, Figure~\ref{fig:Rose}~(b) shows how frequently a particular instantaneous wind shear exponent occurred, using the two upper anemometers against that at 40~m. Both show a distribution with a clear maximum, though with a bias between the two pairs despite the close spacing of the upper two anemometers, one suggesting a most common wind shear exponent of $0.14 \lesssim \alpha \lesssim 0.16$ and the other of $0.15 \lesssim \alpha \lesssim 0.2$. These results highlight two challenges, namely the difficulty of obtaining reliable measurements from routinely deployed instruments and of adequately describing wind conditions by common, fixed wind shear profiles, whether they have a power-law or logarithmic form. Nevertheless, for modelling wind farms through CFD, it is necessary to represent `the wind conditions' by typical and well-defined approximations.
The results in Figure~\ref{fig:Rose}~(b) indicate that common wind shear profiles are satisfactory approximations at least at heights occupied by the turbine rotors and, in particular, that wind shear profiles associated with neutral conditions of the atmosphere are sufficiently common to be a valid scenario to demonstrate the capabilities of the modelling approach and to validate its results against observations before embarking on the next step of including convection or stratification effects. \subsection{Lillgrund diagnostics} The analysis data set was derived from the output of turbine diagnostics from the SCADA (supervisory control and data acquisition) system at an interval of 1 minute covering a period of 480 days, starting in December 2007; however, this analysis only uses data from January 2008 onwards, when all turbines were finally connected to the system. Furthermore, the analysis only included instances when at least 40 turbines were operating normally, to ensure that the data reflected the farm as a whole while allowing for scheduled or unscheduled downtime of some turbines. Turbines with a curtailed output were also filtered out, to exclude those not operating according to their normal performance characteristics. The resulting set of valid data covered 323 days. The available data from the met mast overlapped with that period, but did not cover the full range of valid operational data. This necessitated the use of nacelle data to infer wind speed and direction and a further validation stage to test the correspondence between nacelle data and met mast data. The first stage in this is to identify the `\emph{front}' turbine to use as the provider for the proxy wind speed and direction measures. \subsubsection{Front turbine selection} To construct the turbine's performance curve, first the wind direction and representative `front' turbine had to be determined.
This was achieved by selecting three turbines from each edge of the wind farm associated with a wind direction sector spanning 45$^\circ$. At each time step, the appropriate sector was identified by finding instances where the three front turbines for that sector had a yaw direction consistent with that wind direction. From those instances, the representative front turbine was chosen as that having the median of the nacelle wind speed, yaw direction, and active power output. The final selection was then inspected for consistent behaviour across sectors. Having thus identified the turbine to represent the free-stream conditions, the actual consistency between the nacelle-based measures and the met mast could be assessed. \subsubsection{Wind speed and nacelle anemometer} \begin{figure} a) \hspace*{0.48\textwidth} b) \\ \includegraphics[width=.48\linewidth]{Fig5a} \includegraphics[width=.48\linewidth]{Fig5b} \caption{Comparison of a) wind speed and b) direction information from the met mast and the nacelle over the directional range investigated in detail.} \label{fig:Yaw} \end{figure} While a met mast anemometer is designed to measure the true local wind speed, a nacelle anemometer, in contrast, sits behind the rotor but is calibrated to estimate the free-stream velocity, and that calibration has to vary with the turbine's action. A further complication is that Lillgrund has only a single met mast, south-west of the farm but very close to the turbines. For most wind directions other than south-westerlies, the anemometer is affected by turbine wakes, and it is certainly within the wind farm wake for wind directions from the northerly sectors, so that the met mast instruments no longer measure the free-stream conditions. For that reason, it is deemed that the most reliable measure of the free wind speed is the calibrated output from the anemometer at the top of that turbine which is most exposed to the wind.
Figure~\ref{fig:Yaw}(a) shows the wind speed readings from the nacelle anemometer against those from the anemometer at 63~m above the sea on the met mast for wind directions between 180\ensuremath{^\circ} and 270\ensuremath{^\circ}. While there is some variation, both random and systematic, the agreement between the two measures is good enough to be able to use the nacelle wind speed as an indicator of the free-stream wind speed, especially in the range between the cut-in wind speed of the turbines and the rated wind speed. \subsubsection{Wind direction and nacelle yaw} As with the wind speed, a measure of the wind direction based on available turbine data had to be determined. In ideal conditions, the nacelle yaw should follow the wind direction, but this only happens with a delay given by the yaw control mechanism of the turbine. Furthermore, identifying the current wind direction and actuating the rotor and nacelle yaw appropriately are not trivial. \citet{Lillgrund15} presented some evidence that the nacelle yaw of the front turbine did follow the wind direction from the met mast, albeit with a slight delay, filtering out the faster fluctuations, and with a small but persistent bias. A more complete re-analysis of the relationship between the two measures across the entire range showed both a random variation and a systematic variation over the range investigated. This suggests that the yaw mechanism is affected by the flow induced by the other turbines in the front line affecting the selected turbine. However, over the more restricted range to be investigated in this study, that systematic variation is very small, leaving only the random variation and an offset of around 9\ensuremath{^\circ}~between the met mast wind direction and the nacelle yaw, as shown in Figure~\ref{fig:Yaw}. The nacelle yaw is on average $9\ensuremath{^\circ}\pm 7\ensuremath{^\circ}$ less than the met mast direction.
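Because wind directions wrap at 360\ensuremath{^\circ}, an average offset such as the bias above is best estimated with circular statistics rather than a plain arithmetic mean. A minimal sketch (function name and sample angles are illustrative, not part of the analysis code):

```python
import math

def mean_direction_offset(yaw_deg, met_deg):
    """Mean angular offset (nacelle yaw minus met mast direction), in
    degrees, averaged on the circle so that e.g. 359 and 1 differ by 2."""
    s = c = 0.0
    for y, m in zip(yaw_deg, met_deg):
        d = math.radians(y - m)
        s += math.sin(d)
        c += math.cos(d)
    return math.degrees(math.atan2(s, c))
```

Averaging the unit vectors of the angular differences avoids the discontinuity at north that would corrupt a naive mean of `yaw - met`.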
\subsection{Wind farm performance}\label{S:Lillgrund:FarmPerf} The two performance curves shown in Figure~\ref{fig:PerformanceCurve} compare that of a turbine exposed to the wind (in the blue shading with the cross-hatching) with that of the entire wind farm (the red shaded area) against the `free wind speed' at hub height. In both cases, the shading captures 90\% of all valid data. The wind turbine curve in Figure~\ref{fig:PerformanceCurve} aggregates the data from only those turbines which are on the edge of the farm facing the wind at any time. For the wind farm in Figure~\ref{fig:PerformanceCurve}, the sum of the power output from the normally operating turbines was divided by the number of those turbines and their rated power, to give the power output normalised against the active installed capacity of the wind farm. For both curves, the power coefficient is plotted against the wind speed recorded at the front turbines. \subsubsection{Relative wind farm performance} \begin{figure} \centering \includegraphics[width =0.7\linewidth]{Fig6} \caption{Relative performance of wind farm, $\Pi$, against nominal wind speed averaged over all wind directions. The solid line is the median, and the shaded area indicates the range from the $5^{th}$ to the $95^{th}$ percentile. The dashed line indicates the extent over which the median relative performance is relatively constant.} \label{fig:RelPerformance_U} \end{figure} \begin{figure} \centering \includegraphics[width = 0.7\linewidth]{Fig7} \caption{Relative performance of wind farm, $\Pi$, against the wind direction, for all data within the wind speed band, $5.5{\rm~m/s} < U < 10.5{\rm~m/s}$. The solid line is the median, and the shaded area indicates the range from the $5^{th}$ to the $95^{th}$ percentile.
The two vertical dot-dashed lines indicate the wind direction range focussed on for the detailed analysis.} \label{fig:RelPerformance_Dir} \end{figure} A previous assessment of the wind farm performance \citep{Lillgrund15} subdivided the variable range into three zones. To refine their analysis, a power deficit or relative wind farm performance can be defined as \begin{equation} \Pi \equiv P_{Farm}/(N_T P_{Front}) \end{equation} where $P_{Farm}$ is the sum of the power output from all normally operating turbines and $P_{Front}$ is the power output from a `front' turbine identified as being on the windward edge of the wind farm. $N_T$ is the number of normally operating turbines, which excludes turbines operating at a curtailed level or turbines which have been turned off. Figure~\ref{fig:RelPerformance_U}, which shows the median of that ratio together with the range covering 90\% of the data, demonstrates that the relative farm performance is constant over an extended wind speed range from around 5~m/s to 11~m/s (indicated by the dashed line). As these results include all wind directions, the range is substantial within that wind speed band. \subsection{Identification of cases to be simulated} When combining all relative power coefficients within that wind speed band but resolving the wind direction over small wind directional bins, $3^\circ$ in the case shown in Figure~\ref{fig:RelPerformance_Dir}, pronounced peaks and troughs can be seen as a result of the lattice structure of the wind farm layout, as turbines in the second and third rows move in and out of the wake from the upwind turbines. This clear sensitivity of the power deficit to the wind direction in the wind speed band $U_{\rm cut-in} < u < U_{\rm rated}$ motivated this investigation, in which specific wind directions are analysed in more detail.
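The definition of $\Pi$ together with the exclusion of curtailed or stopped turbines can be sketched as follows (a simplified illustration; the actual SCADA processing also applies the validity checks described above, and the names are ours):

```python
def relative_performance(powers, front_power, excluded=None):
    """Relative wind-farm performance Pi = P_Farm / (N_T * P_Front).

    `powers` holds the power output of every turbine; `excluded` holds the
    indices of curtailed or stopped turbines, which are removed from both
    the farm total and the turbine count N_T.
    """
    excluded = excluded or set()
    valid = [p for i, p in enumerate(powers) if i not in excluded]
    return sum(valid) / (len(valid) * front_power)
```

A farm in which every normally operating turbine matched the front turbine would give $\Pi = 1$; wake losses push the ratio below unity.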
In particular, the focus is on the narrow wind direction sector indicated by the dot-dashed lines in Figure~\ref{fig:RelPerformance_Dir}, which covers the two extreme cases of the turbines in the second row fully shaded and fully exposed to the free stream, and some intermediate scenarios. \begin{table} \caption{Cases of wind directions investigated and schematics of how the wind direction and turbine layout relate to each other.} \label{tab:Cases} \centering \begin{tabular}{ l l l l } \hline\noalign{\smallskip} Direction & Characteristics & Direction & Characteristics \\ \hline\noalign{\smallskip} 198\ensuremath{^\circ} & Maximum exposure of second row & 202\ensuremath{^\circ} & Second row exposed, third row in first row wake\\ & \includegraphics[width = 0.3\textwidth]{Fig_Table_1_Map_CFD198} && \includegraphics[width = 0.3\textwidth]{Fig_Table_1_Map_CFD202} \\ 207\ensuremath{^\circ} & Second and third rows not shielded & 212\ensuremath{^\circ} & Second and third rows not shielded \\ & \includegraphics[width = 0.3\textwidth]{Fig_Table_1_Map_CFD207} && \includegraphics[width = 0.3\textwidth]{Fig_Table_1_Map_CFD212} \\ 217\ensuremath{^\circ} & Second row partially shielded & 223\ensuremath{^\circ} & Turbines fully aligned with wind \\ & \includegraphics[width = 0.3\textwidth]{Fig_Table_1_Map_CFD217} && \includegraphics[width = 0.3\textwidth]{Fig_Table_1_Map_CFD223} \\ 229\ensuremath{^\circ} & Second row partially shielded & 236\ensuremath{^\circ} & Second row between wakes, oblique opening \\ & \includegraphics[width = 0.3\textwidth]{Fig_Table_1_Map_CFD229} && \includegraphics[width = 0.3\textwidth]{Fig_Table_1_Map_CFD236} \\ \hline\noalign{\smallskip} \end{tabular} \end{table} The relative power deficit, $\Pi$, of the wind farm is clearly a function of the wind direction but, on average, it is constant within the reference wind speed range between 5.5~m/s and 10.5~m/s.
Because of this, we chose to investigate the wind farm effect for typical wind conditions with a free stream velocity at hub height of 10~m/s for a set of south-westerly directions centred around that where the turbines are fully aligned with the wind. Based on the turbine coordinates provided by Vattenfall in local Euclidean North-East coordinates, this occurs at a wind direction of 223$^\circ$, and the cases analysed here centre around this wind direction and extend to either side of it. The wind directions chosen and how they relate to the turbine positions are listed in Table~\ref{tab:Cases}. For Lillgrund wind farm, the chosen wind directions are key cases, as they are within the sector of the prevailing winds as shown by the wind rose in Figure~\ref{fig:Rose}. As neutral stability conditions are sufficiently frequent, and in the absence of sufficient atmospheric stability information from the SCADA data, this set of simulations was restricted to neutral conditions. \section{Computational methodology} \label{S:Method} As with previous work \citep{CFC11}, the turbine model described below in section \ref{s:turbine-formulation} is broadly derived from blade-element momentum theory. Rather than use axial induction factors, however, lift and drag are calculated from tabulated aerofoil data and applied to the incompressible Navier-Stokes momentum equation as body forces, with CFD used to resolve the flow. This is a common approach utilised in wind turbine modelling \citep{jimenez2008, meyers2010, lu2011, Churchfield2012}; for a summary of techniques, see \citet{creechfruh2014}. Fluidity, an open-source, finite-element hr-adaptive CFD solver from Imperial College \citep{piggott2004}, was used to solve the Navier-Stokes equations with LES turbulence modelling.
This solver has a long history in coastal and oceanographic modelling \citep{ford2004, pain2005, piggott2008, funke2011, kimura2013, hill2014}, but has also been used to study atmospheric boundary layer turbulence \citep{aristodemou2009, pavlidis-springer-2010, pavlidis2010}. Following on from \citet{CFC11}, the mesh for velocity and pressure was adaptive and unstructured; resolution was concentrated near the cylindrical volumes representing the turbines, to ensure that there were sufficient mesh nodes within each turbine. Furthermore, as the meshes were adaptive, the mesh nodes within these volumes had to be gathered at each timestep, since there was no guarantee that the mesh would be identical between timesteps. Section \ref{s:fluid-equations} will briefly detail the main fluid dynamics equations, while section \ref{s:turbine-formulation} deals with the turbine model itself. \subsection{Fluid equations} \label{s:fluid-equations} Our starting point is the Navier-Stokes momentum equation for an incompressible Newtonian fluid, i.e. \begin{equation} \label{eqn:basic-ns-mom} \frac{D \mathbf{u}}{Dt} = - \frac{1}{\rho} \nabla p + \nu \nabla^2 \mathbf{u} + \frac{1}{\rho} \mathbf{F} \end{equation} where $\mathbf{u}$ is the velocity field, $\rho$ is the density of air, $p$ is pressure, $\nu$ is the kinematic viscosity of air, and $\mathbf{F}$ is a vector representing the body forces exerted on the air by the wind turbines. The body forces are calculated by the turbine model and only exist within the cylindrical volumes the turbines occupy, described in more detail in section \ref{s:turbine-formulation}. At each timestep, the CFD solver passes velocity, time, and time-step size to a separate turbine module, which then calculates the turbine performance and passes the body force terms back to the solver, which then solves the equations.
This process is represented as a flowchart in Figure \ref{fig:main-routine-flowchart}; further details can be found in Section \ref{s:turbine-formulation}. Within the Fluidity solver, equation (\ref{eqn:basic-ns-mom}) was discretised using a P1-P1 finite element pair, with a wall-adapting local eddy-viscosity (WALE) variant of the LES turbulence model \citep{ducros1998, nicoud1999} for subgrid-scale turbulence. In tensor notation this becomes \begin{equation} \frac{D \overline{u}_i}{D t} = - \frac{1}{\rho} \frac{\partial p}{\partial x_i} + \frac{\partial}{\partial x_j} \left[ \left( \nu + \nu_t \right) \left( \frac{\partial \overline{u}_i}{\partial x_j} + \frac{\partial \overline{u}_j}{\partial x_i} \right) \right] + F_i \end{equation} The overbar denotes the velocity field filtered above the filter lengthscale $\Delta$, and $\nu_t$ represents the additional viscosity due to subgrid-scale turbulence, i.e. at lengthscales below $\Delta$. Standard Smagorinsky models define this as $\nu_t = C^2_S \Delta^2 \left| \overline{S} \right|$, where $C_S$ is the Smagorinsky coefficient, and $\left| \overline{S} \right|$ the magnitude of the strain-rate tensor. However, this performs poorly near wall boundaries, since the eddy viscosity increases as soon as there is a velocity gradient, whereas the turbulence should drop away rapidly near the wall. With WALE LES, a new formulation of $\nu_t$ was developed to account for this phenomenon. The Smagorinsky coefficient is still required, and was set to $C_S=0.18$ for the simulations.
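For reference, the standard Smagorinsky closure quoted above can be written out directly; the WALE variant actually used replaces the strain-rate magnitude with a more elaborate operator, which is not reproduced here. A sketch, assuming the common convention $|\overline{S}| = \sqrt{2\,\overline{S}_{ij}\overline{S}_{ij}}$ (not stated explicitly in the text):

```python
import math

def smagorinsky_nu_t(grad_u, delta, c_s=0.18):
    """Standard Smagorinsky subgrid eddy viscosity
    nu_t = (C_S * Delta)**2 * |S|, with S_ij the filtered strain-rate
    tensor and |S| = sqrt(2 S_ij S_ij).  `grad_u` is the 3x3 velocity
    gradient du_i/dx_j at a point."""
    s = [[0.5 * (grad_u[i][j] + grad_u[j][i]) for j in range(3)]
         for i in range(3)]
    s_mag = math.sqrt(2.0 * sum(s[i][j] ** 2
                                for i in range(3) for j in range(3)))
    return (c_s * delta) ** 2 * s_mag
```

As the text notes, this form switches on wherever there is any velocity gradient, which is why the WALE operator is preferred near walls.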
\begin{figure} \begin{centering} \includegraphics[width = 0.5\columnwidth]{Fig8} \caption{Overview of the calculation procedure at each time-step.} \label{fig:main-routine-flowchart} \end{centering} \end{figure} \subsection{Turbine formulation} \label{s:turbine-formulation} In a development comparable to the recent studies by \protect\citet{Archer2013} or \protect\citet{Nilsson2014}, our goal was to incorporate the dynamic response of the turbine to the local flow conditions. This builds upon \citet{CFC11}, which describes a torque-controlled actuator disc model of a fixed-pitch turbine, adding active blade pitch control and rotor yaw. Other torque-controlled models, such as those of \citet{breton2014} and \citet{wu2015}, appear to parameterise the turbine behaviour as a relaxed iteration of the rotor angular velocity, using tabulated steady-state torque data based upon manufacturers' turbine specifications. Whilst this is certainly a practical solution, our approach is aimed at modelling the physical processes and control actions that achieve the desired power output. Ultimately this will also encompass the mechanical inertia of the drive train, the full electro-mechanical response of the generator, and the associated frictional losses. \subsubsection{Frame of reference} In order to calculate body forces due to lift and drag, the coordinates and velocity of nodes on the mesh must be translated to the frame of reference of each turbine rotor, i.e.~a coordinate system local to that turbine, which must take into account the position, yaw and tilt of the turbine rotor. Here, we use a technique common in computer graphics to transform between reference frames \protect\citep{foley1997}.
If we indicate coordinates within the simulation reference frame with $(^{*})$, then for a wind turbine hub at position $\mathbf{x}^{*}_T=(x^{*}_T, y^{*}_T, z^{*}_T)$, a yaw angle of $\psi$ and an upward rotor tilt of $\gamma$, the coordinates of a mesh node $\mathbf{x}=(x,y,z)$ in the turbine reference frame will be \begin{equation}\label{eqn:rotation} \mathbf{x} = \mathbf{M}_{-\gamma} \mathbf{M}_{-\psi} \begin{bmatrix}x^{*}-x^{*}_T \\ y^{*} - y^{*}_T \\ z^{*} - z^{*}_T\end{bmatrix} \end{equation} where $$ \mathbf{M}_{-\gamma} = \begin{bmatrix} \cos \gamma & 0 & -\sin \gamma \\ 0 & 1 & 0 \\ \sin \gamma & 0 & \cos \gamma \end{bmatrix} \mbox{\;and\;} \mathbf{M}_{-\psi}= \begin{bmatrix} \cos \psi & -\sin \psi & 0 \\ \sin \psi & \cos \psi & 0 \\ 0 & 0 & 1 \end{bmatrix} $$ are the rotation matrices for rotor tilt and yaw, respectively. Figure~\ref{fig:3d-volume-skew} shows the transformation between coordinate systems. Similarly, the velocity at a mesh node in the simulation frame of reference $\mathbf{u}^{*}=(u^{*}, v^{*}, w^{*})$ can be transformed to $\mathbf{u}=(u, v, w)$ in the turbine reference frame by \begin{equation} \mathbf{u} = \mathbf{M}_{-\gamma} \textbf{M}_{-\psi} \: \mathbf{u}^{*} \label{eqn:velocity-rotation} \end{equation} Only nodes within a cylindrical turbine volume $V$ generate body forces. Nodes are considered to be within $V$ where $-L/2 < x < L/2$ and $r (=\sqrt{y^2+z^2}) < R$, with $L$ being the length of the cylinder, and $R$ the radius of the turbine rotor. \begin{figure}[t] \begin{centering} \includegraphics[width = 0.5\columnwidth]{Fig9} \caption{The cylindrical turbine volume $V$, with radius $R$ and length $L$. The transformation between the two coordinate systems is shown, with the axes $x^{*}$, $y^{*}$, $z^{*}$ representing the simulation coordinate axes, and $x$, $y$, $z$ representing those of the turbine reference frame.
The yaw angle $\psi$ is an anti-clockwise rotation about the $z^{*}$ axis, and the rotor tilt $\gamma$ is a clockwise rotation about the $y$ axis.} \label{fig:3d-volume-skew} \end{centering} \end{figure} \subsubsection{Calculating lift and drag} \label{sub: liftdrag} Now that the coordinates and flow field have been transformed to the turbine rotor's reference frame, blade element momentum (BEM) theory can be applied to calculate the lift and drag forces acting on the blades. Fundamental to this are the calculated lift and drag coefficients, $C_L$ and $C_D$, which are dependent upon the angle of attack $\alpha$ and the Reynolds number $Re$ of the flow over the blade. The tabulated data for $C_L$ and $C_D$ are specific to each aerofoil, and are discussed in section~\ref{S:turbine-parameterisations}. Following the approach in \citet{CFC11}, the lift and drag forces on the blades per unit span are \begin{equation} \label{fL_def} f_L = C_L(\alpha, Re) \, \frac{1}{2}\rho u^2_{rel} c(r) \end{equation} \begin{equation} \label{fD_def} f_D = C_D(\alpha, Re) \, \frac{1}{2}\rho u^2_{rel} c(r) \end{equation} where $\rho$ is the density of air, $u_{rel}$ is the speed of the air relative to the blades, and $c(r)$ is the chord length of the blade at radial distance $r$ from the rotor centre. This approach assumes a steady-state response of the aerofoil to flow conditions, ignoring transient effects such as dynamic stall~\citep{CFC11} or tower shadow \citep{fruh2008}. Furthermore, rotational augmentation \citep{SchSoRo07,FruhCreech2015ICREPQ} is omitted at this stage, as it is expected to be a minor correction at the operational conditions used here. The relative speed $u_{rel}$ is calculated at each mesh node in $V$, and takes into account both the rotation of the blades and that of the incoming flow.
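Stepping back briefly, the reference-frame transformation of equations (\ref{eqn:rotation}) and (\ref{eqn:velocity-rotation}) amounts to a translation followed by two rotations; a sketch of it (function and variable names are ours, angles in radians):

```python
import math

def to_turbine_frame(x_star, hub, psi, gamma):
    """Transform a point from simulation coordinates to the turbine
    reference frame: translate by the hub position, then apply the yaw
    rotation M_{-psi} (about z), then the tilt rotation M_{-gamma}
    (about y), matching x = M_{-gamma} M_{-psi} (x* - x*_T)."""
    dx = [x_star[i] - hub[i] for i in range(3)]
    cy, sy = math.cos(psi), math.sin(psi)
    v = [cy * dx[0] - sy * dx[1],          # M_{-psi}: rotate about z
         sy * dx[0] + cy * dx[1],
         dx[2]]
    ct, st = math.cos(gamma), math.sin(gamma)
    return [ct * v[0] - st * v[2],          # M_{-gamma}: rotate about y
            v[1],
            st * v[0] + ct * v[2]]
```

Velocities transform with the same two rotations but without the translation, as in equation (\ref{eqn:velocity-rotation}).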
For a node at a radial distance of $r$ from the rotor centre, this is written as \begin{equation} \label{eqn:urel} u_{rel} = \sqrt{(r\omega_{rel})^2 + u^2} \end{equation} The angular velocity component $\omega_{rel}$ is the angular velocity of the blade relative to the local angular velocity of the air, i.e. \begin{equation} \omega_{rel} = \omega - \omega_{air} \end{equation} where $\omega$ is the angular velocity of the blades, and $\omega_{air}$ is the angular velocity of the air within the turbine volume $V$: \begin{equation} \omega_{air} = \frac{1}{r^2} \left( yw - zv \right) \end{equation} The inclusion of $\omega_{air}$ is due to Newton's third law. As the lift and drag forces act to turn both the blades and the generator, so an equal and opposite reaction force must act on the flow, causing the air to rotate in the opposite direction to the blades, as can be seen in Figure \ref{fig:blade-pitch-chord}. This, in turn, increases the magnitude of $u_{rel}$ quadratically, thus generating larger lift and drag forces, as shown by equations (\ref{fL_def}) and (\ref{fD_def}). Whilst it has been demonstrated \citep{sorensen2002} that the azimuthal induction factor is small (5\%) over most of the blade under normal operating conditions and can generally be ignored, equation~(\ref{eqn:urel}) also remains valid near the blade root, and during start-up conditions where $\omega$ is small and the condition $r\omega_{rel} \gg u$ cannot be guaranteed. \begin{figure} \begin{centering} \includegraphics[width = 0.45\columnwidth]{Fig10} \caption{Diagram of a turbine blade showing chord, pitch and paths of motion. The dashed line represents a turbine blade with no twist parallel to the rotor plane, and the solid line the actual blade incorporating both pitch and twist. $\beta_{pitch}$ is taken clockwise from the rotor plane at the blade tip; $\beta$ incorporating blade twist is shown at distance $r$ from the hub centre. The rotor rotates in the opposite direction to the wake.
The directions of the $y$ and $z$ axes are shown on the bottom right.} \label{fig:blade-pitch-chord} \end{centering} \end{figure} The relative flow angle of the air to the rotor plane, shown in Figure~\ref{fig:alpha-beta-theta}, is given as \begin{equation} \theta_{rel} = \tan^{-1} \left(\frac{u}{r\omega_{rel}}\right) \end{equation} \begin{figure} \begin{centering} \includegraphics[width = 0.50\columnwidth]{Fig11} \caption{The relationship between the axial velocity component of the incoming air, $u$, the relative speed of the air, $u_{rel}$, and the relative azimuthal velocity $r \omega_{rel}$. The relative angle of the air flow to the rotor plane, $\theta_{rel}$, is the sum of the angle of attack $\alpha$ and local blade twist $\beta(r)$. The forces on the blades, $F_L$ and $F_D$, are indicated by solid blue lines; the dotted blue lines are the reaction forces acting on the air, which are opposite in direction but equal in magnitude. Note that $\beta$ can become negative when $r \omega_{rel}$ is large, so that an optimum angle of attack is maintained across the blade.} \label{fig:alpha-beta-theta} \end{centering} \end{figure} This allows us to compute the angle of attack as \begin{equation} \alpha = \theta_{rel} - \beta \label{eqn:alphabeta} \end{equation} where the local blade angle $\beta = \beta_{pitch} + \beta_{twist}$. The local blade twist angle $\beta_{twist}$ is a function of $r$, and calculated from the known turbine geometry; the methodology for determining this will be discussed in section~\ref{S:turbine-parameterisations}. The blade pitch angle $\beta_{pitch}$ is specified from the tip as shown in figure \ref{fig:blade-pitch-chord}, and is a dynamic variable altered through a blade pitch control mechanism -- this will be discussed in section~\ref{S:pitch}. 
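Equations (\ref{eqn:urel})--(\ref{eqn:alphabeta}) combine with the lift and drag definitions into a single per-node calculation; a sketch in which the $Re$-dependence of $C_L$ and $C_D$ is dropped for brevity and the names are illustrative:

```python
import math

def blade_section_forces(u, r, omega_rel, beta, chord, rho, cl, cd):
    """Per-unit-span lift and drag at a blade section.

    `u` is the axial velocity, `omega_rel` the blade angular velocity
    relative to the air, `beta` the local blade angle (pitch + twist),
    and `cl`/`cd` are callables C_L(alpha), C_D(alpha)."""
    u_rel = math.sqrt((r * omega_rel) ** 2 + u ** 2)
    theta_rel = math.atan2(u, r * omega_rel)  # relative flow angle to rotor plane
    alpha = theta_rel - beta                  # angle of attack
    q = 0.5 * rho * u_rel ** 2 * chord        # dynamic pressure times chord
    return cl(alpha) * q, cd(alpha) * q
```

In the actual model, `cl` and `cd` would interpolate the tabulated aerofoil data discussed in section~\ref{S:turbine-parameterisations}.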
Returning to the lift and drag forces, we transform the lift and drag per unit length into body forces, i.e.~force per unit volume, so that they can be applied as force terms in the Navier-Stokes momentum equation. This gives \begin{equation} F_{L} = \eta(x) \left( \frac{N_{blades}}{2\pi r}\right) f_L \end{equation} \begin{equation} F_{D} = \eta(x) \left( \frac{N_{blades}}{2\pi r}\right) f_D \end{equation} where $N_{blades}$ is the number of blades, and $\eta(x)$ is a Gaussian regularisation function similar to those of \citet{sorensen1998} and \citet{sorensen2002}. This only operates in the axial direction, as we are dealing with actuator discs and the influence of the blades is already spread azimuthally in a series of infinitely thin rings. We define the regularisation function as \begin{equation} \eta(x) = \frac{1}{\sqrt{2 \pi \sigma^2}} e^{-\frac{1}{2} \left(\frac{x}{\sigma}\right)^2} \end{equation} where the standard deviation $\sigma$ controls the width of the filter. Smaller values of $\sigma$ gave greater accuracy in the axial force distribution, but as velocity and force interpolation was linear, this also required a prohibitively large increase in mesh resolution near the disc. During the turbine wind tunnel simulations detailed in section~\protect\ref{S:turbine-parameterisations}, iterative testing demonstrated that $\sigma=\frac{1}{2}L$, where $L$ is the length of the cylindrical volume, gave realistic turbine performance, whilst also allowing mesh resolutions coarse enough to permit the large domains necessary for wind farm simulation. An explicit tip-loss correction is not necessary here; the use of CFD means the flow field upstream of the turbine is changed by the presence of the actuator forces, and so changes to the induction happen automatically \citep{sanderse2011}.
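As a sketch of how the per-unit-span loads become volume forces, the Gaussian regularisation $\eta(x)$ and the $N_{blades}/2\pi r$ ring factor can be written as below. The names are illustrative only, and the normalisation check in the usage note relies on $\eta$ integrating to one.

```python
import numpy as np

def eta(x, sigma):
    """Gaussian regularisation in the axial direction; integrates to one."""
    return np.exp(-0.5 * (x / sigma) ** 2) / np.sqrt(2.0 * np.pi * sigma**2)

def body_force_density(f_per_span, x, r, n_blades, sigma):
    """Force per unit volume from a force per unit span (lift or drag)."""
    ring = n_blades / (2.0 * np.pi * r)  # azimuthal smearing over a thin ring
    return eta(x, sigma) * ring * f_per_span
```

Integrating `eta` over a wide axial range should recover unity, confirming that the regularisation redistributes the load without changing its total.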
Writing down the azimuthal and axial components of the body force acting on the fluid, which are in the opposite direction to the forces acting on the blade, we have \begin{equation} F_{azim} = - \left( F_{L} \sin \theta_{rel} - F_{D} \cos \theta_{rel} \right) \end{equation} \begin{equation} F_x = - \left( F_{L} \cos \theta_{rel} + F_{D} \sin \theta_{rel} \right) \end{equation} From $F_{azim}$ we can write the $y$ and $z$ components of these force terms as \begin{equation} F_y = \frac{z}{r} \, F_{azim} \end{equation} \begin{equation} F_z = - \frac{y}{r} \, F_{azim} \end{equation} The force terms are then transformed back from the turbine reference frame to the simulation reference frame, in an inverse operation of (\ref{eqn:velocity-rotation}) via \begin{equation} \mathbf{F}^{*} = \textbf{M}_{\psi} \textbf{M}_{\gamma} \: \mathbf{F} \end{equation} The body forces can now be applied to the momentum equation. \subsubsection{Power, torque and thrust}\label{S:torque} As the lift and drag exert forces on the blade, Newton's third law of motion dictates that there must be an equal and opposite reaction on the air; this reaction force is present at each point within the rotor volume $V$. This can be used to calculate the instantaneous power output of the turbine at time $t$, as shown in this section. We start with the total torque acting on the fluid, i.e. \begin{equation} \tau_{fluid} = \int^V \mathbf{r} \times \mathbf{F} \, dV = \int^V r F_{azim} \, dV \end{equation} This torque must be balanced by $\tau_{power}$, the torque that turns the generator to create power, and $\tau_{blades}$, the torque due to the moment of inertia of the blades.
These are resistive, i.e.~they act in the opposite direction to $\tau_{fluid}$, therefore \begin{equation} \tau_{fluid} = - \left( \tau_{power} + \tau_{blades} \right) \end{equation} From \citet{CFC11} we use a simple model for the generator torque based on dimensional analysis: \begin{equation}\label{eqn:torque-omega} \tau_{power} = k \omega^2 \end{equation} where $k=\frac{P_{max}}{\omega_{max}^3}$ is a constant, $P_{max}$ is the maximum power output (e.g.~the rated power), and $\omega_{max}$ is the maximum angular velocity of the blades. This gives us an expression for the instantaneous power output of the turbine \begin{equation} P = \tau_{power} | \omega | \end{equation} Note that this formulation does not include any efficiency losses or active generator control mechanisms, and assumes a direct relationship between blade angular velocity and power output. \citet{Hansen2012} show that for a Vestas V80, the maximum blade RPM is reached at $10\,\mathrm{ms}^{-1}$, whereas rated power is reached between $12.5$--$15\,\mathrm{ms}^{-1}$. For this paper our interest is in hub-height freestream wind speeds of $10\,\mathrm{ms}^{-1}$ and below, and in that regime the simple generator model is acceptable. Clearly a more realistic and manufacturer-specific formulation is required for higher wind speeds. This should be relatively straightforward once the generator behaviour is defined, requiring the replacement of the RHS of equation (\ref{eqn:torque-omega}) with a new model. With the generator torque defined, we can return to the torque that accelerates the blades. Firstly, we define the moment of inertia of the blades \begin{equation} I = N_{blades} \int^R r^2 m(r) \, dr \end{equation} where $m(r)$ is the mass-per-unit-span of the turbine blade. This is expressed as \begin{equation} \label{eqn:mass-per-unit-span-def} m(r) = \rho_{m} A(r) \end{equation} where $A(r)$ is the cross-sectional area of the aerofoil, and $\rho_{m}$ is the mean blade material density.
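A compact numerical sketch of the generator model, the blade moment of inertia, and the explicit two-step Adams--Bashforth update of $\omega$ used in the time-stepping. Function names are illustrative, SI units are assumed, and the standard AB2 weights $(3/2, -1/2)$ are an assumption, as the paper does not state them explicitly.

```python
import numpy as np

def generator_torque(omega, p_max, omega_max):
    """tau_power = k * omega**2, with k = P_max / omega_max**3."""
    return (p_max / omega_max**3) * omega**2

def instantaneous_power(omega, p_max, omega_max):
    """P = tau_power * |omega|; recovers P_max at omega = omega_max."""
    return generator_torque(omega, p_max, omega_max) * abs(omega)

def blade_moment_of_inertia(r, mass_per_span, n_blades):
    """I = N_blades * integral of r^2 m(r) dr, via the trapezoidal rule."""
    y = r**2 * mass_per_span
    return n_blades * float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(r)))

def omega_ab2(omega, omega_dot, omega_dot_prev, dt):
    """Explicit two-step Adams-Bashforth update of blade angular velocity."""
    return omega + dt * (1.5 * omega_dot - 0.5 * omega_dot_prev)
```

At rated conditions the model returns $P_{max}$ exactly, since $P = k\,\omega_{max}^3 = P_{max}$; at half the rated angular velocity the power falls by a factor of eight, reflecting the cubic scaling.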
As both $c(r)$ and the aerofoil profile will be already known, we can numerically integrate to find $A(r)$, e.g.~by the trapezoidal rule. The moment of inertia can now be determined, so we calculate the angular acceleration of the blades \begin{equation} \dot{\omega} = \frac{\tau_{blades}}{I} \end{equation} With $\dot{\omega}$ we can then update $\omega$ at each time-step. In this paper, the simulations used an explicit two-step Adams-Bashforth integration method to calculate $\omega$ for the next time-step. The order of calculation from time-step $n$ to time-step $(n+1)$ can be described as \begin{equation*} \overbrace{\omega^{(n)} \longrightarrow \left( \tau^{(n)}_{fluid}, \, \tau^{(n)}_{power} \right) \longrightarrow \tau^{(n)}_{blades} \longrightarrow {\dot{\omega}}^{(n)}}^{\mathrm{time-step} \, n} \longrightarrow \overbrace{\omega^{(n+1)} \longrightarrow ...}^{\mathrm{time-step} \, (n+1)} \end{equation*} Lastly, as graphs of wind speed versus blade thrust are readily available for a number of wind turbines, they give us a useful measure of the model's correctness. The thrust is obtained by integrating the axial body forces across the turbine volume, i.e. \begin{equation} T = \int^V F_x(\mathbf x) \, dV \end{equation} This will be used in comparison with figures from an actual wind turbine in section~\ref{S:Turbine-validation}. \subsubsection{Active pitch control}\label{S:pitch} Like most modern utility-scale wind turbines, those at Lillgrund feature active pitch control, and so blade pitching was incorporated into the turbine model to mimic this behaviour. Our wind farm simulations would only consider wind speeds below the power knee, i.e.~below speeds at which blade feathering occurs, so the active pitch algorithm would only need to optimise the blade pitch (abbreviated in this section only from $\beta_{pitch}$ to just $\beta$) for maximum lift. It is a complex optimisation problem, as the only \textit{a priori} variable is $\beta$.
The angle of attack $\alpha$ is \textit{a posteriori}, as it is a function of the time-dependent blade pitch, turbine performance and local flow conditions. This means that the optimal blade pitch $\beta_{opt}$ cannot be known beforehand without prior empirical measurements or calculation, neither of which are assumed to be available. The methodology below adapts the core arguments from \citet[Ch.4]{Creech2009}, treating the blade pitching as a damped harmonic oscillation; this yields not only the value of $\beta$ for maximum performance, but also a rate of change of $\beta$ that is zero at $\beta_{opt}$. The second condition ensures stability, avoiding runaway feedback between changes in $\beta$ and $\alpha$. The first step is to define $\alpha_{opt}$, the optimum angle of attack at which the maximum lift occurs for minimum drag. This is straightforward to calculate from graphs of $C_L$ and $C_D$ for a particular aerofoil as \begin{equation} \alpha_{opt} = \arg\max_{\alpha}\left[ C_L(\alpha, Re) - C_D(\alpha, Re)\right] \end{equation} This is related to the more traditional definition of the optimal angle of attack, $\alpha_{trad} = \arg\max_{\alpha}\left[ C_L / C_D\right]$, conventionally used for the design of the blade twist, but it is not identical: here the target is used to determine the best blade pitch in a situation where the actual angle of attack varies across the rotor area. For this reason, the next step is to calculate the weighted average of the angle of attack across the blades, $\alpha_{wt}$. The weighting is necessary as the aim is to maximise lift, rather than simply ensuring that the mean angle of attack $\alpha$ across the blades is as close to $\alpha_{opt}$ as possible, which could plausibly result in sub-optimal performance.
The weighting must consider the factors that increase lift, such as chord length and relative air speed, so at each mesh node $i$ within $V$ it is defined as \begin{equation} w_i = c(r_i) u_{rel, i}^2 \end{equation} Using the sum of weights, $W$, \begin{equation} W = \sum_i w_i \end{equation} gives the weighted average as \begin{equation} \alpha_{wt} = \frac{1}{W} \sum_i w_i \alpha_i \end{equation} This emphasises the values that currently give the greatest lift. Now we define the desired angle of attack $\alpha_d$, i.e. the angle of attack that the algorithm will aim for. As we are not considering blade feathering in these simulations, where the lift is reduced by lowering the angle of attack below the optimum for lift, we set this to~$\alpha_d = \alpha_{opt}$. We define the maximum pitching rate of the blade, $|\dot{\beta}|_{max}$, by setting $t_{pitch}$, the shortest time in which a blade can pitch through one full rotation: \begin{equation} |\dot{\beta}|_{max} = \frac{2 \pi}{t_{pitch}} \end{equation} The value of $t_{pitch}$ had to be chosen with care, as too small a value would cause unstable oscillations in blade pitch. For all simulations in this paper, $t_{pitch}=10 \, \mathrm{s}$. We assume that as the timestep $\Delta t \to 0$, $|\Delta \beta| \to |\Delta \alpha|$; i.e.~over small periods of time, changes in the blade pitch $\beta$ lead to changes of equal magnitude in the angle of attack $\alpha$. From eq. (\ref{eqn:alphabeta}) these changes are opposite in sign, so in the limit we can also state generally that the rate of change of pitch $\dot{\beta}$ is equivalent to the negative of the rate of change of the angle of attack $\dot{\alpha}$, i.e.
\begin{equation} \dot{\beta} \approx -\dot{\alpha} \label{e:dotbetaalpha} \end{equation} The desired rate of change of attack $\dot{\alpha}_d$ is stated as \begin{equation} \dot{\alpha}_d = \frac{\alpha_d - \alpha_{wt}} {t_{pitch}} \end{equation} This ensures that smaller differences between $\alpha_d$ and $\alpha_{wt}$ result in smaller changes in the angle of attack, i.e. aiming for no change in angle of attack at $\alpha=\alpha_d$. If we write the desired change in blade pitch as an equal weighting of the current, known rate of change of pitch $\dot{\beta}_{k}$, and the desired rate $\dot{\beta}_d$, we can write \begin{equation} \Delta \beta_d = \Delta t \left( \frac{\dot{\beta}_k + \dot{\beta}_d}{2} \right) \end{equation} Through our equivalence relation in (\ref{e:dotbetaalpha}), we define $\dot{\beta}_d \approx -\dot{\alpha}_d$. As a final precaution, the rate of change in the pitch is limited by $|\dot{\beta}|_{max}$, so defining the maximum change in pitch as $\Delta \beta_{max} = \mbox{sign}(\Delta \beta_d) \, \Delta t \, |\dot{\beta}|_{max}$, the actual change in pitch is \begin{equation} \Delta \beta = \left\{ \begin{array}{ll} \Delta \beta_d & \mbox{if} \: |\Delta \beta_d| \leq |\Delta \beta_{max}| \\ \Delta \beta_{max} & \mbox{if} \: |\Delta \beta_d| > |\Delta \beta_{max}| \end{array} \right. \end{equation} Therefore the change in blade pitch from timestep $n$ to $n+1$ will be \begin{equation} \beta_{n+1} = \beta_n + \Delta \beta \end{equation} \subsubsection{Blade-generated turbulence}\label{S:Blade-turbulence} Blades in real turbines generate turbulence, particularly at the tips. However, as blades are not explicitly represented in the model, blade-induced turbulence must be described parametrically. In an approach used in previous work \citep{Creech2009,CFC11}, random fluctuations in the flow passing through the turbine volume are created by body forces, which match turbulent intensity measurements in experiment \citep{hossain2007}.
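Collecting the pitch-control relations of \S~\ref{S:pitch} above into a single controller step gives the following sketch. The node arrays and names are illustrative, the coefficients follow the equations above, and $t_{pitch}=10\,\mathrm{s}$ as in the simulations.

```python
import numpy as np

def pitch_step(alpha_nodes, weights, beta, beta_dot_k, dt,
               alpha_d, t_pitch=10.0):
    """Return the new blade pitch beta after one timestep.

    alpha_nodes : angle of attack at each node in the rotor volume
    weights     : lift weights w_i = c(r_i) * u_rel_i**2
    beta_dot_k  : current (known) rate of change of pitch
    """
    alpha_wt = np.sum(weights * alpha_nodes) / np.sum(weights)
    alpha_dot_d = (alpha_d - alpha_wt) / t_pitch  # desired rate of attack change
    beta_dot_d = -alpha_dot_d                     # d(beta) ~ -d(alpha)
    d_beta = dt * 0.5 * (beta_dot_k + beta_dot_d) # equal weighting of rates
    d_beta_max = dt * 2.0 * np.pi / t_pitch       # pitch-rate limiter
    if abs(d_beta) > d_beta_max:
        d_beta = np.sign(d_beta) * d_beta_max
    return beta + d_beta
```

When the weighted angle of attack already equals $\alpha_d$ and the pitch is at rest, the update leaves $\beta$ unchanged, which is the stability condition described above.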
Turbulence generation in the model is divided into three sections -- the tip ($r > 0.9 R$), the main blade section ($0.1 R < r \leq 0.9 R$), and the hub at $r \leq 0.1 R$, as shown in Figure~\ref{fig:actuator-turbulence-regions}. \begin{figure} \begin{centering} \includegraphics[width = 0.5\columnwidth]{Fig12} \caption{The turbulence-generating regions of the turbine model volume, based on \citet{CFC11}: i) the tip, where the turbulent intensity is highest, ii) the main blade section, which creates turbulence approximately half that of the tip-section, and iii) the hub section.} \label{fig:actuator-turbulence-regions} \end{centering} \end{figure} The approach used will be briefly detailed here. A turbulence intensity function is defined \begin{equation} \mathbf{Ti}(r, \omega, Ti_{max}) = \begin{bmatrix} Ti_x\\ Ti_y \\ Ti_z \end{bmatrix} \end{equation} which varies with $r$, $\omega$ and the predetermined maximum turbulence intensity $Ti_{max}$, such that $\mathbf{Ti}=0$ at $\omega=0$ and its maximum values are reached at $\omega=\omega_{max}$. This is then used to calculate the random variations in velocity which statistically match the specified blade-generated turbulence intensity. In the case of the axial velocity component, this gives \begin{equation} \Delta u_{turb} = G_x (Ti_x) u \end{equation} where $G_x$ is a coherent Gaussian-noise algorithm taken from \citet{fox2007}. $\Delta v_{turb}$ and $\Delta w_{turb}$ are similarly defined. These fluctuations are then translated into body force terms which are then added to the body forces defined in section \ref{sub: liftdrag}. Further details on this approach and its validation with wake data can be found in \citet{CFC11}. It should be noted that experimental analysis has shown that the hub/root region of the turbine generates vortices, and thus significant levels of turbulence \citep{zhang2012, iungo2013}.
This is to be expected due to the interaction of the flow with the blade root and the hub, a bluff body. We do not actively generate turbulence within the hub volume here, but nonetheless increased levels of vorticity near the blade root are present in simulations. Including the solid structure of the hub is at present prohibitively expensive, as it requires a very fine mesh resolution to resolve the hub geometry and the flow within the hub's boundary layer. However, we hope to include it in future work to assess its contribution to wake recovery. \section{Results} \label{S:Results} \subsection{Computational model}\label{S:Results:Model} This section gives an overview of the results from the computational model of Lillgrund. Instantaneous slices through the velocity field are used, together with the power outputs of selected turbines, to illustrate features of the wind farm flow dynamics and performance. To this end, three wind directions are examined: 198\ensuremath{^\circ} and 236\ensuremath{^\circ}, which, as table \ref{tab:Cases} shows, present a staggered arrangement to the oncoming wind, so that downwind turbines are relatively exposed; and 223\ensuremath{^\circ}, where the rows of turbines are aligned with the mean wind direction. Turbines in row D are studied in more detail; this row crosses the gap in the array at positions D05 and E05, shown in Figure \ref{fig:Map2}. In Figure~\ref{fig:horiz-vel-slices-198-223-236} we can see horizontal slices through the instantaneous velocity field, two for each wind direction, spaced 5 minutes apart. The flow is perpetually unsteady in all cases, as expected from large eddy simulation (LES) with the SEM inlet boundary conditions described in \S~\ref{S:lillgrund-sea-surface}. The eddies through the domain range widely in size, from 100~m to over 1~km, and the turbulence is highly anisotropic, with eddies typically 5--10 times longer in the streamwise direction than in the lateral.
This results in varying flow speeds, ranging from approximately 6--15~m/s outwith the farm, and gusts can be seen passing through the wind farm, leading to higher wind speeds within. Turbine wakes are evident, with dark blue patches behind the turbines indicating the regions of highest wake deficit; these wakes meander considerably. Wind farm wakes are also visible in Figure~\ref{fig:horiz-vel-slices-198-223-236}(a)-(d), extending downwind of the farm by approximately 3 km. Large-scale turbulence structures, particularly above and upwind of the turbine array, can be seen in Figure \ref{fig:vert-slices-223}. However, a qualitative comparison between Figure \ref{fig:vert-slices-223}(b) and similar figures from other LES simulations in \citet{Churchfield2012} shows that the latter has higher frequency turbulent features, especially near the turbine blades. This is not surprising, given that their simulations use a minimum cell dimension of 1~m near the turbines, whereas here the minimum is 5~m, so smaller eddies are resolved in the former. On the other hand, the large-scale turbulence structures seen in our results are not present in \citet{Churchfield2012}, who relied upon a log-law velocity profile passing over an empty domain to create turbulent inlet conditions. A better comparison can be made with \citet[Figure 1]{calaf2010}, where periodic boundary conditions were used to create sufficient upstream turbulence; the work presented here shows similar turbulent flow features. This suggests that the SEM boundary conditions strongly influence the aerodynamics around the wind farm, and the turbine wakes within it.
\begin{figure} a) 198\ensuremath{^\circ} at t=15~min\hspace{0.355\columnwidth} b) 198\ensuremath{^\circ} at t=20~min\\ \includegraphics[width = 0.49\textwidth]{Fig24a} \hfill \includegraphics[width = 0.49\textwidth]{Fig24b} \\ c) 223\ensuremath{^\circ} at t=15~min\hspace{0.355\columnwidth} d) 223\ensuremath{^\circ} at t=20~min \\ \includegraphics[width = 0.49\textwidth]{Fig24c} \hfill \includegraphics[width = 0.49\textwidth]{Fig24d} \\ e) 236\ensuremath{^\circ} at t=15~min\hspace{0.355\columnwidth} f) 236\ensuremath{^\circ} at t=20~min\\ \includegraphics[width= 0.49\textwidth]{Fig24e} \hfill \includegraphics[width= 0.49\textwidth]{Fig24f} \caption{Horizontal slices through the instantaneous velocity field at hub height for wind directions of 198\ensuremath{^\circ}, 223\ensuremath{^\circ} and 236\ensuremath{^\circ}.} \label{fig:horiz-vel-slices-198-223-236} \end{figure} The acceleration of flow between turbines due to the blockage effect, known as jetting, is noticeable in the results. Figure \ref{fig:horiz-vel-slices-198-223-236}(b) shows a gust of wind hitting the foremost turbines, B08, C08, D08 and E07, and a jet appears to pass around D08 and E07, before encountering turbines E06, D07 and D06. Figure \ref{fig:horiz-vel-slices-198-223-236}(e) also shows this, with a jet passing between B08 and A07 towards turbine A06; between B08 and C08 towards B07; and where a 3km-long gust encounters turbines H04, H03 and H02 at the north end, the jet is turned inward of the farm towards G02. The jetting has a more consistent pattern in the aligned case of 223\ensuremath{^\circ}, as the gaps between rows A to H in Figures \ref{fig:horiz-vel-slices-198-223-236}(c) and (d) all show evidence of accelerated flow. 
Moreover, both figures also indicate that air in these regions can exceed the average upstream hub-height wind speed, implying that jetting is an important mechanism for injecting kinetic energy into the internal farm flow, affecting wind farm performance, and is highly dependent upon the alignment of the prevailing wind to the rows. \begin{figure} a) \\ \includegraphics[width = 0.99\columnwidth]{Fig25a} \\\\ b) \\ \includegraphics[width = 0.99\columnwidth]{Fig25b} \caption{Vertical slices through the instantaneous velocity field at t=20 minutes, for a wind direction of 223\ensuremath{^\circ}: a) cross-stream slice through the fifth column of turbines, and b) zoomed-in streamwise slice of instantaneous velocity field through row D.} \label{fig:vert-slices-223} \end{figure} \begin{figure} \begin{centering} \includegraphics[width = 0.75\columnwidth]{Fig26} \caption{Time-averaged normalised power plot for wind direction of 223\ensuremath{^\circ}.} \label{fig:power_averages} \end{centering} \end{figure} The wind farm is visualised as an array of time-averaged power plots for the wind direction of 223\ensuremath{^\circ} in Figure \ref{fig:power_averages}. These averages were computed over the last 10 minutes of simulation, by which point the flow had fully developed. The leading turbines all have an average power close to $P_0$, the median calculated from B08, C08 and D08. Immediately downwind, the performances of turbines B07, C07 and D07 drop to 20--30\% of this value. Surprisingly, the turbines in column 6, with two turbines upwind, show a mild recovery in power, to 37\% on average. After the empty space in column 4, D04 is over 50\% of $P_0$ while E04 rises to 72\%. This increase can be explained by looking at Figure \ref{fig:vert-slices-223}(b), where the wind speed increases in the gap behind D06 and E06, as faster air flowing over the wind farm is entrained downwards and mixed with the wake of the upwind turbine.
Beyond this, the turbines' performance remains at around 30\%, before decreasing slightly below this in column 1\@. It should be noted there is a large difference in the mean power between D06 and E06; this is also seen between D04 and E04. There is no obvious reason for this unusual behaviour. It may be due to particular eddies passing those turbines and, were additional computing time available, longer simulations with greater averaging periods could reduce these disparities in mean power output. \begin{figure} a) \\ \includegraphics[width = \columnwidth]{Fig27a}\\ b) \\ \includegraphics[width = \columnwidth]{Fig27b}\\ c) \\ \includegraphics[width = \columnwidth]{Fig27c} \caption{Power time-series for selected modelled turbines in row D, at wind directions of a) 198\ensuremath{^\circ}, b) 223\ensuremath{^\circ} and c) 236\ensuremath{^\circ}. The mean power is averaged over the final 10 minutes of simulation.} \label{fig:instant-power-row-D} \end{figure} For wind directions 198\ensuremath{^\circ}, 223\ensuremath{^\circ} and 236\ensuremath{^\circ}, the time series of power output from selected turbines in Row D are shown in Figures \ref{fig:instant-power-row-D}(a), (b) and (c) respectively. As the rotors start from a stationary position, the power increases predictably for the first 4--6 minutes, before achieving statistically stable values after 10 minutes. The variability of power output is clear: while D08 for 223\ensuremath{^\circ} and 236\ensuremath{^\circ} appears to fluctuate about a value close to that shown in Table \ref{tab:power-thrust-errors}, the power can peak at 2250 kW or higher in both cases, as well as drop down to almost 1000 kW.
This can be attributed to the passage of gusts of wind (and associated lulls) through the wind farm, causing the turbine rotor to speed up and slow down accordingly; indeed these long, slow variations have a period of 3--4 minutes, which equates approximately to a distance of 2--2.5~km for a hub-height wind speed of 10 m/s. This observation agrees well with the size of the flow features shown in Figures \ref{fig:horiz-vel-slices-198-223-236}~(c)-(f). Comparing the aligned case in Figure \ref{fig:instant-power-row-D}~(b) with the non-aligned cases in (a) and (c), it is clear that the second and third turbines, D07 and D06, experience higher performance when non-aligned due to increased exposure to the wind. This effect is enhanced by jetting, particularly at a prevailing wind direction of 236\ensuremath{^\circ}, where their power outputs are comparable to the leading turbine. Indeed for 198\ensuremath{^\circ} in Figure \ref{fig:instant-power-row-D}~(a), D07 spends the majority of its time outperforming the leading turbine. For this particular case, D08 is mostly underperforming, possibly due to insufficient hub-height wind speed; with jetting as a mechanism for accelerating the flow it would be possible for D07 to experience a wind speed greater than that upwind of the wind farm. \subsection{SCADA data}\label{S:Results:SCADA} In this section, the relevant results from the SCADA data are extracted to find episodes of at least 10 minutes' duration in which the wind speed was within the specified range, and the reference wind direction from the leading turbines was within a 3\ensuremath{^\circ}-sector of the wind direction corresponding to Table~\ref{tab:Cases}. Considering the consistent bias in wind direction recorded by the met mast and the nacelle, the relative performance of a few turbines against wind direction is analysed before focussing on the response at the selected key wind directions.
\begin{figure} a) \hspace{0.46\columnwidth} b)\\ \includegraphics[width = 0.45\columnwidth]{Fig28a} \hfill \includegraphics[width = 0.45\columnwidth]{Fig28b} \caption{Relative turbine performance of turbines in row C against wind direction; a) The median performance of all seven turbines C07 to C01 (pale blue solid line, gold narrow-dashed line, turquoise dash-dotted line; purple medium-dashed line, green wide-dashed line, red double-dot-dashed line, respectively); b) the median and interquartile ranges for turbines C07 (pale blue), C06 (gold with narrow hatching), C01 (grey with wider hatching). } \label{fig:Turb_C3_8} \end{figure} The relative performance of row C, normalised by the median of the front turbines (B08, C08, and D08) and plotted against wind direction in Figure~\ref{fig:Turb_C3_8}~(a), shows clearly that the performance of each turbine is affected as one would expect from the geometric shading of one turbine by another. The relative performance of C07 in the second row shows a clear minimum at around 30\% when the wind is aligned with the turbines, and clear maxima around 100\% when C07 is between two front-row turbines, namely B08 and C08 for around 198\ensuremath{^\circ}, and C08 and D08 for around 236\ensuremath{^\circ}. On the other hand, the turbines in the fifth row and beyond never show more than 30\% to 40\% of the front turbine's output; these turbines are in the `deep array' wake. The somewhat increased performance of C02, C03, and C04 above 230\ensuremath{^\circ}~can be explained by the fact that the wind is coming from the gap in the array, which allows for some wake recovery. Those in the third and fourth rows still perform better than the deep array with geometrically favourable wind directions, but they do not rise above 80\% and 50\%, respectively. The observation that the turbine in the second row produces less power than those further into the array was also noted by \citet{BarthSmith12}, but they could not reproduce it in any of their computational models.
This strong power deficit is only apparent when the data are taken over 3\ensuremath{^\circ}~bins or narrower. To our knowledge, a deficit in the second row stronger than in the third row is seen in wind farms where the turbine spacing in the streamwise direction is less than about five rotor diameters ($5D$). To put the turbine-by-turbine observations into the context of the overall variability of the power output, the range around the median is shown as the extent of the interquartile range for three selected turbines, namely from the second, third and last rows, in Figure~\ref{fig:Turb_C3_8}(b). This not only shows that the output of the second-row turbine is significantly lower than that of the third-row turbine, but also that the variability across all wind directions is higher in the third row than in the second. This can be interpreted as resulting from a higher turbulence level created by the interaction of the turbulence generated by the first and second turbines. While the geometry of the wind farm suggests the strongest power deficit at 223\ensuremath{^\circ}, the observations plotted against the front turbine's yaw direction put that minimum at 218\ensuremath{^\circ}. Plotting the same results against the wind direction measured at the met mast upstream of the wind farm would put the minimum at 229\ensuremath{^\circ} (cf.~\S~\ref{S:Lillgrund:FarmPerf}). Considering the presence of this systematic error in the directional data, a yaw direction of 218\ensuremath{^\circ}~is fully consistent with a true wind direction of 223\ensuremath{^\circ}. In the following, the yaw direction is adjusted by that possible bias of 5\ensuremath{^\circ}~and the results are presented according to their nominal true wind direction.
To illustrate the variability for each case, we make use of the standard plot of power deficit of turbines within a row, as used by others \citep{Churchfield2012,Lillgrund15,Hansen2012}, but add the information about the variation around the mean or median value through the use of box-and-whisker plots \citep{RFAQ2011} instead of the more common single-valued charts. The cases shown in Figure~\ref{fig:RowBoxplots} show the three rows A (at the edge of the array), C (a full set of turbines through the centre) and D (a set with a gap) for four selected wind directions of 198\ensuremath{^\circ}, 212\ensuremath{^\circ}, 223\ensuremath{^\circ}, and 236\ensuremath{^\circ}~(cf. Table~\ref{tab:Cases}). At a wind direction of $198^\circ$, each turbine in row A is nominally fully exposed to the wind which is reflected in a uniform median relative power output around 100\%. However, the variation around the median increases progressively towards the back of the row, from around $\pm 20\%$ at the front (turbine A07) to around $\pm 80\%$ at turbine A01 at the back. This suggests that each turbine adds variability or turbulence to the wind even outside the typical wake direction, possibly due to wake meandering. A similar observation is made for the second turbine in row C, turbine C07, which according to Table~\ref{tab:Cases} is expected to be exposed to the wind and situated between the wakes of B08 and C08. As expected, the average relative power output of C07 is around 100\% but with a substantial variability. Deeper into the wind farm, C06 would be partially in the wake of C08 and, as expected from this, the performance of C06 is reduced to around 60\% which deepens further towards the back of the array to around 40\%. A similar behaviour is seen in the adjacent row D\@. The second row in Figure~\ref{fig:RowBoxplots} shows the case of $212^\circ$ where turbines in the second and third row are not directly shielded but expected to be affected by wake expansion. 
At 223\ensuremath{^\circ}~the full shading of all turbines is evident, including the very strong deficit in the second row followed by a slight recovery in the third row. The clearly enhanced performance of turbine D04 can be explained by the gap in the row leading to an effective turbine spacing between D06 and D04 of $8.6D$. Finally, at 236\ensuremath{^\circ}, the behaviour for columns A and C is qualitatively similar to that at 212\ensuremath{^\circ}, while column D benefits from the shape of the wind farm, where column E terminates at turbine E07 and column F at F06. \begin{figure}[p] \begin{centering} $198^\circ$ \hfill ~\\ \includegraphics[width = 0.32\columnwidth]{Fig29_198_A} \includegraphics[width = 0.32\columnwidth]{Fig29_198_C} \includegraphics[width = 0.32\columnwidth]{Fig29_198_D} \\ $212^\circ$ \hfill ~\\ \includegraphics[width = 0.32\columnwidth]{Fig29_212_A} \includegraphics[width = 0.32\columnwidth]{Fig29_212_C} \includegraphics[width = 0.32\columnwidth]{Fig29_212_D} \\ $223^\circ$ \hfill ~\\ \includegraphics[width = 0.32\columnwidth]{Fig29_223_A} \includegraphics[width = 0.32\columnwidth]{Fig29_223_C} \includegraphics[width = 0.32\columnwidth]{Fig29_223_D} \\ $236^\circ$ \hfill ~\\ \includegraphics[width = 0.32\columnwidth]{Fig29_236_A} \includegraphics[width = 0.32\columnwidth]{Fig29_236_C} \includegraphics[width = 0.32\columnwidth]{Fig29_236_D} \\ \caption{Boxplots for rows of turbines at different wind directions. } \label{fig:RowBoxplots} \end{centering} \end{figure} This section has presented the results from the computer simulations and the observations in turn. Section \ref{S:Validation} combines these two sets of results for a qualitative and quantitative validation of the model. \section{Turbine parameterisation} \label{S:turbine-parameterisations} In this section, we detail the techniques used to create the model parameters for the turbines at Lillgrund wind farm.
As complete specifications for the Siemens SWT-2.3-93 turbines used in Lillgrund \citep{assesslillgrund2009} are not publicly available, model parameters were validated by testing candidate turbines in a virtual wind tunnel, then comparing their performance with measured power and thrust data. The final parameters with which the turbine model was configured are shown in table~\ref{tab:siemens-params}. The rationale for the choice of aerofoil section and details of the blade geometry are explained in sections~\ref{S:Aerofoil} and~\ref{S:Blade} respectively. \begin{table} \centering \caption{General model turbine specifications for Siemens SWT-2.3-93.} \begin{tabular}{ll} \hline\noalign{\smallskip} Property & Value \\ \hline\noalign{\smallskip} Rotor radius & 46.5 m \\ Hub height & 65 m \\ Rotor tilt & 6\ensuremath{^\circ} \\ Aerofoil type & NACA 63-415 \\ Hub fraction ($r_H / R$) & 0.1 \\ Blade material density & 100 kg/m$^3$ \\ Cut-in wind speed & 4 m/s \\ Cut-out wind speed & 25 m/s \\ Design tip-speed ratio & 6.2329 \\ Maximum power & 2.3 MW \\ Wind speed at which max. power occurs & 14 m/s\\ \hline\noalign{\smallskip} \end{tabular} \label{tab:siemens-params} \end{table} \subsection{Aerofoil}\label{S:Aerofoil} The aerofoil types used by the SWT-2.3-93 turbine are specified in \citep{planning-app} as `NACA 63.xxx, FFAxxx'. The FFAxxx series are thick aerofoils designed to bear high loads in the inboard part of the turbine blade \citep{risotech1998}. No information was available as to which FFA blade was used in the Siemens turbine, nor the extent of the blade that used it. Because of this, and because the inboard section generates a small portion of the total thrust, the same aerofoil type (NACA 63 series) was used across the whole blade. There were many candidate NACA 63 aerofoils, but eventually NACA 63-415 was chosen, as shown in figure \ref{fig:naca-63415}.
This was based upon several factors: indications in the literature that this is a common aerofoil in modern turbines \citep{risotech2001}, desirable lift characteristics, and a visual comparison of the NACA 63-415 profile with photographs of B45 blades. The modelled lift and drag characteristics are a composite of various sources. Initially, XFOIL \citep{xfoilpaper,drela1987,xfoilwebsite} plots of $\alpha$ versus $C_L$ and $C_D$ were used over the range $-10\ensuremath{^\circ} < \alpha < 20\ensuremath{^\circ}$. When these were compared to the Ellipsys2D and measured results in the Airfoil Catalogue \citep{risotech2001}, major discrepancies were found even at low angles of attack, and in particular at and above $\alpha_{opt}$. It was theoretically possible that the model might experience angles of attack outwith this range, and so the modelled aerofoil could not be based solely upon the data in the Airfoil Catalogue, nor indeed the original NACA sources. Extreme angles of attack are not experienced during normal operation, but lift and drag coefficients are nonetheless required for all possible values of $\alpha$ to prevent unpredictable behaviour in the model. First, JavaFoil \citep{javafoilwebsite} was used to plot both lift and drag for extreme angles of attack within $-90\ensuremath{^\circ}<\alpha< 90\ensuremath{^\circ}$ at $Re = 3 \times 10^6$. A secondary source was a report on aerofoil characteristics at extreme angles of attack \citep{sandia2001} providing data up to $180\ensuremath{^\circ}$ for symmetrical NACA blades; whilst these have rather different aerodynamic properties, the same report concludes that at high angles of attack ($\alpha \gg 30\ensuremath{^\circ}$) aerofoils effectively behave as flat plates, meaning they can be modelled similarly. After several iterations of aerofoil parametrisation and verification of the modelled turbine performance, the lift/drag curves in figure \ref{fig:aerofoil-lift-drag} were found to be the most accurate.
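The flat-plate closure mentioned above has a simple analytic form. A minimal sketch (our own naming, assuming the standard flat-plate relations $C_L = \sin 2\alpha$ and $C_D = 2\sin^2\alpha$) is:

```python
import math

def flat_plate_coeffs(alpha_deg):
    """Flat-plate approximation for the lift and drag coefficients at
    high angles of attack (alpha >> 30 deg); a common closure used
    when tabulated aerofoil data run out."""
    a = math.radians(alpha_deg)
    cl = 2.0 * math.sin(a) * math.cos(a)   # = sin(2*alpha)
    cd = 2.0 * math.sin(a) ** 2
    return cl, cd
```

In practice such a closure would be blended with the tabulated XFOIL/JavaFoil data near the edges of their valid $\alpha$ range.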
\begin{figure} \begin{centering} \includegraphics[width = 0.5\columnwidth]{Fig13} \caption{The lift (blue) and drag (red) coefficients as a function of the angle of attack for the NACA 63-415 aerofoil at $\operatorname{Re}=3 \times 10^6$.} \label{fig:aerofoil-lift-drag} \end{centering} \end{figure} \subsection{Blade geometry} \label{S:Blade} The SWT-2.3-93 uses Siemens' own B45 blade with active pitch correction. From Siemens' brochure \citep{siemens-brochure} and a technical specification published in a planning application \citep{planning-app}, the rotor diameter, maximum RPM and rated wind speed were noted in order to calculate the optimum tip-speed ratio, as shown in table~\ref{tab:siemens-params}. These gave a plausible value of 6.2329. \subsubsection{Twist angle} To calculate the blade twist angle, we start with the predicted flow angle $\phi$ as defined in \citet[\S~3.7.2]{windenergyhandbook}: \begin{equation} \tan \phi(r) = \frac{1-\frac{1}{3}}{\lambda \mu \left( 1+ \frac{2}{3\lambda^2\mu^2}\right)} \end{equation} where $\lambda$ is the design tip speed ratio, and $\mu = \frac{r}{R}$. Using this together with the optimum angle of attack $\alpha_{opt}$ gives the ideal blade twist $\beta_{ideal}$: \begin{equation} \beta_{ideal} = \tan^{-1} \left[ \frac{1 - \frac{1}{3}} {\lambda \mu \left( 1+ \frac{2}{3\lambda^2\mu^2}\right)} \right] - \alpha_{opt} \end{equation} In practice, this equation gives $\beta(r) \approx \beta_{ideal}(r)$ for $r > 0.2\,R$, twice the hub fraction (table \ref{tab:siemens-params}). Below this value of $r$, however, $\beta$ was iteratively increased until the model maintained optimum angles of attack for $r_H < r < R$ in test simulations, giving the final twist angles shown in figure \ref{fig:b45-chord-twist}. \subsubsection{Chord length} An exact specification for the chord length as it varies from hub to tip was not available; however, the chord lengths at the hub and tip were given in \citep{planning-app}.
Further information on chord length was taken from \citet{laursen2007}, and a near-linear tapering blade was assumed, shown in figure \ref{fig:b45-chord-twist}. \begin{figure*} \centering \begin{minipage}{.56\textwidth} \centering \includegraphics[width=.98\linewidth]{Fig14} \caption{Chord length and twist angle of the B45 blade as a function of $r' = r/R$.} \label{fig:b45-chord-twist} \end{minipage} \begin{minipage}{.01\textwidth} \hfill \end{minipage} \begin{minipage}{.41\textwidth} \centering \includegraphics[width=.98\linewidth]{Fig15} \caption{The NACA 63-415 profile.} \label{fig:naca-63415} \end{minipage} \end{figure*} \subsection{Turbine validation} \label{S:Turbine-validation} A strong indication that the turbine model is working effectively is that it will generate thrust and power values for different wind speeds that match measured data. Being entirely reactive, the model has an algorithm that continually changes the blade pitch in response to wind conditions, so that at lower speeds it will aim to maximise lift. In turn, this will affect the dynamically changing values of rotor RPM, power output, and other turbine diagnostics. In theory, this means that by altering the inflow wind speeds only, the model should produce equivalent performance to that of the real turbine in similar conditions. By taking the manufacturer's $C_T$ and $C_P$ curves for the SWT-2.3-93 and expressing thrust and power as functions of the upstream hub-height wind speed $u_0$, we can directly compare the time-averaged values for power and thrust from the model, when both the wake and the turbine itself are dynamically stable. \begin{figure} \begin{centering} \includegraphics[width=0.6\columnwidth]{Fig16} \caption{The power (red) and thrust (blue) of the modelled and actual turbines, as a function of wind speed.
The solid lines represent published turbine performance data, the dotted lines the time-averaged model diagnostics.} \label{fig:power-thrust-comparison} \end{centering} \end{figure} The model was run in a simulated wind tunnel 1 km long, with a cross-section of 250 m $\times$ 250 m. It had a logarithmic inlet velocity profile, which was specified as a function of the hub-height wind speed $u_0$: simulations were run at $u_0= 6, 8 {\rm~and~} 10\rm~m/s$, to cover typical wind speeds experienced at Lillgrund. The turbine was set to an initial RPM of 0, and to a blade pitch of 90\ensuremath{^\circ}. The simulations were run until at least 300 s of simulation time had passed with relatively stable power and thrust values. The averages of the power and thrust over the final 300 s are plotted against calculated averages from \citet{assesslillgrund2009} in Figure~\ref{fig:power-thrust-comparison}. It is clear that both the modelled power and thrust closely follow the given specifications. The relative errors between the model and given values are shown in table \ref{tab:power-thrust-errors}. Especially considering that the precision of the reference values provided is relatively low and that the wind turbine response is very sensitive to the wind speed, the agreement between the turbine model and the manufacturer's specification is well within the uncertainty expected from the specifications. Therefore, the agreement between the modelled SWT-2.3-93 turbine and the observations is acceptable for our purpose.
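A relative-error comparison of this kind amounts to a one-line definition, sketched here with one value pair taken from the text (the published table may use a different reference convention or rounding):

```python
def relative_error(reference, model):
    """Unsigned relative deviation of a modelled value from a reference value."""
    return abs(model - reference) / reference

# e.g. modelled vs. reference power at u0 = 10 m/s (values from the text):
# relative_error(1767.0, 1629.0) is roughly 0.078, i.e. about 7.8 %
```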
\begin{table} \caption{Comparison of relative errors between actual and modelled turbine power (P) and thrust (T).} \centering \begin{tabular}{lllllll} \hline\noalign{\smallskip} $u_0$ (m/s) & $P_{actual}$ (kW) & $P_{model}$ (kW) & Relative error & $T_{actual}$ (kN) & $T_{model}$ (kN) & Relative error \\ \hline\noalign{\smallskip} 6 & 352 & 373 & 5.6 \% & 125 & 133 & 6.0 \% \\ 8 & 906 & 852 & 6.0 \% & 229 & 234 & 3.8 \% \\ 10 & 1767 & 1629 & 7.8 \% & 329 & 341 & 3.6 \% \\ \hline\noalign{\smallskip} \end{tabular} \label{tab:power-thrust-errors} \end{table} \section{Model Validation}\label{S:Validation} \subsection{Validation Methodology} In this section, the computational model results are directly compared to those from the actual wind farm SCADA data where the SCADA yaw direction was adjusted as in section~\ref{S:Results:SCADA}. \begin{figure*}[t] a) \hspace{0.45\columnwidth} b) \\ \includegraphics[width = 0.45\columnwidth]{Fig30a} \hfill \includegraphics[width = 0.45\columnwidth]{Fig30b} \\ c) \hspace{0.45\columnwidth} d) \\ \includegraphics[width = 0.45\columnwidth]{Fig30c} \hfill \includegraphics[width = 0.45\columnwidth]{Fig30d} \\ \caption{Time series of the relative power output from turbine C07 from the SCADA data, where each grey line represents a valid observation period, and the computational model (red dashed lines) for the following wind directions: a) 198\ensuremath{^\circ}, b) 207\ensuremath{^\circ}, c) 217\ensuremath{^\circ}, and d) 223\ensuremath{^\circ}. } \label{fig:Validation_T1} \end{figure*} The simulations with turbines covered a period of 20 minutes. The initial 10 minutes were considered a `spin-up' period, with fully-developed flow and stable turbine performance in the final 10 minutes. This resulted in a section of equilibrated flow of 10 minutes which was used for the comparison with the SCADA data. 
To ensure that the validation was based on truly comparable observed conditions, the comparison was made only with those sections of the SCADA data for which both the wind speed was within the range of 5.5 to 10.5 m/s and the wind direction was within $\pm 1\ensuremath{^\circ}$ of the nominal wind direction (corrected for the 5.5\ensuremath{^\circ}~bias) for a duration of at least 10 minutes. The computational results and all corresponding valid observed periods are shown together for four representative cases of the eight wind directions in Figure~\protect\ref{fig:Validation_T1}. Shown are time series of the relative power output from the numerical simulation as the dashed red line and each matching SCADA observation as a grey line. While there were very few periods where the wind speed and direction remained within the specified range for over 20 minutes, there are ten or more instances where the conditions were met for at least 10 minutes. In most cases, the computational results are well aligned with the ensemble of observations, both in terms of their time-averaged power output and the magnitude of their fluctuation around that mean. In several cases, such as Figure~\ref{fig:Validation_T1}~(b) for 207\ensuremath{^\circ} (and similarly for 202\ensuremath{^\circ}~and 236\ensuremath{^\circ}, not shown here) as well as for 223\ensuremath{^\circ}, all valid SCADA episodes are very similar to each other as well as to the numerical simulation. In some cases, such as turbine C07 for 198\ensuremath{^\circ}~in Figure~\ref{fig:Validation_T1}~(a), a few episodes were very different from the behaviour of all others or, as for the same turbine at 217\ensuremath{^\circ}~in Figure~\ref{fig:Validation_T1}~(c) (as well as 212\ensuremath{^\circ}~and 229\ensuremath{^\circ}), the observed episodes cover an extensive range without a clear representative behaviour.
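The episode selection described above can be sketched as follows. This is an illustrative implementation only: the function name and arguments are ours, and direction wrap-around at 360\ensuremath{^\circ}~is deliberately ignored for brevity.

```python
import numpy as np

def valid_episodes(speed, direction, target_dir, dt=60.0,
                   u_min=5.5, u_max=10.5, dir_tol=1.0, min_len_s=600.0):
    """Return (start, end) index ranges where the wind speed stays within
    [u_min, u_max] m/s and the direction stays within +/- dir_tol degrees
    of target_dir for at least min_len_s seconds (dt = sample spacing)."""
    ok = (speed >= u_min) & (speed <= u_max) & \
         (np.abs(direction - target_dir) <= dir_tol)
    episodes, start = [], None
    for i, flag in enumerate(ok):
        if flag and start is None:
            start = i                       # episode opens
        elif not flag and start is not None:
            if (i - start) * dt >= min_len_s:
                episodes.append((start, i)) # long enough: keep it
            start = None
    if start is not None and (len(ok) - start) * dt >= min_len_s:
        episodes.append((start, len(ok)))   # episode runs to the end
    return episodes
```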
One concern was that the spread of the mean relative performance was an artefact, caused by wrongly assuming that the plateau in the relative performance shown in Figure~\ref{fig:RelPerformance_U} holds for all wind directions. Although this assumption is clearly supported by Figures~\ref{fig:Validation_T1}~(b) and (d), we nevertheless checked it for all eight wind directions discussed here. Figure~\ref{fig:relP_check} shows two examples of this check, in both cases for turbine C07: (a) for the most variable wind direction sector around 217\ensuremath{^\circ}, corresponding to Figure~\ref{fig:Validation_T1}~(c), and (b) for the wind direction of 223\ensuremath{^\circ}~when C07 is directly downstream of C08, corresponding to Figure~\ref{fig:Validation_T1}~(d). The individual symbols show the relative performance of C07 against the wind speed. In the case of 217\ensuremath{^\circ}, there might at first sight appear to be a systematic change between wind speeds of 7~m/s and 11~m/s, but the two episodes with the lowest wind speeds cover the entire range observed in the power deficit. Inspecting corresponding plots for other turbines and other wind directions did not provide evidence of any persistent systematic bias in the relative performance with wind speed, as illustrated by Figure~\ref{fig:relP_check}~(b). As a result, we are confident that the relative performance within the investigated wind speed range is a robust performance indicator for wind farms, and that the variability observed in Figure~\ref{fig:Validation_T1}~(c) is caused by the actual variability of the flow induced by both the inlet conditions and the upstream turbines.
\begin{figure} a) \hspace{0.45\columnwidth} b) \\ \includegraphics[width = 0.47\columnwidth]{Fig31a} \hfill \includegraphics[width = 0.47\columnwidth]{Fig31b} \caption{Relative performance of turbine C07 against the wind speed measured by the front turbine, (a) wind direction 217\ensuremath{^\circ}~and (b) 223\ensuremath{^\circ}. Each symbol/colour represents one of the episodes covering that wind direction. } \label{fig:relP_check} \end{figure} An unusual case is Figure~\ref{fig:Validation_T1}~(a) for 198\ensuremath{^\circ}, where most episodes are very close to each other at the 100\% mark except for two episodes which fluctuate around a level of 40\%. Those two exceptional episodes are also those with the largest within-episode fluctuations. For the computational results, we do not have any measure of variation between different realisations (initial conditions) that would be equivalent to the different episodes, but the mean power level is consistent with the actual observations. Furthermore, the magnitude and time scales of the fluctuations found in the simulation results are consistent with those observed within the observed episodes. The agreement in the mean performance confirms the initial model validation in Figure~\ref{fig:power-thrust-comparison}, namely that the power extraction is modelled correctly in the turbine model. The agreement in the time scale suggests that the relaxation time scales used in the control of the turbine parameters (mainly rotation rate and blade pitch) were appropriately set. To demonstrate the correspondence between the computer simulations and the SCADA data, we combine the box plots of the relative performance from the SCADA data (e.g.,~Figure~\ref{fig:RowBoxplots}) with corresponding plots from the model results.
In the comparison figures, the boxes and whiskers from the combined SCADA episodes are replaced by shaded regions indicating the interquartile range with dotted lines showing the median, while the results from the computer simulations are superimposed as standard box-and-whisker plots. \subsection{Validation Results}\label{S:Validation:Results} \begin{figure} \begin{centering} $198^\circ$ \hfill ~\\ \includegraphics[width = 0.32\columnwidth]{Fig32_198_A} \includegraphics[width = 0.32\columnwidth]{Fig32_198_C} \includegraphics[width = 0.32\columnwidth]{Fig32_198_D} \\ $212^\circ$ \hfill ~\\ \includegraphics[width = 0.32\columnwidth]{Fig32_212_A} \includegraphics[width = 0.32\columnwidth]{Fig32_212_C} \includegraphics[width = 0.32\columnwidth]{Fig32_212_D} \\ $223^\circ$ \hfill ~\\ \includegraphics[width = 0.32\columnwidth]{Fig32_223_A} \includegraphics[width = 0.32\columnwidth]{Fig32_223_C} \includegraphics[width = 0.32\columnwidth]{Fig32_223_D} \\ $236^\circ$ \hfill ~\\ \includegraphics[width = 0.32\columnwidth]{Fig32_236_A} \includegraphics[width = 0.32\columnwidth]{Fig32_236_C} \includegraphics[width = 0.32\columnwidth]{Fig32_236_D} \\ \caption{Comparison of the observed relative wind turbine performance with that from the CFD simulations for the turbines in three selected rows (A, C, and D) at different wind directions (198\ensuremath{^\circ}, 212\ensuremath{^\circ}, 223\ensuremath{^\circ}, 236\ensuremath{^\circ}). The shaded area indicates the two centre quartiles and the dotted lines the 5\% and 95\% quantiles. The box plots show the quartiles from the equilibrated part of the CFD simulations. } \label{fig:SC_RowBoxplots} \end{centering} \end{figure} Figure~\ref{fig:SC_RowBoxplots} reproduces Figure~\ref{fig:RowBoxplots} of the relative performance for the three selected turbine rows A, C, and D, where the SCADA data are now the lines and shaded regions in the background.
Superimposed are the CFD results as box-and-whisker plots using the same colour convention as the original Figure~\ref{fig:RowBoxplots}. An overview of the figure suggests that there is good agreement between observations and simulations, with a few isolated discrepancies and very few systematic differences. One consistent feature across all panels is that the front turbines, A07, C08 and D08, show a much larger range than the SCADA data suggest. The other consistent feature is that the back turbines, A01, C01 and D01, show in most cases a slightly better performance in the model than in the observations. At 198\ensuremath{^\circ}, the overall pattern of nearly 100\% performance in row A, and good performance from the first two turbines but reduced performance to around 40 to 60\% in rows C and D, is well reproduced by the model, but the model shows substantial variation for the individual turbines in row A against the relatively uniform observations from the SCADA data. In rows C and D, the front turbines show differences in the mean performance, although the ranges are very large, so that the CFD and SCADA are still consistent with each other. The main difference is the substantially elevated performance of the second turbines, C06 and D06, in the model. At 212\ensuremath{^\circ}, the model results are largely consistent with the SCADA results except for turbine D01. Even though the ranges are very large for both the simulations and the observations, the drop-off from the front turbine to the deep array appears to be faster in the model than in the observations, as the median in the boxes of the CFD results for all second-row turbines, A06, C07 and D07, is below the median from the observations. The correspondence at 223\ensuremath{^\circ}~is very good, but here one can also see that the second-row turbines, and to some degree the third, appear to be reduced more strongly in the model than in the observations.
\begin{figure} \centering \begin{minipage}{0.75\columnwidth} a) A07 \hspace{0.4\columnwidth} b) C07 \\ \includegraphics[width = 0.48\columnwidth]{Fig33a} \hfill \includegraphics[width = 0.48\columnwidth]{Fig33b} \\ c) A06 \hspace{0.4\columnwidth} d) C06 \\ \includegraphics[width = 0.48\columnwidth]{Fig33c} \hfill \includegraphics[width = 0.48\columnwidth]{Fig33d} \\ e) A04 \hspace{0.4\columnwidth} f) C04 \\ \includegraphics[width = 0.48\columnwidth]{Fig33e} \hfill \includegraphics[width = 0.48\columnwidth]{Fig33f} \\ g) A01 \hspace{0.4\columnwidth} h) C01 \\ \includegraphics[width = 0.48\columnwidth]{Fig33g} \hfill \includegraphics[width = 0.48\columnwidth]{Fig33h} \end{minipage} \caption{Comparison of the observed relative wind turbine performance with that from the CFD simulations against wind direction for four selected turbines (01, 04, 06 and 07) in two selected rows (A and C). The shaded area indicates the two centre quartiles and the dotted lines the 5\% and 95\% quantiles. The box plots show the quartiles from the equilibrated part of the CFD simulations. } \label{fig:Validation_T2} \end{figure} Changing perspective from the response of a row of turbines at a specific wind direction to the response of a single turbine as the direction changes, we turn to Figure~\ref{fig:Validation_T2}, for which we have selected four turbines each from rows A and C\@. The structure of the figure follows the previous convention that the shaded areas show the interquartile ranges of the SCADA data, while the box-and-whisker plots represent the quartiles from the CFD simulations. As above, there are cases where the agreement between observations and simulations is extremely good, but also some where there are substantial differences. The first impression is that the wind direction of 198\ensuremath{^\circ}~shows substantial differences between observations and simulations in all eight panels.
Putting that aside, the overall pattern of variation appears to be well captured by the model. In addition to the overall performance against wind direction, the model also appears to generate a larger variability (larger boxes) in places where the observed range in the SCADA data is also wider. The aggregation of all turbines into the total wind farm output is shown in Figure~\ref{fig:ValFarm} against wind direction. Except for the unusual case of 198\ensuremath{^\circ}, the agreement between model and SCADA data is very good, to a degree where the boxes from the model overlap substantially with the shaded region from the SCADA data. \begin{figure} \begin{center} \includegraphics[width = 0.6\columnwidth]{Fig34} \end{center} \caption{Comparison of the observed relative wind farm performance with that from the CFD simulations against wind direction. The shaded area indicates the two centre quartiles and the dotted lines the 5\% and 95\% quantiles. The box plots show the quartiles from the equilibrated part of the CFD simulations. } \label{fig:ValFarm} \end{figure} To quantify the agreement between model and observations, we calculate the area shared between the normalised distribution of the modelled performance of a particular turbine from a selected model integration and the distribution from the corresponding SCADA events. This is illustrated in Figure~\ref{fig:Validationpdf} for three representative cases. In Fig.~\ref{fig:Validationpdf}~(a) both show a relatively narrow range around the mean performance but at different levels. As a result, the common area is only 15\% of the area of each of the two distributions. In Fig.~\ref{fig:Validationpdf}~(b) both show a broad distribution around somewhat different mean values, and the overlap is 68\%. In the last example, the distributions are very close, with an overlap of 87\%.
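The overlap measure used here, the shared area under two normalised probability density functions, can be sketched as follows (the binning choices are illustrative, not necessarily those used for the figures):

```python
import numpy as np

def overlap_coefficient(sample_a, sample_b, bins=50, rng=(0.0, 1.5)):
    """Shared area of two normalised histograms (empirical PDFs) of
    relative performance; 1.0 for identical distributions, 0.0 for
    fully disjoint ones."""
    pa, edges = np.histogram(sample_a, bins=bins, range=rng, density=True)
    pb, _ = np.histogram(sample_b, bins=bins, range=rng, density=True)
    width = edges[1] - edges[0]
    # integrate the pointwise minimum of the two densities
    return float(np.minimum(pa, pb).sum() * width)
```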
\begin{figure} a)\hspace*{0.3\columnwidth} b)\hspace*{0.3\columnwidth} c) \\ \includegraphics[width = 0.3\columnwidth]{Fig35a} \hfill \includegraphics[width = 0.3\columnwidth]{Fig35b} \hfill \includegraphics[width = 0.3\columnwidth]{Fig35c} \caption{Probability density function of relative performance from the model (solid blue line) and the SCADA data (dashed red line). The common area is shaded: (a) turbine A07 at 236\ensuremath{^\circ}, (b) turbine C07 at 212\ensuremath{^\circ}, and (c) turbine C05 at 223\ensuremath{^\circ}. } \label{fig:Validationpdf} \end{figure} \begin{figure}[htp] \centering \includegraphics[width = 0.50\columnwidth]{Fig36} \caption{Overview of the agreement between the CFD results and the SCADA data for each turbine over the eight wind directions analysed. The $y$-axis shows the percentage agreement as the remainder, e.g.~1.51 corresponds to an agreement of $51\%$ for the turbines in row B\@. A level of 50\% agreement is indicated by the green dotted line and 75\% agreement by the dashed blue line.} \label{fig:summary_t} \end{figure} Aggregating the overlap over all wind directions into Figure~\ref{fig:summary_t} shows the overlap for each turbine. Overall, the agreement of the wind farm performance as calculated by the model compared to the selected SCADA data, using this method, is $70\% \pm 20\%$. From Figure~\ref{fig:summary_t} it is clear that the turbines exposed to the free stream show the least agreement. The agreement of the turbines from the third row onwards is as high as $78\% \pm 18\%$. One possible cause could be the fact that only the wind speed and direction could be determined, but not the atmospheric stability or the free-stream turbulence intensity. Given the observed spread of wind shear exponents (cf.~Fig.~\protect\ref{fig:Rose}b), one would expect a larger range in the agreement score rather than a systematically reduced agreement.
Similarly, unless the turbulence intensity inlet conditions, which were chosen as typical for these latitudes, were systematically different from the actual ones, one would not expect this systematic difference. A further possible cause for the mismatch between model and observation in the front turbines could be the fact that we compare instantaneous results sampled at 0.5 s with 1-minute averages. This is consistent with \citet{Poulsen2012}, who noted that increasing the averaging window eliminated local turbulence and wake meandering to a degree, producing results closer to those from standard engineering wake models. If this is the case, then it appears that the enhanced mixing and the establishment of a deep-array wake act to smooth out individual large features, so that the behaviour in the deep array tends toward more uniform flow, which is equally well described by a high time resolution or by time-averaged data. Considering that the inlet turbulence characteristics were chosen carefully to result in both a typical value at the wind farm location and realistic wake recovery, it is unlikely that the inlet turbulence conditions would result in this mismatch.
\section{Introduction} Humans can learn consecutive tasks and memorize acquired skills and knowledge throughout their lifetime, such as running, biking and reading. This ability, named continual learning, is also crucial to the development of Artificial General Intelligence. Deep neural networks (DNNs) have achieved remarkable success in various fields \cite{Deng2015ImageNet:,Hannun2014Deep,Simonyan2014Very,Lecun2015Deep}; however, the existing models are unable to handle dynamic tasks and data flows because of catastrophic forgetting, i.e.\ networks forget the knowledge learned from previous tasks when training on new datasets \cite{McCloskey1989Catastrophic}. Methods to mitigate catastrophic forgetting have been proposed in the literature. For instance, Rusu et al. \cite{Rusu2016Progressive}, Fernando et al. \cite{Fernando2017Pathnet:} and Lomonaco et al. \cite{Lomonaco2017CORe50:} attempted to preserve task-specific structures of the model, including some layers or modules, but this suffers from the limitation of complex selection strategies of genetic algorithms and poor utilization of network capacity. Works \cite{Lopez-Paz2017Gradient,Rebuffi2013icarl:} based on a rehearsal strategy reinforce previous memories by replaying experience. An ideal learning system could learn consecutive tasks without increasing memory space or computation cost, as well as transfer knowledge from former tasks to the current task. Methods based on elastic parameter updates \cite{Kirkpatrick2016Overcoming,ritter2018online} can meet these demands by finding the joint distribution of a sequence of tasks. However, they cannot retain long-term memory, mainly because obtaining the accurate joint distribution is hard and unnecessary over a long sequence of tasks. We propose a method to address this problem by finding an approximate solution space satisfying all tasks.
This can be achieved by searching for the current solution space within the approximate solution space corresponding to previous tasks. To this end, a resistance based on parameter importance is imposed on the update direction of the parameters while learning new tasks. An appropriate evaluation of parameters is expected to meet the following demands: 1) expression precision, capturing the essential parameters; 2) a centralized and polarized distribution of values, to keep the parameter space separable; 3) being unsupervised. Inspired by the idea of parameter pruning, we propose a method that measures the importance of parameters by the contrastive magnitude of the change in information entropy with and without pruning a parameter. To overcome catastrophic forgetting, we exert a resistance force on the update direction of parameters carrying large information during training, so as to reach a balance point between the new task and the old ones on the loss surface. The contributions of this paper are as follows: \begin{enumerate}[(1)] \item We propose the SPP strategy. It requires less network capacity for each single task, preserving the memory of previous tasks by constraining fewer parameters; visualization analysis shows that our method retains important parameters adaptively according to the task. \item We measure the importance of parameters by the variation of information entropy without the need for labels, rather than by the sensitivity of the loss to the presence or absence of connections between units during training; \item Experimental results show that our method can effectively overcome catastrophic forgetting and improve overall performance, with strong robustness and generalization ability in the case of limited network capacity.
\end{enumerate} \section{Related works} \textbf{Model pruning and knowledge distillation:} Parameter pruning methods \cite{LeCun2015Optimal,Hassibi2014Second} are based on the hypothesis that nonessential parameters have little effect on the model's error after being erased, and thus the key point is to search for the optimum parameters that minimise the interference with the error. An effective way to narrow the representational overlap between tasks in a continual learning model of limited capacity is to reduce the number of parameters encoding each representation. Knowledge distillation packs the knowledge of a complex network into a lightweight target network in a teacher-student mode, and it is also used to tackle the problem of catastrophic forgetting. PackNet \cite{Mallya2017Packnet:} sequentially compresses multiple tasks into a single model by pruning redundant parameters to overcome catastrophic forgetting. The dual memory network partially drew on this idea to overcome catastrophic forgetting with an external network. Inspired by the idea of model compression, our method uses parameter importance to set up a soft mask rather than hard pruning based on a binary mask; it does not completely truncate the unimportant parameters, but adapts them to later tasks to some extent, shares some parameters among multiple tasks, and saves model capacity compared to hard pruning while incurring lower performance penalties. \textbf{Regularization strategies:} These methods reduce the representational overlap among tasks to overcome catastrophic forgetting through regularization, such as weight freezing and weight consolidation. Weight freezing, inspired by the distributed encoding of human brain neurons, tries to avoid overlaps between the crucial functional modules of tasks. For instance, Path-Net \cite{Fernando2017Pathnet:} sets up a huge neural network, then fixes specific functional modules of the network to avoid interference from later tasks.
Progressive Neural Network (PNN) \cite{Rusu2016Progressive} allocates a separate network for each task and handles multiple tasks by a progressive expansion strategy. This kind of method durably fixes the important parameters of a task to prevent the network from forgetting acquired knowledge; however, it suffers from network capacity explosion on long task sequences. A classic weight consolidation method \cite{Aljundi2017Memory,Zenke2017Continual} is elastic weight consolidation (EWC) \cite{Kirkpatrick2016Overcoming}. EWC, inspired by the mechanism of synaptic plasticity, updates parameters elastically by determining which parameters are important. This type of method encodes the knowledge of more tasks with less network capacity and lower computational complexity than Path-Net and PNN. The number of tasks EWC can learn is bounded by the capacity of the network, which is determined by the model structure; since the structure is invariant during learning, an increasing number of tasks potentially leads to performance degradation of EWC. \section{Methodology} \subsection{Motivation} The cause of catastrophic forgetting is the drift of the local minimum when training a new task. We claim that it is feasible to approximate a distribution satisfying all tasks by seeking the current solution within the solution space of the previous tasks, which can be achieved by imposing on each parameter a resistance proportional to its importance (Figure \ref{fig:1}).
The key is to ensure the sparseness and representation precision of parameter importance, but previous methods of measuring parameter importance still have some problems: \begin{enumerate}[(1)] \item Obtaining importance values from the gradient descent of the loss function unavoidably underestimates them as the model reaches convergence; \item They are supervised, relying heavily on labeled training and testing data; \item The distribution of important parameters is relatively divergent. The importance of a parameter depends on the sensitivity of the model to perturbations of that parameter rather than on the magnitude of its weight; a lower magnitude can produce higher model sensitivity than a higher one, and ignoring the cumulative effect of parameter changes results in a high occupancy of capacity for each task. \end{enumerate} We design a method to measure parameter importance that yields a concentrated and polarized distribution, and then propose a framework to overcome catastrophic forgetting by SPP. \begin{figure}[t] \begin{center} \includegraphics[width=1.0\linewidth]{imgs/1.png} \end{center} \caption{Weight consolidation for overcoming catastrophic forgetting. The blue arrow denotes the standard SGD optimizer; the black arrow is the resistance exerted on parameters by the memories of previous tasks; the green arrow denotes the SGD direction revised under the constraint of the resistance direction.} \label{fig:1} \end{figure} \subsection{Framework} Following 3.1, we present the framework of the SPP strategy in Figure \ref{fig:2}. We calculate the coefficient of resistance of the parameters on the previous $T-1$ tasks after learning the $(T-1)^{th}$ task. Then, when the $T^{th}$ task arrives, we update the direction of the gradients according to this coefficient of resistance.
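The resistance mechanism can be illustrated with a deliberately tiny numeric sketch (the toy task, the numbers, and all names here are ours, not the paper's): a parameter with large importance is pulled back toward its value after the old task, while an unimportant one is free to fit the new task.

```python
import numpy as np

# Toy illustration of the "resistance" update; the task and numbers are ours.
# w[0] is important for the old task (large omega) and is pulled back toward
# its old value; w[1] is unimportant and is free to fit the new task.
w_old = np.array([1.0, 1.0])        # parameters after learning the old task
omega = np.array([10.0, 0.0])       # resistance coefficients (importance)
target = np.array([0.0, 0.0])       # the new task wants both weights at 0
lam = 1.0                           # regularization strength

w = w_old.copy()
for _ in range(5000):
    grad_new = 2.0 * (w - target)                # gradient of the new-task loss
    grad_res = 2.0 * lam * omega * (w - w_old)   # resistance toward old values
    w -= 0.01 * (grad_new + grad_res)

# Closed-form minimum: w* = lam*omega*w_old / (1 + lam*omega),
# so w[0] converges to 10/11 (protected) while w[1] converges to 0.
```

The balance point of the two gradient terms is exactly the trade-off between the new task's loss surface and the memory of the old one.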
\subsubsection{Measure of parameter importance} \textbf{Definition}: Given a well-trained model whose parameters $W$ were trained on input $X$ to reduce the error $E=-\sum_{i=1}^C p_i \log q_i$, the learned model can be expressed as $F(X,W)\to E$. If we set $W_k$ to 0, the change of error $\delta E$ corresponding to $W_k$ can be written as $F(X,W,0)-F(X,W,W_k) \to \delta E$; the larger $\delta E$ is, the more important $W_k$ is. The Taylor expansion reads: \begin{figure}[t] \begin{center} \includegraphics[width=1.0\linewidth]{imgs/2.png} \end{center} \caption{Framework for continual learning on $T$ sequential tasks.} \label{fig:2} \end{figure} \begin{equation}\label{Acer} \delta E = (\frac{\partial E}{\partial W})^T\delta W + \frac{1}{2}\delta W^T H\delta W + O(\lVert \delta W\rVert^3) \end{equation} $H \equiv \partial ^2 E/\partial W^2$ is the Hessian matrix of the parameters and $\partial E/\partial W$ is the gradient with respect to $W$. The gradient approaches 0 as the model converges, and the first term on the right-hand side then becomes too small to give a precise value of the error change under a parameter perturbation; the second-order term therefore serves as a necessary amendment. \textbf{Solution} - The definition above depends on the true distribution $p$, which requires labeled data. To get rid of the need for labels, we use the information entropy to approximate the error $E$, since the true distribution $p$ and the predicted distribution $q$ are close for a well-trained model. The parameter importance is thus given by: \begin{equation}\label{Acer} \delta E = (\frac{\partial H(q)}{\partial W})^T\delta W + \frac{1}{2}\delta W^T H\delta W + O(\lVert \delta W\rVert^3) \end{equation} where $H(q)=-\sum_{i=1}^C q_i \log q_i$. The idea behind this is to measure the steady state of a learning system using information entropy.
We explain it as follows: the output distribution of the model gradually evolves from a random state into a stable state, with decreasing entropy. When the model converges, the system performs stably on the training data, which means low entropy and a certain output distribution; the state of a parameter connection influences this stability, i.e., causes a change of entropy. \textbf{Hessian diagonalization} - Computing the full Hessian is complex and expensive. We introduce the diagonal Fisher information matrix \cite{Pascanu2013Revisiting} to approximate the Hessian; its computational complexity is linear and it can be obtained quickly from gradients. However, the diagonalization may lead to a loss of precision, and we speculate that better results could be obtained with a better Hessian approximation. \subsubsection{Cumulative importance computation} Given a set of $t+1$ tasks, we calculate the importance $\Omega_{i,j}^t$ of the parameter $w_{i,j}$ after learning the $t^{th}$ task, where $i$ and $j$ index the two neurons joined by the connection $w_{i,j}$. \begin{equation}\label{Acer} \Omega _{i,j}^t=\max(0,\Omega _{i,j}^t) \end{equation} According to equation (2), positive importance values indicate parameters that are significant for the current task, and vice versa; we therefore set all negative importance values to 0 to reduce the resistance to learning new tasks.
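A minimal sketch of the importance computation for a single softmax layer (our notation; NumPy stands in for the paper's networks, the diagonal Fisher term plays the role of the Hessian, and negative values are clamped as above):

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def entropy_grad_logits(q, eps=1e-12):
    # For H(q) = -sum_i q_i log q_i over softmax logits z:
    # dH/dz_j = -q_j (log q_j + H(q))
    H = -np.sum(q * np.log(q + eps))
    return -q * (np.log(q + eps) + H)

def spp_importance(W, X):
    """Sketch of the label-free importance measure above: the first-order
    entropy-gradient term for the pruning perturbation delta_w = -w, plus a
    diagonal-Fisher second-order term standing in for the Hessian; negative
    values are clamped to zero."""
    grad_sum = np.zeros_like(W)
    fisher_sum = np.zeros_like(W)
    for x in X:
        q = softmax(x @ W)
        g = np.outer(x, entropy_grad_logits(q))   # dH/dW for this sample
        grad_sum += g
        fisher_sum += g * g
    g_mean = grad_sum / len(X)
    F_diag = fisher_sum / len(X)                  # diagonal Fisher approximation
    omega = -g_mean * W + 0.5 * F_diag * W ** 2   # delta E with delta_w = -w
    return np.maximum(0.0, omega)                 # keep only the positive part
```

Accumulating the returned arrays over tasks gives the cumulative importance of the next subsection.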
After learning the $(t+1)^{th}$ task, we accumulate the importance over the previous tasks to obtain the cumulative importance of the parameters on $t+1$ tasks: \begin{equation}\label{Acer} \Omega_{i,j}^{1:t+1} = \Omega_{i,j}^{1:t} + \Omega_{i,j}^{t+1} \end{equation} \subsubsection{Weight consolidation} To avoid forgetting previous memories, we protect important parameters from being destroyed in subsequent training by adding a regularization term to the objective function: \begin{equation}\label{Acer} L = L_{new} + \lambda \sum_{i,j}^{W} \Omega_{i,j}^{1:t}(w_{i,j}-w_{i,j}^{'})^{2} \end{equation} $L_{new}$ is the loss function of the current task, $w_{i,j}^{'}$ is the parameter of the model after learning the last task, $w_{i,j}$ denotes the parameter for the current task, and $\Omega_{i,j}^{1:t}$ denotes the cumulative importance of the parameter over the previous $t$ tasks. We outline our algorithm in Table \ref{tab:1}. \begin{table} \begin{center} \caption{Pseudo-code for overcoming catastrophic forgetting by soft parameter pruning} \label{tab:1} \scriptsize \begin{tabular}{l} \toprule \textbf{Overcome Catastrophic Forgetting by Soft Parameter Pruning:}\\ \midrule \textbf{Start with:}\\ \quad $W_{i,j}^{*}$: old task parameters\\ \quad $W_{i,j}$: new task parameters\\ \quad $X,Y$: training data and ground truth on the new task\\ \quad tasks: number of tasks\\ \quad $H(q)$: information entropy of the output\\ \quad $H$: Hessian matrix\\ \quad $e_{k}^{T}$: the unit vector corresponding to $W_{k}$\\ \textbf{Training:}\\ \quad \textbf{for task$\in$ tasks do}\\ \qquad $W_{i,j}^{*}$.assign($W_{i,j}$) \qquad //Update old task parameters\\ \qquad //Calculate the importance of the parameters of the previous tasks\\ \qquad $\Omega_{i,j}^{t}=\max(0,(\frac{\partial H(q)}{\partial W_{i,j}})^T\delta W_{i,j}+\frac{1}{2}\delta W_{i,j}^T H\delta W_{i,j})$\\ \qquad $s.t.\; e_k^T\delta W_{i,j} + W_k =0$\\ \qquad$\Omega_{i,j}^{1:t}=\Omega_{i,j}^{1:t-1}+\Omega_{i,j}^t$ \qquad
//Cumulative importance computation\\ \qquad \textbf{Define:} $\hat{Y}=CNN(X,W_{i,j})$ \qquad //new task output\\ \qquad $W_{i,j}\leftarrow arg_{W_{i,j}}min(L_{new}(Y,\hat{Y})+\lambda \sum_{i,j}^{W}\Omega_{i,j}^{1:t}(W_{i,j}-W_{i,j}^{*})^2)$\\ \qquad \qquad //Update new task parameters\\ \quad \textbf{end for}\\ \bottomrule \end{tabular} \end{center} \end{table} \section{Experiment and analysis} \subsection{Experiments setting} \textbf{Data}. Permuted MNIST \cite{Srivastava2014Compete} and Split MNIST \cite{Lee2015Overcoming} alone are too simple to evaluate our method. In order to verify its generalization ability, we test the proposed method in three settings: image classification with a CNN model, long-term incremental learning, and a generative task with a Variational AutoEncoder (VAE) model. For image classification we choose Cifar10 \cite{Krizhevsky2009Learning}, NOT-MNIST \cite{Bulatov2011Notmnist}, SVHN \cite{Netzer1989Reading}, and STL-10 \cite{Coates2015An}, all RGB images of the same size of 32*32 pixels. For long-term incremental learning, Cifar100 \cite{Krizhevsky2009Learning} is used for a medium-scale network model and Caltech101 \cite{Fei-Fei2006One-shot} for a large-scale network model (shown in the supplement). For the generative task, CelebA \cite{Liu2018Large-scale} and anime faces crawled from the web are selected as test data; the two datasets share the same resolution. \textbf{Baseline} - We compare our method with state-of-the-art methods, including LWF \cite{Li2017Learning}, EWC \cite{Kirkpatrick2016Overcoming}, SI \cite{Zenke2017Continual} and MAS \cite{Aljundi2017Memory}, as well as some classic methods, including standard SGD with a single output layer (single-head SGD), SGD with multiple output layers, SGD with the intermediate layers frozen (SGD-F), and fine-tuning of the intermediate layers (fine-tuning).
We define multi-task joint training with SGD (Joint) \cite{yuan2012visual} as the baseline to evaluate the difficulty of a sequence of tasks. \textbf{Evaluation} - We utilize Average Accuracy (ACC), Forward Transfer (FWT), and Backward Transfer (BWT) \cite{Lopez-Paz2017Gradient} to estimate model performance: (1) ACC evaluates the average performance over tasks; (2) FWT describes the influence of former tasks on later ones; (3) BWT describes the forgetting of previous tasks. Evaluating the difficulty of an individual task against a model obtained by multi-task joint training \cite{yuan2012visual} is more objective than against a model trained on that single task, so we put forward a modified version. Given $T$ tasks, we evaluate the previous $t$ tasks after training on the $t^{th}$ task, and denote by $P_{j,i}$ the result on task $i$ of the model trained through the $j^{th}$ task; $m_i$ denotes the result on task $i$ under multi-task joint training. We use three indicators: \begin{equation}\label{Acer} ACC =\frac{1}{T} \sum_{i=1}^{T}P_{T,i} \end{equation} \begin{equation}\label{Acer} FWT =\frac{1}{T-1} \sum_{i=1}^{T-1}\left(P_{i,i}-m_{i}\right) \end{equation} \begin{equation}\label{Acer} BWT =\frac{1}{T-1} \sum_{i=1}^{T-1}\left(P_{T,i}-P_{i,i}\right) \end{equation} A higher value of ACC indicates better overall performance, and higher values of BWT and FWT indicate a better trade-off between memorizing previous tasks and learning new ones. \textbf{Training} - All models share the same network structure with dropout layers \cite{Goodfellow2013An}. We initialize all parameters of the MLP from a Gaussian distribution with the same mean and variance ($\mu=0, \sigma=0.1$), and apply Xavier initialization for the CNN. We optimize the models by SGD, with the initial learning rate searched over $\{0.1, 0.01, 0.001\}$, a decay ratio of 0.96, and a uniform batch size. We train the models with a fixed number of epochs and global hyper-parameters for all tasks, choosing the optimal hyper-parameters by greedy search.
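In code, the three indicators can be computed from the result matrix as in the following sketch (0-indexed; `P[j][i]` and `m[i]` follow the definitions of $P_{j,i}$ and $m_i$ above):

```python
def continual_metrics(P, m):
    """ACC / FWT / BWT from the result matrix.
    P[j][i]: accuracy on task i after training through task j.
    m[i]:    reference accuracy on task i under multi-task joint training."""
    T = len(P)
    acc = sum(P[T - 1][i] for i in range(T)) / T
    fwt = sum(P[i][i] - m[i] for i in range(T - 1)) / (T - 1)
    bwt = sum(P[T - 1][i] - P[i][i] for i in range(T - 1)) / (T - 1)
    return acc, fwt, bwt
```

A strongly negative BWT signals forgetting: the final model scores much worse on early tasks than the model evaluated right after learning them.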
\subsection{Experiment results and analysis} \subsubsection{MLP \& MNIST} \textbf{Split MNIST} We divide the data into 5 sub-datasets and train an MLP with 784-512-256-10 units. In Table \ref{tab:2}, we present the experimental results on split MNIST. Not all continual learning strategies work well on all indexes. Fine-tuning and SGD perform best on FWT, because no free memory needs to be reserved for subsequent tasks, and some features may be reused to improve learning of the new tasks when tasks are similar. LWF, MAS and SI perform well on BWT and ACC, and our method achieves the best performance on both indexes apart from the joint learning method. We conclude that the model learns general features from multiple datasets, i.e., it implicitly benefits from data augmentation. Our results on ACC and FWT rival the best single-index results, our model suffers the least from catastrophic forgetting on BWT, and it shows only a 1.5 reduction in ACC after learning 10 tasks. In general, our method outperforms the other eight approaches. \begin{table} \begin{center} \caption{The results of Split-MNIST} \label{tab:2} \begin{tabular}{lccl} \toprule Method & FWT(\%) & BWT(\%) & ACC(\%) \\ \midrule SGD & -0.31 & -34.01 & 61.53\\ SGD-F & -18.6 & -12.9 & 84.82\\ Fine-tuning & \textbf{-0.29} & -13.9 & 82.04\\ EWC \cite{Kirkpatrick2016Overcoming} & -4.99 & -6.43 & 88.75\\ SI \cite{Zenke2017Continual} & -6.19 & -3.51 & 90.67\\ MAS \cite{Aljundi2017Memory} & -4.38 & -2.08 & 94.09\\ LWF \cite{Li2017Learning} & -4.42 & -2.04 & 94.08\\ Joint \cite{yuan2012visual} & / & / & \textbf{99.87}\\ Ours & -0.44 & \textbf{-0.75} & 98.31\\ \bottomrule \end{tabular} \end{center} \end{table} \textbf{Permuted MNIST} We evaluate our method on 10 permuted MNIST tasks. In Table \ref{tab:3}, we present the results of our approach and those of others.
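The permuted-MNIST tasks used here can be generated as in the following sketch (random arrays stand in for real MNIST images; the function name is ours):

```python
import numpy as np

def make_permuted_tasks(images, n_tasks, seed=0):
    """Permuted-MNIST protocol sketch: each task applies one fixed random
    pixel permutation to every (flattened) image, so all tasks share labels
    but have different input distributions."""
    rng = np.random.default_rng(seed)
    n_pixels = images.shape[1]
    perms = [rng.permutation(n_pixels) for _ in range(n_tasks)]
    return [images[:, p] for p in perms], perms
```

Each permutation is drawn once and then reused for the train and test splits of its task, so the task identity is entirely carried by the fixed pixel shuffle.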
As expected, our method performs best on FWT, even superior to SGD; we owe this to the possibility that some lower-layer features can be shared with the new task, so that there is enough capacity to relieve the capacity demand of new tasks. SGD-F obtains the highest score on BWT because its fixed parameters protect the parameters of previous tasks from being overwritten, but at the cost of losing the flexibility to learn new tasks. LWF performs worst on permuted MNIST compared with split MNIST despite a good score there, which may be attributed to the dataset change mentioned above in connection with FWT. Our method obtains comparable performance on ACC. \begin{table} \begin{center} \caption{The results of permuted MNIST of 10 tasks} \label{tab:3} \begin{tabular}{lccl} \toprule Method & FWT(\%) & BWT(\%) & ACC(\%) \\ \midrule SGD & 1.11 & -18.05 & 70.45\\ SGD-F & -14.90 & \textbf{0.10} & 81.99\\ Fine-tuning & 0.75 & -6.21 & 80.69\\ EWC \cite{Kirkpatrick2016Overcoming} & -0.98 & -2.57 & 91.97\\ SI \cite{Zenke2017Continual} & -0.56 & -4.40 & 90.21\\ MAS \cite{Aljundi2017Memory} & -1.23 & -1.61 & 92.6\\ LWF \cite{Li2017Learning} & 0.67 & -24.02 & 74.15\\ Joint \cite{yuan2012visual} & / & / & \textbf{95.05}\\ Ours & \textbf{2.33} & -3.22 & 94.51\\ \bottomrule \end{tabular} \end{center} \end{table} \subsubsection{CNN \& image recognition} \textbf{Sequence of image recognition tasks} We further test our method on natural visual datasets with a 9-layer VGG \cite{Simonyan2014Very} with batch normalization layers to prevent gradient explosion. Specifically, we train and test on MNIST, notMNIST, SVHN, STL-10 and Cifar10 in this order, all processed to the same number of training images and categories (50,000 and 10, respectively). Overall, our method achieves the best performance on FWT, BWT and ACC. As shown in Figure \ref{fig:3}, our FWT is almost one-third that of LWF and MAS.
This indicates that the proposed method works well in alleviating the dilemma of memory, with test accuracy close to the baseline. Namely, our method drives the network to train well on the sequence of tasks; on BWT, ours also reaches the top result, which means that the network retains a good ability to handle old tasks after continual training. On ACC, our method achieves nearly the same performance as multi-task joint training, which shows that the network strikes a good trade-off among tasks. The result of fine-tuning is better than that of SGD, indicating that configuring an independent classifier for each task can prevent catastrophic forgetting to some extent; we speculate that this is because the features of different tasks are highly entangled at the high layers, and using different classifiers alleviates this situation a little. \begin{figure}[t] \begin{center} \includegraphics[width=1.0\linewidth]{imgs/4.png} \end{center} \caption{The performance of different methods for overcoming catastrophic forgetting on the sequence of visual datasets. The regularization-based methods, starting from EWC, produce some effect, although a limited one; MAS and LWF are close; our method reaches the best performance on all indicators.} \label{fig:3} \end{figure} \subsubsection{Robustness analysis} To test the stability of our method with respect to the hyper-parameter, we rerun the above experiment under different values of $\lambda$. The results show that our method is robust over a large range of hyper-parameter values and can overcome catastrophic forgetting to some extent. As shown in Figure \ref{fig:4}, when $\lambda$ is 0.01, the network only focuses on training the new task and does not protect the old tasks; in this case all three indicators are extremely poor, and the proposed method behaves almost the same as SGD.
When $\lambda$ reaches 0.1, the proposed method achieves relatively good performance, with a large improvement on all three indicators. For $\lambda$ in the range of 0.5 to 4, the performance is relatively stable, and the proposed method achieves its best performance with $\lambda=4$. As $\lambda$ continues to rise, the network memorizes too much and lacks the capacity to learn, so the performance on new tasks decreases. \begin{figure}[t] \begin{center} \includegraphics[width=1.0\linewidth]{imgs/5.png} \end{center} \caption{The performance of SPP under different hyper-parameters. The horizontal axis represents the value of the hyper-parameter; the vertical axis represents the result of the three indicators. The dotted black line indicates the baseline accuracy.} \label{fig:4} \end{figure} \subsubsection{Continual learning in VAE} To test the generalization of our method, we apply it to a VAE, carrying out tasks in sequence from human faces to anime faces. We resize the samples of the two datasets to the same size of 96*96, and train a VAE with a conv-conv-fc encoder and an fc-deconv-deconv decoder. We use a separate latent variable for each single task, which is essential for the performance of the VAE because of the significant difference between the distributions of the two tasks. We train models in three manners: (1) training on the CelebA dataset from scratch; (2) training on CelebA and then on the anime faces with SGD; (3) training on CelebA and then on the anime faces with SPP. In Figure \ref{fig:6}, we present the samples of human faces produced by the three models. The results show that our approach preserves the skill of human face generation well while learning anime faces: the model with SPP works as well as the model trained only on CelebA, whereas the model with SGD loses this ability. This proves that SPP has strong generalization.
\begin{figure*}[t] \begin{center} \includegraphics[width=1.0\linewidth]{imgs/7.png} \end{center} \caption{Overcoming catastrophic forgetting from the face dataset to the anime dataset using a VAE. To guarantee the objectivity of the results, we use different data and different network structures. Left: test samples of human faces from the generator trained on the human face dataset; middle: test samples of human faces from the generator after training from CelebA to the anime face dataset, without our approach; right: test samples of human faces from the generator after training from CelebA to the anime face dataset, using our approach.} \label{fig:6} \end{figure*} \subsubsection{Discussions} \textbf{Analysis of parameter importance.} As mentioned above, we expect the distribution of parameter importance to be concentrated and polarized. In Figure \ref{fig:7}, we show the distribution of parameter importance obtained by three methods. The results show that a distribution with these two characteristics contributes to overcoming catastrophic forgetting. The left panel shows that our distribution is sharp at both low and high importance, indicating that our method frees more parameters to learn more tasks; the right panel shows similar results for a CNN on Cifar10. Compared with the other methods, the distance between the peaks of the distribution shown in the middle panel is smaller, but we speculate that the absolute distance (0-0.4) is still large enough to distinguish different tasks. As shown in the right panel, the overall parameter importance values are low; we believe the capacity of vgg9 is sufficient for MNIST, and, compared with the other methods, ours is also polarized. \begin{figure*}[t] \begin{center} \includegraphics[width=1.0\linewidth]{imgs/8.png} \end{center} \caption{Distribution of parameter importance.
The horizontal axis is the importance value and the vertical axis is the density; the blue solid line is the result of the Fisher information matrix in EWC; the orange solid line represents the method of MAS; the green solid line is the result of our method. Left: the distribution of parameter importance measured on an MLP trained with Permuted-MNIST; middle: the result measured on vgg9 trained with MNIST; right: the result measured on vgg9 trained with CIFAR10.} \label{fig:7} \end{figure*} \textbf{Parameter space similarity and change analysis.} We carry out six sequential tasks using our method on Permuted-MNIST and analyze the results against single-head SGD and multi-head fine-tuning as control groups: \begin{enumerate}[(1)] \item The evolution of the overall average accuracy is shown in Figure \ref{fig:8}(a), which indicates that our method is more stable and achieves better results as the number of tasks grows; \item In order to verify that the model preserves previous memories efficiently, we use the Fréchet distance \cite{frechet1906quelques} to measure the similarity of the parameter importance distributions between the first and last tasks, Figure \ref{fig:8}(b). In general, the F value of our method is far greater than that of the other two methods, which indicates that our method better retains the important parameters of previous tasks; moreover, the F values are greater in deeper layers of the network, which suggests that strengthening the protection of parameters in deep layers may greatly help tackle catastrophic forgetting; \item In Figure \ref{fig:8}(c), we use a weighted sum of squared differences between the first and the last task to measure the change of the parameters. Our results show that parameters in deeper layers change less, and that the fluctuation of the parameters under our method is much larger than under the other methods, indicating that our method preserves former memories while leaving larger network capacity to learn new tasks.
\end{enumerate} \begin{figure*}[t] \begin{center} \includegraphics[width=1.0\linewidth]{imgs/9.png} \end{center} \caption{Parameter space similarity and change analysis on Permuted-MNIST sequential tasks; the red line denotes our method, the blue line denotes fine-tuning and the green line denotes standard single-head SGD. (a): Overall average accuracy on 6 permuted MNIST sequential sub-tasks; (b): similarity of the parameter space; (c): parameter variance between the parameters of the tasks.} \label{fig:8} \end{figure*} \textbf{Visualization analysis.} We visualize the negative of the absolute value of the parameter changes (left) and compare it with the distribution of parameter importance (right). The results show that our method prevents significant parameters from being updated and makes full use of non-significant parameters to learn new tasks. In Figure \ref{fig:9}, inside the black dashed rectangle in the $1^{st}$ row, parameters with warm colors change little; in contrast, the parameters in the second column of the right picture are unimportant and change greatly. This indicates that our method precisely captures the significant parameters and prevents them from being updated, thereby preventing forgetting. \begin{figure}[t] \begin{center} \includegraphics[width=1.0\linewidth]{imgs/10.png} \end{center} \caption{Visualization of the importance and variance of parameters. The horizontal axis represents the neurons of the output layer, the vertical axis represents the neurons of the input layer, and each element represents the connection between a neuron of the input layer and one of the output layer.
Left: variance of the parameters between two tasks; the colder the color, the smaller the variance. Right: importance of the parameters of the first task; the warmer the color, the more significant the parameter.} \label{fig:9} \end{figure} \section{Conclusion and future works} In this paper, we proposed a Soft Parameter Pruning (SPP) strategy that overcomes catastrophic forgetting by finding a trade-off between the short-term and long-term profit of a learning model. Our strategy frees the parameters that contribute little to remembering the domain knowledge of former tasks so that they can learn future tasks, while preserving the memories of previous tasks via the parameters that effectively encode task knowledge. The SPP strategy also catches parameters with high information and prevents them from being overwritten, in a soft way, to prevent forgetting. Experiments show several advantages of the SPP strategy: \begin{enumerate}[(1)] \item It defines a measurement strategy that guarantees precision; \item Our approach has low sensitivity to hyper-parameters; \item Our approach can be extended to generative models. \end{enumerate} Our evidence suggests that finding an approximately optimal or sub-optimal solution helps alleviate the dilemma of memory. We also find that the concentration and polarization properties of the parameter distribution are significant for overcoming catastrophic forgetting. The aim of overcoming forgetting in long task sequences has not been fully achieved, because protecting parameters through a measurement based on a single strategy is not entirely convincing. We suggest that well-structured constraints controlling parameter behavior, or well-designed patterns of the parameter distribution, may be crucial for a model to overcome forgetting. Research on human brain memory is also considered a potential way to solve this problem \cite{Hassabis2017Neuro}. The problem of overcoming catastrophic forgetting remains open.
{\small \bibliographystyle{ieee}}
\section{Introduction} Consider the graph having $\mathbb{Z}^d$ as vertex set and all edges of the form $\{x, x\pm e_i\}$ and $\{x,x\pm k\cdot e_i\}$ for some $k\ge 2$. It was shown in~\cite{LimaSanchisSilva11} that the critical probability for Bernoulli bond percolation on this graph converges to that of $\mathbb{Z}^{2d}$ as $k \to \infty$. This result, later generalized in~\cite{MartineauTassion17}, is a particular instance of Schramm's conjecture~\cite{BenjaminiNachmiasPeres11} that the percolation threshold for transitive graphs is a local property. The convergence proved in~\cite{LimaSanchisSilva11} is conjectured to be monotone, that is, the percolation threshold for the above graph should be decreasing in the length $k$ of long edges.\footnote {In support of this conjecture, simulations~\cite{AtmanSchnabelYY} confirm that increasing $k$ decreases the critical parameter, and the proof of~\cite[Lemma~2]{LimaSanchisSilva11} shows that replacing $k$ by a multiple of $k$ does not increase it.} Monotonicity questions are often intriguing for being extremely simple to ask and hard to answer. A good example~\cite{Berg07} is the following: for Bernoulli bond percolation on the usual graph $\mathbb{Z}^d$, prove that the probability of the origin being connected to $(n,0,\dots,0)$ is monotone in $n$. This problem is still open, except when the parameter is close to $0$ or $1$~\cite{LimaProcacciSanchis15}. For oriented percolation on $\mathbb{Z}^2_+$, the probability of the origin being connected to $(m-n, m+n)$ is decreasing in $n\in\{0,\dots,m\}$ for fixed $m$; this may be obvious but the proof is not straightforward~\cite{AndjelSued08}. In the same spirit, for unoriented percolation on $\mathbb{Z}^2_+$, if the parameter is smaller for horizontal edges than for vertical ones, the above probability should be larger than the probability of the origin being connected to $(m+n, m-n)$.
This has only been proved under the assumption that the ratio between horizontal and vertical parameters is small enough~\cite{DeLimaProcacci04}. For first-passage percolation, it was conjectured~\cite{HammersleyWelsh65} that the expected minimum travel time from $(0,0)$ to $(n,0)$ along paths contained in the strip $\{(x,y):0\leq x\leq n\}$ is nondecreasing in $n$. This question is still open, with a number of partial results~\cite{Ahlberg15,AlmWierman99,Howard01,Gouere14}. In the negative direction, for first-passage percolation on $\mathbb{Z}_+\times\mathbb{Z}$, there is a counter-example~\cite{Berg83} where the expected passage time from the origin to $(2,0)$ is less than the expected passage time from the origin to $(1,0)$. Another context where strict monotonicity is expected to happen is in the case of essential enhancements as introduced in~\cite{AizenmanGrimmett91}, see also~\cite{BalisterBollobasRiordan14}. In this paper we consider percolation on $\mathbb{T}_{d,k}$, the graph given by the oriented rooted $d$-ary tree ($d \ge 2$) with a root at the top, bearing the usual ``short'' downward edges plus all downward edges of length $k$, called ``long'' edges. This is an oriented version of Trofimov's grandfather graph for $k=2$, or the great$^k$-grandfather graph for larger $k$. We let short and long edges be open independently with probability $p$ and $q$, respectively. The phase space $[0,1]^2$ is decomposed into two regions: a set $\mathcal{P}_k$ of pairs $(p,q)$ for which a.s.\ there are infinite open paths, and a set $\mathcal{N}_k$ of pairs for which a.s.\ there are none, see Figure~\ref{fig:phasespaceab}a. For $p>\frac{1}{d}$ there are a.s.\ infinite open paths of short edges, and for $q>\frac{1}{d^k}$ there are a.s.\ infinite open paths of long edges. For $dp +d^k q \leq 1$, a simple comparison with a branching process (to be given in \S\ref{sec:pcqc}) shows that a.s.\ there are no infinite open paths.
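A first-moment computation previews the branching-process comparison (our sketch; the full argument is deferred to \S\ref{sec:pcqc}): an open path of $n$ edges from the root chooses, at each step, one of $d$ short edges (open with probability $p$) or one of $d^k$ long edges (open with probability $q$), so

```latex
\mathbb{E}\big[\#\{\text{open paths of $n$ edges from the root}\}\big]
  = \sum_{j=0}^{n} \binom{n}{j}\,(dp)^{j}\,(d^{k}q)^{n-j}
  = \bigl(dp + d^{k}q\bigr)^{n}.
```

This already rules out percolation when $dp + d^k q < 1$; the boundary case $dp + d^k q = 1$ follows from the comparison with a critical branching process with non-deterministic offspring, which dies out almost surely.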
By monotonicity of the process with respect to $p$ and $q$, there is a \emph{critical curve} $\gamma_k$ joining the points $(\frac{1}{d},0)$ and $(0,\frac{1}{d^k})$ which separates $\mathcal{N}_k$ and $\mathcal{P}_k$, as depicted in Figure~\ref{fig:phasespaceab}a. Define \begin{equation} \label{eq:qcandpc} q_c(p,k) = \inf\{q: (p,q)\in\mathcal{P}_k\} \quad \text{and} \quad p_c(q,k) = \inf\{p: (p,q)\in\mathcal{P}_k\} . \end{equation} \begin{figure}[t] \centering \hspace*{\stretch{1}} \includegraphics[width=.48\textwidth]{figures/phasespace1} \hspace*{\stretch{2}} \includegraphics[width=.48\textwidth]{figures/phasespace2} \hspace*{\stretch{1}} \\ \scriptsize \hspace*{\stretch{1}} (a) \hspace*{\stretch{2}} (b) \hspace*{\stretch{1}} \caption{Phase space and critical curve separating the percolative region $\mathcal{P}_k$ from the non-percolative region $\mathcal{N}_k$. (a)~The curve stays between the three dotted lines. In the gray region, infinite paths necessarily use both short and long edges. (b)~Critical curves for different ranges $k$ meet at one point. } \label{fig:phasespaceab} \end{figure} Let $k$ be fixed. We show that $q_c$ is continuous and strictly decreasing in $p$ (equivalent formulations are that $p_c$ is strictly decreasing and continuous in $q$, that both $p_c$ and $q_c$ are continuous, or that $\gamma_k$ contains neither vertical nor horizontal segments). In particular, $\gamma_k$ is described by $q=q_c(p,k)$ as well as by $p=p_c(q,k)$, and there is a non-trivial subregion of $\mathcal{P}_k$ in which infinite open paths necessarily use both long and short edges, see Figure~\ref{fig:phasespaceab}a. A similar description is given in~\cite{IlievJansevanRensburgMadras15} for percolation with a defect plane. We also show that $\gamma_{k+1}$ stays strictly below $\gamma_k$ for $p<d^{-1}$, and they meet only at the critical point $(d^{-1},0)$.
This means that $q_c(p,k)$ is strictly decreasing in $k$ for as long as it is positive, and analogously for $p_c(q,k)$, see Figure~\ref{fig:phasespaceab}b. In \S\ref{sec:defresults}, we present the model and the above statements more formally. In \S\ref{sec:pcqc}, we prove that $q_c(p,k)$ is continuous and strictly decreasing in $p$. In the proof, we tile $\mathbb{T}_{d,k}$ by layers and consider a construction of the process where the states of the tiles are sampled independently. We then couple configurations with different values of $p$ and $q$ so that some advantage in $q$ compensates for small decreases in $p$ and vice-versa. Each comparison is done by finding one particular tile that makes no useful connections without extra open edges and at the same time makes all possible connections with their help. We learned this idea from~\cite{Teixeira06}. In \S\ref{sec:hubs}, we show that $q_c(p,k+1)<q_c(p,k)$ for $p<d^{-1}$. Together with the results of \S\ref{sec:pcqc}, this inequality completes the previous description illustrated by Figure~\ref{fig:phasespaceab}b. The proof involves a joint exploration of a percolation ``cluster'' in $\mathbb{T}_{d,k}$ and a percolation cluster in $\mathbb{T}_{d,k+1}$. The joint exploration is an algorithm in which parts of both clusters are revealed simultaneously using the same random variables. After each step of the algorithm is concluded, there is an injective function from the revealed portion of the cluster in $\mathbb{T}_{d,k}$ to the one in $\mathbb{T}_{d,k+1}$. When trying to ensure this, one might run into collisions, that is, situations where an edge that could potentially grow the cluster in $\mathbb{T}_{d,k}$ has as a counterpart an edge which does not grow the cluster in $\mathbb{T}_{d,k+1}$. The challenge is thus to design the algorithm so that collisions do not occur.
We succeed in doing so by introducing a recursive procedure which alternately reveals clusters of short edges and then groups of long edges, in a way that allows the comparison between the $k$ and $k+1$ scenarios. This gives $q_c(p,k+1) \le q_c(p,k)$. Strict inequality is obtained by extending the idea mentioned in the previous paragraph to a dynamic, hybrid construction. When revealing the state of a whole batch of long edges at once we can use the increase in $k$ to compensate for a small decrease in $q$. As a final remark, there seems to be no obvious way to adapt the argument just described to the graph with vertex set $\mathbb{Z}^d$ and edges of the form $\{x,x\pm e_i\}$ and $\{x,x \pm k\cdot e_i\}$. A similar joint exploration would lead to collisions as illustrated in Figure~\ref{fig:counter}. Proving even the non-strict inequality $p_c(k+1) \le p_c(k)$ for this graph remains open, let alone strict inequality. \begin{figure}[t] \centering \includegraphics[width=.8\textwidth]{figures/counter} \caption{Situation where the natural coupling of explorations which maps short edges to short edges and long edges to long edges leads to a ``collision'' in the graphs given by adding edges of length 3 and 4 to $\mathbb{Z}$. Dashed lines represent closed edges and full lines represent open edges. The bold edge being open increases the cluster of the first graph by one vertex, but has no effect on the cluster of the second graph.} \label{fig:counter} \end{figure} \section{Definitions and results} \label{sec:defresults} Let $d\in\{2,3,4,\dots\}$ be fixed. We denote $[d]= \{1,\ldots, d\}$, and we will make frequent use of the set \begin{align*} [d]_\star = \bigcup_{0\leq n < \infty} [d]^n; \end{align*} the set $[d]^0$ is understood to consist of a single point $o$. Points of $[d]_\star\backslash \{o\}$ are represented as sequences $u = (u_1,\ldots, u_n)$.
In case $u = (u_1,\ldots, u_m)$ and $v = (v_1,\ldots, v_n)$, we define the concatenation $u\cdot v = (u_1,\ldots, u_m, v_1,\ldots, v_n).$ Given $k \in \mathbb{N} = \{1,2,3,\dots\}$, we define the oriented graph $\mathbb{T}_{d,k}$ as the graph with vertex set $\mathbb{V}_{d,k} = [d]_\star$ and edge set $\mathbb{E}_{d,k} = \mathbb{E}^{\mathdutchcal{s}}_{d,k} \cup \mathbb{E}^\ell_{d,k}$, where \begin{align*} &\mathbb{E}^{\mathdutchcal{s}}_{d,k} = \{\langle u, u\cdot a\rangle: \;u \in \mathbb{V}_{d,k},\;a\in [d]\}, \\ &\mathbb{E}^\ell_{d,k} = \{\langle u, u\cdot r\rangle:\;u\in \mathbb{V}_{d,k},\; r\in [d]^k\}. \end{align*} These will be referred to as the sets of short and long edges of $\mathbb{T}_{d,k}$. As the above notation suggests, we will normally use the letters $a,b$ for elements of $[d]$, the letters $r,s$ for elements of $[d]^k$ and the letters $u,v,w,x$ for general vertices of $\mathbb{T}_{d,k}$. Consider the process in which, independently, short edges are open with probability $p$ and long edges are open with probability $q$. Let $\P_{p,q}$ denote the corresponding probability measure. We define $u \leadsto v$ to be the event that there exist $u^0,u^1,\dots,u^{n-1},u^n$ such that $u^0=u$, $u^n=v$ and the edge $\langle u^j,u^{j+1} \rangle$ is open for all $j<n$. The event $u \leadsto \infty$ means that $u\leadsto v$ for infinitely many $v$. Let $\mathcal{P}_k = \{(p,q):\P_{p,q}(o\leadsto \infty) > 0\}$, $\mathcal{N}_k = [0,1]^2 \setminus \mathcal{P}_k$, and let $p_c(q,k)$ and $q_c(p,k)$ be given by~(\ref{eq:qcandpc}). We prove the following monotonicity property. \begin{theorem} \label{prop:hub} The inequality $q_c(p,k+1) < q_c(p,k)$ holds unless $q_c(p,k) = 0$. \end{theorem} This says that $\gamma_{k+1}$ stays under $\gamma_{k}$, and they can only intersect each other at the boundary $\{pq=0\}$, except maybe where one of them contains a vertical segment.
The next result rules out the latter possibility, thus completing the picture provided in Figure~\ref{fig:phasespaceab}b. \begin{theorem} \label{thm:qccontinous} For each fixed $k\in\mathbb{N}$, the function $p \mapsto q_c(p,k)$ is continuous on $[0,1]$ and strictly decreasing on $[0,d^{-1}]$. \end{theorem} We observe that, as a consequence of the above results, defining $$p_c(k) = \inf\{p: (p,p)\in\mathcal{P}_k\},$$ we have $p_c(k+1) < p_c(k)$, as the diagonal $\{(p,p):0\leq p \leq 1\}$ intersects the critical curves $\gamma_k$ at distinct points for different values of $k$. However, for $k\ge 2$ this conclusion can be drawn from the simpler observation that the curves $\gamma_k$ are delimited by the dotted lines in Figure~\ref{fig:phasespaceab}b. The next result says that there is no percolation along the critical curves $\gamma_k$.% \begin{theorem} \label{prop:qcnopercolation} For $(p,q)$ on the critical curve $\gamma_k$, $\P_{p,q}(o\leadsto \infty)=0$. \end{theorem} \begin{proof} It is enough to prove that $\mathcal{P}_k$ is an open set in $[0,1]^2$. Define $$N_n = \#\left\{u \in [d]^{kn}:\; \text{there exists } v \in \cup_{i=0}^{k-1}[d]^i \text{ with }o \leadsto u\cdot v \right\},\quad n \in \mathbb{N}.$$ We claim that $N_n\rightarrow\infty$ a.s.\ on the event $o\leadsto \infty$. Indeed, assuming $p<1$ and $q<1$, for each $j\in\mathbb{N}$ we have $$ \P_{p,q} \left(\big. N_m = 0 \text{ for all } m > n \,\middle|\, N_1,\dots,N_n \right) > \sigma_j $$ on the event that $N_n \le j$, where $\sigma_j$ is a positive constant depending on $j$ and also on $p,q,d,k$, but not on $n$. This shows that $\P_{p,q}(N_n = j \text{ i.o.})=0$, thus a.s.\ either $N_n\to 0$ or $N_n \to \infty$. The case $p=1$ or $q=1$ being trivial, the claim is proved. Suppose $\theta_{p,q}:=\P_{p,q}(o\leadsto \infty)>0$ and let $\zeta<\theta_{p,q}$. By the previous claim, there exists $n^*$ such that $\P_{p,q}(N_{n^*}> \frac{2 k^2}{\zeta} )>\zeta$.
Now observe that this probability is continuous in $(p,q)$, thus for $(p^\prime,q^\prime)$ close enough to $(p,q)$ it is still larger than $\zeta$. From this observation, using the definition of $N_n$ and a reverse union bound, there is $\ell \in \{kn^*,\dots,kn^*+k-1\}$ such that, with probability larger than $\frac{\zeta}{k}$, there are at least $\frac{2k}{\zeta}$ sites $u\in[d]^\ell$ such that $o \leadsto u$. Therefore, the process $(N_{\ell i})_{i\in\mathbb{N}}$ dominates a supercritical branching process with offspring taking values in $\{0,\lceil\frac{2k}{\zeta}\rceil\}$ and mean larger than $2$. This implies that $\P_{p^\prime,q^\prime}(o \leadsto \infty)>0$, proving that $(p^\prime,q^\prime)\in\mathcal{P}_k$. \end{proof} \section{Long and short edge compensation} \label{sec:pcqc} The goal of this section is to prove Theorem~\ref{thm:qccontinous}. We will need the following elementary fact. \begin{lemma} \label{lem:enhance} Let $P_\alpha$ denote probability measures on a given finite space $S$, parametrized by $\alpha\in[0,1]$, and such that $P_\alpha(x)$ is continuous in $\alpha$ for every $x\in S$. Let $\kappa$ and $y$ be such that $P_{\kappa}(y)>0$. Then for any $\alpha,\beta$ close enough to $\kappa$, there exists a coupling $(X,Y)$ such that $X \sim P_{\alpha}$, $Y \sim P_{\beta}$ and such that, almost surely, $X=Y$ unless $X=y$ or $Y=y$. \end{lemma} \begin{proof} Sample the pair $(X,Y)$ as \[ (X,Y)= \begin{cases} (z,z) \ \text{ w.p. } \ P_{\alpha}(z) \wedge P_{\beta}(z), \\ (y,z) \ \text{ w.p. } \ [P_{\beta}(z) - P_{\alpha}(z)]^+, \\ (z,y) \ \text{ w.p. } \ [P_{\alpha}(z) - P_{\beta}(z)]^+, \\ \end{cases} \] for $z\ne y$, and \[ (X,Y) = (y,y) \ \text{ w.p. } \ 1 - \sum_{z\ne y}P_{\alpha}(z) \vee P_{\beta}(z) . \] The last term is positive when $\alpha$ and $\beta$ are close enough to $\kappa$ because it is positive when $\alpha=\beta=\kappa$. This sampling only includes pairs for which $X=Y$ unless $X=y$ or $Y=y$.
From the first equation we have $\P(X=z)=P_\alpha(z)$ for all $z\ne y$, which altogether implies $\P(X=y)=P_\alpha(y)$, and similarly for $Y$. \end{proof} We define the progeny of a vertex $u \in \mathbb{V}_{d,k}$ as the set \[ \prog(u) = \{ u\cdot v \in \mathbb{V}_{d,k}:\; v \in [d]_\star\} , \] i.e.\ it is the subtree started at $u$. The progeny of an edge is defined as the progeny of its endpoint, that is, if $e = \langle u,v\rangle$, then $\prog(e) = \prog(v)$. We now turn to the proof of Theorem~\ref{thm:qccontinous}. Recall that $k$ is fixed, $q_c(0,k)=d^{-k}$ and $q_c(p,k)=0$ for $p>d^{-1}$. Let $\mathscr{C}_{p,q,k}$ denote the percolation cluster of the root in $\mathbb{T}_{d,k}$ under the measure $\mathbb{P}_{p,q}$. (We use the word ``cluster'' to denote the set of sites which can be reached from the root, so unlike unoriented percolation it does not define an equivalence class.) We observe that, under this measure, the expected number of open edges having $o$ as an extremity is equal to $dp +d^k q$. If this expectation is less than one, we can embed $\mathscr{C}_{p,q,k}$ in a subcritical branching process to conclude that $\P_{p,q}(o \leadsto \infty) = 0$. Therefore, $ q_c(p,k) \geq d^{-k} - d^{-k+1} p. $ This implies that $q_c(p,k)>0$ for $p<d^{-1}$. Since $q_c(p,k) \leq q_c(0,k) = d^{-k}$, we also conclude that $p\mapsto q_c(p,k)$ is continuous at $p = 0$.
The proof of Theorem~\ref{thm:qccontinous} will thus be complete once we establish the following two facts: \begin{equation}\label{eq:cond_jump} \begin{split} &\text{for all } p_0, q, q' \in (0,1) \text{ with } q < q', \text{ there exist }p,p' \text{ with } p' < p_0 < p\\ &\hspace{5cm}\text{ such that } \P_{p',q'}(o\leadsto \infty) \geq \P_{p,q}(o\leadsto \infty); \end{split} \end{equation} \begin{equation}\label{eq:cond_jump2} \begin{split} &\text{for all } q_0, p, p' \in (0,1) \text{ with } p < p', \text{ there exist }q,q' \text{ with } q' < q_0 < q\\ &\hspace{5cm}\text{ such that } \P_{p',q'}(o\leadsto \infty) \geq \P_{p,q}(o\leadsto \infty). \end{split} \end{equation} Indeed, condition~\eqref{eq:cond_jump} rules out jump discontinuities in the curve $q = q_c(p,k)$ for $p > 0$, and condition~\eqref{eq:cond_jump2} rules out horizontal segments in this curve for $p < d^{-1}$. We start the proof of~\eqref{eq:cond_jump} by introducing some notation. We let $\bar{\mathbb{E}}_{d,k} = \bar{\mathbb{E}}_{d,k}^{\mathdutchcal{s}} \cup \bar{\mathbb{E}}_{d,k}^\ell$, where \begin{align*} &\bar{\mathbb{E}}_{d,k}^{\mathdutchcal{s}} = \left\{e = \langle u, v\rangle \in \mathbb{E}^{\mathdutchcal{s}}_{d,k}: u \in \cup_{n=0}^{2k-1}[d]^n\right\}, \\[.2cm]&\bar{\mathbb{E}}^\ell_{d,k} = \left\{e = \langle u, v\rangle \in \mathbb{E}^\ell_{d,k}: u \in \cup_{n=0}^{2k-1}[d]^n\right\} . \end{align*} Configurations in $\bar{\Omega} = \bar{\Omega}_{\mathdutchcal{s}} \times \bar{\Omega}_\ell = \{0,1\}^{\bar{\mathbb{E}}_{d,k}^{\mathdutchcal{s}} \cup \bar{\mathbb{E}}_{d,k}^\ell}$ are written as $\bar{\omega} =({\bar{\omega}_\s},{\bar{\omega}}_{\ell})$. 
Given $A \subseteq \cup_{n=0}^{k-1} [d]^n$ and $\bar{\omega} = ({\bar{\omega}_\s},{\bar{\omega}}_{\ell})$, we define \begin{equation} J_{\bar{\omega}}(A) = \bigcup_{u \in [d]^{2k}}\left\{\begin{array}{l}v \in \prog(u): \; \exists u^0, \ldots, u^n \in \mathbb{V}_{d,k} \text{ so that }u^0 \in A, \\ u^n = v \text{ and } \langle u^i, u^{i+1}\rangle \in \bar{\mathbb{E}}_{d,k},\; \bar{\omega}(\langle u^i,u^{i+1}\rangle) = 1 \; \forall i \end{array} \right\}. \end{equation} That is, $J_{\bar{\omega}}(A)$ is the set of vertices in $\cup_{u\in[d]^{2k}} \prog(u)$ that are reachable by paths started from $A$ and consisting only of open edges of $\bar{\mathbb{E}}_{d,k}$. Note that in such a path, all edges have both extremities in $\cup_{n=0}^{2k-1}[d]^n$ except for the last one, which has only one extremity in $\cup_{n=0}^{2k-1}[d]^n$. In particular, $J_{\bar{\omega}}(A) \subseteq \cup_{n=2k}^{3k-1}[d]^n$. Now, define the deterministic configurations $\bar{\omega}^*_{{\mathdutchcal{s}}} \in \bar{\Omega}_{\mathdutchcal{s}}$ and $\bar{\omega}^*_{\ell,1},\bar{\omega}^*_{\ell,2}\in\bar{\Omega}_\ell$ by setting \begin{align*}&\bar{\omega}^*_{\ell,1}\equiv 0,\quad \bar{\omega}^*_{\ell,2} \equiv 1 \quad \text{ and }\quad\bar{\omega}^*_{{\mathdutchcal{s}}}(\langle u,v\rangle) = 1 \text{ if and only if } u\notin [d]^{2k-1}. \end{align*} Let $0<p_0<1$ and $0<q<q'<1$. 
By Lemma~\ref{lem:enhance}, if $p$ and $p'$ with $p' < p_0 < p$ are chosen sufficiently close to $p_0$, then there exists a coupling of configurations $$X = (X_{\mathdutchcal{s}},X_{\ell,1},X_{\ell,2}) \quad \text{ and } \quad Y= (Y_{\mathdutchcal{s}}, Y_{\ell,1},Y_{\ell,2})$$ in $\bar{\Omega}_{\mathdutchcal{s}}\times \bar{\Omega}_\ell\times \bar{\Omega}_\ell$ so that the following holds: \begin{itemize} \item the values of $X_{\mathdutchcal{s}}$, $X_{\ell,1}$ and $X_{\ell,2}$ in all edges are independent; \item $X_{\mathdutchcal{s}}$, $X_{\ell,1}$ and $X_{\ell,2}$ assign each edge to be open with respective probabilities $p$, $q$ and $\frac{q'-q}{1-q}$; \item the values of $Y_{\mathdutchcal{s}}$, $Y_{\ell,1}$ and $Y_{\ell,2}$ in all edges are independent; \item $Y_{\mathdutchcal{s}}$, $Y_{\ell,1}$ and $Y_{\ell,2}$ assign each edge to be open with respective probabilities $p'$, $q$ and $\frac{q'-q}{1-q}$; \item the following event has probability one: \begin{equation}\label{eq:3_casos} \{X = Y\} \cup \{X = (\bar{\omega}^*_{{\mathdutchcal{s}}},\bar{\omega}^*_{\ell,1},\bar{\omega}^*_{\ell,2})\} \cup \{Y = (\bar{\omega}^*_{{\mathdutchcal{s}}},\bar{\omega}^*_{\ell,1},\bar{\omega}^*_{\ell,2})\}. \end{equation} \end{itemize} Now take $\bar{\omega}_{\mathdutchcal{s}} = X_{\mathdutchcal{s}}$, $\bar{\omega}_\ell = X_{\ell,1}$, $\bar{\omega}'_{\mathdutchcal{s}} = Y_{\mathdutchcal{s}}$, $\bar{\omega}'_\ell = Y_{\ell,1} \vee Y_{\ell,2}$. The main observation is that each of the three events in~\eqref{eq:3_casos} implies that, for every $A\subseteq \cup_{n=0}^{k-1}[d]^n$, \begin{equation}\label{eq:a_ver} J_{\bar{\omega}}(A) \subseteq J_{\bar{\omega}'}(A).
\end{equation} Indeed, on the first event we have $\bar{\omega}' \geq \bar{\omega}$, on the second event we have $J_{\bar{\omega}}(A)=\emptyset$, and on the third event $J_{\bar{\omega}'}(A)$ contains the set of sites $y\in\cup_{n=2k}^{3k-1}[d]^n$ that are in $\prog(x)$ for some $x\in A$, which always contains $J_{\bar{\omega}}(A)$. Finally, with this coupling at hand, we can sample configurations $\omega,\omega' \in \{0,1\}^{\mathbb{E}_{d,k}}$ such that the restrictions of $\omega$ and $\omega'$ to sets of the form \[ \left\{\langle u\cdot v, w \rangle \in \mathbb{E}_{d,k}: v \in \cup_{n=0}^{2k-1}[d]^n\right\} \] with $u \in \cup_{m\in 2\mathbb{N}} [d]^{mk}$ are independent and sampled from the (appropriately translated) coupling measure. Then $\omega$ and $\omega'$ are distributed as $\P_{p,q}$ and $\P_{p',q'}$ respectively, and the cluster of the root in $\omega$ is a subset of the cluster of the root in $\omega'$. This concludes the proof of~\eqref{eq:cond_jump}. We now turn to the proof of \eqref{eq:cond_jump2}. As the two proofs are very similar, we now only outline the main steps of the argument. We let $\bar{\mathbb{E}}^{\mathdutchcal{s}}_{d,k}$, $\bar{\mathbb{E}}^\ell_{d,k}$, $\bar{\mathbb{E}}_{d,k}$, $\bar{\Omega}_{\mathdutchcal{s}}$, $\bar{\Omega}_\ell$ and $J_{\bar{\omega}}(A)$ be the same as before. A special configuration $\bar{\omega}^* \in \bar{\Omega}_{\mathdutchcal{s}} \times \bar{\Omega}_{\mathdutchcal{s}} \times \bar{\Omega}_\ell$ is defined as follows: $$\bar{\omega}^*_{{\mathdutchcal{s}},1} \equiv 0,\quad \bar{\omega}^*_{{\mathdutchcal{s}},2} \equiv 1,\quad \bar{\omega}^*_\ell(\langle r,s\rangle) = 1 \text{ if and only if } r \in \cup_{n=k}^{2k-1} [d]^n.$$ Using Lemma~\ref{lem:enhance}, we obtain $q'< q_0 < q$ and a coupling of $X = (X_{{\mathdutchcal{s}},1},X_{{\mathdutchcal{s}},2},X_\ell)$ and $Y=(Y_{{\mathdutchcal{s}},1},Y_{{\mathdutchcal{s}},2}, Y_\ell)$ so that the following hold. 
The values of $X_{{\mathdutchcal{s}},1}$, $X_{{\mathdutchcal{s}},2}$ and $X_{\ell}$ in all edges are independent; $X_{{\mathdutchcal{s}},1}$, $X_{{\mathdutchcal{s}},2}$ and $X_{\ell}$ assign each edge to be open with respective probabilities $p$, $\frac{p'-p}{1-p}$ and $q$; the values of $Y_{{\mathdutchcal{s}},1}$, $Y_{{\mathdutchcal{s}},2}$ and $Y_{\ell}$ in all edges are independent; $Y_{{\mathdutchcal{s}},1}$, $Y_{{\mathdutchcal{s}},2}$ and $Y_{\ell}$ assign each edge to be open with respective probabilities $p$, $\frac{p'-p}{1-p}$ and $q'$; the following event has probability one: \begin{equation*} \{X = Y\} \cup \{X = (\bar{\omega}^*_{{\mathdutchcal{s}},1},\bar{\omega}^*_{{\mathdutchcal{s}},2},\bar{\omega}^*_{\ell})\} \cup \{Y = (\bar{\omega}^*_{{\mathdutchcal{s}},1},\bar{\omega}^*_{{\mathdutchcal{s}},2},\bar{\omega}^*_{\ell})\}. \end{equation*} We then let $\bar{\omega}_{\mathdutchcal{s}} = X_{{\mathdutchcal{s}},1}$, $\bar{\omega}_\ell = X_\ell$, $\bar{\omega}'_{{\mathdutchcal{s}}} = Y_{{\mathdutchcal{s}},1} \vee Y_{{\mathdutchcal{s}},2}$ and $\bar{\omega}'_\ell = Y_\ell$. This coupling then guarantees~\eqref{eq:a_ver} as before, which concludes the proof of Theorem~\ref{thm:qccontinous}. \section{Comparison of different ranges} \label{sec:hubs} In this section we prove Theorem~\ref{prop:hub}. The general idea behind the proof is to explore short edges until reaching a dead end, then use a coupling construction to show that one has a better chance to proceed from each dead end when $k$ is larger. Let $u \in \mathbb{V}_{d,k}$ and $r = (r_1,\ldots, r_k) \in[d]^k$, so that $e= \langle u, u\cdot r\rangle \in \mathbb{E}^\ell_{d,k}$. 
We define the trace of $e$ to be the set of short edges $$\text{trace}(e) = \{\langle u, u\cdot r_1\rangle, \langle u\cdot r_1,u\cdot (r_1,r_2)\rangle,\ldots, \langle u\cdot (r_1,\ldots, r_{k-1}),u\cdot r\rangle\}.$$ Fix $\omega = (\omega_{\mathdutchcal{s}},\omega_\ell)$, with $\omega_{\mathdutchcal{s}} \in \{0,1\}^{\mathbb{E}^{\mathdutchcal{s}}_{d,k}}$ and $\omega_\ell \in \{0,1\}^{\mathbb{E}^\ell_{d,k}}$, and a set $A \subset \mathbb{V}_{d,k}$. We let $\Pi(A)$ be the cluster of $A$ in $\omega$, that is, the set of vertices of $\mathbb{T}_{d,k}$ which can be reached by a path started from some vertex of $A$ and consisting of directed edges which are open in $\omega$ (note that $\Pi(A)$ depends on $A$ and $\omega$ but we omit $\omega$ from the notation; this will also be the case for further notation that we introduce). We also let $\pi(A)$ be the cluster of $A$ in $\omega_{\mathdutchcal{s}}$, that is, the set of vertices of $\mathbb{T}_{d,k}$ that can be reached by a path started from some vertex of $A$ and consisting of \textit{short} edges, all of which are open in $\omega_{\mathdutchcal{s}}$. Note that $A \subseteq \pi(A) \subseteq \Pi(A)$. We say a short edge $e = \langle u, v\rangle \in \mathbb{E}^{\mathdutchcal{s}}_{d,k}$ is a \emph{hub} for $A$ (in $\omega$) if the following two conditions hold: \begin{equation}\label{eq:def_prog}\prog(v) \cap \pi(A) = \varnothing \qquad \text{and} \qquad \prog(u) \cap \pi(A) \neq \varnothing.\end{equation} We let $\sigma(A)$ denote the set of hubs for $A$ in $\omega$. \begin{lemma} \label{lem:pisigma} Let $\omega \in \{0,1\}^{\mathbb{E}_{d,k}}$ and $A \subset \mathbb{V}_{d,k}$. 
Then, \begin{equation} \label{eq:prog_are_disjoint} \text{the progenies }\prog(e)\text{ for }e\in \sigma(A) \text{ are disjoint.} \end{equation} Further assuming that \begin{equation} \text{there exists } w \in \mathbb{V}_{d,k} \text{ such that } A \subset \left\{ w \cdot v: v \in \cup_{n=0}^{k} [d]^n\right\}, \label{eq:assump_lem} \end{equation} we also have \begin{equation}\label{eq:prop_of_progs}\begin{split}\text{for any } e = \langle u,u\cdot r\rangle \in \mathbb{E}^\ell_{d,k}\text{ such that }u \in \pi(A)\text{ and }u\cdot r \notin \pi(A),\\\text{ there exists a unique }e'\in \text{trace}(e) \cap \sigma(A)\end{split} \end{equation} and \begin{equation}\label{eq:prop2_of_progs}\begin{split} \Pi(A) \text{ is the disjoint union of } \pi(A) \text{ and the sets }\\ \Pi(A) \cap \prog(e) \text{ for } e \in \sigma(A). \end{split}\end{equation} \end{lemma} \begin{proof} To prove~(\ref{eq:prog_are_disjoint}), assume that there are two distinct hubs \[ e = \langle u, v\rangle,\;e' = \langle u', v'\rangle \in \sigma(A):\quad\prog(e) \cap \prog(e') \neq \varnothing . \] Then either $u \in \prog(v')$ or $u' \in \prog(v)$. Without loss of generality we assume the latter. Together with~\eqref{eq:def_prog} applied to $e'$, this implies that $$\prog(v) \cap \pi(A) \supset \prog(u') \cap \pi(A) \neq \varnothing,$$ which contradicts~\eqref{eq:def_prog} applied to $e$. Now fix an edge $e = \langle u,u\cdot r\rangle$ as in~\eqref{eq:prop_of_progs}. Consider the $k$ short edges in the trace of $e$. By the first statement, we know that at most one of these short edges is in $\sigma(A)$. In order to show that one of them is in $\sigma(A)$, it suffices to show that \begin{equation} \label{eq:suffices} \prog(u) \cap \pi(A) \neq \varnothing\qquad \text{ and } \qquad \prog(u\cdot r) \cap \pi(A) = \varnothing . \end{equation} The first claim of~\eqref{eq:suffices} follows from the fact that $u \in \pi(A)$; let us prove the second. 
We are given that $u \cdot r \notin \pi(A)$, so it suffices to prove that $\prog(u\cdot r) \cap A = \varnothing$. For vertices $u', v'$ with $v' \in \prog(u')$, let $\text{dist}(u',v')$ denote the length of the unique path of short edges from $u'$ to $v'$. Then, \eqref{eq:assump_lem} gives $\text{dist}(w,v) \leq k$ for all $v \in A$. If $v \in \prog(u\cdot r)$ and $v \neq u \cdot r$, then $$\text{dist}(w,v) > \text{dist}(w,u\cdot r) = \text{dist}(w,u)+k,$$ so $v \notin A$. We also have $u\cdot r \notin A$, so the proof of \eqref{eq:suffices} is complete. Statement~\eqref{eq:prop2_of_progs} is an immediate consequence of~\eqref{eq:prog_are_disjoint} and~\eqref{eq:prop_of_progs}. \end{proof} Again fix $\omega \in \{0,1\}^{\mathbb{E}_{d,k}}$ and $A \subset \mathbb{V}_{d,k}$ satisfying~\eqref{eq:assump_lem}. For each hub $e \in \sigma(A)$, we define \begin{align*} & R(A,e) = \{e'=\langle u',v'\rangle \in \mathbb{E}_{d,k}^\ell:\;u'\in\pi(A) \text{ and } e\in \text{trace}(e')\} ,\\[.2cm] & \bar{S}(A,e) = \{v' \in \mathbb{V}_{d,k}: \langle u',v'\rangle \in R(A,e) \text{ for some }u' \in \mathbb{V}_{d,k}\} ,\\[.2cm] & S(A,e) = \{v' \in \mathbb{V}_{d,k}: \omega(\langle u',v'\rangle) = 1 \text{ for some } \langle u',v'\rangle \in R(A,e)\}. \end{align*} Note that $S(A,e) \subseteq \bar{S}(A,e) \subseteq \prog(e)$. Also note that, if $e_1, e_2 \in \sigma(A)$ are distinct, then $R(A,e_1)$ and $R(A,e_2)$ are disjoint, by~\eqref{eq:prog_are_disjoint}. Finally, note that for every $e \in \sigma(A)$, we have \begin{equation*} \Pi(A) \cap \prog(e) = \Pi(S(A,e)), \end{equation*} so that~\eqref{eq:prop2_of_progs} can be restated as \begin{equation}\label{eq:restate} \Pi(A) = \pi(A) \cup \left(\cup_{e \in \sigma(A)} \Pi(S(A,e))\right), \end{equation} where the union is disjoint. 
For $A$ satisfying~\eqref{eq:assump_lem}, we now let $\mathscr{C}_{p,q,k}(A)$ be the random set $\Pi(A)$ when $\omega$ is sampled from the measure $\mathbb{P}_{p,q}$ on percolation configurations on $\mathbb{T}_{d,k}$. Note that $\mathscr{C}_{p,q,k} = \mathscr{C}_{p,q,k}(\{o\})$. We observe that, conditioning on $\pi(A)$, $\sigma(A)$ is determined and the sets $\Pi(S(A,e))$ are independent over $e \in \sigma(A)$. Indeed, $\Pi(S(A,e))$ is determined by $\pi(A)$ and $\omega(e')$ for all $$e' = \langle u', v'\rangle \text{ with } v' \in \prog(e).$$ The sets of edges displayed above are disjoint for distinct choices of $e \in \sigma(A)$. Indeed, assume $e, f \in \sigma(A)$, $e \neq f$, and $e' = \langle u', v'\rangle$, $f' = \langle w', x' \rangle$ are long edges with $v' \in \text{prog}(e)$, $x'\in \text{prog}(f)$. Then, since~\eqref{eq:prog_are_disjoint} gives $\text{prog}(e) \cap \text{prog}(f) = \varnothing$, we obtain $v' \neq x'$, so $e' \neq f'$. Guided by this consideration, we now present a recursive exploration algorithm to reveal $\mathscr{C}_{p,q,k}(A)$. The algorithm starts by applying the following two steps to the set $A$: \emph{Step~1.} Explore $\pi(A)$ by revealing only the edges in $\omega_{\mathdutchcal{s}}$ that are necessary. More precisely, grow $\pi(A)$ progressively by starting from $A$ and querying the open/closed-state of short edges one by one, each time selecting a short edge $e = \langle u, v \rangle$ such that $u$ is already included in $\pi(A)$ and $v$ is not (and also following some lexicographic-type priority rule that guarantees that the full $\pi(A)$ is explored). Note that this also determines $\sigma(A)$, hence $\bar{S}(A,e)$ for each $e \in \sigma(A)$. \emph{Step~2.} For each $e \in \sigma(A)$, reveal $S(A,e)$. This is the same as revealing the value of $\omega_\ell(e')$ for each long edge $e'\in R(A,e)$. 
Note that, if $e = \langle u, v\rangle \in \sigma(A)$, then $S(A,e) \subseteq \{v \cdot w: w\in \cup_{n=0}^{k-1} [d]^n \}$, so that property~\eqref{eq:assump_lem} holds with $A$ replaced by $S(A,e)$. The algorithm then proceeds by applying Steps~1 and~2 to each of the sets $S(A,e)$, which take the role of $A$. That is: in Step~1 it explores $\pi(S(A,e))$, which also reveals $\sigma(S(A,e))$, and in Step~2, for each $e' \in \sigma(S(A,e))$, it reveals $S(S(A,e),e')$. The recursion then continues to further levels. By~\eqref{eq:restate}, this reveals the whole cluster $\mathscr{C}_{p,q,k}(A)$. We now want to look at the distributions of $S(A,e)$ and $\Pi(S(A,e))$ for $e \in \sigma(A)$. Although these distributions are easily understood, they are somewhat clumsy to describe, so we will need some more notation. First, fix $e = \langle u,v \rangle \in \sigma(A)$ with $v = (v_1,\ldots, v_n)$. Define $$ \beta(A,e) =\{ i \in \{1,\ldots, k\} : (v_1,\ldots, v_{n-i}) \in \pi(A)\};$$ this set describes which ancestors of $e$ have been reached from $A$ using short edges and could reach $\prog(e)$ using long edges (whose $\omega$-state has not yet been revealed). Note that $$R(A,e) = \{\langle (v_1,\ldots, v_{n-i}), v\cdot w\rangle: i \in \beta(A,e),\; w \in [d]^{k-i}\},$$ so that $$\bar{S}(A,e) = \{v\cdot w: i\in\beta(A,e),\;w \in [d]^{k-i}\}.$$ Second, we define some shift mappings in $\mathbb{T}_{d,k}$. Given $u \in \mathbb{V}_{d,k}$, we let $\uptau_u: \prog(u) \to \mathbb{V}_{d,k}$ be the function $$\uptau_u(u \cdot v) = v,\quad v \in [d]_\star.$$ If $e = \langle u,v\rangle \in \mathbb{E}^{\mathdutchcal{s}}_{d,k}$, we let $\uptau_e = \uptau_v$. Third, given $b \subset \{1,\ldots, k\}$, we let $\mathscr{A}_{q,k}(b)$ denote the distribution of the random subset of $\cup_{i \in b} [d]^{k-i} $ in which, independently, each point is included with probability $q$. Let $A \subseteq \mathbb{V}_{d,k}$ satisfy~\eqref{eq:assump_lem}.
Conditioning on $\pi(A)$, for each $e \in \sigma(A)$ we have \begin{align} \label{eq:lawofS}\uptau_e(S(A,e)) \stackrel{(d)}{=} \mathscr{A}_{q,k}(\beta(A,e)) \end{align} and the law of $\uptau_e(\Pi(S(A,e)))$ is equal to the law of the cluster of $B$ in $\mathbb{T}_{d,k}$, where $B$ is chosen according to $\mathscr{A}_{q,k}(\beta(A,e))$. We finally turn to the desired comparison between $\mathscr{C}_{p,q,k}$ for different values of the parameters. Given $A, B \subseteq \mathbb{V}_{d,k}$, let us write $A \preceq B$ in case there exist $u, v \in [d]_\star$ such that $A \subseteq \prog(u)$ and $\uptau_u(A) \subseteq \uptau_v(B \cap \prog(v))$. \begin{lemma} \label{lem:comparison_l} For any $k \in \mathbb{N}$ and $q \in (0,1)$, there exists $q'<q$ such that the following holds. Let $b' \subseteq \{1,\ldots, k+1\}$ and $b = b' \cap \{1,\ldots, k\}$. There exists a coupling $(A,B)$ of random sets $A, B \subseteq [d]_\star$ such that $$A \preceq B,\quad A \stackrel{(d)}{=} \mathscr{A}_{q,k}(b) \quad\text{ and }\quad B \stackrel{(d)}{=} \mathscr{A}_{q',k+1}(b').$$ \end{lemma} With this lemma at hand, we are ready to conclude the proof of Theorem~\ref{prop:hub}. Fix $p,q\in (0,1)$ and $k \in \mathbb{N}$, and choose $q'$ corresponding to $k$ and $q$ in Lemma~\ref{lem:comparison_l}. The idea is to compare the explorations of $\mathscr{C}_{p,q,k}$ and $\mathscr{C}_{p,q',k+1}$ using coupling. Recall that our algorithm to explore a cluster proceeds by the iterative application of two steps. Step~1 grows a portion of the cluster using only short edges, so it can be taken as the same for both explorations, since short edges have the same probability of being open in both. Step~2 inspects ``exit routes'', using long edges, from the portion of the cluster revealed in Step~1; Lemma~\ref{lem:comparison_l} guarantees that this is better (in the sense of $\preceq$-domination) for $\mathscr{C}_{p,q',k+1}$ than for $\mathscr{C}_{p,q,k}$.
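The two exploration steps are concrete enough to sketch in code. Below is our own illustration (not the authors' implementation): vertices are tuples over $\{0,\dots,d-1\}$ and the short-edge configuration is a finite set of open edges, all other edges being closed. It computes $\pi(A)$ and the hub set $\sigma(A)$; on examples one can check that the progenies of the returned hubs are pairwise disjoint, as stated in Lemma~\ref{lem:pisigma}.

```python
def pi(A, open_short, d):
    """Short-edge cluster pi(A): vertices reachable from A along open short
    edges.  Vertices are tuples over {0, ..., d-1}; open_short is a set of
    pairs (u, u + (a,)).  Edges not listed in open_short are closed."""
    stack, cluster = list(A), set(A)
    while stack:
        u = stack.pop()
        for a in range(d):
            v = u + (a,)
            if (u, v) in open_short and v not in cluster:
                cluster.add(v)
                stack.append(v)
    return cluster

def hubs(A, open_short, d):
    """sigma(A): short edges <u, v> with prog(u) meeting pi(A) and prog(v)
    missing it.  On a tree, prog(x) meets pi(A) iff x is a prefix of some
    w in pi(A); so u ranges over prefixes of cluster vertices (for which
    the first hub condition holds automatically)."""
    cluster = pi(A, open_short, d)

    def meets(x):  # does prog(x) intersect pi(A)?
        return any(w[:len(x)] == x for w in cluster)

    prefixes = {w[:i] for w in cluster for i in range(len(w) + 1)}
    return {(u, u + (a,)) for u in prefixes for a in range(d)
            if not meets(u + (a,))}
```

For instance, with $d=2$ and the open short edges $\langle o,(0)\rangle$ and $\langle (0),(0,0)\rangle$, the cluster of the root is $\{o,(0),(0,0)\}$ and there are four hubs, whose progenies are pairwise disjoint.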
Let us now present the coupling of explorations more formally. Note that we are dealing with percolation in the two graphs $\mathbb{T}_{d,k}$ and $\mathbb{T}_{d,k+1}$ simultaneously; these graphs have the same set of vertices (namely, $[d]_\star$) and same set of short edges, but the long edges differ. A set $A \subset [d]_\star$ satisfying condition \eqref{eq:assump_lem} for $k$ also satisfies it when $k$ is replaced by $k+1$. For such a set, and for $e \in \sigma(A)$, instead of $S(A,e)$ we will now write $S_k(A,e)$ and $S_{k+1}(A,e)$ to distinguish this set in the two graphs. The coupled exploration of $\mathscr{C}_{p,q,k}$ and $\mathscr{C}_{p,q',k+1}$ starts with revealing $\pi(\{o\})$, which we can take as the same in both clusters. Thus, $\sigma(\{o\})$ is also the same in both graphs, and we enumerate $$\sigma(\{o\}) = \{e^1,\ldots, e^N\}.$$ Also write $$A^i = S_k(\{o\},e^i),\qquad B^i = S_{k+1}(\{o\},e^i),\qquad i=1,\ldots, N.$$ By Lemma~\ref{lem:comparison_l}, these can be sampled with $A^i \preceq B^i$, so there exist $u^i,v^i \in [d]_\star$ and $\tilde{B}^i \subseteq B^i$ such that $$A^i \subset \text{prog}(u^i),\; \tilde{B}^i \subset \text{prog}(v^i),\;\uptau_{u^i}(A^i) = \uptau_{v^i}(\tilde{B}^i).$$ The second level of the exploration then proceeds as follows. For each $i \in \{1,\ldots, N\}$, take $\pi(\uptau_{u^i}(A^i))$ and $\pi(\uptau_{v^i}(\tilde{B}^i))$ as the same in both clusters, enumerate $$\sigma(\uptau_{u^i}(A^i)) = \sigma(\uptau_{v^i}(\tilde{B}^i)) = \{e^{i,1},\ldots, e^{i,N_i}\},$$ and let $$A^{i,j} = S_k(\uptau_{u^i}(A^i),e^{i,j}),\;B^{i,j} = S_{k+1}(\uptau_{v^i}(\tilde{B}^i),e^{i,j}),\qquad j = 1,\ldots, N_i,$$ which can be sampled with $A^{i,j} \preceq B^{i,j}$ for each $j$. Further levels are then carried out in the same way. The construction guarantees that $\mathscr{C}_{p,q,k}$ is embedded in $\mathscr{C}_{p,q',k+1}$, concluding the proof of Theorem~\ref{prop:hub}. It remains only to prove the previous lemma.
\begin{proof} [Proof of Lemma~\ref{lem:comparison_l}] We can assume that $k +1 \notin b'$, so that $b = b'$. In that case, for $\hat{q} \in (0,1)$ and $\hat{B}$ a random subset of $[d]_\star$, \begin{equation} \hat{B} \sim \mathscr{A}_{\hat{q},k+1}(b) \quad \text{if and only if} \quad \begin{array}{l}\uptau_{(1)}(\hat{B}),\ldots, \uptau_{(d)}(\hat{B}) \text{ i.i.d. }\\\text{ and distributed as } \mathscr{A}_{\hat{q},k}(b). \end{array} \end{equation} We now define sets $S_1^*, \ldots, S_d^* \subset [d]_\star$ by $$ S_1^* = \varnothing, \qquad S_2^*,\ldots, S_d^* = \bigcup_{i \in b}\; [d]^{k-i} . $$ By Lemma~\ref{lem:enhance}, there exists $q' < q$ and a coupling of random sets $X_1,\ldots, X_d$, $Y_1,\ldots, Y_d \subset [d]_\star$ so that $X_1,\ldots, X_d$ are independent and distributed as $\mathscr{A}_{q,k}(b)$, $Y_1,\ldots, Y_d$ are independent and distributed as $\mathscr{A}_{q',k}(b)$ and the following event has probability 1: \begin{equation}\begin{split} &\{(X_1,\ldots, X_d) = (Y_1,\ldots, Y_d)\} \cup \{(X_1,\ldots, X_d) = (S_1^*,\ldots, S_d^*)\} \\&\hspace{6cm}\cup \{ (Y_1,\ldots, Y_d) = (S_1^*,\ldots, S_d^*)\}.\end{split} \end{equation} The desired conclusion now follows by setting \[ A = X_1,\qquad B = \cup_{a \in [d]} \{a\cdot u: u \in Y_a \} . \qedhere \] \end{proof} \section*{Acknowledgements} The authors would like to thank Aernout van Enter for helpful discussions, and Gábor Pete for pointing out an inaccurate citation in an earlier version of this paper. B.N.B.L. would like to thank the University of Groningen and D.V. would like to thank NYU-Shanghai for support and hospitality. This project was supported by grants CNPq 309468/2014-0, FAPEMIG (Programa Pesquisador Mineiro), PIP 11220130100521CO, PICT-2015-3154, PICT-2013-2137, PICT-2012-2744, Conicet-45955 and MinCyT-BR-13/14. \renewcommand{\baselinestretch}{1} \setlength{\parskip}{0pt} \small \bibliographystyle{bib/leo}
\section{Introduction} Inverted pendulums are classical problems in dynamics. Mounting an inverted pendulum on a moving cart gives rise to a new class of interesting problems, one of which is the two-wheeled inverted pendulum system. Because of their small size, their agility in fast driving, and their stability under feedback control, these systems have attracted the interest of scientists and engineers\textcolor{blue}{\cite{jeong2018development}}, and new models and robots based on them are introduced every year. Self-transportation systems such as hoverboards and small two-wheeled robots are among the most important patents based on moving inverted pendulums\textcolor{blue}{\cite{canete2015modeling}}. The idea of using self-transportation systems comes from a push in the transportation industry to develop transportation systems that contribute less to pollution and cause less damage to the environment overall. One approach has been a shift to Personal Electric Vehicles (PEVs), which are powered by electricity rather than combustion. PEVs provide many benefits to both consumers and society, including lower costs than automobiles, shorter trip times over short distances, cleaner transportation, and mobility for the disabled\textcolor{blue}{\cite{ulrich2005estimating}}. One popular type of PEV that has emerged in recent years is the "Stand-on Scooter". In 2005, Ulrich analyzed existing stand-on scooter technology and estimated that the light design of these PEVs, combined with their modest range and speed, would be "highly feasible technically, and with substantial consumer demand could be feasible economically"\textcolor{blue}{\cite{ulrich2005estimating}}. One type of stand-on scooter analyzed by Ulrich was the Segway, a PEV marketed on its inverted-pendulum balancing mechanism and its agility.
An inverted-pendulum transporter is a type of self-balancing system that an operator can control without the need for a throttle. Instead, the device applies a lateral movement to the system based on an angle imposed by the operator, who acts as an inverted pendulum, in order to keep the pendulum balanced and stable in the upright position. A popular inverted-pendulum PEV is the "Hoverboard", which consists of two motorized wheels connected to two independent articulating pads. The operator controls the speed of travel by leaning forward and backward and controls the direction of travel by twisting the articulating pads with their feet. This allows for an inexpensive transportation system that is compact, as it does not require any large motor or steering apparatus, and agile, as it can be controlled easily and move quickly. Considerable work has already been done in developing transportation devices that use this sort of self-balancing mechanism. Grasser et al. developed a scaled-down prototype of a self-balancing pendulum, but not a full-scale version that could be ridden by a person\textcolor{blue}{\cite{grasser2002joe}}. Tsai et al.\textcolor{blue}{\cite{tsai2010adaptive}} developed a self-balancing PEV but used handlebars for guidance rather than articulating pads like the hoverboard. Moving inverted-pendulum systems are also used in robots for different purposes. For example, Solis et al. used such a robot for education\textcolor{blue}{\cite{solis2009development}}, and Double Robotics Inc. built a robot for telecommuters\textcolor{blue}{\cite{double2019}}. Two steps are needed to design a proper controller for these robots: the first is to derive an accurate model of the two-wheeled inverted pendulum, and the second is to design a suitable controller.
It should be mentioned that self-balancing systems like these robots pose difficult control problems, as they are inherently unstable and subject to unpredictable external forces from their environment. Solving these problems requires a robust dynamic model, in order to fully understand the physical properties of the system, as well as a clear understanding of how to control the system so that it remains stable. These robots also need one or two motors for locomotion, whose models should be included in the robot's dynamics\textcolor{blue}{\cite{frankovsky2017modeling}}. One of the first ideas behind two-wheeled robots was to eliminate wheels that are unnecessary most of the time and serve only to balance the system\textcolor{blue}{\cite{kim2005dynamic}}. For this purpose, Kim et al. developed a mathematical model for a self-balancing two-wheeled robot that was capable of changing direction. This robot acted as a rigid single pendulum, allowing the model to assume a lumped-parameter system, and they designed a linear controller for their robot\textcolor{blue}{\cite{kim2005dynamic}}. Other researchers tried to extend\textcolor{blue}{\cite{jeong2018development}} or modify the previous models\textcolor{blue}{\cite{kim2015dynamic}}, and also to improve controller performance during the robots' operation. In one study, Zafar et al. derived a mathematical model for a self-balancing inverted-pendulum robot and implemented an operational space controller\textcolor{blue}{\cite{zafar2016whole}}. In other studies, new controllers for two-wheeled inverted pendulum systems were designed. These controllers can handle time-varying parametric uncertainties\textcolor{blue}{\cite{boukens2017design}}, strong nonlinear behaviors due to abrupt external disturbances\textcolor{blue}{\cite{kim2017nonlinear}}, and initial errors, pulse disturbances, and random noise\textcolor{blue}{\cite{yue2018efficient}}.
A new type of two-wheeled inverted pendulum robot has also been designed for specific applications, in which the pendulum is not rigid and behaves as a flexible system. Partial Differential Equations (PDEs) are the standard way of mathematically representing continuous systems and have been used for vibration analysis and control in different models and applications\textcolor{blue}{\cite{mehrvarz2019vibration, mehrvarz2018vibration, karagiannis2019exponential,marzban2016effect}}. Researchers have worked on flexible structures such as beams and bars subject to base motion for several years, but they did not treat their systems as moving robots. They used flexible structures with a moving base for different applications, such as micro gyroscopes and piezoresponse force microscopy\textcolor{blue}{\cite{salehi2009vibration, ansari2009coupled, khodaei2018theoretical}}. The base motion introduces additional accelerations and leads to complex nonlinear PDE equations of motion for the flexible system. The idea of using a flexible structure as an inverted pendulum in a robot was first proposed by Nguyen et al., who studied a linearized mathematical model and controller for a single flexible inverted pendulum; their model accounted for lateral movement in only one direction rather than two, and the vehicle had four wheels\textcolor{blue}{\cite{nguyen2016designing}}. In another robot, Mehrvarz et al. modeled a two-wheeled flexible inverted pendulum that can move in one direction\textcolor{blue}{\cite{mehrvarz2018modeling}}. They also designed an MPC controller for their robot and demonstrated the precision of their modeling\textcolor{blue}{\cite{clark2019control}}. However, their robot cannot move in the plane, which was its main limitation. Here, to overcome this limitation of the two-wheeled flexible-beam inverted pendulum used in\textcolor{blue}{\cite{mehrvarz2018modeling}}, a two-wheeled, two-flexible-beam inverted pendulum model is addressed.
This robot is designed to move in the plane and does not suffer from the previous limitation. Because of the complexity and nonlinearity of the system, only its dynamic model and vibrations are considered and analyzed in this paper; a controller is not yet designed for this system. The proposed model analyzes the overall dynamics of the robot and the vibrations of the beams at the same time, which is a novel approach. The main goal of this paper is to investigate and simulate the dynamic model of piezoelectrically actuated cantilever beams on a two-wheeled robot. The remainder of this paper is organized as follows. The dynamic equations of the system are derived in \textcolor{blue}{{Section }\ref{Mathematical modeling}}, and a brief summary of their solution is presented in \textcolor{blue}{{Section }\ref{numericalsimulation}}. In \textcolor{blue}{{Section }\ref{simulationresults}} the simulation results are discussed, and a conclusion is given in \textcolor{blue}{{Section }\ref{conclusion}}. \section{Mathematical modeling }\label{Mathematical modeling} In this section, the governing equations of motion are derived using the extended Hamilton's principle. In order to apply the extended Hamilton's principle to the system, the positions and the translational and rotational velocities of its elements must first be defined mathematically. As shown in \textcolor{blue}{{Fig.}\ref{fig1}}, the system has two beams and two piezoelectric actuators to excite the beams. These beams are mounted on two independent bases, which are attached to two wheels and DC motors. In this model, as seen in \textcolor{blue}{{Fig.}\ref{fig1}}, the two flexible cantilever beams act as flexible inverted pendulums fixed to two articulating bases. The mathematical model represents the response of the system to small disturbances from the piezoelectric actuators mounted at the base of each pendulum.
These actuators cause deformation and bending in the continuous pendulums when voltage is applied to them. As seen in \textcolor{blue}{{Fig.}\ref{fig2}}, the positions of the right (beam$_{1}$) and left (beam$_{2}$) beams in the $XYZ$ frame are given by: \begin{figure} \centering \includegraphics[width=80mm]{Fig1.pdf} \caption{ The proposed dynamic model of the two-wheeled two-flexible-beam inverted pendulum robot.} \label{fig1} \end{figure} \begin{align} \label{eq1} \left[ \begin{matrix} {{X}_{beam{}_{1}}} \\ {{Y}_{beam{}_{1}}} \\ {{Z}_{beam{}_{1}}} \\ \end{matrix} \right]=\left[ \begin{matrix} {{X}_{c}}+a\sin \varphi +\cos \varphi \left( \cos {{\theta }_{1}}{{w}_{1}}+{{x}_{1}}\sin {{\theta }_{1}} \right) \\ {{Y}_{c}}-a\cos \varphi +\sin \varphi \left( \cos {{\theta }_{1}}{{w}_{1}}+{{x}_{1}}\sin {{\theta }_{1}} \right) \\ {{r}_{w}}-\sin {{\theta }_{1}}{{w}_{1}}+{{x}_{1}}\cos {{\theta }_{1}} \\ \end{matrix} \right] \end{align} \begin{align} \label{eq2} \left[ \begin{matrix} {{X}_{beam{}_{2}}} \\ {{Y}_{beam{}_{2}}} \\ {{Z}_{beam{}_{2}}} \\ \end{matrix} \right]=\left[ \begin{matrix} {{X}_{c}}-a\sin \varphi +\cos \varphi \left( \cos {{\theta }_{2}}{{w}_{2}}+{{x}_{2}}\sin {{\theta }_{2}} \right) \\ {{Y}_{c}}+a\cos \varphi +\sin \varphi \left( \cos {{\theta }_{2}}{{w}_{2}}+{{x}_{2}}\sin {{\theta }_{2}} \right) \\ {{r}_{w}}-\sin {{\theta }_{2}}{{w}_{2}}+{{x}_{2}}\cos {{\theta }_{2}} \\ \end{matrix} \right] \end{align} where $X_c$ and $Y_c$ are the positions of the center of gravity of the robot and $a$ denotes the distance between the center of the beams and the center of gravity in the $y$-direction. The robot can rotate around the $z$-axis, and the bases have different rotations around the $y$-axis; these rotational angles are denoted by $\varphi$, $\theta_1$ and $\theta_2$. In \eqref{eq1} and \eqref{eq2}, the parameters $w_1$ and $w_2$ represent the bending deflections of the beams and $r_w$ is the radius of the wheels.
Since the beams are assumed to be continuous, the position of each particle in the beams is given by $x_1$ and $x_2$. The positions of the right (wheel$_1$ and base$_1$) and left (wheel$_2$ and base$_2$) wheels and bases can be obtained as: \begin{equation} \label{eq3} \left[ \begin{matrix} {{X}_{whee{{l}_{1}}}} \\ {{Y}_{whee{{l}_{1}}}} \\ {{Z}_{whee{{l}_{1}}}} \\ \end{matrix} \right]=\left[ \begin{matrix} {{X}_{c}}+2a\sin \varphi \\ {{Y}_{c}}-2a\cos \varphi \\ {{r}_{w}} \\ \end{matrix} \right] \end{equation} \begin{equation} \label{eq4} \left[ \begin{matrix} {{X}_{whee{{l}_{2}}}} \\ {{Y}_{whee{{l}_{2}}}} \\ {{Z}_{whee{{l}_{2}}}} \\ \end{matrix} \right]=\left[ \begin{matrix} {{X}_{c}}-2a\sin \varphi \\ {{Y}_{c}}+2a\cos \varphi \\ {{r}_{w}} \\ \end{matrix} \right] \end{equation} \begin{equation} \label{eq5} \left[ \begin{matrix} {{X}_{bas{{e}_{1}}}} \\ {{Y}_{bas{{e}_{1}}}} \\ {{Z}_{bas{{e}_{1}}}} \\ \end{matrix} \right]=\left[ \begin{matrix} {{X}_{c}}+a\sin \varphi \\ {{Y}_{c}}-a\cos \varphi \\ {{r}_{w}} \\ \end{matrix} \right] \end{equation} \begin{equation} \label{eq6} \left[ \begin{matrix} {{X}_{bas{{e}_{2}}}} \\ {{Y}_{bas{{e}_{2}}}} \\ {{Z}_{bas{{e}_{2}}}} \\ \end{matrix} \right]=\left[ \begin{matrix} {{X}_{c}}-a\sin \varphi \\ {{Y}_{c}}+a\cos \varphi \\ {{r}_{w}} \\ \end{matrix} \right] \end{equation} Consequently, the translational velocity of all the elements can be obtained by applying the time-derivative operator to \eqref{eq1} through \eqref{eq6}. It is assumed that the system moves without slippage in the $x$ and $y$ directions. Hence, the velocity components of the center of gravity of the system satisfy the relationship \begin{equation}\label{eq7} \frac{{{{\dot{Y}}}_{c}}}{{{{\dot{X}}}_{c}}}=-\tan \varphi \end{equation} It should be noted that the overall velocity of each beam can be calculated by integrating the velocities of the beam's particles along its length. Besides the translational motion, the elements of the robot also undergo rotational motion.
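As a sanity check on this differentiation step, the position vector of Eq. \eqref{eq1} can be differentiated symbolically. The following is a minimal SymPy sketch (the variable names are ours, not from any code accompanying the paper); it confirms that the $Z$-component of a beam particle's velocity involves only the rates of $\theta_1$ and $w_1$, while the $X$-component also carries $\dot{X}_c$ and $\dot{\varphi}$.

```python
import sympy as sp

t = sp.symbols('t')
a, r_w, x1 = sp.symbols('a r_w x_1', positive=True)

# Generalized coordinates as functions of time (names are ours)
Xc = sp.Function('X_c')(t)
Yc = sp.Function('Y_c')(t)
phi = sp.Function('varphi')(t)
th1 = sp.Function('theta_1')(t)
w1 = sp.Function('w_1')(x1, t)   # bending deflection of beam 1

# Position of a particle of beam 1, Eq. (1)
pos = sp.Matrix([
    Xc + a*sp.sin(phi) + sp.cos(phi)*(sp.cos(th1)*w1 + x1*sp.sin(th1)),
    Yc - a*sp.cos(phi) + sp.sin(phi)*(sp.cos(th1)*w1 + x1*sp.sin(th1)),
    r_w - sp.sin(th1)*w1 + x1*sp.cos(th1),
])

# Translational velocity: apply the time-derivative operator
vel = pos.diff(t)
print(vel[2])   # Z-velocity: only theta_1 and w_1 rates appear
```

Integrating the squared norms of these particle velocities along each beam then gives the beams' translational contribution to the kinetic energy.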
The rotational velocity of each element can be written as: \begin{equation} \label{eq8} \left[ \begin{matrix} {{\omega }_{{{x}_{_{whee{{l}_{1}}}}}}} \\ {{\omega }_{{{y}_{_{whee{{l}_{1}}}}}}} \\ {{\omega }_{{{z}_{_{whee{{l}_{1}}}}}}} \\ \end{matrix} \right]=\left[ \begin{matrix} 0 \\ \frac{{{{\dot{X}}}_{c}}\cos \varphi +{{{\dot{Y}}}_{c}}\sin \varphi +2a\dot{\varphi }}{{{r}_{w}}} \\ {\dot{\varphi }} \\ \end{matrix} \right] \end{equation} \begin{equation} \label{eq9} \left[ \begin{matrix} {{\omega }_{{{x}_{_{whee{{l}_{2}}}}}}} \\ {{\omega }_{{{y}_{_{whee{{l}_{2}}}}}}} \\ {{\omega }_{{{z}_{_{whee{{l}_{2}}}}}}} \\ \end{matrix} \right]=\left[ \begin{matrix} 0 \\ \frac{{{{\dot{X}}}_{c}}\cos \varphi +{{{\dot{Y}}}_{c}}\sin \varphi -2a\dot{\varphi }}{{{r}_{w}}} \\ {\dot{\varphi }} \\ \end{matrix} \right] \end{equation} \begin{equation} \label{eq10} \left[ \begin{matrix} {{\omega }_{{{x}_{_{bas{{e}_{1}}}}}}} \\ {{\omega }_{{{y}_{_{bas{{e}_{1}}}}}}} \\ {{\omega }_{{{z}_{_{bas{{e}_{1}}}}}}} \\ \end{matrix} \right]=\left[ \begin{matrix} 0 \\ {{{\dot{\theta }}}_{1}} \\ {\dot{\varphi }} \\ \end{matrix} \right] \end{equation} \begin{equation} \label{eq11} \left[ \begin{matrix} {{\omega }_{{{x}_{_{bas{{e}_{2}}}}}}} \\ {{\omega }_{{{y}_{_{bas{{e}_{2}}}}}}} \\ {{\omega }_{{{z}_{_{bas{{e}_{2}}}}}}} \\ \end{matrix} \right]=\left[ \begin{matrix} 0 \\ {{{\dot{\theta }}}_{2}} \\ {\dot{\varphi }} \\ \end{matrix} \right] \end{equation} \begin{equation} \label{eq12} \left[ \begin{matrix} {{\omega }_{{{x}_{_{bea{{m}_{1}}}}}}} \\ {{\omega }_{{{y}_{_{bea{{m}_{1}}}}}}} \\ {{\omega }_{{{z}_{_{bea{{m}_{1}}}}}}} \\ \end{matrix} \right]=\left[ \begin{matrix} \dot{\varphi }\cos {{\theta }_{1}} \\ {{{\dot{\theta }}}_{1}}+\frac{{{\partial }^{2}}{{w}_{1}}}{\partial x\partial t} \\ \dot{\varphi }\sin {{\theta }_{1}} \\ \end{matrix} \right] \end{equation} \begin{equation} \label{eq13} \left[ \begin{matrix} {{\omega }_{{{x}_{_{bea{{m}_{2}}}}}}} \\ {{\omega }_{{{y}_{_{bea{{m}_{2}}}}}}} \\ {{\omega 
}_{{{z}_{_{bea{{m}_{2}}}}}}} \\ \end{matrix} \right]=\left[ \begin{matrix} \dot{\varphi }\cos {{\theta }_{2}} \\ {{{\dot{\theta }}}_{2}}+\frac{{{\partial }^{2}}{{w}_{2}}}{\partial x\partial t} \\ \dot{\varphi }\sin {{\theta }_{2}} \\ \end{matrix} \right] \end{equation} Since the system has 6 different parts, the kinetic energy of the whole system including the translational and rotational parts can be obtained as: \begin{equation}\label{eq14} T=\frac{1}{2}\sum\limits_{i=1}^{6}{\left( {{\rho }_{i}}{{A}_{i}}{{V}_{i}}^{2}+{{I}_{xi}}{{\omega}_{xi}}^{2}+{{I}_{yi}}{{\omega}_{yi}}^{2}+{{I}_{zi}}{{\omega }_{zi}}^{2} \right)} \end{equation} where ${V_i}^2={\dot{X_i}}^2+{\dot{Y_i}}^2+{\dot{Z_i}}^2$. $I_{xi}$, $I_{yi}$ and $I_{zi}$ are the mass moments of inertia of the $i$-th element and are assumed to be equal for each wheel, each base, and each beam. Also, the effect of the rotary inertia terms of the beams is ignored as in\textcolor{blue}{\cite{bhadbhade2008novel}}. The combined $\rho A$ for the system can be calculated as: \begin{equation}\label{eq15} \rho A=\left\{ \begin{matrix} {{\rho }_{b}}{{A}_{b}}+{{\rho }_{p}}{{A}_{p}} & 0<x\le {{L}_{p}} \\ {{\rho }_{b}}{{A}_{b}} & {{L}_{p}}<x\le{L} \\ \end{matrix} \right. \end{equation} where $L$ and $L_p$ are the beam and the piezoelectric lengths, $\rho_b$ and $\rho_p$ are the densities of the beam and piezoelectric actuators, respectively, and $A_b$ and $A_p$ are the cross-sectional areas.
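The piecewise mass per unit length of Eq. \eqref{eq15} is simple to evaluate numerically; the following is a minimal sketch (function and parameter names are ours):

```python
def rho_A(x, rho_b, A_b, rho_p, A_p, L_p, L):
    """Combined mass per unit length of Eq. (15).

    The piezoelectric patch contributes rho_p * A_p only over the
    actuated segment 0 < x <= L_p; beyond it, only the bare beam
    term rho_b * A_b remains.
    """
    if not 0.0 < x <= L:
        raise ValueError("x must lie on the beam: 0 < x <= L")
    return rho_b * A_b + (rho_p * A_p if x <= L_p else 0.0)
```

The same on/off structure reappears later in the actuation term of the beam equations through the Heaviside difference $S(x)=H(x)-H(x-L_p)$.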
Also, the potential energy of the beams and piezoelectric actuators can be expressed as: \begin{equation}\label{eq16} \begin{split} & U=2\rho gAL+{{\int_{0}^{L}{\frac{1}{2}{{E}_{b}}{{I}_{b}}\left( \frac{{{\partial }^{2}}{{w}_{1}}}{\partial {{x}_{1}}^{2}} \right)}}^{2}}d{{x}_{1}}\\ &+{{\int_{0}^{{{L}_{p}}}{\frac{1}{2}{{E}_{p}}{{I}_{p}}\left( \frac{{{\partial }^{2}}{{w}_{1}}}{\partial {{x}_{1}}^{2}}+{{z}_{p}}{{d}_{31}}\frac{{{v}_{1}}\left( t \right)}{{{t}_{p}}} \right)}}^{2}}d{{x}_{1}}\\ &+{{\int_{0}^{L}{\frac{1}{2}{{E}_{b}}{{I}_{b}}\left( \frac{{{\partial }^{2}}{{w}_{2}}}{\partial {{x}_{2}}^{2}} \right)}}^{2}}d{{x}_{2}}\\ &+{{\int_{0}^{{{L}_{p}}}{\frac{1}{2}{{E}_{p}}{{I}_{p}}\left( \frac{{{\partial }^{2}}{{w}_{2}}}{\partial {{x}_{2}}^{2}}+{{z}_{p}}{{d}_{31}}\frac{{{v}_{2}}\left( t \right)}{{{t}_{p}}} \right)}}^{2}}d{{x}_{2}} \\ & +\int_{0}^{L}{\rho gA}\left( -\sin {{\theta }_{1}}{{w}_{1}}+{{x}_{1}}\cos {{\theta }_{1}} \right)d{{x}_{1}} \\ &+\int_{0}^{L}{\rho gA}\left( -\sin {{\theta }_{2}}{{w}_{2}}+{{x}_{2}}\cos {{\theta }_{2}} \right)d{{x}_{2}} \end{split} \end{equation} where $E_b$ and $E_p$ denote Young's modulus of elasticity of the beam and the piezoelectric, respectively, and $I_b$ and $I_p$ are the area moments of inertia of the beam and piezoelectric cross-sections about the $y$-axis, respectively. In Eq. \eqref{eq16}, $z_p$ is the neutral axis along the $z$-axis, $d_{31}$ denotes the piezoelectric constant of the actuator, $v_1 (t)$ and $v_2 (t)$ are the voltages applied to the piezoelectric actuators, and $t_p$ is the thickness of the piezoelectric actuators. The damping effects of the beams can be taken into account as virtual work terms as follows: \begin{equation}\label{eq17} \delta {{W}^{nc}}=\int_{0}^{L}{{{C}_{1}}\frac{\partial {{w}_{1}}}{\partial t}\delta {{w}_{1}}}d{{x}_{1}}+\int_{0}^{L}{{{C}_{2}}\frac{\partial {{w}_{2}}}{\partial t}\delta {{w}_{2}}}d{{x}_{2}} \end{equation} where $C_1$ and $C_2$ are the viscous damping coefficients of the beams.
As noted, the robot is assumed to have two DC motors, which produce torques $\tau_1$ and $\tau_2$. The work of these external torques yields the following additional virtual work terms: \begin{equation}\label{eq18} \begin{split} &\delta {{W}^{ext}}=\frac{{{\tau }_{1}}+{{\tau }_{2}}}{{{r}_{w}}}\left(\right. \cos \varphi \delta {{X}_{c}}+\sin \varphi \delta {{Y}_{c}}+\left(\right. {{Y}_{c}}\cos \varphi\\ &-{{X}_{c}}\sin \varphi \left.\right)\delta \varphi \left.\right)+{{F}_{s}}\left(\right. -\sin \varphi \delta {{X}_{c}}+\cos \varphi \delta {{Y}_{c}}-\\ &\left(\right. {{X}_{c}}\cos \varphi +{{Y}_{c}}\sin \varphi \left.\right)\delta \varphi \left.\right) \end{split} \end{equation} \begin{figure} \centering \includegraphics[width=80mm]{Fig2.pdf} \caption{ The robot kinematics.} \label{fig2} \end{figure} \begin{figure} \centering \includegraphics[width=80mm]{Fig3.pdf} \caption{The DC motor circuit diagram.} \label{fig3} \end{figure} In Eq. \eqref{eq18}, the torques $\tau_1$ and $\tau_2$ are produced by the wheel-connected DC motors. The circuit diagram of the DC motors is shown in \textcolor{blue}{{Fig.}\ref{fig3}}. In this figure, the parameter $R_a$ denotes the armature resistance and the variables $V_a$ and $i_a$ are the applied voltage and the motor current draw, respectively. The equation of this system can be derived by applying Kirchhoff's voltage law to the circuit as: \begin{equation}\label{eq19} \begin{matrix} {{V}_{aj}}(t)={{R}_{a}}{{i}_{aj}}(t)+{{\tau }_{Bj}} & ,j=1,2 \\ \end{matrix} \end{equation} where $\tau_{Bj}$ represents the back electromotive force (emf) and is equal to: \begin{equation}\label{eq20} \begin{matrix} {{\tau }_{Bj}}={{K}_{B}}\frac{{{{\dot{X}}}_{whee{{l}_{j}}}}}{{{r}_{w}}} & ,j=1,2 \\ \end{matrix} \end{equation} with $K_B$ being the motor speed coefficient.
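Combining Eq. \eqref{eq19} with the back-emf of Eq. \eqref{eq20} and the usual DC-motor torque relation $\tau_j = K_t i_{aj}$ (with $K_t$ the motor torque constant, introduced below) gives the wheel torque as a function of the applied voltage and the wheel speed. A minimal sketch with made-up illustrative numbers:

```python
def motor_torque(V_a, x_dot_wheel, R_a, K_B, K_t, r_w):
    """Wheel torque from the armature model of Eqs. (19)-(21).

    The back-emf K_B * x_dot_wheel / r_w opposes the applied voltage,
    so the available torque drops as the wheel speeds up.
    """
    i_a = (V_a - K_B * x_dot_wheel / r_w) / R_a  # armature current
    return K_t * i_a

# Illustrative (made-up) parameters: maximum torque occurs at stall
print(motor_torque(V_a=12.0, x_dot_wheel=0.0, R_a=2.0, K_B=0.5, K_t=0.5, r_w=0.1))  # 3.0
```

The speed-dependent drop in torque is exactly the coupling that Eq. \eqref{eq22} carries into the robot's dynamics.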
Here, torques $\tau_1$ and $\tau_2$ are given as: \begin{equation}\label{eq21} \begin{matrix} {{\tau }_{j}}={{K}_{t}}{{i}_{aj}} & ,j=1,2 \\ \end{matrix} \end{equation} where $K_t$ is the motor torque constant and is provided by the manufacturer. Hence, the coupled equation for the DC motor part can be obtained by substituting Eq. \eqref{eq20} into Eq. \eqref{eq19}. \begin{equation}\label{eq22} \begin{matrix} {{V}_{aj}}(t)={{R}_{a}}{{i}_{aj}}(t)+{{K}_{B}}\frac{{{{\dot{X}}}_{whee{{l}_{j}}}}}{{{r}_{w}}} & ,j=1,2 \\ \end{matrix} \end{equation} The extended Hamilton's principle for this system can be written as: \begin{equation}\label{eq23} \int_{0}^{t}{(\delta T-\delta U+\delta {{W}^{nc}}+\delta {{W}^{ext}})dt}=0 \end{equation} After substituting Eqs. \eqref{eq14}-\eqref{eq18} into Eq. \eqref{eq23} and performing some manipulations and simplifications, the dynamic equations of the system can be obtained as: \begin{strip} \begin{align} \label{eq24} \begin{split} & \left( \frac{{{\tau }_{1}}+{{\tau }_{2}}}{{{r}_{w}}} \right)\cos \varphi -\sin \varphi {{F}_{s}}+\frac{2{{I}_{{{y}_{wheel}}}}}{{{r}_{w}}^{2}}\left(\right. \sin 2\varphi \dot{X}\dot{\varphi }-\cos 2\varphi \dot{Y}\dot{\varphi } \left.\right)-2\left(\right. \frac{{{I}_{{{y}_{wheel}}}}}{{{r}_{w}}^{2}}{{\cos }^{2}}\varphi +{{m}_{w}}\\ &+{{m}_{base}} \left.\right)\ddot{X}+\frac{{{I}_{{{y}_{wheel}}}}}{{{r}_{w}}^{2}}\sin 2\varphi \ddot{Y}-\int_{0}^{L}{\rho A}\left[\right. \ddot{X}+\left( a\cos \varphi -\cos {{\theta }_{1}}\sin \varphi {{w}_{1}}-{{x}_{1}}\sin {{\theta }_{1}}\sin \varphi \right)\\ &\times\ddot{\varphi } \left.\right.+\left( -\cos \varphi \sin {{\theta }_{1}}{{w}_{1}}+{{x}_{1}}\cos \varphi \cos {{\theta }_{1}} \right){{{\ddot{\theta }}}_{1}}+\cos \varphi \cos {{\theta }_{1}}\frac{{{\partial }^{2}}{{w}_{1}}}{\partial {{t}^{2}}}+\left(\right.
2\sin {{\theta }_{1}}\sin \varphi {{w}_{1}}-2\\ &\times{{x}_{1}}\cos{{\theta }_{1}}\sin\varphi){{{\dot{\theta }}}_{1}}\dot{\varphi }-2\sin \varphi \cos {{\theta }_{1}}\dot{\varphi }\frac{\partial {{w}_{1}}}{\partial t}+(-a\sin \varphi -\cos {{\theta }_{1}}\cos \varphi {{w}_{1}} -{{x}_{1}}\cos \varphi\sin {{\theta }_{1}})\\ &\times{{{\dot{\varphi }}}^{2}}-2\cos \varphi \sin {{\theta }_{1}}{{{\dot{\theta }}}_{1}}\frac{\partial {{w}_{1}}}{\partial t}+\left( -\cos {{\theta }_{1}}\cos \varphi {{w}_{1}}-{{x}_{1}}\cos \varphi \sin {{\theta }_{1}} \right){{{\dot{\theta }}}_{1}}^{2} \left.\right]d{{x}_{1}}-\int_{0}^{L}{\rho A}[ \ddot{X}-\\ &(a\cos \varphi+\cos {{\theta }_{2}}\sin \varphi {{w}_{2}}+{{x}_{2}}\sin {{\theta }_{2}}\sin \varphi)\ddot{\varphi } -\left( \cos \varphi \sin {{\theta }_{2}}{{w}_{2}}-{{x}_{2}}\cos \varphi \cos {{\theta }_{2}} \right){{{\ddot{\theta }}}_{2}}+\cos \varphi\\ &\times\cos {{\theta }_{2}}\frac{{{\partial }^{2}}{{w}_{2}}}{\partial {{t}^{2}}} +\left(\right. 2\sin {{\theta }_{2}}\sin \varphi {{w}_{2}}-2{{x}_{2}}\cos {{\theta }_{2}}\sin \varphi \left.\right){{{\dot{\theta }}}_{2}}\dot{\varphi }-2\sin \varphi \cos {{\theta }_{2}}\dot{\varphi }\frac{\partial {{w}_{2}}}{\partial t}+( a\sin \varphi\\ &-\cos {{\theta }_{2}}\cos \varphi {{w}_{2}}-{{x}_{2}}\cos \varphi \sin {{\theta }_{2}}){{{\dot{\varphi }}}^{2}}-2\cos \varphi \sin {{\theta }_{2}}{{{\dot{\theta }}}_{2}}\frac{\partial {{w}_{2}}}{\partial t}-\left(\right. \cos {{\theta }_{2}}\cos \varphi {{w}_{2}}+{{x}_{2}}\cos \varphi\\ &\times\sin {{\theta }_{2}} \left.\right){{{\dot{\theta }}}_{2}}^{2} \left.\right]d{{x}_{2}}=0 \end{split} \end{align} \begin{align} \label{eq25} \begin{split} &\left( \frac{{{\tau }_{1}}+{{\tau }_{2}}}{{{r}_{w}}} \right)\sin \varphi +\cos \varphi {{F}_{s}}-\frac{2{{I}_{{{y}_{wheel}}}}}{{{r}_{w}}^{2}}\left( \sin 2\varphi \dot{Y}\dot{\varphi }+\cos 2\varphi \dot{X}\dot{\varphi } \right)-2\left(\right. 
\frac{{{I}_{{{y}_{wheel}}}}}{{{r}_{w}}^{2}}{{\sin }^{2}}\varphi +{{m}_{w}}\\ &+{{m}_{base}}\left.\right)\ddot{Y}-\frac{{{I}_{{{y}_{wheel}}}}}{{{r}_{w}}^{2}}\ddot{X}\sin 2\varphi-\int_{0}^{L}{\rho A}\left[\right. \ddot{Y}+( a\sin \varphi +\cos {{\theta }_{1}}\cos \varphi {{w}_{1}}+{{x}_{1}}\cos \varphi \sin {{\theta }_{1}}) \\ &\times\ddot{\varphi }+\left(\right. {{x}_{1}}\cos {{\theta }_{1}}\sin \varphi -\sin {{\theta }_{1}}\sin \varphi {{w}_{1}} \left.\right){{{\ddot{\theta }}}_{1}}+\cos {{\theta }_{1}}\sin \varphi \frac{{{\partial }^{2}}{{w}_{1}}}{\partial {{t}^{2}}}+\left(\right. 2{{x}_{1}}\cos {{\theta }_{1}}\cos \varphi -2\times\\ &\cos \varphi \sin {{\theta }_{1}}{{w}_{1}})\dot{\varphi }{{{\dot{\theta }}}_{1}}+2\cos {{\theta }_{1}}\cos \varphi \dot{\varphi }\frac{\partial {{w}_{1}}}{\partial t}+\left(\right. a\cos \varphi -\cos {{\theta }_{1}}\sin \varphi {{w}_{1}}-{{x}_{1}}\sin {{\theta }_{1}}\sin \varphi \left.\right){{{\dot{\varphi }}}^{2}}\\ &-2\sin {{\theta }_{1}}\sin \varphi {{{\dot{\theta }}}_{1}}\frac{\partial {{w}_{1}}}{\partial t} +( -{{x}_{1}}\sin {{\theta }_{1}}\sin \varphi -\cos {{\theta }_{1}}\sin \varphi {{w}_{1}}){{{\dot{\theta }}}_{1}}^{2} ]d{{x}_{1}}-\int_{0}^{L}{}\rho A[ \ddot{Y}-(a\sin \varphi \\ &-\cos {{\theta }_{2}}\cos \varphi {{w}_{2}}-{{x}_{2}}\cos \varphi\sin {{\theta }_{2}} \left.\right)\ddot{\varphi }{}+\left( {{x}_{2}}\cos {{\theta }_{2}}\sin \varphi -\sin {{\theta }_{2}}\sin \varphi {{w}_{2}} \right){{{\ddot{\theta }}}_{2}}+\cos {{\theta }_{2}}\sin \varphi \\ &\frac{{{\partial }^{2}}{{w}_{2}}}{\partial {{t}^{2}}}+2\cos {{\theta }_{2}}\cos \varphi \dot{\varphi }\frac{\partial {{w}_{2}}}{\partial t}+\left(\right. 
2{{x}_{2}}\cos {{\theta }_{2}}\cos \varphi-2\cos \varphi\sin {{\theta }_{2}}{{w}_{2}} \left.\right)\dot{\varphi }{{{\dot{\theta }}}_{2}}-2\sin {{\theta }_{2}}\sin \varphi {{{\dot{\theta }}}_{2}} \\ &\times\frac{\partial {{w}_{2}}}{\partial t}+\left( -a\cos \varphi -{{w}_{2}}\cos {{\theta }_{2}}\sin \varphi -{{x}_{2}}\sin {{\theta }_{2}}\sin \varphi \right){{{\dot{\varphi }}}^{2}}+\left(\right. -{{x}_{2}}\sin {{\theta }_{2}}\sin \varphi-\cos {{\theta }_{2}}\times \\ &\sin \varphi {{w}_{2}} \left.\right){{{\dot{\theta }}}_{2}}^{2} \left.\right]d{{x}_{2}}=0 \end{split}\\ \label{eq26} \begin{split} & 2a\left( \frac{{{\tau }_{1}}-{{\tau }_{2}}}{{{r}_{w}}} \right)-(\frac{8{{a}^{2}}{{I}_{{{y}_{wheel}}}}}{{{r}_{w}}^{2}}+8{{a}^{2}}{{m}_{w}}+2{{a}^{2}}{{m}_{base}}+2{{I}_{{{z}_{wheel}}}}+2{{I}_{{{z}_{base}}}})\ddot{\varphi } +\frac{{{I}_{{{y}_{wheel}}}}}{{{r}_{w}}^{2}}\sin 2\varphi\\ &\times{{{\dot{Y}}}^{2}}-\frac{{{I}_{{{y}_{wheel}}}}}{{{r}_{w}}^{2}}\sin 2\varphi {{{\dot{X}}}^{2}} +2\frac{{{I}_{{{y}_{wheel}}}}}{{{r}_{w}}^{2}}\dot{Y}\dot{X}\cos 2\varphi-\int_{0}^{L}{\rho A}[\left(\right. a\cos \varphi -\sin \varphi \cos {{\theta }_{1}}{{w}_{1}}-{{x}_{1}}\\ &\sin \varphi \sin {{\theta }_{1}})\ddot{X} +a\cos {{\theta }_{1}}\frac{{{\partial }^{2}}{{w}_{1}}}{\partial {{t}^{2}}} +\left(\right. a\sin \varphi +{{x}_{1}}\cos \varphi \sin {{\theta }_{1}} +{{w}_{1}}\cos \varphi\cos {{\theta }_{1}} \left.\right)\ddot{Y} +\left(\right. {{a}^{2}}+\\ &{{x}_{1}}^{2}+{{\cos }^{2}}{{\theta }_{1}}{{w}_{1}}^{2}-{{x}_{1}}^{2}{{\cos }^{2}}{{\theta }_{1}}+{{x}_{1}}\sin 2{{\theta }_{1}}{{w}_{1}})\ddot{\varphi }+(a{{x}_{1}}\cos {{\theta }_{1}}-a\sin {{\theta }_{1}}{{w}_{1}}){{{\ddot{\theta }}}_{1}} +\left(\right. 
2{{\cos }^{2}}{{\theta }_{1}}\\ & \times{{w}_{1}} +{{x}_{1}}\sin 2{{\theta }_{1}} )\dot{\varphi }\frac{\partial {{w}_{1}}}{\partial t}-2a\sin {{\theta }_{1}}{{{\dot{\theta }}}_{1}}\frac{\partial {{w}_{1}}}{\partial t}+(-a\cos {{\theta }_{1}}{{w}_{1}}-a{{x}_{1}}\sin {{\theta }_{1}} ){{{\dot{\theta }}}_{1}}^{2} +({{x}_{1}}^{2}\sin 2{{\theta }_{1}}\\ &-\sin 2{{\theta }_{1}}{{w}_{1}}^{2}+2{{x}_{1}}\cos 2{{\theta }_{1}}{{w}_{1}} \left.\right)\dot{\varphi }{{{\dot{\theta }}}_{1}}\left.\right]d{{x}_{1}}-\int_{0}^{L}{\rho A}[-( a\cos \varphi +\sin \varphi \cos {{\theta }_{2}}{{w}_{2}}+{{x}_{2}}\sin \varphi\\ &\times\sin {{\theta }_{2}})\ddot{X} +( {{a}^{2}}+{{x}_{2}}^{2}+{{\cos }^{2}}{{\theta }_{2}}{{w}_{2}}^{2}-{{x}_{2}}^{2}{{\cos }^{2}}{{\theta }_{2}}+{{x}_{2}}\sin 2{{\theta }_{2}}{{w}_{2}})\ddot{\varphi }-(a\sin \varphi -{{x}_{2}}\cos \varphi\\ &\times\sin {{\theta }_{2}}-\cos \varphi \cos {{\theta }_{2}}{{w}_{2}})\ddot{Y}+\left(\right. -a{{x}_{2}}\cos {{\theta }_{2}} +a\sin {{\theta }_{2}}{{w}_{2}}){{{\ddot{\theta }}}_{2}}-a\cos {{\theta }_{2}}\frac{{{\partial }^{2}}{{w}_{2}}}{\partial {{t}^{2}}}+( a\cos {{\theta }_{2}}\\ &\times{{w}_{2}}+a{{x}_{2}}\sin {{\theta }_{2}}){{{\dot{\theta }}}_{2}}^{2}+\left( 2{{\cos }^{2}}{{\theta }_{2}}{{w}_{2}}+{{x}_{2}}\sin 2{{\theta }_{2}} \right)\dot{\varphi }\frac{\partial {{w}_{2}}}{\partial t} +2a\sin {{\theta }_{2}}{{{\dot{\theta }}}_{2}}\frac{\partial {{w}_{2}}}{\partial t} +({{x}_{2}}^{2}\sin 2{{\theta }_{2}}\\ &-\sin 2{{\theta }_{2}}{{w}_{2}}^{2}+2{{x}_{2}}{{w}_{2}}\cos 2{{\theta }_{2}} \left.\right)\dot{\varphi }{{{\dot{\theta }}}_{2}}\left.\right]d{{x}_{2}}=0 \end{split} \\ \label{eq27} \begin{split} &-{{I}_{{{y}_{base}}}}{{{\ddot{\theta }}}_{1}}-\int_{0}^{L}{\rho A}\left[ \left( {{w}_{1}}^{2}+{{x}_{1}}^{2} \right){{{\ddot{\theta }}}_{1}}+\left( {{x}_{1}}\cos {{\theta }_{1}}\cos \varphi -\cos \varphi \sin {{\theta }_{1}}{{w}_{1}} \right)\ddot{X} \right.+{{x}_{1}}\frac{{{\partial }^{2}}{{w}_{1}}}{\partial {{t}^{2}}}+\\ 
&(a{{x}_{1}}\cos {{\theta }_{1}}-a\sin {{\theta }_{1}}{{w}_{1}})\ddot{\varphi } +({{x}_{1}}\cos {{\theta }_{1}}\sin \varphi -\sin {{\theta }_{1}}\sin \varphi {{w}_{1}})\ddot{Y}+2{{w}_{1}}{{{\dot{\theta }}}_{1}}\frac{\partial {{w}_{1}}}{\partial t}+\left(\right.\sin {{\theta }_{1}}{{w}_{1}}^{2}\\ &\times\cos {{\theta }_{1}}-{{x}_{1}}^{2}\sin {{\theta }_{1}}\cos {{\theta }_{1}}-{{x}_{1}}\cos 2{{\theta }_{1}}{{w}_{1}} ){{{\dot{\varphi }}}^{2}}-g{{w}_{1}}\cos {{\theta }_{1}}-g{{x}_{1}}\sin {{\theta }_{1}} ]d{{x}_{1}}=0 \end{split} \\ \label{eq28} \begin{split} &-{{I}_{{{y}_{base}}}}{{{\ddot{\theta }}}_{2}}-\int_{0}^{L}{\rho A}[ \left( {{w}_{2}}^{2}+{{x}_{2}}^{2} \right){{{\ddot{\theta }}}_{2}}+\left( {{x}_{2}}\cos {{\theta }_{2}}\cos \varphi -\cos \varphi \sin {{\theta }_{2}}{{w}_{2}} \right)\ddot{X} +{{x}_{2}}\frac{{{\partial }^{2}}{{w}_{2}}}{\partial {{t}^{2}}} -\\ &\left( a{{x}_{2}}\cos {{\theta }_{2}}-a\sin {{\theta }_{2}}{{w}_{2}} \right)\ddot{\varphi }+({{x}_{2}} \cos {{\theta }_{2}}\sin \varphi -\sin {{\theta }_{2}}\sin \varphi {{w}_{2}})\ddot{Y}+2{{w}_{2}}{{{\dot{\theta }}}_{2}}\frac{\partial {{w}_{2}}}{\partial t} +(\sin {{\theta }_{2}}\\ &\times\cos {{\theta }_{2}}{{w}_{2}}^{2}-{{x}_{2}}^{2}\sin {{\theta }_{2}}\cos {{\theta }_{2}}-{{x}_{2}}\cos 2{{\theta }_{2}}{{w}_{2}}){{{\dot{\varphi }}}^{2}}-g{{w}_{2}}\cos {{\theta }_{2}}-g{{x}_{2}}\sin {{\theta }_{2}}]d{{x}_{2}}=0 \end{split} \\ \label{eq29} \begin{split} &\rho A \left(\right.
\frac{{{\partial }^{2}}{{w}_{1}}}{\partial {{t}^{2}}}+{{x}_{1}}{{{\ddot{\theta }}}_{1}}+a\cos {{\theta }_{1}}\ddot{\varphi }+\cos {{\theta }_{1}}\cos \varphi \ddot{X}+\cos {{\theta }_{1}}\sin \varphi \ddot{Y} -{{w}_{1}}{{{\dot{\theta }}}_{1}}^{2}+( -{{\cos }^{2}}{{\theta }_{1}}{{w}_{1}}-\\ &{{x}_{1}}\sin {{\theta }_{1}}\cos {{\theta }_{1}} ){{{\dot{\varphi }}}^{2}})+{{C}_{2}}\frac{\partial {{w}_{1}}}{\partial t}+EI\frac{{{\partial }^{4}}{{w}_{1}}}{\partial {{x}_{1}}^{4}}+\rho gA\sin {{\theta }_{1}}+\frac{{{\partial }^{2}}}{\partial {{x}_{1}}^{2}}\left( \frac{EIS(x)z{{d}_{31}}}{{{t}_{p}}}{{V}_{1}}(t) \right)=0 \end{split} \end{align} \begin{align} \label{eq30} \begin{split} &\rho A(\frac{{{\partial }^{2}}{{w}_{2}}}{\partial {{t}^{2}}}+{{x}_{2}}{{{\ddot{\theta }}}_{2}}-a\cos {{\theta }_{2}}\ddot{\varphi }+\cos {{\theta }_{2}}\cos \varphi \ddot{X}+\cos {{\theta }_{2}}\sin \varphi \ddot{Y} -{{w}_{2}}{{{\dot{\theta }}}_{2}}^{2}+\left(\right. -{{\cos }^{2}}{{\theta }_{2}}{{w}_{2}}-\\ &{{x}_{2}}\sin {{\theta }_{2}}\cos {{\theta }_{2}} ){{{\dot{\varphi }}}^{2}}) +{{C}_{1}}\frac{\partial {{w}_{2}}}{\partial t}+EI\frac{{{\partial }^{4}}{{w}_{2}}}{\partial {{x}_{2}}^{4}}+\rho gA\sin {{\theta }_{2}}+\frac{{{\partial }^{2}}}{\partial {{x}_{2}}^{2}}\left( \frac{EIS(x)z{{d}_{31}}}{{{t}_{p}}}{{V}_{2}}(t) \right)=0 \end{split} \end{align} \end{strip} where \begin{equation}\label{eq31} S(x)=H(x)-H(x-{{L}_{p}}) \end{equation} and $H(x)$ is the Heaviside function. 
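Since the piezoelectric patch enters Eqs. \eqref{eq29} and \eqref{eq30} only through the second spatial derivative of $S(x)$, the following interpretive remark may be helpful. Treating the coefficient multiplying $S(x)$ as constant over the patch (an idealization stated here for illustration, not a step of the derivation above), and using $H''(x)=\delta'(x)$, one obtains \begin{equation*} \frac{\partial^2}{\partial x^2}\left( \frac{EIS(x)z{{d}_{31}}}{{{t}_{p}}}V(t) \right)=\frac{EIz{{d}_{31}}}{{{t}_{p}}}V(t)\left( \delta'(x)-\delta'(x-{{L}_{p}}) \right), \end{equation*} i.e., the surface-bonded patch acts on the beam as a pair of equal and opposite concentrated bending moments at its ends $x=0$ and $x={{L}_{p}}$, which is the standard idealization of piezoelectric patch actuation.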
The boundary conditions of the above equations are represented as: \begin{equation} \label{eq32} \left( \frac{{{\partial }^{2}}{{w}_{1}}}{\partial {{x}_{1}}^{2}}\delta \frac{\partial {{w}_{1}}}{\partial {{x}_{1}}}-\frac{{{\partial }^{3}}{{w}_{1}}}{\partial {{x}_{1}}^{3}}\delta {{w}_{1}} \right)_{0}^{L}=0 \end{equation} \begin{equation} \label{eq33} \left( \frac{{{\partial }^{2}}{{w}_{2}}}{\partial {{x}_{2}}^{2}}\delta \frac{\partial {{w}_{2}}}{\partial {{x}_{2}}}-\frac{{{\partial }^{3}}{{w}_{2}}}{\partial {{x}_{2}}^{3}}\delta {{w}_{2}} \right)_{0}^{L}=0 \end{equation} To verify the accuracy of Eqs. \eqref{eq24}-\eqref{eq33}, it is sufficient to omit $\varphi$ and one of the beams; the resulting equations then reduce to those of \textcolor{blue}{\cite{mehrvarz2018modeling}}. \section{Numerical simulation}\label{numericalsimulation} As described in the previous section, the derived equations of motion are complex and contain many nonlinear and coupled terms. In this section, a solution technique for these nonlinear equations is briefly presented. To solve the equations of motion and extract the natural frequencies of the whole system, the natural frequencies of the beams must be obtained first. For this purpose, the following undamped, unforced equation of motion and boundary conditions of the beams are used: \begin{equation} \label{eq34} \rho A\frac{{{\partial }^{2}}w}{\partial {{t}^{2}}}+EI\frac{{{\partial }^{4}}w}{\partial {{x}^{4}}}=0 \end{equation} \begin{equation} \label{eq35} \left\{ \begin{matrix} w\left( 0,t \right)=0 \\ \frac{\partial w}{\partial x}\left( 0,t \right)=0 \\ \frac{{{\partial }^{2}}w}{\partial {{x}^{2}}}\left( L,t \right)=0 \\ \frac{{{\partial }^{3}}w}{\partial {{x}^{3}}}\left( L,t \right)=0 \\ \end{matrix} \right.
\end{equation} Here, the beams are assumed to undergo harmonic motion with frequency $\omega$\textcolor{blue}{\cite{jalili2009piezoelectric}}: \begin{equation}\label{eq36} w=W(x){{e}^{i\omega t}} \end{equation} Substituting Eq. \eqref{eq36} into Eq. \eqref{eq34} results in: \begin{equation}\label{eq37} -{{\rho }_{b}}{{A}_{b}}{{\omega }^{2}}W+E{{I}_{b}}{{W}''}''=0 \end{equation} The general solution of \eqref{eq37} is given as \begin{equation}\label{eq38} W(x)={{a}_{1}}\cos \beta x+{{a}_{2}}\sin \beta x+{{a}_{3}}\cosh \beta x+{{a}_{4}}\sinh \beta x \end{equation} where \begin{equation}\label{eq39} {{\beta }^{2}}=\sqrt{\frac{\rho A}{EI}}\omega \end{equation} Substituting the boundary conditions \eqref{eq35} into the eigenfunction \eqref{eq38} results in the following set of equations: \begin{equation}\label{eq40} {{A}_{2\times 2}}{{X}_{2\times 1}}=0 \end{equation} where \begin{equation}\label{eq41} {{X}_{2\times 1}}=\left[ \begin{matrix} {{a}_{1}} \\ {{a}_{2}} \\ \end{matrix} \right] \end{equation} To obtain a nontrivial solution, the determinant of matrix $A$ in Eq. \eqref{eq40} must equal zero. This equation gives the natural frequencies of the beams. The next step is to solve the equations of motion of the system using the assumed-modes expansion technique and the obtained natural frequencies. In this technique, the lateral displacements $w_1$ and $w_2$ are expanded as follows\textcolor{blue}{\cite{rao2007vibration}}: \begin{equation}\label{eq42} \begin{matrix} & {{w}_{1}}=\sum\limits_{i=1}^{\infty }{{{W}_{i}}({{x}_{1}}){{q}_{1i}}(t)} \\ & {{w}_{2}}=\sum\limits_{i=1}^{\infty }{{{W}_{i}}({{x}_{2}}){{q}_{2i}}(t)} \\ \end{matrix} \end{equation} where $q_{1i} (t)$ and $q_{2i} (t)$ are the generalized coordinates for the bending of the beams and $W_i (x_1)$ and $W_i (x_2)$ are the mode shapes of a fixed-free beam.
These functions are defined as: \begin{equation}\label{eq43} \begin{split} &{{W}_{i}}({{x}_{i}})={{{A}'}_{1}}(\sin ({{\beta }_{n}}{{x}_{i}})-\sinh ({{\beta }_{n}}{{x}_{i}}))\\ &+{{{A}'}_{2}}(\cos ({{\beta }_{n}}{{x}_{i}})-\cosh ({{\beta }_{n}}{{x}_{i}})) \end{split} \end{equation} where $A'_1$ and $A'_2$ are mutually dependent coefficients related by \begin{equation}\label{eq44} {{{A}'}_{2}}=-\frac{(\sin ({{\beta }_{n}}L)-\sinh ({{\beta }_{n}}L))}{(\cos ({{\beta }_{n}}L)+\cosh ({{\beta }_{n}}L))}{{{A}'}_{1}} \end{equation} where $\beta_n$ is defined for each mode as: \begin{equation}\label{eq45} {{\beta }_{n}}^{4}=\frac{{{\rho }_{b}}{{A}_{b}}{{\omega }_{1n}}^{2}}{E{{I}_{b}}} \end{equation} The equations of motion can be obtained by substituting Eq. \eqref{eq42} into Eqs. \eqref{eq24}-\eqref{eq30}, multiplying Eqs. \eqref{eq29} and \eqref{eq30} by the mode shapes in Eq. \eqref{eq43}, and then integrating the resulting equations from $0$ to $L$. The final equations represent the $2n+5$ DOF of the system. \section{Simulation Results} \label{simulationresults} To investigate the dynamic behavior of the system, the equations of motion are solved numerically in Matlab and the results are presented in different scenarios. The numerical values of the physical parameters are presented in \textcolor{blue}{{Table }\ref{table1}} and only two modes are considered for both beams. First, a sweep frequency input, shown in \textcolor{blue}{{Fig.}\ref{fig4}}, is applied to the system to extract the natural frequencies of the system. This standard input provides a fairly uniform spectral excitation and excites the modes of the system\textcolor{blue}{\cite{banazadeh2017identification}}. \textcolor{blue}{{Fig.}\ref{fig5}} shows the spectral analysis of the time history of the system and its natural frequencies.
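The frequency-extraction step of Section \ref{numericalsimulation} can be illustrated with a short numerical sketch. For a fixed-free beam, imposing the clamped conditions at $x=0$ in Eq. \eqref{eq38} leaves two free constants, and the determinant condition on the $2\times 2$ matrix $A$ of Eq. \eqref{eq40} reduces to the well-known cantilever characteristic equation $1+\cos (\beta L)\cosh (\beta L)=0$. The Python sketch below solves this equation using the bare-beam properties of Table \ref{table1}; it neglects the piezoelectric layer and the coupling to the base, so the values are indicative of the isolated-beam frequencies only, not of the coupled-system peaks in Fig. \ref{fig5}:

```python
import math

# Characteristic equation of a fixed-free (cantilever) beam,
# obtained from det(A) = 0 in Eq. (40):
#   1 + cos(beta*L) * cosh(beta*L) = 0
def char_eq(bL):
    return 1.0 + math.cos(bL) * math.cosh(bL)

def bisect(f, lo, hi, tol=1e-12):
    """Plain bisection; assumes f changes sign on [lo, hi]."""
    flo = f(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
            flo = f(lo)
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# First three dimensionless roots beta_n * L (brackets chosen around
# the textbook values 1.875, 4.694, 7.855).
roots = [bisect(char_eq, 1.0, 2.5),
         bisect(char_eq, 4.0, 5.5),
         bisect(char_eq, 7.0, 8.5)]

# Bare-beam properties from Table 1 (piezo layer and base coupling
# neglected -- an approximation for illustration only).
L_b = 271.46e-3        # beam length [m]
b   = 25.65e-3         # beam width [m]
h   = 0.5e-3           # beam thickness [m]
E   = 70e9             # elastic modulus [Pa]
rho = 2700.0           # density [kg/m^3]
A   = b * h            # cross-sectional area [m^2]
I   = b * h**3 / 12.0  # second moment of area [m^4]

# Natural frequencies from Eq. (39): omega_n = (beta_n)^2 sqrt(EI / rho A).
freqs_hz = [(r / L_b)**2 * math.sqrt(E * I / (rho * A)) / (2.0 * math.pi)
            for r in roots]
```

For these parameters the first isolated-beam frequency comes out around $5.6$ Hz and the second near $35$ Hz; these are indicative values only, since the full $2n+5$-DOF model couples the beams to the moving base.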
\begin{figure} \centering \includegraphics[width=80mm]{Fig4.pdf} \caption{The sweep frequency input voltage.} \label{fig4} \end{figure} \begin{figure} \centering \includegraphics[width=80mm]{Fig5.pdf} \caption{The FFT analysis of the system.} \label{fig5} \end{figure} In the next scenario, the lateral movement of the system in the $X$-direction is examined by applying two identical unit step input voltages to the piezoelectric actuators. It is assumed that the robot only moves forward. Hence, the following relation is set between the beams' rotation angles and the DC motors' voltages. \begin{equation}\label{eq46} \begin{matrix} \left\{ \begin{matrix} {{V}_{ai}}={{10}^{-2}}{{\theta }_{i}} & {{\theta }_{i}}>0 \\ {{V}_{ai}}=0 & {{\theta }_{i}}<0 \\ \end{matrix} \right. & ,i=1,2 \\ \end{matrix} \end{equation} \textcolor{blue}{{Fig.}\ref{fig6}} shows the lateral vibrations of the beams. These lateral vibrations, as shown in \textcolor{blue}{{Fig.}\ref{fig7}}, cause variations in the beam rotation angles $\theta_i$ and consequently produce the DC motor voltages. As considered in the previous section, these voltages cause the external torques and the lateral movement in the $X$-direction. The displacement of the robot is shown in \textcolor{blue}{{Fig.}\ref{fig8}}. Because of the nonlinearity of the system, $\varphi$ and $Y$ are not exactly zero, but their magnitudes are small compared with $X$ and $\theta_i$ and are considered negligible.
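As a side illustration (not part of the simulation code), the switching law of Eq. \eqref{eq46} can be written compactly as follows; the treatment of the boundary case $\theta_i=0$, on which Eq. \eqref{eq46} is silent, is an assumption made here:

```python
def motor_voltage(theta_i):
    """DC motor voltage from the beam rotation angle, Eq. (46):
    V_ai = 1e-2 * theta_i when theta_i > 0, and 0 otherwise,
    so the wheels are only driven while the robot moves forward.
    Taking V = 0 at theta_i = 0 is an assumption; Eq. (46)
    leaves this boundary case unspecified."""
    return 1e-2 * theta_i if theta_i > 0 else 0.0
```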
\begin{figure} \centering \includegraphics[width=80mm]{Fig6.pdf} \caption{Tip deflection of the beams $w_1 (L,t)$ and $w_2 (L,t)$ with unit step input.} \label{fig6} \end{figure} \begin{figure} \centering \includegraphics[width=80mm,scale=0.5]{Fig7.pdf} \caption{Angular rotations of the beams $\theta_1$ and $\theta_2$ around the robot's base with unit step input.} \label{fig7} \end{figure} \begin{figure} \centering \includegraphics[width=80mm]{Fig8.pdf} \caption{Displacement of the robot in $X$-direction with step input.} \label{fig8} \end{figure} The final scenario is a rotation test in the $X$-$Y$ plane. In this test, two step input voltages with different amplitudes are applied to the piezoelectric actuators. This produces different lateral vibrations in the beams and different base rotation angles, and consequently different torques are applied to the wheels. This results in different angular velocities of the wheels and produces the rotation angle $\varphi$ of the robot. \textcolor{blue}{{Fig.}\ref{fig9}} shows the rotation angles $\theta_1$ and $\theta_2$. The rotation and the path of the robot are also shown in \textcolor{blue}{{Fig.}\ref{fig10}} and \textcolor{blue}{{Fig.}\ref{fig11}}.
\begin{figure} \centering \includegraphics[width=80mm]{Fig9.pdf} \caption{Angular rotations of the beams $\theta_1$ and $\theta_2$ around the robot's base with $v_1=H(t)$ and $v_2=\frac{9}{10} H(t)$.} \label{fig9} \end{figure} \begin{figure} \centering \includegraphics[width=80mm]{Fig10.pdf} \caption{Angular rotation $\varphi$ of the robot around the $z$-axis.} \label{fig10} \end{figure} \begin{figure} \centering \includegraphics[width=80mm]{Fig11.pdf} \caption{Displacement $X$ and $Y$ of the robot in $XY$-plane.} \label{fig11} \end{figure} \begin{table*} \centering \caption{The system parameters.} {\footnotesize \begin{tabular} {llll} \hline \hline Parameter & Value & Parameter & Value\\ \hline \hline Beam length (mm) & 271.46 & Piezo layer shear modulus (GPa) & 5.515 \\ Beam thickness (mm) & 0.5 & Piezo layer elastic modulus (GPa) & 30.33 \\ Beam width (mm) & 25.65 & Piezo layer density (kg/m$^3$) & 5440 \\ Beam elastic modulus (GPa) & 70 & First flexural damping ratio (\%) & 0.0058 \\ Beam shear modulus (GPa) & 30 & Second flexural damping ratio (\%) & 0.015 \\ Piezo layer length (mm) & 38 & Piezo layer width (mm) & 23 \\ Piezo layer thickness (mm) & 0.3 & Beam density (kg/m$^3$) & 2700 \\ \hline \end{tabular} } \label{table1} \end{table*} \section{Conclusions}\label{conclusion} A new configuration of the conventional two-wheeled inverted pendulum system was presented in this paper. The developed model has $2n+5$ DOF, and its main purpose is to simulate two cantilever beams with piezoelectric actuators on a moving base as a new robot capable of in-plane motion. The governing equations of motion were obtained by employing the extended Hamilton's principle, and their derivation steps were presented in detail. The obtained model indicates that these systems have several coupled and nonlinear terms in their dynamics.
To investigate the dynamic behavior of the system, these complex equations were solved numerically and the natural frequencies of the system were extracted. Finally, the results of two different tests, covering the lateral and circular movements, were presented. \bibliographystyle{IEEEtran}
\subsection*{1. Introduction}\setcounter{equation}{0} \setcounter{section}{1} String theory is no longer a theory of only strings (one-branes), but also contains various other dynamical $p$-branes. In particular, the type IIA theory in ten dimensions contains a two-brane. In the original formulation of string theory, where the one-branes are regarded as fundamental objects, the two-brane appears as a Dirichlet brane \cite{polchinski} - the stringy version of a soliton. In a dual formulation (Matrix theory \cite{susskind}), that regards the zero-branes of type IIA string theory as fundamental objects, the two-brane appears as a bound state. No matter how the theory is formulated, one would like to better understand the dynamics of the two-brane. Here we would like to present some observations which seem to suggest that the standard theory of dynamical one-branes can be embedded in a theory of dynamical two-branes. This involves applying an old concept of field theory - confinement - to gravity, and it might explain why branes wrap in M-theory. Dynamical one-branes of given topology are very successfully described by renormalizable theories of two-dimensional quantum gravity coupled to matter \cite{polyakov2d}. Moreover, orientable one-brane-topologies are easily classified by their genus, and there is a topological expansion parameter - the string coupling constant - that can be used to pick out the simplest topology - the two-sphere - to start with. Then the other topologies can be added as perturbations. But there are at least two outstanding problems in extending this approach to the three-dimensional two-branes: \begin{enumerate} \item The three-dimensional Hilbert-Einstein action should appear as a counterterm in the world-brane action. But $3d$ gravity coupled to matter is not a renormalizable theory. 
\item There is no topological expansion parameter that can be used to restrict attention to a single simple world-brane topology, such as $S^3$ or $S^2\times S^1$. \end{enumerate} One might try to avoid the first problem by fine-tuning the Hilbert-Einstein action away, or by hoping that it is excluded by an unknown nonrenormalization theorem. But even if this does solve the first problem, it does not solve the second one. What we would like to suggest instead is that summing over three-dimensional world-brane topologies has the potential of reducing the world-brane theory to a renormalizable two-dimensional one. In the absence of a classification of general three-dimensional topologies we will restrict attention to a certain - still very rich - subset of topologies that includes both $S^3$ and $S^2\times S^1$: Seifert manifolds \cite{seifert}. Those are the manifolds that admit a foliation by circles, and they can be completely classified. We will argue that, within this subset, summing over topologies has a dramatic effect: it leads to linear confinement of Kaluza-Klein momentum in the circular direction. This is a lower-dimensional version of previous suggestions by Gross \cite{gross} and Witten \cite{wittencom}. We will then offer a speculative interpretation of confinement of Kaluza-Klein momentum in terms of ``wrapping'' of two-branes. In section 2, properties of the two-brane of type IIA string theory in 10 dimensions are recalled. The auxiliary world-brane metric is introduced and an induced three-dimensional Hilbert-Einstein term is included in the action. In sections 3 and 4, this theory is considered on a two-brane of topology $S^2\times S^1$. It is shown that the classical equations of motion have finite-action solutions that describe string solitons which are wrapped around the $S^1$. 
We call them Kaluza-Klein flux tubes, because they carry fractional magnetic flux with respect to the Kaluza-Klein gauge field that arises from the world-brane metric upon compactification along the $S^1$. These flux tubes involve vortex lines of the eleventh dimension of M-theory; we argue that such vortex lines are allowed on the membrane, even though vortices are forbidden on a string world-sheet. We then discuss two effects of these flux tubes: \begin{itemize}\item In section 5, it is shown that the Kaluza-Klein flux tubes change the topology of the two-brane by performing what is called ``Dehn surgery'' on it. A single flux tube changes the topology from $S^2\times S^1$ to a lens space, and by summing over an arbitrary number of flux tubes one sums over all Seifert manifolds. \item In section 6, the Wilson loop for the Kaluza-Klein gauge field is computed. It is seen to obey an area law. This implies linear confinement of the associated electric charge, which is nothing but Kaluza-Klein momentum. \end{itemize} In the latter computation we take the semiclassical limit, where the three-dimensional Newton constant goes to zero. In this limit we define the path integral simply as a sum over soliton configurations. Because of unboundedness problems one should really study the supersymmetric version of the theory where the Kaluza-Klein flux tubes are BPS-saturated; this is under investigation. Section 7 offers a speculative interpretation of confinement of Kaluza-Klein momentum: it is argued to imply that two-branes are ``permanently compactified'' (or ``wrapped'') to one-branes, much as quarks are permanently confined in QCD. Moreover, it is argued that neutral bound states of Kaluza-Klein modes (``baryons'') cannot exist and that, as a consequence, these one-branes are standard strings. Comments on the nonabelian generalization of these proposals conclude the paper.
Some of these ideas have appeared in a previous publication by the author, using the example of a simplified toy model \cite{ichtop}. {}\subsection*{2. Dynamical membranes}\setcounter{equation}{0} \setcounter{section}{2} Let us begin by recalling some of the properties of the solitonic two-brane of type IIA string theory in ten dimensions. These properties can be computed directly using techniques of open string theory \cite{polchinski,leigh,clny,ichD}. Among the results that one finds are the following: The world-brane fields that live on the two-brane are the space-time coordinates $x^\mu$ with $\mu\in\{1,...,10\}$, as well as an eleventh embedding dimension $x^{11}$, which is the dual of the world-brane U(1) gauge field \cite{duff, townsend, ichD}. An important property of $x_{11}$ is that it is compact. The compactification radius $R$ is related to the string coupling constant $\kappa$ \cite{witten11}: $$ x_{11}\ \equiv\ x_{11}+2\pi R\ ,\ \ \ R\sim\kappa^{2\over3}\ .$$ The world-brane action of the Dirichlet two-brane comes out to be the eleven-dimensional supermembrane action \cite{bst} whose bosonic part is \ba S\ =\ T \int d^3\sigma\ {\sqrt{\det\ G_{ij}}}\ +\ \hbox{Wess-Zumino term}\ . \la{bettina}\ea Here $i,j\in \{1,2,3\}$ and the world-brane and space-time signatures are taken to be Euclidean. $G_{ij}$ is the induced world-brane metric $\partial_i\vec x\partial_j\vec x$ with $\vec x\equiv(x^\mu,x^{11})$. An interesting aspect of the two-brane action is that it has eleven-dimensional general covariance. So although type IIA string theory was defined purely in ten dimensions, its solitonic two-brane really thinks she lives in eleven dimensions and she is the supermembrane. The world-brane action (\ref{bettina}) is in Nambu-Goto form. Because of their complicated forms, Nambu-Goto actions are not very useful when one tries to quantize branes.
So we rewrite (\ref{bettina}) with the help of a $3d$ metric $h_{ij}$ \cite{tucker} as \ba S\ =\ {T\over2} \int d^3\sigma\ {\sqrt{h}}\{h^{ij}\partial_i\vec x\partial_j\vec x \ -\ 1 \}\ . \la{diana}\ea Classically, the saddle point value of the integral over $h$ at $h_{ij} = \partial_i\vec x\partial_j\vec x$ reproduces the Nambu-Goto action. Quantum mechanically, we must add to the Lagrangean counterterms that are induced in the process of renormalizing this theory of three-dimensional gravity coupled to matter fields $x^A$. Among them is - at least for the bosonic membrane - the Hilbert-Einstein term \ba -{1\over 2e^2}\int {\sqrt{h}}{\cal R}^{(3)}\ ,\la{beate}\ea where ${\cal R}^{(3)}$ is the three-curvature and $e$ is some coupling constant. But this leads to a nonrenormalizable theory of three-dimensional gravity coupled to matter that really does not seem to make any sense as a quantum theory. As mentioned in the introduction, it is not clear how to make sense of the theory even without the Hilbert-Einstein action, i.e. in the limit $e\rightarrow\infty$. So here we will consider the theory instead in the semiclassical limit $e\rightarrow0$, where we define the path integral as a sum over classical solutions of the Einstein equations. In section 4 we will find a set of nontrivial classical solutions. Specifically, we will consider membranes of topology $$R^2\ \times S^1\ .$$ Let us parametrize the $S^1$ by the coordinate $z\in[0,2\pi[$, and the $R^2$ by coordinates $\sigma_1,\sigma_2$. If the membrane wraps $n$ times around $x^{11}$, this means that $x^{11}=nRz$. In the solutions of section 4, all the other coordinates $x^\mu$ as well as the world-brane metric $h_{ij}$ depend only on $\sigma_1,\sigma_2$; $x^{11}$ may also have a piece that depends on $\sigma_1,\sigma_2$: \ba h_{ij}&=&h_{ij}(\sigma_1,\sigma_2)\la{miriam}\\ x^\mu&=& x^\mu(\sigma_1,\sigma_2)\\ x_{11}&=& nRz\ +\ \tilde x_{11}(\sigma_1,\sigma_2)\ .
\la{clara}\ea In this case it is useful to perform a standard Kaluza-Klein reduction of the three-dimensional metric to a two-dimensional metric $h_{\alpha\beta}$ on $R^2$, a two-dimensional Kaluza-Klein gauge field $A_\alpha$ and a scalar $L$, which measures the size of the circular world-brane direction. They are defined by the line element \ba (ds)^2\ =\ h^{(2)}_{\alpha\beta}\ d\sigma^\alpha d\sigma^\beta \ +\ L^2(dz+A_\alpha d\sigma^\alpha)^2\ . \la{elise}\ea The three-dimensional Ricci scalar becomes: $$ {\cal R}^{(3)} \ \rightarrow\ {\cal R}^{(2)} - {1\over2}L^2F_{\alpha\beta}^2\ , $$ where $F_{\alpha\beta}$ is the field strength of $A_\alpha$. Then the action (\ref{diana} plus \ref{beate}) becomes: \ba S\ \ \sim\ \ \int d^2\sigma\ L\sqrt{ h ^{(2)}}&\times& \{\ {T\over2}h^{\alpha\beta}\partial_\alpha x^\mu \partial_\beta x_\mu \ -\ {1\over 2e^2} {\cal R}^{(2)}\la{erika}\\ &&+ {T\over2}h^{\alpha\beta}(\partial_\alpha x_{11}-nRA_\alpha ) (\partial_\beta x_{11}-nRA_\beta) + {1\over4e^2}L^2F_{\alpha\beta}^2\\ &&+ {T\over2}({n^2R^2\over L^2}\ -\ 1)\ \}\la{flavia}\ . \la{gabi}\ea Up to the overall factor $L$, the first line resembles a standard Nambu-Goto string embedded in ten dimensions. M-theory adds to this string the gauge field and $x^{11}$ appearing in the second line (plus a potential term). What M-theory adds seems to be trivial at first sight - the gauge field seems to simply ``eat'' $x^{11}$. However, $x_{11}$ is compact and therefore there might be vortices in the system. {}\subsection*{3. Are vortex lines of $x^{11}$ allowed?}\setcounter{equation}{0} \setcounter{section}{3} Let us now parametrize the two-brane of topology $R^2\times S^1$ by the circular coordinate $z$ and by polar coordinates $(r,\phi)$ in $R^2$. 
By a Kaluza-Klein flux tube centered at $r=0$, we mean a vortex configuration that obeys in addition to (\ref{miriam}-\ref{clara}): \ba x^{11}&=&mR\phi\ +\ nRz \la{rebecca} \\ A_\phi &\rightarrow& {m\over nr}\ \ \ \hbox{for}\ \ \ r\rightarrow\infty\la{rosi}\\ A_\phi &\rightarrow& 0\ \ \ \ \ \hbox{for}\ \ \ r\rightarrow0\ ,\la{ruth} \ea where the Kaluza-Klein gauge field has been assumed to only have an angular component $A_\phi$. Such a vortex obviously encloses fractional magnetic flux $$\int F\ =\ 2\pi{m\over n}\ .$$ Before finding solutions of the equations of motion, let us ask whether we are really permitted to include such vortex configurations in the path integral. There are two reasons why one may have doubts. First one may ask, isn't there a Dirac quantization rule that forces the magnetic flux to be integer? We will return to this in section 5, where we will show that in the case at hand the Dirac condition indeed only requires the flux to be $2\pi$ times a {\it rational} number. Second, one may think that vortex lines of $x^{11}$ are forbidden on the membrane for the same reason that vortices of compact coordinates are forbidden on a string world-sheet. Suppose there is a compact coordinate $x^1$ with radius $R$ in string theory. Vortices $$x^1\ =\ mR\phi$$ on the string world-sheet are forbidden, because they create holes in the world-sheet: an infinitesimal circle $\zeta$ drawn on the world-sheet around the vortex center is mapped in target space onto a line of finite length $2\pi mR$, so the world-sheet acquires a boundary. 
One consequence of this is that gauge invariance of the Neveu-Schwarz two-form of string theory is lost \cite{wittenbound}: the vortex is a source of violation of the gauge invariance $$B_{1\mu}\ \rightarrow\ B_{1\mu}+\partial_\mu \Lambda\ ,$$ where $\Lambda$ is an arbitrary space-time scalar field, since $$\delta\int B\ \equiv\ \delta\int\epsilon^{\alpha\beta}\partial_\alpha x^\mu\partial_\beta x^\nu B_{\mu\nu} \ =\ \oint_\zeta\Lambda d x_1\ =\ 2\pi mR\ \Lambda(r=0)\ .$$ (By $dx_1$ we mean $\partial_\alpha x_1 d\sigma^\alpha$.) Another way of putting things is, vortices are vertex operators for string winding modes. But in string theory we only allow vertex operators and target space fields for momentum modes and not for winding modes, for the reason just mentioned.\footnote{I thank Joe Polchinski for making this point.} Similarly, one may wonder whether vortex lines on the membrane spoil gauge invariance of the three-form gauge field of M-theory, $$C_{11\mu\nu}\ \rightarrow\ C_{11\mu\nu}+\partial_{[\mu}\Lambda_{\nu]}\ .$$ One can easily check that this is not the case. If $\zeta\times S^1$ is a thin torus enclosing the vortex line, then $$\delta\int C\ =\ 2\pi nR\oint_{\zeta\times S^1} \Lambda_\nu dx^\nu\wedge dx^{11}\ \rightarrow\ 0\ $$ as $\zeta$ shrinks to a point. The intuitive reason is that vortex lines (\ref{rebecca}) do {\it not} create boundaries of the membrane. In fact, they can locally be absorbed in a large diffeomorphism such as $$z\ \rightarrow\ z+m\phi\ $$ for $n=1$ (the case $n\neq1$ will be discussed below). Since vortices can locally be absorbed in large diffeomorphisms, one may think that they should still be excluded, as they are ``pure gauge''. But the point is that they cannot be absorbed globally for a general world-brane topology, e.g. for a lens space. This will become clear below. So there does not seem to be an objection of principle against vortices of $x_{11}$.
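As a small consistency check (an elementary application of Stokes' theorem, added here for illustration), note that the fractional flux quoted above follows directly from the boundary conditions (\ref{rosi}) and (\ref{ruth}): $$\int F\ =\ \lim_{r\rightarrow\infty}\oint_{r={\rm const}} A_\phi\, r\, d\phi\ -\ \lim_{r\rightarrow0}\oint_{r={\rm const}} A_\phi\, r\, d\phi\ =\ 2\pi r\,{m\over nr}\ -\ 0\ =\ 2\pi\,{m\over n}\ ,$$ so the flux is fractional precisely because $x^{11}$ winds $m$ times in $\phi$ while the gauge field remains regular at the vortex center.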
We will therefore now discuss solutions of the equations of motion that involve them. {}\subsection*{4. Kaluza-Klein flux tubes}\setcounter{equation}{0} \setcounter{section}{4} To find solutions with boundary conditions (\ref{rebecca}-\ref{ruth}), we make the following ansatz for the $3d$ metric: $$ds^2\ =\ dr^2\ +\ \rho(r)^2d\phi^2\ +\ L(r)^2(dz+\rho(r) A(r)d\phi)^2\ .$$ In comparing with (\ref{elise}), note that $A$ is here defined to be accompanied by $\rho$. The field strength of the Kaluza-Klein gauge field $A$ is then $$ F(r)\ =\ {(\rho A)'\over \rho} .$$ We also assume that the membrane is ``stretched'', i.e. the $r-\phi-$plane is identified with the $x_1-x_2-$plane: \ba x_1&=& f(r)\cos\phi\\ x_2&=& f(r)\sin\phi\ .\ea Because of the presence of the vortex, the function $f(r)$ cannot be set equal to $r$ but must be determined from the equations of motion. It is straightforward to compute the spin connection, the curvature and from them the equations of motion of the action (\ref{diana} plus \ref{beate}). For the equations of motion we find: \ba f''\ +\ {(\rho L)'\over\rho L }f'\ -\ {1\over\rho^2}f&=&0 \la{zero}\\ \ -{1\over e^2T}\{{\rho''\over\rho}+{L''\over L}+{L^2F^2\over2}\}&=& (f')^2\ -\ 1\la{one}\\ \ -{1\over e^2T}\{{\rho''\over\rho}+{\rho'\over\rho}{L'\over L}+{L^2F^2\over2}\}&=&R^2\ ({m\over\rho}-nA)^2\ +\ {f^2\over\rho^2}\ -\ 1\la{two}\\ \ -{1\over e^2T}\{{L''\over L}+{\rho'\over\rho}{L'\over L}-{L^2F^2\over2}\}&=&R^2\ {n^2\over L^2}\ -\ 1 \la{three}\\ \ -{1\over e^2T}\{3L'F+LF'\}&=&R^2\ {2n\over L}\ ({m\over \rho}-nA) \ \la{four}. \ea Only four of these five equations are independent. In fact, a linear combination of them, (\ref{one}) minus (\ref{two}) minus (\ref{three}), is a constraint that is first-order in derivatives. Its derivative is implied by the other equations.
Up to this constraint, the equations of motion are those of action (\ref{erika}-\ref{flavia}), reduced to one dimension: \ba S&=&{4\pi^2\over e^2}\int dr\ \{{\rho L^3\over4}F^2\ -\rho'L'\}\ +\ {4\pi^2\over e^2}[L\rho']^\infty_0 \la{uli1}\\ &+& 2\pi^2 T\int dr\ \rho L\ \{\ {n^2\over L^2}R^2\ +\ ({m\over\rho}-nA)^2R^2\ +\ (f')^2\ +\ {f^2\over\rho^2}\ -\ 1\ \}\ .\la{uli2}\ea In the first line we have used the fact that ${\sqrt {g^{(3)}}}R^{(2)}=-2L\rho''$ and kept the boundary term. For $m=0$, the equations of motion have the trivial solution \ba f(r)\ =\ \rho(r)\ =\ r\ ,\ \ L\ =\ nR\ ,\ \ A\ =\ 0\ .\la{sandy}\ea For $m\neq0$ we have only been able to find analytic solutions near the origin and at infinity, but Mathematica has been able to find well-behaved vortex solutions everywhere; an example is shown in Fig.1. There, $(\rho A),L,\rho$ and $f$ are plotted as functions of $r$ for $e=R=1$ and $n=5,m=2$.\footnote{For anyone who would like to reproduce the figure: the ``initial conditions'' were roughly determined from the asymptotic solution near $r=0$ (given below) and then fine-tuned to satisfy the boundary conditions at infinity (given below). They were fine-tuned at $r=0.000123$: $\rho A=0, (\rho A)'=0.0002307{2\over 5z^2}, L=15z, L'=-6500 z, r=0.00074, f=0.00143 f_0, f'=3.87 f_0$ with $z=0.626055, f_0=72.996.$ By fine-tuning, one can simultaneously match the boundary conditions at zero and infinity with arbitrary precision.} \begin{picture}(500,300) \put(10,260){$\rho(r)A(r)$} \put(250,260){$\rho(r)$} \put(20,110){$L(r)$} \put(250,110){$f(r)$} \put(180,165){$r$} \put(180,40){$r$} \put(425,165){$r$} \put(425,20){$r$} \end{picture} {}\epsffile[-5 5 0 0]{vortex.ps} \begin{center} {\small Fig. 1: Example of a Kaluza-Klein flux tube with n=5, m=2. }\end{center} \vskip.5cm These solutions have the following asymptotic behavior. 
At $r\rightarrow\infty$: \ba L&=&nR\ +\ ...\\ \rho&=&r + r_0\ +\ ...\\ A&=&{m\over n(r+r_0)}\ +\ ...\\ f&=&r + r_0\ +\ ...,\ \ea where the dots represent terms of order $e^{-{\sqrt{2T}}er}$. Here $r_0$ is a constant that could be absorbed in a shift $r\rightarrow r-r_0$; then the vortex center would be at $r=r_0$. Thus the vortex is classically invisible far away from its core, in the sense that not only the magnetic field vanishes but also $L, f$ and $\rho$ assume the same values as the solution (\ref{sandy}) with $m=0$.\footnote{But there is a quantum mechanical Aharonov-Bohm effect; see section 5.} Note, in particular, that there is no deficit angle at infinity. To achieve this asymptotic behavior at infinity, it suffices to impose three boundary conditions: $$ \rho A\rightarrow{m\over n} \ \ ,\ \ \ \ \ L'\rightarrow0\ \ , \ \ \ \ \ f'\rightarrow1\ \ \ \ \ \ \ \ \ \hbox{at}\ r\rightarrow\infty\ .$$ The rest follows from the equations of motion. Near the origin, the solutions behave as follows: \ba L&=& nR{\sqrt T}e\lambda\ |\log er|^{1\over2}+...\la{maja1} \\ \rho&=& |m|R{\sqrt T}\ er|\log er|^{1\over2}+... \\ \rho A&=& {2\over3}{m\over n}{r^2\over\lambda^2} +...\\ f&=& f_0|\log er|^{-{1\over4}}\exp\{-{2\over |m|eR{\sqrt T}}|\log er|^{1\over2}\}+...\la{maja4} \ea with free parameters $\lambda$ and $f_0$.
This behavior follows from the equations of motion if three more boundary conditions are imposed: $$ \ A(r)\rightarrow0\ \ , \ \ \ \ \ \rho(r)\rightarrow0\ \ ,\ \ \ \ \ f(r)\rightarrow0\ \ \ \ \ \ \ \ \ \hbox{at}\ r\rightarrow0\ .$$ The second and third conditions ensure that the vortex does not create a hole in the membrane, neither in the internal nor in the embedding space.\footnote{To see that (\ref{maja1}-\ref{maja4}) solve the equations of motion near $r=0$, note that these expressions obey $L'=-{1\over2}n|m|\lambda e^2R^2T{1\over\rho}; \rho'=|{m\over n}|{1\over\lambda}L;f'={f\over\rho}; F=\pm{4\over3\lambda},$ neglecting terms that are suppressed by negative powers of $|\log er|$ and by exp\{$-{\sqrt{\log er}}$\}. Under the same approximations, these first order equations imply the equations of motion (\ref{zero}-\ref{four}).} For these solutions, the ``brane thickness'' $L$ diverges very slowly at the origin. There is also a curvature singularity at the origin, in the sense that the deficit angle $-2\pi\rho'|^{r=\infty}_{r=0}$ also diverges very slowly there: $$2\pi\ \rho'(r)|^{r=\infty}_{r=0}\ \rightarrow\ 2\pi |m|eR{\sqrt T} \ |\log er|^{1\over2}\ \ \ \ \ \ \hbox{as}\ \ r\rightarrow0\ .$$ This divergence causes no problems, since the flux tube action nevertheless turns out to be finite: it is easy to check that on solutions of the equations of motion the action (\ref{uli1}-\ref{uli2}) becomes a boundary term plus a volume term: \ba S\ =\ -{4\pi^2\over e^2} [\rho L']^{r=\infty}_{r=0} \ +\ 4\pi^2T\int dr\ \rho L\ .\la{marina}\ea The second term is, after shifting $r$, the action of the solution (\ref{sandy}) with $m=0$ plus something finite, whose precise value we will not need to know. The point is that the boundary term $[\rho L']^\infty_0$ is {\it not} the same as the deficit angle.
From the asymptotic solutions at the origin and infinity we obtain: $$-{4\pi^2\over e^2}[\rho L']^{r=\infty}_{r=0}\ =\ 0\ +\ 2\pi^2\vert mn\vert \lambda R^2T\ .$$ So the contribution of the vortex center to the action is indeed finite. This differs from the situation for the simplified model studied in \cite{ichtop}, where the action was logarithmically divergent at the origin. Note that the flux tube action is also independent of $e$, which means that we must still sum over Kaluza-Klein flux tubes in the semiclassical limit $e\rightarrow0$. However, the vortices disappear (the vortex action becomes infinite) in the decompactification limit $R\rightarrow\infty$, i.e. $\kappa\rightarrow\infty$ \cite{witten11}, of M-theory. In the above we have imposed 6 constraints, but there are 7 free parameters from four second-order equations with one first-order constraint. This leaves one modulus of the vortex solution. This modulus is, roughly, the size of the vortex. To see this, note that without the cosmological constant (i.e. without the ``1'' on the RHS of (\ref{one}-\ref{three})), the equations of motion have the ``scaling'' symmetry $$r\rightarrow pr\ \ ,\ \ \ \ L\rightarrow pL\ \ ,\ \ \ \ \rho\rightarrow p\rho\ \ , \ \ \ \ A\rightarrow{A\over p}\ \ ,\ \ \ \ f\rightarrow f\ \ $$ with an arbitrary real number $p$. Of course, this symmetry is broken by the cosmological constant. To summarize, the Kaluza-Klein flux tubes presented in this section for the $3d$ gravity theory on the membrane have the following features: they involve vortex lines of $x_{11}$, they carry fractional magnetic charge with respect to the Kaluza-Klein gauge field, they have finite action, and they are classically invisible outside their core. These vortices are somewhat analogous to the Nielsen-Olesen vortices of the abelian Higgs model \cite{nielsen} or to the Abrikosov flux tubes in a type II superconductor \cite{abrikosov}, with $x^{11}$ in the role of the Goldstone boson. 
As stressed in the introduction, we should really consider a supersymmetric version of the model presented here, which has BPS-saturated (and therefore stable) vortices. This will be discussed elsewhere. {}\subsection*{5. Sum over $3d$ topologies}\setcounter{equation}{0} \setcounter{section}{5} Let us now come to the geometric interpretation of fractional magnetic flux (see \cite{ichtop} for some more details). This flux has a geometric interpretation because the Maxwell field is not just any gauge field, but the Kaluza-Klein gauge field appearing in the line element $$ds^2=dr^2+\rho^2d\phi^2+L^2(dz+\rho A d\phi)^2\ ,$$ where $(r,\phi,z)$ are cylinder coordinates. Let us start with a two-brane of topology $S^2\times S^1$. Now assume that there is magnetic flux \ba\int_{S^2} F\ =\ 2\pi {m\over n}\ .\la{julchen}\ea It is not difficult to see that this flux changes the topology of the manifold from $S^2\times S^1$ to the lens space $L(m,n)$. As we know, we cannot define the gauge field such that it is everywhere nonsingular. If we set it to zero at the South pole of the two-sphere, then there is a Dirac string at the North pole, so the three-dimensional metric will be singular there: $$\rho A\ \rightarrow\ -{m\over n}\ \ \ \hbox{as}\ \ r\rightarrow 0\ .$$ This leads to some pathologies: e.g., the circle defined by $r=\epsilon, z=0$ retains finite size $2\pi{mL\over n}$ as $\epsilon\rightarrow 0$. On the other hand, the spiral line defined by $$r\ =\ \epsilon\ ,\ \ \ - m\phi+nz\ =\ 0$$ shrinks to zero size as $\epsilon\rightarrow0$. These pathologies can be removed by a large diffeomorphism. 
By this we mean an $SL(2,Z)$ transformation on the torus that is - for fixed $r$ - parametrized by the compact coordinates $z$ and $\phi$: \ba \phi&\rightarrow&\ \ \ a\phi+ bz\la{laura1}\\ z&\rightarrow&-m\phi+nz\la{john}\\ \tau &\rightarrow&\ {\ \ n\tau+m\over -b\tau+a}\ ,\la{laura3}\ea where $$\tau\ = \rho A\ +\ i{\rho\over L}$$ is the modular parameter of the torus and $a,b$ are integers such that $an+bm=1$. The latter condition ensures that the determinant of the map is one, while the condition $a,b,m,n\in Z$ ensures that closed lines are mapped onto closed lines. This $SL(2,Z)$ transformation leaves the line element invariant and removes the singularity of the Kaluza-Klein gauge field. But it changes the topology of the membrane. More precisely, a manifold on which a metric that corresponds to fractional magnetic flux (\ref{julchen}) can live is constructed as follows. One decomposes the two-sphere into two coordinate patches $D_1$ and $D_2$, where we can choose $D_1$ to be a little disc around the North pole and $D_2$ to be its complement in $S^2$. One cuts out the solid torus $D_1\times S^1$ from the three-manifold $S^2\times S^1$, twists it by the $SL(2,Z)$ transformation and glues it back in, such that its meridian $M'$ gets identified with a spiral line on the surface of the $D_2\times S^1$: $$M'\ =\ nM+mL\ .$$ Here, $M$ is the meridian and $L$ is the longitude of the torus that bounds $D_2\times S^1$. This operation of cutting a tube out of a three-manifold, twisting it as described and gluing it back in is called Dehn surgery. $m$ and $n$ are called Seifert invariants; they must be relatively prime. It can be shown that the values of $a,b$ do not influence the topology of the obtained manifold. The three-manifolds constructed in this way are by definition the lens spaces $L(m,n)$. They definitely differ from $S^2\times S^1$: e.g., their first homology group is finite, $Z_m$. As a first example, consider the case $n=1$. 
Then we can choose $a=1,b=0$, so the large diffeomorphism (\ref{laura1}-\ref{laura3}) is: $$\phi\rightarrow\phi\ ,\ \ \ \ z\rightarrow z-m\phi\ ,\ \ \ \ \rho A\rightarrow\rho A+m\ .$$ Those are the large gauge transformations of an ordinary $U(1)$ gauge theory, if the membrane is interpreted as the total space of the $U(1)$ bundle over $S^2$ that is represented by the compact coordinate $z$. $m$ is the obstruction to the existence of a section of the $U(1)$ bundle, and $M'$ is identified with $M+mL$. As an example of a large gauge transformation in Kaluza-Klein gauge theory that does not have an analog in ordinary gauge theory, consider the case $n=5,m=-2$. Then one can pick $a=1,b=2$. In the old coordinate system, $L=5$ and $\rho=r$. Then the modular parameter transforms as $$\tau\ =\ {2\over5}+{i\over5}r\ \rightarrow\ {-5ir}\ +\ O(r^2)\ =\ \tau'\ ,$$ which corresponds to $L=1, \rho=5r$ in the new coordinate system (flipping an orientation). The resulting manifold is the Lens space $L(5,2)$; the deficit (or rather surplus) angle $-8\pi$ does not change the topology of the manifold further. As in \cite{ichtop}, by performing Dehn surgery with arbitrary Seifert invariants along an arbitrary number of vortex lines running around the $S^1$, and by also replacing the $S^2$ by an orientable surface of arbitrary genus, or by an unorientable surface (in this case the $S^1$ must be replaced by a circle bundle that reverses orientation along closed lines on the surface along which the surface reverses orientation), one can construct all possible orientable three-dimensional topologies that admit a foliation by circles (Seifert manifolds \cite{seifert}). Those also include the three-sphere (which admits infinitely many ``Hopf fibrations''). So the sum over Kaluza-Klein flux tubes implies a sum over membrane topologies! 
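The $n=5$, $m=-2$ example above can be checked by direct arithmetic. The following sketch (not from the original text) verifies the unit-determinant condition $an+bm=1$ and the leading-order form of the transformed modular parameter, up to the orientation flip noted above:

```python
# Sanity check of the n=5, m=-2 example with a=1, b=2.
n, m, a, b = 5, -2, 1, 2
assert a * n + b * m == 1  # the SL(2,Z) unit-determinant condition

# Modular parameter tau = rho*A + i*rho/L with rho*A = 2/5, L = 5, rho = r.
r = 1e-4
tau = 2 / 5 + 1j * r / 5
tau_new = (n * tau + m) / (-b * tau + a)

# To leading order rho*A vanishes and |rho/L| = 5r in the new coordinates,
# i.e. L = 1, rho = 5r up to an orientation flip, with O(r^2) corrections.
assert abs(tau_new.real) < 10 * r**2
assert abs(abs(tau_new.imag) - 5 * r) < 10 * r**2
```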
For completeness, let us mention how one can construct {\it all} three-dimensional topologies: one starts with an $S^3$ and draws an arbitrary knot, or a set of linked knots on it. One performs Dehn surgery along the knot lines by cutting out tubes around them, then twists each tube with arbitrary Seifert invariants $m,n$ as described above and glues it back in. The statement is that if one draws all possible knots and links and performs all possible surgeries on them, then one obtains all possible topologies. This does not, of course, classify three-dimensional topologies, because, first, one still has to classify knots, which is an unsolved problem; and second, even if one starts with different knots and performs different surgeries, one might still end up with the same topology. But in one way or another, all possible topologies occur. {}\subsection*{6. Confined Kaluza-Klein momentum}\setcounter{equation}{0} \setcounter{section}{6} Let us now consider the Wilson loop for the Kaluza-Klein gauge field on a membrane of initial topology $R^2\times S^1$, keeping the two-dimensional metric on the $R^2$ fixed: $$ \exp\{W(q,C)\}\ =\ <\exp\{iq\oint_CA_\alpha d\sigma^\alpha\}>\ .$$ Here $C$ is a closed contour in $R^2$ and $q$ is a test charge. We assume that the semiclassical limit $e\rightarrow0$ has been taken; so what we mean by the brackets on the right-hand side is an average over classical solutions, including an arbitrary number of Kaluza-Klein flux tubes (or ``vortices'') running through the loop. In the absence of flux tubes, the gauge theory is Higgsed and the Wilson loop obeys a perimeter law. In order to argue that flux tubes turn this into an area law, we adapt a well-known argument from the two-dimensional Abelian Higgs model (see, e.g., \cite{callan}) to the case at hand. As seen in section 3, the flux tubes are classically invisible outside their core, and in particular there is no deficit angle. 
So let us assume that we can approximate the system of flux tubes by a dilute gas of noninteracting loops that are wrapped around the $S^1$ and carry magnetic flux $2\pi{m\over n}$. Flux tubes that run through the inside of the loop $C$ contribute an Aharonov-Bohm phase exp($\pm 2\pi iq{m\over n}$) to the Wilson loop.\footnote{We mean flux tubes that are inside the Wilson loop and are linked with it by running around the $S^1$. Flux tubes that are not linked with the loop do not contribute a phase.} In the path integral, each vortex also comes with a weight factor $e^{-S_{n,m}}$, where $S_{n,m}$ is the finite action of a single $(n,m)$ vortex. A vortex also comes with a factor $${\hbox{Area inside $C$}\over a^2} \ ,$$ which is the number of possible vortex locations inside the loop\footnote{We should also sum over the possible shapes of vortex lines. We neglect this; this is like describing the flux tubes in a $3d$ superconductor by the Nielsen-Olesen vortices in a $2d$ abelian Higgs model (which works).} ($a$ is a length scale such as a short-distance cutoff on $R^2$). Consider first the partition function $Z_{n,|m|}$ of the gas of $(n,\pm |m|)$ vortices inside $C$ in the presence of the Wilson loop, with $n,|m|$ held fixed: \ba Z_{n,|m|}(q,C)&\sim& \sum_{N_+,N_-=0}^\infty {1\over N_+!N_-!} ( e^{-S_{n,m}}\ {\hbox{Area}\over a^2})^{N_++N_-} \ e^{ 2\pi iq{m\over n}(N_+-N_-)}\\ &=& \sum_{N=0}^\infty{1\over N!} \{ e^{-S_{n,m}}\ {\hbox{Area}\over a^2}\ (e^{ 2\pi iq{m\over n}}+e^{-2\pi iq{m\over n}})\}^N\\ &=& \exp\{2\cos(2\pi{m\over n}q)\ {\hbox{Area}\over a^2}\ e^{-S_{n,m}}\}\ .\ea Here $N_+$ and $N_-$ are, respectively, the number of $(n,|m|)$ and $(n,-|m|)$ vortices inside the loop, and $N=N_++N_-$. 
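The resummation above is just a pair of exponential series. As a quick numerical sketch (illustrative values only, not from the paper), one can confirm that truncating the double sum over $N_+$ and $N_-$ reproduces the closed form $\exp\{2x\cos\theta\}$, with $x=e^{-S_{n,m}}\,\hbox{Area}/a^2$ and $\theta=2\pi q{m\over n}$:

```python
import cmath
import math

# Hypothetical values of the vortex weight x and Aharonov-Bohm phase theta.
x, theta = 0.7, 1.3

# Truncated double sum over the numbers N+ and N- of vortices of each sign.
Z = sum(x ** (Np + Nm) / (math.factorial(Np) * math.factorial(Nm))
        * cmath.exp(1j * theta * (Np - Nm))
        for Np in range(30) for Nm in range(30))

# The sum factorises into exp(x e^{i theta}) * exp(x e^{-i theta}).
assert abs(Z - math.exp(2 * x * math.cos(theta))) < 1e-9
```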
This yields, up to perimeter terms, the Wilson loop \ba W_{n,m}(q,C)\ \ =\ \ \log {Z_{n,|m|}(q)\over Z_{n,|m|}(q=0)}\ \ =\ \ \sigma_{n,m}\ \times\ \hbox{Area inside $C$}\ ,\la{square} \ea where the string tension \ba\sigma_{n,m}\ \sim\ {2e^{-S}\over a^2}\ (\cos\ 2\pi{m\over n}{q}\ -\ 1)\ \la{triangle}\ea is periodic in $q$ with period $n$. $W(q,C)$ is simply the sum over the $W_{n,m}$. The string tension is proportional to the vortex density, which diverges as ${1\over a^2}$ since our vortices have finite action and are therefore always in a condensed phase.\footnote{This differs from the situation in the toy model studied in \cite{ichtop}, where the vortex action was logarithmically divergent; this led to an anomalous dimension of the vortex density and to a phase transition.} We see that the condensation of flux tubes with given $n$ leads to linear confinement of electric charges $q$, unless $q$ is an integer multiple of $n$. Electric charges that are a multiple of $n$ are screened rather than linearly confined (the string tension is zero). But what is electric charge in our context? The electric charge that couples to the Kaluza-Klein gauge field is nothing but Kaluza-Klein momentum: the Kaluza-Klein modes \ba x_{q}(\sigma_1,\sigma_2)e^{iqz}\la{star}\ea appear in the two-dimensional gauge theory as scalar fields with charge $Q=q$ and mass $M=q$, since in (\ref{diana}): $$|\partial_ix_{q}|^2\ \ \rightarrow\ \ |(\partial_\alpha \ -\ iqA_\alpha) x_{q}|^2\ +\ {q^2\over L^2}\ x_{q}^2\ .$$ So the linearly confined charge is Kaluza-Klein momentum in the circular membrane direction. Before we speculate about what this means, let us compare with previous suggestions in the context of Kaluza-Klein compactification from four to three dimensions. In this situation one has a $3d$ $U(1)$ Kaluza-Klein gauge field, and there exist Kaluza-Klein instantons (instead of Kaluza-Klein flux tubes). Those are the Kaluza-Klein monopoles of \cite{sorkin}, dimensionally reduced. 
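The periodicity of the string tension (\ref{triangle}) is elementary but worth making concrete. A small check (a sketch with the illustrative values $n=5$, $m=-2$ used earlier, not from the paper) confirms that the factor $\cos(2\pi{m\over n}q)-1$ vanishes exactly for integer charges $q$ that are multiples of $n$, and is strictly negative otherwise:

```python
import math

# Coprime Seifert invariants, as in the n=5, m=-2 example of section 5.
n, m = 5, -2
for q in range(-15, 16):
    factor = math.cos(2 * math.pi * m * q / n) - 1
    if q % n == 0:
        assert abs(factor) < 1e-12   # screened: zero string tension
    else:
        assert factor < -1e-3        # linearly confined
```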
In ordinary $3d$ compact $U(1)$ gauge theory, a Coulomb plasma of instantons and anti-instantons forms that leads to linear confinement of electric charge \cite{polyakov3d}. In \cite{gross}, it was asked whether Kaluza-Klein momentum might also be linearly confined due to the condensation of Kaluza-Klein instantons, and a problem was pointed out: In an ordinary $3d$ gauge theory, there is an attractive Coulomb potential between instantons of opposite magnetic charges, and a repulsive potential between instantons of like charges. But while in a Kaluza-Klein gauge theory there is still an attractive Coulomb potential between instantons of opposite magnetic charges, it can be seen that there is no potential between instantons of like charges. Can a Coulomb plasma still form? A comparison with $N=2$ supersymmetric Yang-Mills theory \cite{seibergwitten} suggests that the answer is ``No": At the $N=2$ supersymmetric point the magnetic monopoles are BPS and there is an attraction between opposite magnetic charges, but no repulsion between like magnetic charges. The fact that there is no confinement at the $N=2$ point confirms that no stable monopole-antimonopole plasma can form. When one perturbs to $N=1$, one also creates a small repulsion between monopoles of like magnetic charges. Then a stable monopole-antimonopole plasma forms that linearly confines electric charge.\footnote{I thank A.M. Polyakov for pointing out this interpretation.} Thus, confinement of Kaluza-Klein momentum due to the condensation of Kaluza-Klein instantons does not quite seem to work. But in compactification from 3 to 2 dimensions, there is no similar problem! There is neither an attractive nor a repulsive force between the Kaluza-Klein flux tubes. So, like vortices in a $2d$ abelian Higgs model, vortices in the Kaluza-Klein model can still form a dilute gas that leads to linear confinement. {}\subsection*{7. 
Interpretation}\setcounter{equation}{0} \setcounter{section}{7} Let us now offer a speculative interpretation of the result (\ref{square}-\ref{triangle}). The total string tension is simply the sum $\sum_{n,|m|}\sigma_{n,|m|}$ of the individual string tensions. So it seems that {\it all} Kaluza-Klein momenta are linearly confined. As usual, this should imply that all states in the physical Hilbert space are neutral under global symmetry transformations. But here, those are just translations in $z$-direction. Thus, at large scales, the ``matter fields'' should depend only on $\sigma_1$ and $\sigma_2$, and not on $z$. We conclude that the membrane is effectively two-dimensional in embedding space. One could perhaps say that the membrane is ``permanently Kaluza-Klein compactified'', just as quarks are permanently confined in QCD. This does not yet imply that the Kaluza-Klein modes (\ref{star}) are completely gone. In ordinary gauge theory, there can be neutral bound states of charged fields. Can there be bound states of Kaluza-Klein modes with zero total Kaluza-Klein momentum (``baryons'')? In ordinary gauge theory, neutral bound states can exist because opposite electric charges attract and like charges repel, so there are no net long-range forces between neutral pairs. But while in Kaluza-Klein gauge theory opposite electric charges still attract, there is no force between particles of equal charge. This can easily be seen by computing the classical forces between strings of matter that are wrapped around $z$ and rotate in $z$-direction at the speed of light (those are the three-dimensional objects that reduce, upon Kaluza-Klein compactification, to electrically charged particles). This might mean that in Kaluza-Klein gauge theory free baryons cannot exist, because the attractive and repulsive forces between their constituents no longer balance each other. 
So after integrating out $A_\alpha$ and $x_{11}$, the Kaluza-Klein modes might disappear completely and what remains might be standard strings with embedding coordinates $x^\mu(\sigma_1,\sigma_2)$. This might be exactly what is needed to make sense out of M-theory, as the statistical mechanics of fluctuating membranes that have a continuum limit in which they are ``wrapped'' down to strings. Of course the strings would be very foamy from a $3d$ viewpoint. In 3+1 dimensions, the sum over spacetime topologies has been made responsible for various phenomena, such as the loss of quantum coherence or the vanishing of the cosmological constant. Our sum over $3d$ topologies that admit a foliation by circles suggests neither of these effects, but instead ``confinement of Kaluza-Klein momentum'' as an equally dramatic one: perhaps some sense can be made out of dynamical gravity in more than two dimensions after all - namely, that it dynamically reduces to renormalizable $2d$ gravity. The discussion can be generalized to $p$-branes with $p>2$. The Dirichlet $p$-brane contains a world-brane gauge field \cite{polchinski}, whose dual is a $(p-2)$-form. The field strength of this $(p-2)$-form can be integrated over a $(p-1)$-cycle $K$ on the world-brane. There could be Kaluza-Klein flux tubes of the nonabelian Kaluza-Klein gauge theory that is obtained by compactifying the $p$-brane on $K$. Summing over them would correspond to summing over $(p+1)$-dimensional topologies that can be foliated by $K$. Criteria for confinement might translate into statements about when branes wrap and when they don't. \subsection*{Acknowledgements} I thank the members of the Theory groups in Santa Barbara, Pasadena, Princeton, Bern, Z\"urich, Potsdam and Berlin for questions, discussions and hospitality. In particular, I thank J. Fr\"ohlich, D. Gross, M. Leibundgut, D. L\"ust, H. Nicolai, M. Niedermaier, A. Polyakov and J. Schwarz for conversations. {}\baselineskip=10pt\parskip=0mm
\section{Introduction} We consider quad equations defined in terms of a polynomial in four variables, $\mathcal{Q}$, that is equations of the form \begin{equation} \mathcal{Q}(u,\tilde{u},\hat{u},\th{u})=0,\label{ge} \end{equation} where in the simplest setting $u=u(n,m)$, $\tilde{u}=u(n+1,m)$, $\hat{u}=u(n,m+1)$ and $\th{u}=u(n+1,m+1)$ are values of a dependent variable taking values in $\mathbb{C}\cup\{\infty\}$ as a function of independent variables $n,m\in\mathbb{Z}$. The quad equation is called multi-affine if the defining polynomial, $\mathcal{Q}$, is degree one in each variable, and we call it multi-quadratic if $\mathcal{Q}$ is degree two in each variable. An important integrability feature that is possible for the quad equation is the multidimensional consistency \cite{FrankABS,BS}; it has proven to be a natural property of many integrable equations in the multi-affine class. A significant work on this is the list of multi-affine quad equations with the consistency property obtained by Adler, Bobenko and Suris (ABS) in \cite{ABS,ABS2}. Quad equations beyond the multi-affine class were considered in \cite{KaNie} where several multidimensionally consistent examples (also beyond the multi-quadratic) were obtained. In fact these equations appeared naturally in relation to underlying models in a different class, namely the Yang-Baxter maps \cite{Ves,drin}, and emerge from a generalisation of results connecting the Yang-Baxter maps with the multi-affine quad equations \cite{ABS,PSTV,Tasos}. One multidimensionally consistent multi-quadratic quad equation identified in \cite{KaNie} was known earlier due to observations of Adler and Veselov in \cite{AdVeQ}; it in fact arises as the superposition principle for B\"acklund transformations of the KdV equation. 
Recently it has also come to light that the well-known discrete version of the KdV equation due to Hirota \cite{hirota-0}, which is multi-affine but absent from the ABS list, is naturally understood within the consistency framework as a special case of a multi-quadratic quad equation \cite{JamesQ}. The few known examples therefore indicate that multi-quadratic quad equations are a quite natural and potentially rich class of integrable systems. A feature of higher degree discrete models, like the multi-quadratic quad equations, is that they define multivalued evolution from initial data. This adds richness, but also another level of difficulty in dealing with such systems. This difficulty is however mitigated in the mentioned integrable cases because they can be reformulated as a single-valued system with augmented initial data. For the models in \cite{KaNie} this is because the variables present in the associated Yang-Baxter map themselves satisfy a single-valued system. But that situation can actually be viewed as a special case of a more general constructive approach to this kind of reformulation \cite{JamesQ}. In this approach auxiliary variables are introduced on lattice edges, which are similar to variables of an associated Yang-Baxter map, however, rather than satisfying an independent system, they instead participate in a mixed system involving both vertex and edge variables. The resulting model is of a type similar to the class introduced in \cite{hv11} but with the additional feature of preserving algebraic relations on the lattice edges. The reformulation procedure relies completely on a discriminant factorisation property of the defining polynomial. It is this special property that is the departure point and the main emphasis of the present work; we use it as a foundation to enable some systematic investigations of the multi-quadratic quad equations. 
The main result presented here is a list of multi-quadratic quad equations with the discriminant factorisation property. To construct the models we start from biquadratic polynomials (the factors of the discriminant) which are associated with the edges of the lattice, and this provides a natural correspondence with the multi-affine ABS equations whose construction also involves edge biquadratics. Due to the factorised discriminant hypothesis the models we list admit reformulation as single-valued systems. However the more important property of these equations, which is actually rather remarkable because it is not explicitly built into the construction, is the integrability feature of multidimensional consistency. The second part of our paper is devoted to developing the transformation theory of the obtained models. The most pressing question we seek to address relates to existence of transformations of B\"acklund or Miura type connecting these multi-quadratic models back to the better studied multi-affine ABS equations. Such transformations are not inherent from our construction, rather we tackle the problem of obtaining transformations a posteriori using separate methods. The main result is to connect all except the primary model, namely the multi-quadratic counterpart of Q4, back to equations in the multi-affine (ABS) class. We proceed as follows. In Section \ref{dfh} we explain the basic discriminant factorisation hypothesis for the multi-quadratic quad equations. Additional assumptions are explained in Section \ref{assumptions} and the obtained models are listed in Section \ref{list}. The method to reformulate these models as systems that define single-valued evolution from initial data is given in Section \ref{SVS} and their multidimensional consistency is described in Section \ref{MDC}. 
Section \ref{transformations} is devoted to the transformation theory of the models, in particular we recall the multi-affine equations from the ABS list in Section \ref{ABSlist} and give transformations connecting the multi-quadratic models to them in Section \ref{BTex}. The methods used to obtain these transformations are explained in Sections \ref{nsd}, \ref{idolons} and \ref{quadlin}. Some questions raised by the results reported here are discussed in Section \ref{discussion}. \section{The factorised-discriminant hypothesis}\label{dfh} In the multi-quadratic case considered here, the defining polynomial $\mathcal{Q}$ of the quad equation (\ref{ge}) is of degree two in each of the four variables. To calculate the evolution defined by (\ref{ge}) from some initial data requires solving this equation locally as a quadratic equation for one of the arguments, and in particular taking the square root of its discriminant with respect to that argument. In general this discriminant is a polynomial of degree four in each of the remaining three arguments; the property of polynomial $\mathcal{Q}$ that we study here is the factorisation of this discriminant into a product of three factors: \begin{equation} \eqalign{ \Delta[\mathcal{Q}(u,\tilde{u},\hat{u},\th{u}),\th{u}] = H_1(u,\tilde{u})H_2(u,\hat{u})G_1(\tilde{u},\hat{u}),\\ \Delta[\mathcal{Q}(u,\tilde{u},\hat{u},\th{u}),\hat{u}] = H_1(u,\tilde{u})H_2(\tilde{u},\th{u})G_2(u,\th{u}),\\ \Delta[\mathcal{Q}(u,\tilde{u},\hat{u},\th{u}),\tilde{u}] = H_1(\hat{u},\th{u})H_2(u,\hat{u})G_3(u,\th{u}),\\ \Delta[\mathcal{Q}(u,\tilde{u},\hat{u},\th{u}), u ] = H_1(\hat{u},\th{u})H_2(\tilde{u},\th{u})G_4(\tilde{u},\hat{u}), }\label{df} \end{equation} where $H_1$ and $H_2$ are degree-two polynomials in each of two variables, i.e., biquadratics, and $G_1,\ldots,G_4$ are degenerate biquadratics of the form $G_i(\tilde{u},\hat{u})=(a_i+b_i\tilde{u}+c_i\hat{u}+d_i\tilde{u}\hat{u})^2$, $i\in\{1,2,3,4\}$, i.e., the square of a polynomial which 
is degree-one in each variable. This is the most general hypothesis that allows replacement of (\ref{ge}) by an associated single-valued system through introduction of auxiliary edge variables $\sigma_1$ and $\sigma_2$ satisfying the edge relations \begin{equation} \sigma_1^2 = H_1(u,\tilde{u}), \quad \sigma_2^2 = H_2(u,\hat{u}).\label{er} \end{equation} The key features allowing this are that all discriminants in (\ref{df}) are squares which clearly leads locally to a rational model, and furthermore that the edge relations on opposite sides of the quad are the same, allowing the rational model to replace the quad equation globally (for instance in the simplest setting throughout $\mathbb{Z}^2$). The procedure to obtain the single-valued system is straightforward; it was explained in detail in \cite{JamesQ} and an example will also be included later in this paper (Section \ref{SVS}). We remark that the restriction to $G_1,\ldots, G_4$ being degenerate seems unnatural in some regards; in fact there exist sets of polynomials $\{\mathcal{Q},H_1,H_2,H_3\}$ satisfying system (\ref{df}) with $G_1=G_2=G_3=G_4=H_3$, and where $H_3$ is also non-degenerate. These more symmetric solutions of the problem are interesting, however to understand the sense in which the resulting polynomial $\mathcal{Q}$ defines an integrable discrete model requires a substantial alteration of the setting, and this will be studied in detail elsewhere. Also we remark that the assumed symmetry can be relaxed, for instance replacing $H_1$ appearing in the third and fourth equations of (\ref{df}) with $\hat{H}_1\neq H_1$. Such systems are of interest too but the associated polynomial $\mathcal{Q}$ in that case is more naturally interpreted as defining a B\"acklund transformation between models. 
\section{Additional assumptions}\label{assumptions} The solution of (\ref{df}), in the sense of obtaining a set of polynomials \[\{\mathcal{Q},H_1,H_2,G_1,G_2,G_3,G_4\}\] for which system (\ref{df}) is identically satisfied, is a difficult classification problem in its full generality. Further assumptions allow this problem to be solved using computer algebra, these come from looking at examples identified in \cite{JamesQ}. We assume invariance of $\mathcal{Q}$ when the arguments are permuted by the Klein four-group (the Kleinian symmetry) \begin{equation} \mathcal{Q}(u,\tilde{u},\hat{u},\th{u})=\mathcal{Q}(\tilde{u},u,\th{u},\hat{u})=\mathcal{Q}(\hat{u},\th{u},u,\tilde{u}). \end{equation} This reduces the generic multi-quadratic polynomial $\mathcal{Q}$ to the form \begin{equation} \fl \eqalign{ \mathcal{Q}(u,\tilde{u},\hat{u},\th{u})= a_{{1}} +a_{{2}} \left( u+\tilde{u}+\hat{u}+\th{u} \right)+a_{{3}} \left( {u}^{2}+{\tilde{u}}^{2}+{\hat{u}}^{2}+{\th{u}}^{2} \right) +a_{{4}} \left( \hat{u}\th{u}+u\tilde{u} \right) \\\quad +a_{{5}} \left( \tilde{u}\hat{u}+u\th{u} \right) +a_{{6}} \left( \tilde{u}\th{u}+u\hat{u} \right) +a_{{7}} \left( \tilde{u}\hat{u}\th{u}+u\hat{u}\th{u}+u\tilde{u}\th{u}+u\tilde{u}\hat{u} \right) \\\quad +a_{{8}} \left( u{\tilde{u}}^{2}+\hat{u}{\th{u}}^{2}+{u}^{2}\tilde{u}+{\hat{u}}^{2}\th{u} \right) +a_{{9}} \left( u{\hat{u}}^{2}+{u}^{2}\hat{u}+\tilde{u}{\th{u}}^{2}+{\tilde{u}}^{2}\th{u} \right) \\\quad +a_{{10}} \left( u{\th{u}}^{2}+{u}^{2}\th{u}+\tilde{u}{\hat{u}}^{2}+{\tilde{u}}^{2}\hat{u} \right) +a_{{11}}u\tilde{u}\hat{u}\th{u} +a_{{12}} \left( \tilde{u}\th{u}+u\hat{u} \right) \left( \tilde{u}\hat{u}+u\th{u} \right) \\\quad +a_{{13}} \left( \tilde{u}\hat{u}+u\th{u} \right) \left( \hat{u}\th{u}+u\tilde{u} \right) +a_{{14}} \left( \tilde{u}\th{u}+u\hat{u} \right) \left( \hat{u}\th{u}+u\tilde{u} \right) +a_{{15}} \left( {\hat{u}}^{2}{\th{u}}^{2}+{u}^{2}{\tilde{u}}^{2} \right) \\\quad +a_{{16}} \left( 
{\tilde{u}}^{2}{\hat{u}}^{2}+{u}^{2}{\th{u}}^{2} \right) +a_{{17}} \left( {\tilde{u}}^{2}{\th{u}}^{2}+{u}^{2}{\hat{u}}^{2} \right) +a_{{18}}u\tilde{u}\hat{u}\th{u} \left( u+\tilde{u}+\hat{u}+\th{u} \right) \\\quad +a_{{19}} \left( \tilde{u}{\hat{u}}^{2}{\th{u}}^{2}+{u}^{2}{\tilde{u}}^{2}\th{u}+u{\hat{u}}^{2}{\th{u}}^{2}+{u}^{2}{\tilde{u}}^{2}\hat{u} \right) +a_{{20}} \left( {\tilde{u}}^{2}\hat{u}{\th{u}}^{2}+u{\tilde{u}}^{2}{\th{u}}^{2}+{u}^{2}{\hat{u}}^{2}\th{u}+{u}^{2}\tilde{u}{\hat{u}}^{2} \right) \\\quad +a_{{21}} \left( {\tilde{u}}^{2}{\hat{u}}^{2}\th{u}+u{\tilde{u}}^{2}{\hat{u}}^{2}+{u}^{2}\hat{u}{\th{u}}^{2}+{u}^{2}\tilde{u}{\th{u}}^{2} \right) +a_{{22}}u\tilde{u}\hat{u}\th{u} \left( \hat{u}\th{u}+u\tilde{u} \right) \\\quad +a_{{23}}u\tilde{u}\hat{u}\th{u} \left( \tilde{u}\hat{u}+u\th{u} \right) +a_{{24}}u\tilde{u}\hat{u}\th{u} \left( \tilde{u}\th{u}+u\hat{u} \right) \\\quad +a_{{25}} \left( {\tilde{u}}^{2}{\hat{u}}^{2}{\th{u}}^{2}+{u}^{2}{\hat{u}}^{2}{\th{u}}^{2}+{u}^{2}{\tilde{u}}^{2}{\th{u}}^{2}+{u}^{2}{\tilde{u}}^{2}{\hat{u}}^{2} \right) \\\quad +a_{{26}}u\tilde{u}\hat{u}\th{u} \left( \tilde{u}\hat{u}\th{u}+u\hat{u}\th{u}+u\tilde{u}\th{u}+u\tilde{u}\hat{u} \right) +a_{{27}}{u}^{2}{\tilde{u}}^{2}{\hat{u}}^{2}{\th{u}}^{2}, } \end{equation} for some set of coefficients $\{a_1,\ldots,a_{27}\}$. This symmetry is a strong additional assumption, for instance by itself it is actually sufficient for integrability of quad equations in the multi-affine class \cite{ABS2,via}. 
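The count of 27 coefficients can be cross-checked combinatorially. The Kleinian symmetry groups the $3^4=81$ monomials $u^i\tilde{u}^j\hat{u}^k\th{u}^l$ into orbits, with one free coefficient per orbit; the following sketch (independent of the paper) confirms that there are exactly 27 orbits:

```python
from itertools import product

# The Klein four-group acts on exponent tuples (i, j, k, l) of the monomials
# u^i * ut^j * uh^k * w^l by the two swaps (u <-> ut, uh <-> w) and
# (u <-> uh, ut <-> w); each orbit carries one free coefficient.
def orbit(e):
    i, j, k, l = e
    return frozenset({(i, j, k, l), (j, i, l, k), (k, l, i, j), (l, k, j, i)})

orbits = {orbit(e) for e in product(range(3), repeat=4)}
assert len(orbits) == 27  # matches the coefficients a_1, ..., a_27
```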
We also assume that both biquadratic polynomials $H_1$ and $H_2$ are symmetric, and furthermore that they are taken from one of the following one-parameter families (the parameter being denoted by $p$) \begin{equation} \label{q4} \fl\textrm{q4}^*:\quad\frac{c}{2}\left(1+u^2\tilde{u}^2+\tilde{u}^2p^2+p^2u^2\right)-\frac{1}{2c}\left(u^2\tilde{u}^2p^2+u^2+\tilde{u}^2+p^2\right)-\left(c^2-\frac{1}{c^2}\right)u\tilde{u}p, \end{equation} \begin{equation} \label{q3} \fl\textrm{q3}^*:\quad\frac{1}{2}(u^2+\tilde{u}^2)+\frac{\delta^2}{2}(p^2-1)-u\tilde{u}p, \end{equation} \begin{equation} \label{q2} \fl\textrm{q2}^*:\quad\frac{1}{4}\left(u^2+\tilde{u}^2+p^2\right)-\frac{1}{2}\left(u\tilde{u}+\tilde{u}p+pu\right), \end{equation} \begin{equation} \label{q1} \fl\textrm{q1}^*:\quad (\tilde{u}-u)^2-p, \end{equation} \begin{equation} \label{a2} \fl\textrm{a2}^*:\quad\frac{1}{2}(1+u^2\tilde{u}^2)-pu\tilde{u}, \end{equation} \begin{equation} \fl\textrm{a1}^*:\quad\label{a1}(\tilde{u}+u)^2-p, \end{equation} \begin{equation} \label{h3} \fl\textrm{h3}^*:\quad u\tilde{u}p+\delta^2, \end{equation} \begin{equation} \label{h2} \fl\textrm{h2}^*:\quad u+\tilde{u}+p. \end{equation} Where it appears $\delta \in\{0,1\}$, whilst in (\ref{q4}) $c\in\mathbb{C}\setminus\{0,1,-1,i,-i\}$ is a fixed constant. Up to a point transformation of the parameters these biquadratic polynomials coincide with those associated with the ABS equations \cite{ABS,ABS2} (precise parameter associations will be given later in Section \ref{ABSlist}). Note that (\ref{h3}) and (\ref{h2}) are considered to be biquadratics here also, but this is in a projective sense, so loosely speaking we view for instance (\ref{h2}) as the polynomial $(u+\tilde{u}+p)(u-\infty)(\tilde{u}-\infty)$. 
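Two of these families can be checked quickly with computer algebra. The sketch below (a transcription of (\ref{q2}) and (\ref{a2}), not code from the paper) computes the discriminants with respect to $\tilde{u}$, which come out as $up$ and $u^2(p^2-1)$ respectively; in both cases the roots in $u$ sit at $u=0$ independently of the parameter $p$:

```python
import sympy as sp

u, ut, p = sp.symbols('u ut p')

# Transcriptions of the biquadratic families (q2*) and (a2*) above,
# with ut playing the role of u-tilde.
h_q2 = sp.Rational(1, 4) * (u**2 + ut**2 + p**2) - sp.Rational(1, 2) * (u*ut + ut*p + p*u)
h_a2 = sp.Rational(1, 2) * (1 + u**2 * ut**2) - p * u * ut

# Discriminants with respect to ut; their roots in u do not move with p.
d_q2 = sp.discriminant(h_q2, ut)
d_a2 = sp.discriminant(h_a2, ut)
assert sp.simplify(d_q2 - u * p) == 0
assert sp.simplify(d_a2 - u**2 * (p**2 - 1)) == 0
```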
We remark that by a M\"obius change of variables a symmetric biquadratic polynomial can always be brought to one of the forms (\ref{q4})--(\ref{h2}) for some choice of parameter $p$, or else is the square of a multi-affine polynomial (like $G_1,\ldots,G_4$ above), a possibility which we therefore exclude by restricting to (\ref{q4})--(\ref{h2}). The discriminant of the biquadratics (\ref{q4})--(\ref{h2}) with respect to $\tilde{u}$ is a polynomial of degree at most four in $u$; the main characterising feature of the biquadratic families is that the roots of this discriminant polynomial do not change upon altering the parameter $p$. Finally, the assumed symmetry of $\mathcal{Q}$ together with (\ref{df}) implies already that \[G_1=G_2=G_3=G_4=:G.\] In fact we will go further and make the assumption that \begin{equation} G(\tilde{u},\hat{u}) = (\tilde{u}-\hat{u})^2, \end{equation} which is our last additional assumption. \section{The obtained multi-quadratic quad equations}\label{list} The additional assumptions enable solution of the discriminant factorisation hypothesis directly by computer algebra, which gives the main result of our paper as follows. \begin{proposition}\label{listprop} Let $\mathcal{Q}$ be a polynomial of degree two in each of four variables with the Kleinian symmetry \begin{equation} \mathcal{Q}(u,\tilde{u},\hat{u},\th{u})=\mathcal{Q}(\tilde{u},u,\th{u},\hat{u})=\mathcal{Q}(\hat{u},\th{u},u,\tilde{u}), \end{equation} such that the discriminants factorise as follows \begin{equation} \Delta[\mathcal{Q}(u,\tilde{u},\hat{u},\th{u}),\th{u}]\propto (\tilde{u}-\hat{u})^2H_1(u,\tilde{u})H_2(u,\hat{u}),\label{disc} \end{equation} where $H_1$ and $H_2$ are biquadratic polynomials. If $H_1$ and $H_2$ are taken from one of the biquadratic families (\ref{q4})--(\ref{h2}) with generic parameter choices denoted $p$ and $q$ respectively, then $\mathcal{Q}$ is determined up to an overall constant and defines respectively the quad equations (\ref{QQ4})--(\ref{QH2}) below. 
\end{proposition} Q4$^*$: \begin{equation} \eqalign{ \fl\quad (p-q)[(c^{-2}p-c^2q)(u\tilde{u}-\hat{u}\th{u})^2-(c^{-2}q-c^2p)(u\hat{u}-\tilde{u}\th{u})^2]\\ -(p-q)^2[(u+\th{u})^2(1+\tilde{u}^2\hat{u}^2)+(\tilde{u}+\hat{u})^2(1+u^2\th{u}^2)]\\ +[(u-\th{u})(\tilde{u}-\hat{u})(c^{-1}-cpq)-2(p-q)(1+u\tilde{u}\hat{u}\th{u})]\\ \times[(u-\th{u})(\tilde{u}-\hat{u})(c^{-1}pq-c)-2(p-q)(u\th{u}+\tilde{u}\hat{u})]=0,\\ }\label{QQ4} \end{equation} Q3$^*$: \begin{equation} \eqalign{ \fl\quad (p-q)[p(u\tilde{u}-\hat{u}\th{u})^2-q(u\hat{u}-\tilde{u}\th{u})^2]-\delta^2(p-q)^2[(u+\th{u})^2+(\tilde{u}+\hat{u})^2]\\ \fl\quad +[(u-\th{u})(\tilde{u}-\hat{u})-2\delta^2(p-q)][(u-\th{u})(\tilde{u}-\hat{u})(pq-1)-2(p-q)(u\th{u}+\tilde{u}\hat{u})]=0, }\label{QQ3} \end{equation} Q2$^*$: \begin{equation} \eqalign{ \fl\quad (p-q)[p(u\tilde{u}-\hat{u}\th{u})(u+\tilde{u}-\hat{u}-\th{u})-q(u\hat{u}-\tilde{u}\th{u})(u+\hat{u}-\tilde{u}-\th{u})] \\ +(u-\th{u})(\tilde{u}-\hat{u})[p(u-\hat{u})(\tilde{u}-\th{u})-q(u-\tilde{u})(\hat{u}-\th{u})-pq(p-q)]=0, }\label{QQ2} \end{equation} Q1$^*$: \begin{equation} \eqalign{ \fl\quad (p-q)[p(u+\tilde{u}-\hat{u}-\th{u})^2-q(u+\hat{u}-\tilde{u}-\th{u})^2]\\+4(u-\th{u})(\tilde{u}-\hat{u})[p(u-\hat{u})(\tilde{u}-\th{u})-q(u-\tilde{u})(\hat{u}-\th{u})]=0, }\label{QQ1} \end{equation} A2$^*$: \begin{equation} \eqalign{ \fl\quad (p-q)[p(u\hat{u}-\tilde{u}\th{u})^2-q(u\tilde{u}-\hat{u}\th{u})^2]\\ +(u-\th{u})(\tilde{u}-\hat{u})[(u-\th{u})(\tilde{u}-\hat{u})(pq-1)+2(p-q)(1+u\tilde{u}\hat{u}\th{u})]=0, }\label{QA2} \end{equation} A1$^*$: \begin{equation} \eqalign{ \fl\quad (p-q)[p(u-\tilde{u}+\hat{u}-\th{u})^2-q(u-\hat{u}+\tilde{u}-\th{u})^2]\\+4(u-\th{u})(\tilde{u}-\hat{u})[p(u+\hat{u})(\tilde{u}+\th{u})-q(u+\tilde{u})(\hat{u}+\th{u})]=0, }\label{QA1} \end{equation} H3$^*$: \begin{equation} \eqalign{ \fl\quad (p-q)[p(u\hat{u}-\tilde{u}\th{u})^2-q(u\tilde{u}-\hat{u}\th{u})^2]\\ +(u-\th{u})(\tilde{u}-\hat{u})[(u-\th{u})(\tilde{u}-\hat{u})pq-4\delta^2(p-q)]=0, }\label{QH3} 
\end{equation} H2$^*$: \begin{equation} \eqalign{ \fl\quad (p-q)[p(u-\tilde{u}+\hat{u}-\th{u})^2-q(u-\hat{u}+\tilde{u}-\th{u})^2]\\+(u-\th{u})(\tilde{u}-\hat{u})[(u-\th{u})(\tilde{u}-\hat{u})-2(p-q)(u+\tilde{u}+\hat{u}+\th{u})]=0. }\label{QH2} \end{equation} We have denoted the list of models here $Q4^*,\ldots,H2^*$ due to their natural correspondence with equations from the ABS list, which are denoted $Q4,\ldots,H1$ \cite{ABS}. (The ABS list and its precise relationship with (\ref{QQ4})--(\ref{QH2}) will be given in Section \ref{ABSlist}.) The biquadratic polynomials associated with ABS equations $Q1_{\delta=0}$, $A1_{\delta=0}$ and $H1$ are already squares, which means the multi-quadratic counterparts of those models factorise into products of multi-affine polynomials, cases we exclude here as degenerate. Similar to the situation for ABS equations, all of (\ref{QQ3})--(\ref{QH2}) can be obtained from the primary model (\ref{QQ4}) by limiting procedures. All listed models are non-equivalent by autonomous M\"obius changes of variables and point transformations of the parameters. However, the models $A2^*$ and $Q3^*_{\delta=0}$ are related by the non-autonomous point transformation $u\rightarrow u^{(-1)^{n+m}}$, whilst $A1^*$ and $Q1^*$ are related by $u\rightarrow (-1)^{n+m}u$. To our knowledge the equations in the list (\ref{QQ4})--(\ref{QH2}) are new except for the following cases. The model H2$^*$ (\ref{QH2}) appeared as the superposition principle for solutions of the KdV equation in \cite{AdVeQ} and in relation to Yang-Baxter maps in \cite{KaNie}. The model A1$^*$ (\ref{QA1}) was obtained, although not in explicit form, in \cite{KaNie3}. The model A2$^*$ (\ref{QA2}) was obtained originally in \cite{JamesQ} as the superposition principle for B\"acklund transformations of Hirota's discrete KdV equation \cite{hirota-0}. 
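Proposition \ref{listprop} can be illustrated on the simplest model: the sympy sketch below (ours, for verification purposes only) confirms that the discriminant of H2$^*$ (\ref{QH2}) with respect to $\th{u}$ factorises as in (\ref{disc}), with $H_1$ and $H_2$ the h2$^*$ biquadratics carrying parameters $p$ and $q$. Here `ut`, `uh`, `uth` stand for $\tilde{u}$, $\hat{u}$, $\th{u}$.

```python
# Verification sketch (ours): discriminant factorisation for H2*.
import sympy as sp

u, ut, uh, uth, p, q = sp.symbols('u ut uh uth p q')

# the quad polynomial of H2* (QH2)
Q = ((p - q)*(p*(u - ut + uh - uth)**2 - q*(u - uh + ut - uth)**2)
     + (u - uth)*(ut - uh)*((u - uth)*(ut - uh)
                            - 2*(p - q)*(u + ut + uh + uth)))

disc = sp.discriminant(Q, uth)

# expected: (ut - uh)^2 * h2*(u,ut;p) * h2*(u,uh;q), up to a constant factor
expected = (ut - uh)**2 * (u + ut + p) * (u + uh + q)
const = sp.cancel(disc / expected)
print(const)  # a factor depending only on the lattice parameters
```

The quotient comes out as $16(p-q)^2$, free of the vertex variables, which is the proportionality in (\ref{disc}) for this model.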
\section{Reformulation as single-valued systems}\label{SVS} One of the most salient features of quad equations from the multi-quadratic class is that they define multivalued evolution from initial data. However, built into the construction of models (\ref{QQ4})--(\ref{QH2}) is the discriminant factorisation property, and this allows the quad equation to be reformulated as a system that defines single-valued evolution from augmented initial data \cite{JamesQ}. Here we perform this reformulation on the primary model Q4$^*$ (\ref{QQ4}), and describe the relationship between the quad equation and its reformulation. As described in Section \ref{dfh}, the auxiliary variables on lattice edges (see Figure \ref{quadpic}) are introduced through the relations \begin{equation} \eqalign{ \fl \sigma_1^2 = \frac{c}{2}\left(1+u^2\tilde{u}^2+\tilde{u}^2p^2+p^2u^2\right)-\frac{1}{2c}\left(u^2\tilde{u}^2p^2+u^2+\tilde{u}^2+p^2\right)-\left(c^2-\frac{1}{c^2}\right)u\tilde{u}p,\\ \fl \sigma_2^2 = \frac{c}{2}\left(1+u^2\hat{u}^2+\hat{u}^2q^2+q^2u^2\right)-\frac{1}{2c}\left(u^2\hat{u}^2q^2+u^2+\hat{u}^2+q^2\right)-\left(c^2-\frac{1}{c^2}\right)u\hat{u}q, }\label{dvdef} \end{equation} which are in terms of the associated biquadratic polynomial given earlier in (\ref{q4}). 
\begin{figure}[t] \begin{center} \begin{picture}(100,100)(0,0) \multiput(21,12)(60,0){2}{{\line(0,1){60}}} \multiput(21,12)(0,60){2}{{\line(1,0){60}}} \put(7,39){{$\sigma_2$}} \put(48,0){{$\sigma_1$}} \put(87,39){{$\tilde{\sigma}_2$}} \put(48,78){{$\hat{\sigma}_1$}} \put(21,12){\circle*{3}} \put(81,12){\circle*{3}} \put(21,72){\circle*{3}} \put(81,72){\circle*{3}} \put(12,4){$u$} \put(84,4){$\tilde{u}$} \put(12,72){$\hat{u}$} \put(84,72){$\th{u}$} \end{picture} \end{center} \caption{Variables assigned to the vertices of a quadrilateral, and auxiliary variables to the edges.} \label{quadpic} \end{figure} By solving equation (\ref{QQ4}) as a quadratic equation for $\th{u}$ and exploiting these edge variables we obtain an equation of the form \begin{equation} \mathcal{F}(u,\tilde{u},\hat{u},\th{u},\sigma_1\sigma_2)=0 \end{equation} which is by construction of polynomial degree one in $\th{u}$ and $\sigma_1\sigma_2$, for instance: \begin{equation} \eqalign{ \fl\quad\mathcal{F}(u,\tilde{u},\hat{u},\th{u},\sigma_1\sigma_2):=u[(c^{-2}p-c^2q)(u\tilde{u}-\hat{u}\th{u})+(c^{-2}q-c^2p)(u\hat{u}-\tilde{u}\th{u})]\\ -(u-\th{u})[(c^{-1}-cpq)(u^2+\tilde{u}\hat{u})+(c^{-1}pq-c)(1+u^2\tilde{u}\hat{u})+2\sigma_1\sigma_2]\\ -(p-q)(\tilde{u}-\hat{u})(1+u^3\th{u}) }\label{Fdef} \end{equation} (the precise form of this expression is chosen because it is simple, but it is not unique). Sequentially solving for each of the quad variables of (\ref{QQ4}) in the same way leads to the following system \begin{equation} \eqalign{ \mathcal{F}(u,\tilde{u},\hat{u},\th{u},\sigma_1\sigma_2)=0,\\ \mathcal{F}(\tilde{u},u,\th{u},\hat{u},\sigma_1\tilde{\sigma}_2)=0,\\ \mathcal{F}(\hat{u},\th{u},u,\tilde{u},\hat{\sigma}_1\sigma_2)=0,\\ \mathcal{F}(\th{u},\hat{u},\tilde{u},u,\hat{\sigma}_1\tilde{\sigma}_2)=0, } \label{quadsys} \end{equation} up to the choice of sign of the discriminant terms appearing in the last argument. 
The system (\ref{quadsys}), (\ref{dvdef}) is the single-valued model associated with (\ref{QQ4}). It is easily verifiable that each equation in (\ref{quadsys}) implies (\ref{QQ4}) is satisfied modulo the relations (\ref{dvdef}). The signs of the discriminant terms in (\ref{quadsys}) have been chosen for self-consistency (ensuring that any one equation in (\ref{quadsys}) is a consequence of the other three), to preserve the Kleinian symmetry, and to preserve the consistency property described in the following section. The usual initial value problem for a quad-equation involves specifying the dependent variable on vertices along some admissible lattice path or collection of paths. The modification required for system (\ref{quadsys}) is that variables on path edges, subject to the constraint (\ref{dvdef}), also need to be specified. The system (\ref{quadsys}), (\ref{dvdef}) is therefore actually a model of vertex-bond type as described in \cite{hv11}, but it has the additional feature of preserving algebraic relations (\ref{dvdef}) on lattice edges. In this setting the multivalued evolution defined by equation (\ref{QQ4}) is reflected in the non-uniqueness of initial edge variables when they are defined in terms of the initial vertex variables through (\ref{dvdef}). In Table \ref{svms} the polynomial $\mathcal{F}$ that appears in the single-valued system associated with each equation from the list (\ref{QQ4})--(\ref{QH2}) is also given. The auxiliary variables present in the table are introduced through the edge relations which can be written generically as \begin{equation} \sigma_1^2 = H_1(u,\tilde{u}), \quad \sigma_2^2 = H_2(u,\hat{u}),\label{erer} \end{equation} in terms of the associated biquadratic polynomials (cf. Proposition \ref{listprop}). 
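As a concrete check of this construction, the following sympy sketch (ours) verifies, for the H2$^*$ entry, that eliminating the product $\sigma_1\sigma_2$ from $\mathcal{F}=0$ using the edge relations (\ref{erer}) reproduces the quad equation (\ref{QH2}) identically. The symbol `t` stands for $\sigma_1\sigma_2$ and `ut`, `uh`, `uth` for the shifted variables.

```python
# Verification sketch (ours): for H2*, F = 0 together with the edge
# relations implies the quad equation.  t stands for sigma_1*sigma_2.
import sympy as sp

u, ut, uh, uth, p, q, t = sp.symbols('u ut uh uth p q t')

# F for H2*
F = (p*(u + uh - ut - uth) + q*(u - uh + ut - uth)
     + (u - uth)*(ut + uh + 2*u - 2*t))

# solve F = 0 for t, then impose t^2 = H_1(u,ut)*H_2(u,uh) and clear
# the denominator 4*(u - uth)^2
tsol = sp.solve(F, t)[0]
elim = sp.cancel((tsol**2 - (u + ut + p)*(u + uh + q)) * 4*(u - uth)**2)

# the quad polynomial of H2* (QH2)
Q = ((p - q)*(p*(u - ut + uh - uth)**2 - q*(u - uh + ut - uth)**2)
     + (u - uth)*(ut - uh)*((u - uth)*(ut - uh)
                            - 2*(p - q)*(u + ut + uh + uth)))

print(sp.expand(elim - Q))  # -> 0: the two polynomials coincide identically
```

So in this case the elimination gives exactly the defining polynomial, with proportionality constant one for this choice of $\mathcal{F}$.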
\begin{table} \begin{tabular}{rl} \hline Q4$^*$ & $u[(c^{-2}p-c^2q)(u\tilde{u}-\hat{u}\th{u})+(c^{-2}q-c^2p)(u\hat{u}-\tilde{u}\th{u})]$\\ & \quad $-(u-\th{u})[(c^{-1}-cpq)(u^2+\tilde{u}\hat{u})+(c^{-1}pq-c)(1+u^2\tilde{u}\hat{u})+2\sigma_1\sigma_2]$\\ & \quad $-(p-q)(\tilde{u}-\hat{u})(1+u^3\th{u})$\\ Q3$^*$ & $u[p(u\tilde{u}-\hat{u}\th{u})+q(u\hat{u}-\tilde{u}\th{u})]-(u-\th{u})[u^2+\tilde{u}\hat{u}-\delta^2(1-pq)-2\sigma_1\sigma_2]$\\ & \quad $-\delta^2(p-q)(\tilde{u}-\hat{u})$ \\ Q2$^*$ & $u[p(u+\tilde{u}-\hat{u}-\th{u})+q(u-\tilde{u}+\hat{u}-\th{u})]+p(u\tilde{u}-\hat{u}\th{u})+q(u\hat{u}-\tilde{u}\th{u})$\\ & \quad $-(u-\th{u})[(u-\tilde{u})(u-\hat{u})+pq+4\sigma_1\sigma_2]$\\ Q1$^*$ & $p(u+\tilde{u}-\hat{u}-\th{u})+q(u-\tilde{u}+\hat{u}-\th{u})-2(u-\th{u})[(u-\tilde{u})(u-\hat{u})-\sigma_1\sigma_2]$\\ A2$^*$ & $u[p(u\hat{u}-\tilde{u}\th{u})+q(u\tilde{u}-\hat{u}\th{u})]-(u-\th{u})(1+u^2\tilde{u}\hat{u}-2\sigma_1\sigma_2)$\\ A1$^*$ & $p(u+\hat{u}-\tilde{u}-\th{u})+q(u-\hat{u}+\tilde{u}-\th{u})-2(u-\th{u})[(u+\tilde{u})(u+\hat{u})-\sigma_1\sigma_2]$\\ H3$^*$ & $u[p(u\hat{u}-\tilde{u}\th{u})+q(u\tilde{u}-\hat{u}\th{u})]+2(u-\th{u})(\delta-\sigma_1\sigma_2)$\\ H2$^*$ & $p(u+\hat{u}-\tilde{u}-\th{u})+q(u-\hat{u}+\tilde{u}-\th{u})+(u-\th{u})(\tilde{u}+\hat{u}+2u-2\sigma_1\sigma_2)$\\ \hline \end{tabular} \caption{ Polynomials $\mathcal{F}(u,\tilde{u},\hat{u},\th{u},\sigma_1\sigma_2)$ that define the single-valued system (\ref{quadsys}) associated with the multi-quadratic models (\ref{QQ4})--(\ref{QH2}) of Proposition \ref{listprop}. The auxiliary variables $\sigma_1$ and $\sigma_2$ satisfy edge relations $\sigma_1^2=H_1(u,\tilde{u})$ and $\sigma_2^2=H_2(u,\hat{u})$ in terms of the associated biquadratic polynomials listed in (\ref{q4})--(\ref{h2}). 
}\label{svms} \end{table} The single-valued reformulation (\ref{quadsys}), (\ref{erer}) that we have proposed for the quad equation (\ref{ge}) is autonomous, but it is also possible to make a non-autonomous reformulation of the model in which the signs of discriminant terms in (\ref{quadsys}) change from quad to quad. Such a non-autonomous reformulation is more difficult to study; in the autonomous case, however, the quad equation and its reformulation are not globally equivalent. They are of course locally equivalent, by which we mean that on a single quad the equation (\ref{ge}) is sufficient for consistency of system (\ref{quadsys}) in $\sigma_1$, $\sigma_2$, allowing the edge variables to be constructed from vertex variables on a single quad. The simplest setting in which the quad equation and its autonomous reformulation are not equivalent is a square domain involving nine points (four quads). On this larger domain we can write the system \begin{equation} \eqalign{ \mathcal{F}(u,\tilde{u},\hat{u},\th{u},\sigma_1\sigma_2)=0,\\ \mathcal{F}(u,\ut{u},\hat{u},\ut{\hat{u}},\ut{\sigma}_1\sigma_2)=0,\\ \mathcal{F}(u,\tilde{u},\uh{u},\uh{\tilde{u}},\sigma_1\uh{\sigma}_2)=0,\\ \mathcal{F}(u,\ut{u},\uh{u},\ut{\uh{u}},\ut{\sigma}_1\uh{\sigma}_2)=0. } \label{crossys} \end{equation} where $\ut{u}=u(n-1,m)$, $\uh{u}=u(n,m-1)$ etc., this system being a consequence of imposing (\ref{quadsys}). The equation (\ref{ge}) imposed on each quad is {\it not} sufficient for consistency of (\ref{crossys}); specifically, elimination of $\sigma_1,\sigma_2,\ut{\sigma}_1$ and $\uh{\sigma}_2$ from (\ref{crossys}) yields a constraint on the nine vertex-variables which is not a consequence of (\ref{ge}). In particular (\ref{crossys}) can be used to obtain $\th{u}$ uniquely from data $\{\ut{\uh{u}},\ut{u},\uh{u},u,\ut{\hat{u}},\uh{\tilde{u}},\tilde{u},\hat{u}\}$, whereas if (\ref{ge}) alone were imposed this data would yield two possible values of $\th{u}$. 
This example therefore demonstrates that global equivalence of the quad equation and its associated single-valued system is not possible unless a non-autonomous reformulation is considered. More generally than the example above, if initial data is specified on vertices along some admissible lattice path, then the autonomous single-valued system (\ref{quadsys}), (\ref{erer}) generically determines $2^{(x-1)}$ solutions, where $x$ is the number of edges along the path (corresponding to the number of auxiliary edge variables), whereas using the equation (\ref{ge}) alone the number of solutions determined would be $2^y$, where $y$ is the number of quads in the domain. Because (on a finite domain) $y\ge x-1$, the autonomous single-valued reformulation of the system can be said to limit the multivaluedness from initial data. \section{Multidimensional consistency}\label{MDC} The purpose of this section is to explain a key integrability feature of models (\ref{QQ4})--(\ref{QH2}), namely their {\it multidimensional consistency} \cite{FrankABS,BS,ABS}. This emerges quite naturally, although it has not been explicitly built into the construction of these equations. The multidimensional consistency involves not just one equation in isolation; in fact, for the equations listed it involves the whole family of equations obtained by varying the parameters $p$ and $q$. The consistency is between members of this family with different choices of the parameters. 
Due to the central role they play it is therefore convenient to explicitly include the dependence on parameters $p$ and $q$ when writing a generic equation from (\ref{QQ4})--(\ref{QH2}): \begin{equation} \mathcal{Q}_{p,q}(u,\tilde{u},\hat{u},\th{u})=0.\label{pqs} \end{equation} The key properties (involving the parameter dependence) of the generic defining polynomial are first the symmetry \begin{equation} \mathcal{Q}_{p,q}(u,\tilde{u},\hat{u},\th{u}) = \mathcal{Q}_{q,p}(u,\hat{u},\tilde{u},\th{u}), \label{covar} \end{equation} and second the consistency of the system \begin{equation} \eqalign{ \mathcal{Q}_{p,q}(u,\tilde{u},\hat{u},\th{u})=0, \quad& \mathcal{Q}_{p,q}(\bar{u},\bt{u},\hb{u},\thb{u})=0,\\ \mathcal{Q}_{q,r}(u,\hat{u},\bar{u},\hb{u})=0, \quad& \mathcal{Q}_{q,r}(\tilde{u},\th{u},\bt{u},\thb{u})=0,\\ \mathcal{Q}_{r,p}(u,\bar{u},\tilde{u},\bt{u})=0, \quad& \mathcal{Q}_{r,p}(\hat{u},\hb{u},\th{u},\thb{u})=0, }\label{cubesys} \end{equation} for any choice of the parameters $p$, $q$ and $r$, which is usually visualised by assigning variables to vertices of a cube as in Figure \ref{cubepic}, and equations to faces. \begin{figure}[t] \begin{center} \begin{picture}(200,140) \input{dat.tex} \end{picture} \end{center} \caption{\label{cubepic}Variables assigned to the vertices of a cube. In the case of the multi-quadratic quad equations (\ref{QQ4})--(\ref{QH2}) the system (\ref{cubesys}) determines four possible values of $\thb{u}$ from initial data $u$, $\tilde{u}$, $\hat{u}$ and $\bar{u}$.} \end{figure} By consistency we mean that for generic initial data $\{u,\tilde{u},\hat{u},\bar{u}\}$ the system (\ref{cubesys}) has at least one solution for the remaining variables $\{\th{u},\hb{u},\bt{u},\thb{u}\}$. Directly by polynomial manipulation it can be confirmed that for each quad-equation (\ref{QQ4})--(\ref{QH2}) the system (\ref{cubesys}) is consistent in this sense, and in fact determines four possible solutions $\{\th{u},\hb{u},\bt{u},\thb{u}\}$. 
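On the simplest model these symmetry properties can again be confirmed mechanically; the sympy sketch below (ours) checks the symmetry (\ref{covar}) for the H2$^*$ polynomial, with `ut`, `uh`, `uth` standing for the shifted variables.

```python
# Verification sketch (ours): the parameter/variable symmetry (covar) for H2*.
import sympy as sp

u, ut, uh, uth, p, q = sp.symbols('u ut uh uth p q')

def Q(u, ut, uh, uth, p, q):
    """The H2* quad polynomial Q_{p,q}(u, u-tilde, u-hat, u-tilde-hat)."""
    return ((p - q)*(p*(u - ut + uh - uth)**2 - q*(u - uh + ut - uth)**2)
            + (u - uth)*(ut - uh)*((u - uth)*(ut - uh)
                                   - 2*(p - q)*(u + ut + uh + uth)))

# Q_{p,q}(u,u~,u^,u~^) = Q_{q,p}(u,u^,u~,u~^): exchanging the two lattice
# directions exchanges the parameters
print(sp.expand(Q(u, ut, uh, uth, p, q) - Q(u, uh, ut, uth, q, p)))  # -> 0
```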
Alternatively, and more explicitly, the consistency property can be verified by reformulating the equation as a single-valued system as described in Section \ref{SVS}. For instance in the case of the primary model (\ref{QQ4}) a direct calculation yields the following expression that determines $\thb{u}$ in terms of the initial data on the cube: \begin{equation} \fl \eqalign{ u(c^2-c^{-2})[p(q-r)^2(\hat{u}\bar{u}-\tilde{u}\thb{u})+q(r-p)^2(\bar{u}\tilde{u}-\hat{u}\thb{u})+r(p-q)^2(\tilde{u}\hat{u}-\bar{u}\thb{u})]\\ -(q-r)(r-p)(\bar{u}-\thb{u})[(c^{-1}-cpq)(u^2+\tilde{u}\hat{u})+(c^{-1}pq-c)(1+u^2\tilde{u}\hat{u})+2\sigma_1\sigma_2]\\ -(r-p)(p-q)(\tilde{u}-\thb{u})[(c^{-1}-cqr)(u^2+\hat{u}\bar{u})+(c^{-1}qr-c)(1+u^2\hat{u}\bar{u})+2\sigma_2\sigma_3]\\ -(p-q)(q-r)(\hat{u}-\thb{u})[(c^{-1}-crp)(u^2+\bar{u}\tilde{u})+(c^{-1}rp-c)(1+u^2\bar{u}\tilde{u})+2\sigma_3\sigma_1]=0,\\ }\label{cubesol} \end{equation} where $\sigma_1$, $\sigma_2$ and $\sigma_3$ are determined from the initial data through equations (\ref{dvdef}) and \begin{equation} \fl \sigma_3^2 = \frac{c}{2}\left(1+u^2\bar{u}^2+\bar{u}^2r^2+r^2u^2\right)-\frac{1}{2c}\left(u^2\bar{u}^2r^2+u^2+\bar{u}^2+r^2\right)-\left(c^2-\frac{1}{c^2}\right)u\bar{u}r.\label{s3} \end{equation} Although the transformation group $(\sigma_1,\sigma_2,\sigma_3)\mapsto(\pm\sigma_1,\pm\sigma_2,\pm\sigma_3)$ leaving (\ref{dvdef}), (\ref{s3}) invariant contains eight elements, the initial data $\{u,\tilde{u},\hat{u},\bar{u}\}$ leads to only four distinct values of $\thb{u}$ due to the symmetry $(\sigma_1,\sigma_2,\sigma_3)\mapsto(-\sigma_1,-\sigma_2,-\sigma_3)$ of (\ref{cubesol}). It is important to note that the consistency property of the associated single-valued system can be broken by reversing sign of all discriminant terms in (\ref{quadsys}). This subtlety reflects the fact that polynomial system (\ref{cubesys}) determines only four possible values of $\thb{u}$ from the initial data, and not eight as might be expected. 
Other polynomials given in Table \ref{svms} are also chosen with the consistency property in mind, so that in particular the sign chosen for the discriminant terms is important. The consistency property of models (\ref{QQ4})--(\ref{QH2}) means that their more general setting is for a dependent variable $u$ defined on $\mathbb{Z}^d$, where $d>1$ is the dimension, determined by the system \begin{equation} \mathcal{Q}_{p_i,p_j}(u,\mathsf{T}_iu,\mathsf{T}_ju,\mathsf{T}_i\mathsf{T}_ju)=0, \quad i,j\in\{1\ldots d \}.\label{mds} \end{equation} Here $p_1,\ldots,p_d$ are a set of parameters, while $\mathsf{T}_1,\ldots,\mathsf{T}_d$ are shift operators in each dimension. Probably the simplest Cauchy problem for (\ref{mds}) is to specify initial data on the coordinate axes, but actually this multidimensional setting is the departure point for an extremely rich variety of initial value problems and lattice configurations \cite{BS,AdVeQ,AS}. This also aligns with the notion of solvability for models in statistical mechanics \cite{baxter}. From a slightly different perspective, the consistency property yields a great deal of control over the solution structure of these models by immediately providing a natural auto-B\"acklund transformation. This allows one, for example, to construct exact solutions as periodic B\"acklund chains (the discrete analogue of finite-gap solutions), as well as soliton-type solutions from B\"acklund iteration. We remark that on the domain of a hypercube, $\{0,1\}^d$, associated edge variables can be obtained (as described in Section \ref{SVS}) for {\it any} solution of (\ref{mds}); this domain is therefore singled out as one on which the quad equation and its autonomous single-valued reformulation are globally equivalent. \section{Transformations to ABS equations}\label{transformations} All equations listed by ABS in \cite{ABS}, with the exception of Q4, are mutually related via B\"acklund or Miura type transformations \cite{NC,James,NAH-Sol,James2}. 
Table \ref{bts} here lists similar transformations connecting all except the primary multi-quadratic model Q4$^*$, specifically equations (\ref{QQ3})--(\ref{QH2}), back to multi-affine equations from the ABS list. Before describing these transformations in more detail we first recall the ABS list in (\ref{Q4})--(\ref{H1}) below. \subsection{The ABS list}\label{ABSlist} Q4: \begin{equation} \eqalign{ \fl\quad \mathop{\mathrm{sn}}\nolimits(\alpha)(v\tilde{v}+\hat{v}\hat{\tilde{v}})-\mathop{\mathrm{sn}}\nolimits(\beta)(v\hat{v}+\tilde{v}\hat{\tilde{v}})\\ -\mathop{\mathrm{sn}}\nolimits(\alpha-\beta)[\tilde{v}\hat{v}+v\hat{\tilde{v}}-k\mathop{\mathrm{sn}}\nolimits(\alpha)\mathop{\mathrm{sn}}\nolimits(\beta)(1+v\tilde{v}\hat{v}\hat{\tilde{v}})]=0, } \label{Q4} \end{equation} Q3: \begin{equation} \eqalign{ \fl\quad(\alpha-1/\alpha)(v\tilde{v}+\hat{v}\th{v})-(\beta-1/\beta)(v\hat{v}+\tilde{v}\th{v})\\ -(\alpha/\beta-\beta/\alpha)[\tilde{v}\hat{v}+v\th{v}+\delta^2(\alpha-1/\alpha)(\beta-1/\beta)/4]=0, }\label{Q3} \end{equation} Q2: \begin{equation} \eqalign{ \fl\quad \alpha(v-\hat{v})(\tilde{v}-\th{v})-\beta(v-\tilde{v})(\hat{v}-\th{v})\\ +\alpha\beta(\alpha-\beta)(v+\tilde{v}+\hat{v}+\th{v}-\alpha^2+\alpha\beta-\beta^2)=0, }\label{Q2} \end{equation} Q1: \begin{equation} \alpha(v-\hat{v})(\tilde{v}-\th{v})-\beta(v-\tilde{v})(\hat{v}-\th{v})+\delta^2 \alpha\beta(\alpha-\beta)=0, \label{Q1} \end{equation} A2: \begin{equation} \fl\quad (\alpha-1/\alpha)(v\hat{v}+\tilde{v}\th{v})-(\beta-1/\beta)(v\tilde{v}+\hat{v}\th{v}) - (\alpha/\beta-\beta/\alpha)(1+v\tilde{v}\hat{v}\th{v})=0, \label{A2} \end{equation} A1: \begin{equation} \alpha(v+\hat{v})(\tilde{v}+\th{v})-\beta(v+\tilde{v})(\hat{v}+\th{v})- \delta^2\alpha\beta(\alpha-\beta)=0, \label{A1} \end{equation} H3: \begin{equation} \alpha(v\tilde{v}+\hat{v}\th{v})-\beta(v\hat{v}+\tilde{v}\th{v})+\delta(\alpha^2-\beta^2)=0, \label{H3} \end{equation} H2: \begin{equation} 
(v-\th{v})(\tilde{v}-\hat{v})-(\alpha-\beta)(v+\tilde{v}+\hat{v}+\th{v}+\alpha+\beta)=0, \label{H2} \end{equation} H1: \begin{equation} (v-\th{v})(\tilde{v}-\hat{v})-\alpha+\beta=0. \label{H1} \end{equation} In (\ref{Q4})--(\ref{H1}) we have reproduced the ABS list. Wherever it appears, $\delta\in\{0,1\}$, and for Q4 (\ref{Q4}) $k\in \mathbb{C}\setminus\{0,1,-1\}$ is the modulus of the Jacobi elliptic function $\mathop{\mathrm{sn}}\nolimits$. The parametrisations coincide with the ones given by ABS in \cite{ABS} except in the case of Q4; this canonical form (the Jacobi form) was obtained by Hietarinta in \cite{Hie}. Each ABS equation is defined by a polynomial $\mathcal{Q}=\mathcal{Q}(v,\tilde{v},\hat{v},\th{v})$ that is of degree one in each of the four variables. It was shown in \cite{ABS} that this polynomial is characterised in terms of biquadratics through a generalised discriminant formula as follows \begin{equation} \fl\eqalign{ (\partial_{\hat{v}}\mathcal{Q})(\partial_{\th{v}}\mathcal{Q})-(\partial_{\hat{v}}\partial_{\th{v}}\mathcal{Q})\mathcal{Q} \propto H_1(v,\tilde{v}), \quad (\partial_v\mathcal{Q})(\partial_{\tilde{v}}\mathcal{Q})-(\partial_v\partial_{\tilde{v}}\mathcal{Q})\mathcal{Q} \propto H_1(\hat{v},\th{v}),\\ (\partial_{\tilde{v}}\mathcal{Q})(\partial_{\th{v}}\mathcal{Q})-(\partial_{\tilde{v}}\partial_{\th{v}}\mathcal{Q})\mathcal{Q} \propto H_2(v,\hat{v}), \quad (\partial_v\mathcal{Q})(\partial_{\hat{v}}\mathcal{Q})-(\partial_v\partial_{\hat{v}}\mathcal{Q})\mathcal{Q} \propto H_2(\tilde{v},\th{v}). 
}\label{gendisc} \end{equation} The polynomials $H_1$ and $H_2$ appearing in (\ref{gendisc}) coincide with those in (\ref{disc}) up to an overall constant, provided we make the following associations between parameters $\alpha$ and $\beta$ appearing in the multi-affine equations (\ref{Q4})--(\ref{H2}) and parameters $p$ and $q$ appearing in their multi-quadratic counterparts (\ref{QQ4})--(\ref{QH2}), \begin{equation} \begin{array}{rlll} Q4:& p=\sqrt{k}\mathop{\mathrm{sn}}\nolimits(\alpha+K),& q=\sqrt{k}\mathop{\mathrm{sn}}\nolimits(\beta+K),& c=\sqrt{k},\\ Q3,A2:& p=(\alpha+1/\alpha)/2,&q=(\beta+1/\beta)/2,\\ Q2:& p=\alpha^2,&q=\beta^2,\\ Q1,A1:& p=\delta^2\alpha^2,&q=\delta^2\beta^2,\\ H3:& p=1/\alpha,&q=1/\beta,& \delta^2\rightarrow \delta,\\ H2:& p=\alpha,&q=\beta, \end{array} \label{pa} \end{equation} where in the case of Q4 the parameter $K$ is standard notation for the quarter period of the $\mathop{\mathrm{sn}}\nolimits$ function (with modulus $k$) satisfying $\mathop{\mathrm{sn}}\nolimits(K)=1$, $\mathop{\mathrm{sn}}\nolimits'(K)=0$. We remark that for the multi-affine equations H1 (the lattice potential KdV equation \cite{we,NC}), Q1$_{\delta=0}$ (the lattice Schwarzian KdV equation \cite{NC}) and A1$_{\delta=0}$ the associated biquadratic is the square of a polynomial which is degree one in each variable, and as mentioned before their multi-quadratic counterparts factorise into products of multi-affine polynomials which we have excluded as degenerate cases here. \subsection{An example transformation}\label{BTex} \begin{table}[t] \begin{tabular}{lll} Eq. in $u$ & B\"acklund transformation & Eq. 
in $v$ \\ \hline Q3$^*_{p=2\alpha^2-1}$ & $\alpha[(\tilde{u}+\delta)\tilde{v}^2+(u+\delta)v^2]=(\tilde{u}+u+2\delta\alpha^2)\tilde{v}v$ & Q3$_{\delta=0}$ \\ Q2$^*_{p=\alpha}$ & $(\tilde{v}-v)(\tilde{u}\tilde{v}-uv)+\alpha \tilde{v}v=0$ & Q1$_{\delta=0}$\\ A2$^*_{p=2(\alpha^2+1)/(\alpha^2-1)}$ &$\alpha (\tilde{v}^2v^2+1)=\tilde{v}v[\tilde{u}u(\alpha^2-1)+(\alpha^2+1)]$ & A2\\ H3$^*_{p=4/\alpha^2}$ & $\tilde{u}u=\tilde{v}v(\tilde{v}v+\delta\alpha)$ & H3\\ H3$^*_{p= 4/\alpha}$ & $4\alpha\tilde{u}u=(\tilde{v}+v)^2-\delta^2\alpha^2$ & A1 \\ A1$^*_{p=\alpha^2}$ & $2 \tilde{v} v(\tilde{u}+u)=\alpha(\tilde{v}^2 v^2+1)$ & H3$_{\delta=0}$\\ A1$^*_{p=\alpha}$ & $2 (\tilde{v}+v)(\tilde{u}+u)= (\tilde{v}+v)^2+\alpha$ & A1$_{\delta=0}$\\ H2$^*_{p=-\alpha}$ & $\tilde{u}+u-\alpha=(\tilde{v}+v)^2$ & H1 \\ \hline A2$^*_{p=2\alpha^2-1}$ & $\alpha(\tilde{u}u\tilde{v}^2v^2+1)=(\tilde{u}u+1)\tilde{v}v$ & A2\\ H3$^*_{p=4/\alpha}$ & $4\alpha \tilde{u}u=(\tilde{v}-v)^2-\delta^2\alpha^2$ & Q1 \\ Q1$^*_{p=\alpha^2}$ & $2\tilde{v}v(\tilde{u}-u)=\alpha( \tilde{v}^2 v^2+1)$ & H3$_{\delta=0}$\\ Q1$^*_{p=\alpha}$ & $ 2(\tilde{v}-v)(\tilde{u}-u)=(\tilde{v}-v)^2+\alpha$ & Q1$_{\delta=0}$\\ \hline \end{tabular} \caption{B\"acklund transformations connecting equations from the multi-quadratic class to the multi-affine (ABS) class. Details of implementing the transformations are explained in Section \ref{BTex}. Transformations are given up to composition with point symmetries of the equations in $u$ and $v$. For completeness we include in the second part of the table some transformations that can be obtained from those in the first part by composition with (non-autonomous) point transformations. The transformation connecting H2$^*$ to H1 was given originally in \cite{AdVeQ} up to composition with point symmetries. 
Methods to obtain the transformations are described in the text, first by non-symmetric degeneration of the auto-B\"acklund transformation in Section \ref{nsd}, second by taking advantage of a natural connection with Yang-Baxter maps in Section \ref{idolons}, and third in Section \ref{quadlin} by exploiting the fact that the transformations, like the equations themselves, can be characterised by discriminant properties of the defining polynomial. }\label{bts} \end{table} To explain precisely the meaning of the entries in Table \ref{bts} we give here an explicit example, specifically the second entry of the table. Consider the coupled system of equations \begin{equation} (v-\tilde{v})(uv-\tilde{u}\tilde{v})+\alpha v \tilde{v} = 0, \quad (v-\hat{v})(uv-\hat{u}\hat{v})+\beta v\hat{v} = 0,\label{bt} \end{equation} which involve two functions $u=u(n,m)$ and $v=v(n,m)$, where as usual $n,m\in\mathbb{Z}$, $\tilde{u}=u(n+1,m)$ and $\hat{u}=u(n,m+1)$, etc. For a fixed function $v=v(n,m)$ the equations (\ref{bt}) are coupled discrete Riccati equations for $u$, which are compatible if $v$ satisfies Q1$_{\delta=0}$. Due to this choice of $v$, the function $u=u(n,m)$ that emerges as the solution of (\ref{bt}) then satisfies equation Q2$^*$, that is (\ref{QQ2}), with parameter associations $p=\alpha$ and $q=\beta$ (notice that this parameter association is different from the one in (\ref{pa})). For brevity the second equation in (\ref{bt}) and the parameter association between $q$ and $\beta$ are omitted from Table \ref{bts} because they can be inferred from the first equation in (\ref{bt}) and the association between $p$ and $\alpha$ that have been listed. The transformation defined by (\ref{bt}) from $v$ to $u$ is therefore a quite standard discrete-Riccati type of B\"acklund transformation, see for instance \cite{NC,BS,James}. 
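The compatibility statement can be verified directly. The following sympy sketch (ours) composes the two Riccati maps of (\ref{bt}) in either order around a quad and checks that the results agree precisely when $v$ satisfies Q1$_{\delta=0}$; here `vt`, `vh`, `vth` stand for $\tilde{v}$, $\hat{v}$, $\th{v}$, and `al`, `be` for $\alpha$, $\beta$.

```python
# Verification sketch (ours): the Riccati system (bt) for u is compatible
# exactly when v solves Q1 with delta = 0.
import sympy as sp

u, v, vt, vh, vth, al, be, w = sp.symbols('u v vt vh vth al be w')

def step(U, V, Vs, P):
    """Solve one Riccati equation of (bt) for the shifted u-variable."""
    return sp.solve((V - Vs)*(U*V - w*Vs) + P*V*Vs, w)[0]

# u-tilde-hat computed along the two paths around the quad
uth1 = step(step(u, v, vt, al), vt, vth, be)   # tilde shift, then hat shift
uth2 = step(step(u, v, vh, be), vh, vth, al)   # hat shift, then tilde shift

# Q1 (delta = 0) for v, with parameters p = al, q = be
Q1 = al*(v - vh)*(vt - vth) - be*(v - vt)*(vh - vth)

# on solutions of Q1 the two paths agree
vth_sol = sp.solve(Q1, vth)[0]
print(sp.cancel((uth1 - uth2).subs(vth, vth_sol)))  # -> 0
```

The difference of the two compositions is $u$-independent and its numerator carries Q1$_{\delta=0}$ as a factor, which is the compatibility condition quoted in the text.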
However the system (\ref{bt}) also defines an inverse transformation from $u$ to $v$ which is less standard because the equations for $v$ are not of Riccati type. Similar to verifying the consistency property of the multi-quadratic models (\ref{QQ4})--(\ref{QH2}) (cf. Section \ref{MDC}), it is convenient to handle this system by introducing auxiliary variables. Specifically for the example (\ref{bt}) the auxiliary variables enter through the edge relations \begin{equation} \eqalign{ \sigma_1^2 = \frac{1}{4}\left(u^2+\tilde{u}^2+\alpha^2\right)-\frac{1}{2}\left(u\tilde{u}+\tilde{u}\alpha+\alpha u\right),\\ \sigma_2^2 = \frac{1}{4}\left(u^2+\hat{u}^2+\beta^2\right)-\frac{1}{2}\left(u\hat{u}+\hat{u}\beta+\beta u\right), } \end{equation} so in fact they are the Q2$^*$ auxiliary variables. They allow (\ref{bt}) to be re-written as \begin{equation} 2\tilde{u}\tilde{v}=(u+\tilde{u}-\alpha-2\sigma_1)v, \quad 2\hat{u}\hat{v}=(u+\hat{u}-\beta-2\sigma_2)v, \label{bt2} \end{equation} which have degree-one polynomial dependence on $v$, $\tilde{v}$ and $\hat{v}$. Implementation of the transformation from $u$ to $v$ therefore requires a solution $(u,\sigma_1,\sigma_2)$ of the single-valued reformulation of Q2$^*$ (cf. Table \ref{svms}). From such a solution the system (\ref{bt2}) determines the function $v$ satisfying Q1$_{\delta=0}$. Similar to the example we have focused on in this section, all transformations listed in Table \ref{bts} are standard Riccati-type systems for $u$. The inverse transformation to obtain $v$ is always more involved, reducing to a Riccati-type system for $v$ only after the introduction of auxiliary variables associated with the solution of the multi-quadratic quad equation for $u$. \subsection{Obtaining the transformations}\label{nsd} It is rare to have a systematic method to obtain non-local transformations of B\"acklund or Miura type between a given pair of equations. 
The transformations obtained here do, however, fit into a general framework: they are defined in terms of polynomials which are characterised by their discriminants, a framework which is therefore consistent with the main theme of this paper. We exploit this general point of view in Section \ref{quadlin}; first, however, we use more direct methods to obtain most of the transformations listed in Table \ref{bts}. A method involving {\it non-symmetric degeneration} of the auto-B\"acklund transformation is explained in this section, while transformations obtained through a connection with Yang-Baxter maps will be explained in detail in the following section (Section \ref{idolons}). A method to obtain B\"acklund transformations between distinct quad equations was developed in \cite{James}; it is constructive insofar as it takes the already obtained equations as its starting point. It relies on the natural auto-B\"acklund transformation, which for the equations in question is inherent from the multidimensional consistency, and exploits this in conjunction with the hierarchical relationships between equations. Specifically, we seek to connect an equation to its degenerate counterpart by making a non-symmetric degeneration of the auto-B\"acklund transformation. The principal example where the non-symmetric degeneration technique has been used is to obtain the second transformation in Table \ref{bts} (which was also the example considered in Section \ref{BTex}). We exploit the fact that the substitution \begin{equation} u=\frac{v}{\epsilon(1+v)} \end{equation} into equation Q2$^*$ yields equation Q1$_{\delta=0}$ to first order as $\epsilon \longrightarrow 0$.
The natural auto-B\"acklund transformation for Q2$^*$ (\ref{QQ2}) is as follows \begin{equation} \mathcal{Q}_{p,r}(u,\tilde{u},v,\tilde{v})=0, \quad \mathcal{Q}_{q,r}(u,\hat{u},v,\hat{v})=0, \end{equation} where $\mathcal{Q}_{p,q}$ is the defining polynomial of this equation, and the transformation connects a solution $u=u(n,m)$ to another solution of the same equation $v=v(n,m)$. Applying the degeneration procedure to the solution $v$ and judiciously choosing the B\"acklund parameter, we are led to write \begin{equation} \fl \mathcal{Q}_{p,r}(u,\tilde{u},rv/(1+v),r\tilde{v}/(1+\tilde{v}))=0, \quad \mathcal{Q}_{q,r}(u,\hat{u},rv/(1+v),r\hat{v}/(1+\hat{v}))=0, \end{equation} which at leading order as $r=1/\epsilon\longrightarrow \infty$ yields exactly the desired non-auto B\"acklund transformation (\ref{bt}). \subsection{From Yang-Baxter maps to B\"acklund transformations} \label{idolons} The Yang-Baxter maps given in \cite{ABSf}, when suitably interpreted as a system of equations for functions defined on edges of the lattice, are naturally connected with multi-affine quad equations from the ABS list \cite{ABS}, in particular a potential function for the edge variables is governed by a quad equation. A systematic method to obtain the Yang-Baxter system on edge variables starting from the multi-affine equations from the ABS list was given in \cite{Tasos}. Developments reported in \cite{KaNie,KaNie3} go further, but in the opposite direction, showing that also non-multi-affine multidimensionally consistent quad equations can emerge as potentials for the Yang-Baxter systems. Furthermore, as we shall see in this section, two different quad equations emerging in this way from the same Yang-Baxter system can immediately be connected through a B\"acklund-type transformation. Of particular relevance here are the models that have been considered in \cite{KaNie5} alongside the present work.
This section is devoted to recalling the relevant models from \cite{ABSf} and giving details of the theory developed in \cite{KaNie,KaNie3,KaNie5} which, combined with the methods developed in this paper, can be used to obtain many of the transformations in Table \ref{bts}. The starting point is a system of equations for two variables, say $s$ and $t$, assigned to the edges of $\mathbb{Z}^2$ oriented in the $n$ and $m$ directions respectively (similar to $\sigma_1$ and $\sigma_2$ introduced in Section \ref{SVS}, cf. Figure \ref{quadpic}). The particular systems relevant here are as follows. {\noindent $F_I$:} \begin{equation} \label{A2I} \eqalign{ \hat{s} = t\frac{\alpha (1-{\beta}^2) s - \beta (1-{\alpha}^2) t-{\alpha}^2+{\beta}^2}{\beta (1-{\alpha}^2) s - \alpha (1-{\beta}^2) t+({\alpha}^2-{\beta}^2)st}, \\ \tilde{t} = s\frac{\beta (1-{\alpha}^2) t - \alpha (1-{\beta}^2) s-{\beta}^2+{\alpha}^2}{\alpha (1-{\beta}^2) t - \beta (1-{\alpha}^2) s+({\beta}^2-{\alpha}^2)st}, } \end{equation} $F_{II}$: \begin{equation} \label{II} \eqalign{ \hat{s}= t\frac{\alpha s-\beta t-\delta(\alpha^2-\beta^2)}{\beta s-\alpha t},\\ \tilde{t}= s\frac{\alpha s-\beta t-\delta(\alpha^2-\beta^2)}{\beta s-\alpha t}, } \end{equation} $F_{III}$: \begin{equation} \label{III} \eqalign{ \hat{s}= t \frac{\alpha s-\beta t}{\beta s-\alpha t},\\ \tilde{t}= s \frac{\alpha s-\beta t}{\beta s-\alpha t}, } \end{equation} $F_{V}$: \begin{equation} \label{V} \eqalign{ \hat{s}=t+\frac{\alpha-\beta}{s-t},\\ \tilde{t}=s+\frac{\alpha-\beta}{s-t}, } \end{equation} which are nothing but the quadrirational Yang-Baxter maps presented in \cite{ABSf}, suitably interpreted as equations on the lattice (we have omitted model $F_{IV}$ because it is not used here). The relevant feature of systems (\ref{A2I})--(\ref{V}) is that they admit a three-parameter family of potentials; we denote the potential here by $f$.
The potentials corresponding to these models, which are derived in \cite{KaNie5}, are as follows: \begin{equation} \label{A2p} \fl (F_{I})\quad \begin{array}{l} \tilde{f}+f = A\ln(s)+B\ln(s-\alpha)+C\ln(\alpha s-1)+\frac{1}{2}(B+C)\ln({\alpha}^2-1),\\ \hat{f}+f = A\ln(t)+B\ln(t-\beta)+C\ln(\beta t-1)+\frac{1}{2}(B+C)\ln({\beta}^2-1), \end{array} \end{equation} \begin{equation} \fl (F_{II})\quad \begin{array}{l} \tilde{f}+f=A\alpha (2s-\delta\alpha)+B \ln(s) +C \ln (s-\delta\alpha),\\ \hat{f}+f=A\beta (2t-\delta\beta)+B \ln(t) +C \ln (t-\delta\beta), \end{array} \end{equation} \begin{equation} \fl (F_{III})\quad \begin{array}{l} \tilde{f}+f=A\ln(s)+\alpha B s+\displaystyle {C}/{s},\\ \hat{f}+f=A\ln(t)+\beta B t+\displaystyle {C}/{t}, \end{array} \end{equation} \begin{equation} \fl (F_{V})\quad \begin{array}{l} \tilde{f}+f=As+B(s^2+\alpha)+C(s^3+3 \alpha s),\\ \hat{f}+f=At+B(t^2+\beta)+C(t^3+3 \beta t), \end{array} \label{Vp} \end{equation} The parameters $A$, $B$ and $C$ may be chosen freely in each case. For the purpose of illustration we focus on the system (\ref{A2I}). In particular writing $f=\ln(v)$ for the potential corresponding to the choice of the parameters $A=1$, $B=C=0$ in (\ref{A2p}) we find \begin{equation} \label{vv} \tilde{v}v = s, \quad \hat{v}v = t, \end{equation} whilst writing $f=\ln(u)$ for the potential corresponding to the choice of the parameters $A=-1$, $B=C=1$, (\ref{A2p}) becomes \begin{equation} \label{uu} \tilde{u}u =\frac{(s-\alpha)(\alpha s-1)}{s({\alpha}^2-1)}, \quad \hat{u}u =\frac{(t-\beta)(\beta t-1)}{t({\beta}^2-1)}. 
\end{equation} Eliminating $s$ and $t$ from (\ref{A2I}) by means of (\ref{vv}) we find that $v$ satisfies equation A2 (\ref{A2}), whilst eliminating $s$ and $t$ from (\ref{A2I}) using (\ref{uu}) we find that the variable $u$ satisfies equation A2$^*$ (\ref{QA2}) with the parameter associations \[p =2\frac{\alpha^2+1}{\alpha^2-1},\quad q =2\frac{\beta^2+1}{\beta^2-1}.\] Equations governing the potential $f$ for different choices of the parameters $A$, $B$ and $C$ are referred to in \cite{KaNie} as {\em idolons} of the underlying system governing $s$ and $t$; in particular we have shown that equations A2 and A2$^*$ are idolons of (\ref{A2I}). A B\"acklund transformation can be obtained by composing these relations, that is by eliminating $s$ and $t$ from formulas (\ref{vv}) and (\ref{uu}), \begin{equation} \label{BTI} \tilde{u}u =\frac{(\tilde{v}v-\alpha)(\alpha \tilde{v}v-1)}{\tilde{v}v({\alpha}^2-1)}, \quad \hat{u}u =\frac{(\hat{v}v-\beta)(\beta \hat{v}v-1)}{\hat{v}v({\beta}^2-1)}. \end{equation} This is the B\"acklund transformation between $A2$ and $A2^*$ that appears in the first part of Table \ref{bts}. The second transformation between these models can be obtained similarly after observing that the potential corresponding to the choice of parameters $A=C=-1$, $B=1$ is also governed by model $A2^*$. Table \ref{mtab} contains the data required to construct, by the same procedure, all but the first two transformations in the first part of Table \ref{bts}.
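As an illustrative consistency check (our addition, not part of the original text): the coupled system $\tilde{f}+f=F(s)$, $\hat{f}+f=G(t)$ admits a well-defined potential $f$ precisely when the closure condition $F(s)+F(\hat{s})=G(t)+G(\tilde{t})$ holds on solutions of the map, since $\hat{f}-\tilde{f}$ can be computed along either path around a lattice square. For $F_V$ and the potentials (\ref{Vp}) this closure can be verified symbolically for generic $A$, $B$, $C$:

```python
import sympy as sp

s, t, alpha, beta, A, B, C = sp.symbols('s t alpha beta A B C')

# The map F_V: hat-s and tilde-t in terms of s and t
d = (alpha - beta) / (s - t)
s_hat, t_tilde = t + d, s + d

# Right-hand sides of the potential relations (Vp)
F = lambda x, p: A*x + B*(x**2 + p) + C*(x**3 + 3*p*x)

# Closure around a lattice square: F(s) + F(s_hat) - G(t) - G(t_tilde) = 0
closure = sp.cancel(sp.expand(F(s, alpha) + F(s_hat, alpha)
                              - F(t, beta) - F(t_tilde, beta)))
assert closure == 0
```

The same closure computation can be repeated for (\ref{A2p})--(\ref{III}) with their respective right-hand sides.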
\begin{table} \begin{tabular}{llll} System & Parameters ($A$,$B$,$C$) & Potential & Equation\\ \hline $F_{I}$ & (-1,1,1) & $f=\ln(u)$ & A2$^*_{p=2(\alpha^2+1)/(\alpha^2-1)}$\\ & (-1,1,-1) & $f=\ln(u)$ & A2$^*_{p=2\alpha^2-1}$\\ & (1,0,0) & $f=\ln(v)$ & A2\\ \hline $F_{II}$ & (0,1,1) & $f=\ln(u)$ & H3$^*_{p=4/\alpha^2}$\\ & (0,0,1) & $f=\ln(v)$ & H3\\ & (1,0,0) & $f=v$ & A1$_{\alpha\rightarrow \alpha^2}$\\ \hline $F_{III}$ & (0,$\textstyle \frac{1}{2}$,$\textstyle \frac{1}{2}$) & $f=u$ & A1$^*_{p=\alpha^2}$\\ & (1,0,0) & $f=\ln(v)$ & H3$_{\delta=0}$\\ & (0,1,0) & $f=v$ & A1$_{\delta=0,\alpha\rightarrow \alpha^2}$\\ \hline $F_{V}$ & (0,1,0) & $f=u$ & H2$^*_{p=-\alpha}$\\ & (1,0,0) & $f=v$ & H1\\ \end{tabular} \caption{ Equations governing different potentials of the systems (\ref{A2I})--(\ref{V}). The potentials are introduced through equations (\ref{A2p})--(\ref{Vp}) with the indicated choice of parameters $(A,B,C)$. It is explained in the text how to construct a transformation between equations (idolons) that govern different potentials of the same system. Parameter associations in the listed equations are given connecting $\alpha$ and $p$ or transforming $\alpha$; similar associations connecting $\beta$ and $q$ or transforming $\beta$ are implicit and are omitted from the table for brevity. } \label{mtab} \end{table} \subsection{Discriminant properties of the transformations} \label{quadlin} All transformations listed in Table \ref{bts} are in the same class: they involve equations on lattice edges, and the defining polynomial $\mathcal{B}=\mathcal{B}(u,\tilde{u},v,\tilde{v})$ is degree-one in each of $u$ and $\tilde{u}$, and degree-two in each of $v$ and $\tilde{v}$.
This defining polynomial also has the following discriminant properties, \begin{equation} \eqalign{ (\partial_{u}\mathcal{B})(\partial_{\tilde{u}}\mathcal{B})-(\partial_u\partial_{\tilde{u}} \mathcal{B})\mathcal{B} \propto \mu{\tilde{\mu}} h(v,\tilde{v}),\\ (\partial_{\tilde{v}}\mathcal{B})^2-2({\partial^2_{\tilde{v}}} \mathcal{B})\mathcal{B} \propto \eta^2 h^*(u,\tilde{u}), }\label{ddd} \end{equation} where $\mu$ and $\eta$ are polynomials in $v$, and $h$, $h^*$ are the edge biquadratics associated with the two equations connected by the B\"acklund transformation. Specifically $h$ coincides with $H_1$ associated with the multi-affine equation in $v$ through the generalised discriminant (\ref{gendisc}), while $h^*$ coincides with $H_1$ associated with the multi-quadratic equation in $u$ through the discriminant formula (\ref{disc}). This discriminant property is an important feature of the transformations that combines asymmetrically the underlying discriminant characterisations of the multi-quadratic models (\ref{QQ3})--(\ref{QH2}) and the multi-affine ABS equations (\ref{Q3})--(\ref{H1}). In particular it may be used as a basis for their construction. One approach to such construction is offered by recognising that the biquadratics themselves (\ref{q4})--(\ref{h2}) can take natural discriminant forms; as the primary example we recognise that (\ref{q3}) is proportional to \[ (\tilde{u}+u+2\delta \alpha^2)^2-4[\alpha(\tilde{u}+\delta)][\alpha(u+\delta)], \qquad 2\alpha^2=p+1. \] It is easily seen that this expression emerges as the discriminant, with respect to $\tilde{v}$ or $v$, of the expression \[ [\alpha(\tilde{u}+\delta)]\tilde{v}^2 - [\tilde{u}+u+2\delta\alpha^2]\tilde{v}v + [\alpha(u+\delta)]v^2, \] which itself coincides with the polynomial defining the first transformation in Table \ref{bts}.
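The displayed discriminant relation can be checked mechanically: the discriminant of the degree-two polynomial with respect to $\tilde{v}$ equals the stated biquadratic expression times $v^2$. A short symbolic verification (our addition; `ut`, `vt` denote $\tilde{u}$, $\tilde{v}$):

```python
import sympy as sp

u, ut, v, vt, alpha, delta = sp.symbols('u ut v vt alpha delta')

# The polynomial defining the first transformation of the table
# (degree two in vt, degree one in u and ut)
poly = alpha*(ut + delta)*vt**2 - (ut + u + 2*delta*alpha**2)*vt*v \
     + alpha*(u + delta)*v**2

# Discriminant with respect to vt (b^2 - 4ac for a quadratic)
disc = sp.discriminant(poly, vt)

# Expected: the stated biquadratic expression, times v^2
expected = ((ut + u + 2*delta*alpha**2)**2
            - 4*(alpha*(ut + delta))*(alpha*(u + delta))) * v**2
assert sp.expand(disc - expected) == 0
```

The discriminant with respect to $v$ gives the same biquadratic times $\tilde{v}^2$, by the symmetry of the polynomial under $(v,\tilde{v},u,\tilde{u})\rightarrow(\tilde{v},v,\tilde{u},u)$.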
The precise choice of this expression is not unique, and in particular the biquadratic obtained from the generalised discriminant in $u$ and $\tilde{u}$ emerges a posteriori after choosing the expression on the basis that the required discriminant property in $v$ and $\tilde{v}$ is obtained. We remark that the classification of polynomials $\mathcal{B}$ with the discriminant properties (\ref{ddd}) is interesting from the point of view of exhausting transformations in the same class as those listed in Table \ref{bts} (such a task is similar to the one solved for multi-affine quad equations in \cite{boll}). \section{Discussion}\label{discussion} The discriminant factorisation property, as laid out in Section \ref{dfh}, allows reformulation of the multi-quadratic quad equation as a system that defines single-valued evolution from initial data. Our main result is the list of models constructed on the basis of this hypothesis in Section \ref{list}; it contains all previously known integrable multi-quadratic quad equations as well as a substantial number of new models that exhibit the same integrability features. The nature of the relationship between the discriminant factorisation property and the integrability is therefore an important question. Here we have made additional assumptions beyond the discriminant factorisation hypothesis; therefore, although our investigations are suggestive, they leave open the problem of determining whether this property is sufficient for integrability in this class of models. Beyond elliptic (and hyperelliptic) functions, degree-two equations studied systematically from the point of view of integrability seem to be relatively rare. An isolated precedent exists within the framework of Painlev\'e analysis for ordinary differential equations.
A list of degree-two counterparts of the Painlev\'e equations was obtained by Chazy \cite{chsy}; loosely speaking, these are second-order ordinary differential equations with the Painlev\'e property that are quadratic in the second derivative term. The difficult step of rigorously classifying this class of equations was made by Cosgrove and Scoufis \cite{CoSc}. In that setting one of the principal features is that the higher degree models do not define new transcendents (see \cite{Cosg}), in particular they are solvable in terms of the degree-one Painlev\'e equations. Here the primary model, namely the equation that we identify as a multi-quadratic counterpart of Q4, is the only new model we have not yet connected back to an equation from the multi-affine class. The parallels between integrable quad-equations and the Painlev\'e-type equations suggest that a B\"acklund-type transformation establishing such connection should exist. Verifying or falsifying this is therefore an important open problem. \ack J.A. is supported by the Australian Research Council, Discovery Grant DP 110104151. M.N. is grateful for financial support from Australian Research Council Discovery Grant DP 0985615, which enabled his stay at the University of Sydney, during which the present studies started. \section*{References} \bibliographystyle{unsrt}
\section{Introduction} \vspace*{-0.5pt} \noindent Within the scope of global climate studies, authors carrying out modeling research use marine ecosystem models with increasing degrees of complexity. Complexity in such models can arise from the number of biological compartments, or state variables, which are taken into account, as well as from the parameterizations used to model interactions between these compartments. The number of variables can vary from one to more than ten. At least two variables, nutrients ($N$) and phytoplankton ($P$), are necessary to model primary production, that is to say the transformation of mineral nutrients into primitive biotic material using external energy provided by the sun (Taylor {\it et al. } \cite{Taylor:1991}). However, in order to study the ocean carbon cycle, the main biological processes which have to be understood and estimated are not only primary production but also the export of organic matter from the surface to deep ocean layers and organic matter remineralization. The simplest model able to represent all these processes contains four variables, nutrients ($N$), phytoplankton ($P$), zooplankton ($Z$) and detritus ($D$). This type of model is termed the NPZD-model and different variants of it are used in numerous studies. All these models are similar from a structural point of view but authors use different parameterizations to model fluxes between biological compartments. Complexity can also arise from the spatial resolution of the physical dynamics to which biological variables are subjected. Many model set-ups are zero-dimensional and biological variables correspond to ocean mixed-layer values (e.g., Fasham {\it et al. } \cite{Fasham:1990}, Steele and Henderson \cite{Steele:1992}, Spitz {\it et al. } \cite{Spitz:1998}, Fennel {\it et al. } \cite{Fennel:2001}).
Others are one-dimensional, considering that the ocean, in some particular places, can be modeled with a good approximation by a turbulent water column (e.g., Prunet {\it et al. } \cite{Prunet:1996a}, Doney {\it et al. } \cite{Doney:1996}, L\'evy {\it et al. } \cite{Levy:1998a}, M\'emery {\it et al. } \cite{Memery:2002}). Finally, in some studies, the biological model is integrated in a three-dimensional circulation model (e.g., Fasham {\it et al. } \cite{Fasham:1993}, Moisan {\it et al. } \cite{Moisan:1996}, L\'evy {\it et al. } \cite{Levy:1998b}, Carmillet {\it et al. } \cite{Carmillet:2001}). The question which has motivated this work is: are all these models well-posed? Of course, it seems difficult to study all of them and in this work we concentrate first on a one-dimensional NPZD-model and then discuss the possible generalization of our result. The three-dimensional version of the model we consider is proposed in L\'evy {\it et al. } \cite{Levy:1998b}, and a one-dimensional version of a similar model, containing six biological variables, is used by Faugeras {\it et al. } \cite{Faugeras:2002} to assimilate data from the JGOFS-DYFAMED time-series station in the North-Western Mediterranean Sea. Mathematically, the biological model under consideration is a system of coupled parabolic semilinear equations to which initial and boundary conditions are added. Under certain hypotheses, this general type of initial-boundary value problem can be transformed into an abstract Cauchy problem and studied using the theory of semigroups (following Chapter 6 of the book by Pazy \cite{Pazy:1983} for example). In their paper Boushaba {\it et al. } \cite{Boushaba:2002} used results on semigroups to provide a mathematical analysis of a model describing the evolution of a single variable, phytoplankton. Although the model they considered is three-dimensional, the biological reaction terms are quite simple since only production and mortality of phytoplankton are represented.
The model we propose here seems to be more realistic and has already been numerically validated using observations from the DYFAMED time-series station (L\'evy {\it et al. } \cite{Levy:1998b}, Faugeras {\it et al. } \cite{Faugeras:2002}). If the data are regular enough the semigroup method can enable the existence of classical solutions to be proved. However, it does not enable parabolic equations with time-dependent irregular coefficients to be easily handled. Since this is the case in the NPZD-model we consider, a variational formulation approach is more attractive. The main purpose of this paper is to address the issue of the existence of weak solutions to this particular one-dimensional model. The method we propose is inspired by the work of Artola \cite{Artola:1989} in which an existence result for a semilinear parabolic system is derived using a fixed-point argument. We introduce an approximate model and prove the existence of weak solutions to this model using this method. We then pass to the limit in the approximate model to prove the existence of weak solutions to the NPZD-model. Furthermore, as the variables of the model represent concentrations, they should be positive. We show that this is the case. We shall now briefly outline the contents of the paper. In the next section we introduce the equations of the one-dimensional NPZD-model and give some comments on the different parameterizations used. In Section 3 we set the mathematical framework and state our main result, which is proved in Sections 4 and 5. The goal of Section 6 is twofold. First we show that the existence and positivity results still hold when different parameterizations found in the literature are used. Second, we address the issue of uniqueness of solutions. In order to prove uniqueness we need the nonlinear reaction terms to satisfy a local Lipschitz condition. We show that this is the case in our particular model.
\vspace*{1pt}\baselineskip=13pt \section{Presentation of the one-dimensional NPZD-model} \label{section:NPZD} \vspace*{-0.5pt} \noindent \subsection{Equations of the model} \label{subsec:NPZD} \noindent In this section we give the equations of the one-dimensional NPZD-model and formulate the initial-boundary value problem which will be studied. Let us first of all justify the use of a one-dimensional model. We have in mind numerical studies (Faugeras {\it et al. } \cite{Faugeras:2002}, L\'evy {\it et al. } \cite{Levy:1998a}, M\'emery {\it et al. } \cite{Memery:2002}) conducted with such one-dimensional models. In these studies simulations are forced with physical data (wind stress, heat fluxes, evaporation-precipitation) and validated by comparison with biogeochemical data (chlorophyll and nitrate) collected at the DYFAMED station. This station is located in the Northwestern Mediterranean Sea and is an interesting test case for several reasons. First, several biogeochemical production regimes that take place in the world ocean are found here. Secondly, the station is far enough away from the Ligurian Current to be sufficiently protected from lateral transport, thereby permitting a one-dimensional study.\\ In the above cited numerical studies the biogeochemical model is integrated in a one-dimensional physical model, which simulates the time evolution of velocity, temperature, salinity and turbulent kinetic energy (TKE). Advection is neglected even though this might result in a crude approximation in summer during strong wind events (Andersen and Prieur \cite{Andersen:2000}). The only dynamic process which is taken into account is vertical diffusion. The one-dimensional NPZD-model consists of four coupled semilinear parabolic equations. Before introducing them let us give some notations. 
In all the following we denote the nutrient, phytoplankton, zooplankton and detritus concentration vector by, $$ {\bf C}=(N,P,Z,D)=(C_1,C_2,C_3,C_4), $$ and the reaction terms by, $$ {\bf f}=(f_N,f_P,f_Z,f_D)=(f_{1},f_2,f_3,f_4). $$ The equations of the NPZD-model read as follows. For $i=1$ to $4$: \begin{equation} \label{eqn:p1} \left \lbrace \begin{array}{ll} \displaystyle \frac{\partial C_i}{\partial t}- \displaystyle \frac{\partial}{\partial x}(d(t,x) \displaystyle \frac{\partial C_i}{\partial x}) + \delta_{i,4} v_d \frac{\partial C_i}{\partial x} = f_i(t,x,{\bf C}), & t \in ]0,T], \quad x \in ]0,L[, \\[7pt] \displaystyle \frac{\partial C_i}{\partial x}(t,0)=\displaystyle \frac{\partial C_i}{\partial x}(t,L)=0, & t \in ]0,T],\\[7pt] C_i(0,x)=C_i^0(x), & x \in ]0,L[, \\ \end{array} \right. \end{equation} with \begin{equation} \label{eqn:p2} \left \lbrace \begin{array}{lll} f_N(t,x,{\bf C})&=&(-\mu_p(1-\gamma) L_I(t,x,P) L_{N}P+\mu_z Z + \mu_d D)\ \hbox{\rm l\hskip -5pt 1}_{]0,l]}(x)\\[7pt] & &+(\tau(P+Z+D))\ \hbox{\rm l\hskip -5pt 1}_{]l,L[}(x),\\[7pt] f_P(t,x,{\bf C})&=&(\mu_p(1-\gamma)L_I(t,x,P)L_NP-G_PZ-m_pP)\ \hbox{\rm l\hskip -5pt 1}_{]0,l]}(x)\\[7pt] & &+(-\tau P)\ \hbox{\rm l\hskip -5pt 1}_{]l,L[}(x),\\[7pt] f_Z(t,x,{\bf C})&=&(a_pG_PZ + a_dG_DZ -m_zZ -\mu_zZ)\ \hbox{\rm l\hskip -5pt 1}_{]0,l]}(x)\\[7pt] & &+(-\tau Z)\ \hbox{\rm l\hskip -5pt 1}_{]l,L[}(x),\\[7pt] f_D(t,x,{\bf C})&=&((1-a_p)G_PZ-a_dG_DZ+m_pP+m_zZ-\mu_dD)\ \hbox{\rm l\hskip -5pt 1}_{]0,l]}(x)\\[7pt] & &+(-\tau D)\ \hbox{\rm l\hskip -5pt 1}_{]l,L[}(x).\\ \end{array} \right. \end{equation} \noindent $T$ is a fixed time. 
In numerical simulations, system (\ref{eqn:p1}) is integrated over a period of time which can vary from one month to a few years.\\ ~\\ $L$ is the depth of the water column under consideration ($L \approx 1000$ m), $l$ is the maximum depth of the euphotic layer ($l \approx 200$ m).\\ ~\\ $\hbox{\rm l\hskip -5pt 1}_{]0,l]}$ and $\hbox{\rm l\hskip -5pt 1}_{]l,L[}$ are the usual indicator functions, $$ \hbox{\rm l\hskip -5pt 1}_{]0,l]}(x)= \left \lbrace \begin{array}{l} 1 \quad {\mathrm{if}} \quad x \in ]0,l],\\ 0 \quad {\mathrm{otherwise}}.\\ \end{array} \right. $$ \noindent $\delta_{i,4}$ is the Kronecker symbol, $$ \delta_{i,4} = \left \lbrace \begin{array}{l} 1 \quad {\mathrm{if}} \quad i=4,\\ 0 \quad {\mathrm{otherwise}}. \end{array} \right. $$ \noindent ~\\ Neumann boundary conditions at $x=0$ and $x=L$ express the fact that there is no flux through the surface of the ocean and through the ocean floor.\\ ~\\ Initial concentrations, $C_i^0$, satisfy $C_i^0(x) \ge 0$ for all $x \in ]0,L[$.\\ ~\\ The different parameters which appear in the reaction terms $f_i$ are strictly positive constants. All of them are shown in Table \ref{tab:paramNPZD}. A schematic representation of the model is shown in Figure \ref{fig:NPZD}. Let us note that parameters $\gamma, a_p$ and $a_d$ satisfy $1-\gamma > 0$, $1-a_p> 0$ and $1-a_d > 0$.\\ The nonlinear functions $L_I, L_N, G_P$ and $G_D$ are given explicitly in the following subsection, and more details about the model can be found in L\'evy {\it et al. } \cite{Levy:1998b}.
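As a sketch of how the reaction terms can be checked numerically (our addition, not part of the paper; it uses the parameter values of Table \ref{tab:paramNPZD} and the parameterizations of $L_N$, $G_P$, $G_D$ given in the next subsection, treating the light factor $L_I$ as given), one can verify that both the surface-layer and deep-layer source terms conserve total nitrogen, i.e. $f_N+f_P+f_Z+f_D=0$:

```python
import numpy as np

# Parameter values from Table 1
mu_p, gamma, k_n = 2.0, 0.05, 0.5
g_z, k_z = 0.75, 1.0
a_p, a_d = 0.7, 0.5
mu_z, mu_d = 0.1, 0.09
m_p, m_z = 0.03, 0.03
tau = 0.05

def reactions_euphotic(N, P, Z, D, L_I=1.0):
    """Surface-layer (x <= l) source terms; L_I is the light factor."""
    L_N = N / (k_n + abs(N))
    G_P = g_z * P**2 / (k_z + P**2)
    G_D = g_z * D**2 / (k_z + D**2)
    prod = mu_p * (1 - gamma) * L_I * L_N * P
    f_N = -prod + mu_z * Z + mu_d * D
    f_P = prod - G_P * Z - m_p * P
    f_Z = a_p * G_P * Z + a_d * G_D * Z - m_z * Z - mu_z * Z
    f_D = (1 - a_p) * G_P * Z - a_d * G_D * Z + m_p * P + m_z * Z - mu_d * D
    return f_N, f_P, f_Z, f_D

def reactions_deep(N, P, Z, D):
    """Deep-layer (x > l) source terms: pure remineralization."""
    return tau * (P + Z + D), -tau * P, -tau * Z, -tau * D

rng = np.random.default_rng(0)
for _ in range(100):
    N, P, Z, D = rng.uniform(0.0, 5.0, size=4)
    for f in (reactions_euphotic(N, P, Z, D), reactions_deep(N, P, Z, D)):
        assert abs(sum(f)) < 1e-12
```

This conservation of the reaction part is consistent with the schematic of Figure \ref{fig:NPZD}, where every sink of one compartment is a source of another; the physical terms (diffusion and detrital sedimentation) only redistribute or export this total.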
\begin{table}[htbp] \caption{ Parameter values \label{tab:paramNPZD} } \begin{tabular}{|l|c|c|c|} \hline parameter& name & value & unit \\ \hline half-saturation\ constant & $k_n$ & 0.5 & $mmolN.m^{-3}$ \\ maximal\ grazing\ rate& $g_z$ &0.75 &$day^{-1}$\\ half-saturation\ constant\ for\ grazing& $k_z$ & 1 & $mmolN.m^{-3}$\\ assimilated\ fraction\ of\ phytoplankton & $a_p$ & 0.7 & \\ assimilated\ fraction\ of\ detritus & $a_d$ & 0.5 & \\ zooplankton\ excretion\ rate &$\mu_z$&0.1&$day^{-1}$\\ phytoplankton\ mortality\ rate &$m_p$ &0.03 & $day^{-1}$\\ zooplankton\ mortality\ rate & $m_z$ &0.03 &$day^{-1}$\\ detritus\ remineralization\ rate& $\mu_d$ & 0.09 & $day^{-1}$\\ detritus\ sedimentation\ speed &$v_d$& 5 & $m.day^{-1}$\\ maximal\ growth\ rate\ & $\mu_p$ & 2 & $day^{-1}$\\ exudation\ fraction & $\gamma$ & 0.05 & \\ remineralization\ rate&$\tau$ & 0.05 &$day^{-1}$\\ \hline \end{tabular} \end{table} \begin{figure*}[!h] \caption{\label{fig:NPZD} Schematic representation of the compartments and processes of the NPZD surface layer model.} \centering \input{schemaNPZD_english.tex} \end{figure*} \subsection{Comments and hypotheses} \label{ssec:comethyp} \noindent \begin{enumerate} \item The mixing or diffusion coefficient, $d(t,x)$, is obtained diagnostically from TKE (Gaspar {\it et al. } \cite{Gaspar:1990}). In modeling studies it is considered, to a first approximation, that biological variables do not influence physical variables. As a consequence biological tracers are vertically mixed with the same coefficient as temperature and salinity. This coefficient is an output of the physical model and serves as data for the biological model. Consequently it does not depend on ${\bf C}$. We are thus given once and for all a mixing coefficient $d(t,x)$. It strongly varies in space and time and we cannot assume it is particularly regular (Lewandosky \cite{Lewandosky:1997}).
The usual basic assumption made in the mathematical literature as well as in numerical studies is the following. We suppose that $$ 0 < d_0 \le d(t,x) \le d_{\infty},\ a.e.\ {\mathrm{in}}\ ]0,T[ \times ]0,L[. $$ \item Because of functions $\hbox{\rm l\hskip -5pt 1}_{]0,l]}$ and $\hbox{\rm l\hskip -5pt 1}_{]l,L[}$, the equations of the model are not the same above and below the depth $l$, which physically corresponds to the depth at which the action of light on the system becomes negligible. This corresponds to a discontinuity of the reaction terms $f_i(t,x,{\bf C})$ at the point $x=l$. This is the modeling choice made by L\'evy {\it et al. } \cite{Levy:1998b}. Above the depth $l$ the reaction terms correspond to the schematic representation of the model shown in Figure \ref{fig:NPZD}. The basic biogeochemical fluxes are represented using a minimum number of prognostic variables. Nutrients allow production to be estimated. Zooplankton mortality and detrital sedimentation feed the particle export flux. Below the depth $l$ remineralization processes are preponderant and the surface model does not apply. Instead, the decay of phytoplankton, zooplankton and detritus into nutrients parameterizes remineralization. More details about the modeled biogeochemical processes can be found in L\'evy {\it et al. } \cite{Levy:1998b}. In the following points we give the analytical expressions of the nonlinear terms which are used. \item $L_N$, $G_P$ and $G_D$ are nonlinear functions. \begin{itemize} \item $L_N$ parameterizes the nutrient limitation on phytoplankton growth. It follows Michaelis-Menten kinetics, $L_{N}=\displaystyle \frac{N}{k_n+N}$. The possible vanishing of the term $k_n+N$ invites us to define, $L_{N}=\displaystyle \frac{N}{k_n+|N|}$. This formulation will be used in the following. We will show that if initial concentrations are positive then concentrations always stay positive, thus the two formulations are equivalent.
Let us remark that, \begin{itemize} \item[] $L_N$ is defined and continuous on $\hbox{\rm I\hskip -1.5pt R}$, \item[] $|L_N(N)| \le 1$, $\forall N \in \hbox{\rm I\hskip -1.5pt R}$. \end{itemize} \item $G_P$ and $G_D$ are the zooplankton grazing rates on phytoplankton and detritus. The formulation used is a squared Michaelis-Menten response function: $$ \begin{array}{l} G_P=\displaystyle \frac{g_zP^2}{k_z+P^2},\\ G_D=\displaystyle \frac{g_zD^2}{k_z+D^2}.\\ \end{array} $$ In the remainder of this paper we use the following properties: \begin{itemize} \item[] $G_P$ and $G_D$ are defined and continuous on $\hbox{\rm I\hskip -1.5pt R}$, \item[] $|G_P| \le g_z $, $\forall P \in \hbox{\rm I\hskip -1.5pt R}$, \item[] $|G_D| \le g_z $, $\forall D \in \hbox{\rm I\hskip -1.5pt R}$. \end{itemize} \end{itemize} \item The limitation of phytoplankton growth by light is parameterized by, $$ L_I(t,x,P)=1-\exp(-PAR(t,x,P)/k_{par}), $$ $k_{par}$ is a positive constant. The photosynthetic available radiation, $PAR$, is predicted from surface irradiance and phytoplankton pigment content by a light absorption model according to L\'evy {\it et al. } \cite{Levy:1998b}. From a biological point of view, the fact that $PAR$ depends on $P$ is important. This models the so-called self-shading effect. We give further details of the parameterization of $PAR$ in Section 6, and here we only suppose it is a positive function, continuous in $P$ for a.e. $t,x$ and measurable in $t,x$ for all $P$. In order to prove the existence result we have to notice that: \begin{itemize} \item[] $L_I$ is defined on $[0,T] \times [0,L] \times \hbox{\rm I\hskip -1.5pt R}$, \item[] $0 \le L_I(t,x,P) \le 1$, a.e. in $[0,T] \times [0,L] \times \hbox{\rm I\hskip -1.5pt R}$, \item[] $(t,x) \rightarrow L_I(t,x,P)$ is measurable, for all $P \in \hbox{\rm I\hskip -1.5pt R}$, \item[] $P \rightarrow L_I(t,x,P)$ is continuous, for a.e. $(t,x) \in [0,T] \times [0,L]$.
\end{itemize} \item Finally, let us note the presence of the advection term $v_d \displaystyle \frac{\partial D}{\partial x}$ in the detritus equation. Detritus, $D$, sinks at speed $v_d$. \end{enumerate} \vspace*{1pt}\baselineskip=13pt \section{Mathematical preliminaries and statement of main result} \vspace*{-0.5pt} \noindent \subsection{Functional spaces} \noindent In this section we introduce the functional spaces which we use in the remainder of this work. All this study is conducted on the open set $]0,L[$ and $T$ is a fixed time. Throughout this work, concentrations, $C_i$, are considered as elements of the functional space $L^2(0,L)$ whose Hilbert space structure is convenient to use. However, let us recall that $L^2(0,L)$ is continuously imbedded into $L^1(0,L)$ which is a natural space for concentrations. ${\bf H}$ and ${\bf H}^1$ are the separable Hilbert spaces defined by $$ \begin{array}{l} {\bf H}=(L^2(0,L))^4,\\ {\bf H}^1=(H^1(0,L))^4. \end{array} $$ ${\bf H}$ is equipped with the scalar product $$ \begin{array}{ll} ({\bf C},\hat{{\bf C}})&= \displaystyle \int_{0}^{L} \displaystyle \sum_{i=1}^{4} C_i(x)\hat{C}_i(x)dx\\[7pt] &= \displaystyle \sum_{i=1}^{4} (C_i,\hat{C}_i)_{L^2(0,L)}.\\ \end{array} $$ We denote by $||.||$ the induced norm on ${\bf H}$.\\ ${\bf H}^1$ is equipped with the scalar product $$ \begin{array}{ll} ({\bf C},\hat{{\bf C}})_1 &= \displaystyle \int_{0}^{L} \displaystyle \sum_{i=1}^{4} C_i(x)\hat{C}_i(x)dx+ \displaystyle \int_{0}^{L} \displaystyle \displaystyle \sum_{i=1}^{4} \displaystyle \frac{\partial C_i(x)}{\partial x} \displaystyle \frac{\partial \hat{C}_i(x)}{\partial x}dx\\[7pt] &= \displaystyle \displaystyle \sum_{i=1}^{4} (C_i,\hat{C}_i)_{L^2(0,L)}+ \displaystyle \displaystyle \sum_{i=1}^{4} (\displaystyle \frac{\partial C_i}{\partial x},\displaystyle \frac{\partial \hat{C}_i}{\partial x})_{L^2(0,L)}.\\ \end{array} $$ We denote by $||.||_1$, the induced norm on ${\bf H}^1$.
We will also have to consider the space ${\bf L}^{\infty} = (L^{\infty}(0,L))^4$. $L^{\infty}(0,L)$ is a Banach space equipped with the norm $$ ||C_i||_{\infty} = \inf \lbrace M;|C_i(x)| \le M \ a.e. \ in \ (0,L) \rbrace. $$ Similarly ${\bf L}^{\infty}$ is a Banach space equipped with the norm $$ ||{\bf C}||_{\infty} = \sup_{i=1, ..., 4} ||C_i||_{\infty}. $$ Now, if $X$ is a real Banach space equipped with the norm $||.||_X$, $C([0,T],X)$ is the space of continuous functions on $[0,T]$ with values in $X$, equipped with the norm, $$ ||{\bf C}||_{C([0,T],X)}=\sup_{[0,T]} ||{\bf C}(t)||_X. $$ Similarly $L^2(0,T,X)$ is the space of functions $L^2$ in time with values in $X$, equipped with the norm, $$ ||{\bf C}||_{L^2(0,T,X)}=(\int_{0}^{T}||{\bf C}(t)||_X^2 dt)^{1/2}, $$ and $L^{\infty}(0,T,X)$ is the space of functions $L^{\infty}$ in time with values in $X$, equipped with the norm, $$ ||{\bf C}||_{L^{\infty}(0,T,X)}= \inf \lbrace M;||{\bf C}(t)||_X \le M \ a.e. \ in \ (0,T) \rbrace. $$ $C([0,T],X)$, $L^2(0,T,X)$ and $L^{\infty}(0,T,X)$ are Banach spaces. We have the following useful result. \begin{lemma} \label{lemma:injection} The imbedding, ${\bf H}^1 \subset {\bf L}^{\infty}$, is continuous.\\ The imbeddings, ${\bf H}^1 \subset {\bf H}$ and ${\bf H}^1 \subset C([0,L],\hbox{\rm I\hskip -1.5pt R}^4)$ are compact.\\ \end{lemma} \begin{proof} It is a consequence of corollaries IX.14 and IX.16 in Br\'ezis \cite{Brezis:1992}, and of the Rellich-Kondrachov theorem (Lions and Magenes \cite{Lions:1968}) \end{proof} ${\bf H}'$ denotes the dual of ${\bf H}$ and $({\bf H}^1)'$ the dual of ${\bf H}^1$. When ${\bf H}$ is identified with its dual, we have the classical scheme, $$ {\bf H}^1 \subset {\bf H}={\bf H}' \subset ({\bf H}^1)', $$ where each space is dense in the following one and the imbeddings are continuous. Let us denote by $W({\bf H}^1)$ the Hilbert space, $$ W({\bf H}^1) = \lbrace {\bf C} \in L^2(0,T,{\bf H}^1); \frac{d {\bf C}}{dt} \in L^2(0,T,({\bf H}^1)') \rbrace.
$$ \begin{lemma} Every ${\bf C} \in W({\bf H}^1)$ is a.e. equal to a continuous function from $[0,T]$ to ${\bf H}$. Moreover we have the following continuous imbedding, $$ W({\bf H}^1) \subset C([0,T],{\bf H}). $$ \end{lemma} \begin{proof} See Dautray and Lions \cite{Dautray:1988b} for example \end{proof} Moreover, because the imbedding ${\bf H}^1 \subset {\bf H}$ is compact, we know that, \begin{lemma} \label{lemma:compacite} The identity mapping, $W({\bf H}^1) \subset L^2(0,T,{\bf H})$, is compact.\\ \end{lemma} \begin{proof} See Aubin \cite{Aubin:1963} or Lions \cite{Lions:1969} \end{proof} \subsection{A preliminary transformation of the system and the bilinear form $a(t,{\bf C},{\bf C}')$} \noindent In order to work with a bilinear form as simple as possible, we start by adding $\lambda C_i$ to both sides of system (\ref{eqn:p1}). The value of $\lambda > 0$ will be fixed in what follows. This leads to the equivalent system, for $i=1$ to $4$: \begin{equation} \label{eqn:p1equi} \left \lbrace \begin{array}{ll} \displaystyle \frac{\partial C_i}{\partial t}- \displaystyle \frac{\partial}{\partial x}(d(t,x) \displaystyle \frac{\partial C_i}{\partial x}) + \delta_{i,4} v_d \frac{\partial C_i}{\partial x} + \lambda C_i&\\[7pt] = f_i(t,x,{\bf C}) + \lambda C_i,& t \in ]0,T],\ x \in ]0,L[, \\[7pt] \displaystyle \frac{\partial C_i}{\partial x}(t,0)=\displaystyle \frac{\partial C_i}{\partial x}(t,L)=0, & t \in ]0,T],\\[7pt] C_i(0,x)=C_i^0(x), & x \in ]0,L[. \\ \end{array} \right.
\end{equation} For $N,N',P,P',Z,Z'$ and $D,D' \in H^1(0,L)$, we define $$ \begin{array}{l} a_N(t,N,N')= \displaystyle \int_{0}^{L} d(t,x) \displaystyle \frac{\partial N}{\partial x} \displaystyle \frac{\partial N'}{\partial x} + \lambda \int_{0}^{L} NN',\\[7pt] a_P(t,P,P')= \displaystyle \int_{0}^{L} d(t,x) \displaystyle \frac{\partial P}{\partial x} \displaystyle \frac{\partial P'}{\partial x}+\lambda \int_{0}^{L} PP',\\[7pt] a_Z(t,Z,Z')= \displaystyle \int_{0}^{L} d(t,x) \displaystyle \frac{\partial Z}{\partial x} \displaystyle \frac{\partial Z'}{\partial x}+\lambda \int_{0}^{L} ZZ',\\[7pt] a_D(t,D,D')= \displaystyle \int_{0}^{L} d(t,x) \displaystyle \frac{\partial D}{\partial x} \displaystyle \frac{\partial D'}{\partial x}+\int_{0}^{L} v_d \displaystyle \frac{\partial D}{\partial x}D' +\lambda \int_{0}^{L} DD',\\ \end{array} $$ and $$ a(t,{\bf C},{\bf C}')=a_N(t,N,N')+a_P(t,P,P')+a_Z(t,Z,Z')+a_D(t,D,D'). $$ \begin{lemma} \label{lemma:formebi} For a.e. $t \in [0,T]$, $a(t,{\bf C},{\bf C}')$ is a continuous bilinear form on ${\bf H}^1 \times {\bf H}^1$. For all ${\bf C}$, ${\bf C}' \in {\bf H}^1$, $t \rightarrow a(t,{\bf C},{\bf C}')$ is measurable and there exists a constant $M_a > 0$ such that, $$ |a(t,{\bf C},{\bf C}')| \le M_a ||{\bf C}||_1 ||{\bf C}'||_1, \quad \forall {\bf C},{\bf C}' \in {\bf H}^1. $$ For a fixed $\lambda$, $\lambda \ge \displaystyle \frac{v_d^2}{2d_0}$, there exists a constant $c_0 > 0$ such that, $$ a(t,{\bf C},{\bf C}) \ge c_0 ||{\bf C}||_1^2, \quad \forall t\in[0,T], \quad \forall {\bf C} \in {\bf H}^1. $$ \end{lemma} \begin{proof} The proof is classical and is omitted \end{proof} \subsection{The reaction terms and the nonlinear operator ${\bf G}$} \label{section:G} \noindent In this paragraph we show that the reaction terms of the NPZD-model enable us to define a continuous operator ${\bf G}$ on $L^2(0,T,{\bf H})$.
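Before turning to the reaction terms, and although the proof of lemma \ref{lemma:formebi} is omitted, let us sketch for the reader's convenience how the advection term in $a_D$ is absorbed (a sketch only, under the standing assumption $d(t,x) \ge d_0 > 0$). By the Cauchy-Schwarz and Young inequalities, for any $\epsilon > 0$, $$ \Big| \int_{0}^{L} v_d \displaystyle \frac{\partial D}{\partial x}D \Big| \le \displaystyle \frac{\epsilon d_0}{2} \Big|\Big|\displaystyle \frac{\partial D}{\partial x}\Big|\Big|_{L^2(0,L)}^2 + \displaystyle \frac{v_d^2}{2 \epsilon d_0} ||D||_{L^2(0,L)}^2, $$ so that $$ a_D(t,D,D) \ge d_0 \Big(1-\displaystyle \frac{\epsilon}{2}\Big) \Big|\Big|\displaystyle \frac{\partial D}{\partial x}\Big|\Big|_{L^2(0,L)}^2 + \Big(\lambda - \displaystyle \frac{v_d^2}{2 \epsilon d_0}\Big) ||D||_{L^2(0,L)}^2. $$ Choosing $\epsilon \in ]1,2[$, for instance $\epsilon = 3/2$, both coefficients are positive as soon as $\lambda \ge \displaystyle \frac{v_d^2}{2d_0}$ and $v_d > 0$ (when $v_d = 0$ the advection term vanishes and $\lambda > 0$ suffices), which yields the coercivity constant $c_0$.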
\begin{lemma} \label{lemma:besogne} The reaction terms $f_N$, $f_P$, $f_Z$ and $f_D$ defined in Section 2 have the following properties: \begin{itemize} \item[(P1)] For a.e. $(t,x) \in [0,T] \times [0,L]$, and all ${\bf C} \in \hbox{\rm I\hskip -1.5pt R}^4$, $$ \begin{array}{l} |f_N(t,x,{\bf C})| \le (\mu_p(1-\gamma) + \tau)|P| +(\mu_z + \tau)|Z|+(\mu_d + \tau)|D|,\\[4pt] |f_P(t,x,{\bf C})| \le (\mu_p(1-\gamma) +m_p+ \tau)|P| +g_z|Z|,\\[4pt] |f_Z(t,x,{\bf C})| \le ((a_p + a_d)g_z +m_z +\mu_z+ \tau)|Z|,\\[4pt] |f_D(t,x,{\bf C})| \le (((1-a_p)+a_d)g_z+m_z)|Z| +m_p|P| +(\mu_d + \tau)|D|.\\ \end{array} $$ \item[(P2)] The function ${\bf f}(t,x,{\bf C})$, defined from $[0,T] \times [0,L] \times \hbox{\rm I\hskip -1.5pt R}^4 \rightarrow \hbox{\rm I\hskip -1.5pt R}^4$, is measurable in $(t,x)$, for all ${\bf C} \in \hbox{\rm I\hskip -1.5pt R}^4$, and is continuous in ${\bf C}$, for a.e. $(t,x) \in [0,T] \times [0,L]$. \end{itemize} \end{lemma} \begin{proof} The proof is straightforward and uses the comments of Section 2.2 \end{proof} We now define a function ${\bf g}(t,x,{\bf C}) = {\bf f}(t,x,{\bf C}) + \lambda {\bf C}$ from $[0,T] \times [0,L] \times \hbox{\rm I\hskip -1.5pt R}^4 \rightarrow \hbox{\rm I\hskip -1.5pt R}^4$ and a nonlinear operator, ${\bf G}$, by: $$ {\bf GC}={\bf g}(t,x,{\bf C}(t,x)), \quad (t,x) \in [0,T] \times [0,L]. $$ \begin{proposition} \label{prop:biendefG} The operator, ${\bf G}$, is well defined from $L^2(0,T,{\bf H})$ to itself. There exists a constant $M_g > 0$, depending only on the parameters of the model, such that, for all ${\bf C} \in L^2(0,T,{\bf H})$, $$ ||{\bf GC}||_{L^2(0,T,{\bf H})} \le M_g ||{\bf C}||_{L^2(0,T,{\bf H})}. $$ The operator ${\bf G}$ is continuous on $L^2(0,T,{\bf H})$.\\ \end{proposition} \begin{proof} Let ${\bf C} \in L^2(0,T,{\bf H})$ and $t \in [0,T]$.
From point $(P1)$ of lemma \ref{lemma:besogne} we obtain, $$ \begin{array}{l} ||{\bf G}{\bf C}(t)||^2=\\[7pt] \displaystyle \int_{0}^{L} |f_N(t,x,{\bf C}(t,x)) + \lambda N(t,x)|^2 + |f_P(t,x,{\bf C}(t,x))+\lambda P(t,x)|^2\\[7pt] + |f_Z(t,x,{\bf C}(t,x))+\lambda Z(t,x)|^2 + |f_D(t,x,{\bf C}(t,x))+\lambda D(t,x)|^2 dx,\\[7pt] ||{\bf G}{\bf C}(t)||^2 \le \\[7pt] cte_1 ( ||P(t)||_{L^2(0,L)}^2 + ||Z(t)||_{L^2(0,L)}^2 + ||D(t)||_{L^2(0,L)}^2+ ||N(t)||_{L^2(0,L)}^2 )\\[7pt] + cte_2 (||P(t)||_{L^2(0,L)}^2+||Z(t)||_{L^2(0,L)}^2 )\\[7pt] +cte_3 (||Z(t)||_{L^2(0,L)}^2)\\[7pt] + cte_4 (||Z(t)||_{L^2(0,L)}^2+||P(t)||_{L^2(0,L)}^2+||D(t)||_{L^2(0,L)}^2),\\[7pt] ||{\bf G}{\bf C}(t)||^2 \le M_g^2 ||{\bf C}(t)||^2, \end{array} $$ and integrating on $[0,T]$, $$ ||{\bf G}{\bf C}||_{L^2(0,T,{\bf H})} \le M_g ||{\bf C}||_{L^2(0,T,{\bf H})}. $$ From point ($P2$) of lemma \ref{lemma:besogne} we know that the function, $$ {\bf g}(t,x,{\bf C}) = {\bf f}(t,x,{\bf C}) + \lambda {\bf C}, $$ from $[0,T] \times [0,L] \times \hbox{\rm I\hskip -1.5pt R}^4 \rightarrow \hbox{\rm I\hskip -1.5pt R}^4$, satisfies the Carath\'eodory conditions and, by theorem 2.1, page 22, of Krasnosel'skii \cite{Krasnoselskii:1964}, we know that the operator ${\bf G}$ is continuous \end{proof} \subsection{Variational formulation} We can now write the definition of a weak solution to system (\ref{eqn:p1}), \begin{definition} ${\bf C} \in W({\bf H}^1)$ is a weak solution of system (\ref{eqn:p1}) if $$ \forall \phi \in {\bf H}^1, \quad (\displaystyle \frac{d {\bf C}}{dt}, {\bf \phi}) + a(t,{\bf C},{\bf \phi}) = ({\bf GC},{\bf \phi}), $$ in the ${\mathcal{D}}'(]0,T[)$ sense,\\ and ${\bf C}(0)={\bf C}^0$. \end{definition} and state the main result of this paper, \begin{theorem} \label{theo:main} Let ${\bf C}^0 \in {\bf H}$. There exists a weak solution to system (\ref{eqn:p1}). Furthermore, if $N^0,P^0,Z^0$ and $D^0$ are positive then $N,P,Z$ and $D$ are positive for a.e. $t \in [0,T]$.
\end{theorem} The proof is given in the next two sections. \vspace*{1pt}\baselineskip=13pt \section{Existence} \vspace*{-0.5pt} \noindent The existence result is obtained in two steps. We first define an approximate problem, in which the operator ${\bf G}$ is approximated by an operator ${\bf G}_n$. This approximate problem is solved using the Schauder fixed-point theorem. In the second step we let $n \rightarrow \infty$ to obtain a solution to the initial problem. \subsection{Step 1: approximate problem} \noindent Let $n > 0$ be a fixed integer and ${\bf g}_n$ be defined by, $$ \begin{array}{lll} {\bf g}_n: & [0,T] \times [0,L] \times \hbox{\rm I\hskip -1.5pt R}^4 & \rightarrow \hbox{\rm I\hskip -1.5pt R}^4\\ & (t,x,{\bf C}) & \rightarrow \displaystyle (\frac{({\bf g}(t,x,{\bf C}))_i} {1+ \frac{1}{n} |({\bf g}(t,x,{\bf C}))_i|})_{i=1, ...4}. \end{array} $$ Define the nonlinear operator, ${\bf G}_n$, by: $$ {\bf G}_n{\bf C}={\bf g}_n(t,x,{\bf C}(t,x)), \quad (t,x) \in [0,T] \times [0,L]. $$ \begin{proposition} \label{prop:biendef2} The operator, ${\bf G}_n$, is well defined from $L^2(0,T,{\bf H})$ to itself and there exists a constant, $M_g > 0$, such that for all ${\bf C} \in L^2(0,T,{\bf H})$, $$ ||{\bf G}_n{\bf C}||_{L^2(0,T,{\bf H})} \le M_g ||{\bf C}||_{L^2(0,T,{\bf H})}. $$ The operator ${\bf G}_n$ is continuous on $L^2(0,T,{\bf H})$.\\ For all ${\bf C} \in L^2(0,T,{\bf H})$, we also have the estimate, $$ ||{\bf G}_n{\bf C}||_{L^2(0,T,{\bf H})} \le 2n\sqrt{LT}. $$ \end{proposition} \begin{proof} Let ${\bf C} \in L^2(0,T,{\bf H})$. From the definition of ${\bf g}_n$ and from proposition \ref{prop:biendefG} we obtain, $$ ||{\bf G}_n{\bf C}||_{L^2(0,T,{\bf H})} \le ||{\bf G}{\bf C}||_{L^2(0,T,{\bf H})} \le M_g ||{\bf C}||_{L^2(0,T,{\bf H})}.
$$ The estimate $||{\bf G}_n{\bf C}||_{L^2(0,T,{\bf H})} \le 2n\sqrt{LT}$ follows easily from the definition of ${\bf g}_n$.\\ As in proposition \ref{prop:biendefG}, ${\bf g}_n$ satisfies the Carath\'eodory conditions and ${\bf G}_n$ is continuous on $L^2(0,T,{\bf H})$ \end{proof} We now seek a solution to the approximate system and show that such a solution is a fixed point of the operator $\Theta$ defined in the next proposition. \begin{proposition} \label{prop:theta} Let $\hat{{\bf C}}$ be a fixed element of $L^2(0,T,{\bf H})$ and let ${\bf C}^0 \in {\bf H}$. There exists a unique solution to the problem:\\ find ${\bf C} \in W({\bf H}^1)$ such that, $$ \forall \phi \in {\bf H}^1, \quad (\displaystyle \frac{d {\bf C}}{dt}, {\bf \phi}) + a(t,{\bf C},{\bf \phi}) = ({\bf G}_n\hat{{\bf C}},{\bf \phi}), $$ in the ${\mathcal{D}}'(]0,T[)$ sense,\\ and ${\bf C}(0)={\bf C}^0$.\\ This solution defines an operator $\Theta$ on $L^2(0,T,{\bf H})$, $\Theta\hat{{\bf C}}={\bf C}$. \end{proposition} \begin{proof} Since the problem is linear in ${\bf C}$ and ${\bf G}_n\hat{{\bf C}}$ is fixed in $L^2(0,T,{\bf H})$, the proof is classical (e.g. Dautray and Lions, \cite{Dautray:1988b}) \end{proof} To ensure that $\Theta$ has a fixed point, we show that the Schauder fixed-point theorem can be applied. \begin{lemma} \label{lemma:thetacontinu} The operator $\Theta$ is continuous on $L^2(0,T,{\bf H})$. \end{lemma} \begin{proof} Let $\hat{{\bf C}}^1$ and $\hat{{\bf C}}^2 \in L^2(0,T,{\bf H})$. ${\bf C}^1$ and ${\bf C}^2$, the associated solutions to the problem of proposition \ref{prop:theta}, satisfy, $$ (\displaystyle \frac{d}{dt} ({\bf C}^1-{\bf C}^2) , {\bf \phi}) + a(t,{\bf C}^1-{\bf C}^2,{\bf \phi})= ({\bf G}_n\hat{{\bf C}}^1-{\bf G}_n\hat{{\bf C}}^2,{\bf \phi}).
$$ Taking ${\bf \phi}={\bf C}^1-{\bf C}^2$ as a test function, integrating on $[0,t]$, using the coerciveness of $a$ and the Cauchy-Schwarz inequality, we obtain, $$ \begin{array}{l} \displaystyle \int_0^t \frac{1}{2}\frac{d }{dt}||{\bf C}^1(s)-{\bf C}^2(s)||^2 + c_0 ||{\bf C}^1(s)-{\bf C}^2(s)||_1^2 ds \\ \le \displaystyle \int_0^t ||{\bf G}_n\hat{{\bf C}}^1(s)-{\bf G}_n\hat{{\bf C}}^2(s)||||{\bf C}^1(s)-{\bf C}^2(s)||ds. \end{array} $$ As ${\bf C}^1(0)={\bf C}^2(0)={\bf C}^0$, using Young's inequality we obtain, $$ \begin{array}{l} ||{\bf C}^1(t)-{\bf C}^2(t)||^2 + \displaystyle \int_0^t 2c_0 ||{\bf C}^1(s)-{\bf C}^2(s)||_1^2 ds \\ \le \displaystyle \int_0^t\frac{1}{\alpha} ||{\bf G}_n\hat{{\bf C}}^1(s)-{\bf G}_n\hat{{\bf C}}^2(s)||^2 ds + \alpha \displaystyle \int_0^t ||{\bf C}^1(s)-{\bf C}^2(s)||^2 ds, \end{array} $$ and with $\alpha = 2c_0$, $$ ||{\bf C}^1(t)-{\bf C}^2(t)||^2 \le \displaystyle \int_0^T\frac{1}{2c_0} ||{\bf G}_n\hat{{\bf C}}^1(s)-{\bf G}_n\hat{{\bf C}}^2(s)||^2 ds. $$ Finally, integrating on $[0,T]$, we obtain, $$ ||\Theta\hat{{\bf C}}^1-\Theta\hat{{\bf C}}^2||_{L^2(0,T,{\bf H})}= ||{\bf C}^1-{\bf C}^2||_{L^2(0,T,{\bf H})} \le \displaystyle \sqrt{\frac{T}{2c_0}} ||{\bf G}_n\hat{{\bf C}}^1-{\bf G}_n\hat{{\bf C}}^2||_{L^2(0,T,{\bf H})} $$ and $\Theta$ is continuous as ${\bf G}_n$ is \end{proof} \begin{lemma} \label{lemma:thetaborne} The operator $\Theta$ maps $L^2(0,T,{\bf H})$ into the ball $$ B=\lbrace {\bf C} \in L^2(0,T,{\bf H}) , ||{\bf C}||_{L^2(0,T,{\bf H})} \le \displaystyle \sqrt{T(\displaystyle \frac{2LTn^2}{c_0} + ||{\bf C}^0||^2)} \rbrace. $$ In particular, we have, $\Theta(B) \subset B$. \end{lemma} \begin{proof} Let $\hat{{\bf C}} \in L^2(0,T,{\bf H})$. ${\bf C}$, the solution to the problem of proposition \ref{prop:theta}, satisfies, $$ \displaystyle (\frac{d}{dt} {\bf C}, {\bf \phi}) + a(t,{\bf C},{\bf \phi})= ({\bf G}_n\hat{{\bf C}},{\bf \phi}).
$$ Taking ${\bf \phi}={\bf C}$ as a test function, integrating on $[0,t]$, using the coerciveness of $a$ and the Cauchy-Schwarz inequality we obtain, $$ \displaystyle \int_0^t \frac{1}{2}\frac{d}{dt}||{\bf C}(s)||^2 + c_0 ||{\bf C}(s)||_1^2 ds \le \displaystyle \int_0^t ||{\bf G}_n\hat{{\bf C}}(s)||||{\bf C}(s)||ds, $$ with Young's inequality, $$ ||{\bf C}(t)||^2 + \displaystyle \int_0^t 2c_0 ||{\bf C}(s)||_1^2 ds \le \displaystyle \int_0^t\frac{1}{\alpha} ||{\bf G}_n\hat{{\bf C}}(s)||^2 ds + \alpha \displaystyle \int_0^t ||{\bf C}(s)||^2 ds + ||{\bf C}^0||^2, $$ with $\alpha = 2c_0$ and as $\displaystyle \int_0^t ||{\bf G}_n\hat{{\bf C}}(s)||^2 ds \le 4LTn^2$, we have $$ ||{\bf C}(t)||^2 \le \displaystyle \frac{4LTn^2}{2c_0} + ||{\bf C}^0||^2, $$ integrating once more on $[0,T]$ we obtain, $$ ||\Theta\hat{{\bf C}}||_{L^2(0,T,{\bf H})}^2= ||{\bf C}||_{L^2(0,T,{\bf H})}^2 \le \displaystyle T(\displaystyle \frac{2LTn^2}{c_0} + ||{\bf C}^0||^2). $$ \end{proof} \begin{lemma} \label{lemma:thetacompact} The operator $\Theta$ is compact. \end{lemma} \begin{proof} Let $B$ be a bounded set in $L^2(0,T,{\bf H})$. Let us show that $\Theta(B)$ is bounded in $W({\bf H}^1)$. Let $\hat{{\bf C}} \in B \subset L^2(0,T,{\bf H})$ and let ${\bf C}$ be the associated solution to the problem of proposition \ref{prop:theta}. As in the proof of lemma \ref{lemma:thetaborne} we obtain $$ ||{\bf C}(t)||^2 + \displaystyle \int_0^t 2c_0 ||{\bf C}(s)||_1^2 ds \le \displaystyle \int_0^t\frac{1}{\alpha} ||{\bf G}_n\hat{{\bf C}}(s)||^2 ds + \alpha \displaystyle \int_0^t ||{\bf C}(s)||^2 ds + ||{\bf C}^0||^2.
$$ Taking $\alpha = c_0$ this time and using $||{\bf C}(s)|| \le ||{\bf C}(s)||_1$, we obtain $$ c_0 \displaystyle \int_0^T ||{\bf C}(s)||_1^2 ds \le \displaystyle \frac{1}{c_0}||{\bf G}_n\hat{{\bf C}}||_{L^2(0,T,{\bf H})}^2 + ||{\bf C}^0||^2 \le \frac{4LTn^2}{c_0} + ||{\bf C}^0||^2, $$ and $\Theta \hat{{\bf C}}$ is bounded in $L^2(0,T,{\bf H}^1)$.\\ Moreover we have $$ \forall \phi \in {\bf H}^1, \quad (\displaystyle \frac{d {\bf C}}{dt}, {\bf \phi}) + a(t,{\bf C},{\bf \phi}) = ({\bf G}_n\hat{{\bf C}},{\bf \phi}). $$ From lemma \ref{lemma:formebi}, $$ |(\displaystyle \frac{d {\bf C}}{dt}, {\bf \phi})| \le M_a||{\bf C}||_1 ||{\bf \phi}||_1 + ||{\bf G}_n\hat{{\bf C}}||||{\bf \phi}|| \le M_a||{\bf C}||_1 ||{\bf \phi}||_1 + 2n\sqrt{L}||{\bf \phi}||_1, $$ and $$ \displaystyle \int_0^T ||\frac{d{\bf C}}{dt}||_{({\bf H}^1)'}^2 ds \le 2 \displaystyle \int_0^T (4Ln^2 + M_a^2 ||{\bf C}||_1^2)ds. $$ Therefore $\displaystyle \frac{d{\bf C}}{dt}$ is bounded in $L^2(0,T,({\bf H}^1)')$.\\ The range of $\Theta$ thus lies in $W({\bf H}^1)$; from lemma \ref{lemma:compacite}, the imbedding $W({\bf H}^1) \subset L^2(0,T,{\bf H})$ is compact, and this concludes the proof \end{proof} It is now possible to state the main result of this section, concerning the existence of weak solutions to the approximate problem. \begin{theorem} Let $n>0$ be a fixed integer. Let ${\bf C}^0 \in {\bf H}$.
There exists a solution, ${\bf C}_n$, to the problem:\\ find ${\bf C} \in W({\bf H}^1)$ such that, $$ \forall \phi \in {\bf H}^1, \quad (\displaystyle \frac{d {\bf C}}{dt}, {\bf \phi}) + a(t,{\bf C},{\bf \phi}) = ({\bf G}_n{\bf C},{\bf \phi}), $$ in the ${\mathcal{D}}'(]0,T[)$ sense,\\ and ${\bf C}(0)={\bf C}^0$.\\ \end{theorem} \begin{proof} From lemmas \ref{lemma:thetacontinu}, \ref{lemma:thetaborne} and \ref{lemma:thetacompact}, and the Schauder fixed-point theorem, the operator $\Theta$ has a fixed point, which is the solution sought \end{proof} \subsection{Step 2: letting $n \rightarrow \infty$} \noindent We now pass to the limit as $n \rightarrow \infty$ in the equations, \begin{equation} \label{equ:probn} \begin{array}{l} \forall \phi \in {\bf H}^1, \quad (\displaystyle \frac{d {\bf C}_n}{dt}(t),\phi) + a(t,{\bf C}_n,\phi) = ({\bf G}_n{\bf C}_n(t),\phi),\\ {\bf C}_n(0)={\bf C}^0. \end{array} \end{equation} This is achieved in two steps: \begin{itemize} \item[a)] a priori estimates for the sequence ${\bf C}_n$, \item[b)] extraction of subsequences and letting $n \rightarrow \infty$. \end{itemize} \begin{itemize} \item[a)] {\bf estimates}. Let us show that: \begin{itemize} \item[(a.1)] the sequence $({\bf C}_n)_{n > 0}$ is bounded in $L^{\infty}(0,T,{\bf H})$, \item[(a.2)] the sequence $({\bf C}_n)_{n > 0}$ is bounded in $L^{2}(0,T,{\bf H}^1)$, \item[(a.3)] the sequence $(\displaystyle \frac{d {\bf C}_n}{dt})_{n > 0}$ is bounded in $L^{2}(0,T,({\bf H}^1)')$.
\end{itemize} Taking ${\bf C}_n$ as a test function in (\ref{equ:probn}) we obtain, $$ \frac{1}{2}\frac{d}{dt}||{\bf C}_n||^2 + a(t,{\bf C}_n,{\bf C}_n)=({\bf G}_n{\bf C}_n,{\bf C}_n), $$ or, $$ \frac{1}{2}\frac{d}{dt}||{\bf C}_n||^2 + c_0||{\bf C}_n||_1^2 \le ||{\bf G}_n{\bf C}_n||||{\bf C}_n|| \le M_g ||{\bf C}_n||^2, $$ and integrating on $[0,t]$, we obtain \begin{equation} \label{eqn:estim} ||{\bf C}_n||^2 + 2c_0\displaystyle \int_0^t ||{\bf C}_n||_1^2 ds \le 2 M_g \displaystyle \int_0^t ||{\bf C}_n||^2 ds + ||{\bf C}^0||^2. \end{equation} Equation (\ref{eqn:estim}) gives $$ ||{\bf C}_n||^2 \le 2 M_g \displaystyle \int_0^t ||{\bf C}_n||^2 ds + ||{\bf C}^0||^2. $$ Using Gronwall's lemma, we have \begin{equation} \label{eqn:estim2} ||{\bf C}_n(t)||^2 \le ||{\bf C}^0||^2 \exp(2 M_g T), \end{equation} and the sequence $({\bf C}_n)_{n > 0}$ is bounded in $L^{\infty}(0,T,{\bf H})$.\\ Equation (\ref{eqn:estim}) also gives $$ \displaystyle \int_0^t ||{\bf C}_n||_1^2 ds \le \frac{M_g}{c_0} \displaystyle \int_0^t ||{\bf C}_n||^2 ds + \frac{1}{2c_0}||{\bf C}^0||^2, $$ and with (\ref{eqn:estim2}) $$ \displaystyle \int_0^t ||{\bf C}_n||_1^2 ds \le \frac{M_gT}{c_0} ||{\bf C}^0||^2 \exp(2 M_g T) + \frac{1}{2c_0}||{\bf C}^0||^2. $$ Therefore the sequence $({\bf C}_n)_{n > 0}$ is bounded in $L^{2}(0,T,{\bf H}^1)$.\\ Let us now give an estimate for the sequence, $(\displaystyle \frac{d {\bf C}_n}{dt})_{n > 0}$.
We have $$ |(\frac{d {\bf C}_n}{dt},\phi)| \le |a(t,{\bf C}_n,\phi)| + |({\bf G}_n{\bf C}_n,\phi)|, $$ and therefore $$ |(\frac{d {\bf C}_n}{dt},\phi)| \le M_a||{\bf C}_n||_1||\phi||_1 + M_g||{\bf C}_n||_1||\phi||_1, $$ that is to say $$ \begin{array}{l} \displaystyle \int_0^T ||\frac{d {\bf C}_n}{dt}||_{({\bf H}^1)'}^2 \le 2(M_a^2 +M_g^2) \displaystyle \int_0^T ||{\bf C}_n||_1^2 \\[10pt] \le 2(M_a^2 +M_g^2)(\displaystyle \frac{M_gT}{c_0} ||{\bf C}^0||^2 \exp(2 M_g T) + \displaystyle \frac{1}{2c_0}||{\bf C}^0||^2), \end{array} $$ and the sequence $(\displaystyle \frac{d {\bf C}_n}{dt})_{n > 0}$ is bounded in $L^{2}(0,T,({\bf H}^1)')$. \item[b)]{\bf passing to the limit}. Let us first recall that ${\mathcal{D}}(]0,T[,{\bf H}^1) \subset W({\bf H}^1)$. Therefore $\forall \phi \in {\bf H}^1$ and $\forall \varphi \in {\mathcal{D}}(]0,T[)$, we have $\psi = \phi \otimes \varphi \in L^2(0,T,{\bf H}^1)$ and $\displaystyle \frac{d \psi}{dt} \in L^2(0,T,({\bf H}^1)')$. \begin{itemize} \item[b.1)] The term $(\displaystyle \frac{d {\bf C}_n}{dt},\phi)$: from (a.3), we are able to extract from the sequence $(\displaystyle \frac{d {\bf C}_n}{dt})_{n > 0}$ a subsequence (denoted in the same way) converging to some ${\bf h}$ in $L^2(0,T,({\bf H}^1)')$ weak star, that is to say, for all $\phi \in {\bf H}^1$ and all $\varphi \in {\mathcal{D}}(]0,T[)$, $$ \lim_{n \rightarrow \infty} \displaystyle \int_0^T (\displaystyle \frac{d {\bf C}_n}{dt},\phi)\varphi ds = \displaystyle \int_0^T ({\bf h},\phi)\varphi ds. $$ Moreover, by definition, we obtain $$ \displaystyle \int_0^T (\displaystyle \frac{d {\bf C}_n}{dt},\phi)\varphi ds = - \displaystyle \int_0^T ({\bf C}_n,\phi)\displaystyle \frac{d \varphi}{dt} ds. $$ From (a.2), we are able to extract from the sequence $({\bf C}_n)_{n > 0}$ a subsequence (denoted in the same way) converging to some ${\bf C}$ in $L^2(0,T,{\bf H}^1)$ weak. 
Therefore, for all $\phi \in {\bf H}^1$ and all $\varphi \in {\mathcal{D}}(]0,T[)$, $$ \lim_{n \rightarrow \infty} - \displaystyle \int_0^T ({\bf C}_n,\phi)\displaystyle \frac{d \varphi}{dt} ds =- \displaystyle \int_0^T ({\bf C},\phi)\displaystyle \frac{d \varphi}{dt} ds, $$ and ${\bf h} = \displaystyle \frac{d {\bf C}}{dt}$ in $L^2(0,T,({\bf H}^1)')$. \item[b.2)] The term $a(t,{\bf C}_n,\phi)$: from (b.1), we can suppose that the sequence $({\bf C}_n)_{n>0}$ converges to ${\bf C}$ in $L^2(0,T,{\bf H}^1)$ weak. Therefore the sequence $(\partial_x {\bf C}_n)_{n>0}$ converges to $\partial_x {\bf C}$ in $L^2(0,T,{\bf H})$ weak. Then, for all $\phi \in {\bf H}^1$ and all $\varphi \in {\mathcal{D}}(]0,T[)$, $$ \lim_{n \rightarrow \infty} \displaystyle \int_0^T a(s,{\bf C}_n,\phi)\varphi ds =\displaystyle \int_0^T a(s,{\bf C},\phi)\varphi ds. $$ \item[b.3)] The term $({\bf G}_n{\bf C}_n,\phi)$: from (a.2), (a.3), and from the compactness of the imbedding $W({\bf H}^1) \subset L^2(0,T,{\bf H})$, we can suppose that the sequence $({\bf C}_n)_{n > 0}$ converges to ${\bf C}$ in $L^2(0,T,{\bf H})$ strong. Therefore, each ${\bf C}_{n,i}$, $i=1, ...4$, converges to ${\bf C}_i$ in $L^2(0,T,L^2(0,L))$ strong. From the partial converse of the Lebesgue dominated convergence theorem (Br\'ezis \cite{Brezis:1992}, theorem IV.9, page 58), we can suppose that: \begin{itemize} \item[(b.3.1)] the sequences $({\bf C}_{n,i})_{n>0}$, $i=1, ...4$, converge to ${\bf C}_i$ a.e. in $]0,T[ \times ]0,L[$. \item[(b.3.2)] for $i=1$ to $4$, $|{\bf C}_{n,i}| \le h_i$, $\forall n>0$, a.e. in $]0,T[ \times ]0,L[$ and $h_i \in L^2(0,T,L^2(0,L))$.
\end{itemize} As ${\bf g}_{n,i}(t,x,{\bf C})$ is continuous in its third variable, we deduce from (b.3.1) that $\forall \phi_i \in H^1(0,L)$ and $\forall \varphi \in {\mathcal{D}}(]0,T[)$, $$ u_{n,i}(t,x)={\bf g}_{n,i}(t,x,{\bf C}_n(t,x))\phi_i(x)\varphi(t) \mathop{\longrightarrow}_{n \rightarrow \infty} {\bf g}_{i}(t,x,{\bf C}(t,x))\phi_i(x)\varphi(t), $$ a.e. in $]0,T[ \times ]0,L[$.\\ Moreover, from lemma \ref{lemma:besogne} and with (b.3.2) we have, $$ |u_{n,i}| \le M_i ( \displaystyle \sum_{j=1}^4 h_j) |\phi_i||\varphi| \in L^1(]0,T[ \times ]0,L[), $$ where the $M_i$ are constants. Thus, from the Lebesgue theorem on dominated convergence, we obtain, $$ \lim_{n \rightarrow \infty} \displaystyle \int_0^T \int_0^L u_{n,i}(t,x)dxdt = \displaystyle \int_0^T \int_0^L {\bf g}_{i}(t,x,{\bf C}(t,x))\phi_i(x)\varphi(t)dxdt, $$ and finally, for all $\phi \in {\bf H}^1$ and all $\varphi \in {\mathcal{D}}(]0,T[)$, $$ \lim_{n \rightarrow \infty} \displaystyle \int_0^T ({\bf G}_n{\bf C}_n,\phi)\varphi = \displaystyle \int_0^T ({\bf G}{\bf C},\phi)\varphi. $$ \end{itemize} \end{itemize} This concludes the proof of the existence of weak solutions to the one-dimensional NPZD-model. \vspace*{1pt}\baselineskip=13pt \section{Positivity} \vspace*{-0.5pt} \noindent In this section we prove the second part of Theorem \ref{theo:main}: if the initial conditions $N^0,P^0,Z^0$ and $D^0$ are positive then the solutions to the one-dimensional NPZD-model are positive for a.e. $t \in [0,T]$. To prove this, we need to treat each of the four equations separately, in detail, and in a convenient order. We first show that $Z$ and $P$ are positive. Next we show that $D$ is positive using the fact that $Z$ and $P$ are positive. Finally, as $Z$, $P$ and $D$ are positive we obtain the positivity of $N$.
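In the computations below we shall repeatedly use the following elementary bound, recorded here for convenience; it follows from Young's inequality, $k_z + x^2 \ge 2\sqrt{k_z}\,|x|$: $$ \Big| \displaystyle \frac{x}{k_z+x^2} \Big| \le \displaystyle \frac{1}{2\sqrt{k_z}}, \quad \forall x \in \hbox{\rm I\hskip -1.5pt R}. $$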
\begin{itemize} \item Let us recall that for all ${\bf C} \in {\bf H}^1$ and all $t \in [0,T]$, $a_N(t,N,N) \ge 0$, $a_P(t,P,P) \ge 0$, $a_Z(t,Z,Z) \ge 0$ and $a_D(t,D,D) \ge 0$. \item $Z$ is positive:\\ Let ${\bf C}$ be a weak solution to the NPZD-model. Let us take $$ -Z^-=-\max(0,-Z), $$ as a test function. Since, $$ \displaystyle \int_0^L\displaystyle \frac{\partial Z(t,x)}{\partial t}Z^-(t,x) dx =-\displaystyle \frac{1}{2}\displaystyle \frac{d }{dt}||Z^-(t)||_{L^2(0,L)}^2, $$ and $$ a_Z(t,Z(t),-Z^-(t))=a_Z(t,Z(t)^-,Z(t)^-), $$ we obtain $$ \displaystyle \frac{1}{2}\displaystyle \frac{d}{dt}||Z(t)^-||_{L^2(0,L)}^2+a_Z(t,Z(t)^-,Z(t)^-) =-(g_Z({\bf C}(t)),Z(t)^-). $$ Let us detail the term $(g_Z({\bf C}),Z^-)$. $$ \begin{array}{ll} (g_Z({\bf C}),Z^-)_{L^2(0,L)}=& \displaystyle \int_0^L(a_p\displaystyle \frac{g_zP^2}{k_z+P^2}ZZ^- + a_d\displaystyle \frac{g_zD^2}{k_z+D^2}ZZ^- -m_zZZ^- \\[11pt] &-\mu_zZZ^-)\ \hbox{\rm l\hskip -5pt 1}_{]0,l]} +(-\tau ZZ^-)\ \hbox{\rm l\hskip -5pt 1}_{]l,L[} + \lambda ZZ^-.\\ \end{array} $$ As $ZZ^-=-(Z^-)^2$, we have $$ \begin{array}{ll} (g_Z({\bf C}),Z^-)_{L^2(0,L)}=& \displaystyle \int_0^L(-(a_p\displaystyle \frac{g_zP^2}{k_z+P^2})(Z^-)^2 - (a_d\displaystyle \frac{g_zD^2}{k_z+D^2})(Z^-)^2 +m_z(Z^-)^2\\[11pt] & +\mu_z(Z^-)^2)\ \hbox{\rm l\hskip -5pt 1}_{]0,l]} +(\tau (Z^-)^2)\ \hbox{\rm l\hskip -5pt 1}_{]l,L[} -\lambda (Z^-)^2,\\ \end{array} $$ and $$ \begin{array}{ll} (g_Z({\bf C}),Z^-)_{L^2(0,L)} \ge & \displaystyle \int_0^L(-(a_p\displaystyle \frac{g_zP^2}{k_z+P^2})(Z^-)^2 \\[11pt] & - (a_d\displaystyle \frac{g_zD^2}{k_z+D^2})(Z^-)^2)\hbox{\rm l\hskip -5pt 1}_{]0,l]}-\lambda (Z^-)^2,\\ \end{array} $$ or $$ \begin{array}{ll} -(g_Z({\bf C}),Z^-)_{L^2(0,L)} \le &\displaystyle \int_0^L((a_p\displaystyle \frac{g_zP^2}{k_z+P^2})(Z^-)^2 \\[11pt] & +(a_d\displaystyle \frac{g_zD^2}{k_z+D^2})(Z^-)^2)+\lambda (Z^-)^2.
\end{array} $$ Thus $$ \begin{array}{ll} -(g_Z({\bf C}),Z^-)_{L^2(0,L)} & \le \displaystyle \int_0^L(g_z(a_p+a_d)+\lambda)(Z^-)^2,\\[10pt] & =(g_z(a_p+a_d)+\lambda)||Z^-||_{L^2(0,L)}^2.\\ \end{array} $$ As $a_Z(t,Z^-,Z^-) \ge 0$, we obtain $$ \displaystyle \frac{d}{dt}||Z^-||_{L^2(0,L)}^2 \le 2(g_z(a_p+a_d)+\lambda)||Z^-||_{L^2(0,L)}^2. $$ Integrating this inequality on $[0,t]$ and using Gronwall's lemma, we obtain $$ ||Z^-(t)||_{L^2(0,L)}^2 \le ||Z^-(0)||_{L^2(0,L)}^2 \exp(2(g_z(a_p+a_d)+\lambda)t). $$ Therefore $Z$ is positive. \item $P$ is positive:\\ In the same manner, let us examine the term $(g_P({\bf C}),P^-)_{L^2(0,L)} $.\\ $$ \begin{array}{ll} (g_P({\bf C}),P^-)_{L^2(0,L)} &= \displaystyle \int_0^L (\mu_p(1-\gamma)L_IL_NPP^- -(\displaystyle \frac{g_z P^2}{k_z+P^2}ZP^-) \\[11pt] &-m_pPP^-)\ \hbox{\rm l\hskip -5pt 1}_{]0,l]} + (-\tau PP^-)\ \hbox{\rm l\hskip -5pt 1}_{]l,L[} + \lambda PP^-,\\[7pt] & =\displaystyle \int_0^L (-\mu_p(1-\gamma)L_IL_N(P^-)^2 -(\displaystyle \frac{g_z P^2}{k_z+P^2}ZP^-)\\[11pt] & +m_p(P^-)^2)\ \hbox{\rm l\hskip -5pt 1}_{]0,l]} + (\tau (P^-)^2)\ \hbox{\rm l\hskip -5pt 1}_{]l,L[} - \lambda (P^-)^2,\\[7pt] & \ge \displaystyle \int_0^L (-\mu_p(1-\gamma)L_IL_N(P^-)^2 \\[11pt] &-(\displaystyle \frac{g_z P^2}{k_z+P^2}ZP^-))\ \hbox{\rm l\hskip -5pt 1}_{]0,l]}- \lambda (P^-)^2, \end{array} $$ and $$ \begin{array}{ll} -(g_P({\bf C}),P^-)_{L^2(0,L)} & \le \displaystyle \int_0^L \mu_p(1-\gamma)L_IL_N(P^-)^2 \\[11pt] & +(\displaystyle \frac{-g_z P}{k_z+P^2}Z(P^-)^2)\ \hbox{\rm l\hskip -5pt 1}_{]0,l]}+ \lambda (P^-)^2. \end{array} $$ The function $x \mapsto \displaystyle \frac{x}{k_z+x^2}$ is bounded in absolute value by $\displaystyle \frac{1}{2\sqrt{k_z}}$ on $\hbox{\rm I\hskip -1.5pt R}$.\\ $L_N$ and $L_I$ are bounded by $1$.\\ $Z(t)$, a component of the weak solution to the NPZD-model, therefore belongs to $H^1(0,L) \subset L^{\infty}(0,L)$ for a.e. $t \in [0,T]$.
Hence we have, for a.e. $t$, $Z(t,x) \le ||Z(t)||_{\infty}$ a.e. in $]0,L[$, and $$ \begin{array}{l} -(g_P({\bf C}),P^-)_{L^2(0,L)} \le (\lambda+\mu_p(1-\gamma)+g_z\displaystyle \frac{1}{2\sqrt{k_z}}||Z(t)||_{\infty}) ||P^-(t)||_{L^2(0,L)}^2. \end{array} $$ We conclude in the same way to obtain $$ \begin{array}{ll} ||P^-(t)||_{L^2(0,L)}^2 &\le ||P^-(0)||_{L^2(0,L)}^2 \exp(\displaystyle \int_0^t 2(\lambda+\mu_p(1-\gamma) \\[11pt] &+g_z\displaystyle \frac{1}{2\sqrt{k_z}}||Z(s)||_{\infty})ds). \end{array} $$ \item $D$ is positive:\\ $$ \begin{array}{ll} (g_D({\bf C}),D^-)_{L^2(0,L)}& = \displaystyle \int_0^L((1-a_p)(\displaystyle \frac{g_z P^2}{k_z+P^2}ZD^-) -a_d(\displaystyle \frac{g_z D^2}{k_z+D^2}ZD^-) \\[11pt] & +m_pPD^-+m_zZD^- +\mu_d(D^-)^2)\ \hbox{\rm l\hskip -5pt 1}_{]0,l]}\\[8pt] &+(\tau (D^-)^2)\ \hbox{\rm l\hskip -5pt 1}_{]l,L[} +\lambda DD^-. \end{array} $$ Because $P,Z$ and $D^-$ are positive, we obtain $$ \begin{array}{ll} (g_D({\bf C}),D^-)_{L^2(0,L)} &\ge \displaystyle \int_0^L(-a_d(\displaystyle \frac{g_z D^2}{k_z+D^2}ZD^-))\ \hbox{\rm l\hskip -5pt 1}_{]0,l]} -\lambda (D^-)^2,\\[11pt] -(g_D({\bf C}),D^-)_{L^2(0,L)} &\le \displaystyle \int_0^L(-a_d(\displaystyle \frac{g_zD}{k_z+D^2}Z(D^-)^2))\ \hbox{\rm l\hskip -5pt 1}_{]0,l]}+\lambda (D^-)^2,\\[11pt] -(g_D({\bf C}),D^-)_{L^2(0,L)} &\le \displaystyle \int_0^L(a_d(g_z\displaystyle \frac{1}{2\sqrt{k_z}}||Z||_{\infty})(D^-)^2)\ \hbox{\rm l\hskip -5pt 1}_{]0,l]}+\lambda (D^-)^2,\\[11pt] -(g_D({\bf C}),D^-)_{L^2(0,L)} &\le (\lambda+ a_d(g_z\displaystyle \frac{1}{2\sqrt{k_z}}||Z||_{\infty}))||D^-||_{L^2(0,L)}^2.\\ \end{array} $$ Hence $$ \displaystyle \frac{d}{dt}||D^-||_{L^2(0,L)}^2 \le 2( \lambda + a_d(g_z\displaystyle \frac{1}{2\sqrt{k_z}}||Z||_{\infty}))||D^-||_{L^2(0,L)}^2, $$ and we can conclude.
\item $N$ is positive:\\ $$ \begin{array}{ll} (g_N({\bf C}),N^-)_{L^2(0,L)} &=\displaystyle \int_0^L (-\mu_p(1-\gamma) L_I L_{N}PN^-+\mu_z ZN^- \\[11pt] &+ \mu_d DN^-)\ \hbox{\rm l\hskip -5pt 1}_{]0,l]} +(\tau(P+Z+D)N^-)\ \hbox{\rm l\hskip -5pt 1}_{]l,L[}+\lambda NN^-. \end{array} $$ Because $P,Z,D$ and $N^-$ are positive, we have $$ \begin{array}{l} (g_N({\bf C}),N^-)_{L^2(0,L)} \ge \displaystyle \int_0^L (-\mu_p(1-\gamma) L_I L_{N}PN^-)\ \hbox{\rm l\hskip -5pt 1}_{]0,l]}-\lambda (N^-)^2,\\[8pt] -(g_N({\bf C}),N^-)_{L^2(0,L)} \le \displaystyle \int_0^L (-\mu_p(1-\gamma) L_I \displaystyle \frac{1}{k_n+|N|}P(N^-)^2)\ \hbox{\rm l\hskip -5pt 1}_{]0,l]} +\lambda (N^-)^2. \end{array} $$ Once again we can conclude and the proof of theorem \ref{theo:main} is complete. \end{itemize} Hence, if initial concentrations are positive then concentrations are always positive and both models, with or without absolute values in the nonlinear terms, are equivalent. \vspace*{1pt}\baselineskip=13pt \section{Existence, positivity and uniqueness for different $G_P$, $G_D$, $L_I$ and zooplankton mortality formulations} \label{sec:uniqueness} \vspace*{-0.5pt} \noindent Functions used to parameterize biological fluxes such as zooplankton grazing on phytoplankton, $G_P$, or on detritus, $G_D$, light limited growth rate, $L_I$ or zooplankton mortality (which is a constant, $m_z$, in our model), vary from one modeling study to another. One can wonder if the existence result still applies with these different formulations. To answer this question it should be noticed that the key argument used in the proof is the fact that the nonlinear reaction terms allow us to define a nonlinear continuous operator ${\bf G}$ satisfying $||{\bf GC}||_{L^2(0,T,{\bf H})} \le M_g ||{\bf C}||_{L^2(0,T,{\bf H})}$. 
Therefore, as all the functions listed in Table \ref{tab:functions}, found in the literature, are continuous and bounded on $\hbox{\rm I\hskip -1.5pt R}^+$ or $(\hbox{\rm I\hskip -1.5pt R}^+)^2$, the existence result stays correct. Positivity can also easily be checked for all these different formulations. It should however be mentioned that some studies use a quadratic zooplankton mortality term which can not be treated with the method we propose. \begin{table}[htbp] \caption{ Different parameterizations found in the literature. All parameters are positive constants. \label{tab:functions} } \begin{tabular}{|p{2cm}|p{6cm}|p{3cm}|} \hline &&\\[-5pt] $Z$ grazing on $P$, $G_P$ & $\displaystyle \frac{g_zP^2}{k_z + P^2}$ & this\ study\ and\ e.g.\ Fennel\ {\it et al. } \cite{Fennel:2001}\\ \cline{2-3} &&\\[-5pt] & $\displaystyle \frac{g_zP^2}{k_z + P^2 + D^2}$ & e.g.\ Losa {\it et al. } \cite{Losa:2001}\\ &&\\[-5pt] \cline{2-3} &&\\[-5pt] & $\displaystyle \frac{g_zrP^2}{k_z(rP+(1-r)D)+rP^2+(1-r)D^2}$ & e.g.\ Fasham {\it et al. } \cite{Fasham:1990} \\ &&\\[-5pt] \hline &&\\[-5pt] $Z$ grazing on $D$, $G_D$ & $\displaystyle \frac{g_zD^2}{k_z + D^2}$ & this\ study\ and\ e.g.\ Fennel\ {\it et al. } \cite{Fennel:2001}\\ &&\\[-5pt] \cline{2-3} &&\\[-5pt] & $\displaystyle \frac{g_zD^2}{k_z + P^2 + D^2}$ & e.g.\ Losa\ {\it et al. } \cite{Losa:2001}\\ &&\\[-5pt] \cline{2-3} &&\\[-5pt] & $\displaystyle \frac{g_zrD^2}{k_z(rP+(1-r)D)+rP^2+(1-r)D^2}$ & e.g.\ Fasham\ {\it et al. } \cite{Fasham:1990} \\ &&\\[-5pt] \hline &&\\[-5pt] light\ limited\ growth\ rate, $L_I$ & $1-\exp(-PAR(t,x,P)/k_{par})$ & this\ study\ and\ e.g.\ L\'evy\ {\it et al. } \cite{Levy:1998a} \\ \cline{2-3} &&\\[-5pt] & $\displaystyle \frac{v_p \alpha PAR(t,x,P)}{(v_p^2 + \alpha^2 PAR(t,x,P)^2)^{1/2}}$ & e.g.\ Spitz\ {\it et al. } \cite{Spitz:1998}\\ &&\\[-5pt] \hline &&\\[-5pt] Z\ mortality& $m_z$ & this\ study\ and\ e.g.\ L\'evy {\it et al. 
} \cite{Levy:1998b}\\ \cline{2-3} &&\\[-5pt] & $\displaystyle \frac{m_z Z}{k + Z}$ & e.g.\ Losa\ {\it et al. } \cite{Losa:2001}\\ &&\\ \hline \end{tabular} \end{table} Let us now concentrate on the question of the uniqueness of weak solutions to the one-dimensional NPZD-model.\\ In order to prove uniqueness we need the nonlinear reaction terms to satisfy a local Lipschitz condition which was not needed to obtain the existence result. To verify that such a condition holds we examine in some detail the optical model from which the $PAR(t,x,P)$ term and consequently the $L_I(t,x,P)$ term are calculated.\\ In the different equations $L_I(t,x,P)$ always appears in the product form $PL_I(t,x,P)$. Concerning this product the desired local Lipschitz condition reads as follows:\\ ~\\ for all $(t,x) \in [0,T] \times [0,L]$, and all $P,\hat{P} \in [0,+\infty[$, \begin{equation} \label{lipschitz} |PL_I(t,x,P)-\hat{P}L_I(t,x,\hat{P})| \le K_I(P,\hat{P}) |P-\hat{P}|, \end{equation} where $K_I$ is a continuous nonnegative real-valued function which is increasing in each variable.\\ ~\\ In the optical model we consider, two different wavelengths are taken into account and the absorption coefficients depend on the local phytoplankton concentrations: $$ \begin{array}{ll} PAR(t,x,P)=&Q(t)(\exp(-(k_{go}+k_{gp}(\displaystyle \frac{12 P r_d}{r_{pg}r_c})^{l_g}) x) \\[7pt] &+\exp(-(k_{ro}+k_{rp}(\displaystyle \frac{12 P r_d}{r_{pg}r_c})^{l_r}) x)). \end{array} $$ $Q(t)$ is proportional to the irradiance intensity hitting the sea surface at time $t$. Parameters are given in Table \ref{tab:optic}. Let us suppose that $Q(t) \in L^{\infty}(0,T)$ and that $Q(t) \ge 0$. 
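The Lipschitz condition (\ref{lipschitz}) can be tested numerically for this two-wavelength optical model. In the Python sketch below the optical parameters are taken from Table \ref{tab:optic}, while $k_{par}$, $Q$ and $L$ are illustrative values that are not fixed by the optical model; the function $K_I$ is the one given in the text.

```python
import math

# Optical parameters from Table 3; k_par, Q and L are assumed values.
r_d, r_pg, r_c = 6.625, 0.7, 55.0
k_ro, k_go = 0.225, 0.0232
k_rp, k_gp = 0.037, 0.074
l_r, l_g = 0.629, 0.674
k_par, Q, L = 0.33, 1.0, 100.0

c = 12.0 * r_d / (r_pg * r_c)

def PAR(x, P):
    return Q * (math.exp(-(k_go + k_gp * (c * P) ** l_g) * x)
                + math.exp(-(k_ro + k_rp * (c * P) ** l_r) * x))

def L_I(x, P):
    return 1.0 - math.exp(-PAR(x, P) / k_par)

def K_I(P, Ph):
    # The Lipschitz modulus given in the text, increasing in max(P, Ph).
    m = max(P, Ph)
    return 1.0 + (Q * L / k_par) * (k_gp * l_g * c ** l_g * m ** l_g
                                    + k_rp * l_r * c ** l_r * m ** l_r)

def lipschitz_ok():
    # Check |P L_I(x,P) - Ph L_I(x,Ph)| <= K_I(P,Ph) |P - Ph| on a grid.
    Ps = [0.05 * i for i in range(1, 81)]
    xs = [L * j / 10.0 for j in range(11)]
    return all(abs(P * L_I(x, P) - Ph * L_I(x, Ph)) <= K_I(P, Ph) * abs(P - Ph)
               for x in xs for P in Ps for Ph in Ps if P != Ph)
```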
Even though the exponents $l_g$ and $l_r$ satisfy $0< l_g < 1$ and $0< l_r < 1$, an easy calculation of the derivative, $\displaystyle \frac{d}{dP}(PL_I(t,x,P))$, shows that with such an optical model, property (\ref{lipschitz}) is satisfied with $$ K_I(P,\hat{P})= 1 + \displaystyle \frac{||Q||_{\infty} L}{k_{par}} (k_{gp} l_g \displaystyle (\frac{12r_d}{r_{pg}r_{c}})^{l_g} (\max{(P,\hat{P})})^{l_g} +k_{rp} l_r \displaystyle (\frac{12r_d}{r_{pg}r_{c}})^{l_r} (\max{(P,\hat{P})})^{l_r}). $$ In the literature $PAR(t,x,P)$ is often parameterized by $$ PAR(t,x,P)=Q(t)\exp(-(k_1 + k_2 P)x), $$ where $k_1$ and $k_2$ are positive constants. With this simpler formulation property (\ref{lipschitz}) is clearly satisfied.\\ The following two lemmas give the local Lipschitz property satisfied by all four reaction terms of the NPZD-model. \begin{lemma} \label{lemma:besogne2} The nonlinear reaction terms $g_N$, $g_P$, $g_Z$ and $g_D$ satisfy:\\ for all $(t,x) \in [0,T] \times [0,L]$, and all ${\bf C},\hat{{\bf C}} \in (\hbox{\rm I\hskip -1.5pt R}^+)^4$, $$ \begin{array}{l} |g_N(t,x,{\bf C}) - g_N(t,x,\hat{{\bf C}})| \le K_N(P,\hat{P})(|N-\hat{N}|+|P-\hat{P}|+|Z-\hat{Z}|+|D-\hat{D}|),\\[4pt] |g_P(t,x,{\bf C}) - g_P(t,x,\hat{{\bf C}})| \le K_P(P,\hat{P})(|N-\hat{N}|+|P-\hat{P}|+|Z-\hat{Z}|+|D-\hat{D}|),\\[4pt] |g_Z(t,x,{\bf C}) - g_Z(t,x,\hat{{\bf C}})| \le K_Z(Z,\hat{Z})(|N-\hat{N}|+|P-\hat{P}|+|Z-\hat{Z}|+|D-\hat{D}|),\\[4pt] |g_D(t,x,{\bf C}) - g_D(t,x,\hat{{\bf C}})| \le K_D(Z,\hat{Z})(|N-\hat{N}|+|P-\hat{P}|+|Z-\hat{Z}|+|D-\hat{D}|), \end{array} $$ where $K_N$, $K_P$, $K_Z$, $K_D$ are continuous nonnegative real-valued functions which are increasing in each variable. \end{lemma} \begin{proof} Functions $l(x)=\displaystyle \frac{x}{k_n+x}$ and $g(x)=\displaystyle \frac{x^2}{k_z +x^2}$ are continuously differentiable on $[0,+\infty[$, and $$ |l'(x)| \le \displaystyle \frac{1}{k_n},\quad |g'(x)| \le \displaystyle \frac{3\sqrt{3}}{8\sqrt{k_z}}. 
$$ Therefore $l$ and $g$ are Lipschitz continuous.\\ It is clear that $$ \begin{array}{ll} |g_N(t,x,{\bf C}) - g_N(t,x,\hat{{\bf C}})| & \le \mu_p(1-\gamma) |L_I(t,x,P)PL_N(N) - L_I(t,x,\hat{P})\hat{P}L_N(\hat{N})|\\[4pt] &+ (\mu_z + \tau)|Z-\hat{Z}|+ (\mu_d + \tau)|D-\hat{D}| \\[4pt] &+\tau |P-\hat{P}| + \lambda |N-\hat{N}|. \end{array} $$ Now since $$ \begin{array}{ll} |L_I(t,x,P)PL_N(N) - L_I(t,x,\hat{P})\hat{P}L_N(\hat{N})|=&|L_I(t,x,P)P(L_N(N) - L_N(\hat{N}))\\[4pt] &+ L_N(\hat{N})( L_I(t,x,P)P - L_I(t,x,\hat{P})\hat{P})|, \end{array} $$ we have $$ |L_I(t,x,P)PL_N(N) - L_I(t,x,\hat{P})\hat{P}L_N(\hat{N})| \le \frac{1}{k_n} \max{(P,\hat{P})} |N - \hat{N}| + K_I(P,\hat{P})|P - \hat{P}|. $$ We then define $$ K_N(P,\hat{P})=\max{( \lambda + \mu_p(1-\gamma)\frac{1}{k_n} \max{(P,\hat{P})} , \tau + \mu_p(1-\gamma)K_I(P,\hat{P}), \mu_z + \tau, \mu_d + \tau)}. $$ $K_P$, $K_Z$ and $K_D$ are obtained in the same way. \end{proof} \begin{lemma} \label{prop:biendefbis} For $t \in [0,T]$ and for positive ${\bf C}(t), \hat{{\bf C}}(t) \in {\bf H}^1$, there exists a constant $L_{\infty}$, depending on $||{\bf C}(t)||_{\infty}$ and $||\hat{{\bf C}}(t)||_{\infty}$, such that the operator, ${\bf G}$, satisfies, $$ ||{\bf GC}(t) - {\bf G}\hat{{\bf C}}(t)|| \le L_{\infty}||{\bf C}(t)-\hat{{\bf C}}(t)||. $$ \end{lemma} \begin{proof} From lemma \ref{lemma:injection}, ${\bf H}^1 \subset {\bf L}^{\infty}$. 
From lemma \ref{lemma:besogne2} we have, $$ \begin{array}{l} ||{\bf G}{\bf C}(t)-{\bf G}\hat{{\bf C}}(t)||^2 \\[7pt] = \displaystyle \int_{0}^{L} |g_N(t,x,{\bf C}(t,x))-g_N(t,x,\hat{{\bf C}}(t,x))|^2 +|g_P(t,x,{\bf C}(t,x))-g_P(t,x,\hat{{\bf C}}(t,x))|^2\\[7pt] +|g_Z(t,x,{\bf C}(t,x))-g_Z(t,x,\hat{{\bf C}}(t,x))|^2 +|g_D(t,x,{\bf C}(t,x))-g_D(t,x,\hat{{\bf C}}(t,x))|^2 dx,\\[7pt] \le c (K_N(||P(t)||_{\infty},||\hat{P}(t)||_{\infty}))^2\ [\ ||N(t)-\hat{N}(t)||_{L^2(0,L)}^2\\[7pt] +||P(t)-\hat{P}(t)||_{L^2(0,L)}^2 + ||Z(t)-\hat{Z}(t)||_{L^2(0,L)}^2 + ||D(t)-\hat{D}(t)||_{L^2(0,L)}^2\ ]\\[7pt] + ...\\[7pt] \le L_{\infty}(||{\bf C}(t)||_{\infty},||\hat{{\bf C}}(t)||_{\infty})^2 || {\bf C}(t) - \hat{{\bf C}}(t) ||^2. \end{array} $$ \end{proof} Elementary calculations show that the functions of Table \ref{tab:functions} are continuously differentiable on $\hbox{\rm I\hskip -1.5pt R}^+$ or $(\hbox{\rm I\hskip -1.5pt R}^+)^2$, with bounded first derivatives. Therefore they are Lipschitz continuous and the uniqueness result presented below also holds for these formulations. \begin{proposition} \label{prop:unicite} The weak solution to the one-dimensional NPZD-model, ${\bf C} \in W({\bf H}^1)$, is unique. \end{proposition} \begin{proof} Let us suppose that there are two solutions ${\bf C}^1$ and ${\bf C}^2 \in W({\bf H}^1)$. They satisfy $$ \forall \phi \in {\bf H}^1, \quad (\displaystyle \frac{d}{dt} ({\bf C}^1 - {\bf C}^2), {\bf \phi}) + a(t,{\bf C}^1 - {\bf C}^2,{\bf \phi}) = ({\bf GC}^1-{\bf GC}^2,{\bf \phi}). $$ Let us choose $\phi ={\bf C}^1 - {\bf C}^2$ as a test function. Using the coerciveness of $a$ and the Cauchy-Schwarz inequality, we obtain $$ \displaystyle \frac{1}{2}\frac{d}{dt}||{\bf C}^1(t) - {\bf C}^2(t)||^2 +c_0 ||{\bf C}^1(t) - {\bf C}^2(t)||_1^2 \le ||{\bf GC}^1(t) - {\bf GC}^2(t)||||{\bf C}^1(t) - {\bf C}^2(t)||. 
$$ With Young's inequality, we obtain $$ \begin{array}{l} \displaystyle \frac{1}{2}\frac{d}{dt}||{\bf C}^1(t) - {\bf C}^2(t)||^2 +c_0 ||{\bf C}^1(t) - {\bf C}^2(t)||_1^2 \\[7pt] \le \displaystyle \frac{1}{2\alpha}||{\bf GC}^1(t) - {\bf GC}^2(t)||^2 +\displaystyle \frac{\alpha}{2}||{\bf C}^1(t) - {\bf C}^2(t)||^2, \end{array} $$ with $\alpha = 2c_0$, $$ \displaystyle \frac{d}{dt}||{\bf C}^1(t) - {\bf C}^2(t)||^2 \le \frac{1}{2 c_0}||{\bf GC}^1(t) - {\bf GC}^2(t)||^2. $$ From lemma \ref{prop:biendefbis} we obtain $$ \displaystyle \frac{d}{dt}||{\bf C}^1(t) - {\bf C}^2(t)||^2 \le \displaystyle \frac{1}{2c_0} L_{\infty}^2(t)||{\bf C}^1(t)-{\bf C}^2(t)||^2. $$ Thus, integrating on $[0,t]$ and using Gronwall's lemma, $$ ||{\bf C}^1(t)-{\bf C}^2(t)||^2 \le ||{\bf C}^1(0)-{\bf C}^2(0)||^2 \exp(\displaystyle \int_0^t \frac{1}{2c_0}L_{\infty}^2(s) ds). $$ This concludes the proof. \end{proof} \begin{table}[htbp] \caption{ Optical model parameters \label{tab:optic} } \begin{tabular}{|p{5cm}|c|c|c|} \hline parameter& name & value & unit \\ \hline Redfield\ ratio\ C:N & $r_d$ & 6.625 & \\ contribution\ of\ Chl\ to\ absorbing\ pigments & $r_{pg}$ & 0.7 & \\ carbon:chlorophyll ratio & $r_c$ & 55 & $mgC.mgChla^{-1}$\\ water\ absorption\ in\ red & $k_{ro}$& 0.225 & $m^{-1}$\\ water\ absorption\ in\ green & $k_{go}$& 0.0232 & $m^{-1}$\\ pigment\ absorption\ in\ red & $k_{rp}$& 0.037 & $m^{-1}.(mgChl.m^{-3})^{-l_r}$ \\ pigment\ absorption\ in\ green & $k_{gp}$& 0.074 & $m^{-1}.(mgChl.m^{-3})^{-l_g}$ \\ power law for absorption in red & $l_{r}$ & 0.629 & \\ power law for absorption in green & $l_{g}$ & 0.674 & \\ \hline \end{tabular} \end{table} \vspace*{1pt}\baselineskip=13pt \section{Conclusion} \vspace*{-0.5pt} \noindent We have presented a qualitative analysis of a one-dimensional biological NPZD-model. This model describes the evolution over time and space of four biological variables, phytoplankton, zooplankton, nutrients and detritus. 
The only physical process which is taken into account is vertical diffusion, and the biological model is embedded in a physical turbulence model which we did not give explicitly but which appears as a space- and time-dependent mixing coefficient. The model's equation for detritus also contains an advection term which represents the sinking of detritus with a constant speed. All four variables interact through nonlinear reaction terms which depend on space and time through the action of light and present a discontinuity in the space variable at a particular depth. We have formulated an initial-boundary value problem and proved the existence of a unique weak solution to it. Furthermore, a detailed investigation of the reaction terms enabled us to prove positivity of the solution. This is biologically important since the variables represent concentrations, which should always be positive quantities. We have also shown that the result still holds if different parameterizations of biological processes found in the biogeochemical modeling literature are used. The analysis conducted in this paper is a necessary first step towards the investigation of qualitative properties other than positivity which might be of interest. For example Boushaba {\it et al. } \cite{Boushaba:2002} deal with the problem of determining the asymptotic behavior of solutions to their phytoplankton model. One could also wish to investigate the bifurcational structure of the NPZD-model, even though the complexity of the analytical formulation of the equations might constitute a difficulty. This type of study could help in understanding how the evolution of the NPZD system is modified under minor changes in the values of the parameters reported in Edwards \cite{Edwards:2001} and Faugeras {\it et al. } \cite{Faugeras:2002}. 
Finally, we would like to point out that the analysis we presented can easily be extended to models containing any number, $n$, of biological variables as long as the nonlinearities allow us to define a continuous nonlinear operator ${\bf G}$ satisfying $||{\bf GC}||_{L^2(0,T,(L^2(0,L))^n)} \le M_g ||{\bf C}||_{L^2(0,T,(L^2(0,L))^n)}$. However, in such more complex models, the question of positivity seems to be delicate, as the equations have to be treated one after the other, and the right order has to be found since the positivity of some variables can depend on that of others. Let us also mention that the analysis can be extended to three-dimensional models in which not only mixing coefficients but also velocities, calculated by an ocean circulation model, are included in the system of partial differential equations constituting the biological model. These velocities can be included in the bilinear form, $a(t,.,.)$, as $v_d$, the detritus sedimentation speed, is in the formulation of the initial-boundary value problem we studied.\\ ~\\ {\bf Acknowledgements}.\\ The author is grateful to his thesis supervisors, Jacques Blum and Jacques Verron, for their guidance. The author also thanks Marina L\'evy and Laurent M\'emery for helpful discussions on physical and biological models, and Jean-Pierre Puel for his mathematical advice.
\section{Gauge-Higgs unification} The existence of a Higgs boson with a mass of $125\,$GeV has been firmly confirmed. It establishes the unification scenario of electromagnetic and weak forces. In the standard model (SM) electromagnetic and weak forces are unified as $SU(2)_L \times U(1)_Y$ gauge forces. The $SU(2)_L \times U(1)_Y$ gauge symmetry is spontaneously broken by the Higgs scalar fields, whose neutral component appears as the observed Higgs boson. Although almost all experimental data are consistent with the SM, it is not clear whether the observed Higgs boson is precisely what the SM assumes to exist. The gauge sector of the SM is beautiful. The gauge principle dictates how quarks and leptons interact with each other by gauge forces. In the SM the Higgs field gives masses to quarks, leptons, and weak bosons. However, the potential for the Higgs boson must be prepared by hand such that it induces the spontaneous breaking of $SU(2)_L \times U(1)_Y$ symmetry. To put it differently, there is no principle for the Higgs field which determines how the Higgs boson interacts with itself and other fields. The lack of a principle results in arbitrariness in the choice of parameters in the theory. Furthermore, the Higgs boson acquires an infinitely large correction to its mass at the quantum level, which must be cancelled by fine tuning of bare parameters. This is called the gauge hierarchy problem. In addition, even the ground state for the Higgs boson may become unstable against quantum corrections. Gauge-Higgs unification (GHU) naturally solves those problems. The 4d Higgs boson appears as a fluctuation mode of an Aharonov-Bohm (AB) phase in the fifth dimension of spacetime, thus becoming a part of gauge fields. Through the dynamics of the AB phase the Higgs boson acquires a finite mass at the quantum level, which is protected from divergence by the gauge principle. The interactions of the Higgs boson are governed by the gauge principle, too. 
In short, gauge fields and the Higgs boson are unified.\cite{Hosotani1983}-\cite{Hatanaka1998} A realistic model of gauge-Higgs unification has been proposed. It is the $SO(5) \times U(1)_X \times SU(3)_C$ gauge-Higgs unification in the Randall-Sundrum warped space. It reproduces the gauge field and matter content of the SM, and SM phenomenology at low energies. It leads to small deviations in the Higgs couplings. It also predicts new particles at the scale 5 TeV to 10 TeV as Kaluza-Klein (KK) excitation modes in the fifth dimension. Signals of these new particles can be seen both at LHC and at ILC.\cite{Kubo2002}-\cite{FHHOY2019} One of the distinct features of the gauge-Higgs unification is large parity violation in the couplings of quarks and leptons to KK excited states of gauge bosons. Right-handed quarks and leptons have much larger couplings to the first KK excited states of the photon, $Z$ boson, and $Z_R$ boson (called $Z'$ bosons) than the left-handed ones. These $Z'$ bosons have masses around 7 TeV - 8 TeV. We will show below that even at 250 GeV ILC with 250 fb$^{-1}$ of data, large deviations from the SM in various cross sections in $e^+ e^- \rightarrow f \bar f$ processes can be seen by measuring the dependence on the polarization of the electron beam. The key technique is to see interference effects between the contribution from the photon and $Z$ boson and the contribution from $Z'$ bosons. We comment that there might be variation in the matter content of the $SO(5) \times U(1)_X \times SU(3)_C$ gauge-Higgs unification. Recently a new way of introducing quark and lepton multiplets has been found, which can be embedded in the $SO(11)$ gauge-Higgs grand unification.\cite{FHHOY2019} Other options for fermion content have been proposed.\cite{Yoon2018} These models can be clearly distinguished from each other by investigating the polarization dependence of electron/positron beams in fermion pair production at ILC. 
Note also that gauge-Higgs unification scenario provides new approaches to dark matter, Higgs, and neutrino physics.\cite{FHHOS-DM2014}-\cite{Lim2018} \section{$SO(5) \times U(1)\times SU(3)$ GHU in Randall-Sundrum warped space} The theory is defined in the Randall-Sundrum (RS) warped space whose metric is given by \begin{align} ds^2= e^{-2\sigma(y)} \eta_{\mu\nu}dx^\mu dx^\nu+dy^2, \label{RSmetric1} \end{align} where $\mu,\nu=0,1,2,3$, $\eta_{\mu\nu}=\mbox{diag}(-1,+1,+1,+1)$, $\sigma(y)=\sigma(y+ 2L)=\sigma(-y)$, and $\sigma(y)=ky$ for $0 \le y \le L$. It has the topological structure $S^1/Z_2$. In terms of the conformal coordinate $z=e^{ky}$ ($1\leq z\leq z_L=e^{kL}$) in the region $0 \leq y \leq L$ \begin{align} ds^2= \frac{1}{z^2} \bigg(\eta_{\mu\nu}dx^{\mu} dx^{\nu} + \frac{dz^2}{k^2}\bigg) . \label{RSmetric-2} \end{align} The bulk region $0<y<L$ ($1<z<z_L$) is anti-de Sitter (AdS) spacetime with a cosmological constant $\Lambda=-6k^2$, which is sandwiched by the UV brane at $y=0$ ($z=1$) and the IR brane at $y=L$ ($z=z_L$). The KK mass scale is $m_{\rm KK}=\pi k/(z_L-1) \simeq \pi kz_L^{-1}$ for $z_L\gg 1$. Gauge fields $A_M^{SO(5)}$, $A_M^{U(1)_X}$ and $A_M^{SU(3)_C}$ of $SO(5) \times U(1)_X \times SU(3)_C$, with gauge couplings $g_A$, $g_B$ and $g_C$, satisfy the orbifold conditions\cite{HOOS2008, FHHOS2013, FHHOY2019} \begin{align} &\begin{pmatrix} A_\mu \cr A_{y} \end{pmatrix} (x,y_j-y) = P_{j} \begin{pmatrix} A_\mu \cr - A_{y} \end{pmatrix} (x,y_j+y)P_{j}^{-1} \label{BC-gauge1} \end{align} where $(y_0, y_1) = (0, L)$. For $A_M^{SO(5)}$ \begin{align} P_0=P_1 = P_{\bf 5}^{SO(5)} = \mbox{diag} (I_{4},-I_{1} ) ~, \label{BC-matrix1} \end{align} whereas $P_0=P_1=I$ for $A_M^{U(1)_X}$ and $A_M^{SU(3)_C}$. With this set of boundary conditions $SO(5)$ gauge symmetry is broken to $SO(4) \simeq SU(2)_L \times SU(2)_R$. At this stage there appear zero modes of 4D gauge fields in $SU(3)_C$, $SU(2)_L \times SU(2)_R$ and $U(1)_X$. 
There appear zero modes in the $SO(5)/SO(4)$ part of $A_y^{SO(5)}$, which constitute an $SU(2)_L$ doublet and become 4D Higgs fields. As a part of the gauge fields, the 4D Higgs boson $H(x)$ appears as an AB phase in the fifth dimension; \begin{align} &\hat W = P \exp \bigg\{ i g_A \int_{-L}^L dy \, A_y \bigg\} \cdot P_1 P_0 \sim \exp \bigg\{ i \bigg(\theta_H + \frac{H(x)}{f_H} \bigg) 2 T^{(45)} \bigg\} ~, \label{ABphase1} \end{align} where \begin{align} &f_H = \frac{2}{g_w} \sqrt{ \frac{k}{L(z_L^2 -1)}} \sim \frac{2 ~ m_{\rm KK}}{\pi g_w \sqrt{kL}} ~. \label{ABphase2} \end{align} $g_w = g_A/\sqrt{L}$ is the 4D weak coupling. Gauge invariance implies that physics is periodic in $\theta_H $ with a period $2\pi$. A brane scalar field $\hat \Phi_{(1,2,2, \frac{1}{2})} (x)$ or $\hat \Phi_{(1,1,2, \frac{1}{2})} (x)$ is introduced on the UV brane, where the subscripts indicate the $SU(3)_C \times SU(2)_L \times SU(2)_R \times U(1)_X$ content. Nonvanishing $\langle \hat \Phi \rangle$ spontaneously breaks $SU(2)_R \times U(1)_X$ to $U(1)_Y$, resulting in the SM symmetry $SU(3)_C \times SU(2)_L \times U(1)_Y$. Once the fermion content is specified, the effective potential $V_{\rm eff} (\theta_H)$ is evaluated. The location of the global minimum of $V_{\rm eff} (\theta_H)$ determines the value of $\theta_H$. When $\theta_H \not= 0$, $SU(2)_L \times U(1)_Y$ symmetry is dynamically broken to $U(1)_{\rm EM}$. This is called the Hosotani mechanism.\cite{Hosotani1983} The $W$ boson mass is given by \begin{align} m_W \sim \sqrt{\frac{k}{L}} ~ z_L^{-1} \, \sin \theta_H \sim \frac{\sin \theta_H}{\pi \sqrt{kL}} ~ m_{\rm KK} ~. \label{Wmass1} \end{align} As typical values, for $\theta_H = 0.10$ and $z_L = 3.6 \times 10^4$ one finds $m_{\rm KK} = 8.1\,$TeV and $f_H = 2.5\,$TeV. A natural little hierarchy appears between the weak scale ($m_Z$) and the KK scale ($m_{\rm KK}$). Quark and lepton multiplets are introduced in the vector representation {\bf 5} of $SO(5)$. 
In addition, dark fermions are introduced in the spinor representation {\bf 4} of $SO(5)$. This model is called the A-model, and has been investigated intensively so far.\cite{FHHOS2013}-\cite{FHHO2017ILC} Recently an alternative way of introducing matter has been found.\cite{FHHOY2019} This model, called the B-model, can be implemented in the $SO(11)$ gauge-Higgs grand unification.\cite{HosotaniYamatsu2015, Furui2016, HosotaniYamatsu2017} The matter content of the two models is summarized in Table \ref{Table-matter}. In this talk phenomenological consequences of the A-model are presented. \begin{table}[tbh] \renewcommand{\arraystretch}{1.4} \begin{center} \caption{Matter fields. $SU(3)_C\times SO(5) \times U(1)_X$ content is shown. In the A-model only $SU(3)_C\times SO(4) \times U(1)_X$ symmetry is maintained on the UV brane so that the $SU(2)_L \times SU(2)_R$ content is shown for brane fields. In the B-model given in ref.\ \cite{FHHOY2019} the full $SU(3)_C\times SO(5) \times U(1)_X$ invariance is preserved on the UV brane. 
} \vskip 10pt \begin{tabular}{|c|c|c|} \hline &A-model &B-model \\ \hline \hline quark &$({\bf 3}, {\bf 5})_{\frac{2}{3}} ~ ({\bf 3}, {\bf 5})_{-\frac{1}{3}}$ &$({\bf 3}, {\bf 4})_{\frac{1}{6}} ~ ({\bf 3}, {\bf 1})_{-\frac{1}{3}}^+ ~ ({\bf 3}, {\bf 1})_{-\frac{1}{3}}^-$ \\ \hline lepton &$({\bf 1}, {\bf 5})_{0} ~ ({\bf 1}, {\bf 5})_{-1}$ &$\strut ({\bf 1}, {\bf 4})_{-\frac{1}{2}}$ \\ \hline dark fermion &$({\bf 1}, {\bf 4})_{\frac{1}{2}}$ & $({\bf 3}, {\bf 4})_{\frac{1}{6}} ~ ({\bf 1}, {\bf 5})_{0}^+ ~ ({\bf 1}, {\bf 5})_{0}^-$ \\ \hline \hline brane fermion &$\begin{matrix} ({\bf 3}, [{\bf 2, 1}])_{\frac{7}{6}, \frac{1}{6}, -\frac{5}{6}} \cr \noalign{\kern -4pt} ({\bf 1}, [{\bf 2, 1}])_{\frac{1}{2}, -\frac{1}{2}, -\frac{3}{2}} \end{matrix}$ &$({\bf 1}, {\bf 1})_{0} $ \\ \hline brane scalar &$({\bf 1}, [{\bf 1,2}])_{\frac{1}{2}}$ &$({\bf 1}, {\bf 4})_{\frac{1}{2}} $ \\ \hline Sym.\ on UV brane &$SU(3)_C \times SO(4) \times U(1)_X$ &$SU(3)_C \times SO(5) \times U(1)_X$ \\ \hline \end{tabular} \label{Table-matter} \end{center} \end{table} The correspondence between the SM in four dimensions and the gauge-Higgs unification in five dimensions is summarized as \begin{align} \begin{matrix} {\rm SM} && {\rm GHU} \cr \noalign{\kern 3pt} \displaystyle \int d^4x \Big\{ {\cal L}^{\rm gauge} + {\cal L}^{\rm Higgs}_{\rm kinetic} \Big\} &~ \Rightarrow ~ &\displaystyle \int d^5 x \sqrt{-g} ~ {\cal L}^{\rm gauge}_{\rm 5d} \cr \noalign{\kern 5pt} \displaystyle \int d^4 x \Big\{ {\cal L}^{\rm fermion} + {\cal L}^{\rm Yukawa} \Big\} &\Rightarrow &\displaystyle \int d^5 x \sqrt{-g} ~ {\cal L}^{\rm fermion}_{\rm 5d} \cr \noalign{\kern 5pt} - \displaystyle \int d^4 x ~ {\cal L}^{\rm Higgs}_{\rm potential} &\Rightarrow & \displaystyle \int d^4 x ~ V_{\rm eff} (\theta_H) \end{matrix} \label{correspondence} \end{align} In the SM, ${\cal L}^{\rm gauge}$, $ {\cal L}^{\rm Higgs}_{\rm kinetic}$ and ${\cal L}^{\rm fermion}$ are governed by the gauge principle, but ${\cal L}^{\rm Yukawa}$ and 
${\cal L}^{\rm Higgs}_{\rm potential}$ are not. On the GHU side in (\ref{correspondence}), ${\cal L}^{\rm gauge}_{\rm 5d}$ and ${\cal L}^{\rm fermion}_{\rm 5d}$ are governed by the gauge principle and $V_{\rm eff} (\theta_H)$ follows from them. \section{Gauge couplings and Higgs couplings} Let us focus on the A-model. The SM quark-lepton content is reproduced with no exotic light fermions. The one-loop effective potential $V_{\rm eff} (\theta_H)$ is displayed in fig.\ \ref{figure-Veff}. The finite Higgs boson mass $m_H \sim 125\,$GeV is generated naturally with $\theta_H \sim 0.1$. Relevant parameters in the theory are determined from quark-lepton masses, $m_Z$, and the electromagnetic, weak, and strong gauge coupling constants. Many physical quantities depend on the value of $\theta_H$, but not on the other parameters. In the SM the $W$ and $Z$ couplings of quarks and leptons are universal. They depend only on the representations of the group $ SU(2)_L \times U(1)_Y$. In GHU the $W$ and $Z$ couplings of quarks and leptons may depend on the more detailed behavior of the wave functions in the fifth dimension. Four-dimensional couplings are obtained by integrating the product of the $W/Z$ and quark/lepton wave functions over the fifth dimensional coordinate. \begin{figure}[tbh] \begin{center} \includegraphics[bb=0 0 360 255, height=4.5cm]{Veff2.pdf} \includegraphics[bb=0 0 360 227, height=4.5cm]{Veff1.pdf} \end{center} \vskip -10pt \caption{Effective potential $V_{\rm eff} (\theta_H)$ is displayed in the unit of $(k z_L^{-1})^4/16 \pi^2$ for $z_L = 3.56 \times 10^4$ and $m_{\rm KK} = 8.144\,$TeV. The minimum of $V_{\rm eff}$ is located at $\theta_H = 0.10$. The curvature at the minimum determines the Higgs boson mass by $m_H^2 = f_H^{-2} V_{\rm eff}''(\theta_H)|_{\rm min}$, yielding $m_H = 125.1\,$GeV. } \label{figure-Veff} \end{figure} Surprisingly the $W$ and $Z$ couplings of quarks and leptons and the $WWZ$ coupling in GHU turn out to be very close to those in the SM. 
The result is tabulated in Table \ref{Table-gaugeCoupling1}. In the last column the values in the SM are listed. The deviations from the SM are very small. The $W$ couplings of left-handed light quarks and leptons are approximately given by \begin{align} g_L^W\sim g_w \, \frac{\sqrt{2kL}}{\sqrt{2kL - \frac{3}{4} \sin^2 \theta_H }} \sim g_w \, \Big( 1 + \frac{3 \sin^2 \theta_H}{16 kL} \Big) ~. \label{Wcoupling1} \end{align} Here $kL = \ln z_L$. The $W$ couplings of right-handed quarks and leptons are negligibly small. \begin{table}[tbh] \renewcommand{\arraystretch}{1.1} \begin{center} \caption{Gauge ($W, Z$) couplings of quarks and leptons. $WWZ$ coupling is also listed at the bottom. The values in the SM are listed in the last column. } \vskip 10pt \label{Table-gaugeCoupling1} \begin{tabular}{|c|c|cc|cc|c|} \hline \multicolumn{2}{|c|}{~} &\multicolumn{2}{|c|}{$\theta_H = 0.115$} &\multicolumn{2}{|c|}{$\theta_H=0.0737$} & SM\\ \hline &$(\nu_e , e)$ &\multicolumn{2}{|c|}{1.00019} &\multicolumn{2}{|c|}{ 1.00009} & \\ &$(\nu_\mu , \mu)$ &\multicolumn{2}{|c|}{1.00019} & \multicolumn{2}{|c|}{1.00009 }&1 \\ $g_L^W/g_w$ &$(\nu_\tau , \tau)$ &\multicolumn{2}{|c|}{1.00019} & \multicolumn{2}{|c|}{1.00009 } & \\ \cline{2-7} &$(u,d)$ &\multicolumn{2}{|c|}{1.00019} & \multicolumn{2}{|c|}{1.00009 } & \\ &$(c,s)$ &\multicolumn{2}{|c|}{1.00019} & \multicolumn{2}{|c|}{1.00009 } &1 \\ &$(t,b)$ &\multicolumn{2}{|c|}{0.9993} & \multicolumn{2}{|c|}{0.9995} & \\ \hline &$\nu_e, \nu_\mu, \nu_\tau$ &0.50014 &0 &0.50008 &0 & 0.5 \qquad 0\\ \cline{2-7} &$e, \mu, \tau$ &-0.2688 &0.2314 &-0.2688 &0.2313 & -0.2688 $\,$ 0.2312\\ \cline{2-7} $(g_L^Z,g_R^Z)/g_w$ &$u, c$ &0.3459 &-0.1543 &0.3459 &-0.1542 & 0.3458 $\,$ -0.1541\\ &$t$ &0.3449 &-0.1553 &0.3453 &-0.1549 & \\ \cline{2-7} &$d, s$ &-0.4230 &0.0771 &-0.4230 &0.0771 & -0.4229 $\,$ 0.0771\\ &$b$ &-0.4231 &0.0771 &-0.4230 &0.0771 & \\ \hline \multicolumn{2}{|c|}{$g_{WWZ}/g_w \cos \theta_W$} &\multicolumn{2}{|c|}{0.9999998} 
&\multicolumn{2}{|c|}{0.99999995} &1\\ \hline \end{tabular} \end{center} \end{table} Yukawa couplings of quarks and leptons, and the $WWH$, $ZZH$ couplings are well approximated by \begin{align} \begin{pmatrix} g_{\rm Yukawa} \cr \noalign{\kern 3pt} g_{WWH} \cr \noalign{\kern 3pt} g_{ZZH} \end{pmatrix} &\sim \begin{pmatrix} g_{\rm Yukawa}^{\rm SM} \cr \noalign{\kern 3pt} g_{WWH}^{\rm SM} \cr \noalign{\kern 3pt} g_{ZZH}^{\rm SM} \end{pmatrix} \times \cos\theta_H \label{HiggsCoupling1} \end{align} where $g_{\rm Yukawa}^{\rm SM}$ on the right side, for instance, denotes the value in the SM. For $\theta_H \sim 0.1$ the deviation amounts to only 0.5\%. Larger deviations are expected in the cubic and quartic self-couplings of the Higgs boson. They are approximately given by \begin{align} \lambda_3^{\rm Higgs} &\sim 156.9 \, \cos\theta_H + 17.6 \, \cos^2 \theta_H ~~({\rm GeV}), \cr \noalign{\kern 5pt} \lambda_4^{\rm Higgs} &\sim - 0.257+ 0.723\cos 2 \theta_H + 0.040 \cos 4 \theta_H ~. \label{HiggsCoupling2} \end{align} In the SM, $\lambda_3^{\rm Higgs, SM} = 190.7\,$GeV and $\lambda_4^{\rm Higgs, SM} = 0.774$. In the $\theta_H \rightarrow 0$ limit $\lambda_3^{\rm Higgs}$ and $\lambda_4^{\rm Higgs}$ become 8.5\% and 35\% smaller than the values in the SM. $\lambda_3^{\rm Higgs}$ can be measured at ILC. GHU gives nearly the same phenomenology at low energies as the SM. To distinguish GHU from the SM, one needs to look for signals of the new particles which GHU predicts. \section{New particles -- KK excitation} KK excitations of each particle appear as new particles. The existence of an extra dimension would be confirmed by observing KK excited particles of quarks, leptons, and gauge bosons. The KK spectrum is shown in Table \ref{table-KKspectrum1}. $Z_R$ is the gauge field associated with $SU(2)_R$, and has no zero mode. $Z^{(1)}$, $\gamma^{(1)}$ and $Z_R^{(1)}$ are called $Z'$ bosons. 
Clean signals can be found in the process $q \, \bar q \rightarrow Z' \rightarrow e^+ e^- , \mu^+ \mu^-$ at LHC. So far no event of $Z'$ has been observed, which puts the limit $\theta_H < 0.11$. \begin{table}[htb] \caption{The mass spectrum $\{ m_n \}$ ($n \ge 1$) of KK excited modes of gauge bosons and quarks for $\theta_H = 0.10, n_F = 4$, where $n_F$ is the number of dark fermion multiplets. Masses are given in the unit of TeV. Pairs $(W^{(n)}, Z^{(n)})$, $(W_R^{(n)}, Z_R^{(n)})$, $(t^{(n)}, b^{(n)})$, $(c^{(n)}, s^{(n)})$, $(u^{(n)}, d^{(n)})$ ($n \ge 1$) have almost degenerate masses. The spectrum of $W_R$ tower is the same as that of $Z_R$ tower. The gluon tower has the same spectrum as the photon ($\gamma$) tower. } \label{table-KKspectrum1} \vskip 5pt \centering \renewcommand{\arraystretch}{1.1} \begin{tabular}{|c|c|c|c|c||c|c|c|c|} \hline \multicolumn{9}{|c|}{$\theta_H = 0.10, ~ n_F = 4, ~ m_{\rm KK} = 8.144 \,{\rm TeV}, ~ z_L = 3.56 \times 10^4$} \\ \hline &\multicolumn{2}{c|}{$Z^{(n)}$} &$\gamma^{(\ell)}$ &$Z_R^{(\ell)}$ &\multicolumn{2}{c|}{$t^{(n)}$} &$c^{(n)}$ &$u^{(n)}$ \\ \hline $n\, (\ell)$ &$m_n$ &$\mfrac{m_n}{m_{\rm KK}}$ &$m_\ell$ &$m_\ell$ &$m_n$ &$\mfrac{m_n}{m_{\rm KK}}$ &$m_n$ &$m_n$ \\ \hline $1\, (1)$&6.642 &$0.816$ &6.644 &6.234 &7.462&0.916 &8.536 &10.47 \\ 2 ~~~~~&9.935 &1.220 &-- &-- &8.814 &1.082 &12.01 &13.82 \\ $3\, (2)$&14.76 &1.812&14.76 &14.31 &15.58 &1.913 &16.70 &18.76 \\ 4 ~~~~~&18.19 &2.233 &-- &-- &16.99 &2.087 &20.41 &22.37 \\ \hline \end{tabular} \end{table} The KK mass scale as a function of $\theta_H$ is approximately given by \begin{align} &m_{\rm KK} (\theta_H) \sim \frac{1.36\,{\rm TeV}}{(\sin \theta_H )^{0.778}} ~, \label{KKscale1} \end{align} irrespective of the other parameters of the theory. 
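The fit (\ref{KKscale1}) can be cross-checked against the estimates (\ref{ABphase2}) and (\ref{Wmass1}) with a few lines of Python; here $g_w \simeq 0.65$ is an assumed value of the 4D weak coupling, not taken from the text:

```python
import math

theta_H = 0.10
z_L = 3.56e4
g_w = 0.65                 # assumed 4D weak coupling

# KK scale from the empirical fit m_KK(theta_H): should give ~8.1 TeV
m_KK = 1.36 / math.sin(theta_H) ** 0.778      # TeV

kL = math.log(z_L)         # kL = ln z_L ~ 10.5

# W boson mass estimate, with the tabulated m_KK = 8.144 TeV
m_W = math.sin(theta_H) / (math.pi * math.sqrt(kL)) * 8144.0   # GeV

# AB-phase scale f_H ~ 2 m_KK / (pi g_w sqrt(kL)): should give ~2.5 TeV
f_H = 2 * 8.144 / (math.pi * g_w * math.sqrt(kL))              # TeV
```

The three numbers come out close to the values quoted in the text ($m_{\rm KK} \simeq 8.1\,$TeV, $m_W \simeq 80\,$GeV, $f_H \simeq 2.5\,$TeV), illustrating the internal consistency of the approximate formulas.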
In GHU many physical quantities, such as the Higgs couplings in (\ref{HiggsCoupling1}) and (\ref{HiggsCoupling2}), the KK scale (\ref{KKscale1}), and the KK masses of gauge bosons, are approximately determined by the value of $\theta_H$ alone. This property is called the $\theta_H$ universality. Once the $Z^{(1)}$ particle is found and its mass is determined, the value of $\theta_H$ is fixed and the values of other physical quantities are predicted.\cite{FHHOS2013} Although $Z'$ bosons are heavy with masses around 6 -- 8 TeV, their effects can be seen at the 250 GeV ILC ($e^+ e^-$ collisions; see Fig.~\ref{fig:ILC-Zprime}). The couplings of right-handed quarks and leptons to $Z'$ bosons are much stronger than those of left-handed quarks and leptons. This large parity violation manifests itself as an interference effect in $e^+ e^-$ collisions.\cite{FHHO2017ILC} \begin{figure}[htb] \begin{center} \includegraphics[bb=10 57 690 191, width=10cm]{ILC-Zprime1.pdf} \end{center} \caption{ Dominant diagrams in the process $e^+ e^- \rightarrow \mu^+ \mu^-$ } \label{fig:ILC-Zprime} \end{figure} Left-handed light quarks and leptons are localized near the UV brane (at $y=0$), whereas right-handed ones are localized near the IR brane (at $y=L$). Wave functions of top and bottom quarks spread over the entire fifth dimension. In GHU both left- and right-handed fermions belong to the same gauge multiplet, so that if a left-handed fermion is localized near the UV brane, its right-handed partner is necessarily localized near the IR brane. KK modes of gauge bosons in the RS space are always localized near the IR brane. $Z'$ couplings of quarks and leptons are given by overlap integrals of the wave functions of $Z'$ bosons and left- or right-handed quarks and leptons. Consequently, right-handed quarks and leptons have larger couplings to $Z'$. Typical behavior of wave functions is depicted in fig.~\ref{fig-wavefunctions}.
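The $\theta_H$ universality can be illustrated numerically: a measured $Z^{(1)}$ mass, combined with the tabulated ratio $m_1/m_{\rm KK} \approx 0.816$ from Table \ref{table-KKspectrum1}, fixes $m_{\rm KK}$, and inverting (\ref{KKscale1}) then fixes $\theta_H$. A sketch (note that the ratio 0.816 is taken from the $\theta_H = 0.10$ table entry and is itself mildly $\theta_H$-dependent):

```python
import math

def theta_from_mz1(m_z1_tev, ratio=0.816):
    """Extract theta_H from a measured Z^(1) mass (TeV) by inverting
    eq. (KKscale1): m_KK ~ 1.36 TeV / (sin theta_H)^0.778.
    ratio = m_{Z^(1)}/m_KK, taken from Table 2 (valid near theta_H ~ 0.1)."""
    m_kk = m_z1_tev / ratio
    return math.asin((1.36 / m_kk) ** (1 / 0.778))

# The tabulated point m_{Z^(1)} = 6.642 TeV should give back theta_H ~ 0.10
theta = theta_from_mz1(6.642)
print(f"theta_H = {theta:.3f}")
```

Inserting the tabulated $m_{Z^{(1)}} = 6.642\,$TeV indeed recovers $\theta_H \simeq 0.10$, after which the Higgs couplings and the rest of the KK spectrum follow as predictions.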
\begin{figure}[bht] \begin{center} \includegraphics[bb=2 88 425 375, width=8.0cm]{wavefunctions.pdf} \end{center} \vskip 5pt \caption{ Wave functions of various fermions and gauge bosons for $\theta_H = 0.1$. Only some of the relevant components in $SO(5)$ are displayed. Wave functions of light quarks and leptons are qualitatively similar to those of $(u_L, u_R)$. Wave functions of $(b_L, b_R)$ are similar to those of $(t_L, t_R)$. The $Z$ boson wave function is almost constant, whereas the $Z^{(1)}$ wave function becomes large near the IR brane at $z=z_L$. } \label{fig-wavefunctions} \end{figure} Gauge couplings of quarks and leptons to $Z^{(1)}$, $\gamma^{(1)}$ and $Z_R^{(1)}$ are summarized in Table~\ref{table-Zprimecoupling1}. Except for the $b$ and $t$ quarks, right-handed quarks and leptons have much larger couplings than left-handed ones. \begin{table}[h] \caption{ Gauge couplings of quarks and leptons to $Z^{(1)}$, $\gamma^{(1)}$ and $Z_R^{(1)}$ for $\theta_H = 0.0917$ and $\sin^2 \theta_W = 0.2312$. Couplings are given in the unit of $g_w/\cos \theta_W$. The $Z$ couplings in the SM, $I_3 - \sin^2 \theta_W Q_{\rm EM}$, are also shown.
} \label{table-Zprimecoupling1} \vskip 5pt \begin{center} \renewcommand{\arraystretch}{1.0} \begin{tabular}{|c|cc|cc|cc|cc|} \hline &\multicolumn{2}{c|}{SM: $Z$} &\multicolumn{2}{c|}{$Z^{(1)}$} &\multicolumn{2}{c|}{$Z_R^{(1)}$} &\multicolumn{2}{c|}{$\gamma^{(1)}$}\\ &Left &Right & Left & Right & Left & Right & Left & Right \\ \hline $\nu_e$ & & & $-0.183$ & 0 & 0 & 0 & 0 & 0 \\ $\nu_{\mu}$ & $0.5$ & 0 & $-0.183$ & 0 & 0 & 0 & 0 & 0 \\ $\nu_{\tau}$ & $$ & & $-0.183$ & 0 & 0 & 0 & 0 & 0 \\ \hline $e$ & $$ & $$ & $0.099$ & $0.916$ & 0 & $-1.261$ & $0.155$ & $-1.665$ \\ $\mu$ &$-0.269$ &$0.231$ & $0.099$ & $0.860$ & 0 & $-1.193$ & $0.155$ & $-1.563$ \\ $\tau$ & $$ & $$ & $0.099$ & $0.814$ & 0 & $-1.136$ & $0.155$ & $-1.479$ \\ \hline $u$ & $$ & $$ & $-0.127$ & $-0.600$ & $0$ & $0.828$ & $-0.103$ & $1.090$ \\ $c$ &$0.346$ &$-0.154$ & $-0.130$ & $-0.555$ & $0$ & $0.773$ & $-0.103$ & $1.009$ \\ $t$ & $$ & $$ & $0.494$ & $-0.372$ & $0.985$ & $0.549$& $0.404$ & $0.678$ \\ \hline $d$ & $$ & $$ & $0.155$ & $0.300$ & $0$ & $-0.414$ & $0.052$ & $-0.545$ \\ $s$ &$-0.423$ &$0.077$ & $0.155$ & $0.277$ & $0$ & $-0.387$ & $0.052$ & $-0.504$ \\ $b$ & $$ & $$ & $-0.610$ & $0.186$ & $0.984$ & $-0.274$ & $-0.202$ & $-0.339$ \\ \hline \end{tabular} \end{center} \end{table} \section{$e^+ e^- $ collisions} The amplitude ${\cal M}$ for the $e^+ e^- \rightarrow \mu^+ \mu^-$ process at the tree level in fig.~\ref{fig:ILC-Zprime} can be expressed as the sum of two terms ${\cal M}_0$ and ${\cal M}_{Z'}$. \begin{align} {\cal M} &= {\cal M}_0 + {\cal M}_{Z'} \cr &= {\cal M}(e^+ e^- \rightarrow \gamma \, , \, Z \rightarrow \mu^+ \mu^-) + {\cal M}(e^+ e^- \rightarrow Z' \rightarrow \mu^+ \mu^-) ~. 
\label{pairproduction1} \end{align} For $s = (250\,{\rm GeV})^2 \sim (1\,{\rm TeV})^2$, we have $m_Z^2 \ll s \ll m_{Z'}^2$ so that the amplitude can be approximated by \begin{align} {\cal M} &\simeq \frac{g_w^2}{\cos^2 \theta_W} \sum_{\alpha, \beta= L,R} J_\alpha^{(e)\nu} (p,p') \bigg\{ \frac{\kappa_{\rm SM}^{\alpha\beta}}{s} - \frac{\kappa_{Z'}^{\alpha\beta}}{m_{Z'}^2} \bigg\} J_{\beta\nu}^{(\mu)} (k,k') \label{pairproduction2} \end{align} where $J_{\alpha\nu}^{(e)} (p,p')$ and $J_{\beta\nu}^{(\mu)} (k,k')$ represent momentum and polarization configurations of the initial and final states, respectively. $\kappa_{\rm SM}^{\alpha\beta}$ and $\kappa_{Z'}^{\alpha\beta}$ are found from Table~\ref{table-Zprimecoupling1} to be \begin{align} (\kappa_{\rm SM}^{LL}, \kappa_{\rm SM}^{LR}, \kappa_{\rm SM}^{RL}, \kappa_{\rm SM}^{RR}) &= (0.25, 0.1156, 0.1156, 0.2312)~, \cr \noalign{\kern 5pt} (\kappa_{Z'}^{LL}, \kappa_{Z'}^{LR}, \kappa_{Z'}^{RL}, \kappa_{Z'}^{RR})~ &= (0.034, -0.158, -0.168, 4.895) ~. \label{pairproduction3} \end{align} Compared with the SM values, $\kappa_{Z'}^{RR}$ is very large whereas $\kappa_{Z'}^{LL}$ is very small. Although direct production of $Z'$ particles is not possible with $s= (250\,{\rm GeV})^2 \sim (1\,{\rm TeV})^2$, the interference term becomes appreciable. Suppose that the electron beam is polarized in the right-handed mode. Then the interference term gives \begin{align} \frac{{\cal M}_0 {\cal M}_{Z'}^*}{| {\cal M}_0 |^2} &\sim - \frac{ \kappa_{Z'}^{RR} + \kappa_{Z'}^{RL}}{ \kappa_{\rm SM}^{RR} + \kappa_{\rm SM}^{RL}} \, \frac{s}{m_{Z'}^2} \sim - 13.6 \, \frac{s}{m_{Z'}^2} \cr \noalign{\kern 10pt} &\sim -0.017 \quad {\rm at} ~ \sqrt{s} = 250\,{\rm GeV} ~. \label{pairproduction4} \end{align} This is a sizable number. Since the number of fermion pair production events expected at the proposed ILC is huge, a 1.7\% correction can certainly be detected.
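The estimate in (\ref{pairproduction4}) follows directly from the couplings in (\ref{pairproduction3}). A numerical sketch, assuming $m_{Z'} \approx 7\,$TeV for the dominant contribution (consistent with the 6 -- 8 TeV masses quoted in the text):

```python
# Right-handed electron beam: interference of the Z' amplitude with the
# SM (gamma, Z) amplitude, using the kappa values of eq. (pairproduction3)
K_SM_RR, K_SM_RL = 0.2312, 0.1156
K_ZP_RR, K_ZP_RL = 4.895, -0.168

# coupling ratio in eq. (pairproduction4); expected ~ 13.6
ratio = (K_ZP_RR + K_ZP_RL) / (K_SM_RR + K_SM_RL)

# Assumed Z' mass of 7 TeV (an illustrative choice within the quoted range)
s = 0.250 ** 2          # sqrt(s) = 250 GeV, in TeV^2
m_zp2 = 7.0 ** 2        # m_Z'^2 in TeV^2
interference = -ratio * s / m_zp2   # expected ~ -0.017
print(f"ratio = {ratio:.1f}, interference = {interference:.4f}")
```

This reproduces both the coupling ratio of $\sim 13.6$ and the $\sim -1.7\%$ interference correction at $\sqrt{s} = 250\,$GeV.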
One recognizes that polarized electron and/or positron beams play an important role in investigating physics beyond the SM.\cite{FHHO2017ILC}, \cite{Yoon2018}, \cite{Bilokin2017}-\cite{ILC2019} \subsection{Energy and polarization dependence} In $e^+ e^-$ collision experiments one can control both the energy and polarization of the incident electron and positron beams. First consider the total cross section for $ e^+ e^- \rightarrow \mu^+ \mu^-$; \begin{align} F_1 = \frac{\sigma (e^+ e^- \rightarrow \mu^+ \mu^- )^{\rm GHU}}{\sigma (e^+ e^- \rightarrow \mu^+ \mu^- )^{\rm SM}} ~. \label{mupair1} \end{align} Both the electron and positron beams are polarized, with polarizations $P_{e^-}$ and $P_{e^+}$. For purely right-handed (left-handed) electrons $P_{e^-} = +1 (-1)$. At $\sqrt{s} \ge 250\,$GeV, $e^+$ and $e^-$ in the initial state may be viewed as massless particles. The ratio $F_1$ in (\ref{mupair1}) depends on the effective polarization \begin{align} P_{\rm eff} = \frac{P_{e^-} - P_{e^+}}{1 - P_{e^-} P_{e^+} } ~. \label{Peff1} \end{align} At the proposed 250 GeV ILC, $|P_{e^-}| \le 0.8$ and $|P_{e^+}| \le 0.3$ so that $|P_{\rm eff}| \le 0.877$. The $\sqrt{s}$ dependence of $F_1$ is depicted in fig.~\ref{fig:mupair} (a). The deviation from the SM becomes very large at $\sqrt{s} = 1.5\,{\rm TeV} \sim 2\,{\rm TeV}$ for $\theta_H = 0.09 \sim 0.07$, particularly with $P_{\rm eff} \sim 0.8$. For $P_{\rm eff} \sim - 0.8$ the deviation is tiny. At $\sqrt{s} = 250\,$GeV the deviation might look small, but the event number expected at ILC is so huge that the deviation can be unambiguously observed even there. In fig.~\ref{fig:mupair} (b) the polarization $P_{\rm eff}$ dependence of $F_1$ is depicted for $\sqrt{s} = 250\,$GeV and 500$\,$GeV. As the polarization $P_{\rm eff}$ varies from $-1$ to $+1$, the deviation from the SM becomes significantly larger.
The grey band in fig.~\ref{fig:mupair} (b) indicates the statistical uncertainty at $\sqrt{s} = 250\,$GeV with a 250$\,$fb$^{-1}$ data set in the SM. The GHU signal can thus be clearly identified by measuring the polarization dependence in the early stage of the 250$\,$GeV ILC. \begin{figure}[thb] \begin{center} \includegraphics[bb=0 0 288 293, width=6.0cm]{sigma-mu-ratio.pdf} \quad \includegraphics[bb=0 0 360 232, width=7.cm]{sigma-mu-ILC2.pdf} \\ (a) \hskip 6cm (b) \end{center} \caption{ $F_1 = \sigma (\mu^+ \mu^- )^{\rm GHU}/\sigma (\mu^+ \mu^- )^{\rm SM}$ in (\ref{mupair1}) is plotted. (a) The $\sqrt{s}$ dependence is shown. Blue curves a, c and green curve e are for $\theta_H = 0.0917$, whereas red curves b, d are for $\theta_H = 0.0737$. Curves a and b are with $P_{\rm eff} =0$. Curves c and d are with $P_{\rm eff} =0.877$. Curve e is with $P_{\rm eff} =- 0.877$. (b) The polarization $P_{\rm eff}$ dependence is shown. Solid (dashed) lines are for $\sqrt{s} = 250\,$GeV (500$\,$GeV). Blue lines are for $\theta_H = 0.0917$, whereas red lines are for $\theta_H = 0.0737$. The grey band indicates statistical uncertainty at $\sqrt{s} = 250\,$GeV with 250$\,$fb$^{-1}$ data set. } \label{fig:mupair} \end{figure} \subsection{Forward-backward asymmetry} Not only in the total cross sections but also in the differential cross sections for $e^+ e^- \rightarrow \mu^+ \mu^- $ significant deviation from the SM can be seen.\cite{Richard2018, Suehara2018} Even with unpolarized beams the differential cross section $d\sigma/d\cos\theta$ becomes 8\% (4\%) smaller than in the SM in the forward direction for $\theta_H = 0.0917$ ($0.0737$). The forward-backward asymmetry $A_{\rm FB}$ characterizes this behavior. In fig.~\ref{fig:AFB}(a) the $\sqrt{s}$-dependence of $A_{\rm FB}$ for $e^+ e^- \rightarrow \mu^+ \mu^- $ is shown. As $\sqrt{s}$ increases the deviation from the SM becomes evident.
Again the deviation becomes largest around $\sqrt{s} = 1.5 \sim 2\,$TeV with $P_{\rm eff} = 0.877$ for $\theta_H = 0.0917 \sim 0.0737$. The sign of $A_{\rm FB}$ flips around $\sqrt{s} = 1.1 \sim 1.5\,$TeV. Even at $\sqrt{s} = 250\,$GeV, significant deviation from the SM can be seen in the dependence on the polarization ($P_{\rm eff}$) of the electron/positron beam as depicted in fig.~\ref{fig:AFB}(b). With $250\text{ fb}^{-1}$ data the deviation amounts to 6$\sigma$ (4$\sigma$) at $P_{\rm eff} = 0.8$ for $\theta_H = 0.0917 \, ( 0.0737)$, whereas the deviation is within an error at $P_{\rm eff} = - 0.8$. Observing the polarization dependence is a definitive way of investigating the details of the theory. \begin{figure}[thb] \begin{center} \includegraphics[bb=0 0 360 243, width=6.7cm]{AFB-GHU.pdf} \quad \includegraphics[bb=0 0 360 230, width=6.8cm]{AFB-ILC.pdf} \\ (a) \hskip 6.5cm (b) \end{center} \caption{ Forward-backward asymmetry $A_{\rm FB} (\mu^+ \mu^-)$. (a) The $\sqrt{s}$ dependence is shown. Blue curves a, b, c are for $\theta_H = 0.0917$, red curves d, e are for $\theta_H = 0.0737$, and black curves f, g, h are for the SM. Solid curves a, d, f are for unpolarized beams. Dashed curves b, e, g are with $P_{\rm eff} =0.877$. Dotted curves c and h are with $P_{\rm eff} =- 0.877$. (b) $(A_{\rm FB}^{\rm GHU} - A_{\rm FB}^{\rm SM})/A_{\rm FB}^{\rm SM}(\mu^+\mu^-)$ as functions of the effective polarization $P_{\rm eff}$. Solid and dotted lines are for $\sqrt{s} = 250\,$GeV and $500 \,$GeV, respectively. Blue and red lines correspond to $\theta_H = 0.0917$ and $0.0737$, respectively. The gray band indicates the statistical uncertainty at $\sqrt{s}=250\,$GeV with $250\text{ fb}^{-1}$ data. 
} \label{fig:AFB} \end{figure} \subsection{Left-right asymmetry} Systematic errors in the normalization of the cross sections are reduced in the measurement of \begin{align} R_{f, LR} (\overline{P}) =\frac{\sigma( \bar{f}f \, ; \, P_{e^-} = + \overline{P}, P_{e^+}=0 )} {\sigma( \bar{f}f \, ; \, P_{e^-} = - \overline{P}, P_{e^+}=0 )} \label{defRfRL} \end{align} where the electron beams are polarized with $P_{e^-} = + \overline{P}$ and $- \overline{P}$. Only the polarization of the electron beams is flipped in experiments. Let $\sigma_{LR}^f$ ($\sigma_{RL}^f$) denote the $e_L^-e_R^+ (e_R^- e_L^+) \to f\bar{f}$ scattering cross section. Then the left-right asymmetry $A_{LR}^f$ is related to $R_{f, LR} $ by \begin{align} A_{LR}^f &= \frac{\sigma_{LR}^f- \sigma_{RL}^f}{\sigma_{LR}^f + \sigma_{RL}^f} = \frac{1}{\overline{P}} \, \frac{1- R_{f,LR}}{1+ R_{f,LR}} ~. \label{LRasym} \end{align} The predicted $R_{f, LR} (\overline{P})$ is summarized in Table \ref{tbl:LRasym} for $\overline{P} = 0.8$. Even at $\sqrt{s} = 250\,{\rm GeV}$ with $L_{int} = 250\,{\rm fb}^{-1}$ data, namely in the early stage of the ILC experiment, significant deviation from the SM is seen. The difference between $R_{\mu, LR}$ and $R_{b, LR}$ stems from the different behavior of wave functions of $\mu$ and $b$ in the fifth dimension. \begin{table}[htbp] \caption{$R_{f, LR} (\overline{P})$ in the SM, and deviations of $R_{f, LR} (\overline{P})^{\rm GHU} / R_{f, LR} (\overline{P})^{\rm SM}$ from unity are tabulated for $\overline{P} = 0.8$. 
Statistical uncertainties of $R_{f,LR}^{\rm SM}$ are estimated with $L_{int}$ data for both $\sigma( \bar{f}f ; P_{e^-} = + \overline{P})$ and $\sigma( \bar{f}f ; P_{e^-} = - \overline{P})$, namely with $2 L_{int}$ data in all.} \label{tbl:LRasym} \vskip 8pt \centering \renewcommand{\arraystretch}{1.1} \begin{tabular}{|c|c|c|cc|} \hline $f$ & $\sqrt{s}$~~~,~~~ $L_{int}$ & SM &\multicolumn{2}{c|}{ GHU} \\ && $R_{f,LR}^{SM}$ (uncertainty) & $\theta_H=0.0917$ & $\theta_H = 0.0737$ \\ \hline $\mu$ & $250\,{\rm GeV}$, $250\,{\rm fb}^{-1}$ & $0.890$ ($0.3\%$) & $-3.4\%$ & $-2.2\%$ \\ & $500\,{\rm GeV}$, $500\,{\rm fb}^{-1}$ & $0.900$ ($0.4\%$) & $-13.2\%$ & $-8.6\%$ \\ \hline $b$ & $250\,{\rm GeV}$, $250\,{\rm fb}^{-1}$ & $0.349$ ($0.3\%$) & $-3.1\%$ & $-2.1\%$ \\ & $500\,{\rm GeV}$, $500\,{\rm fb}^{-1}$ & $0.340$ ($0.5\%$) & $-12.3\%$ & $-8.3\%$ \\ \hline $t$ & $500\,{\rm GeV}$, $500\,{\rm fb}^{-1}$ & $0.544$ ($0.4\%$) & $-13.0\%$ & $-8.2\%$ \\ \hline \end{tabular} \end{table} \section{Summary} Gauge-Higgs unification predicts large parity violation in the quark-lepton couplings to the $Z'$ bosons ($Z^{(1)}, \gamma^{(1)},Z_R^{(1)}$). Although these $Z'$ bosons are very heavy, with masses of 7 - 8$\,$TeV, they give rise to significant interference effects in $e^+ e^-$ collisions at $\sqrt{s} = 250\,{\rm GeV} \sim 1\,$TeV. We examined the A-model of $SO(5) \times U(1) \times SU(3)$ gauge-Higgs unification, and found that significant deviations can be seen at the 250$\,$GeV ILC with 250$\,{\rm fb}^{-1}$ data. Polarized electron and positron beams are indispensable. The total cross section, differential cross section, forward-backward asymmetry, and left-right asymmetry for $e^+ e^- \rightarrow f \bar f$ processes all show distinct dependence on the energy and polarization. We stress that new particles with masses of 7 - 8$\,$TeV can be explored at the 250$\,$GeV ILC through the interference effect, though not by direct production.
This is possible at $e^+ e^-$ colliders because the number of $e^+ e^- \rightarrow f \bar f$ events is huge. Although the probability of directly producing $Z'$ bosons is suppressed by a factor $(s/m_{Z'}^2)^2$, the interference term is suppressed only by a factor of $s/m_{Z'}^2$. This gives a big advantage over $p p$ colliders such as the LHC. In this talk the predictions of the A-model have been presented. It would be interesting to see how the predictions change in the B-model. A preliminary study indicates that the pattern of the polarization dependence in the B-model is reversed in comparison with the A-model. The B-model is motivated by the idea of grand unification, which, in my opinion, is an absolute necessity for GHU in its ultimate form. The A-model cannot be implemented in natural grand unification. Satisfactory grand unification in GHU has not been achieved yet.\cite{HosotaniYamatsu2015, Furui2016, HosotaniYamatsu2017}, \cite{Burdman2003}-\cite{MaruYatagai2019} There are many other issues to be solved in GHU. Mixing in the flavor sector, behavior at finite temperature, inflation in cosmology, and baryon number generation are among them. I would like to come back to these issues in due course. \section*{Acknowledgement} This work was supported in part by Japan Society for the Promotion of Science, Grants-in-Aid for Scientific Research, No.\ 15K05052 and No.\ 19K03873. \def\jnl#1#2#3#4{{#1}{\bf #2}, #3 (#4)} \def{\em Prog.\ Theoret.\ Phys.\ }{{\em Prog.\ Theoret.\ Phys.\ }} \def{\em Prog.\ Theoret.\ Exp.\ Phys.\ }{{\em Prog.\ Theoret.\ Exp.\ Phys.\ }} \def{\em Nucl.\ Phys.} B{{\em Nucl.\ Phys.} B} \def{\it Phys.\ Lett.} B{{\it Phys.\ Lett.} B} \def\em Phys.\ Rev.\ Lett. {\em Phys.\ Rev.\ Lett.
} \def{\em Phys.\ Rev.} D{{\em Phys.\ Rev.} D} \def{\em Ann.\ Phys.\ (N.Y.)} {{\em Ann.\ Phys.\ (N.Y.)} } \def{\em Mod.\ Phys.\ Lett.} A{{\em Mod.\ Phys.\ Lett.} A} \def{\em Int.\ J.\ Mod.\ Phys.} A{{\em Int.\ J.\ Mod.\ Phys.} A} \def{\em Int.\ J.\ Mod.\ Phys.} B{{\em Int.\ J.\ Mod.\ Phys.} B} \def{\em Phys.\ Rev.} {{\em Phys.\ Rev.} } \def{\em JHEP} {{\em JHEP} } \def{\em JCAP} {{\em JCAP} } \def{\em J.\ Phys.} A{{\em J.\ Phys.} A} \def{\em J.\ Phys.} G{{\em J.\ Phys.} G} \def{\em ibid.} {{\em ibid.} } \renewenvironment{thebibliography}[1] {\begin{list}{[$\,$\arabic{enumi}$\,$]} {\usecounter{enumi}\setlength{\parsep}{0pt} \setlength{\itemsep}{0pt} \renewcommand{\baselinestretch}{1.2} \settowidth {\labelwidth}{#1 ~ ~}\sloppy}}{\end{list}} \section*{References}
\section{INTRODUCTION} Planetary Nebulae (PNe) are dying low-mass ($M \lesssim 8M_{\odot}$) stars whose ejected outer layers undergo ionization by the intense radiation from their central cores \citep[{e.g.,}][]{iau209}. Their resulting spectra, which are dominated by the bright lines of [\ion{O}{3}] $\lambda\lambda 4959,5007$ and H$\beta$ in the blue, and H$\alpha$ and [\ion{N}{2}] $\lambda\lambda 6548,6584$ in the red, are ideally suited for radial velocity programs. Moreover, because PNe are bright, plentiful, distinctive, and representative of an older stellar population, they are the objects of choice for a host of kinematic problems, ranging from the measurement of dark matter in elliptical galaxies \citep{PNS, deLorenzi} to the study of galaxy interactions \citep{M51, CenA}. PNe have been heavily used to study the kinematics of early-type galaxies \citep[][and references therein] {sw06,PNearly} but similar studies in late-type spirals have been lacking, due mostly to the problems associated with PN identification. In star-forming systems, \ion{H}{2} regions far outnumber bright planetary nebulae, so unless one works in the Local Group where the contaminating objects can be spatially resolved \citep[{e.g.,}][]{M33,M31PNS}, or in the halos of edge-on systems \citep[{e.g.,}][]{p7,p10}, extreme care is needed to discriminate between the two classes of objects. In fact, PN-based kinematic studies have been performed in the disks of only a few late-type systems: the SMC \citep[44 objects;][]{dlfw85}, the LMC \citep[110 objects;][]{mdfw88,vmd92}, M94 \citep[67 objects;][]{M94PNS}, M33 \citep[140 objects;][]{M33}, and M31 \citep[$>$2000 objects;][]{M31PNS}. \footnotetext[4]{The Hobby-Eberly Telescope (HET) is a joint project of the University of Texas at Austin, the Pennsylvania State University, Stanford University, Ludwig-Maximillians-Universit\"at M\"unchen, and Georg-August-Universit\"at G\"ottingen. 
The HET is named in honor of its principal benefactors, William P. Hobby and Robert E. Eberly.} \begin{deluxetable*}{llccccccl} \tabletypesize{\scriptsize} \tablecaption{Target Galaxies\label{tabBasic}} \tablewidth{0pt} \tablehead{ &&&\colhead{$v_{\odot}$\tablenotemark{a}} &\colhead{Distance\tablenotemark{b}} &\colhead{Survey}&&&\\ \colhead{Galaxy} &\colhead{Type} &\colhead{Size\tablenotemark{a}} &\colhead{(km~s$^{-1}$)} &\colhead{(Mpc)} &\colhead{Region} &\colhead{P.A.} &\colhead{$i$} &\colhead{P.A. and $i$ Reference} } \startdata IC~342 &Scd &$21\farcm4$ & 34 &$3.5 \pm 0.3$ &$4\farcm 8$ &$39^\circ$ &25$^\circ$ &\citet{N80} \\ M74 &Sc &$10\farcm5$ &656 &$8.6 \pm 0.3$ &$4\farcm 8$ &$25^\circ$ &6.5$^\circ$ &\citet{KB92} \\ M83 &SBc &$12\farcm9$ &516 &$4.8 \pm 0.1$ &$18\arcmin$ &$46^\circ$ &24$^\circ$ &\citet{L+04} \\ M94 &Sab &$11\farcm2$ &310 &$4.4^{+0.1}_{-0.2}$ &$5\farcm 8$ &$115^\circ$ &35$^\circ$ &\citet{MC96} \\ M101 &Scd &$28\farcm8$ &241 &$7.7 \pm 0.5$ &$8\arcmin$ &$35^\circ$ &17$^\circ$ &\citet{ZEH90} \\ \enddata \tablenotetext{a}{From RC3} \tablenotetext{b}{From Paper~I except for M101, which is from \citet{M101PNe}} \end{deluxetable*} In Paper~I \citep{thesis1}, we presented the results of narrow-band [\ion{O}{3}] and H$\alpha$ surveys for PNe in six nearby, low-inclination galaxies: IC~342, M74 (NGC~628), M83 (NGC~5236), M94 (NGC~4736), NGC~5068, and NGC~6946. Here, we present follow-up PN spectroscopy in the first four of these systems (each with $>$140 PNe), plus M101 (NGC~5457), a galaxy with prior PN identifications from \citet{M101PNe}. In \S2 we describe our observations with the Hydra multi-fiber spectrographs of the WIYN and Blanco telescopes, and detail our supplemental observations with the Medium Resolution Spectrograph (MRS) of the Hobby-Eberly Telescope (HET)\footnotemark[4]. In \S3, we outline the reduction procedures required to extract measurable spectra from these instruments. 
We explore the precision of our radial velocities in \S4 using a series of tests, including a $\chi^2$ analysis of spectra taken with different setups on different nights, and an external comparison with the results of counter-dispersed imaging \citep{M94PNS}. In \S5, we describe our efforts to remove contaminating objects, such as \ion{H}{2} regions and background emission galaxies, from our sample. In \S6, we present our final PN velocities and uncertainties; these measurements serve as the basis of a kinematic analysis of the systems' inner and outer disks \citep[][(Paper~III)]{letter,thesis3}. We also co-add the data to produce a series of ``mean'' spectra for our extragalactic planetaries, and explore the excitation of these spectra as a function of [\ion{O}{3}] $\lambda 5007$ absolute magnitude. Finally, in \S7, we discuss the origins of our PN progenitors and test for population homogeneity using circumstellar extinction. Our conclusions are in \S8. \section{OBSERVATIONS} In Paper~I, we surveyed the PN populations of six large ($r > 7\arcmin$), nearby ($D < 10$~Mpc), low-inclination ($i < 35^\circ$) spiral galaxies with [\ion{O}{3}]~$\lambda$5007\ and H$\alpha$ imaging. From this sample, four PN systems were deemed large enough for kinematic follow-up: those of IC~342, M74, M83, and M94. In addition, we also targeted the PN system of M101, a galaxy which had been previously surveyed by \citet{M101PNe}. A description of these systems is given in Table~\ref{tabBasic}. Our goal was to obtain precise ($\lesssim 15$~km~s$^{-1}$) velocities for as many of our previously identified planetary nebula candidates as possible. To do this, we used the Hydra multi-fiber spectrographs on the WIYN and Blanco telescopes, supplemented with spectra from the Medium Resolution Spectrograph (MRS) of the Hobby-Eberly Telescope (HET). 
For the Hydra runs, our strategy was to maximize the number of PNe targeted, while minimizing our velocity errors, all under the constraint imposed by the fiber positioners ({i.e.,}\ the requirement of keeping a minimum fiber separation of $37\arcsec$ on WIYN and $25\arcsec$ on Blanco). To work around this constraint, which typically limited us to observing $\lesssim 40$~PNe per setup, we performed quick-look data reductions immediately after each observation. By assessing the quality of each spectrum in real time, we rapidly identified those PNe needing additional data and gave them a higher priority in the next night's fiber assignments. In this way, we not only maximized the number of PNe with high-precision velocities, but also controlled the systematic errors associated with observations through different fibers and different Hydra configurations. Our Hydra spectroscopy in the north was performed with the 3.5-m WIYN telescope at Kitt Peak during 6 separate runs between March 2003 and November 2007. The first of these runs targeted the PNe of M101 with $2\arcsec$ red-sensitive fibers and a 600 lines mm$^{-1}$ grating blazed at $10\fdg 1$ in first order, producing spectra between 4500 and 7000 \AA, at a dispersion of 1.4 \AA\ pixel$^{-1}$ with $\sim$4.6~\AA\ (275 km~s$^{-1}$) resolution. Our subsequent observations used the same fiber bundle, but with a new 740 lines~mm$^{-1}$ Volume Phase Holographic (VPH) grating, designed to optimize throughput near 4990~\AA\null. These spectra covered the wavelength range from 4400~\AA\ to 5500 \AA, with higher dispersion (0.5 \AA\ pixel$^{-1}$), improved resolution (1.4 \AA, or 84 km~s$^{-1}$) and, most importantly, greater efficiency. Each Hydra setup was observed for 3 hours, typically using a series of four 45~min exposures. 
For our southern (M83) observations, we used the version of Hydra on the CTIO 4-m Blanco telescope, with an atmospheric dispersion corrector, $2\arcsec$ STU fibers, and a 632~lines~mm$^{-1}$ grating blazed at $10\fdg 8$ in first order. This instrument yielded spectra with a resolution of 3.3~\AA\ (198~km~s$^{-1}$) and a dispersion of 0.59~\AA~pixel$^{-1}$ over the wavelength range between 4500 and 6900~\AA\null. Again each Hydra setup consisted of a series of 45~min exposures totalling 3 hours. However, because of the instrument's larger number of fibers (138 versus 86) and smaller minimum fiber separation ($25\arcsec$ versus $37\arcsec$), and because our photometric survey of M83 encompassed a much wider field-of-view than those for our other galaxies, each setup was able to target $\sim$70~PNe at once, rather than just $\sim$40. Finally, to supplement our Hydra observations, we targeted some of the M101 PNe with the Medium Resolution Spectrograph of the queue-scheduled Hobby-Eberly Telescope. This dual-beam, bench-mounted instrument has a 79~lines~mm$^{-1}$ echelle grating, a 220~lines~mm$^{-1}$ cross-disperser, and a single $2\arcsec$ red-sensitive fiber which delivers data from 4400 to 6200~\AA\ in the blue, and 6300 to 10,000~\AA\ in the red. Although this instrument could only target objects within $50\arcsec$ of a $V < 17$ offset star, the MRS' high dispersion (0.14~\AA~pixel$^{-1}$ with 1.1~\AA\ resolution) coupled with the HET's large aperture produced precise velocities with relatively short ($\sim$20~min) exposures. A log of all our observations appears in Table~\ref{tabObs}.
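The velocity resolutions quoted above follow directly from the spectral resolutions via $\Delta v = c\,\Delta\lambda/\lambda$, evaluated at the [\ion{O}{3}] $\lambda 5007$ line that carries most of the velocity information. A quick numerical check:

```python
C_KM_S = 2.9979e5       # speed of light (km/s)
LAMBDA_OIII = 5007.0    # [O III] rest wavelength (Angstroms)

def dv(dlambda, lam=LAMBDA_OIII):
    """Velocity resolution (km/s) for a spectral resolution dlambda (Angstroms)."""
    return C_KM_S * dlambda / lam

# WIYN 600-line grating: 4.6 A -> ~275 km/s
# WIYN VPH grating:      1.4 A -> ~84 km/s
# Blanco 632-line:       3.3 A -> ~198 km/s
for res in (4.6, 1.4, 3.3):
    print(f"{res:.1f} A -> {dv(res):.0f} km/s")
```

The computed values of $\sim$275, 84, and 198~km~s$^{-1}$ match the figures quoted in the text for the three Hydra configurations.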
\begin{deluxetable*}{lllccccl} \tabletypesize{\scriptsize} \tablecaption{Observing Log\label{tabObs}} \tablewidth{0pt} \tablehead{ &\colhead{Observing} &Telescope &\colhead{Number} & \colhead{PNe} &\colhead{Sky Fibers} &\colhead{Exposure} & \colhead{Sky} \\ \colhead{Galaxy} &\colhead{Dates} &\& Grating &\colhead{of Setups} &\colhead{per Setup} &\colhead{per Setup} &\colhead{Time (min)} &\colhead{Conditions} } \startdata IC 342 &2006 Nov 19-22 &WIYN/VPH &5 &31-35 &6-7 &$4 \times 45$ &phot \\ IC 342 &2007 Mar 13-18 &WIYN/VPH &4 &31-34 &6-7 &$4 \times 45$ &phot-spec\\ IC 342 &2007 Nov 10-12 &WIYN/VPH &2 &35-36 &10 &$4 \times 45$ &cloudy \\ \\ M74 &2006 Oct 13 &WIYN/VPH &1 &31 &5 &$4 \times 45$ &cloudy \\ M74 &2006 Nov 19-22 &WIYN/VPH &7 &26-32 &5-9 &$4 \times 45$ &phot \\ M74 &2007 Nov 10-12&WIYN/VPH &6 &29-32 &9-11 &$4 \times 45$ &cloudy\\ \\ M83 &2005 May 30-Jun 2 &Blanco/632@10.8 &8 &66-73 &10-50 &$4 \times 45$ &phot\\ \\ M94 &2006 Mar 2-5 &WIYN/VPH &5 &25-28 &5-6 &$4 \times 45$ &phot-spec\\ M94 &2007 Mar 13-18 &WIYN/VPH &6 &27-29 &5-7 &$4 \times 45$ &phot-spec\\ \\ M101 &2003 Mar 24-25 &WIYN/600@10.1 &4 &31-35 &12-23 &$6 \times 30$ &phot-spec\\ M101 &2005 Mar - 2006 Mar &HET/MRS &25 &1 &0 &15 - 35 &phot-spec\\ M101 &2006 Mar 2-5 &WIYN/VPH &2 &34 &4-5 &$4 \times 45$ &phot-spec\\ \enddata \end{deluxetable*} \section{SPECTRAL REDUCTION} \footnotetext[5]{IRAF is distributed by NOAO, which is operated by AURA, Inc., under cooperative agreement with the NSF.} \subsection{Hydra spectra} Our data were reduced using the routines of IRAF\footnotemark[5] \citep{v98}. To reduce the Hydra data, we began with the tasks within the {\tt ccdred} package: the data were trimmed and bias-subtracted via {\tt ccdproc}, the dome flats (typically three per setup) were combined using {\tt flatcombine}, and the comparison arcs (CuAr at WIYN and Penray HeNeArXe at Blanco) which bracketed the target exposures were combined via {\tt imcombine}.
Next, {\tt dohydra} within the {\tt hydra} package was used to reduce the spectra, with the averaged dome flats serving to define the extraction apertures, and the averaged comparison arcs providing the wavelength calibration to a precision better than 0.03~\AA\ for WIYN/600@10.1, 0.02~\AA\ for Blanco, and 0.013~\AA\ for WIYN+VPH\null. Finally, the individual spectra were re-sampled onto a log wavelength scale to facilitate the co-addition of data taken at different times of the year. We note that the wavelength calibration of the CTIO data required some extra attention. The Penray lamp's emission lines in the blue are $\sim$3 orders of magnitude weaker than its lines in the red. Since very short and very long comparison arcs were only taken on the first night of the run, we used a spliced version of the arcs ({i.e.,}\ with a long exposure in the blue and a short exposure in the red) as the master comparison for all four nights' data. To check for possible setup-to-setup variations, the inferred wavelengths of five strong, well-defined emission lines (\ion{Xe}{1} $\lambda 4671$, \ion{He}{1} $\lambda 5016$, \ion{Ne}{1} $\lambda 5401$, \ion{Ne}{1} $\lambda 6533$, and \ion{Ne}{1} $\lambda 6599$) on each individual exposure were compared to their wavelengths on the master arc. For the first 6 setups of the run, this test showed no significant variations other than small zero-point shifts. However, for our last night's observations, the wavelengths of \ion{Ne}{1} $\lambda 6533$ and \ion{Ne}{1} $\lambda 6599$ were offset $\sim$10~km~s$^{-1}$\ to the blue with respect to the other lines. To correct for this shift, the H$\alpha$ velocities measured from the last two setups were incremented by this small amount. After extracting each spectrum, the PNe were sky-subtracted using data acquired through several blank-field fibers.
For the WIYN+VPH spectra, this step was straightforward: since no bright sky lines fell within the wavelength range of the instrument, we simply used {\tt scombine} to combine the extracted spectra from the multiple exposures and {\tt skysub} for the subtraction. For the WIYN/600@10.1 and Blanco spectra, which had wider wavelength coverage, we used {\tt scombine} as before, then used {\tt skytweak} to align the spectra before subtracting. We note that our observations were all taken during dark time and that no bright sky lines exist near any of the emission features of interest. Consequently, the details of this step do not affect our final results. Finally, each PN spectrum was shifted into the barycentric rest frame using the IRAF task {\tt dopcor}. These velocity corrections were especially important for IC~342, a $\beta = 46^\circ$ object whose data were collected at different times of the year, but all our spectra were shifted, even if the correction was less than 1 km~s$^{-1}$. Once in the barycentric frame, the data from the multiple setups were co-added to create a final summed spectrum for each PN. \subsection{MRS spectra} Our echelle data from the Hobby-Eberly Telescope's Medium Resolution Spectrograph were reduced using an automated pipeline designed by K.A.H. to take advantage of the instrument's long-term stability. First, the data from the blue and red sides of the spectrograph were trimmed and bias-subtracted with {\tt ccdproc}. Next, as with Hydra, the spectra were extracted and flatfielded using the night's dome flats, and wavelength calibrated using ThAr comparison arcs. We note that the latter step was usually performed via the echelle task {\tt ecreidentify} and an initial wavelength solution stored in the pipeline's database. (The existence of this database also allowed us to test for time-dependent systematic errors in the wavelength calibration.)
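The size of the barycentric corrections applied above is dominated by the Earth's orbital motion, whose line-of-sight projection for a target at ecliptic latitude $\beta$ varies over the year by up to $\pm v_{\rm orb}\cos\beta$. A rough sketch (the $\sim$29.8~km~s$^{-1}$ orbital speed is a standard value; this estimate neglects the much smaller Earth-rotation and Earth-Moon terms):

```python
import math

V_ORB = 29.8   # mean orbital speed of the Earth (km/s)

def max_barycentric_shift(beta_deg):
    """Maximum annual line-of-sight velocity shift (km/s) for a target
    at ecliptic latitude beta_deg, from the Earth's orbital motion alone."""
    return V_ORB * math.cos(math.radians(beta_deg))

# IC 342 lies at ecliptic latitude beta ~ 46 deg, so spectra taken at
# different times of the year can differ by tens of km/s before correction
print(f"IC 342: +/- {max_barycentric_shift(46):.1f} km/s")
```

This $\pm$20~km~s$^{-1}$ annual swing is far larger than the targeted $\lesssim 15$~km~s$^{-1}$ velocity precision, which is why the corrections matter most for multi-season data sets like that of IC~342.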
Finally, the spectra were re-sampled onto a log-wavelength scale, shifted into the barycentric frame, and almost always co-added with spectra taken with WIYN+Hydra. No sky subtraction was performed on these single-fiber observations. Again, since the data were taken during dark time and our targeted spectral features are far from any sky line, this omission in no way changed our results. \section{MEASURING VELOCITIES AND UNCERTAINTIES} Figure~\ref{spectra} gives sample spectra from each of our instrument configurations and illustrates the varying quality of our data. Although a number of lines are present, the brightest feature, by far, is always the 5007~\AA\ emission from doubly-ionized oxygen. We therefore determined our PN velocities (and velocity uncertainties) solely from this line, via the line-fitting routines of {\tt emsao} within IRAF's {\tt rvsao} package \citep{rvsao}. Weaker lines, such as [\ion{O}{3}] $\lambda 4959$, H$\alpha$, and H$\beta$, were also measured, but since the precision of our line centroiding scaled almost linearly with counts, these additional features did not significantly improve the accuracy of our measurements. They were, however, useful for exploring the systematics of the PN population (see \S7). \begin{figure} \epsscale{1.1} \plotone{f1.eps} \caption{Sample spectra from each of our four instrumental configurations in the wavelength range between 4850 and 5050 \AA. Included are three spectra from the Blanco telescope, illustrating the relationship between data quality and velocity uncertainty ($\Delta v$). Note that virtually all velocity information is contained in the [\ion{O}{3}]~$\lambda$5007 line; the flux in [\ion{O}{3}] $\lambda 4959$ is $\sim$3~times weaker, and H$\beta$ is barely visible. \label{spectra} } \end{figure} In order to use PNe as kinematic probes in Paper~III, it is imperative that the uncertainty associated with each velocity measurement be known. 
To investigate this number, we began with a straightforward internal consistency check. Since most of our spectra were acquired using a sequence of four 45-minute exposures, we simply divided the datasets in two and compared the velocities found from the co-addition of the first two exposures with those derived from the co-addition of the last two frames. Obviously, since each spectral subset contained only half the signal of the whole, only the brightest PNe could be analyzed in this way. Consequently, we restricted this analysis to the PNe of M83 and M94. Moreover, since the combination of two exposures was not always sufficient for eliminating cosmic ray hits, a number of objects were found to have wildly discrepant results. Nevertheless, this comparison demonstrated that any systematic drift over the 3~hr period of observation was minimal. An analysis of the 119 PN observations in M94 showed just random scatter, and in 108 of the cases, the two measurements were within the $1\,\sigma$ internal uncertainty. Similarly, our comparison of 314~PN velocity pairs in M83 yielded only 63 objects whose paired velocities differed by more than $1\,\sigma$, and again, found no evidence for any time-dependent velocity shift. Not only does this confirm the validity of our 3~hr co-additions, but it also suggests that the internal uncertainties returned by {\tt emsao} are reasonable. We next tested our data for systematic errors associated with the independent fiber setups. To do this, we took advantage of the fact that many of our PNe were targeted multiple times using different fibers and different wavelength calibrations. We began by intercomparing all our PN spectra, and identifying those objects whose [\ion{O}{3}] line flux differed drastically from one setup to the next. In these cases, where co-addition would have only degraded the signal, the lower signal-to-noise observation was dropped from the analysis. 
We then combined the data to create a single, summed spectrum for each planetary, and with the aid of {\tt emsao}, we measured each object's velocity and velocity uncertainty. Next, we compared these summed velocities to the velocities found with the spectra of the individual fiber-setups. By combining the results from all the planetaries observed using multiple setups, we were able to infer the mean velocity offset of each fiber configuration. The results from this analysis are given in Table~\ref{tabBetSetups}. As can be seen, there is scant evidence of any systematic velocity shift associated with the individual setups. Of the 51 configurations used in this program (where we group all the M101 MRS spectra as being in the galaxy's Setup~7), over half have offsets within one standard deviation of the mean, and $\sim$80\% have offsets that agree to within $2 \,\sigma$. This means that, at most, the systematic error associated with each individual setup is $\sim$1.6~km~s$^{-1}$. (Such an error would increase the number of $2 \, \sigma$ agreements to 96\%, the number appropriate for a Gaussian distribution.) In addition, in only four cases is the amplitude of the systematic shift observed to be greater than 5~km~s$^{-1}$: Setup~9 in IC~342 ($+9.2 \pm 4.3$~km~s$^{-1}$), Setup~2 in M83 ($+5.3 \pm 2.0$~km~s$^{-1}$), Setup~1 in M101 ($+6.1 \pm 2.4$~km~s$^{-1}$), and Setup~5 in M101 ($+8.2 \pm 4.0$~km~s$^{-1}$). In none of these cases does the offset even approach $3 \, \sigma$; this consistency again confirms that any systematic shift between individual fiber setups is minimal. An illustration of our measurement stability is shown in Figure~\ref{mult_obs}, where the velocities and velocity uncertainties of several of the most observed PNe are plotted. Since the systematic errors in our velocity measurements are minimal, we can use our data to test whether the internal errors computed by {\tt emsao} represent the true uncertainties of our measurements. 
To do this, we performed a pairwise comparison of all the PNe observed with multiple setups, using the $\chi^2$ statistic \begin{equation} \chi^2 = \sum_{i\neq j} {\left(v_i - v_j\right)^2 \over \sigma_i^2 + \sigma_j^2}\label{chi2}. \end{equation} In the equation, $v_i$ and $v_j$ represent the independent velocity measurements, $\sigma_i$ and $\sigma_j$ are their internal uncertainties ({i.e.,}\ 0.85 times the half-width half-max errors reported by {\tt emsao}), and the sum is taken over all observations performed with a similar spectrograph+grating configuration. \begin{deluxetable*}{crrlcrrlcrrlcrrlcrrl} \tabletypesize{\scriptsize} \tablecaption{Mean Velocity Offsets Between Setups\label{tabBetSetups}} \tablewidth{0pt} \tablehead{&\multicolumn{3}{c}{IC 342} &&\multicolumn{3}{c}{M74} &&\multicolumn{3}{c}{M83} &&\multicolumn{3}{c}{M94} &&\multicolumn{3}{c}{M101}\\ Setup &N &$\langle v \rangle$ &$\sigma_{\langle v \rangle}$ &&N &$\langle v \rangle$ &$\sigma_{\langle v \rangle}$ &&N &$\langle v \rangle$ &$\sigma_{\langle v \rangle}$ &&N &$\langle v \rangle$ &$\sigma_{\langle v \rangle}$ &&N &$\langle v \rangle$ &$\sigma_{\langle v \rangle}$ } \startdata 1 & 16 & 1.4 & 2.7 && 17 & $-1.0$ & 1.5 && 65 & 3.0 & 0.8 && 14 & $-2.4$ & 1.5 && 20 & 6.1 & 2.4 \\ 2 & 20 & $-1.0$ & 2.8 && 18 & 4.2 & 1.4 && 44 & 5.3 & 2.0 && 14 & 3.3 & 2.3 && 21 & 1.0 & 2.5 \\ 3 & 13 & 1.2 & 3.8 && 11 & 0.8 & 2.4 && 51 & 1.1 & 1.2 && 14 & 2.7 & 1.6 && 22 & $-4.6$ & 4.1 \\ 4 & 16 & $-4.3$ & 2.8 && 15 & $-2.6$ & 2.6 && 56 & $-4.1$ & 1.4 && 21 & 0.4 & 1.1 && 24 & 2.1 & 2.4 \\ 5 & 20 & 1.1 & 0.9 && 19 & $-3.3$ & 3.1 && 57 & 0.7 & 1.2 && 19 & 0.6 & 0.9 && 15 & 8.2 & 4.0 \\ 6 & 14 & 0.7 & 1.6 && 14 & $-2.4$ & 1.5 && 66 & $-3.5$ & 0.8 && 14 & $-1.4$ & 4.0 && 22 & 2.2 & 3.2 \\ 7 & 9 & 0.1 & 3.7 && 9 & 3.9 & 4.0 && 50 & $-0.4$ & 0.8 && 13 & $-4.0$ & 1.8 && 18 & 3.4 & 1.8 \\ 8 & 8 & $-4.0$ & 3.7 && 18 & $-0.6$ & 1.2 && 52 & 1.7 & 0.6 && 17 & 0.2 & 0.8 && \dots & \dots & \dots \\ 9 & 10 & 9.2 & 4.3 && 8 & 
$-1.4$ & 2.9 && \dots & \dots & \dots && 15 & 4.7 & 2.0 && \dots & \dots & \dots \\ 10 & 12 & 2.7 & 2.5 && \dots & \dots & \dots && 15 & 0.1 & 0.7 && \dots & \dots & \dots \\ 11 & 14 & 1.9 & 3.4 && \dots & \dots & \dots && 18 & $-0.9$ & 1.1 && \dots & \dots & \dots \\ 12 & \dots & \dots & \dots && 15 & 0.3 & 2.6 && \dots & \dots & \dots && \dots & \dots & \dots && \dots & \dots & \dots \\ 13 & \dots & \dots & \dots && 6 & $-0.8$ & 4.5 && \dots & \dots & \dots && \dots & \dots & \dots && \dots & \dots & \dots \\ 14 & \dots & \dots & \dots && 8 & 0.7 & 2.6 && \dots & \dots & \dots && \dots & \dots & \dots && \dots & \dots & \dots \\ \enddata \end{deluxetable*} \begin{figure*} \epsscale{1.1} \plotone{f2.eps} \caption{Velocities of objects observed multiple times as a function of setup number. The dashed line indicates the final velocity derived from the summation of all exposures; the dotted lines on either side of this mean indicate the $1 \, \sigma$ uncertainty of the final value. Note the generally good agreement: measurements that are discrepant usually have large uncertainties. M94's setup 6 was affected by clouds, which explains its significantly larger error bars. \label{mult_obs} } \end{figure*} This statistic confirms that the errors produced by {\tt emsao} do indeed represent our true measurement scatter. For the 316@63.4 data in M83, our pairwise comparison of 482 independent PN measurements yields a reduced $\chi^2$ value of 0.96 for 444 degrees of freedom. This is well within the 90\% confidence range of $0.89 < \chi^2 < 1.11$. Similarly, our dataset of 670 WIYN+VPH velocities for the PNe of IC~342, M74, and M94 produces $\chi^2 = 0.89$ for 503 degrees of freedom. This low value, which is just outside the 90\% confidence interval $0.90 < \chi^2 < 1.11$, suggests that, if anything, the uncertainties given by {\tt emsao} are overestimates. 
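To make the pairwise test concrete, the following sketch applies the $\chi^2$ statistic of equation~(\ref{chi2}) to synthetic repeat measurements (the sample size, velocities, and uncertainties here are invented for illustration; each pair is counted once rather than twice):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic repeat measurements: 200 PNe, each observed in 3 setups.
# Every observation carries its own internal (reported) uncertainty.
n_pne, n_obs = 200, 3
v_true = rng.uniform(200.0, 800.0, n_pne)              # km/s
sigma = rng.uniform(3.0, 15.0, (n_pne, n_obs))         # km/s
v_obs = v_true[:, None] + rng.normal(0.0, sigma)

# The pairwise chi^2 statistic described above:
#   chi^2 = sum over pairs of (v_i - v_j)^2 / (sigma_i^2 + sigma_j^2)
chi2 = 0.0
n_pairs = 0
for v, s in zip(v_obs, sigma):
    for i in range(n_obs):
        for j in range(i + 1, n_obs):
            chi2 += (v[i] - v[j]) ** 2 / (s[i] ** 2 + s[j] ** 2)
            n_pairs += 1

reduced_chi2 = chi2 / n_pairs
print(f"reduced chi^2 = {reduced_chi2:.2f} over {n_pairs} pairs")
```

When the reported uncertainties match the true scatter, the reduced $\chi^2$ is near unity; inflated errors push it below one, while underestimated errors push it above one.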
Only in M101, where we combined Hydra and MRS data, did {\tt emsao} underestimate the velocity errors, giving a reduced $\chi^2 = 1.44$. For these data, we must increase the quoted errors by $\sim$15\% to make the internal and external errors consistent. \subsection{An External Test in M94} We can perform one more test of our spectroscopy by taking advantage of existing PN data in M94. A decade ago, \citet{M94PNS} measured 67 emission-line velocities using a dual-beam, ``counter-dispersed imaging'' spectrograph, designed to create H$\alpha$ images with one arm, and obtain radial velocities via [\ion{O}{3}]~$\lambda$5007\ slitless spectroscopy with the other. These authors surveyed two fields in the galaxy, one along the major axis, and one on the minor axis, and measured 53 and 14 PN candidates, respectively. Their quoted error on these velocities is $\sim$10~km~s$^{-1}$. In Paper~I, we compared the photometric properties of our PNe to those derived by \citet{M94PNS} and found generally good agreement, with 44 objects common to both datasets. We now have spectra for 37 of these planetaries: 31 along the major-axis, and 6 from the minor-axis field. In both cases, the velocity difference between the two samples is close to that expected: for the major axis sample, $\sigma = 10.2$~km~s$^{-1}$, while the six objects on the minor axis have $\sigma = 14.6$~km~s$^{-1}$. This again implies that our velocity errors are less than our targeted goal of 15~km~s$^{-1}$. There is a zero point shift between the two fields, with our velocities being systematically lower by $45.9 \pm 1.8$~km~s$^{-1}$\ along the major axis, and higher by $45.3 \pm 5.9$~km~s$^{-1}$\ near the minor axis. However, this offset is likely due to a zero-point drift in the \citet{M94PNS} observations, since, as the authors point out, their prototype instrument had flexure problems at the telescope. 
This introduced a significant error into their absolute velocity scale, although it did not affect the measurement of relative motions. Thus, the data provide an additional, independent confirmation of our velocities and velocity uncertainties. As mentioned above, such knowledge is critical for the kinematic study of Paper~III. \section{IDENTIFYING CONTAMINANTS} \subsection{\ion{H}{2} Regions} \citet{p12} have shown that the ratio of [\ion{O}{3}]~$\lambda$5007\ to H$\alpha$ is an excellent tool for discriminating PNe from \ion{H}{2} regions. When the ratio $R = I(\lambda 5007)_0/I({\rm H}\alpha+$[\ion{N}{2}])$_0$ is plotted against absolute [\ion{O}{3}] magnitude (where the apparent [\ion{O}{3}] magnitude is given by $m_{5007} = -2.5 \log F_{5007} - 13.74)$, true PNe occupy a wedge, which is empirically fit by \begin{equation} 4 > \log R > -0.37 \, M_{5007} - 1.16.\label{eqsquiggle1} \end{equation} Practically speaking, this means that PNe in the top $\sim$1.5~mag of the [\ion{O}{3}] planetary nebula luminosity function always have [\ion{O}{3}] $\lambda 5007$ brighter than H$\alpha$. This contrasts with the line ratios of the vast majority of \ion{H}{2} regions, which have H$\alpha$ as their dominant emission feature \citep[{e.g.,}][]{shaver, kniazev, pena}. Though the photometric survey of Paper~I has already eliminated most compact \ion{H}{2} regions from our list of PN candidates, it is possible that a few such objects slipped through due to uncertain photometry. More importantly, errors in the Hydra positioner can cause fibers to miss their intended PNe and instead fall on nearby star-forming regions or supernova remnants. Finally, because all five of our galaxies have a high star-formation rate, diffuse line emission from interstellar material is ubiquitous and often comparable in brightness to the lines of the target PNe. 
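The photometric discriminant of equation~(\ref{eqsquiggle1}) amounts to a simple wedge test in the $\log R$--$M_{5007}$ plane, sketched below (the example magnitudes and ratios are invented for illustration):

```python
import math

def m5007(flux):
    """Apparent [O III] 5007 magnitude from the line flux (erg/s/cm^2),
    using m_5007 = -2.5 log F_5007 - 13.74."""
    return -2.5 * math.log10(flux) - 13.74

def is_pn_candidate(M5007, R):
    """Test whether an object lies in the empirical PN wedge
        4 > log R > -0.37 * M_5007 - 1.16,
    where R = I(5007)_0 / I(Halpha + [N II])_0 and M5007 is the
    absolute [O III] magnitude."""
    logR = math.log10(R)
    return 4.0 > logR > -0.37 * M5007 - 1.16

# An object near the bright-end cutoff must be strongly [O III]-dominated:
print(is_pn_candidate(-4.5, 4.0))   # True
# At the same magnitude, a weaker [O III]/Halpha ratio fails the cut,
# as expected for an H II region:
print(is_pn_candidate(-4.5, 1.5))   # False
```

In practice this means that toward the luminosity-function cutoff the required $\lambda 5007$-to-H$\alpha$ ratio grows steeply, which is why bright H$\alpha$-dominated objects are easily rejected.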
Thus, each spectrum must be examined to make sure that the observed line ratios are consistent with those expected from an [\ion{O}{3}]-bright planetary nebula. To derive these ratios, we needed to obtain an approximate flux calibration for our fiber spectra. Specifically, we needed to estimate the spectral efficiency around H$\alpha$ relative to that at 5007~\AA\null. This was done in a number of ways. For the M83 data, which extends from 4500~\AA\ to 7000~\AA, the process was straightforward: we used the [\ion{O}{3}] and H$\alpha$+[\ion{N}{2}] photometry of Paper~I to derive the expected response between the red and the blue ($F_{\rm phot}$) in the case of uniform efficiency. We then examined plots of the observed spectroscopic flux ratio, $F_{\rm spec}$, and the photometric to spectroscopic ratio, $F_{\rm phot}/F_{\rm spec}$, as a function of $F_{\rm phot}$. The former plot revealed a clear linear trend plus some outliers; the latter showed that for most objects, $F_{\rm phot}/F_{\rm spec}\sim 2$. This factor was then applied globally to the spectra. The outlying objects whose uncorrected spectroscopic [\ion{O}{3}] to H$\alpha$ ratio was more than a factor of three lower than their photometric value were flagged as possible contaminants. A similar criterion was used for those M101 PNe measured with the 316@63.4 instrument configuration, except in this case, no quantitative H$\alpha$ photometry was available. We therefore had to estimate the efficiency of the instrument from archival data. Five months prior to our observations, 140 of M33's PNe were observed with WIYN using the same grating and instrument configuration as for M101 \citep{M33}. Like the M83 planetaries, these PNe also have photometric measurements at H$\alpha$ and [\ion{O}{3}]~$\lambda$5007. Consequently, by comparing the photometric and spectroscopic line ratios of PNe in M33, we were able to estimate the response ratio needed to test for contamination in the M101 dataset. 
Obviously, this procedure was not as robust as that for M83: not only did it rely on observations taken during a different observing run, but, unlike the M83 data, the M101 (and M33) PN observations were performed without an atmospheric dispersion corrector. Nevertheless, since we were using the data only to exclude the most obvious of interlopers, our conservative rejection criterion should still be valid. For the remaining WIYN observations, our spectra did not extend far enough into the red to record H$\alpha$. For these objects, we used H$\beta$ as a surrogate. First, we analyzed Hydra observations of a standard star to estimate the throughput of [\ion{O}{3}]~$\lambda$5007\ relative to nearby H$\beta$. As expected, the data indicated that there was only a slight ($\sim$8\%) decrease in efficiency between these two wavelengths. This value (which was close to the $\sim$5\% drop expected from the grating's advertised blaze function) was then used to derive the true [\ion{O}{3}]-H$\beta$ ratio of each object. We then scaled our H$\beta$ values to H$\alpha$, using an estimate of the foreground Galactic extinction \citep{sfd98}, a \citet{ccm89} reddening law with $R_V = 3.1$, and an expected Case~B H$\alpha$ to H$\beta$ ratio of 2.86 \citep{brocklehurst}. Again, objects with $I(\lambda 5007)/I({\rm H}\alpha$) inconsistent with equation~(\ref{eqsquiggle1}) were tagged as possible contaminants. We note that this last analysis has two limitations. The first is that it does not account for the contribution of [\ion{N}{2}] $\lambda\lambda 6548,6584$ to the photometrically defined ratio of equation~(\ref{eqsquiggle1}). For most PNe, this is not a serious omission: an examination of the \citet{M33} sample of M33 PNe shows that [\ion{N}{2}] can safely be neglected in $\sim$85\% of objects. 
Moreover, in IC~342, M74, and M94, the stronger [\ion{N}{2}] emission line at $\lambda 6584$ was redshifted onto the tail of our H$\alpha$ interference filter, and was thus suppressed by $\sim$55\%, $\sim$88\% and $\sim$79\%, respectively. Consequently, even when [\ion{N}{2}] was strong, it was not contributing much flux to our H$\alpha$ photometry. A more important problem is that by scaling the H$\beta$ flux by 2.86, we are neglecting the contributions of internal galactic and circumstellar extinction, which can greatly increase this value. We will consider this effect in \S 7; for now, we will accept the fact that our extrapolation may have a systematic error which would cause us to overestimate a PN's [\ion{O}{3}]/H$\alpha$ ratio. \begin{figure}[t] \epsscale{1.2} \plotone{f3.eps} \caption{Two examples of a PN spectrum corrected for contamination from a low-excitation object. In both cases, the [\ion{O}{3}] lines are initially double-peaked; after the velocity of H$\beta$ is used to subtract the field emission, the line profiles become consistent with that of the instrument's point spread function. \label{befaft} } \end{figure} After examining the PN spectra, we found that $\sim$15\% of our targets had line ratios which suggested contamination by a low-excitation object. In over half of these cases, the velocity derived from the Balmer lines was indistinguishable from that found from [\ion{O}{3}]~$\lambda$5007. However, in 42 out of the 97 objects, the lines were kinematically different, suggesting flux from two different sources. In these spectra, [\ion{O}{3}]~$\lambda$5007\ was either double-peaked, or had a profile significantly broader than the spectral point-spread-function. When this occurred, we attempted to subtract off the low-excitation component using the velocity of H$\beta$ (or H$\alpha$) as a guide. 
In 25 cases, we were successful in isolating the PN's emission from that of the contaminating source, and could return the object to the kinematic sample. (See Figure~\ref{befaft} for two examples of these subtractions.) In the other 17 cases, (which involved the lower resolution M83 and M101 data), we noted the blend, but could not deconvolve the velocities. Finally, after identifying the possible contaminants, we revisited their positions on the [\ion{O}{3}]~$\lambda$5007\ and H$\alpha$ images of Paper~I\null. For most of the objects, the targeted PN did have a bright, low-excitation source nearby; in these cases, we simply excluded the source from our kinematic sample. In a few cases where the PN was clearly isolated, we re-examined the line fluxes and line profiles to determine the source of the discrepancy. If we concluded that the recorded spectrum could, indeed, have come from the planetary, we re-classified the object as a PN, and included it in our analysis. \begin{figure}[b] \epsscale{1.2} \plotone{f4.eps} \caption{Spectrum of a $z = 3.12$ Ly$\alpha$ emitting galaxy in the field of M83, compared to that of a normal planetary of similar brightness. Note the absence of [\ion{O}{3}] $\lambda 4959$ and the broad, asymmetric line profile. No other lines are present in the spectrum. The object is among the brightest Ly$\alpha$ galaxies yet discovered. \label{Highz} } \end{figure} \subsection{High Redshift Galaxies} A second possible source of contamination is high redshift galaxies. At $z \sim 3.12$, starbursting galaxies have their Ly$\alpha$ emission redshifted into the bandpass of our [\ion{O}{3}] filter, and because their observer-frame equivalent widths can be exceedingly large, these objects can easily be confused with planetaries \citep{arnaboldi02, ipn3}. 
The limited depth of our survey prevents us from detecting many of these objects: according to \citet{gronwall}, the luminosity function of $z \sim 3.1$ Ly$\alpha$ emitters (LAEs) in the emission line takes the form of a \citet{schechter} function with $m_{5007}^* \sim 26.9$. Since this cutoff is well below the limiting magnitude of our photometric surveys (see Paper~I), we would not expect to find many LAEs in our sample. \begin{figure*} \epsscale{1.1} \plottwo{f5a.eps}{f5b.eps} \caption{Co-added spectra of PNe in M83 and M94, as a function of [\ion{O}{3}] absolute magnitude and [\ion{O}{3}] $\lambda 5007$/H$\alpha$+[\ion{N}{2}] flux ratio. The abscissa plots wavelength; the ordinate shows relative counts, with the red portion of the M83 spectra reduced by a factor of two to correct for the higher system throughput. The spectra with $I(\lambda5007)_0/I($H$\alpha$+[\ion{N}{2}])$_0 > 2$ have been co-added in the upper rows, while those with a flux ratio $< 2$ have been co-added in the lower rows. The spectra are remarkably similar: the only obvious change is the increased importance of [\ion{N}{2}] in lower luminosity objects. \label{CoAdd} } \end{figure*} Still, it is possible for some extremely bright LAEs to masquerade as PNe. If we extrapolate the \citet{schechter} function of \citet{gronwall} to the limiting magnitude of each survey, then we should expect to find $\sim$0.2~LAEs in the field of M101, $\sim$0.5~LAEs near M74, and $\sim$0.8~LAEs in the wide-field Mosaic frame covering M83. (LAEs in the fields of M94 and IC~342 are exceedingly unlikely, $\sim$2$\times 10^{-3}$ and 4$\times 10^{-7}$, respectively). In fact, one LAE was detected in our survey. 
The object is in the field of M83 ($\alpha(2000) = 13^h38^m09.26^s$, $\delta(2000)$ = $-29^{\circ}38\arcmin31.5\arcsec$) and is easily recognizable via its strong, asymmetric line profile (centered at 5006.8~\AA), its non-negligible velocity width ($\sim$400~km~s$^{-1}$\ full-width-half-maximum), and an absence of any accompanying emission at the wavelengths of [\ion{O}{3}] $\lambda 4959$ or H$\beta$. The object cannot be a background source at $z \sim 0.34$, since redshifted H$\beta$ is not present at $\sim$6530~\AA, and the object's photometrically inferred equivalent width ($\sim$600~\AA) is much larger than that of a typical [\ion{O}{2}] galaxy \citep{hogg98}. Its line profile and line width are, however, consistent with those observed for other high-redshift Ly$\alpha$ emitters \citep[{e.g.,}][]{dawson}. Interestingly, the bright apparent magnitude of our Ly$\alpha$ emitter ($m_{5007} \sim 25.15$, or $\log F_{5007} = -15.556$) makes it one of the most luminous $z \sim 3.1$ LAEs ever observed, with a total emission-line luminosity of $\sim$2.3$\times 10^{43}$~ergs~s$^{-1}$. Most similarly bright LAEs harbor an AGN \citep{gronwall, ouchi}, but in this object, C~IV $\lambda 1550$ is not seen, nor is there any evidence for a broad line component to Ly$\alpha$. If the source is indeed powered solely by star-formation, then the Case B-based relations of \citet{kennicutt} and \citet{hu} imply a total star-formation rate of $\gtrsim 20 \, M_{\odot}$~yr$^{-1}$. This value is larger than any of the rates derived by \citet{gronwall} for a sample of LAEs in the Extended Chandra Deep Field South, and is comparable to that associated with the brightest $z \sim 3.1$ LAEs identified by \citet{ouchi}. Figure~\ref{Highz} displays the emission-line profile of this object. 
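The quoted line flux and luminosity of this source follow from the $m_{5007}$ definition above and the luminosity distance at $z = 3.12$. The sketch below reproduces them under an assumed flat $\Lambda$CDM cosmology ($H_0 = 70$~km~s$^{-1}$~Mpc$^{-1}$, $\Omega_m = 0.3$; these parameters are our assumption and are not stated in the text, so the recovered luminosity is only approximate):

```python
import math

# Line flux from the magnitude definition m_5007 = -2.5 log F - 13.74
m5007 = 25.15
logF = -(m5007 + 13.74) / 2.5
F = 10.0 ** logF                       # erg/s/cm^2 (log F ~ -15.556)

# Luminosity distance at z = 3.12 for an assumed flat LCDM cosmology,
# via trapezoidal integration of 1/E(z).
H0, Om, c = 70.0, 0.3, 299792.458      # km/s/Mpc, km/s
z, n = 3.12, 10000
E = lambda zz: math.sqrt(Om * (1.0 + zz) ** 3 + (1.0 - Om))
dz = z / n
integral = sum(0.5 * (1.0 / E(i * dz) + 1.0 / E((i + 1) * dz)) * dz
               for i in range(n))
d_C = (c / H0) * integral              # comoving distance, Mpc
d_L = (1.0 + z) * d_C                  # luminosity distance, Mpc

cm_per_Mpc = 3.0857e24
L = 4.0 * math.pi * (d_L * cm_per_Mpc) ** 2 * F
print(f"log F = {logF:.3f}, d_L = {d_L / 1000:.1f} Gpc, L = {L:.2e} erg/s")
```

With these parameters the sketch returns $L \approx 2.4 \times 10^{43}$~ergs~s$^{-1}$, consistent with the $\sim$2.3$\times 10^{43}$~ergs~s$^{-1}$ quoted above.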
\section{FINAL PN VELOCITIES} In measuring our velocities, we have relied solely on the wavelength of the extremely strong [\ion{O}{3}] $\lambda 5007$ emission line and have ignored the information contained in the weaker lines of [\ion{O}{3}] $\lambda 4959$, H$\alpha$, H$\beta$, and [\ion{N}{2}] $\lambda\lambda 6548,6584$. With our final PN velocities secured, we could check this decision. To do this, we began by selecting all those PNe with non-contaminated, well-measured ($\sigma_v < 15$~km~s$^{-1}$) emission lines. We then shifted these spectra into the rest frame, and co-added the data, to create a single high signal-to-noise template for each telescope+spectrograph configuration. This template was then cross-correlated against the individual PN spectra using the {\tt xcsao} task of {\tt rvsao} to derive an alternate measure of velocity. \begin{deluxetable*}{lcccrccccrcl} \tabletypesize{\scriptsize} \tablecaption{Planetary Nebula Identifications\label{tabPNe}} \tablewidth{0pt} \tablehead{&&&&&&&&&\colhead{$v_{\odot}$} &\colhead{$\sigma_v$} & \\ \colhead{ID} &\colhead{$\alpha(2000)$} &\colhead{$\delta(2000)$} &\colhead{$m_{5007}$} &\colhead{$R$\tablenotemark{a}} &\colhead{$\sigma_{R}$} &\colhead{Type\tablenotemark{b}} &\colhead{N$_{\rm trg}$\tablenotemark{c}} &\colhead{N$_{\rm det}$\tablenotemark{d}} &\colhead{(km~s$^{-1}$)} &\colhead{(km~s$^{-1}$)} &\colhead{Notes} } \startdata IC 342-1 &03:46:54.99 &+68:06:39.1 &25.26 &3.18 &0.76 &Phot &3 &3 &73.7 &4.3 & \\ IC 342-165 &03:47:04.81 &+68:04:31.6 &27.67 &0.68 &0.35 &Phot &3 &0 &\dots &\dots & \\ M74-1 &01:36:39.95 &+15:47:02.5 &25.44 &3.09 &\dots &Spec &4 &4 &657.3 &4.2 & \\ M74-153 &01:36:57.04 &+15:46:48.2 &27.55 &$>0.52$ &\dots &Phot &0 &0 &\dots &\dots & \\ M83-1 &13:37:00.99 &$-$29:57:31.8 &24.24 &2.39 &0.23 &Phot &2 &2 &586.9 &2.5 & \\ M83-241 &13:37:21.58 &$-$29:58:37.3 &27.15 &$>0.60$ &\dots &Phot &0 &0 &\dots &\dots & \\ M94-1 &12:50:52.17 &+41:08:36.2 &23.86 &2.78 &0.16 &Phot &1 &1 &317.4 &2.0 
&\tablenotemark{D} \\ M94-150 &12:50:52.03 &+41:11:22.5 &26.21 &0.99 &0.38 &Phot &3 &3 &377.4 &5.8 &\tablenotemark{D} \\ M101-1 &14:02:40.57 &+54:13:56.3 &24.97 &\dots &\dots &\dots &4 &4 &140.7 &4.2 &\tablenotemark{C} \\ M101-65 &14:03:05.94 &+54:25:37.2 &26.43 &\dots &\dots &\dots &2 &1 &221.9 &12.6 & \\ \enddata \tablecomments{C: Emission line is a blend of multiple components, dominated by a low-excitation contaminant; D: PN first measured by \citet{M94PNS}; H: Spectrum likely that of a nearby \ion{H}{2} region; P: Emission line is a blend of multiple components, dominated by the planetary; S: Emission line is a blend, with the velocity obtained by subtracting off the low excitation component; U: Not part of our analysis, with $\sigma_v > $ 15 km~s$^{-1}$; X: Velocity and uncertainty derived from {\tt xcsao}. Table 4 is published in its entirety in the electronic edition of the {\it Astrophysical Journal}. A portion is shown here for guidance regarding its form and content.} \tablenotetext{a}{$R = I(\lambda 5007)_0/I({\rm H}\alpha+$[\ion{N}{2}])$_0$} \tablenotetext{b}{Type of $R$ value given} \tablenotetext{c}{Number of Hydra+MRS setups in which PN was targeted} \tablenotetext{d}{Number of Hydra+MRS setups in which PN was detected} \end{deluxetable*} \begin{deluxetable*}{lcccccccc} \tabletypesize{\scriptsize} \tablecaption{Number of PNe Targeted\label{tabTargDet}} \tablewidth{0pt} \tablehead{&\colhead{Photometric} &\colhead{Not} &\colhead{Not} &\colhead{Probable} &\multicolumn{3}{c}{Number of Detections} &\colhead{Total} \\ \colhead{Galaxy} &\colhead{Sample} &\colhead{Targeted} &\colhead{Detected} &\colhead{Contaminant} &\colhead{$N=1$} &\colhead{$N=2$} &\colhead{$N>2$} &\colhead{$\sigma_v <$ 15 km~s$^{-1}$} } \startdata IC~342 & 165 & 29 & 30 & 3 & 48 & 32 & 23 & 99 \\ M74 & 153 & 28 & 13 & 7 & 43 & 39 & 23 & 102 \\ M83 & 241 & 25 & 12 & 31 & 36 & 66 & 71 & 162 \\ M94 & 150 & 13 & 7 & 2 & 68 & 32 & 28 & 127 \\ M101 & 65 & 1 & 1 & 12 & 13 & 13 & 25 & 48 \\ 
Total & 774 & 96 & 63 & 55 &208 &182 &170 & 538 \\ \enddata \end{deluxetable*} Figure~\ref{CoAdd} illustrates the validity of this approach. In the figure, we display a sample of co-added spectra using the 316@63.4 observations in M83 and the WIYN+VPH data in M94. To illustrate the consistency of the spectra, both datasets have been corrected for Galactic extinction (using the \citet{ccm89} reddening law) and broken down by absolute [\ion{O}{3}] $\lambda 5007$ magnitude (using 0.5~mag bins and the galactic distances of Paper~I) and excitation (with [\ion{O}{3}]/H$\alpha$ $R = 2$ as the dividing line). As expected, the weaker Balmer and [\ion{N}{2}] lines are no wider than those of [\ion{O}{3}], demonstrating that our {\tt emsao} velocities are reasonably precise. More importantly, the mean PN spectra appear remarkably similar from one magnitude range to the next. In M83, the only observable change involves the strength of [\ion{N}{2}], which increases in importance as one proceeds down the luminosity function. In the higher-resolution M94 data, H$\beta$ appears anomalously strong in the faintest, high-excitation bin, but this may simply be a stochastic result. Overall, the principal bright lines of our observed PNe are remarkably consistent across the entire range of our data, justifying the use of a single template spectrum for our analysis. A comparison of the results reveals that while the {\tt emsao} and {\tt xcsao} velocities always agree, the formal errors produced by the first package are almost always smaller than those quoted by the latter. Rather than improving our velocities, it appears that by including the spectral regions surrounding the weaker lines, we add in more noise than signal. Since we know from our analysis of variance (see \S4) that the uncertainties quoted by {\tt emsao} are accurate, the use of the simpler routine is fully justified. 
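The template method rests on a simple property of log-wavelength sampling: a Doppler shift becomes a uniform translation of the spectrum, so the peak lag of a cross-correlation yields the velocity. The following minimal, self-contained analog illustrates the idea (it is not the {\tt xcsao} algorithm itself; the line center, width, noise level, and input velocity are invented):

```python
import numpy as np

c_kms = 299792.458

# Uniform log-wavelength grid spanning the [O III] 5007 region.
loglam = np.linspace(np.log(4900.0), np.log(5100.0), 4000)
dlnl = loglam[1] - loglam[0]

def gaussian_line(loglam, center, fwhm_kms):
    """Unit-amplitude Gaussian emission line at the given rest center (A)."""
    sig = fwhm_kms / c_kms / 2.3548
    return np.exp(-0.5 * ((loglam - np.log(center)) / sig) ** 2)

# Rest-frame template, plus an object spectrum redshifted by 350 km/s
# with synthetic noise added.
template = gaussian_line(loglam, 5006.84, 60.0)
v_true = 350.0
rng = np.random.default_rng(1)
spectrum = (gaussian_line(loglam, 5006.84 * (1.0 + v_true / c_kms), 60.0)
            + rng.normal(0.0, 0.02, loglam.size))

# Cross-correlate and convert the peak lag back to a velocity.
t = template - template.mean()
s = spectrum - spectrum.mean()
xc = np.correlate(s, t, mode="full")
lag = xc.argmax() - (t.size - 1)
v_est = (np.exp(lag * dlnl) - 1.0) * c_kms
print(f"recovered velocity ~ {v_est:.0f} km/s")
```

Because the correlation uses the full spectral range, pixels far from any line contribute noise but no velocity information, which is consistent with the finding above that the single-line {\tt emsao} fits carry smaller formal errors.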
Table~\ref{tabPNe} gives our final list of PNe, including their positions, [\ion{O}{3}]~$\lambda$5007\ magnitudes, values of $R$ (corrected for foreground Galactic extinction), velocities, and velocity uncertainties. Unless otherwise noted, these velocities and their errors come from {\tt emsao}. Out of the 774~PNe detected photometrically, 70\% were measured to a radial velocity accuracy better than 15 km~s$^{-1}$. Of the remaining objects, 96 were not targeted for spectroscopy, either due to fiber-positioning constraints or the apparent faintness of the [\ion{O}{3}]~$\lambda$5007\ line. Only 63 PNe were observed but not detected, with most of these having magnitudes well down the planetary nebula luminosity function. Table~\ref{tabTargDet} summarizes our results. Tables~\ref{tabN5068} and \ref{tabN6946} give the positions and photometric properties of PNe in galaxies which were observed in Paper~I, but not selected for spectroscopic follow-up. \section{PNe AND CIRCUMSTELLAR EXTINCTION} In Paper~III, we will use our planetary nebula velocities to probe the velocity dispersion and disk mass distribution of galactic disks. Ideally, it would be helpful to couple this kinematic data with detailed information about the population of PN progenitors. Unfortunately, this is not easy to do. Just because stars with initial masses between $\sim$1$M_{\odot}$ and $\sim$5$M_{\odot}$ are expected to evolve into planetary nebulae, that does not mean that all intermediate mass stars contribute equally to the bright end of the planetary nebula luminosity function. If the models by \citet{marigo} and \citet{mendez} are correct, then the bright-end of the planetary nebula luminosity function is dominated by objects from relatively massive ($\sim$2$M_{\odot}$), relatively young ($\sim$1~Gyr) progenitors. On the other hand, a number of recent analyses suggest that most [\ion{O}{3}]-bright planetaries are not formed from single stars at all. 
Alternate scenarios, involving common-envelope interactions \citep{frank}, blue straggler evolution \citep{bs-pn}, and even ionization from accreting white dwarfs \citep{soker} have all been used to explain the PN phenomenon. Thus, at present, it is impossible to say anything definitive about the progenitors of our kinematic test particles. We can, however, combine our spectroscopic line ratios with the [\ion{O}{3}] $\lambda 5007$ and H$\alpha$+[\ion{N}{2}] photometry of Paper~I to gain some insight into the question of the PN population's homogeneity. Specifically, we can use our data to estimate circumstellar extinction, and probe the uniformity of the PN population as a function of absolute magnitude. A PN whose central star is intrinsically faint can have two possible progenitors: it can be a high core-mass, faded remnant that was once at the bright-end cutoff of the PN luminosity function, or it can be a lower-mass star which is just now attaining maximum brightness. In theory, these two scenarios make different predictions about the behavior of the AGB dust envelope. In the case of a high-mass star, the circumstellar extinction should remain roughly constant, since the timescale for central star evolution is much shorter than that for nebular expansion. However, for lower-mass stars, the slower evolutionary timescales allow dust to dissipate and produce systematically less extinction. By measuring the Balmer decrements in a sample of planetary nebulae of varying brightnesses, it may be possible to perform a global test of the PN population. Note that this type of analysis can only be performed on a sample of extragalactic PNe, since in the Galaxy, the uncertainties associated with distance and foreground reddening overwhelm all other aspects of the analysis. The key to performing this experiment is to have well-determined estimates of H$\beta$ relative to [\ion{O}{3}] $\lambda 5007$. 
As Fig.~\ref{CoAdd} illustrates, such data are not easy to obtain: in our sample of galaxies, only M94 has a sufficient number of high quality measurements. To this sample, we can add in the PNe of M33, which have photometry and spectroscopy from \citet{M33}, and the LMC, where spectroscopic and photometric data are available from \citet{md1, md2}, \citet{vdm} and \citet{p6}. We then estimate each PN's Balmer decrement: for M94 PNe, we simply measure the ratio of H$\beta$ to [\ion{O}{3}] $\lambda 5007$ on our spectra, and then scale these values to H$\alpha$ using our absolute [\ion{O}{3}] and H$\alpha$+[\ion{N}{2}] photometry. Such a procedure neglects the contribution of the nitrogen lines, but since the stronger of these lines, [\ion{N}{2}] $\lambda 6584$, falls on the wings of the narrow-band filter's bandpass (where the transmission is $\sim$1/4 maximum), this is not a serious problem. In the case of M33 and the LMC, where the spectral region about H$\alpha$ is directly observed, we mimic our M94 data by including [\ion{N}{2}] $\lambda 6548$ and one quarter of [\ion{N}{2}] $\lambda 6584$ in our estimate of H$\alpha$. We then compute the logarithmic H$\beta$ extinction for each planetary, using our faux H$\alpha$/H$\beta$ ratio, an assumed intrinsic decrement of 2.86, and a \citet{ccm89} reddening law. \begin{figure}[t] \epsscale{1.05} \plotone{f6.eps} \caption{Balmer-decrement based estimated values, $c^{\prime}$, for the logarithmic extinction at H$\beta$ for PNe in the LMC, M33, and M94, as a function of [\ion{O}{3}] $\lambda 5007$ absolute magnitude. These values are slightly overestimated, since H$\alpha$ is partially contaminated by emission from [\ion{N}{2}]. Spectroscopically derived values of $c$ (without the [\ion{N}{2}] adjustment) in the Magellanic Cloud are shown as gray circles. Note the general agreement between galaxies; there is no evidence for an extinction dependence on metallicity, nor is there any significant trend with absolute magnitude. 
\label{cval} } \end{figure} The results of this analysis are shown in Figure~\ref{cval}. From the figure, it is clear that the mean circumstellar extinction for PNe in M94 is roughly the same as that observed in M33 and the LMC, {i.e.,}\ $E(B-V) \sim 0.2$. This is a somewhat surprising result: one might reasonably expect the PNe formed from the metal-poor stars of the LMC to have systematically less dust. This does not appear to be the case. More importantly, the data show no evidence for a gradient in extinction with absolute [\ion{O}{3}] magnitude. Note that, as one goes down the luminosity function, [\ion{N}{2}] increases in importance. So, if nitrogen emission were contaminating our analysis, we would expect to see an inverse correlation between estimated extinction and absolute magnitude. But the data show no significant trend at all; this argues against any large change in stellar population with absolute [\ion{O}{3}] magnitude. Formally, our results are consistent with most of the PNe in our sample beginning as high core-mass stars before fading. However, considering the scatter in the data and the uncertainties, our results only weakly support this conclusion. Small population differences could also be consistent with our results. \section{CONCLUSIONS} In Paper~I and in \citet{M101PNe} we identified 774 planetary nebula candidates in five nearby, low-inclination galaxies: IC~342, M74, M83, M94, and M101. We now have spectroscopic confirmations for over 600 of the PNe, with 538 having been measured to better than 15~km~s$^{-1}$\ precision. Of the remaining $\sim$160~objects, most were either not targeted for spectroscopy (due to fiber positioning constraints) or too faint to be detected with our spectrographs. We can now use these data to investigate the kinematic structure of the galaxies' disks and, for the first time, dynamically measure disk mass-to-light ratios throughout the body of large spiral galaxies. 
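The Balmer-decrement estimate behind Figure~\ref{cval} can be sketched numerically as follows (a hedged sketch, not the code used for the paper: the intrinsic ratio 2.86 is from the text, while the relative-extinction coefficient 0.33 and the conversion $c \approx 1.46\,E(B-V)$ are standard approximations for a CCM-type law, and the function names are ours):

```python
import math

def c_hbeta(obs_halpha_hbeta, intrinsic=2.86, delta_f=0.33):
    """Logarithmic extinction c at H-beta from an observed H-alpha/H-beta ratio.

    Assumes F_obs(Ha)/F_obs(Hb) = intrinsic * 10**(c * delta_f), where
    delta_f ~ 0.33 is the differential reddening between H-beta and H-alpha
    for a standard Galactic law (illustrative value).
    """
    return math.log10(obs_halpha_hbeta / intrinsic) / delta_f

def ebv_from_c(c):
    """Approximate E(B-V), using c ~ 1.46 E(B-V) for a standard law."""
    return c / 1.46
```

An observed decrement of 4.0, for instance, yields $c^{\prime} \approx 0.44$, i.e. $E(B-V) \approx 0.3$.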
\begin{deluxetable}{lcccrc} \tabletypesize{\scriptsize} \tablecaption{NGC~5068 Planetary Nebula Candidates\label{tabN5068}} \tablewidth{0pt} \tablehead{ \colhead{ID} &\colhead{$\alpha(2000)$} &\colhead{$\delta(2000)$} &\colhead{$m_{5007}$} &\colhead{$R$} &\colhead{$\sigma_{R}$} } \startdata NGC 5068-1 &13:18:59.79 &$-$21:00:07.5 &24.85 &2.32 & 0.21 \\ NGC 5068-2 &13:19:03.17 &$-$21:01:59.3 &24.94 &3.76 & 0.61 \\ NGC 5068-3 &13:18:51.10 &$-$21:04:24.9 &24.97 &2.29 & 0.21 \\ NGC 5068-4 &13:18:47.30 &$-$21:02:49.7 &25.04 &1.70 & 0.16 \\ NGC 5068-5 &13:19:01.06 &$-$21:00:49.8 &25.07 &1.61 & 0.11 \\ NGC 5068-6 &13:19:02.99 &$-$21:02:10.8 &25.13 &1.87 & 0.24 \\ NGC 5068-7 &13:18:48.73 &$-$21:00:09.3 &25.28 &2.29 & 0.33 \\ NGC 5068-8 &13:19:11.11 &$-$20:59:57.5 &25.30 &3.18 & 0.39 \\ NGC 5068-9 &13:18:57.66 &$-$21:05:44.5 &25.32 &1.76 & 0.20 \\ NGC 5068-10 &13:19:01.92 &$-$21:01:35.9 &25.40 &1.65 & 0.17 \\ NGC 5068-11 &13:18:49.66 &$-$21:04:20.1 &25.40 &$>1.69$ & \dots \\ NGC 5068-12 &13:18:53.90 &$-$21:00:56.9 &25.41 &1.80 & 0.43 \\ NGC 5068-13 &13:18:53.41 &$-$20:59:59.2 &25.49 &1.66 & 0.25 \\ NGC 5068-14 &13:18:44.62 &$-$21:00:01.8 &25.50 &1.54 & 0.15 \\ NGC 5068-15 &13:18:47.50 &$-$21:01:01.2 &25.57 &1.18 & 0.10 \\ NGC 5068-16 &13:19:01.00 &$-$21:03:40.2 &25.96 &1.26 & 0.21 \\ NGC 5068-17 &13:18:53.33 &$-$20:59:51.9 &25.98 &0.83 & 0.11 \\ NGC 5068-18 &13:18:42.89 &$-$21:03:04.2 &26.02 &0.91 & 0.16 \\ NGC 5068-19 &13:18:53.26 &$-$20:59:18.3 &26.11 &1.07 & 0.16 \\ \enddata \end{deluxetable} \acknowledgments We would like to thank KPNO and CTIO personnel for friendly travel, telescope, and instrumental support (especially Di Harmer for her excellent assistance with WIYN+Hydra) and the HET RAs. We would also like to thank John Feldmeier, George Jacoby, Antonino Cucchiara, Matt Vinciguerra, Patrick Durrell, and Kenneth Moody for help with observing and would like to acknowledge the useful comments of our anonymous referee.
This research has made use of the USNOFS Image and Catalogue Archive operated by the United States Naval Observatory, Flagstaff Station (http://www.nofs.navy.mil/data/fchpix/), NASA's Astrophysics Data System, and the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This work was supported by NSF grant AST 06-07416 and a Pennsylvania Space Grant Fellowship. Facilities: \facility{Blanco(Hydra)}, \facility{WIYN(Hydra)}, \facility{HET(MRS)}. \begin{deluxetable}{lcccc} \tabletypesize{\scriptsize} \tablecaption{NGC~6946 Planetary Nebula Candidates\label{tabN6946}} \tablewidth{0pt} \tablehead{ \colhead{ID} &\colhead{$\alpha(2000)$} &\colhead{$\delta(2000)$} &\colhead{$m_{5007}$} &\colhead{$R$} } \startdata NGC 6946-1 &20:34:39.26 &+60:04:44.7 &25.61 &$>0.90 $ \\ NGC 6946-2 &20:35:25.42 &+60:08:37.6 &25.72 &$>1.16 $ \\ NGC 6946-3 &20:34:57.58 &+60:08:11.4 &25.72 &$>1.01 $ \\ NGC 6946-4 &20:34:26.31 &+60:07:38.2 &25.76 &$>1.76 $ \\ NGC 6946-5 &20:35:18.35 &+60:07:29.4 &25.79 &$>0.91 $ \\ NGC 6946-6 &20:34:21.94 &+60:07:30.1 &25.79 &$>0.95 $ \\ NGC 6946-7 &20:34:56.08 &+60:06:36.4 &25.80 &$>0.91 $ \\ NGC 6946-8 &20:34:40.98 &+60:04:41.5 &25.81 &$>0.90 $ \\ NGC 6946-9 &20:35:05.18 &+60:12:26.9 &25.87 &$>2.55 $ \\ NGC 6946-10 &20:34:33.63 &+60:05:22.6 &25.89 &$>2.75 $ \\ NGC 6946-11 &20:34:21.69 &+60:11:47.3 &25.91 &$>1.44 $ \\ NGC 6946-12 &20:35:06.80 &+60:10:14.7 &25.95 &$>1.20 $ \\ NGC 6946-13 &20:35:33.83 &+60:11:53.1 &25.96 &$>0.86 $ \\ NGC 6946-14 &20:34:35.39 &+60:05:00.5 &25.96 &$>1.35 $ \\ NGC 6946-15 &20:35:06.52 &+60:12:49.5 &25.98 &$>1.66 $ \\ NGC 6946-16 &20:34:38.62 &+60:06:31.9 &25.99 &$>0.73 $ \\ NGC 6946-17 &20:34:24.38 &+60:06:16.9 &26.01 &$>0.74 $ \\ NGC 6946-18 &20:34:43.52 &+60:07:51.9 &26.01 &$>1.58 $ \\ NGC 6946-19 &20:34:31.10 &+60:05:59.7 &26.03 &$>0.72 $ \\ NGC 6946-20 &20:34:17.78 &+60:09:42.2 &26.04 
&$>1.04 $ \\ NGC 6946-21 &20:35:00.52 &+60:08:58.5 &26.05 &$>0.68 $ \\ NGC 6946-22 &20:35:01.74 &+60:07:31.3 &26.11 &$>1.33 $ \\ NGC 6946-23 &20:34:38.03 &+60:07:33.3 &26.11 &$>1.28 $ \\ NGC 6946-24 &20:34:37.65 &+60:11:56.4 &26.11 &$>0.94 $ \\ NGC 6946-25 &20:35:13.96 &+60:11:37.0 &26.12 &$>1.12 $ \\ NGC 6946-26 &20:34:33.41 &+60:10:41.2 &26.14 &$>1.01 $ \\ NGC 6946-27 &20:35:21.04 &+60:04:42.8 &26.15 &$>1.17 $ \\ NGC 6946-28 &20:35:18.84 &+60:11:40.2 &26.15 &$>1.12 $ \\ NGC 6946-29 &20:35:20.54 &+60:13:56.0 &26.15 &$>0.72 $ \\ NGC 6946-30 &20:35:20.18 &+60:05:05.1 &26.16 &$>0.71 $ \\ NGC 6946-31 &20:34:17.43 &+60:10:48.8 &26.17 &$>1.02 $ \\ NGC 6946-32 &20:34:57.31 &+60:07:16.2 &26.18 &$>0.66 $ \\ NGC 6946-33 &20:34:47.16 &+60:09:06.1 &26.19 &$>1.00 $ \\ NGC 6946-34 &20:34:37.06 &+60:10:26.0 &26.21 &$>0.65 $ \\ NGC 6946-35 &20:34:27.87 &+60:06:11.1 &26.21 &$>1.04 $ \\ NGC 6946-36 &20:35:18.87 &+60:07:17.8 &26.22 &$>1.20 $ \\ NGC 6946-37 &20:34:54.98 &+60:14:03.3 &26.23 &$>1.74 $ \\ NGC 6946-38 &20:34:32.68 &+60:10:42.3 &26.25 &$>0.48 $ \\ NGC 6946-39 &20:34:30.10 &+60:07:36.5 &26.25 &$>0.63 $ \\ NGC 6946-40 &20:34:47.17 &+60:08:25.8 &26.26 &$>0.56 $ \\ NGC 6946-41 &20:35:30.19 &+60:10:44.5 &26.29 &$>1.14 $ \\ NGC 6946-42 &20:35:07.16 &+60:05:33.0 &26.29 &$>0.69 $ \\ NGC 6946-43 &20:34:27.10 &+60:09:49.0 &26.31 &$>1.43 $ \\ NGC 6946-44 &20:34:44.20 &+60:09:38.5 &26.33 &$>0.76 $ \\ NGC 6946-45 &20:35:28.44 &+60:04:48.0 &26.37 &$>1.59 $ \\ NGC 6946-46 &20:34:37.18 &+60:06:26.4 &26.37 &$>0.85 $ \\ NGC 6946-47 &20:35:23.43 &+60:10:24.9 &26.38 &$>0.70 $ \\ NGC 6946-48 &20:35:26.51 &+60:13:45.2 &26.40 &$>1.80 $ \\ NGC 6946-49 &20:35:11.05 &+60:09:58.0 &26.44 &$>2.10 $ \\ NGC 6946-50 &20:35:22.40 &+60:10:56.1 &26.47 &$>0.56 $ \\ NGC 6946-51 &20:35:19.32 &+60:10:29.3 &26.50 &$>0.55 $ \\ NGC 6946-52 &20:35:12.56 &+60:11:19.3 &26.52 &$>0.88 $ \\ NGC 6946-53 &20:35:11.96 &+60:12:00.8 &26.53 &$>1.45 $ \\ NGC 6946-54 &20:35:08.93 &+60:08:29.6 &26.53 &$>0.48 $ \\ NGC 6946-55 
&20:35:29.14 &+60:10:09.4 &26.55 &$>1.06 $ \\ NGC 6946-56 &20:35:22.54 &+60:08:11.5 &26.56 &$>0.70 $ \\ NGC 6946-57 &20:34:22.88 &+60:10:14.4 &26.62 &$>0.80 $ \\ NGC 6946-58 &20:34:19.04 &+60:05:15.2 &26.63 &$>0.73 $ \\ NGC 6946-59 &20:34:45.99 &+60:04:30.4 &26.65 &$>0.72 $ \\ NGC 6946-60 &20:34:44.63 &+60:12:53.4 &26.68 &$>1.42 $ \\ NGC 6946-61 &20:35:07.59 &+60:07:45.8 &26.69 &$>0.42 $ \\ NGC 6946-62 &20:34:54.67 &+60:06:09.8 &26.70 &$>0.42 $ \\ NGC 6946-63 &20:34:59.01 &+60:13:37.9 &26.72 &$>0.42 $ \\ NGC 6946-64 &20:34:59.77 &+60:12:26.5 &26.74 &$>0.44 $ \\ NGC 6946-65 &20:35:03.43 &+60:12:49.5 &26.83 &$>0.94 $ \\ NGC 6946-66 &20:35:22.44 &+60:06:53.9 &26.85 &$>0.54 $ \\ NGC 6946-67 &20:34:58.04 &+60:12:59.9 &26.91 &$>0.34 $ \\ NGC 6946-68 &20:35:09.01 &+60:11:42.4 &27.00 &$>0.76 $ \\ NGC 6946-69 &20:34:50.50 &+60:06:54.4 &27.05 &$>0.33 $ \\ NGC 6946-70 &20:34:29.19 &+60:12:26.5 &27.05 &$>0.34 $ \\ NGC 6946-71 &20:34:33.01 &+60:11:36.3 &27.32 &$>0.26 $ \\ \enddata \end{deluxetable}
\section{Introduction and results} Suppose that $f$ is a transcendental meromorphic function on $\mathbb C$ such that, as $z$ tends to infinity along a path $\gamma $ in the plane, $f(z)$ tends to some $\alpha \in \mathbb C$. Then, for each $t > 0$, an unbounded subpath of $\gamma$ lies in a component $C(t)$ of the set $ \{ z \in \mathbb C : |f(z) - \alpha | < t \}$. Here $C(t) \subseteq C(s)$ if $0 < t < s$, and the intersection $\bigcap_{t>0} C(t)$ is empty \cite{BE}. The path $\gamma$ then determines a transcendental singularity of the inverse function $f^{-1}$ over the asymptotic value $\alpha$ and each $C(t)$ is called a neighbourhood of the singularity \cite{BE,Nev}. Two transcendental singularities over $\alpha$ are distinct if they have disjoint neighbourhoods for some $t > 0$. Following \cite{BE,Nev}, a transcendental singularity of $f^{-1}$ over $\alpha \in \mathbb C$ is said to be direct if $C(t)$, for some $t > 0$, contains finitely many points $z$ with $f(z) = \alpha$, in which case there exists $t_1 > 0$ such that $C(t)$ contains no $\alpha$-points of $f$ for $0 < t < t_1$. A direct singularity over $\alpha \in \mathbb C$ is logarithmic if there exists $t > 0$ such that $\log \left( t/(f(z)-\alpha) \right)$ maps $C(t)$ conformally onto the right half plane. If, on the other hand, $C(t)$ contains infinitely many $\alpha$-points of $f$, for every $t > 0$, then the singularity is called indirect: a well known example is given by $f(z) = z^{-1}\sin z$, with $\alpha=0$ and $\gamma$ the positive real axis $\mathbb R^+$. Transcendental singularities of $f^{-1}$ over $\infty$ and their corresponding neighbourhoods may be defined and classified using $1/f$, and the asymptotic and critical values of $f$ together comprise the singular values of $f^{-1}$.
If $f$ has finite (lower) order of growth, as defined in terms of the Nevanlinna characteristic function $T(r, f)$ \cite{Hay2,Nev}, then the number of direct singularities is controlled by the celebrated Denjoy-Carleman-Ahlfors theorem \cite{Hay7,Nev}. \begin{thm}[Denjoy-Carleman-Ahlfors theorem] \label{thmdca} Let $f$ be a transcendental meromorphic function in the plane of finite lower order $\mu$. Then the number of direct transcendental singularities of $f^{-1}$ is at most $\max \{1, 2 \mu \}$. \end{thm} A key consequence of Theorem \ref{thmdca} is that a transcendental entire function of finite lower order $\mu$ has at most $2 \mu $ finite asymptotic values \cite{Hay7}. A result of Bergweiler and Eremenko \cite{BE} shows that the critical values of a meromorphic function of finite (lower) order have a decisive influence on indirect transcendental singularities. \begin{thm}[\cite{BE}] \label{bethm} Let $f$ be a transcendental meromorphic function in the plane of finite lower order.\\ (a) If $f^{-1}$ has an indirect transcendental singularity over $\alpha \in \widehat{\mathbb C} = \mathbb C \cup \{ \infty \}$ then each neighbourhood of the singularity contains infinitely many zeros of $f'$ which are not $\alpha$-points of $f$; in particular, $\alpha$ is a limit point of critical values of $f$. \\ (b) If $f$ has finitely many critical values then $f^{-1}$ has finitely many transcendental singularities, and all transcendental singularities are logarithmic. \end{thm} Theorem \ref{bethm} was proved in \cite{BE} for $f$ of finite order, and was extended to finite lower order, using essentially the same method, by Hinchliffe \cite{Hin}. Part (b) follows from part (a) combined with Theorem \ref{thmdca} and a well known classification theorem from \cite[p.287]{Nev}, which shows in particular that any transcendental singularity of the inverse function over an isolated singular value is logarithmic.
Theorem \ref{bethm} was employed in \cite{BE} to prove a long-standing conjecture of Hayman \cite{Hay1} concerning zeros of $ff'-1$, and has found many subsequent applications, including to zeros of derivatives \cite{La11}. The reader is referred to \cite{BEdir,sixsmith} for further striking results on singularities of the inverse, both restricted to entire functions but independent of the order of growth. The starting question of the present paper concerns the extent to which Theorems \ref{thmdca} and \ref{bethm} hold under the weaker hypothesis that $f^{(k)}/f$ has finite lower order for some $k \in \mathbb N = \{ 1, 2, \ldots \}$. The obvious example $f(z) = \exp( \exp(z) )$ shows that $f'/f$ can have finite order despite $f$ having infinite lower order; here $f^{-1}$ has infinitely many direct (indeed logarithmic) singularities over $0$ and $\infty$, and one over~$1$. Furthermore, if $k \in \mathbb N$ and $A_k$ is a transcendental entire function then the lemma of the logarithmic derivative \cite{Hay2} shows that every non-trivial solution of \begin{equation} \label{de1} w^{(k)} - A_k(z) w = 0 \end{equation} has infinite lower order, even if $A_k$ has finite order. Clearly each of $\exp( \exp(z) )$ and $\exp( z^{-1} \sin z )$ satisfies an equation of form (\ref{de1}) with coefficient of finite order. Note further that if $f$ is a transcendental meromorphic function in the plane and $f'/f$ has finite lower order then it is easy to prove by induction that so has $A_k = f^{(k)}/f$ for every $k \geq 1$, using the formula $A_{k+1} = A_k A_1 + A_k'$, whereas the example $$ f(z) = e^{-z/2} \sin (e^z), \quad \frac{f'(z)}{f(z)} = - \, \frac12 + e^z \cot (e^z), \quad \frac{f''(z)}{f(z)} = \frac14 - e^{2z} , $$ shows that $f''/f$ can have finite order despite $f'/f$ having infinite lower order. 
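For completeness, the recursion $A_{k+1} = A_k A_1 + A_k'$ quoted above is just one differentiation of $f^{(k)} = A_k f$:

```latex
f^{(k+1)} = \left( A_k f \right)' = A_k' f + A_k f'
          = \left( A_k' + A_k A_1 \right) f ,
\qquad \text{so that} \qquad
A_{k+1} = \frac{f^{(k+1)}}{f} = A_k A_1 + A_k' .
```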
\begin{thm} \label{thmds} Let $f$ be a transcendental meromorphic function in the plane such that $f^{-1}$ has $n \geq 1$ distinct direct transcendental singularities over finite non-zero values. Let $k \in \mathbb N$ and let $\mu$ be the lower order of $A_k = f^{(k)}/f$. Then the following statements hold. \\ (i) There exists a set $F_0 \subseteq [1, \infty)$ of finite logarithmic measure such that \begin{equation} \label{b5a} \lim_{r \to + \infty, r \not \in F_0} \frac{ \log \left( \min \{ |A_k(z)| : |z| = r \} \right)}{\log r} = - \infty . \end{equation} (ii) If $n \geq 2$ then $n \leq 2 \mu $. \\ (iii) If $n=1$ and there exist $\kappa > 0$ and a path $\gamma$ tending to infinity in the complement of the neighbourhood $C(\kappa)$ of the singularity, then $\mu \geq 1/2$. \end{thm} Theorem \ref{thmds} will be deduced from a version of the Wiman-Valiron theory for meromorphic functions with direct tracts developed in \cite{BRS}, and part (ii) is sharp, by Example I in Section \ref{exa}. Furthermore, if $g$ is a transcendental entire function of lower order less than $ 1/2$ then the inverse function of $f=1 - 1/g$ has a direct singularity over $1$; in this case $A_k$ obviously has lower order less than $ 1/2$ but the $\cos \pi \lambda$ theorem \cite[Ch. 6]{Hay7} implies that every neighbourhood of the singularity contains circles $|z| = r$ with $r$ arbitrarily large, so that a path $\gamma$ as in (iii) cannot exist. \begin{thm} \label{bekeylem2} Let $f$ be a transcendental meromorphic function in the plane such that $f^{(k)}/f$ has finite lower order for some $k \in \mathbb N$. Assume that $f^{-1}$ has an indirect transcendental singularity over $\alpha \in \widehat{\mathbb C} $. Then each neighbourhood of the singularity contains infinitely many zeros of $f'f^{(k)}$ which are not $\alpha$-points of $f$. \end{thm} Theorem \ref{bekeylem2} will be proved using a modification of methods from \cite{BE,Hin}.
\begin{cor} \label{cor1} Let $f$ be a transcendental meromorphic function in the plane, with finitely many critical values, such that $f'/f$ has finite lower order. Then $f^{-1}$ has finitely many transcendental singularities over finite non-zero values, and $f$ has finitely many asymptotic values. Moreover, all transcendental singularities of $f^{-1}$ are logarithmic. \end{cor} Corollary \ref{cor1} follows from Theorems \ref{thmds} and \ref{bekeylem2}, coupled with \cite[p.287]{Nev}. \begin{cor} \label{cor3} Let $f$ be a transcendental meromorphic function in the plane such that $f''/f$ has lower order $\mu < \infty$ and $f'/f$ and $f''/f'$ have finitely many zeros. Then $f''/f'$ is a rational function and $f$ has finite order and finitely many poles. \end{cor} To prove Corollary \ref{cor3} observe that all but finitely many zeros of $f'f''$ are zeros of $f$. Thus $f^{-1}$ has no indirect singularities, by Theorem \ref{bekeylem2}, and hence $f$ has finitely many asymptotic values, in view of Theorem \ref{thmds}. Since $f$ evidently has finitely many critical values, the result follows via \cite[Theorem 2]{La11}. The condition $\mu < \infty$ holds if $f'/f$ has finite lower order, and is not redundant, because of an example in \cite{La11}. \hfill$\Box$ \vspace{.1in} The last result of this paper is related to the following theorem from \cite{Laschwarzian}. \begin{thm}[\cite{Laschwarzian}]\label{thm1} Let $M$ be a positive integer and let $f$ be a transcendental meromorphic function in the plane with transcendental Schwarzian derivative \begin{equation} \label{Schdef} S_f(z) = \frac{f'''(z)}{f'(z)} - \frac32 \left( \frac{f''(z)}{f'(z)} \right)^2 , \end{equation} such that: (i) $f$ has finitely many critical values and all multiple points of $f$ have multiplicity at most $M $; (ii) the inverse function of $f$ has finitely many transcendental singularities. 
Then the following three conclusions hold: (a) $f$ has infinitely many multiple points; (b) the inverse function of $S_f$ does not have a direct transcendental singularity over $\infty$; (c) the value $\infty$ is not Borel exceptional for $S_f$. \end{thm} Conclusion (a) is a result of Elfving \cite{elf} and Rolf Nevanlinna \cite{Nev2,Nev}, but was proved in \cite{Laschwarzian} by a completely different method. The following example shows that under the hypotheses of Theorem~\ref{thm1} the inverse of the Schwarzian can have a direct transcendental singularity over a finite value: write $$ g(z) = \sinh z , \quad S_g(z) = 1 - \frac{3 \tanh^2 z}2 , $$ so that $S_g^{-1}$ has two logarithmic singularities over $-1/2$. However, assumptions (i) and (ii) of Theorem \ref{thm1} imply that $f$ belongs to the Speiser class $\mathcal{S}$ \cite{Ber4,BE} consisting of all meromorphic functions in the plane for which the inverse function has finitely many singular values. For $f \in \mathcal{S}$, the following result excludes direct singularities of the inverse of $S_f$ over $0$. \begin{thm}\label{thm1aa} Let $f$ be a transcendental meromorphic function in the plane belonging to the Speiser class $\mathcal{S}$, with transcendental Schwarzian derivative $S_f$. Then the inverse function of $S_f$ does not have a direct transcendental singularity over $0$. \end{thm} The example $f(z) = \tan^2 \sqrt{ z}$ from \cite{CER} shows that for $f \in \mathcal{S}$ it is possible for $0$ to be an asymptotic value of $S_f$. Here direct computation shows that $f''(z)/f'(z)$ tends to $0$ as $z \to \infty$ in the left half plane, and so does $S_f(z)$. The author thanks the referees for helpful comments. \section{Examples illustrating Theorems \ref{thmds} and \ref{bekeylem2}}\label{exa} \noindent \textbf{Example I.} A function extremal for Theorem \ref{thmds}(ii), but not for Theorem \ref{thmdca}, is given by $$ f(0) = 1, \quad \frac{f'(z)}{f(z)} = \frac{\pi z}{\sin \pi z} . 
$$ Here $f$ is meromorphic in the plane, having at each non-zero integer $n$ a zero or pole of multiplicity $|n|$, depending on the sign and parity of $n$. Hence $N(r, f) $ and $N(r, 1/f)$ have order $2$. Because $$ 0 < \alpha = \int_0^{+\infty} \frac{\pi y}{\sinh \pi y} \, dy = \int_0^{+\infty} \frac{\pi y}{\pi y + (\pi y)^3/6 + \ldots } \, dy < \int_0^1 1 \, dy + \int_1^\infty \frac{6}{\pi^2 y^2 } \, dy < \pi $$ and $f'/f$ is even, $f$ has distinct asymptotic values $e^{\pm i \alpha }$, approached as $z$ tends to infinity along the imaginary axis. As $f'/f$ has finite order and $f$ has no finite non-zero critical values, both of these singularities of $f^{-1}$ are direct by Theorem \ref{bekeylem2}. \hfill$\Box$ \vspace{.1in} \\\\ \textbf{Example II.} Define $g$ by $$ g(0) = 1, \quad \frac{g'(z)}{g(z)} = A_1(z) = \frac1{\pi \cos \sqrt{z}} . $$ The zeros of $\cos \sqrt{z}$ occur where $\sqrt{z} = b_n = (2n+1) \pi/2 $, with $n \in \mathbb Z$, and the residue of $A_1$ at $ b_n^2$ is $\pm (2n+1)$. Thus $g$ is meromorphic in $\mathbb C$, with zeros and poles in $\mathbb R^+$ and no finite non-zero critical values. Integration along the negative real axis shows that $g$ has a non-zero real asymptotic value $\alpha$, and $g^{-1}$ has a logarithmic singularity over $\alpha$ by Corollary \ref{cor1}. This gives $\delta > 0$ and a simply connected component $C$ of $\{ z \in \mathbb C: |g(z) - \alpha | < \delta \}$ with $(-\infty, R) \subseteq C$ for some $R < 0$. Moreover, $C$ is symmetric with respect to $\mathbb R$, since $g$ is real meromorphic, so that $C \cap \mathbb R^+$ is bounded, and $g$ is extremal for Theorem \ref{thmds}(iii). \hfill$\Box$ \vspace{.1in} \noindent \textbf{Example III.} Let $F(z) = \exp( -z /2 - (1/4) \sin 2z ) \cos z $, so that $F''/F$ is entire of finite order. Then $F(z)$ tends to $0$ along $\mathbb R^+$ and this singularity of $F^{-1}$ is evidently indirect. 
\hfill$\Box$ \vspace{.1in} \noindent \textbf{Example IV.} Define entire functions $A_1$ and $v$ by $$ v(0) = 1, \quad \frac{v'(z)}{v(z)} = A_1(z) = \frac{1 - \cos z}{z^2} = \frac12 + \ldots . $$ Then there exists $\alpha \in \mathbb R^+$ such that $v(x) \to \exp( \pm \alpha )$ as $x \to \pm \infty$ on $\mathbb R$ and, since $A_1$ does not satisfy (\ref{b5a}), Theorem \ref{thmds} implies that $v^{-1}$ has no direct singularities over finite non-zero values. Because all critical points of $v$ are real, all but finitely many of them belong to neighbourhoods of the indirect singularities over $\exp( \pm \alpha )$, and so $v^{-1}$ has no other indirect singularities, by Theorem~\ref{bekeylem2}. Thus applying \cite[p.287]{Nev} again, in conjunction with Iversen's theorem, shows that $v^{-1}$ has logarithmic singularities over the omitted values $0$ and $\infty$. \hfill$\Box$ \vspace{.1in} \noindent \textbf{Example V.} Let $h(z) = \exp( \sin z - z )$, so that $A_1 = h'/h$ is entire of finite order but does not satisfy (\ref{b5a}). Since $h(z)$ tends to $0$ along $\mathbb R^+$, and to $\infty$ on the negative real axis, with $h'(2 \pi n )= 0$ for all $n \in \mathbb Z$, these singularities of $h^{-1}$ are direct but not logarithmic. \hfill$\Box$ \vspace{.1in} \section{Preliminaries} The following well known estimate may be found in Theorem 8.9 of \cite{Hay7}. \begin{lem}[\cite{Hay7}] \label{dcalem} Let $D_1, \ldots , D_n$ be $n \geq 2$ pairwise disjoint plane domains. If $u_1, \ldots, u_n $ are non-constant subharmonic functions on $\mathbb C$ such that $u_j$ vanishes outside $D_j$ then \begin{equation} \label{dca0} \liminf_{r \to \infty} \frac{h(r)}{r^{n/2} } > 0, \quad h(r) = \max_{1 \leq j \leq n} B(r, u_j), \quad B(r, u_j) = \sup \{ u_j(z): |z| = r \} . \end{equation} \end{lem} \hfill$\Box$ \vspace{.1in} For $a \in \mathbb C$ and $R > 0$ denote by $D(a, R)$ the open disc of centre $a$ and radius $R$, and by $S(a, R)$ its boundary circle. 
\begin{lem} \label{lemderivs} To each $k \in \mathbb N$ corresponds $d_k \in (0, \infty) $ with the following property. Suppose that $0 < R < \infty $ and $w=h(z)$ maps the domain $U \subseteq \mathbb C$ conformally onto $D(a, R)$, with inverse function $F: D(a, R) \to U$. Then there exists an analytic function $V_k: D(a, R) \to \mathbb C$ with \begin{equation} \label{derivest1} h^{(k)}(z) F'(w)^k = V_k(w), \quad |V_k(w)| \leq \frac{d_k}{( R-|w-a|)^{k-1} } \quad \hbox{as} \quad |w-a| \to R- . \end{equation} \end{lem} \textit{Proof.} Assume that $a=0$ and initially that $R=1$. It is clear that (\ref{derivest1}) holds for $k=1$, with $V_1 (w) = 1$. If (\ref{derivest1}) holds for $k$ then it follows that \begin{equation*} h^{(k+1)}(z) F'(w)^{k+1} = V_k'(w) - k h^{(k)}(z) F'(w)^{k-1} F''(w) = V_k'(w) - k V_k(w) \, \frac{ F''(w) }{F'(w)} . \end{equation*} Since $F''(w)/F'(w) = O\left( (1-|w|)^{-1} \right)$ as $|w| \to 1-$ by \cite[p.5, (1.6)]{Hay9}, applying Cauchy's estimate for derivatives to $V_k$ proves the lemma by induction when $R=1$. In the general case write $w = h(z) = RH(z) = Rv$ and $z = F(w)= G(v)$ so that, as $|w| \to R-$, $$|h^{(k)}(z) F'(w)^k | = R^{1-k} |H^{(k)}(z) G'(v)^k | \leq \frac{d_k R^{1-k} }{( 1-|v|)^{k-1} } = \frac{d_k}{( R-|w|)^{k-1} } .$$ \hfill$\Box$ \vspace{.1in} \begin{lem} \label{belem6a} Let $M \in \mathbb N$ and $s > 2^{24}$ and let $E_1, \ldots , E_{N}$ be $N \geq 24M$ pairwise disjoint domains in $\mathbb C$, and for $t > 0$ let $\phi_j(t)$ be the angular measure of $S(0, t) \cap E_j$. Then at least $N-12M$ of the $E_j$ satisfy \begin{equation} \int_{ [ 4s^{1/2} , s /4 ] } \frac{\pi \, dt}{t \phi_j(t)} > M \log s \quad \hbox{and} \quad \int_{ [ 4s , s^2 /4 ] } \frac{\pi \, dt}{t \phi_j(t)} > M \log s . \label{Djnarrow} \end{equation} \end{lem} \textit{Proof.} This is a standard application as in \cite[Ch.
8]{Hay7} or \cite{BE} of the Cauchy-Schwarz inequality, which gives \begin{equation} \label{csineq} \frac{L^2}{t} \leq \frac1t \left( \sum_{j=1}^{L} \phi_j(t) \right) \left( \sum_{j=1}^{L} \frac1{ \phi_j(t) }\right) \leq 2 \sum_{j=1}^{L} \frac{\pi}{ t \phi_j(t) } \end{equation} for $M \leq L \leq N$ and $t > 0$. If $s > 2^{24}$ and either inequality of (\ref{Djnarrow}) fails for $L \geq 6M $ of the $E_j$, without loss of generality for $j=1, \ldots, L$, then integrating (\ref{csineq}) yields a contradiction via $$ 2 L M \log s < 6 L M \log \frac{\sqrt{s}}{16} \leq L^2 \, \log \frac{\sqrt{s}}{16} \leq 2L M \log s . $$ \hfill$\Box$ \vspace{.1in} \begin{lem}[\cite{Ber4}] \label{lember4} Let $h$ be a transcendental meromorphic function in the plane belonging to the Speiser class $\mathcal{S}$. Then there exist positive constants $C$, $R$ and $M$ such that \begin{equation} \label{Sineq} \left| \frac{z h'(z)}{h(z)} \right| \geq C \log^+ \left| \frac{h(z)}M \right| \quad \text{for $|z| \geq R$.} \end{equation} \end{lem} \section{Proof of Theorem \ref{thmds}} Let $f$ be a transcendental meromorphic function in the plane such that $f^{-1}$ has $n \geq 1$ direct singularities over (not necessarily distinct) finite non-zero values $a_1, \ldots, a_n$. Let $k \in \mathbb N$; then $A_k = f^{(k)}/f$ does not vanish identically. There exist a small positive $\delta $ and non-empty components $D_j $ of $\{ z \in \mathbb C : |f(z) - a_j| < \delta \}$, for $j = 1, \ldots n$, such that $f(z) \not = a_j$ on $D_j$, so that $D_j$ immediately qualifies as a direct tract for $g_j = \delta /(f-a_j)$ in the sense of \cite[Section 2]{BRS}. Here $\delta$ may be chosen so small that if $n \geq 2$ then these $D_j$ are pairwise disjoint. For each $j$, define a non-constant subharmonic function $u_j$ on $\mathbb C$ by $$ u_j(z) = \log |g_j(z)| = \log \left| \frac{\delta }{ f(z) - a_j} \right| \quad (z \in D_j ), \quad u_j(z) = 0 \quad ( z \not \in D_j). 
$$ Then \cite[Theorem 2.1]{BRS} implies that, with $B(r, u_j)$ as in (\ref{dca0}), \begin{equation} \label{uest1aa} \lim_{r \to + \infty} \frac{B(r, u_j)}{\log r} = + \infty, \quad \lim_{r \to + \infty} a(r, u_j) = + \infty, \quad a(r, u_j) = rB'(r,u_j) . \end{equation} \begin{lem} \label{lemwv} There exists a set $F_0 \subseteq [1, \infty)$, of finite logarithmic measure, such that for each $s \in [1, \infty) \setminus F_0$ and each $j$ there exists $z_j$ with \begin{equation} \label{wvest} |z_j| = s, \quad A_k(z_j) = \frac{f^{(k)}(z_j)}{f(z_j)} = O \left( \exp( - B(s, u_j) /2 ) \right). \end{equation} \end{lem} \textit{Proof.} Fix $\tau$ with $1/2 < \tau < 1$ and apply the version of Wiman-Valiron theory developed in \cite{BRS} for meromorphic functions with direct tracts. By \cite[Theorem 2.2 and Lemma 6.10]{BRS} there exists a set $F_0 \subseteq [1, \infty)$ of finite logarithmic measure such that every $s \in [1, \infty) \setminus F_0$ has the following two properties: first, $a(s, u_j) $ is large, by (\ref{uest1aa}), but satisfies \begin{equation} \label{dca5} a(s, u_j) \leq B(s, u_j)^2 ; \end{equation} second, for each $j$ there exists $z_j $ with $|z_j| = s$ and $u_j(z_j) = B(s, u_j)$ such that \begin{equation} \label{dca6} \frac{f(z)-a_j}{f(z_j)-a_j} \sim \left( \frac{z}{z_j} \right)^{-a(s, u_j)} \quad \hbox{for} \quad |z-z_j| < \frac{s}{ a(s,u_j)^{\tau} } . \end{equation} A standard application of Cauchy's estimate for derivatives in (\ref{dca6}) now gives \begin{equation*} \left( \frac{f'}{f-a_j} \right)^{(p)} (z) = O \left( \left( \frac{ a(s, u_j) }{s} \right)^{p+1} \right) \quad \hbox{for} \quad p=0, \ldots, k-1 \quad \hbox{and} \quad |z-z_j| < \frac{s }{2 a(s,u_j)^{\tau} } .
\end{equation*} It follows via \cite[Lemma 3.5]{Hay2} that $$ \frac{f^{(k)}(z_j)}{f(z_j)} = \frac{f^{(k)}(z_j)}{f(z_j) - a_j} \cdot \frac{f(z_j)-a_j}{f(z_j)} = O \left( \frac{ a(s, u_j)^{k} \exp( - B(s, u_j) ) }{s^k} \right), $$ which, by (\ref{dca5}), yields (\ref{wvest}) for large enough $s \not \in F_0$. \hfill$\Box$ \vspace{.1in} Combining (\ref{uest1aa}) with (\ref{wvest}) for $j=1$ leads to (\ref{b5a}). To prove the remaining assertions it may be assumed that $A_k$ has finite lower order $\mu$. Choose a positive sequence $(r_m)$ tending to infinity such that \begin{equation} \label{dca2} T(8r_m, A_k) < r_m^{\mu+o(1)} . \end{equation} Let $m$ be large and let $w_1, \ldots, w_{q_m}$ be the zeros and poles of $A_k$ in $r_m/4 \leq |z| \leq 4r_m$, repeated according to multiplicity: then (\ref{dca2}) and standard estimates yield \begin{equation} \label{dca3} q_m \leq n(4r_m, A_k) + n(4r_m,1/A_k) \leq \frac2 {\log 2} \, T(8r_m, A_k) + O(1) \leq r_m^{\mu+o(1)} . \end{equation} Let $U_m$ be the union of the discs $D(w_j, r_m^{ - \mu })$. Since the sum of the radii of the discs of $U_m$ is $ o( r_m)$ by (\ref{dca3}), there exists a set $E_m \subseteq [r_m/2, 2r_m ]$, of linear measure at least $r_m$, and so logarithmic measure $l_m \geq 1/2$, such that for $r \in E_m$ the circle $|z| = r$ does not meet $U_m$. A standard application of the Poisson-Jensen formula \cite{Hay2} on the disc $|\zeta| \leq 4r_m$ then yields \begin{equation} \label{dca4} \left| \log \left| A_k(z) \right| \right| \leq r_m^{\mu+o(1)} \quad \hbox{for} \quad |z| \in E_m . \end{equation} Since $m$ is large and $l_m \geq 1/2$, there exists $s_m \in E_m \setminus F_0$. Suppose now that $n=1$ and there exist $\kappa > 0$ and a path $\gamma$ tending to infinity in the complement of the neighbourhood $C(\kappa)$ of the singularity, or that $n \geq 2$. Then (\ref{dca0}) holds, by \cite[Theorem 6.4]{Hay7} when $n=1$, and by Lemma \ref{dcalem} when $n \geq 2$. 
Combining (\ref{dca0}) and (\ref{wvest}), with $s= s_m \geq r_m /2$, yields points $z_j$ with $|z_j| = s_m$ and, for at least one $j$, $$A_k(z_j) = O \left( \exp( - B(s_m, u_j) /2 ) \right) = O \left( \exp \left( -s_m^{n/2-o(1)} \right) \right) .$$ On combination with (\ref{dca4}) this forces $2 \mu \geq n$. \hfill$\Box$ \vspace{.1in} \section{Indirect singularities} \label{transing} \begin{prop} \label{bekeylem} Let $f$ be a transcendental meromorphic function in the plane such that $ f^{(k)}/f$ has finite lower order $\mu$ for some $k \in \mathbb N$. Assume that $f^{-1}$ has an indirect transcendental singularity over $\alpha \in \mathbb C \setminus \{ 0 \}$. Then for each $\delta > 0$ the neighbourhood $C(\delta)$ of the singularity contains infinitely many zeros of $f' f^{(k)}$. \end{prop} The proof of Proposition \ref{bekeylem} will take up the whole of this section. The method is adapted from those in \cite{BE,Hin}, but some complications arise, in particular when $k \geq 2$. Assume throughout that $f$ and $\alpha$ are as in the hypotheses but $C( \varepsilon )$, for some small $\varepsilon > 0$, contains finitely many zeros of $f'f^{(k)}$. It may be assumed that $\alpha =1$, and that $C( \varepsilon )$ contains no zeros of $f'f^{(k)}$. Choose positive integers $N_1, N_2, \ldots, N_9 $ with $5 \mu + 12 < N_1$ and $N_{j+1}/N_j$ large for each $j$. \begin{lem} \label{belem2} For each $j \in \{ 1, \ldots, N_9 \}$ there exist $z_j \in C( \varepsilon )$ and $a_j \in \mathbb C$ with $0 < r_j = | 1 - a_j| < \varepsilon /2 $, as well as a simply connected domain $D_j \subseteq C( \varepsilon )$, with the following properties. The $a_j$ are pairwise distinct and the $D_j$ pairwise disjoint. Furthermore, the function $f$ maps $D_j$ univalently onto $D(1, r_j)$, with $z_j \in D_j$ and $f(z_j) = 1$. 
Moreover, $0 \not \in D_j$ but $D_j$ contains a path $\sigma_j $ tending to infinity, which is mapped by $f$ onto the half-open line segment $[1, a_j)$, with $f(z) \to a_j$ as $z \to \infty $ on $\sigma_j$. \end{lem} This is proved exactly as in \cite{BE}. If $0 < T_j < \varepsilon /2$ and $z_j \in C(T_j)$ is such that $f(z_j) = 1$, let $r_j$ be the supremum of $t > 0$ such that the branch of $f^{-1}$ mapping $1$ to $z_j$ admits unrestricted analytic continuation in $D(1, t)$. Then $r_j < T_j$ because $f$ is not univalent on $C(T_j)$, and there is a singularity $a_j$ of $f^{-1}$ with $|1-a_j| = r_j$; moreover, $a_j$ must be an asymptotic value of $f$. The $z_j$ and $T_j$ are then chosen inductively: for the details see \cite{BE} (or \cite[Lemma 10.3]{lajda}). \hfill$\Box$ \vspace{.1in} \begin{lem}\label{belem3} Let the $z_j, a_j, \sigma_j$ and $D_j$ be as in Lemma $\ref{belem2}$. For $t > 0$, let $t \theta_j(t) $ be the length of the longest open arc of $S(0, t)$ which lies in $D_j$. Then $f$ satisfies, as $z$ tends to infinity on $\sigma_j$, \begin{equation} \log \frac{r_j}{|f(z) - a_j|} \geq \int_{|z_j|}^{|z|} \frac{dt}{4 t \theta_j(t) } . \label{be1} \end{equation} \end{lem} \textit{Proof.} Let $z = H(w)$ be the branch of $f^{-1}$ mapping $D(1, r_j )$ onto $D_j$. For $z \in \sigma_j$ the distance from $z$ to $\partial D_j$ is at most $|z| \theta_j(|z|)$. Thus Koebe's quarter theorem \cite[Ch. 1]{Hay9} implies that $$ |(w - a_j ) H'(w)| \leq 4 |z| \theta_j(|z|) \quad \hbox{for} \quad z = H(w), \, w \in [1 , a_j) . $$ Hence, for large $z \in \sigma_j$ and $w = f(z)$, writing $u = H(v)$ for $v \in [1, w]$ gives (\ref{be1}) via \begin{eqnarray*} \log \frac{r_j}{| f(z)-a_j |} &=& \int_{[1, w]} \frac{|dv|}{|a_j-v|} = \int_{H([1,w])} \frac{|du|}{|(a_j-v)H'(v)|} \geq \int_{H([1,w])} \frac{|du|}{4 |u| \theta_j(|u|)} . 
\end{eqnarray*} \hfill$\Box$ \vspace{.1in} Since $N_1 > 5 \mu $ there exists a positive sequence $(s_n)$ tending to infinity such that \begin{equation} T(s_n^5, f^{(k)}/f) + T(s_n^5 , f/f^{(k)}) \leq s_n^{N_1} . \label{Mexist} \end{equation} Set \begin{equation} G(z) = z^N \frac{f^{(k)}(z)}{f(z)}, \quad N = N_5. \label{Gdefn} \end{equation} Applying \cite[Lemma 4.1]{LaRo2} to $1/G$ (with $\psi (t) = t$ in the notation of \cite{LaRo2}) gives a small positive $\eta$ such that $G$ has no critical values $w$ with $|w| = \eta $ and such that the length $L(r, \eta , G)$ of the level curves $|G(z)| = \eta $ lying in $D(0, r)$ satisfies \begin{equation} L(s_n^4, \eta, G) = O( s_n^6 T( e^8 s_n^4, G)^{1/2} ) = O( s_n^{6 + N_1/2} ) \leq s_n^{N_1} \quad \hbox{ as $n \to \infty$,} \label{lrgest} \end{equation} using (\ref{Mexist}) and the fact that $N_1 > 12$. Assume henceforth that $n$ is large. \begin{lem} \label{lemingrowth} At least $N_8$ of the domains $D_j$ and paths $\sigma_j$, without loss of generality $D_1, \ldots, D_{N_8}$ and $\sigma_1, \ldots, \sigma_{N_8}$, are such that \begin{equation} \label{minest1} |f(z)-a_j| \leq s_n^{- N_7} \quad \hbox{for $z \in \sigma_j$ with $|z| \geq s_n/4$.} \end{equation} \end{lem} \textit{Proof.} By Lemma \ref{belem6a} it may be assumed that, for $j = 1, \ldots, N_8$, \begin{equation*} \int_{ [ 4s_n^{1/2} , s_n /4 ] } \frac{\pi \, dt}{t \theta_j(t)} > N_8 \log s_n , \end{equation*} which, on combination with Lemma \ref{belem3}, leads to (\ref{minest1}). \hfill$\Box$ \vspace{.1in} \begin{lem} \label{belem5a} Let $w_1, \ldots , w_{q_n}$ be the zeros and poles of $G$ in $s_n^{1/4} \leq |z| \leq s_n^4$, repeated according to multiplicity. 
Then \begin{equation} \label{n(r)est1} q_n \leq n(s_n^4, 1/G) + n(s_n^4, G) = o \left( s_n^{N_1} \right) \end{equation} and there exist $t_n , T_n$ satisfying \begin{equation} s_n^{1/2} - 1 \leq t_n \leq s_n^{1/2} , \quad s_n^2 \leq T_n \leq s_n^2 +1, \label{tnTndef} \end{equation} such that \begin{equation} \max \{ | \log |G(z)| | : z \in S(0, t_n) \cup S(0, T_n) \} \leq s_n^{N_1+1} . \label{tnTn2} \end{equation} \end{lem} \textit{Proof.} (\ref{n(r)est1}) follows from (\ref{Mexist}). Let $U_n$ be the union of the discs $D(w_q, s_n^{-N_1-1} )$: these discs have sum of radii at most $s_n^{-1}$ and so since $n$ is large there exist $t_n, T_n$ satisfying (\ref{tnTndef}) such that the circles $S(0, t_n), S(0, T_n)$ do not meet $U_n$. Hence the Poisson-Jensen formula gives (\ref{tnTn2}). \hfill$\Box$ \vspace{.1in} \begin{lem} \label{lemnocomp} Define sets $E$, $K_n$ and $L_n$ by $E = \{ z \in \mathbb C : |G(z)| < \eta \} $ and $$ K_n = \{ z \in \mathbb C : t_n < |z| < T_n \} , \quad L_n = \{ z \in \mathbb C : s_n/4 < |z| < 4 s_n \}. $$ Then the number of components $E_q$ of $E \cap K_n $ which meet $L_n $ is at most $s_n^{N_1}$. \end{lem} \textit{Proof.} If the closure $F_q$ of $E_q$ lies in $K_n $ then $E_q$ must contain a zero of $G$, whereas if $F_q \not \subseteq K_n $ then $\partial E_q \cap K_n$ has arc length at least $s_n/8$. Thus the lemma follows from (\ref{lrgest}) and (\ref{n(r)est1}). \hfill$\Box$ \vspace{.1in} \begin{lem} \label{belem5} Let $u$ lie on $\sigma_j$ with $s_n/4 \leq |u| \leq 4s_n$. Then, with $d_k$ as in Lemma \ref{lemderivs}, there exists $v$ on $\sigma_j$ such that: \begin{equation} \label{vest1} |u| \leq |v| \leq |u| + s_n^{-N_3}; \quad |f(v ) - a_j| \leq | f(u) - a_j | ; \quad |f^{(k)}(v) | \leq k^k d_k s_n^{kN_3} |f(u)-a_j| . \end{equation} \end{lem} \textit{Proof.} Starting at $u$, follow $\sigma_j$ in the direction in which $|f(z) - a_j|$ decreases. 
Then $\sigma_j$ describes an arc $\gamma$ joining the circles $S(0, |u|)$ and $S(0, |u| + s_n^{-N_3})$, such that the first two inequalities of (\ref{vest1}) hold for all $v \in \gamma$. Since $f$ maps $D_j$ univalently onto $D(1, r_j)$, the inverse function $H$ of $f$ maps a proper sub-segment $I$ of the half-open line segment $J = [ f(u) , a_j ) $ onto $\gamma$. Assume that the last inequality of (\ref{vest1}) fails for all $v \in \gamma$. Then Lemma \ref{lemderivs} yields, on $I$, $$ |H'(w)| \leq k^{-1} s_n^{-N_3}|f(u)-a_j|^{-1/k} (r_j-|w-1|)^{1/k-1} . $$ Since $1$, $f(u)$ and $a_j$ are collinear, a contradiction arises via \begin{eqnarray*} s_n^{-N_3} \leq \left| \int_I H'( w ) d w \right| &\leq & \int_I k^{-1} s_n^{-N_3} |f(u)-a_j|^{-1/k} (r_j-|w-1|)^{1/k-1} \, |dw| \\ &< & \int_J k^{-1} s_n^{-N_3} |f(u)-a_j|^{-1/k} (r_j-|w-1|)^{1/k-1} \, |dw| \\ &= & \int_{|f(u)-1|}^{r_j} k^{-1} s_n^{-N_3} |f(u)-a_j|^{-1/k} (r_j-t)^{1/k-1} \, dt \\ &=& s_n^{-N_3} |f(u)-a_j|^{-1/k} (r_j-|f(u)-1|)^{1/k} = s_n^{-N_3} . \end{eqnarray*} \hfill$\Box$ \vspace{.1in} \begin{lem} \label{lemgronwall} Let $E_p$ be a component of $E \cap K_n $ which meets $L_n$, and suppose that there exists $j = j(p)$ such that $E_p$ contains $k$ points $\zeta_1, \ldots , \zeta_k \in D_j$ each with $ |f(\zeta_q) - a_j| \leq s_n^{-N_7} $. Assume further that $|\zeta_q - \zeta_{q'}| \geq s_n^{-N_3}$ for $q \neq q'$. Then $|f(z) - a_j | \leq s_n^{- N_2} $ for all $z \in E_p$, and $E_p \subseteq C(\varepsilon)$. \end{lem} \textit{Proof.} Let $M_0 = \sup \{ |f(z)| : z \in E_p \}$; then $M_0 < + \infty$ since poles of $f$ in $\mathbb C \setminus \{ 0 \}$ are poles of $G$, by (\ref{Gdefn}), and $|G(z)| \leq \eta $ on the closure of $E_p$. Choose $u_0 \in E_p$ with $|f(u_0) | \geq M_0/2$. There exists a polynomial $P$, of degree at most $k-1$, such that $$ f(z) = P(z) + \int_{u_0}^z \frac{(z-t)^{k-1}}{(k-1)!} f^{(k)}(t) \, dt \quad \hbox{on $E_p$. 
} $$ The length of the boundary of $E_p$ is at most $2 s_n^{N_1}$ by (\ref{lrgest}). Hence each $z \in E_p$ can be joined to $u_0$ by a path in the closure of $E_p$, of length at most $4s_n^{N_1}$, and so \begin{equation} \label{flikeP} |f(z) - P(z)| \leq M_0 \eta t_n^{-N_5} (2T_n)^{k-1} 4 s_n^{N_1} \leq M_0 s_n^{-N_4} , \end{equation} by (\ref{Gdefn}) and (\ref{tnTndef}). In particular this gives $| P(\zeta_q) - a_j | \leq (1+M_0) s_n^{-N_4} $ for each $q$. For $z$ in $E_p$, Lagrange's interpolation formula leads to \begin{eqnarray} \label{lagrange} |P(z) - a_j| &=& \left| \sum_{q=1}^k (P(\zeta_q)-a_j) \prod_{\nu \neq q} \frac{z-\zeta_\nu}{\zeta_q-\zeta_\nu} \right| \nonumber \\ &\leq & k (1+M_0) s_n^{-N_4} (2T_n)^{k-1} s_n^{(k-1)N_3} \leq (1+M_0) s_n^{-N_3} . \end{eqnarray} Setting $z = u_0$ in (\ref{lagrange}) then delivers $M_0 \leq 2 |P(u_0)| \leq 2 |a_j| + o(1 + M_0) $ and so $M_0 \leq 5$. Now combining (\ref{flikeP}) with (\ref{lagrange}) yields $|f(z) - a_j | \leq s_n^{- N_2} $ and hence $|f(z) - 1| < \varepsilon$ on $E_p$. Since $E_p$ meets $D_j \subseteq C(\varepsilon)$, this gives $E_p \subseteq C(\varepsilon)$. \hfill$\Box$ \vspace{.1in} For each $j \in \{ 1, \ldots, N_8 \}$ choose $\lambda = s_n^{N_2}$ points $u_{j,1}, \ldots , u_{j,\lambda}$ on $\sigma_j$, each with $s_n/2 \leq |u_{j,\kappa} | \leq s_n$ and such that $|u_{j,\kappa+1}| \geq |u_{j,\kappa} | + 2 s_n^{-N_3}$. Applying Lemma \ref{belem5} with $u = u_{j,\kappa} $ gives points $v_{j,\kappa} \in \sigma_j $ with $s_n/2 \leq |u_{j,\kappa}| \leq |v_{j,\kappa}| \leq |u_{j,\kappa} | + s_n^{-N_3} \leq 2s_n$ and, using (\ref{Gdefn}), (\ref{minest1}) and (\ref{vest1}), \begin{equation} \label{vjest1} |f(v_{j,\kappa}) - a_j|\leq s_n^{-N_7}, \quad |G(v_{j,\kappa})| \leq 2 |v_{j,\kappa} |^{N_5} | f^{(k)} (v_{j,\kappa})| \leq s_n^{-N_6} < \eta . 
\end{equation} These points $v_{j,\kappa}$ satisfy $|v_{j,\kappa+1}| \geq |v_{j,\kappa} | + s_n^{-N_3}$, and each lies in a component of $E \cap K_n $ which meets $L_n$. Since there are $s_n^{N_2}$ of these $v_{j,\kappa}$ for each $j$, but at most $s_n^{N_1}$ available components $E_p$ by Lemma \ref{lemnocomp}, it must be the case that for each $j$ there are at least $k$ points $v_{j,\kappa}$ lying in the same component $E_p$. Lemma \ref{lemgronwall} then implies that $E_p \subseteq C(\varepsilon)$ and $f(z) = a_j + o(1)$ on $E_p$. Thus for $j=1, \ldots, N_8$ the following exist: a component $C_j = E_{p_j} \subseteq C(\varepsilon) $ of $E \cap K_n $ which meets $L_n$ and on which $f(z) = a_j + o(1)$; a point $v_j \in C_j$ such that, by (\ref{vjest1}), \begin{equation} \label{vjest2} s_n/2 \leq |v_j| \leq 2s_n , \quad |G(v_{j})| \leq s_n^{-N_6} . \end{equation} Since $C_j \subseteq C(\varepsilon)$, the function $\log |1/G(z)|$ is subharmonic on $C_j$. Moreover, because $j' \neq j$ gives $f(z) \to a_{j'} \neq a_j $ as $z \to \infty $ on $\sigma_{j'}$, the $C_j$ are pairwise disjoint and none of them contains a circle $S(0, t)$ with $t \in [t_n, T_n]$. For $t > 0$ let $\phi_j(t)$ be the angular measure of $C_j \cap S(0, t)$. Then (\ref{tnTndef}) and \cite[p.116]{Tsuji1} give a harmonic measure estimate $$ \omega (v_j, C_j, S(0, T_n) \cup S(0, t_n) ) \leq c_1 \exp \left( - \pi \int_{2|v_j|}^{T_n/2} \frac{dt}{t \phi_j(t)} \right) + c_1 \exp \left( - \pi \int_{2t_n}^{|v_j|/2} \frac{dt}{t \phi_j(t)} \right) , $$ for $j=1, \ldots, N_8$, in which $c_1$ is a positive absolute constant. By Lemma \ref{belem6a} and (\ref{tnTndef}), there exists at least one $j$ for which $ \omega (v_j, C_j, S(0, T_n) \cup S(0, t_n) ) \leq 2c_1 s_n^{-N_7}$. 
For this choice of $j$ the two constants theorem \cite{Nev} delivers, using (\ref{tnTn2}), (\ref{vjest2}) and the fact that $|G(z)| = \eta $ on $\partial C_j \cap K_n$, $$ N_6 \log s_n \leq \log \frac1{|G(v_j)|} \leq \log \frac1\eta + 2c_1 s_n^{- N_7 + N_1 + 1 } , $$ a contradiction since $n$ is large. \hfill$\Box$ \vspace{.1in} \section{Proof of Theorem \ref{bekeylem2}} This is almost identical to the corresponding proof in \cite{BE}, but with Theorem \ref{thmds} standing in for the Denjoy-Carleman-Ahlfors theorem. Suppose that $f$, $k$ and $\alpha$ are as in the hypotheses but there exists $\varepsilon > 0$ such that in the neighbourhood $C( \varepsilon )$ of the singularity the function $f' f^{(k)}$ has finitely many zeros which are not $\alpha$-points of $f$: it may be assumed that there are no such zeros. On the other hand, because the singularity is indirect, $f$ must have infinitely many $\alpha$-points in $C(\varepsilon)$. Since $f^{(k)}/f$ has finite lower order, $f^{-1}$ cannot have infinitely many direct transcendental singularities over finite non-zero values, by Theorem~\ref{thmds}. Set $A(\varepsilon) = \{ w \in \mathbb C : 0 < | w- \alpha | < \varepsilon \}$ if $\alpha \in \mathbb C$, with $A(\varepsilon) = \{ w \in \mathbb C : |w| > 1/ \varepsilon \}$ if $\alpha = \infty $. In either case it may be assumed that $\varepsilon $ is so small that $A(\varepsilon) \subseteq \mathbb C \setminus \{ 0 \}$ and there is no $w$ in $A(\varepsilon)$ such that $f^{-1}$ has a direct transcendental singularity over~$w$. Take $z_0 \in C( \varepsilon )$, with $f(z_0) = w_0 \neq \alpha $, and let $g$ be that branch of $f^{-1}$ mapping $w_0 $ to $z_0$. 
If $g$ admits unrestricted analytic continuation in $A(\varepsilon)$ then, exactly as in \cite{BE}, the classification theorem from \cite[p.287]{Nev} shows that $z_0$ lies in a component $C_0$ of the set $\{ z \in \mathbb C : f(z) \in A( \varepsilon ) \cup \{ \alpha \} \}$ which contains at most one point $z$ with $f(z) = \alpha$, so that $ C(\varepsilon) \not \subseteq C_0$. But any $z_1 \in C( \varepsilon )$ can be joined to $z_0$ by a path $\lambda$ on which $f(z) \in A(\varepsilon)\cup \{ \alpha \}$, which gives $\lambda \subseteq C_0$ and hence $ C(\varepsilon) \subseteq C_0$, a contradiction. Hence there exists a path $\gamma : [0, 1] \to A(\varepsilon)$, starting at $w_0$, such that analytic continuation of $g$ along $\gamma$ is not possible. This gives rise to $S \in [0, 1]$ such that, as $t \to S-$, the image $z = g( \gamma (t) )$ either tends to infinity or to a zero $z_2 \in C( \varepsilon) $ of $f'$ with $f(z_2) = \gamma(S) \in A(\varepsilon )$, the latter impossible by assumption. It follows that setting $z = \sigma(t) = g( \gamma (t) )$, for $ 0 \leq t < S$, defines a path $\sigma$ tending to infinity in $C( \varepsilon )$, on which $f(z) \to w_1 \in A(\varepsilon)$ as $z \to \infty$. But then there exists $\delta > 0$ such that an unbounded subpath of $\sigma $ lies in a component $C' \subseteq C( \varepsilon)$ of the set $\{ z \in \mathbb C : | f(z) - w_1 | < \delta \}$, with $\delta $ so small that $f'f^{(k)}$ has no zeros on $C'$. Further, the singularity over $w_1$ must be indirect, since direct singularities over values in $A(\varepsilon)$ have been excluded, and this contradicts Proposition~\ref{bekeylem}. \hfill$\Box$ \vspace{.1in} \section{A result needed for Theorem \ref{thm1aa}} \begin{thm} [ \cite{LRW}, Theorem 1] \label{thmlrw} Let $u$ be a subharmonic function in the plane such that $B(r) = \sup \{ u(z): |z| = r \} $ satisfies $\lim_{r \to \infty} ( \log r)^{-1} B(r) = + \infty $. 
Then there exist $\delta_0 > 0$ and a simple path $\gamma : [0, \infty) \to \mathbb C$ with $\gamma(t) \to \infty$ as $t \to + \infty$ and the following properties: \begin{equation} \label{lrw1} (i) \quad \lim_{z \to \infty, z \in \gamma} \frac{u(z)}{\log |z|} = + \infty; \quad (ii) \quad \text{if $\lambda > 0$ then} \quad \int_\gamma \exp( - \lambda u(z)) \, |dz| < \infty; \end{equation} (iii) if $z = \gamma (t) $ then $u(\gamma(s)) \geq \delta _0 u(z) $ for all $s \geq t$. \end{thm} Conclusion (iii) and the fact that $\gamma$ may be chosen to be simple are not stated in \cite[Theorem~1]{LRW}, but both are implicit in the proof. Here $\gamma = \gamma_1 \cup \gamma_2 \cup \ldots $ is constructed in \cite[Section 3]{LRW} so that, for some fixed $\delta_1 \in (0, 1)$, each $\gamma_k :[k-1, k] \to \mathbb C$ is a simple path from $a_k \in D_k$ to $a_{k+1} \in \partial D_k$, where $D_k$ is the component of $\{ z \in \mathbb C : \, u(z) < (1-\delta_1)^{-1} u(a_k) \}$ containing $a_1$. By \cite[(3.2) and (3.3)]{LRW}, the $\gamma_k$ are such that $0 < \delta_1 u(a_k) \leq u(z) < (1-\delta_1)^{-1} u(a_k) $ on $\lambda_k = \gamma_k \setminus \{ a_{k+1} \} $ and $u(a_{k+1}) \geq (1-\delta_1)^{-1} u(a_k) > u(a_k) $. Hence if $z = \gamma (t) \in \lambda_k $ then $u( \gamma(s)) \geq \delta_1 u(a_k) \geq \delta_1 (1- \delta_1 ) u(\gamma(t)) $ for all $s \geq t$. If the whole path $\gamma$ is not simple, take the least $k \geq 2$ such that $\Gamma_k = \gamma_1 \cup \ldots \cup \gamma_k$ is not simple. Then there exists a maximal $t \in [k-1, k]$ such that $u_k = \gamma_k(t) $ lies in the compact set $\Gamma_{k-1}$, and $t < k$ since $\gamma_k(k) = a_{k+1} \in \partial D_k$. Replacing $\Gamma_k$ by the part of $\Gamma_{k-1}$ from $a_1$ to $u_k$, followed by the part of $\gamma_k$ from $u_k$ to $a_{k+1}$, does not affect conclusions (i), (ii) and (iii), and the argument may then be repeated. \hfill$\Box$ \vspace{.1in} Theorem \ref{thmlrw} leads to the following result. 
\begin{prop} \label{dtsprop} Let $N \in \mathbb N$ and let $A$ be a transcendental meromorphic function in the plane such that the inverse function of $A$ has a direct transcendental singularity over $0$. Then there exist a path $\gamma$ tending to infinity in $\mathbb C$ and linearly independent solutions $U$, $V$ of \begin{equation} \label{1} w'' + A(z) w = 0 \end{equation} on a simply connected domain containing $\gamma$, such that $U$ and $V$ satisfy, as $z \to \infty$ on $\gamma$, \begin{equation} \label{geqn1} U(z) = z + \frac{O(1)}{z^N} , \quad U'(z) = 1 + \frac{O(1)}{z^N} , \quad V(z) = 1 + \frac{O(1)}{z^N} , \quad V'(z) = \frac{O(1)}{z^N} . \end{equation} \end{prop} To prove Proposition \ref{dtsprop}, observe first that, as in the proof of Theorem \ref{thmds}, there exist a small positive $\delta $ and a non-empty component $D $ of $\{ z \in \mathbb C : |A(z)| < \delta \}$ such that $A(z) \not = 0$ on $D$, as well as a non-constant subharmonic function $u$ on $\mathbb C$ given by $$ u(z) = \log \left| \frac{\delta }{ A(z) } \right| \quad (z \in D ), \quad u(z) = 0 \quad ( z \not \in D). $$ Then $u$ satisfies the hypotheses of Theorem \ref{thmlrw}, by \cite[Theorem 2.1]{BRS}, and so there exists a path $\gamma : [0, \infty) \to D$ as in conclusions (i), (ii) and (iii). In particular, (iii) implies that \begin{equation} \label{maxAest} \text{if $z = \gamma (t) $ then $|A(\gamma(s))| \leq \delta^{1- \delta _0 } |A(z)|^{\delta_0} $ for all $s \geq t$.} \end{equation} Choose a simply connected domain $\Omega$ on which $A$ has no poles, such that $\gamma \subseteq \Omega$. By (\ref{lrw1}) it may be assumed that $|A(t)|^{-1/4} \geq |t|^2 \geq 4$ on $\gamma$, and that \begin{equation} \label{lrw2} \int_{\gamma} |t|^2 |A(t)| \, |dt| \leq \int_{\gamma} |t|^2 |A(t)|^{1/2} \, |dt| \leq \int_{\gamma} |A(t)|^{1/4} \, |dt| < \frac14 . \end{equation} \begin{lem} \label{lemgron1} Let $v$ be a solution of (\ref{1}) on $\Omega$. 
Then $v(z) = O(|z|)$ as $z \to \infty$ on $\gamma$. \end{lem} \textit{Proof.} This is a standard argument along the lines of Gronwall's lemma. Let $y_0$ be the starting point of $\gamma$. Differentiating twice shows that there exist constants $a_1, b_1$ such that, on $\Omega$, $$ v(z) = a_1 z + b_1 - \int_{y_0}^z (z-t) A(t) v(t) \, dt . $$ If $\phi(z) = v(z)/z$ is unbounded on $\gamma$ there exist $\zeta_n \to \infty$ on $\gamma$ such that $ |\phi( \zeta_n )| \to \infty$ and $|\phi(t) | \leq | \phi( \zeta_n )|$ on the part of $\gamma$ joining $y_0$ to $\zeta_n$. If $n$ is large then (\ref{lrw2}) delivers a contradiction via $$ | \phi( \zeta_n )| \leq |a_1| + |b_1| + | \phi( \zeta_n )| \int_{y_0}^{\zeta_n} (1+|t|) | t A(t)| \, |dt| \leq |a_1| + |b_1| + \frac{| \phi( \zeta_n )|}2 . $$ \hfill$\Box$ \vspace{.1in} \begin{lem} \label{lemnewLIsolns} (a) Let $ N \in \mathbb N$. Then on $\gamma$ every solution $v_j$ of (\ref{1}) has \begin{equation} \label{newvrep} v_j(z) = \alpha_j z + \beta_j + \int_z^\infty (z-t) A(t) v_j(t) \, dt , \quad \alpha_j, \beta_j \in \mathbb C , \end{equation} the integration being from $z$ to infinity along $\gamma$. Moreover, $v_j$ satisfies, as $z \to \infty$ on $\gamma$, \begin{equation} \label{bettervest} v_j(z) - \alpha_j z - \beta_j = \frac{O(1)}{z^N} , \quad v_j'(z) - \alpha_j = \frac{O(1)}{z^N} . \end{equation} (b) If $v_1, v_2$ are linearly independent solutions of (\ref{1}) on $\Omega$ then $|\alpha_1| + |\alpha_2| > 0$ in (\ref{newvrep}), and if $\alpha_2 = 0$ then $\beta_2 \neq 0$. \end{lem} \textit{Proof.} First, (\ref{newvrep}) follows from (\ref{lrw2}) and Lemma \ref{lemgron1}. 
Next, (\ref{lrw1}), (\ref{maxAest}), (\ref{lrw2}), (\ref{newvrep}) and Lemma~\ref{lemgron1} imply that, as $z \to \infty$ on $\gamma$, \begin{eqnarray*} | v_j(z)- \alpha_j z - \beta_j | &\leq& |z| \int_{z}^\infty (1+|t|) |A(t)| O( |t| ) \, |dt|\\ &\leq& |z| \, \delta^{(1- \delta _0 )/2} |A(z)|^{\delta_0/2 } \int_{z}^\infty (1+|t|) |A(t)|^{1/2} O( |t| ) \, |dt| = \frac{O(1)}{z^N} ,\\ |v_j'(z) - \alpha_j| &=& \left| \int_{z}^\infty A(t) v_j(t) \, dt \right| \\ &\leq& \delta^{(1- \delta _0 )/2} |A(z)|^{\delta_0/2 } \int_{z}^\infty |A(t)|^{1/2} O( |t| ) \, |dt| = \frac{O(1)}{z^N} . \end{eqnarray*} Finally, suppose that $v_1, v_2$ are linearly independent solutions of (\ref{1}) on $\Omega$ but the conclusion of (b) fails. Then $ v_1(z) v_2'(z) - v_1'(z) v_2(z) \to 0$ as $z \to \infty$ on $\gamma$, by (\ref{bettervest}), contradicting the fact that $W(v_1, v_2)$ is a non-zero constant by Abel's identity. \hfill$\Box$ \vspace{.1in} Now fix linearly independent solutions $v_1, v_2$ of (\ref{1}) on $\Omega$. Then $\alpha_1, \alpha_2$ cannot both vanish in (\ref{newvrep}). On the other hand, it is possible to ensure that one of $\alpha_1, \alpha_2$ is $0$, by otherwise considering $\alpha_2 v_1 - \alpha_1 v_2$. Hence it may be assumed that $\alpha_1 = 1$, while $\alpha_2 =0$ and $\beta_2 = 1$. Now write $U = v_1$ and $V = v_2$, so that Lemma \ref{lemnewLIsolns} gives (\ref{geqn1}). \hfill$\Box$ \vspace{.1in} \section{Proof of Theorem \ref{thm1aa}}\label{WV} Assume that $f$ and $S_f$ are as in the hypotheses but that the inverse function of $S_f$ has a direct transcendental singularity over $0$. Then evidently so has that of $A = S_f/2$, and it is well known that (\ref{Schdef}) implies that $f$ is locally the quotient of linearly independent solutions of (\ref{1}). Now Proposition \ref{dtsprop} gives linearly independent solutions $U$, $V$ of (\ref{1}) satisfying (\ref{geqn1}) on a path $\gamma $ tending to infinity. 
Moreover, $h = U/V $ has the form $h = T \circ f$, for some M\"obius transformation $T$, and so $h \in \mathcal{S}$, whereas $h(z) \sim z$ and $z h'(z)/h(z) = O(1)$ on $\gamma$, contradicting (\ref{Sineq}). \hfill$\Box$ \vspace{.1in} {\footnotesize
\section{Introduction} \indent Since the original discovery of black hole radiation by Hawking \cite{1}, studies on this topic have continued. There are many different methods for the derivation of Hawking radiation \cite{2,3,4,5,6,7,8,9}. In \cite{10,11}, a semiclassical method for the derivation of Hawking radiation was formulated by Parikh and Wilczek based on the quantum tunneling picture. In such a method, the radiated particles of a black hole are treated as $s$-waves.\footnote{~ This is reasonable because for an observer at infinity, the radiation of a black hole is spherically symmetric, no matter whether the black hole is rotating or not.} When a particle is radiated from the black hole horizon, it tunnels through a barrier that is made by the tunneling particle itself due to the horizon's contraction \cite{10,11}. In the WKB approximation, the tunneling rate of an $s$-wave from inside to outside the black hole horizon is given by $$ \Gamma=\Gamma_{0}\exp(-2\mbox{Im}{\cal I}) ~. \eqno{(1)} $$ Here, ${\cal I}$ is the action of the tunneling particle and $\Gamma_{0}$ is a normalization factor. On the other hand, the radiation of a black hole classically obeys a Boltzmann distribution, so the emission rate of a particle of energy $E$ from a black hole horizon can be expressed as $$ \Gamma=\Gamma_{0}\exp(-\beta E) ~, \eqno{(2)} $$ where $\beta=2\pi/\kappa$ and $\kappa$ is the surface gravity of the horizon. Comparing (2) with (1), the Hawking temperature of a black hole can be derived. After the original work of Parikh and Wilczek, many developments of this topic have been carried out \cite{12,13,14,15}, and the method has been applied to derive the Hawking radiation of many different types of black holes \cite{16,17,18,19,20,21,22,23,24,25,26,27}. In this paper, we study the Hawking radiation of general four-dimensional rotating black holes from the tunneling approach. 
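Throughout we work in units where $\hbar=c=k_{B}=1$; comparing (2) with (1) then amounts to the identification $$ 2\mbox{Im}{\cal I}=\beta E ~, ~~~~~~~~ T=\frac{1}{\beta}=\frac{\kappa}{2\pi} ~, $$ which is the standard expression for the Hawking temperature in terms of the surface gravity of the horizon. 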
We use the null-geodesic method of Parikh and Wilczek \cite{10,11} to calculate the action of a tunneling particle. In \cite{21}, the Hawking temperatures of the Kerr and Kerr-Newman black holes were derived from the tunneling approach using dragging coordinate systems. In such a coordinate system, the spacetime of a four-dimensional rotating black hole is contracted to a three-dimensional slice; thus the topology of the spacetime of a rotating black hole has to be changed in order to use the method of \cite{21}. In order to preserve the spacetime topology of a rotating black hole, we instead choose a reference system that co-rotates with the event horizon, which eliminates the motion in the $\phi$ degree of freedom of a tunneling particle. We obtain that for a general four-dimensional rotating black hole, the thermal radiation temperature derived from the tunneling approach is in accordance with the Hawking temperature derived from black hole thermodynamics. These contents are given in Section 2. In Section 3, we give the explicit result for the Hawking temperature of the Kerr-Newman-AdS black hole from the tunneling approach. In Section 4, we discuss some related problems. \section{Hawking temperature of four-dimensional rotating black holes from tunneling} \indent The metric of a four-dimensional spherically symmetric black hole can generally be expressed as $$ ds^{2}=-A(r)dt^{2}+B(r)dr^{2}+r^{2}d\Omega^{2} ~. \eqno{(3)} $$ Following \cite{10,11}, the imaginary part of the action of a tunneling particle in terms of an $s$-wave can be calculated from $$ \mbox{Im}{\cal I}=\mbox{Im}\int_{r_{h}(M)}^{r_{h}(M-E)}p_{r} ~ dr =\mbox{Im}\int_{r_{h}(M)}^{r_{h}(M-E)}\int_{0}^{p_{r}} dp_{r}^{\prime} ~ dr ~, \eqno{(4)} $$ where $r_{h}$ is the radius of the outer horizon, $M$ is the total mass of the black hole, $M-E$ is the total mass of the black hole after the particle is emitted, and $E$ is the energy of the tunneling particle. 
Making use of Hamilton's equation $$ \dot{r}=\frac{dH}{dp_{r}}=\frac{d(M-\omega)}{dp_{r}} ~, \eqno{(5)} $$ we can write $$ dp_{r}=\frac{d(M-\omega)}{\dot{r}} ~. \eqno{(6)} $$ Substituting (6) into (4), we have $$ \mbox{Im}{\cal I}=\mbox{Im}\int_{r_{h}(M)}^{r_{h}(M-E)} \int_{0}^{E}\frac{d(M-\omega)}{\dot{r}} ~ dr = \mbox{Im}\int_{0}^{E}\int^{r_{h}(M)}_{r_{h}(M-E)} \frac{dr}{\dot{r}} ~ d\omega ~. \eqno{(7)} $$ Usually, the Hawking temperature of a black hole is very small, so massless particles make up the main part of the radiation spectrum. A massless tunneling particle in terms of an $s$-wave moves along a radial null geodesic. After transforming the metric (3) to the Painlev{\'e} form, $\dot{r}$ can be obtained from $ds^{2}=0$ \cite{10,11}. The metric of a four-dimensional rotating black hole can generally be cast in the form $$ ds^{2}=-g_{tt}(r,\theta)dt^{2}+g_{rr}(r,\theta)dr^{2}+ g_{\theta\theta}(r,\theta)d\theta^{2}+ g_{\phi\phi}(r,\theta)d\phi^{2}-2g_{t\phi}(r,\theta)dtd\phi ~. \eqno{(8)} $$ For the tunneling from a rotating black hole, we can still use the $s$-wave approximation, because for an observer at infinity the radiation of a rotating black hole is still spherically symmetric. However, when a particle tunnels through the horizon of a rotating black hole, it is dragged by the rotation of the black hole. Thus, a tunneling particle has motion in the $\phi$ degree of freedom, i.e. $d\phi\neq 0$, which means that in the calculation of the action of a tunneling particle in formula (4), we also need to consider the contribution to the action that comes from the motion in the $\phi$ degree of freedom, as can be seen in \cite{21}. Meanwhile, in the equation of the null geodesic we cannot set $d\phi=0$; thus $\dot{r}$ cannot be obtained from $ds^{2}=0$ conveniently. 
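Before turning to the rotating case, it is instructive to recall how (7) works in the simplest spherically symmetric example, the Schwarzschild black hole treated in \cite{10,11}. There the Painlev{\'e} form of the metric gives $\dot{r}=1-\sqrt{2(M-\omega)/r}$ for an outgoing massless particle, and deforming the $r$ integration around the pole at $r=2(M-\omega)$ yields $$ \mbox{Im}{\cal I}=\mbox{Im}\int_{0}^{E}\int \frac{dr}{1-\sqrt{2(M-\omega)/r}} ~ d\omega =\int_{0}^{E}4\pi(M-\omega) ~ d\omega =4\pi E\Big(M-\frac{E}{2}\Big) ~, $$ so that to first order in $E$ one has $\Gamma\sim\exp(-8\pi ME)$ and hence $T=1/(8\pi M)$, the Schwarzschild Hawking temperature. 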
In order to eliminate the motion in the $\phi$ degree of freedom of a tunneling particle, we can choose a reference system that co-rotates with the black hole horizon. This can be realized through the rotating coordinate transformation $$ \phi^{\prime}=\phi-\Omega_{h}t ~~~~~~~~ \mbox{or} ~~~~~~~~ \phi=\phi^{\prime}+\Omega_{h}t ~, \eqno{(9)} $$ where $\Omega_{h}$ is the angular velocity of the event horizon of a rotating black hole, which is a constant defined by $$ \Omega_{h}=\frac{g_{t\phi}}{g_{\phi\phi}}\bigg\vert_{r=r_{h}} ~. \eqno{(10)} $$ In (10) and in the following, we use $r_{h}$ to represent the radius of the event horizon of a rotating black hole. In such a co-rotating reference system, observers located at the horizon cannot observe the rotation of the black hole; they find that the angular velocity $\Omega_{h}^{\prime}$ of the black hole is zero. Because the tunneling of a particle takes place at the horizon, the particle is not dragged by the rotation of the black hole when observed from such a co-rotating reference system. Therefore we have $d\phi^{\prime}=0$ for a tunneling particle, i.e., a tunneling particle has no motion in the $\phi^{\prime}$ degree of freedom. This enables us to use equation (4) to calculate the action. Meanwhile, in obtaining the expression for $\dot{r}$ from the null-geodesic method, we can set $d\phi^{\prime}=0$. Under the coordinate transformation (9), the metric (8) takes the form $$ ds^{2}=-G_{tt}(r,\theta)dt^{2}+g_{rr}(r,\theta)dr^{2}+ g_{\theta\theta}(r,\theta)d\theta^{2}+ g_{\phi\phi}(r,\theta)d\phi^{\prime ~ 2}- 2g_{t\phi}^{\prime}(r,\theta)dtd\phi^{\prime} ~, \eqno{(11)} $$ where \setcounter{equation}{11} \begin{eqnarray} G_{tt} & = & g_{tt}+2g_{t\phi}\Omega_{h}-g_{\phi\phi} \Omega_{h}^{2} ~, \\ g_{t\phi}^{\prime} & = & g_{t\phi}-\Omega_{h}g_{\phi\phi} ~. \end{eqnarray} Because of (10), we have $$ g_{t\phi}^{\prime}\vert_{r=r_{h}}=0 ~. 
\eqno{(14)} $$ This also indicates $\Omega_{h}^{\prime}= g_{t\phi}^{\prime}/g_{\phi\phi}\vert_{r=r_{h}}=0$. On the other hand, according to (A.5), we have $$ G_{tt}\big\vert_{r=r_{h}}=0 ~. \eqno{(15)}$$ The horizon's radius of the metric (11) is determined by $g^{rr}\vert_{r=r_{h}}=g_{rr}^{-1}\vert_{r=r_{h}}=0$, which is the same equation that determines the horizon's radius of the metric (8); thus the horizon's radius of a rotating black hole is not changed under the coordinate transformation (9). On the other hand, because of (15), the horizon's radius of the metric (11) is also determined by (15). Because $g_{rr}$ in the metric (11) is singular on the horizon, in order to calculate the action of a tunneling particle we first need to eliminate this coordinate singularity. This can be realized through the Painlev{\'e} coordinate transformation \cite{10,11}. We use $T$ to represent the Painlev{\'e} time coordinate and make a coordinate transformation $$ dt=dT-\sqrt{\frac{g_{rr}(r,\theta_{0})-1} {G_{tt}(r,\theta_{0})}}dr \eqno{(16)}$$ to the metric (11). In (16), as in the study of tunneling from the Kerr-Newman black hole in \cite{15}, we have set $\theta$ to a constant in order to make the coordinate transformation integrable. This is reasonable because a tunneling particle in terms of an $s$-wave satisfies $d\theta=0$, so we can consider the tunneling of a particle at a constant angle $\theta_{0}$. As we will see, the physical result does not depend on the angle $\theta_{0}$. The explicit integral of (16) is not needed here. 
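As a quick numerical sanity check that a transformation of the form (16) removes the coordinate singularity in the $(t,r)$ sector, one can expand $-G_{tt}\,dt^{2}+g_{rr}\,dr^{2}$ with $dt=dT-\sqrt{(g_{rr}-1)/G_{tt}}\,dr$ and verify that the resulting $dr^{2}$ coefficient equals one. The following sketch (not part of the derivation) checks this identity for generic positive values standing in for $G_{tt}$ and $g_{rr}$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Generic positive stand-ins for G_tt > 0 and g_rr > 1 off the horizon.
for _ in range(100):
    G = rng.uniform(0.1, 5.0)   # G_tt(r, theta_0)
    g = rng.uniform(1.0, 10.0)  # g_rr(r, theta_0)
    s = np.sqrt((g - 1.0) / G)  # dt = dT - s dr, as in (16)

    # Expand -G (dT - s dr)^2 + g dr^2 and read off the coefficients.
    coeff_dTdr = 2.0 * G * s    # cross term, 2 sqrt(G_tt) sqrt(g_rr - 1)
    coeff_dr2 = -G * s**2 + g   # dr^2 term

    assert np.isclose(coeff_dTdr, 2.0 * np.sqrt(G * (g - 1.0)))
    assert np.isclose(coeff_dr2, 1.0)  # the g_rr blow-up has cancelled
```

The $dr^{2}$ coefficient is identically $1$, so the radial part of the metric stays regular at the horizon, as exploited in the next step.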
Under the above coordinate transformation, for the metric (11), we have \setcounter{equation}{16} \begin{eqnarray} ds^{2} & = & -G_{tt}(r,\theta_{0})dT^{2}+2\sqrt{G_{tt} (r,\theta_{0})}\sqrt{g_{rr}(r,\theta_{0})-1} ~ drdT+dr^{2} +g_{\phi\phi}(r,\theta_{0})d\phi^{\prime ~ 2} \nonumber \\ & ~ & -2g_{t\phi}^{\prime}(r,\theta_{0})d\phi^{\prime} \bigg(dT-\sqrt{\frac{g_{rr}(r,\theta_{0})-1} {G_{tt}(r,\theta_{0})}}dr\bigg) ~. \end{eqnarray} The horizon's radius for the metric (17) is determined by $G_{tt}\big\vert_{r=r_{h}}=0$; thus the horizon's radius of the metric (11) is not changed by the coordinate transformation (16). As mentioned above, a tunneling particle in the co-rotating reference system satisfies $d\phi^{\prime}=0$. Thus we have $$ ds^{2}=-G_{tt}(r,\theta_{0})dT^{2}+2\sqrt{G_{tt}(r,\theta_{0})} \sqrt{g_{rr}(r,\theta_{0})-1} ~ drdT+dr^{2} ~. \eqno{(18)} $$ Supposing that the mass of the tunneling particle is zero, its motion is determined by the null-geodesic equation $ds^{2}=0$. Solving this equation, we obtain $$ \dot{r}=\sqrt{G_{tt}(r,\theta_{0})\cdot g_{rr}(r,\theta_{0})} \bigg(\pm 1-\sqrt{1-\frac{1}{g_{rr}(r,\theta_{0})}} ~ \bigg) ~. \eqno{(19)} $$ Because $G_{tt}\big\vert_{r=r_{h}}=0$ and $g_{rr}^{-1}\vert_{r=r_{h}}=0$, with $r_{h}$ a simple zero of both $G_{tt}$ and $g_{rr}^{-1}$, the product $G_{tt}\cdot g_{rr}$ is regular at the horizon. The plus and minus signs in (19) correspond to outgoing and ingoing radial null geodesics respectively. For an outgoing tunneling particle, $\dot{r}$ is positive, so we have $$ \dot{r}=\sqrt{G_{tt}(r,\theta_{0})\cdot g_{rr}(r,\theta_{0})} \bigg(1-\sqrt{1-\frac{1}{g_{rr}(r,\theta_{0})}} ~ \bigg) ~. \eqno{(20)} $$ Substituting (20) into (7), we obtain $$ \mbox{Im}{\cal I}=\mbox{Im}\int_{0}^{E} \int^{r_{h}(M)}_{r_{h}(M-E)}\frac{dr} {\sqrt{G_{tt}(r,\theta_{0})\cdot g_{rr}(r,\theta_{0})} \Big(1-\sqrt{1-\frac{1}{g_{rr}(r,\theta_{0})}} ~ \Big)} ~ d\omega ~. 
\eqno{(21)} $$ Multiplying both the numerator and the denominator of the integrand by $1+\sqrt{1-\frac{1}{g_{rr}(r,\theta_{0})}}$, we obtain $$ \mbox{Im}{\cal I}=\mbox{Im}\int_{0}^{E} \int^{r_{h}(M)}_{r_{h}(M-E)}\frac {1+\sqrt{1-\frac{1}{g_{rr}(r,\theta_{0})}}} {\sqrt{G_{tt}(r,\theta_{0})\cdot g_{rr}(r,\theta_{0})} ~ \frac{1}{g_{rr}(r,\theta_{0})}} ~ drd\omega ~. \eqno{(22)} $$ For the metric of a four-dimensional rotating black hole, because $g_{rr}$ is singular on the horizon, we can generally write $g_{rr}$ in the form $$ g_{rr}(r,\theta)=\frac{C(r,\theta)}{r-r_{h}} ~, \eqno{(23)} $$ where $C(r,\theta)$ is a function regular on the horizon. Substituting (23) into (22), we have $$ \mbox{Im}{\cal I}=\mbox{Im}\int_{0}^{E} \int^{r_{h}(M)}_{r_{h}(M-E)}\frac {1+\sqrt{1-\frac{r-r_{h}}{C(r,\theta_{0})}}} {\sqrt{G_{tt}(r,\theta_{0})\cdot g_{rr}(r,\theta_{0})} ~ \frac{r-r_{h}}{C(r,\theta_{0})}} ~ drd\omega ~. \eqno{(24)} $$ In (24), $r_{h}$ is a simple pole of the integrand. Adding a small imaginary part to the variable $r$ and letting the integration path round the pole in a semicircle, the integral over $r$ can be evaluated, which yields $$ \mbox{Im}{\cal I}=2\pi\int_{0}^{E}\frac{C(r_{h},\theta_{0})} {\sqrt{G_{tt}(r_{h},\theta_{0})\cdot g_{rr}(r_{h},\theta_{0})}} ~ d\omega ~. \eqno{(25)} $$ It is reasonable to suppose that the energy $E$ of the tunneling particle is far less than the total mass $M$ of the black hole, i.e. $E\ll M$; thus in (25) the integrand can be treated as a constant. Therefore we obtain $$ \mbox{Im}{\cal I}=2\pi E\frac{C(r_{h},\theta_{0}) \sqrt{g^{rr}(r_{h},\theta_{0})}} {\sqrt{G_{tt}(r_{h},\theta_{0})}} ~. 
\eqno{(26)} $$ Because $G_{tt}(r_{h},\theta)=0$ and $g^{rr}(r_{h},\theta)=0$, near the horizon we can expand $G_{tt}(r,\theta_{0})$ and $g^{rr}(r,\theta_{0})$ in the form $$ G_{tt}(r,\theta_{0})=G_{tt}^{\prime}(r_{h},\theta_{0}) (r-r_{h})+\ldots ~, \eqno{(27)} $$ $$ g^{rr}(r,\theta_{0})=g^{rr\prime}(r_{h},\theta_{0}) (r-r_{h})+\ldots ~, \eqno{(28)} $$ where in (27) and (28), $\ldots$ represents higher-order terms of $(r-r_{h})$. From (23), we have $$ g^{rr\prime}(r_{h},\theta_{0})=\frac{1}{C(r_{h},\theta_{0})} ~. \eqno{(29)} $$ Substituting (27)--(29) into (26), we obtain $$ \mbox{Im}{\cal I}=\frac{2\pi E}{\sqrt{G_{tt}^{\prime} (r_{h},\theta_{0})g^{rr\prime}(r_{h},\theta_{0})}} ~. \eqno{(30)} $$ Substituting (30) into (1), we see that the tunneling rate can be cast in the form of (2), the Boltzmann distribution, and we obtain $$ \mbox{Im}{\cal I}=\frac{\pi E}{\kappa(r_{h})} ~. \eqno{(31)} $$ Comparing (31) with (30), we obtain $$ \kappa(r_{h})=\frac{\sqrt{G_{tt}^{\prime}(r_{h},\theta_{0}) g^{rr\prime}(r_{h},\theta_{0})}}{2} ~. \eqno{(32)} $$ Thus we obtain the Hawking temperature of a four-dimensional rotating black hole $$ T_{H}=\frac{\sqrt{G_{tt}^{\prime}(r_{h},\theta_{0}) g^{rr\prime}(r_{h},\theta_{0})}}{4\pi} ~. \eqno{(33)} $$ Equation (33) is derived from the tunneling approach. On the other hand, in Appendix A we have derived a formula (A.12) for the surface gravity of a four-dimensional rotating black hole from black hole thermodynamics, which is given by $$ \kappa(r_{h})=\lim_{r\rightarrow r_{h}} \frac{\partial_{r}\sqrt{G_{tt}}}{\sqrt{g_{rr}}}= \lim_{r\rightarrow r_{h}} \frac{\partial_{r}G_{tt}}{2\sqrt{G_{tt}\cdot g_{rr}}} ~. \eqno{(34)}$$ From black hole thermodynamics \cite{28,29}, we know that $\kappa(r_{h})$ is a constant on the horizon, therefore we can evaluate it at an arbitrary angle $\theta_{0}$. 
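As a sanity check on (32) and (33), one can apply them to the simplest special case, the Schwarzschild metric (non-rotating, $\Omega_{h}=0$, $G_{tt}=g_{tt}=1-2M/r$, $g^{rr}=1-2M/r$), for which the known results are $\kappa=1/(4M)$ and $T_{H}=1/(8\pi M)$. A minimal numerical sketch, with the expansion coefficients of (27)--(28) approximated by finite differences:

```python
import numpy as np

M = 1.0          # black hole mass (geometric units)
r_h = 2.0 * M    # horizon radius

# Schwarzschild as the non-rotating special case: Omega_h = 0, so
# G_tt = g_tt = 1 - 2M/r, and g^rr = 1/g_rr = 1 - 2M/r.
G_tt = lambda r: 1.0 - 2.0 * M / r
g_rr_inv = G_tt  # here g^rr coincides with G_tt

# First-order expansion coefficients at the horizon, eqs. (27)-(28),
# obtained by one-sided finite differences.
h = 1e-7
Gp = (G_tt(r_h + h) - G_tt(r_h)) / h
gp = (g_rr_inv(r_h + h) - g_rr_inv(r_h)) / h

kappa = np.sqrt(Gp * gp) / 2.0          # eq. (32)
T_H = np.sqrt(Gp * gp) / (4.0 * np.pi)  # eq. (33)

assert abs(kappa - 1.0 / (4.0 * M)) < 1e-6   # known kappa = 1/(4M)
assert abs(T_H - 1.0 / (8.0 * np.pi * M)) < 1e-6
```

Both assertions pass, reproducing the standard Schwarzschild surface gravity and temperature from the tunneling formulas.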
Substituting (27) and (28) into (34), we obtain $$ \kappa(r_{h})=\frac{\sqrt{G_{tt}^{\prime}(r_{h},\theta_{0}) g^{rr\prime}(r_{h},\theta_{0})}}{2} ~. \eqno{(35)} $$ Comparing (32) with (35), we see that they are equivalent. Because $\kappa(r_{h})$ is a constant on the horizon, the explicit result for the surface gravity of a rotating black hole obtained from (35) does not depend on the parameter $\theta_{0}$. This means that in (32) and (33), the explicit results for the surface gravity and Hawking temperature of a rotating black hole do not depend on the parameter $\theta_{0}$ either. \section{Hawking temperature of Kerr-Newman-AdS black hole} \indent In this section, we derive the Hawking temperature of the Kerr-Newman-AdS black hole using (33) of Section 2. In the Boyer-Lindquist coordinates, the metric of the Kerr-Newman-AdS black hole is given by \cite{30} \setcounter{equation}{35} \begin{eqnarray} ds^{2}= & - & \!\! \frac{1}{\Sigma}[\Delta_{r}- \Delta_{\theta}a^{2}\sin^{2}\theta]dt^{2}+ \frac{\Sigma}{\Delta_{r}}dr^{2}+ \frac{\Sigma}{\Delta_{\theta}}d\theta^{2}+ \frac{1}{\Sigma\Xi^{2}}[\Delta_{\theta}(r^{2}+a^{2})^{2} \nonumber \\ & ~ & -\Delta_{r}a^{2}\sin^{2}\theta]\sin^{2}\theta d\phi^{2} -\frac{2a}{\Sigma\Xi}[\Delta_{\theta}(r^{2}+a^{2})- \Delta_{r}]\sin^{2}\theta dtd\phi ~, \end{eqnarray} where $$ \Sigma=r^{2}+a^{2}\cos^{2}\theta ~, ~~~~~~ \Xi=1+\frac{1}{3}\Lambda a^{2} ~, \eqno{(37)} $$ $$ \Delta_{\theta}=1+\frac{1}{3}\Lambda a^{2}\cos^{2}\theta ~, ~~~~~~ \Delta_{r}=(r^{2}+a^{2}) \big(1-\frac{1}{3}\Lambda r^{2}\big)-2Mr+Q^{2} ~, \eqno{(38)} $$ and $\Lambda$ is the cosmological constant, with $\Lambda<0$. 
The horizons of the metric (36) are determined by \setcounter{equation}{38} \begin{eqnarray} \Delta_{r} & = & (r^{2}+a^{2})\big(1-\frac{1}{3} \Lambda r^{2}\big)-2Mr+Q^{2} \nonumber \\ & = & -\frac{1}{3}\Lambda\bigg[r^{4}-\Big(\frac{3}{\Lambda} -a^{2}\Big)r^{2}+\frac{6M}{\Lambda}r-\frac{3}{\Lambda} (a^{2}+Q^{2})\bigg] \nonumber \\ & = & -\frac{1}{3}\Lambda(r-r_{++})(r-r_{--}) (r-r_{+})(r-r_{-})=0 ~. \end{eqnarray} The equation $\Delta_{r}=0$ has four roots \cite{31}, where $r_{++}$ and $r_{--}$ are a pair of complex conjugate roots, and $r_{+}$ and $r_{-}$ are two real positive roots with $r_{+}>r_{-}$. Thus, $r=r_{+}$ is the event horizon. We first calculate the Hawking temperature at a special value of $\theta_{0}$, choosing $\theta_{0}=0$. Expanding $g^{rr}(r,\theta_{0}=0)$ near the event horizon $r_{h}=r_{+}$, we obtain $$ g^{rr}(r,\theta_{0}=0)=\frac{-\frac{1}{3}\Lambda(r_{+}-r_{++}) (r_{+}-r_{--})(r_{+}-r_{-})}{r_{+}^{2}+a^{2}}(r-r_{+})+\ldots ~, \eqno{(40)} $$ where $\ldots$ are higher-order terms of $(r-r_{+})$. $G_{tt}$ is defined by (12) and can generally be rewritten in the form $$ G_{tt}=g_{tt}+g_{t\phi}\Omega_{h}+ (g_{t\phi}-\Omega_{h}g_{\phi\phi})\Omega_{h} =g_{tt}+g_{t\phi}\Omega_{h}+g_{t\phi}^{\prime}\Omega_{h} ~. \eqno{(41)} $$ According to (14), $g_{t\phi}^{\prime}$ is zero on the horizon, so the last term of (41) need not be considered when we expand $G_{tt}(r,\theta_{0})$ near the horizon. The angular velocity of the Kerr-Newman-AdS black hole defined by (10) is $\Omega_{h}=\frac{a\Xi}{r_{+}^{2}+a^{2}}$. At $\theta_{0}=0$, $G_{tt}(r,\theta_{0}=0)$ can be expanded as $$ G_{tt}(r,\theta_{0}=0)=\frac{-\frac{1}{3}\Lambda(r_{+}-r_{++}) (r_{+}-r_{--})(r_{+}-r_{-})}{r_{+}^{2}+a^{2}}(r-r_{+})+\ldots ~, \eqno{(42)} $$ where $\ldots$ are higher-order terms of $(r-r_{+})$. Comparing (42) and (40) with (27) and (28), we can read off $G_{tt}^{\prime}(r_{+},\theta_{0}=0)$ and $g^{rr\prime}(r_{+},\theta_{0}=0)$. 
Substituting $G_{tt}^{\prime}(r_{+},\theta_{0}=0)$ and $g^{rr\prime}(r_{+},\theta_{0}=0)$ into (33), we obtain, for the Kerr-Newman-AdS black hole, $$ T_{H}=-\frac{\Lambda}{12\pi(r_{+}^{2}+a^{2})}(r_{+}-r_{++}) (r_{+}-r_{--})(r_{+}-r_{-}) ~. \eqno{(43)} $$ Because $\Lambda<0$, $r_{+}$ and $r_{-}$ are positive with $r_{+}>r_{-}$, and $r_{++}$ and $r_{--}$ are complex conjugate, $T_{H}$ is guaranteed to be positive. At an arbitrary value of $\theta_{0}$, explicit calculation shows that $g^{rr}(r,\theta_{0})$ and $G_{tt}(r,\theta_{0})$ can be expanded as \setcounter{equation}{43} \begin{eqnarray} g^{rr}(r,\theta_{0}) & = & \frac{-\frac{1}{3}\Lambda (r_{+}-r_{++})(r_{+}-r_{--})(r_{+}-r_{-})}{r_{+}^{2}+a^{2} \cos^{2}\theta_{0}}(r-r_{+})+\ldots ~, \\ G_{tt}(r,\theta_{0}) & = & \frac{-\frac{1}{3} \Lambda(r_{+}-r_{++})(r_{+}-r_{--}) (r_{+}-r_{-})(r_{+}^{2}+a^{2}\cos^{2}\theta_{0})} {(r_{+}^{2}+a^{2})^{2}}(r-r_{+})+\ldots ~. \end{eqnarray} Comparing (45) and (44) with (27) and (28), we can read off $G_{tt}^{\prime}(r_{+},\theta_{0})$ and $g^{rr\prime}(r_{+},\theta_{0})$. Substituting $G_{tt}^{\prime}(r_{+},\theta_{0})$ and $g^{rr\prime}(r_{+},\theta_{0})$ into (33), we again obtain $$ T_{H}=-\frac{\Lambda}{12\pi(r_{+}^{2}+a^{2})}(r_{+}-r_{++}) (r_{+}-r_{--})(r_{+}-r_{-}) ~. \eqno{(46)} $$ In \cite{32}, another expression for the Hawking temperature of the Kerr-Newman-AdS black hole was obtained, given by $$ T_{H}=\frac{3r_{+}^{4}+(a^{2}+l^{2})r_{+}^{2}-l^{2}(a^{2}+Q^{2})} {4\pi l^{2}r_{+}(r_{+}^{2}+a^{2})} ~, \eqno{(47)} $$ where $\Lambda=-3/l^{2}$. It is not difficult to verify that these two expressions of $T_{H}$ for the Kerr-Newman-AdS black hole are equivalent. The result of (46) is also equal to that obtained from (A.13) and (A.14). From this example, we see again that the explicit result of the Hawking temperature given by (33) does not depend on the parameter $\theta_{0}$. 
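The claimed equivalence of (46) and (47) can be checked numerically: factor $\Delta_{r}$ by finding the roots of the quartic and evaluate both expressions. The parameter values below ($M$, $a$, $Q$, $l$) are hypothetical illustration values only:

```python
import numpy as np

# Hypothetical parameter values (geometric units), Lambda = -3/l^2 < 0.
M, a, Q, l = 1.0, 0.3, 0.2, 10.0
Lam = -3.0 / l**2

# Delta_r = (r^2 + a^2)(1 + r^2/l^2) - 2 M r + Q^2 as a polynomial in r.
coeffs = [1.0 / l**2, 0.0, 1.0 + a**2 / l**2, -2.0 * M, a**2 + Q**2]
roots = np.roots(coeffs)

# Separate the two real positive roots (r_+ > r_-) from the
# complex-conjugate pair (r_++, r_--).
real = np.sort(roots[np.abs(roots.imag) < 1e-9].real)
cplx = roots[np.abs(roots.imag) >= 1e-9]
r_minus, r_plus = real[0], real[1]

# Hawking temperature from the tunneling result (46):
prod = (r_plus - cplx[0]) * (r_plus - cplx[1]) * (r_plus - r_minus)
T1 = (-Lam / (12.0 * np.pi * (r_plus**2 + a**2)) * prod).real

# Hawking temperature from the expression (47) of the literature:
T2 = (3.0 * r_plus**4 + (a**2 + l**2) * r_plus**2
      - l**2 * (a**2 + Q**2)) / (4.0 * np.pi * l**2 * r_plus
                                 * (r_plus**2 + a**2))

assert np.isclose(T1, T2)   # the two expressions agree
assert T1 > 0.0             # and the temperature is positive
```

The agreement follows because $\Delta_{r}^{\prime}(r_{+})$ computed from the factorized form (39) equals the polynomial derivative that underlies (47).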
In the case $\Lambda=0$, the metric (36) reduces to that of the four-dimensional Kerr-Newman black hole; if in addition the charge vanishes, it becomes the Kerr metric. Following the same approach as above, we can also obtain their Hawking temperatures from tunneling. \section{Discussion} \indent In this paper, we have studied the Hawking radiation of general four-dimensional rotating black holes using the tunneling method of Parikh and Wilczek \cite{10,11}. We find that the tunneling rate of a zero-mass particle is given by $$ \Gamma=\Gamma_{0}\exp(-\beta E)=\Gamma_{0}\exp(-E/T_{H}) ~, \eqno{(48)} $$ which is just the Boltzmann distribution. The Hawking temperature $T_{H}$ of a four-dimensional rotating black hole is given by (33), which is in accordance with the Hawking temperature derived from black hole thermodynamics. We have also given the explicit result for the Hawking temperature of the Kerr-Newman-AdS black hole from the tunneling approach. In order to eliminate the motion in the $\phi$ degree of freedom of a particle tunneling from a rotating black hole, we choose a reference system that is co-rotating with the black hole horizon. In such a co-rotating reference system, we avoid the dimensional degeneration of the dragging-coordinate method adopted in \cite{21} for the tunneling of a rotating black hole. It is necessary to point out that if we use the method of \cite{21} to calculate the action of a tunneling particle directly for the Kerr-Newman-AdS black hole, then we need to consider the action that comes from the motion in the $\phi$ degree of freedom, and the calculation becomes rather complicated. In order to simplify the calculation, we have made a rotating coordinate transformation first. At the same time, the method provided in this paper applies to a general four-dimensional rotating black hole, and we have applied the result to the special case of the Kerr-Newman-AdS black hole. 
Another point worth noting is that there is some overlap between the approach of this paper and the treatment of tunneling from the Kerr-Newman black hole in \cite{15} using the null-geodesic method. The difference is that in \cite{15} the rotating coordinate transformation for the tunneling of a rotating black hole was not proposed explicitly, nor was it applied to a general four-dimensional rotating black hole. In this paper, by contrast, we have studied the Hawking temperature of a general four-dimensional rotating black hole from tunneling using the rotating coordinate transformation explicitly, and then applied the result to the special case of the Kerr-Newman-AdS black hole. An alternative method for the calculation of the action of a tunneling particle was proposed in \cite{14} based on the Hamilton--Jacobi equation. Such a method was applied to the tunneling of some rotating black holes in \cite{14,15}. For the tunneling of the Kerr-Newman-AdS black hole, when using the Hamilton--Jacobi method of \cite{14} we can likewise first make a rotating coordinate transformation to simplify the calculation. The same results (33) and (46) are then obtained. For reasons of length, we do not present this derivation in this Letter. The tunneling rate (48) and Hawking temperature (33) for a rotating black hole are obtained in the reference system co-rotating with the black hole horizon. However, because the obtained tunneling rate and Hawking temperature of a black hole are scalars, they do not change for an observer static relative to infinity. Thus, we can deduce that for an observer static relative to infinity, the tunneling rate and Hawking temperature of a four-dimensional rotating black hole are still given by (48) and (33). 
The difference is that, for a tunneling particle, or an observer, the angular velocity of a rotating black hole is zero in the co-rotating reference system, while it is $\Omega_{h}$ in the static reference system. Combining with the first law of black hole thermodynamics, we can generalize the tunneling rate (48) to a particle with non-zero angular momentum and non-zero charge. \vskip 1cm \noindent{\bf \Large Acknowledgements} \indent The author is very grateful for the comments made by the referee, which have improved the content of this paper. \vskip 2cm \noindent{\bf \Large Appendix A. Hawking temperature of four-dimensional rotating black holes from black hole thermodynamics} \indent In this appendix, we give an expression for the Hawking temperature of a four-dimensional rotating black hole from black hole thermodynamics. The metric of a four-dimensional rotating black hole is generally given by (8). For the metric (8), there exists the Killing field $$ \xi^{\mu}=\frac{\partial}{\partial t}+ \Omega_{h}\frac{\partial}{\partial \phi} ~, \eqno{({\rm A}.1)}$$ where $\Omega_{h}$ is the angular velocity of the horizon, which is a constant. Here the horizon means the outer horizon of the rotating black hole. Because the horizon is a null surface and $\xi^{\mu}$ is normal to the horizon, we have on the horizon \cite{28} $$ \xi^{\mu}\xi_{\mu}\vert_{r=r_{h}}=0 ~. \eqno{({\rm A}.2)}$$ For the metric (8), we have $$ \xi^{\mu}\xi_{\mu}=g_{tt}+2g_{t\phi}\Omega_{h} -g_{\phi\phi}\Omega_{h}^{2} ~. \eqno{({\rm A}.3)}$$ Here we have taken the square of the norm of the Killing field to be positive outside the horizon, at least for the case $\Omega_{h}=0$. As in (12), we define $$ G_{tt}=g_{tt}+2g_{t\phi}\Omega_{h}-g_{\phi\phi} \Omega_{h}^{2} ~. \eqno{({\rm A}.4)}$$ Thus we have $$ G_{tt}\big\vert_{r=r_{h}}=0 ~. 
\eqno{({\rm A}.5)}$$ Following \cite{28}, we write $$ \xi^{\mu}\xi_{\mu}=-\lambda^{2} ~, \eqno{({\rm A}.6)}$$ where $\lambda$ is a scalar function that is constant on the horizon. According to (A.3), we have $\lambda^{2}=-G_{tt}$ for the metric (8) of a four-dimensional rotating black hole. Let $\nabla^{\mu}$ denote the covariant derivative operator; then $\nabla^{\mu}(\xi^{\nu}\xi_{\nu})$ is also normal to the horizon. According to \cite{28,29}, there exists a function $\kappa$ satisfying the equation $$ \nabla^{\mu}(-\lambda^{2})=-2\kappa\xi^{\mu} ~, \eqno{({\rm A}.7)}$$ where on the horizon $\kappa(r_{h})$ is a constant and is just the horizon's surface gravity. Similarly, we have the lower-index equation $$ \nabla_{\mu}(-\lambda^{2})=-2\kappa\xi_{\mu} ~. \eqno{({\rm A}.8)}$$ Thus, from (A.7) and (A.8) we have $$ \nabla^{\mu}(-\lambda^{2})\nabla_{\mu}(-\lambda^{2}) =-4\kappa^{2}\lambda^{2} ~. \eqno{({\rm A}.9)}$$ Because $\lambda^{2}$ is a scalar function, $\kappa^{2}$ is also a scalar function. Therefore the surface gravity of a black hole horizon is invariant under general coordinate transformations, including the rotation (9). From (A.3), (A.4), (A.6), (A.9), and the axial symmetry of the metric, we obtain $$ 4\kappa^{2}G_{tt}=g^{rr}(\partial_{r}G_{tt})^{2} + g^{\theta\theta}(\partial_{\theta}G_{tt})^{2} ~. \eqno{({\rm A}.10)}$$ Because of (A.5), we have $$ \lim_{r\rightarrow r_{h}}\partial_{\theta}G_{tt}=0 ~. \eqno{({\rm A}.11)}$$ Therefore, taking the limit $r\rightarrow r_{h}$ on both sides of (A.10) yields $$ \kappa(r_{h})=\lim_{r\rightarrow r_{h}} \frac{\partial_{r}\sqrt{G_{tt}}}{\sqrt{g_{rr}}} ~. \eqno{({\rm A}.12)}$$ In (A.12), because $G_{tt}$ is zero on the horizon, the partial derivative is taken before the limit. Thus, the Hawking temperature of the metric (8) is given by $$ T_{H}=\frac{\kappa(r_{h})}{2\pi}=\lim_{r\rightarrow r_{h}} \frac{\partial_{r}\sqrt{G_{tt}}}{2\pi\sqrt{g_{rr}}} ~. 
\eqno{({\rm A}.13)}$$ Because $\kappa(r_{h})$ is a constant on the horizon \cite{28,29}, it can be evaluated at an arbitrary $\theta$; for convenience, it is usually evaluated at $\theta=0$. The metrics of many four-dimensional rotating black holes satisfy $G_{tt}\vert_{\theta=0}=g_{tt}\vert_{\theta=0}$, in which case we can write $$ T_{H}=\frac{\kappa(r_{h})}{2\pi}= \lim_{\theta=0, ~ r\rightarrow r_{h}} \frac{\partial_{r}\sqrt{g_{tt}}}{2\pi\sqrt{g_{rr}}} ~. \eqno{({\rm A}.14)}$$ On the other hand, because $\kappa(r_{h})$ is a constant on the horizon, the dependence of $T_{H}$ on the variable $\theta$ in formula (A.13) is only apparent. \vskip 2cm
\subsection{Learning Cost Predictors for Unit Commitment Problem} As mentioned before, screening without cost objectives enlarges the possible value range of the flow variables $f_j$, which leads to conservative screening and keeps more constraints as non-redundant. To close this gap, in this paper we investigate whether it is possible to tighten the search space of constraint screening by adding a cost constraint of the form $\sum_{i=1}^n c_ix_i \leq \Bar{C}$, where $\Bar{C}$ is an upper bound whose value is determined in the following sections. To achieve this goal, the adopted method should approximate the map from load input to system cost $J(\boldsymbol{\ell})$ well while predicting system costs efficiently. Thus, in this paper we use a neural network (NN) to find the upper bound. To train the NN model, we utilize past records of UC solutions, with the training loss between the NN output and the actual cost defined as follows, \begin{align} \label{Training: loss} L: =\left\|f_{NN}(\boldsymbol{\ell}) - J(\boldsymbol{\ell})\right\|_{2}^{2}; \end{align} where $f_{NN}(\boldsymbol{\ell})$ denotes the NN model given load inputs. In the next subsection, we detail how to connect the NN's predicted costs to the constraint screening problems. \subsection{Tightening the Search Space for Constraint Screening} Note that the ML model is not directly applied to making operation or dispatch decisions; instead, we treat the ML prediction as a constraint to reduce the search space of the optimization-based constraint screening problems \eqref{screening1} and \eqref{screening2}. With such a design, the resulting optimization problem can still find feasible decisions for the original UC problem. We can then add the neural network's prediction as an additional constraint to the original constraint screening problem to further restrict the search space for each transmission line's flow bounds. 
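The training step in \eqref{Training: loss} can be sketched as follows. Here $J(\boldsymbol{\ell})$ is a cheap hypothetical surrogate standing in for the recorded UC costs, and only the output layer of a tiny two-layer ReLU network is updated, to show one gradient step on the squared loss (a sketch of the idea, not the actual training pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the recorded UC solutions: J(l) is a cheap
# convex surrogate of the true UC cost, for illustration only.
def J(load):                       # load: (batch, n_buses)
    return load.sum(axis=1) + 0.1 * (load**2).sum(axis=1)

n_buses, hidden = 5, 16
W1 = rng.normal(0, 0.5, (n_buses, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(0, 0.5, (hidden, 1));       b2 = np.zeros(1)

def f_NN(load):                    # two-layer ReLU network f_NN(l)
    h = np.maximum(load @ W1 + b1, 0.0)
    return (h @ W2 + b2).ravel()

# One gradient step on the squared loss L = ||f_NN(l) - J(l)||_2^2.
loads = rng.uniform(0.5, 1.5, (64, n_buses))
err = f_NN(loads) - J(loads)
loss_before = float(err @ err)

# Backpropagate through the output layer only (sketch, not full training).
h = np.maximum(loads @ W1 + b1, 0.0)
W2 -= 1e-4 * (h.T @ (2.0 * err)).reshape(hidden, 1)
b2 -= 1e-4 * np.array([2.0 * err.sum()])

err = f_NN(loads) - J(loads)
loss_after = float(err @ err)
assert loss_after < loss_before    # the step reduces the training loss
```

In practice any standard deep-learning framework would train all layers on the recorded $(\boldsymbol{\ell}, J(\boldsymbol{\ell}))$ pairs; the point here is only the loss in \eqref{Training: loss}.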
In the sample-agnostic case, to ensure feasibility of the constraint screening problem over the whole load space after adding the cost constraint, while still restricting the search space effectively, we need to find a predicted cost given by the NN that can serve as the upper bound. The projected gradient ascent~(PGA) algorithm can be adopted to achieve this goal: PGA finds the upper bound iteratively by moving $\boldsymbol{\ell}$ in the gradient direction at each step and projecting it back onto $\mathcal{L}$; the details are listed in Algorithm 1. Besides, in practice the real load samples may be out of distribution and incur costs above the upper bound, causing screening failure. Therefore, we use a relaxation parameter $\epsilon$ to adjust the obtained upper bound $\texttt{PGA}(f_{NN}(\boldsymbol{\ell}))$ and integrate it into \eqref{screening1}. Then, we get the following sample-agnostic screening problem considering the cost constraint, \begin{subequations} \label{Screening: sample-agnostic} \begin{align} \max_{\mathbf{u}, \mathbf{x}, \mathbf{f}, \boldsymbol{\ell}} / \min _{\mathbf{u}, \mathbf{x}, \mathbf{f}, \boldsymbol{\ell}}\quad & f_{j} \label{Screening2:obj}\\ \text { s.t. } \quad & \eqref{Screening:gen} \eqref{Screening:flow}\eqref{Screening:balance}\eqref{Screening:u}\eqref{Screening:load}\\ & \sum_{i=1}^n c_i x_i \leq \texttt{PGA}(f_{NN}(\boldsymbol{\ell}))(1+\epsilon). \label{Screening: cost bound} \end{align} \end{subequations} \begin{figure}[b] \let\@latex@error\@gobble \vspace{-1em} \label{Algorithm: PGA} \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \begin{algorithm}[H] \caption{Projected Gradient Ascent Algorithm} \begin{algorithmic}[1] \REQUIRE Load distribution $\mathcal{L}$, trained NN model $f_{NN}(\boldsymbol{\ell})$, step size $\beta$. \ENSURE Upper bound $\texttt{PGA}(f_{NN}(\boldsymbol{\ell}))$. 
\renewcommand{\algorithmicensure}{\textbf{Initialize:}} \ENSURE Random load vector $\boldsymbol{\ell}^{(0)} \in \mathcal{L}$, $k=0$. \WHILE {$\boldsymbol{\ell}^{(k)}$ does not converge} \STATE Update: $\texttt{PGA}(f_{NN}(\boldsymbol{\ell})) \leftarrow f_{NN}(\boldsymbol{\ell}^{(k)})$ \STATE Calculate gradient $\nabla_{\boldsymbol{\ell}} f_{NN}(\boldsymbol{\ell}^{(k)})$ \STATE Update: $\boldsymbol{\ell}^{(k+1)} \leftarrow \texttt{Proj}_{\mathcal{L}}(\boldsymbol{\ell}^{(k)}+\beta\nabla_{\boldsymbol{\ell}} f_{NN}(\boldsymbol{\ell}^{(k)}))$ \STATE $k \leftarrow k+1$ \ENDWHILE \STATE Return $\texttt{PGA}(f_{NN}(\boldsymbol{\ell}))$ \end{algorithmic} \end{algorithm} \end{figure} In the sample-aware case, we predict the UC cost for each specific sample and still relax it, since the predicted cost may be lower than the actual cost, which would make the adjusted screening problem infeasible for the investigated sample. We then add the relaxed cost to \eqref{screening2}, and the adjusted sample-aware screening problem can be formulated as follows, \begin{subequations} \label{Screening2: sample-aware} \begin{align} \max_{\mathbf{u}, \mathbf{x}, \mathbf{f}} / \min _{\mathbf{u}, \mathbf{x}, \mathbf{f}}\quad & f_{j}\\ \text { s.t. } \quad & \eqref{Screening2:gen} \eqref{Screening2:flow}\eqref{Screening2:balance}\eqref{Screening2:u}\\ & \sum_{i=1}^n c_i x_i \leq f_{NN}(\boldsymbol{\ell})(1+\epsilon). \label{Screening2: cost bound} \end{align} \end{subequations} Note that the upper bound given by the NN model is a constant for a given load region or specific load vector, so the screening problems \eqref{Screening: sample-agnostic} and \eqref{Screening2: sample-aware} can be treated as linear programming problems, which are efficient to solve. \subsection{Simulation Setups} We carry out the numerical experiments on the IEEE 14-bus, IEEE 39-bus and IEEE 118-bus power systems. 
For each system, we consider the load with 0\%, 25\%, 50\%, 75\% and 100\% variation which is defined as $r$ around the average nominal values $\overline{\boldsymbol{\ell}}$. When investigating the sample-aware constraint screening, the load level is known in our setting and is defined as $\overline{L}$. Then the load region $\mathcal{L}$ considered here can be represented as: \begin{subequations} \begin{align} (1-r)\overline{\boldsymbol{\ell}}\leq&\boldsymbol{\ell}\leq(1+r)\overline{\boldsymbol{\ell}}\label{UC:load range}\\ \sum_{i=1}^n&l_i=\overline{L}.\label{UC:load level} \end{align} \end{subequations} \begin{table}[htbp] \centering \caption{Comparisons of the relative cost error and relative solution time for IEEE 118-bus system} \label{Results: Comparison__agnostic} \setlength{\tabcolsep}{1.8mm}{\begin{tabular}{c|cccc|cccc} \hline & \multicolumn{4}{c|}{Total cost error (\%)} & \multicolumn{4}{c}{Total solution time (\%)} \\ \hline \diagbox{Method}{Range}& 25 & 50 & 75 & 100 & 25 & 50 & 75 & 100 \\ \hline KNN5 & 5.3 & 9.8 & 3.3 & 4.5 & 18.3 & 16.5 & 16.4 & 16.7 \\ KNN10 & 0.8 & 0 & 0.2 & 1.7 & 18.5 & 19.1 & 19.7 & 20.3 \\ Benchmark & 0 & 0 & 0 & 0 & 31.4 & 36.1 & 40.8 & 45.6 \\ Ours & 0 & 0 & 0 & 0 & 21.7 & 33.4 & 39.5 & 45.7 \\ \hline \end{tabular}} \vspace{-1em} \end{table} \begin{table}[htbp] \caption{Percentage of average reduced constraints of Sample-aware screening} \label{Results: Reduced_constraints__aware} \setlength{\tabcolsep}{1.6mm}{ \begin{tabular}{ccccccc} \hline & Num.Gen. 
& Num.Lines & Benchmark & Ours & Actual& $\epsilon$ \\ \hline 14-bus & 5 & 20 & 92.5\% & 97.5\% & 97.5\% & 0.01\\ 39-bus & 10 & 46 & 84.7\% & 86.9\% & 92.4\% & 0.03\\ 118-bus & 54 & 186 & 81.5\% & 85.7\% & 97.3\% & 0.01\\ \hline \vspace{-3em} \end{tabular}} \end{table} To generate samples for training and validating the neural network model and the KNN model, we draw load vectors $\boldsymbol{\ell} \in \mathcal{L}$ from a uniform distribution for the sample-agnostic case, or random $\boldsymbol{\ell}$ for the sample-aware case, and then solve (\ref{UC}) for all loads. The UC cost and the binding status of each line flow constraint are recorded. Under each setting, we solve and collect 10,000 samples for each neural network, with 20\% of the generated data split off as test samples, while for KNN we solve only 2,000 samples on the 118-bus system due to the computational burden. Moreover, when evaluating the screening performance of the benchmark, the proposed method and KNN, we use the same validation data and consider 100 samples for each validation case. The neural networks all have ReLU activation units and 4 layers, with 50, 30 and 30 neurons on the hidden layers. We feed the load vector as input to the neural network, whose output is the corresponding UC cost; we then use the cost to solve (\ref{Screening: sample-agnostic}) and (\ref{Screening2: sample-aware}). All simulations have been carried out on a laptop with a 2.50 GHz processor and 16\,GB RAM. Specifically, all the optimization problems are modeled in Python and solved with CVXPY\cite{Steven2016CVXPY} using the GLPK\_MI solver \cite{makhorin2008glpk}. \vspace{-0.1em} \subsection{Simulation Results} To evaluate the effectiveness and scalability of the proposed cost predictors, we train the NNs for different load levels, and randomly select a specific load vector from each load level to predict the corresponding cost. The results are shown in Fig. 
\ref{Results: Prediction}, from which it can be seen that the predicted costs are almost equal to the actual costs obtained by solving (\ref{UC}), with relative errors of less than 1\%. Note that the predicted cost can be lower than the actual cost, so it is reasonable to consider $\epsilon$ in (\ref{Screening2: cost bound}) to ensure feasibility. Using the NN models trained for the given load levels and the PGA algorithm, we can get the upper bounds in (\ref{Screening: cost bound}) so as to conduct the sample-agnostic constraint screening. This method is then compared with the benchmark and the KNN method on the 118-bus system, and the results are shown in Table \ref{Results: Comparison__agnostic} and Fig. \ref{Results: Reduced_cons_solu_time_agnostic}. According to Table \ref{Results: Comparison__agnostic}, where the cost error and the solution time of the reduced problems are relative to the result of the original problem (\ref{UC}), the KNN methods reduce the solution time the most. The total cost errors of the case K=10 are lower than those of the case K=5, while the solution times show the opposite trend. Though requiring more solution time, the benchmark and our method guarantee solution accuracy with zero cost error. Meanwhile, our method can screen more constraints and save more solution time than the benchmark in all cases of the investigated power systems with load variation ranges from 0\% to 50\%, according to Fig. \ref{Results: Reduced_cons_solu_time_agnostic}. In the cases of the 39-bus and 118-bus systems with 75\% to 100\% load range, the performance of the two methods is very close. This may be due to the changing patterns of non-redundant constraints with a larger load variation range, i.e., the percentage of redundant constraints decreases when widening the load variation, as shown in Fig. \ref{Results: Reduced_cons_solu_time_agnostic}. 
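For reference, the upper-bound search of Algorithm 1 can be sketched as follows. A hypothetical monotone linear surrogate replaces the trained NN (so the maximizer over the box-shaped load region is known in closed form), which lets the sketch be checked end to end:

```python
import numpy as np

# Toy stand-in for the trained cost predictor f_NN: a smooth monotone
# surrogate whose maximizer over the box is known in closed form.
c = np.array([1.0, 2.0, 0.5])           # hypothetical sensitivities
f_nn = lambda load: float(c @ load)     # surrogate cost predictor
grad = lambda load: c                   # its gradient is simply c

lo = np.array([0.8, 0.8, 0.8])          # load region L: a box
hi = np.array([1.2, 1.2, 1.2])
proj = lambda load: np.clip(load, lo, hi)

# Algorithm 1: projected gradient ascent towards the cost upper bound.
beta, load = 0.05, lo.copy()            # step size beta, initial l^(0)
for _ in range(200):
    new = proj(load + beta * grad(load))
    if np.allclose(new, load):          # converged
        break
    load = new
upper_bound = f_nn(load)

# For a monotone surrogate the maximizer is the upper box corner.
assert np.allclose(load, hi)
assert np.isclose(upper_bound, float(c @ hi))
```

With the real (nonconcave) NN predictor, PGA only yields a local maximum, which is one more reason the relaxation parameter $\epsilon$ in (\ref{Screening: cost bound}) is useful.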
Furthermore, according to Table \ref{Results: Reduced_constraints__aware}, the average percentage of redundant constraints reaches 92.4\% to 97.3\% for a specific load vector. With sample-aware constraint screening, most of these redundant constraints can be removed. Specifically, the benchmark method screens 81.5\% to 92.5\% of the total constraints as redundant, while our method defined in (\ref{Screening2: sample-aware}) screens 85.7\% to 95.1\% when $\epsilon$ is set properly. The above results show the following positive effects of our method: \begin{enumerate} \item Capturing the mapping between the load vector and the UC cost well at different load levels. \item Realizing a trade-off between computational efficiency and solution accuracy in the sample-agnostic case. \item Achieving higher screening efficiency in the sample-aware case. \end{enumerate} \vspace{0.5em} \section{Introduction} \input{intro.tex} \vspace{-5pt} \section{Problem Setup} \input{setup} \vspace{0.5em} \section{Learning to Predict UC Costs} \input{algorithm} \vspace{0.5em} \section{Case Study} \input{experiment} \vspace{-0.8em} \section{Conclusion and Future Works} In this paper, we introduce a novel use of machine learning to screen redundant constraints. Neural networks are trained to predict the UC cost so that cost constraints can be efficiently integrated into the original screening problem. With these cost constraints, the search space of constraint screening can be substantially tightened. Since our method does not necessarily yield a minimal set of active constraints for the underlying UC problem, in future work we would like to seek a theoretical understanding of this constraint set and to investigate how the proposed techniques generalize to multi-step UC problems with nonlinear constraints. We also plan to explore using the sample-agnostic case as a warm start for the sample-aware case. 
\bibliographystyle{IEEEtran} \subsection{UC Problem Formulation} In this paper, we assume the system operator needs to decide both the ON/OFF statuses and the dispatch levels of all generators. Since the realistic UC problem must take start-up and shut-down costs, logic constraints, and ramp constraints into consideration, which complicates the analysis of multi-step constraints, we first consider the single-period UC problem as follows: \begin{subequations} \label{UC} \begin{align} J(\boldsymbol{\ell})=\min _{\mathbf{u}, \mathbf{x}, \mathbf{f}}\quad & \sum_{i=1}^n c_i x_i \label{UC:obj}\\ \text { s.t. } \quad & u_i \underline{x}_i \leq x_i \leq u_i \bar{x}_i, \quad i =1, ..., n \label{UC:gen}\\ &-\overline{\mathbf{f}} \leq \mathbf{K f} \leq \overline{\mathbf{f}} \label{UC:flow}\\ & \mathbf{x}+\overline{\mathbf{A}} \mathbf{f}=\boldsymbol{\ell} \label{UC:balance}\\ & u_i \in \{0, 1\}, \quad i=1,...,n \label{UC:u}. \end{align} \end{subequations} In the UC problem, we optimize over the generator statuses $\mathbf{u}$, the generator dispatch $\mathbf{x}$, and the line power flows $\mathbf{f}$ to find the least-cost solution, with cost denoted $J(\boldsymbol{\ell})$ in the objective function \eqref{UC:obj}; $c_i$ denotes the cost coefficient of generator $i$. Constraints \eqref{UC:gen}, \eqref{UC:flow}, and \eqref{UC:balance} denote the generation bounds, the flow bounds, and the nodal power balance, respectively. Note that the power flows are modeled with a DC approximation, and the phase angles are absorbed into the fundamental flows $\mathbf{f}\in \mathbb{R}^{n-1}$~\cite{chen2022learning, bertsimas1997introduction}; $\mathbf{K}$ and $\overline{\mathbf{A}}$ map these fundamental flows to the flow constraints and the nodal power balance, respectively. \eqref{UC:u} enforces the binary nature of the generator statuses, where $u_i=1$ indicates that generator $i$ is on. 
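To make the structure of the single-period UC problem concrete, the sketch below solves a network-free special case (the flow constraints are dropped and the nodal balances collapse into one total power balance) by brute-forcing the binary statuses; for linear costs the least-cost dispatch for fixed statuses is then a simple merit-order fill. This toy solver is purely illustrative and is not the solution method used in the paper (which solves the full MILP with CVXPY).

```python
from itertools import product

def dispatch_cost(u, c, xmin, xmax, load):
    """Least-cost dispatch for fixed on/off statuses u (network ignored).
    Returns None if the committed capacity cannot meet the load."""
    lo = sum(ui * a for ui, a in zip(u, xmin))
    hi = sum(ui * b for ui, b in zip(u, xmax))
    if not (lo <= load <= hi):
        return None
    x = [ui * a for ui, a in zip(u, xmin)]          # every committed unit at its minimum
    rest = load - lo
    # fill the remaining demand with the cheapest committed units first
    for i in sorted(range(len(c)), key=lambda i: c[i]):
        if u[i]:
            add = min(rest, xmax[i] - xmin[i])
            x[i] += add
            rest -= add
    return sum(ci * xi for ci, xi in zip(c, x))

def solve_uc(c, xmin, xmax, load):
    """Brute-force the binary commitments u in {0,1}^n (fine for tiny n)."""
    best = None
    for u in product((0, 1), repeat=len(c)):
        cost = dispatch_cost(u, c, xmin, xmax, load)
        if cost is not None and (best is None or cost < best):
            best = cost
    return best
```

For example, with costs $(10, 20)$, limits $[1,5]$ for each unit, and a load of 4, committing only the cheap unit is optimal.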
\vspace{-0.5em} \subsection{Constraint Screening} Many line flow constraints are redundant when seeking the optimal solution of the UC problem over a given load region or for a specific load vector, which brings unnecessary computational burden; screening the line flow constraints is therefore meaningful. Similar to \cite{zhai2010fast}, we relax the integer variables $\mathbf{u}$ in \eqref{UC} to continuous variables in $[0,1]$, and the screening approach iteratively solves the relaxed optimization problem to find the upper and lower flow values on each transmission line. If neither the upper nor the lower bound can be reached by the relaxed optimization problem, we can safely screen out that line flow constraint. For the case that the load region $\mathcal{L}$ is known, a \emph{sample-agnostic constraint screening problem} can be formulated for a group of operating scenarios as follows, \begin{subequations} \vspace{-1.5em} \label{screening1} \begin{align} \max_{\mathbf{u}, \mathbf{x}, \mathbf{f}, \boldsymbol{\ell}} / \min _{\mathbf{u}, \mathbf{x}, \mathbf{f}, \boldsymbol{\ell}}\quad & f_{j} \label{Screening:obj}\\ \text { s.t. } \quad & u_i \underline{x}_i \leq x_i \leq u_i \bar{x}_i, \quad i =1, ..., n \label{Screening:gen}\\ &-\overline{\mathbf{f}}_{\mathcal{F}/j} \leq \mathbf{K}_{\mathcal{F}/j} \tilde{\mathbf{f}} \leq \overline{\mathbf{f}}_{\mathcal{F}/j}\label{Screening:flow}\\ & \mathbf{x}+\overline{\mathbf{A}} \mathbf{f}=\boldsymbol{\ell} \label{Screening:balance}\\ & 0\leq u_i \leq 1, \quad i=1,...,n \label{Screening:u}\\ & \boldsymbol{\ell} \in \mathcal{L} \label{Screening:load}; \end{align} \end{subequations} where $\mathcal{F}/j$ denotes all remaining entries of the vectors or matrix, excluding those corresponding to $f_j$. 
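The LP-based bound computation behind this screening can be illustrated on a hypothetical 2-bus, one-line system (one generator per bus, flow $f$ from bus 1 to bus 2); all numbers below are illustrative assumptions, not data from the paper. We relax $u\in[0,1]$, let the loads range over a box, and maximize/minimize $f$: if neither limit $\pm\bar f$ is reachable, the line constraint is redundant.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative 2-bus system (assumed numbers).
XMIN, XMAX, FBAR, LMAX = 10.0, 100.0, 60.0, 50.0
# variable order: [u1, u2, x1, x2, f, l1, l2]
bounds = [(0, 1), (0, 1), (0, None), (0, None), (None, None),
          (0, LMAX), (0, LMAX)]
# relaxed commitment coupling:  u_i*XMIN <= x_i <= u_i*XMAX
A_ub = np.array([[-XMAX, 0, 1, 0, 0, 0, 0],    # x1 - XMAX*u1 <= 0
                 [0, -XMAX, 0, 1, 0, 0, 0],
                 [XMIN, 0, -1, 0, 0, 0, 0],    # XMIN*u1 - x1 <= 0
                 [0, XMIN, 0, -1, 0, 0, 0]])
b_ub = np.zeros(4)
# nodal balance:  x1 - f = l1,   x2 + f = l2
A_eq = np.array([[0, 0, 1, 0, -1, -1, 0],
                 [0, 0, 0, 1, 1, 0, -1]])
b_eq = np.zeros(2)

def flow_range():
    """Max/min attainable f over the relaxed feasible set and the load box."""
    obj = np.array([0, 0, 0, 0, -1.0, 0, 0])   # maximize f == minimize -f
    hi = -linprog(obj, A_ub, b_ub, A_eq, b_eq, bounds, method="highs").fun
    lo = linprog(-obj, A_ub, b_ub, A_eq, b_eq, bounds, method="highs").fun
    return lo, hi

lo, hi = flow_range()
redundant = (-FBAR <= lo) and (hi <= FBAR)     # bound never reached -> screen out
```

Here the reachable flow range is $[-50, 50]$, so a line limit of $\bar f = 60$ is never binding and the constraint can be screened out.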
On the contrary, when the specific load vector is available, we can conduct the following \emph{sample-aware constraint screening}: \begin{subequations} \label{screening2} \begin{align} \max_{\mathbf{u}, \mathbf{x}, \mathbf{f}} / \min _{\mathbf{u}, \mathbf{x}, \mathbf{f}}\quad & f_{j}\\ \text { s.t. } \quad & u_i \underline{x}_i \leq x_i \leq u_i \bar{x}_i, \quad i =1, ..., n \label{Screening2:gen}\\ &-\overline{\mathbf{f}}_{\mathcal{F}/j} \leq \mathbf{K}_{\mathcal{F}/j} \tilde{\mathbf{f}} \leq \overline{\mathbf{f}}_{\mathcal{F}/j}\label{Screening2:flow}\\ & \mathbf{x}+\overline{\mathbf{A}} \mathbf{f}=\boldsymbol{\ell} \label{Screening2:balance}\\ & 0\leq u_i \leq 1, \quad i=1,...,n \label{Screening2:u}; \end{align} \end{subequations} where $\boldsymbol{\ell}$ is a known load vector of the UC problem. Both formulations are optimization-based approaches, which seek the extreme values of each flow while keeping all other flow and generation constraints satisfied. However, this approach still allows line flow values that would incur unrealistic cost to reach the upper or lower bounds, and thus more redundant constraints are retained~\cite{porras2021cost}. It is therefore interesting to incorporate the economic goal of the original UC problem, minimizing the system cost, to further safely screen out constraints.
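The effect of adding such an economic cut can be illustrated on a hypothetical 2-bus, one-line toy system (all numbers and the specific form of the cut, $c_1 x_1 + c_2 x_2 \le (1+\epsilon)\hat J$ with $\hat J$ standing in for a learned cost prediction, are our own assumptions for illustration). Without the cut, the relaxed flow can reach $\pm 50$; the cost budget shrinks the reachable range so that a line limit of $\bar f = 30$ becomes screenable.

```python
import numpy as np
from scipy.optimize import linprog

XMIN, XMAX, LMAX = 10.0, 100.0, 50.0
C1, C2, J_HAT, EPS = 5.0, 5.0, 99.0, 0.01      # assumed costs and predicted cost
FBAR = 30.0
# variable order: [u1, u2, x1, x2, f, l1, l2]
bounds = [(0, 1), (0, 1), (0, None), (0, None), (None, None),
          (0, LMAX), (0, LMAX)]
A_eq = np.array([[0, 0, 1, 0, -1, -1, 0],      # x1 - f = l1
                 [0, 0, 0, 1, 1, 0, -1]])      # x2 + f = l2
b_eq = np.zeros(2)
base_ub = [[-XMAX, 0, 1, 0, 0, 0, 0], [0, -XMAX, 0, 1, 0, 0, 0],
           [XMIN, 0, -1, 0, 0, 0, 0], [0, XMIN, 0, -1, 0, 0, 0]]
cost_cut = [[0, 0, C1, C2, 0, 0, 0]]           # c^T x <= (1+eps)*J_hat

def flow_range(with_cut):
    A_ub = np.array(base_ub + (cost_cut if with_cut else []))
    b_ub = np.array([0, 0, 0, 0] + ([(1 + EPS) * J_HAT] if with_cut else []))
    obj = np.array([0, 0, 0, 0, -1.0, 0, 0])
    hi = -linprog(obj, A_ub, b_ub, A_eq, b_eq, bounds, method="highs").fun
    lo = linprog(-obj, A_ub, b_ub, A_eq, b_eq, bounds, method="highs").fun
    return lo, hi

lo0, hi0 = flow_range(False)   # plain screening: f ranges over [-50, 50]
lo1, hi1 = flow_range(True)    # with the cost cut the range tightens to ~[-20, 20]
```

With the cut, $|f| < \bar f = 30$ everywhere, so the line constraint is identified as redundant even though plain screening cannot certify it.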
\section{Introduction}\label{sec:introduction} Ammonia borane NH$_{3}$BH$_{3}$ has drawn significant interest in recent years because of its potential as a hydrogen storage material, with a gravimetric storage density of 19.6 mass\%.\cite{Xiong_2008:high-capacity_hydrogen, Chua_2011:development_amidoboranes, Swinnen_2010:potential_hydrogen, Hamilton_2009:b-n_compounds, Heldebrant_2008:effects_chemical, Marder_2007:will_we, Kim_2009:determination_structure} The structure of its solid phase has been explored previously,\cite{Reynhardt_1983:molecular_dynamics, Penner_1999:deuterium_nmr, Klooster_1999:study_n-hh-b, Brown_2006:dynamics_ammonia, Bowden_2007:room-temperature_structure, Lin_2012:experimental_theoretical} but the literature does not agree on the hydrogen behavior at room temperature. The molecule consists of a dative B--N bond and a trio of H atoms (henceforth referred to as a `halo') bonded to each of those two atoms, forming an hourglass shape, visible in Fig.~\ref{fig:structure}. At low temperatures (0--225~K), the solid exhibits an orthorhombic structure with space group \emph{Pmn}2$_{1}$. Heated above 225~K, it undergoes a phase transition to a body-centered tetragonal structure with space group \emph{I}4\emph{mm}. It is this room-temperature phase that exhibits unexpected experimental results: while the molecule itself has a three-fold symmetry about the B--N axis, neutron\cite{Brown_2006:dynamics_ammonia, Kumar_2010:pressure_induced} and X-ray\cite{Filinchuk_2009:high-pressure_phase, Chen_2010:situ_x-ray, Bowden_2007:room-temperature_structure} diffraction on the solid reveal a four-fold symmetry about the same axis, creating a geometric incompatibility within the structure. 
Investigating the dynamics of the system with \emph{ab initio} methods, we find that the individual halos are rotating with angular velocity on the order of 0.7~deg/fs $\approx$ 2 rev/ps, such that standard experiments can only probe the time averaged positions, leading to the tetragonal host structure with four-fold symmetry. The precise behavior of these hydrogen halos has been the subject of several studies over three decades. In 1983, Reynhardt and Hoon\cite{Reynhardt_1983:molecular_dynamics} found three-fold reorientations of the BH$_{3}$ and NH$_{3}$ groups with a tunneling frequency of 1.4~MHz in the orthorhombic phase. Penner et al.\cite{Penner_1999:deuterium_nmr} found in 1999 that these groups reoriented independently. Deciphering the behavior in the tetragonal structure has been less straightforward. In the same 1983 study, Reynhardt and Hoon concluded that the BH$_{3}$, and possibly the NH$_{3}$ groups, rotate freely. Brown et al.\cite{Brown_2006:dynamics_ammonia} found that they could describe the disorder entirely with three-fold jump diffusion. Bowden et al.\ tried using a larger unit cell to model the same disorder as spatial variation rather than higher-order rotation; however, they found no evidence to support this model,\cite{Bowden_2007:room-temperature_structure} leaving this disagreement unresolved in the literature. The present study aims to elucidate how the hydrogen halos behave in the solid, especially in the high-temperature, tetragonal structure. To this end, we find thermal barriers to rotation in gas phase as well as both orthorhombic and tetragonal phases. We supplement these findings with \emph{ab initio} molecular dynamics simulations to track individual halos' behavior. \begin{figure} \centering\includegraphics[width={.7\columnwidth}]{tetragonal_structure} \caption{\label{fig:structure}Structure of the high-temperature, body-centered tetragonal phase of NH$_3$BH$_3$. 
Note that the locations of H atoms in this figure are not indicative of experimental results, but just one possibility of how the halos could be oriented in the solid.} \end{figure} Our \emph{ab initio} simulations are at the density functional theory level, using a plane-wave basis. Since ammonia borane is a strong van der Waals complex,\cite{Chen_2010:situ_x-ray, Lin_2008:raman_spectroscopy} the inclusion of van der Waals forces is essential;\cite{Klooster_1999:study_n-hh-b, Wolstenholme_2011:homopolar_dihydrogen, Wolstenholme_2012:thermal_desorption} we thus use vdW-DF1\cite{Dion_2004:van_waals, Thonhauser_2007:van_waals, Langreth_2009:density_functional} (i.e.\ revPBE exchange and LDA correlation in addition to the nonlocal contribution) as the exchange-correlation functional for all calculations. Car-Parrinello molecular dynamics (CPMD) was performed with the CP code (part of \textsc{Quantum-Espresso} version 5.0.2; the vdW-DF capability in CP is a new feature, which we have just implemented),\cite{Giannozzi_2009:quantum_espresso} using ultrasoft pseudopotentials and wave function and density cutoffs of 475 and 5700 eV. The CPMD simulations used an electronic convergence of $10\times 10^{-8}$~eV, a fictitious electron mass of 400~a.u., and a time step of 5~a.u. We further used a $2\times2\times2$ supercell, accommodating 16 molecules, and started from the experimental lattice constants\cite{Bowden_2007:room-temperature_structure} at 297 and 90~K for tetragonal and orthorhombic phases, respectively. Similar calculations have been done previously,\cite{Liang_2012:first-principles_study} but with a different functional and at much higher temperature. Climbing image nudged-elastic band (NEB) simulations to find precise rotational barriers for halos and entire molecules were performed with \textsc{Vasp} (version 5.3.3),\cite{Kresse_1996:efficient_iterative, Kresse_1999:ultrasoft_pseudopotentials} utilizing PAW potentials and a cutoff of 500~eV. 
For solid phase barrier calculations, we used a $4\times4\times4$ $k$-point mesh and 8 images for NEB calculations. In the gas phase, we used only the gamma point and 16 images. Note that nuclear quantum effects have not been taken into account. Further information, in particular including structural information for all our simulations, can be found in the Supplementary Materials. We begin by investigating the barriers for rotations in different situations, i.e.\ the gas-phase molecule and the orthorhombic and tetragonal solid phases. Depending on the situation, we performed two kinds of simulations: ``fixed'' labels simulations where the geometry of the cell as well as the halo has been fixed and the entire halo or molecule is rotated around the axis in a rigid manner. ``NEB'' refers to the transition state formalism of the nudged-elastic band method, where the geometry of the halo can change and adapt along the path, allowing it to lower its energy. While the latter is preferable due to its higher accuracy for barriers, we also use the former i) in order to compare to previous quantum-chemistry calculations; and ii) for the tetragonal high-temperature phase, in which NEB leads to unphysical deformations, as all DFT ground-state simulations are technically done at 0~K and the structure attempts to mimic the orthorhombic phase. Results are summarized in Table~\ref{tab:Torsion_results} and detailed curves for the barriers can be found in the Supplementary Materials. The situation in the gas-phase molecule is the simplest. Completing a fixed rotation of one halo results in a thermal barrier of 84.7~meV, within 5\% of an empirical estimate\cite{Thorne_1983:microwave_spectrum} and in very good agreement with quantum-chemistry calculations,\cite{Demaison_2008:equilibrium_structure} validating our methodology. NEB calculations necessarily decrease the estimate of the barrier, in this case yielding a value of 79.1~meV. 
In a crystalline environment, dihydrogen bonds between molecules affect how each molecule behaves. In the orthorhombic phase, the dihydrogen bond network creates a 67.5~meV barrier to rotating the entire molecule (NEB). This barrier is low enough that molecules in a crystal can reorient at some rate, given enough temperature. It is interesting to see that a calculation with fixed halo geometry results in a much higher barrier, attesting to the fact that the rotating halo and its surroundings prefer to undergo significant deformation and reorientation during the rotation. For instance, the orientation of the B--N axis prefers to precess as the B halo is rotated in an attempt to maximize the strength of dihydrogen bonds with its neighbors. The ease of rotation depends on which individual halo is rotated. Our calculations for the N halo barriers are in good agreement with experimental findings (summarized in Table~\ref{tab:Torsion_results}). Accuracy for the B halo is more difficult to gauge. Our results for a fixed rotation are in agreement with a previous theoretical study,\cite{Parvanov_2008:materials_hydrogen} but experimental values line up almost exactly halfway between our calculations for fixed rotation and NEB barriers. Regardless of the magnitude of the difference, we find that the BH$_3$ group faces a larger barrier to rotation than the NH$_3$ group, in agreement with the literature. \begin{table}\small \caption{\label{tab:Torsion_results}Numerical values for calculated rotational barriers in meV and values given in the literature. 
Error bars for experimental values in the literature typically range from 5 to 10~meV.} \begin{tabular*}{\columnwidth}{@{}@{\extracolsep{\fill}}lrrr@{}} \hline\hline & fixed & NEB & literature \\\hline \multicolumn{4}{@{}l}{\bfseries\sffamily gas phase} \\ one halo & 84.7 & 79.1 & 89.8,$^{a*}$ 86.7$^{b\ddag}$ \\[1.5ex] \multicolumn{4}{@{}l}{\bfseries\sffamily orthorhombic phase} \\ N halo & 106.6 & 94.9 & 100,$^{c*}$ 142,$^{d*}$ 82.7,$^{e*}$ 131.6$^{f\dag\ddag}$ \\ B halo & 443.9 & 102.9 & 260,$^{c*}$ 259,$^{d*}$ 397$^{f\dag}$ \\ molecule & 403.4 & 67.5 & 328$^{f\dag}$ \\[1.5ex] \multicolumn{4}{@{}l}{\bfseries\sffamily tetragonal phase} \\ N halo & 60.1 & & 75.7,$^{d*}$ 50.8$^{e*}$ \\ B halo & 61.9 & & 60.8,$^{c*}$ 61.1,$^{d*}$ 50.8$^{e*}$ \\ molecule & 19.4 & & \\\hline\hline \end{tabular*} $^a$Ref. \citenum{Thorne_1983:microwave_spectrum}, $^b$Ref. \citenum{Demaison_2008:equilibrium_structure}, $^c$Ref. \citenum{Reynhardt_1983:molecular_dynamics}, $^d$Ref. \citenum{Penner_1999:deuterium_nmr}, $^e$Ref. \citenum{Brown_2006:dynamics_ammonia}, $^f$Ref. \citenum{Parvanov_2008:materials_hydrogen}\\ $^\dag$DFT (B3LYP), $^\ddag$quantum chemistry, $^*$experiment \end{table} In the high-temperature tetragonal phase, the rotation of either halo has essentially the same barrier of $\sim$61~meV, in good agreement with the literature (again see Table~\ref{tab:Torsion_results}). But, even more important, rotating the entire molecule requires just 19.4~meV. This barrier is easily overcome at ambient conditions ($k_BT= 25$~meV at room temperature). Previous studies have argued that NH$_{3}$ and BH$_{3}$ groups rotate freely\cite{Reynhardt_1983:molecular_dynamics} and that the molecule rotates as a whole.\cite{Penner_1999:deuterium_nmr} Our evidence supports a combination of both explanations. The barrier to rotating the whole molecule is low enough that it can occur freely at room temperature. 
The torsional barrier for each group also allows them to rotate independently; since these barriers are within 2~meV, this rate should be equivalent between the groups, leading atoms in both groups to move at the same rate, as seen experimentally.\cite{Brown_2006:dynamics_ammonia} Based on the barriers in Table~\ref{tab:Torsion_results}, from the Arrhenius equation we estimate (assuming the same pre-exponential factor for all rotations) that whole-molecule rotation occurs at about five times the rate of individual halo rotations at room temperature and approximately 20 and 30 times the rate for rotating individual halos in the low-temperature phase. Also of note is that torsional barriers in the orthorhombic phase are larger than those of an isolated molecule, whereas the torsional barriers in the tetragonal phase are lower. This result alone shows that in the low-temperature phase rotation is suppressed, while it is encouraged in the high-temperature phase. \begin{figure} \centering\includegraphics[width=\columnwidth]{average_omega} \caption{\label{fig:avgomega}Average angular velocity among H atoms at each frame in the simulation. The plot shows a running average over 50 frames. Note that this analysis only captures the angular velocity---motion of the single hydrogen atoms in a direction parallel to the B--N bond is not captured here, making up for some of the ``missing'' kinetic energy and keeping the temperature constant.} \end{figure} With the knowledge of the barriers, we now move to the analysis of the dynamics of the crystalline phase. We performed CPMD simulations in both the orthorhombic (at 4, 77, and 220 K) and tetragonal (at 220, 297, and 380~K) phases in order to study the motion of H atoms in the NH$_{3}$ and BH$_{3}$ groups. We used 1~ps for thermalization of the system and thereafter performed 20~ps production runs. 
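The Arrhenius rate ratios quoted above can be reproduced numerically. The sketch below assumes, as stated in the text, identical pre-exponential factors, and reads the comparison as the tetragonal whole-molecule barrier (19.4~meV) against the tetragonal halo barriers ($\sim$61~meV) and the orthorhombic NEB halo barriers (94.9 and 102.9~meV), all evaluated at room temperature; this reading is our interpretation of the estimate.

```python
import math

K_B = 8.617e-2  # Boltzmann constant in meV/K

def rate_ratio(e_low_mev, e_high_mev, temp_k):
    """How much faster a process with barrier e_low is than one with barrier
    e_high at temperature temp_k, assuming equal Arrhenius prefactors."""
    return math.exp((e_high_mev - e_low_mev) / (K_B * temp_k))

# whole-molecule rotation (19.4 meV) vs. tetragonal halos (~61 meV) at 297 K:
r_tet = rate_ratio(19.4, 61.0, 297)    # roughly 5x
# ...vs. the orthorhombic NEB halo barriers (N: 94.9 meV, B: 102.9 meV):
r_n = rate_ratio(19.4, 94.9, 297)      # roughly 20x
r_b = rate_ratio(19.4, 102.9, 297)     # roughly 30x
```

These ratios recover the quoted factors of about 5, 20, and 30.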
Analyzing the corresponding trajectories leads to the initial (obvious) conclusion that halos rotate more rapidly at higher temperatures. To substantiate this claim, we calculated for each H atom in the simulation the angular velocity about the nearest B--N axis. We then averaged the absolute value of this angular velocity---otherwise there is a lot of cancellation, as halos rotate in both directions---over all H atoms in the simulation to measure how rapidly the halos are rotating in each frame of the simulation. The results of this calculation are shown in Fig.~\ref{fig:avgomega}, confirming the idea that H atoms rotate more quickly at higher temperatures and giving quantitative values for the speed of their rotation, in qualitative agreement with the barriers found earlier. It is important to note that in simulating the tetragonal supercell, the B--N axes typically maintain an instantaneous tilt between 5 and 20 degrees from vertical. At 297 and 380~K, the average orientation is vertical, whereas at 220~K there appears to be a correlation between neighbors similar to that found in the low-temperature phase. Consequently, the dynamics of the 220~K simulations are qualitatively very similar. \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{heatmap} \caption{\label{fig:heatmaps}Heat maps of the location of all three H atoms in one N-group halo over the course of CPMD simulations. The corresponding B-group heat maps look very similar. Positions have been flattened into a plane perpendicular to the B--N axis. Motion along this axis is not apparent in these plots. Each map is 3~\AA$^2$.} \end{figure} The order of magnitude for rotations is found to be 0.7~deg/fs $\approx$ 2 rev/ps at room temperature. As such, halos can easily rotate 120 deg in 0.2 ps. Unless experiments can be performed with a resolution smaller than this, they will see time-averaged positions, and halos (with a three-fold symmetry) start looking like rings, as described below. 
Experiments will thus pick up the symmetry of the tetragonal host lattice, explaining the four-fold symmetry. Casual observation of the simulations reveals that halos in the high-temperature phase are unlikely to undergo full revolutions in a short burst. Rather, a more accurate description of the qualitative behavior is that---as a halo moves close to a neighboring halo---they will rotate some amount in order to form a dihydrogen bond. The halo will then move closer to a different neighbor and adopt a different alignment. A halo equally far from all of its neighbors will also follow the realignment of the opposite halo of the same molecule. Because the molecules are constantly oscillating in the crystal structure due to thermal energy, these reorientation processes result in a constantly shifting dihydrogen bond network. The analysis above describes how rapidly H atoms rotate about their native molecules, but does not describe where they are. To give a more systematic estimate of hydrogen position over time, we provide ``heat maps'' in Fig.~\ref{fig:heatmaps} that describe what angular positions the H atoms inhabit over the course of a whole simulation. Each heat map shows the occupation density for all three H atoms in a particular halo. These heat maps demonstrate a clear pattern of increasing positional disorder at higher temperature. The three-fold symmetry inherent in the molecular structure is apparent in the maps from the orthorhombic phase. This symmetry becomes much less clear in the tetragonal phase, indicating that rotation represents a significant source of the disorder found experimentally. Furthermore, the occupational density in the higher-temperature structure is much more spread out angularly, indicating that reorientation is not limited to 120-degree jumps, as concluded by Brown et al.,\cite{Brown_2006:dynamics_ammonia} but is a more fluid process. 
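The angular-velocity averaging used in this trajectory analysis can be sketched as follows. For simplicity the sketch assumes a fixed rotation axis and a single atom, whereas the actual analysis tracks the instantaneous (tilting) B--N axis of each molecule and averages over all H atoms; the function name and interface are illustrative.

```python
import numpy as np

def mean_abs_angular_velocity(positions, axis, dt_fs):
    """Average |angular velocity| (deg/fs) of one atom about a fixed axis.

    positions : (n_frames, 3) Cartesian coordinates of the atom
    axis      : (3,) vector along the B--N bond (held fixed in this sketch)
    dt_fs     : time between frames in fs
    """
    positions = np.asarray(positions, dtype=float)
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    # build an orthonormal basis (e1, e2) for the plane perpendicular to axis
    trial = np.array([1.0, 0.0, 0.0])
    if abs(trial @ axis) > 0.9:
        trial = np.array([0.0, 1.0, 0.0])
    e1 = np.cross(axis, trial)
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(axis, e1)
    # flatten each position into that plane and take its (unwrapped) polar angle
    theta = np.unwrap(np.arctan2(positions @ e2, positions @ e1))
    omega = np.degrees(np.diff(theta)) / dt_fs
    # absolute value before averaging, so opposite rotation senses do not cancel
    return float(np.mean(np.abs(omega)))
```

Applied to a synthetic trajectory rotating uniformly at 0.7~deg/fs about the $z$ axis, the function recovers that rate.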
In summary, we have calculated torsional and rotational barriers for NH$_{3}$BH$_{3}$ in the gas phase and both low- and high-temperature crystalline structures. In addition, we have studied the dynamics of the crystalline phase explicitly with CPMD simulations. Our calculations indicate that in the low-temperature orthorhombic phase, the BH$_{3}$ and NH$_{3}$ groups reorient along a three-fold rotational potential at different rates. Both entire-molecule and independent reorientations contribute to the experimental rates found previously. In the high-temperature tetragonal phase, on the other hand, the barrier to entire-molecule rotation is low enough that thermal energy in ambient conditions allows the molecule to overcome the three-fold rotational potential. Consequently, the molecule is able to rotate freely with angular velocities on the order of 2 rev/ps. By quantifying the speed of those rotations, we thus resolve a long-standing experimental discrepancy, where a molecule with three-fold symmetry shows four-fold symmetry around the same axis in its crystalline form. This work was supported in full by NSF Grant No.\ DMR-1145968. \section{Optimized Structures} \subsection{Isolated Molecule} \noindent Cartesian coordinates (\emph{x}, \emph{y}, \emph{z}) in \text{\AA}, for the optimized structure found by our simulations of an isolated NH$_3$BH$_3$ molecule. \begin{center}\scriptsize \begin{verbatim} B 0.0000000 0.0000000 0.0000000 N 0.0000000 1.6856212 0.0000000 H 0.9519811 2.0563251 0.0000000 H -1.1740483 -0.3022489 0.0000000 H -0.4759065 2.0567762 0.8242625 H -0.4759065 2.0567762 -0.8242625 H 0.5870187 -0.3021106 -1.0167601 H 0.5870187 -0.3021106 1.0167602 \end{verbatim} \end{center} \subsection{Orthorhombic Structure} \noindent Lattice parameters and fractional coordinates ($x/a$, $y/b$, and $z/c$) resulting from our simulations optimizing the structure of NH$_3$BH$_3$ in the low-temperature, orthorhombic phase. 
\begin{eqnarray*} a &=& 5.44233\;\text{\AA}\\ b &=& 4.94048\;\text{\AA}\\ c &=& 5.13446\;\text{\AA}\\ \alpha &=& \beta = \gamma = 90.0^{\circ}\\ \end{eqnarray*} \begin{center}\scriptsize \begin{verbatim} B 0.0000000 0.1830125 0.0019077 B 0.5000000 0.8169874 0.5019076 N 0.0000000 0.2402808 0.3127204 N 0.5000000 0.7597191 0.8127204 H 0.0000000 0.4427503 0.3555205 H 0.5000000 0.5572496 0.8555205 H 0.1520232 0.1583956 0.4010154 H 0.3479767 0.8416043 0.9010154 H 0.6520232 0.8416043 0.9010154 H 0.8479767 0.1583956 0.4010154 H 0.0000000 0.9368787 0.9737231 H 0.5000000 0.0631212 0.4737231 H 0.1851884 0.2836140 0.9105986 H 0.3148115 0.7163859 0.4105986 H 0.6851884 0.7163859 0.4105986 H 0.8148115 0.2836140 0.9105986 \end{verbatim} \end{center} \newpage\subsection{Tetragonal Structure --- From Experiment} \noindent Lattice parameters and fractional coordinates ($x/a$, $y/b$, and $z/c$) of the high-temperature, tetragonal structure of NH$_3$BH$_3$, taken from experiment and refined in space group $I4mm$. Taken directly from Ref.~\citenum{Bowden_2007:room-temperature_structure}. \begin{eqnarray*} a &=& b = 5.2630\;\text{\AA}\\ c &=& 5.0504\;\text{\AA}\\ \end{eqnarray*} \begin{center}\scriptsize \begin{verbatim} B 0.0000 0.0000 0.0032 N 0.0000 0.0000 0.6869 H 0.0000 0.1480 0.6190 H 0.0000 0.1990 0.0770 \end{verbatim} \end{center} \subsection{Tetragonal Structure --- For Calculations} \noindent Lattice parameters and fractional coordinates ($x/a$, $y/b$, and $z/c$) used for the high-temperature, tetragonal structure of NH$_3$BH$_3$. The structure from Ref.~\citenum{Bowden_2007:room-temperature_structure} was used as a starting point. Hydrogen halos were completed to include twelve atoms in order to demonstrate both four-fold and three-fold rotational symmetries. We then found the energy of each possible arrangement of the three-atom halos out of these available positions where each molecule retained its characteristic three-fold symmetry. 
The arrangement with the lowest energy is shown here: \begin{eqnarray*} a &=& b = 5.2630\;\text{\AA}\\ c &=& 5.0504\;\text{\AA}\\ \end{eqnarray*} \begin{center}\scriptsize \begin{verbatim} B 0.0000000 0.0000000 0.0032000 B 0.5000000 0.5000000 0.5032000 N 0.0000000 0.0000000 0.6869000 N 0.5000000 0.5000000 0.1869000 H 0.8010000 0.0000000 0.0770000 H 0.0995000 0.1723390 0.0770000 H 0.0995000 0.8276610 0.0770000 H 0.1480000 0.0000000 0.6190000 H 0.9260000 0.8718280 0.6190000 H 0.9260000 0.1281720 0.6190000 H 0.4005000 0.6723390 0.5770000 H 0.6990000 0.5000000 0.5770000 H 0.4005000 0.3276610 0.5770000 H 0.5740000 0.3718282 0.1190000 H 0.3519998 0.5000000 0.1190000 H 0.5740002 0.6281719 0.1190000 \end{verbatim} \end{center} \newpage\section{Molecular Dynamics Simulations} \noindent For our molecular dynamics calculations, we used the CP code (part of \textsc{Quantum-Espresso}), utilizing ultra-soft pseudopotentials and wave function and density cutoffs of 475 and 5700 eV with an electronic convergence of $10\times 10^{-8}$~eV, a fictitious electron mass of 400~a.u., and a time step of 5 a.u. We used a Nos\'{e} thermostat with an oscillation frequency of 6.5~THz and allowed the simulations to thermalize for at least 1~ps. Production runs were typically at least 20 ps. The starting structures are listed below. \subsection{Orthorhombic Structure} \noindent Starting fractional coordinates ($x/a$, $y/b$, and $z/c$) and lattice parameters used for CPMD calculations in the low-temperature, orthorhombic phase. A $2\times 2\times 2$ supercell was used, including 16 NH$_3$BH$_3$ molecules. 
\begin{eqnarray*} a &=& 10.88466\;\text{\AA}\\ b &=& 9.88096\;\text{\AA}\\ c &=& 10.26892\;\text{\AA}\\ \alpha &=& \beta = \gamma = 90.0^{\circ}\\ \end{eqnarray*} \vspace{-6ex} \begin{center}\scriptsize \begin{verbatim} B 0.0000000 0.0804724 0.0009784 B 0.0000000 0.0804724 0.4901925 B 0.0000000 0.5566408 0.0009784 B 0.0000000 0.5566408 0.4901925 B 0.5090651 0.0804724 0.0009784 B 0.5090651 0.0804724 0.4901925 B 0.5090651 0.5566408 0.0009784 B 0.5090651 0.5566408 0.4901925 B 0.2545325 0.3956959 0.2455854 B 0.2545325 0.3956959 0.7347996 B 0.2545325 0.8718642 0.2455854 B 0.2545325 0.8718642 0.7347996 B 0.7635976 0.3956959 0.2455854 B 0.7635976 0.3956959 0.7347996 B 0.7635976 0.8718642 0.2455854 B 0.7635976 0.8718642 0.7347996 N 0.0000000 0.1170898 0.1526837 N 0.0000000 0.1170898 0.6418978 N 0.0000000 0.5932580 0.1526837 N 0.0000000 0.5932580 0.6418978 N 0.5090651 0.1170898 0.1526837 N 0.5090651 0.1170898 0.6418978 N 0.5090651 0.5932580 0.1526837 N 0.5090651 0.5932580 0.6418978 N 0.2545325 0.3590785 0.3972908 N 0.2545325 0.3590785 0.8865048 N 0.2545325 0.8352468 0.3972908 N 0.2545325 0.8352468 0.8865048 N 0.7635976 0.3590785 0.3972908 N 0.7635976 0.3590785 0.8865048 N 0.7635976 0.8352468 0.3972908 N 0.7635976 0.8352468 0.8865048 H 0.0000000 0.1971337 0.1629083 H 0.0000000 0.1971337 0.6521223 H 0.0000000 0.6733020 0.1629083 H 0.0000000 0.6733020 0.6521223 H 0.5090651 0.1971337 0.1629083 H 0.5090651 0.1971337 0.6521223 H 0.5090651 0.6733020 0.1629083 H 0.5090651 0.6733020 0.6521223 H 0.2545325 0.2790346 0.4075153 H 0.2545325 0.2790346 0.8967295 H 0.2545325 0.7552029 0.4075153 H 0.2545325 0.7552029 0.8967295 H 0.7635976 0.2790346 0.4075153 H 0.7635976 0.2790346 0.8967295 H 0.7635976 0.7552029 0.4075153 H 0.7635976 0.7552029 0.8967295 H 0.0717782 0.0795201 0.1932396 H 0.0717782 0.0795201 0.6824537 H 0.0717782 0.5556884 0.1932396 H 0.0717782 0.5556884 0.6824537 H 0.5808432 0.0795201 0.1932396 H 0.5808432 0.0795201 0.6824537 H 0.5808432 0.5556884 0.1932396 H 0.5808432 
0.5556884 0.6824537 H 0.1827544 0.3966482 0.4378466 H 0.1827544 0.3966482 0.9270607 H 0.1827544 0.8728165 0.4378466 H 0.1827544 0.8728165 0.9270607 H 0.6918194 0.3966482 0.4378466 H 0.6918194 0.3966482 0.9270607 H 0.6918194 0.8728165 0.4378466 H 0.6918194 0.8728165 0.9270607 H 0.3263107 0.3966482 0.4378466 H 0.3263107 0.3966482 0.9270607 H 0.3263107 0.8728165 0.4378466 H 0.3263107 0.8728165 0.9270607 H 0.8353757 0.3966482 0.4378466 H 0.8353757 0.3966482 0.9270607 H 0.8353757 0.8728165 0.4378466 H 0.8353757 0.8728165 0.9270607 H 0.4372869 0.0795201 0.1932396 H 0.4372869 0.0795201 0.6824537 H 0.4372869 0.5556884 0.1932396 H 0.4372869 0.5556884 0.6824537 H 0.9463518 0.0795201 0.1932396 H 0.9463518 0.0795201 0.6824537 H 0.9463518 0.5556884 0.1932396 H 0.9463518 0.5556884 0.6824537 H 0.0000000 0.4428365 0.4862788 H 0.0000000 0.4428365 0.9754928 H 0.0000000 0.9190048 0.4862788 H 0.0000000 0.9190048 0.9754928 H 0.5090651 0.4428365 0.4862788 H 0.5090651 0.4428365 0.9754928 H 0.5090651 0.9190048 0.4862788 H 0.5090651 0.9190048 0.9754928 H 0.2545325 0.0333318 0.2416718 H 0.2545325 0.0333318 0.7308858 H 0.2545325 0.5095001 0.2416718 H 0.2545325 0.5095001 0.7308858 H 0.7635976 0.0333318 0.2416718 H 0.7635976 0.0333318 0.7308858 H 0.7635976 0.5095001 0.2416718 H 0.7635976 0.5095001 0.7308858 H 0.0789051 0.1309463 0.4490985 H 0.0789051 0.1309463 0.9383126 H 0.0789051 0.6071146 0.4490985 H 0.0789051 0.6071146 0.9383126 H 0.5879701 0.1309463 0.4490985 H 0.5879701 0.1309463 0.9383126 H 0.5879701 0.6071146 0.4490985 H 0.5879701 0.6071146 0.9383126 H 0.1756274 0.3452220 0.2044915 H 0.1756274 0.3452220 0.6937056 H 0.1756274 0.8213903 0.2044915 H 0.1756274 0.8213903 0.6937056 H 0.6846925 0.3452220 0.2044915 H 0.6846925 0.3452220 0.6937056 H 0.6846925 0.8213903 0.2044915 H 0.6846925 0.8213903 0.6937056 H 0.3334376 0.3452220 0.2044915 H 0.3334376 0.3452220 0.6937056 H 0.3334376 0.8213903 0.2044915 H 0.3334376 0.8213903 0.6937056 H 0.8425027 0.3452220 0.2044915 H 0.8425027 0.3452220 
0.6937056 H 0.8425027 0.8213903 0.2044915 H 0.8425027 0.8213903 0.6937056 H 0.4301600 0.1309463 0.4490985 H 0.4301600 0.1309463 0.9383126 H 0.4301600 0.6071146 0.4490985 H 0.4301600 0.6071146 0.9383126 H 0.9392250 0.1309463 0.4490985 H 0.9392250 0.1309463 0.9383126 H 0.9392250 0.6071146 0.4490985 H 0.9392250 0.6071146 0.9383126 \end{verbatim} \end{center} \newpage\subsection{Tetragonal Structure} \noindent Starting fractional coordinates ($x/a$, $y/b$, and $z/c$) and lattice parameters used for CPMD calculations in the high-temperature, tetragonal phase. \begin{eqnarray*} a &=& b = 10.526\;\text{\AA}\\ c &=& 10.1008\;\text{\AA}\\ \end{eqnarray*} \begin{center}\scriptsize \begin{verbatim} B 0.0000000 0.0000000 0.0016201 B 0.0000000 0.0000000 0.5016280 B 0.0000000 0.5000038 0.0016201 B 0.0000000 0.5000038 0.5016280 B 0.5000038 0.0000000 0.0016201 B 0.5000038 0.0000000 0.5016280 B 0.5000038 0.5000038 0.0016201 B 0.5000038 0.5000038 0.5016280 B 0.2500019 0.2500019 0.2516239 B 0.2500019 0.2500019 0.7516319 B 0.2500019 0.7500057 0.2516239 B 0.2500019 0.7500057 0.7516319 B 0.7500057 0.2500019 0.2516239 B 0.7500057 0.2500019 0.7516319 B 0.7500057 0.7500057 0.2516239 B 0.7500057 0.7500057 0.7516319 N 0.0000000 0.0000000 0.3434755 N 0.0000000 0.0000000 0.8434833 N 0.0000000 0.5000038 0.3434755 N 0.0000000 0.5000038 0.8434833 N 0.5000038 0.0000000 0.3434755 N 0.5000038 0.0000000 0.8434833 N 0.5000038 0.5000038 0.3434755 N 0.5000038 0.5000038 0.8434833 N 0.2500019 0.2500019 0.0934715 N 0.2500019 0.2500019 0.5934794 N 0.2500019 0.7500057 0.0934715 N 0.2500019 0.7500057 0.5934794 N 0.7500057 0.2500019 0.0934715 N 0.7500057 0.2500019 0.5934794 N 0.7500057 0.7500057 0.0934715 N 0.7500057 0.7500057 0.5934794 H 0.0000000 0.4259533 0.3095699 H 0.0000000 0.4259533 0.8095778 H 0.0000000 0.9259571 0.3095699 H 0.0000000 0.9259571 0.8095778 H 0.5000038 0.4259533 0.3095699 H 0.5000038 0.4259533 0.8095778 H 0.5000038 0.9259571 0.3095699 H 0.5000038 0.9259571 0.8095778 H 0.2500019 0.3240524 
0.0595660 H 0.2500019 0.3240524 0.5595738 H 0.2500019 0.8240562 0.0595660 H 0.2500019 0.8240562 0.5595738 H 0.7500057 0.3240524 0.0595660 H 0.7500057 0.3240524 0.5595738 H 0.7500057 0.8240562 0.0595660 H 0.7500057 0.8240562 0.5595738 H 0.0000000 0.0999007 0.0389506 H 0.0000000 0.0999007 0.5389586 H 0.0000000 0.5999046 0.0389506 H 0.0000000 0.5999046 0.5389586 H 0.5000038 0.0999007 0.0389506 H 0.5000038 0.0999007 0.5389586 H 0.5000038 0.5999046 0.0389506 H 0.5000038 0.5999046 0.5389586 H 0.2500019 0.1501012 0.2889545 H 0.2500019 0.1501012 0.7889625 H 0.2500019 0.6501050 0.2889545 H 0.2500019 0.6501050 0.7889625 H 0.7500057 0.1501012 0.2889545 H 0.7500057 0.1501012 0.7889625 H 0.7500057 0.6501050 0.2889545 H 0.7500057 0.6501050 0.7889625 H 0.4316033 0.0395003 0.3095049 H 0.4316033 0.0395003 0.8095127 H 0.4316033 0.5395041 0.3095049 H 0.4316033 0.5395041 0.8095127 H 0.9316071 0.0395003 0.3095049 H 0.9316071 0.0395003 0.8095127 H 0.9316071 0.5395041 0.3095049 H 0.9316071 0.5395041 0.8095127 H 0.0684005 0.0395003 0.3095049 H 0.0684005 0.0395003 0.8095127 H 0.0684005 0.5395041 0.3095049 H 0.0684005 0.5395041 0.8095127 H 0.5684043 0.0395003 0.3095049 H 0.5684043 0.0395003 0.8095127 H 0.5684043 0.5395041 0.3095049 H 0.5684043 0.5395041 0.8095127 H 0.3184024 0.2105016 0.0595009 H 0.3184024 0.2105016 0.5595089 H 0.3184024 0.7105054 0.0595009 H 0.3184024 0.7105054 0.5595089 H 0.8184062 0.2105016 0.0595009 H 0.8184062 0.2105016 0.5595089 H 0.8184062 0.7105054 0.0595009 H 0.8184062 0.7105054 0.5595089 H 0.1816014 0.2105016 0.0595009 H 0.1816014 0.2105016 0.5595089 H 0.1816014 0.7105054 0.0595009 H 0.1816014 0.7105054 0.5595089 H 0.6816052 0.2105016 0.0595009 H 0.6816052 0.2105016 0.5595089 H 0.6816052 0.7105054 0.0595009 H 0.6816052 0.7105054 0.5595089 H 0.0866007 0.4505034 0.0385006 H 0.0866007 0.4505034 0.5385085 H 0.0866007 0.9505072 0.0385006 H 0.0866007 0.9505072 0.5385085 H 0.5866045 0.4505034 0.0385006 H 0.5866045 0.4505034 0.5385085 H 0.5866045 0.9505072 0.0385006 H 
0.5866045 0.9505072 0.5385085 H 0.4134031 0.4505034 0.0385006 H 0.4134031 0.4505034 0.5385085 H 0.4134031 0.9505072 0.0385006 H 0.4134031 0.9505072 0.5385085 H 0.9134070 0.4505034 0.0385006 H 0.9134070 0.4505034 0.5385085 H 0.9134070 0.9505072 0.0385006 H 0.9134070 0.9505072 0.5385085 H 0.1634012 0.2995023 0.2885046 H 0.1634012 0.2995023 0.7885125 H 0.1634012 0.7995061 0.2885046 H 0.1634012 0.7995061 0.7885125 H 0.6634051 0.2995023 0.2885046 H 0.6634051 0.2995023 0.7885125 H 0.6634051 0.7995061 0.2885046 H 0.6634051 0.7995061 0.7885125 H 0.3366026 0.2995023 0.2885046 H 0.3366026 0.2995023 0.7885125 H 0.3366026 0.7995061 0.2885046 H 0.3366026 0.7995061 0.7885125 H 0.8366064 0.2995023 0.2885046 H 0.8366064 0.2995023 0.7885125 H 0.8366064 0.7995061 0.2885046 H 0.8366064 0.7995061 0.7885125 \end{verbatim} \end{center} \newpage\section{Detailed Rotational Barriers} \mbox{} \begin{figure}[h] \centering \includegraphics[width=\columnwidth]{barriers} \caption{\label{fig:barriers}Comparison of rotational barriers as obtained by NEB calculations and fixed-geometry calculations. The maxima of these curves are tabulated in Table~1 of the main manuscript.} \end{figure} \vspace*{3.5in}
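The fractional coordinates listed above can be converted to Cartesian positions by scaling each component with the corresponding lattice parameter, since all cell angles are $90^{\circ}$. A minimal Python sketch (an illustration, not part of the CPMD workflow; the atom chosen for the check is simply the second boron entry pattern of the orthorhombic listing):

```python
# Convert fractional coordinates (x/a, y/b, z/c) to Cartesian angstroms.
# With alpha = beta = gamma = 90 degrees the transformation is a simple
# per-axis scaling by the lattice parameters.

# Orthorhombic lattice constants quoted above, in angstroms
A, B, C = 10.88466, 9.88096, 10.26892

def frac_to_cart(x, y, z):
    """Map fractional coordinates to Cartesian angstroms (orthogonal cell)."""
    return (x * A, y * B, z * C)

# First boron atom of the orthorhombic listing
print(frac_to_cart(0.0000000, 0.0804724, 0.0009784))
```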
\section{Introduction} It is now widely accepted that an active galactic nucleus (AGN) is a supermassive black hole surrounded by an accretion disc. Above the accretion disc there is dense, rapidly-moving gas making up the so-called broad line region (BLR), which emits broad emission lines by reprocessing the continuum radiation from the inner accretion disc (see \citealt{Gaskell09}). The BLR is believed to consist of dense clumps of hot gas ($n_H > 10^9$ cm$^{-3}$; $T \sim 10^4$ K) embedded in a much hotter, rarefied medium. The motions of the line-emitting clouds are the main cause of the broadening of the line profiles. The profiles of the broad emission lines are therefore a major source of information about the geometry and kinematics of the BLR. Profiles can be broadly categorized into two shapes: single-peaked and double-peaked. It is believed that double-peaked profiles are emitted from a disc-like clumpy structure (e.g., \citealp{Chen89b,Chen89a,Eracleous94,Eracleous03,Strateva03}). Although obvious double-peaked profiles are seen in only a small fraction of AGNs, this does not mean that such discs are absent in other AGNs. Several studies have shown that, under specific conditions, a disc-like clumpy structure can even produce single-peaked broad emission lines (e.g., \citealp{Chen89a,Dumont90a,Kollatschny02,Kollatschny03}). Spectropolarimetric observations (\citealt{Smith05}) imply the presence of clumpy discs in the BLR. On the other hand, other authors have suggested a two-component model in which a spherical distribution of clouds surrounds the central black hole in addition to the distribution in the midplane (e.g., \citealp{Popovic04,Bon09}). In this model, the disc is responsible for producing the broad wings while the spherical distribution produces the narrow cores; the integrated emission line profile is a combination of the two. 
Using the width of the broad Balmer emission lines, $\textsc{FWHM}$, and an effective BLR radius, $r_{BLR}$, obtained either by reverberation mapping (RM) (e.g., \citealp{Blandford82,Gaskell86,Peterson93,Peterson04}) or from the relationship between optical luminosity and $r_{BLR}$ (\citealp{Dibai77,Kaspi00,Bentz06}), black hole masses are estimated from the virial theorem, $M = fr_{BLR}\textsc{FWHM}^{2}/G$, where $G$ is the gravitational constant and $f$ is the ``virial factor'' depending on the geometry and kinematics of the BLR and the inclination angle, $\theta_{0}$. Unfortunately, with current technology it is impossible to directly observe the structure of the BLR. Therefore the true value of $f$ for each object is unknown and we must use an average virial factor, $\langle f \rangle$, to estimate the mass of the supermassive black hole. Comparison of virial masses with independent estimates of black hole masses from the $M - \sigma$ relationship (see \citealt{Kormendy13}) has given empirical estimates of the value of $\langle f \rangle$. However, each study has found a different value of $\langle f \rangle$: for example, \citet{Onken04} obtained $\langle f \rangle = 5.5 \pm 1.8$, \citet{Woo10} obtained $\langle f \rangle = 5.2 \pm 1.2$, \citet{Graham11} obtained $\langle f \rangle = 3.8^{+0.7}_{-0.6}$ and \citet{Grier13} obtained $\langle f \rangle = 4.31 \pm 1.05$, because each group used a different sample. These differing values of $\langle f \rangle$ prevent us from obtaining a reliable black hole mass. The situation for low-luminosity AGNs (LLAGNs) is somewhat different. The lack of broad optical emission lines in the faintest cases ($10^{-9} L_{Edd} < L < 10^{-6} L_{Edd}$) has led to two scenarios about the presence of the BLR in LLAGNs. The first, which is somewhat supported by theoretical models (e.g., \citealp{Nicastro00,Elitzur06}), simply says that the BLR is absent in such faint objects. 
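The virial mass estimate above is a one-line computation once $r_{BLR}$, $\textsc{FWHM}$ and $\langle f \rangle$ are fixed. A minimal sketch in Python; the input values below are illustrative, not taken from the text or from any particular object:

```python
# Virial black hole mass, M = f * r_BLR * FWHM^2 / G, in cgs units.
G = 6.674e-8          # gravitational constant, cm^3 g^-1 s^-2
M_SUN = 1.989e33      # solar mass, g
LIGHT_DAY = 2.59e15   # cm

def virial_mass(f, r_blr_lt_days, fwhm_km_s):
    """Return the virial mass in solar masses."""
    r = r_blr_lt_days * LIGHT_DAY   # BLR radius in cm
    v = fwhm_km_s * 1e5             # FWHM in cm/s
    return f * r * v**2 / G / M_SUN

# Illustrative inputs: <f> = 5.5, r_BLR = 20 light days, FWHM = 3000 km/s,
# giving roughly 2e8 solar masses.
print("%.2e" % virial_mass(5.5, 20.0, 3000.0))
```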
However, there is clear evidence in favor of the presence of the BLR in some LLAGNs, at least those with $ L > 10^{-5} L_{Edd}$. This supports a second scenario in which the BLR exists in LLAGNs but cannot be detected in the faintest cases because the intensity of the broad emission lines is below the detection threshold set mostly by starlight in the host galaxy. In the Palomar survey, broad H$\alpha$ emission was found to be present in a remarkably high fraction of LINERs (LLAGNs) (see \citealt{Ho08}). Moreover, double-peaked broad emission lines have been found in some LINERs, including NGC~7213 (\citealt{Filippenko84}), NGC~1097 (\citealt{Storchi93}), M81 (\citealt{Bower96}), NGC~4450 (\citealt{Ho00}) and NGC~4203 (\citealt{Shields00}). Other studies have also shown the presence of variable broad emission lines for NGC~1097 (\citealt{Storchi93}), M81 (\citealt{Bower96}) and NGC~3065 (e.g., \citealt{Eracleous01}). Recently, \citet{Balmaverde14} found other LLAGNs with $ L=10^{-5} L_{Edd}$ showing BLRs. Since the optical spectra of LLAGNs are severely contaminated by the host galaxy, some authors have suggested using the widths of the Paschen lines rather than the Balmer lines to determine the $\textsc{FWHM}$ (\citealp{Landt11,Landt13,La Franca15}). Also, in order to estimate the BLR radius, $r_{BLR}$, one can use the near-IR continuum luminosity at 1 $\mu m$ (\citealp{Landt11,Landt13}) or the X-ray luminosity (\citealp{Greene10}) rather than the optical luminosity. \citet{Whittle86} used the Boltzmann equation to describe the kinematics of the BLR. More recently, \citet{Wang12} showed that the BLR can be considered as a collisionless ensemble of particles. By considering the Newtonian gravity of the black hole and a quadratic drag force, they used the collisionless Boltzmann equation (CBE) to study the dynamics of the clouds in the case where magnetic forces are unimportant. 
Following this approach, some authors have included the effect of magnetic fields on the dynamics of the clouds in the BLR (e.g., \citealt{Khajenabi14}). In this paper, we use the CBE to describe the distribution of the clouds in the BLR. The structure of this paper is as follows: in Section~\ref{s2} we establish our basic formalism and apply it to classify the clumpy structure of the BLR. In Section~\ref{s3} we concentrate on LLAGNs and give more details about the distribution of the clouds in such systems; moreover, we derive the virial factor $f$ for them. In the final section, the conclusions are summarized. \section{Kinematic Equations of Clouds in the BLR}\label{s2} This paper considers only axisymmetric, steady-state systems. In subsection~\ref{ss21} we start by deriving the general form of the Jeans equations in cylindrical coordinates ($R, \phi, z$) for a system of particles subject to velocity-dependent as well as position-dependent forces. Then, by assuming axisymmetry ($\partial /\partial \phi =0$) and a steady state ($\partial /\partial t =0$), we simplify the Jeans equations and use them to describe the distribution of the clouds in the BLR. \subsection{Jeans equations}\label{ss21} If we define the distribution function $F$ as $F=\Delta n/\Delta x \Delta y \Delta z \Delta^{3} v$, the continuity equation in phase space (see \citealp[][eq. 4.11]{Binney87}) is given by $ \partial F/\partial t + \sum_{\alpha = 1}^{6} \partial (F \dot{\omega}_{\alpha})/\partial \omega_{\alpha} = 0 $. This equation can be rewritten as $DF/Dt+F(\partial a_{x}/\partial v_{x}+\partial a_{y}/\partial v_{y}+\partial a_{z}/\partial v_{z})=0 $ in Cartesian position-velocity space ($x,y,z,v_{x},v_{y},v_{z}$), where $DF/Dt$ and $\mathbf{a}$ are, respectively, the Lagrangian derivative in phase space and the resultant acceleration vector. 
On the other hand, by the chain rule, the partial derivatives with respect to the Cartesian components of the velocity are related to those with respect to the cylindrical components by $ \partial/\partial v_{x}=\cos \phi (\partial/\partial v_{R}) - \sin \phi (\partial/\partial v_{\phi}) $, $ \partial/\partial v_{y}=\sin \phi (\partial/\partial v_{R}) + \cos \phi (\partial/\partial v_{\phi}) $ and $ \partial/\partial v_{z}=\partial/\partial v_{z} $. Likewise, the relationship between the Cartesian and cylindrical components of the acceleration vector is $ a_{x}=a_{R}\cos \phi - a_{\phi} \sin \phi $, $ a_{y}=a_{R}\sin \phi + a_{\phi} \cos \phi $ and $ a_{z}=a_{z} $. Combining all of these equations, we obtain the extended form of the CBE in cylindrical position-velocity space ($R,\phi, z, v_{R}, v_{\phi}, v_{z}$) as \[ \frac{\partial F}{\partial t}+v_{R} \frac{\partial F}{\partial R}+\frac{v_{\phi}}{R} \frac{\partial F}{\partial \phi}+v_{z} \frac{\partial F}{\partial z}+\left(a_{R}+\frac{v^{2}_{\phi}}{R}\right) \frac{\partial F}{\partial v_{R}}+ \] \begin{equation}\label{eq1} \left(a_{\phi}-\frac{v_{R}v_{\phi}}{R}\right) \frac{\partial F}{\partial v_{\phi}}+a_{z} \frac{\partial F}{\partial v_{z}}+F\left(\frac{\partial a_{R}}{\partial v_{R}}+\frac{\partial a_{\phi}}{\partial v_{\phi}}+\frac{\partial a_{z}}{\partial v_{z}}\right)=0. \end{equation} As can be seen, in the absence of velocity-dependent forces, equation (\ref{eq1}) reduces to the standard form (see \citealp[][eq. 4.15]{Binney87}). 
As is shown in Appendix \ref{a1}, the Jeans equations derived from equation (\ref{eq1}) can be written as \begin{equation}\label{eq2} \frac{\partial n}{\partial t} + \frac{1}{R} \frac{\partial}{\partial R} (nR\langle v_{R} \rangle)+\frac{1}{R}\frac{\partial}{\partial \phi} (n\langle v_{\phi} \rangle)+\frac{\partial}{\partial z} (n\langle v_{z} \rangle)=0, \end{equation} and \[\frac{\partial}{\partial t} (n\langle v_{R} \rangle)+\frac{\partial}{\partial R} (n\langle v^{2}_{R} \rangle)+\frac{1}{R}\frac{\partial}{\partial \phi} (n\langle v_{R}v_{\phi} \rangle)+\frac{\partial}{\partial z} (n\langle v_{R}v_{z} \rangle)\] \begin{equation}\label{eq3} +n\frac{\langle v^{2}_{R}\rangle -\langle v^{2}_{\phi}\rangle}{R}-n \langle a_{R} \rangle =0, \end{equation} and \[\frac{\partial}{\partial t} (n\langle v_{\phi} \rangle)+\frac{\partial}{\partial R} (n\langle v_{R}v_{\phi} \rangle)+\frac{1}{R} \frac{\partial}{\partial \phi} (n\langle v^{2}_{\phi} \rangle)+\frac{\partial}{\partial z} (n\langle v_{\phi}v_{z} \rangle)\] \begin{equation}\label{eq4} +\frac{2n}{R}\langle v_{\phi}v_{R}\rangle -n \langle a_{\phi} \rangle =0, \end{equation} and \[\frac{\partial}{\partial t} (n\langle v_{z} \rangle)+\frac{\partial}{\partial R} (n\langle v_{R}v_{z} \rangle)+\frac{1}{R}\frac{\partial}{\partial \phi} (n\langle v_{\phi}v_{z} \rangle)+\frac{\partial}{\partial z} (n\langle v^{2}_{z} \rangle)\] \begin{equation}\label{eq5} +\frac{n\langle v_{R}v_{z}\rangle}{R}-n \langle a_{z} \rangle =0, \end{equation} where $n$ is the volume number density in position space. Equations (\ref{eq2})--(\ref{eq5}) are an extended form of the Jeans equations describing a collisionless system of particles subject to both velocity-dependent and position-dependent forces. Considering gravity as the dominant force for an axisymmetric system of particles, equations (\ref{eq2})--(\ref{eq5}) reduce to the standard form of the Jeans equations (see equations 4.28 and 4.29 of \citealp{Binney87}). 
\subsection{Dynamics and geometry of the BLR}\label{ss22} In this subsection, considering a steady axisymmetric system, we include the Newtonian gravity of the black hole, the isotropic radiation pressure of the central source, and the drag force between the clouds and the ambient medium in the linear regime as the dominant forces. First we discuss the role of radiation pressure and gravity; then, through an analysis of the clouds near the midplane, we classify the distribution of the clouds in the BLR. \subsubsection{Radiation pressure versus gravity} Assuming that the clouds are optically thick, the radiative force can be expressed as \begin{equation}\label{eq6} \mathbf{F}_{rad}=\frac{\sigma}{c}\mathcal{F}\mathbf{e}_{r}, \end{equation} where $\sigma $ and $ c $ are the cloud's cross-section and the speed of light respectively, and the isotropic radiation flux, $\mathcal{F}$, is \begin{equation}\label{eq7} \mathcal{F}(r)=\frac{L}{4\pi r^{2}}. \end{equation} Here $L$ is the bolometric luminosity of the central source and $ r=\sqrt{R^{2}+z^{2}}$ is the spherical radius. For the clouds near the midplane, $ z \ll R$, the radiative force per unit mass can be written as \begin{equation}\label{eq8} \mathbf{a}_{Rad}=\Omega_{k,mid}^{2}\frac{3l}{2\mu \sigma_{T}N_{cl}}(R\mathbf{e}_{R}+z\mathbf{e}_{z}), \end{equation} where $\Omega_{k,mid}=\sqrt{GM/R^{3}}$ is the Keplerian angular velocity in the midplane, $\mu $ is the mean molecular weight, $\sigma_{T}$ is the Thomson cross-section, $l$ is the Eddington ratio, and $ N_{cl}$ is the column density of each cloud. Following previous studies, we consider the clouds, with conserved mass $ m_{cl}$, to be in pressure equilibrium with the inter-cloud gas (e.g., \citealp{Netzer10,Krause11,Khajenabi15}). 
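Equation (\ref{eq8}) implies that, near the midplane, the ratio of radiative to gravitational acceleration is simply $3l/(2\mu \sigma_{T}N_{cl})$. A minimal numerical sketch in Python (parameter values as adopted later in the text; this is an illustration, not part of the original analysis):

```python
# Ratio of radiative to gravitational acceleration near the midplane,
# a_rad / a_grav = 3*l / (2*mu*sigma_T*N_cl), read off from equation (8).
SIGMA_T = 6.7e-25   # Thomson cross-section, cm^2

def rad_to_grav(l=0.001, mu=0.61, N_cl=1e23):
    """Dimensionless radiation-to-gravity ratio for a single cloud."""
    return 3.0 * l / (2.0 * mu * SIGMA_T * N_cl)

# For l = 0.001 and N_cl = 1e23 cm^-2 the ratio is only a few per cent,
# so gravity dominates; it reaches unity where N_cl has dropped enough.
print(rad_to_grav())
```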
Furthermore, the pressure of the ambient medium, and hence the gas density in individual clouds, $n_{gas}$, are assumed to have a power-law dependence on the (spherical) distance from the centre, $n_{gas} \propto r^{-s}$. As a result, since $r \approx R $ for the clouds near the midplane, the column density defined by $ N_{cl}=m_{cl}/R_{cl}^{2}$ finally becomes \begin{equation}\label{eq9} N_{cl}=N_{0}\left(\frac{R}{R_{0}}\right)^{-2s/3}, \end{equation} where $R_{0}$ is one light day and $N_{0}$ is the column density at $R_{0}$. In addition to the gravitational and radiative forces, the drag force opposing the relative motion of the clouds and the ambient gas must also be taken into account. Depending on the size of the clouds, there are two regimes for the drag force: the Epstein and the Stokes regimes (e.g., \citealt{Armitage13}). In the Epstein regime, which dominates for small clouds, the magnitude of the force is proportional to the relative velocity. In the Stokes regime, the drag affecting the motion of large clouds increases as the square of the relative velocity. We assume the clouds are spherical, so the drag coefficient in the Stokes regime depends solely on the Reynolds number, which is proportional to the relative velocity. \citet{Shadmehri15} demonstrated that the Reynolds number in the inter-cloud gas is lower than unity. This means that (1) we can treat the inter-cloud gas as a laminar flow and (2) the drag coefficient in the Stokes regime is proportional to the inverse of the Reynolds number (e.g., \citealt{Armitage13}), and hence to the inverse of the relative velocity. 
As a result, in both the Epstein and Stokes regimes the magnitude of the drag force is proportional to the relative velocity, and both small and large clouds are affected by a linear drag force $\mathbf{F_{d}}=f_{l}(\mathbf{v}-\mathbf{w})$, where $\mathbf{w}$ is the velocity of the ambient medium and $f_{l}$ is the drag coefficient. The equations of motion of an individual cloud near the midplane are therefore given by \[a_{R}=-\Omega_{k,mid}^{2}R\left[1-\left(\frac{R}{R_{c}}\right)^{2s/3}\right]-f_{l}(v_{R}-w_{R}),\] \[a_{\phi}=-f_{l}(v_{\phi}-w_{\phi}),\] \begin{equation}\label{eq10} a_{z}=-\Omega_{k,mid}^{2}z\left[1-\left(\frac{R}{R_{c}}\right)^{2s/3}\right]-f_{l}(v_{z}-w_{z}), \end{equation} where $R_{c}$ is the critical radius defined as \begin{equation}\label{eq11} R_{c}=R_{0}\left( \frac{2\mu\sigma_{T}N_{0}}{3l}\right)^{3/2s}. \end{equation} From equations (\ref{eq8}) and (\ref{eq9}), we see that at $R=R_{c}$ the attractive gravitational and repulsive radiative forces cancel each other, so the resultant force vanishes. For $R > R_{c}$, the radiative force is stronger than gravity and accelerates the clouds away from the central black hole; for $R < R_{c}$, gravity overcomes the radiative force and pulls the clouds inwards. If we assume $ s=3/2 $, $ \mu=0.61 $, $ l=0.001 $, $ N_{0}\approx 10^{23} cm^{-2} $ and $ \sigma_{T}=6.7\times 10^{-25} cm^{2} $, the value of $ R_{c} $ becomes approximately 27 light days, which is of the order of the BLR radius (e.g., \citealt{Krolik91}). Assuming a steady state and axisymmetry, we substitute equations (\ref{eq10}) into equations (\ref{eq3})--(\ref{eq5}) to obtain \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{panel1a.eps}\\ \includegraphics[width=0.8\columnwidth]{panel1b.eps}\\ \includegraphics[width=0.8\columnwidth]{panel1c.eps} \caption{Classification of the clumpy BLR structure. The black filled circle on the left shows the supermassive black hole. 
The grey and white areas show the clumpy and non-clumpy regions in the BLR, respectively. The first panel shows class A, in which the clouds occupy all positions in the BLR. The second panel shows class B, in which there is a disc for $R < R_{c}$ and a cloudy torus (wind region) for $R > R_{c}$. The third panel shows class C, in which the clumpy structure in the BLR is disc-like.} \label{figure1} \end{figure} \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{panel2a.eps}\\ \includegraphics[width=0.8\columnwidth]{panel2b.eps} \caption{(a) $\log \chi$ versus $\log L$ for different values of the column density, $N_{0}$. (b) $\log \chi$ versus $\log N_{0}$ for different values of the bolometric luminosity, $L$. In this figure, $\log \chi < -2$, $ -2 < \log \chi < 0$ and $\log \chi > 0$ represent classes A, B and C, respectively. } \label{figure2} \end{figure} \[\frac{\partial}{\partial R} (n\langle v^{2}_{R} \rangle)+\frac{\partial}{\partial z} (n\langle v_{R}v_{z} \rangle)+n\frac{\langle v^{2}_{R}\rangle -\langle v^{2}_{\phi}\rangle }{R}\] \begin{equation}\label{eq12} +n\Omega_{k,mid}^{2}R\left[1-\left(\frac{R}{R_{c}}\right)^{2s/3}\right]+nf_{l}(\langle v_{R}\rangle -w_{R}) =0, \end{equation} and \begin{equation}\label{eq13} \frac{\partial}{\partial R} (n\langle v_{R}v_{\phi} \rangle)+\frac{\partial}{\partial z} (n\langle v_{\phi}v_{z} \rangle)+\frac{2n}{R}\langle v_{\phi}v_{R}\rangle+nf_{l}(\langle v_{\phi}\rangle -w_{\phi}) =0, \end{equation} and \[\frac{\partial}{\partial R} (n\langle v_{R}v_{z} \rangle)+\frac{\partial}{\partial z} (n\langle v^{2}_{z} \rangle)+\frac{n\langle v_{R}v_{z}\rangle}{R}+n\Omega_{k,mid}^{2}z\left[1-\left(\frac{R}{R_{c}}\right)^{2s/3}\right] \] \begin{equation}\label{eq14} +nf_{l}(\langle v_{z}\rangle -w_{z}) =0. \end{equation} In this paper, we assume that the drag coefficients are sufficiently large that the clouds and the ambient medium are strongly coupled to each other. 
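The estimate $R_{c} \approx 27$ light days quoted above follows directly from equation (\ref{eq11}); for $s=3/2$ the exponent $3/2s$ is unity. A minimal check in Python, using the parameter values adopted in the text:

```python
def critical_radius(s=1.5, mu=0.61, l=0.001, N0=1e23, sigma_T=6.7e-25):
    """R_c / R_0 from equation (11): (2*mu*sigma_T*N0 / (3*l))**(3/(2*s))."""
    return (2.0 * mu * sigma_T * N0 / (3.0 * l)) ** (3.0 / (2.0 * s))

# With the parameter values quoted in the text (s = 3/2, mu = 0.61,
# l = 0.001, N0 ~ 1e23 cm^-2), R_c comes out near 27 light days
# (in units of R_0 = 1 light day).
print(round(critical_radius(), 1))
```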
Thus we can write $\langle v_{R} \rangle=w_{R}$, $\langle v_{\phi} \rangle=w_{\phi}$ and $\langle v_{z} \rangle=w_{z}$. For simplicity we also assume $\langle v_{z} \rangle =0$, $\langle v_{R}v_{z} \rangle =0 $ and $\langle v_{\phi}v_{z} \rangle =0$. Therefore equations (\ref{eq2}), (\ref{eq12}), (\ref{eq13}) and (\ref{eq14}) reduce, respectively, to \begin{equation}\label{eq15} \frac{1}{R} \frac{\partial}{\partial R} (nR\langle v_{R} \rangle)=0, \end{equation} \begin{equation}\label{eq16} \frac{\partial}{\partial R} (n\langle v^{2}_{R} \rangle)+n\frac{\langle v^{2}_{R}\rangle -\langle v^{2}_{\phi}\rangle }{R}+n\Omega_{k,mid}^{2}R\left[1-\left(\frac{R}{R_{c}}\right)^{2s/3}\right]=0, \end{equation} \begin{equation}\label{eq17} \frac{\partial}{\partial R} (n\langle v_{R}v_{\phi} \rangle)+\frac{2n}{R}\langle v_{R}v_{\phi}\rangle =0, \end{equation} and \begin{equation}\label{eq18} \langle v^{2}_{z} \rangle \frac{\partial n}{\partial z} +n\Omega_{k,mid}^{2}z\left[1-\left(\frac{R}{R_{c}}\right)^{2s/3}\right]=0. \end{equation} For simplicity, $\langle v^{2}_{z} \rangle $ is assumed to be constant in equation (\ref{eq18}). Equation (\ref{eq18}) shows that while for $R < R_{c}$ we have $\partial n/\partial z<0 $ and most of the clouds are distributed near the midplane, for $R>R_{c}$ we have $\partial n/\partial z>0 $ and the clouds tend to be located at higher altitudes. This result is in agreement with the discussion of $ R_{c} $ above. \subsubsection{Classification of the clumpy distribution} \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{panel3a.eps}\\ \includegraphics[width=0.8\columnwidth]{panel3b.eps}\\ \includegraphics[width=0.8\columnwidth]{panel3c.eps}\\ \includegraphics[width=0.8\columnwidth]{panel3d.eps} \caption{The scale height $ h_{g}$ versus the radial distance, $R$. 
Panels (a), (b), (c) and (d) respectively show the effect of the central luminosity, the black hole mass, the $z$-component of the velocity dispersion and the density index on the geometric shape and thickness of the clumpy disc.} \label{figure3} \end{figure} \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{panel4a.eps} \includegraphics[width=0.8\columnwidth]{panel4b.eps} \caption{The distribution of BLR clouds for different values of the central luminosity, with $ M=4.1 \times 10^{6}M_{\odot}$, $\sigma_{z}=10^{7} cm/s$, $ s=1 $ and $ n_{tot}=10^{6}$. (a) The surface number density versus radial distance. (b) The volume number density versus radial distance at $ z=0, 0.2 $ and $ 0.5 $ light day (lt-d).} \label{figure4} \end{figure} \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{panel5a.eps} \includegraphics[width=0.8\columnwidth]{panel5b.eps} \caption{The distribution of BLR clouds for different density indices, with $M=4.1 \times 10^{6}M_{\odot}$, $\sigma_{z}=10^{7} cm/s$, $ l=10^{-4}$ and $n_{tot}=10^{6}$. (a) The surface number density versus radial distance. (b) The volume number density versus radial distance at $ z=0, 0.2 $ and $ 0.5 $ light days (lt-d).} \label{figure5} \end{figure} \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{panel6a.eps} \includegraphics[width=0.8\columnwidth]{panel6b.eps} \caption{The distribution of BLR clouds for different values of the total number of clouds, with $M=4.1 \times 10^{6}M_{\odot}$, $\sigma_{z}=10^{7} cm/s$, $ l=10^{-2}$ and $ s=1 $. 
(a) Surface number density versus radial distance. (b) Volume number density versus radial distance at $ z=0, 0.2 $ and $ 0.5 $ light days (lt-d).} \label{figure6} \end{figure} In this part we compare $R_{c}$ with the innermost radius of the BLR, $R_{in}$, and the outermost radius, $R_{out}$, to show that there are three classes of clumpy distribution in the BLR: \textbf{Class A}: $R_{c} < R_{in}$ and the clouds fill all positions in the intercloud gas (see the first panel in Figure 1). \textbf{Class B}: $R_{in} < R_{c} < R_{out}$ and the clumpy structure is the combination of an inner disc extending from $R_{in}$ to $R_{c}$ and an outer cloudy torus (wind region) extending from $R_{c}$ to $R_{out}$ (see the second panel in Figure 1). \textbf{Class C}: $R_{c}>R_{out}$ and the clumpy structure is disc-like (see the third panel in Figure 1). Depending on the observational data available for each AGN, we suggest three methods to determine which class it belongs to. 1) For some AGNs with black hole masses independently estimated from the $M - \sigma$ relationship (e.g., \citealp{Onken04,Woo10,Graham11,Grier13}), we can determine the Eddington luminosity, defined by $ L_{Edd}=1.3 \times 10^{38} M/M_{\odot}\; erg/s $. On the other hand, by measuring the bolometric luminosity, $L$, we have the Eddington ratio $ l=L/L_{Edd}$. Also, we assume $ 1.2 \times 10^{21}cm^{-2}< N_{0} < 1.5 \times 10^{24}cm^{-2}$ for the column density (e.g., \citealt{Marconi08}) and $1 < s < 2.5$ (e.g., \citealt{Rees89}). We can then calculate the value of $R_{c}$ from equation (\ref{eq11}) and compare it with $R_{in}$ and $R_{out}$ specified by the reverberation mapping technique (e.g., \citealt{Krolik91}) to determine what class each object belongs to. 2) For cases where reverberation mapping is not available, the $R-L$ relationship (\citealp{Kaspi00,Bentz06}) can be used to estimate the BLR radius, which is then compared with $R_{c}$ estimated as described above. 
3) For AGNs with unknown black hole mass, if we assume $R_{in}=10R_{Sch}$ and $R_{out}=1000R_{Sch}$, we can define the parameter $\chi$ as $\chi =R_{c}/R_{out}= (0.005c^{2}R_{0}/GM)(2\mu \sigma_{T}N_{0}/3l)^{3/2s}$. Clearly, $ \chi < 0.01 $ implies $R_{c} < R_{in}$, and such systems belong to class A; the cases with $0.01 < \chi < 1$ and $\chi > 1$ belong to classes B and C, respectively. Assuming that $s=3/2$, we can combine the black hole mass with the Eddington ratio to express $\chi$ as \begin{equation}\label{eq19} \chi = 4.89 \times 10^{21} \frac{\mu N_{0}}{L}. \end{equation} The parameter $\chi$ derived from equation (\ref{eq19}) thus gives the class of BLR structure for AGNs with unknown black hole mass. The top and bottom panels of Figure \ref{figure2} show $\log \chi$ as a function of $\log L$ and $\log N_{0}$, respectively. In Figure 2 we can see that for quasars with luminosities ranging from $10^{44} erg/s $ to $ 10^{47} erg/s$, we have $ -2<\log \chi<0 $ for the less luminous systems and $\log \chi<-2 $ for the brightest cases. We therefore expect the BLR structure of these AGNs to be similar to classes A and B for the higher and lower luminosity cases, respectively. For Seyfert galaxies with $ 10^{41} erg/s < L < 10^{44} erg/s$, the expected distribution of clouds is as in classes B and C for higher and lower luminosity objects, respectively. Finally, for all the LLAGNs with $ L<10^{41} erg/s$, $\log \chi$ is positive and the BLR structure is as in class C (i.e., disc-like). Some studies, through a discussion of the shape of the broad emission lines, have shown the existence of a disc structure (class C) in LLAGNs (e.g., \citealp{Eracleous03,Storchi16}). Moreover, they have found a non-disc region (the outer torus of class B) in the outer parts of the BLR in Seyfert galaxies. As the final step in this section, we solve equations (\ref{eq15}) and (\ref{eq18}) in the disc-like approximation to obtain the volume number density $n$. 
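For $s=3/2$, this classification reduces to comparing $\log \chi$ from equation (\ref{eq19}) against the boundaries $-2$ and $0$. A minimal Python sketch (the luminosities below are illustrative; the class boundaries are those defined in the text):

```python
import math

def blr_class(L, N0=1e23, mu=0.61):
    """Classify the clumpy BLR structure via equation (19), valid for s = 3/2.

    L is the bolometric luminosity in erg/s; N0 is the column density
    at one light day in cm^-2.
    """
    chi = 4.89e21 * mu * N0 / L
    log_chi = math.log10(chi)
    if log_chi < -2.0:
        return "A"   # clouds fill the whole BLR
    if log_chi < 0.0:
        return "B"   # inner disc plus outer cloudy torus
    return "C"       # disc-like

# Bright quasar, intermediate object, LLAGN (illustrative luminosities)
for L in (1e47, 1e45, 1e40):
    print("L = %.0e erg/s -> class %s" % (L, blr_class(L)))
```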
Because of the assumption of strong coupling between the BLR clouds and the hot intercloud medium, we have $\langle v_{R} \rangle =w_{R}$. Furthermore, if we adopt an advection-dominated accretion flow (ADAF) as the model describing the intercloud gas, the self-similar solution allows us to write $ w_{R}=-\alpha c_{1}v_{k,mid}$ (e.g., \citealt{Narayan94}), where $\alpha $ and $ c_{1}$ are two constants of order unity and $ v_{k,mid}$ is the Keplerian velocity in the midplane. On the other hand, from equation (\ref{eq15}) we see that $ nR\langle v_{R} \rangle $ does not depend on $R$, so \begin{equation}\label{eq20} n(R,z)=-\frac{\Lambda(z)}{\alpha c_{1}\sqrt{GM}}R^{-\frac{1}{2}}, \end{equation} where $\Lambda(z)$ gives the vertical dependence of $n$. Substituting equation (\ref{eq20}) into equation (\ref{eq18}) and performing some algebraic manipulation, equation (\ref{eq18}) can be written as \begin{equation}\label{eq21} \langle v^{2}_{z} \rangle \frac{\partial \Lambda(z)}{\partial z} +\Omega_{k,mid}^{2}z\left[1-\left(\frac{R}{R_{c}}\right)^{2s/3}\right]\Lambda(z)=0. \end{equation} If we substitute $\Lambda(z)$, derived by integrating equation (\ref{eq21}), into equation (\ref{eq20}), the volume number density is given by \begin{equation}\label{eq22} n(R,z)=-\frac{k_{0}}{\alpha c_{1}\sqrt{GM}}R^{-\frac{1}{2}}\exp \left(-\frac{z^{2}}{2h^{2}_{g}}\right), \end{equation} where $k_{0}$ is the constant of integration and $h_{g}$ is the scale height, given by \begin{equation}\label{eq23} h_{g}(R)=\frac{\sigma_{z}}{\Omega_{k,mid}}\sqrt{\frac{1}{1-(R/R_{c})^{\frac{2s}{3}}}}. 
\end{equation} Integrating $n$ over all positions occupied by the clouds, we calculate $ k_{0}$ in terms of the total number of clouds $ n_{tot}$, and substitute it into equation (\ref{eq22}) to derive $n$ as \begin{equation}\label{eq24} n(R,z)=\frac{\sqrt{GM}}{(2\pi)^{3/2}}\frac{n_{tot}}{\sigma_{z}\gamma}R^{-1/2}\exp \left(-\frac{z^{2}}{2h^{2}_{g}}\right), \end{equation} where $\sigma_{z}=\sqrt{\langle v^{2}_{z} \rangle}$ is the $z$-component of the velocity dispersion and $\gamma $ is defined by $\gamma=\int_{R_{in}}^{R_{out}}\sqrt{\frac{1}{1-(R/R_{c})^{\frac{2s}{3}}}}R^{2}dR$, and where $R_{in}$ and $R_{out}$ are assumed to be 1 and 10 light days respectively. To calculate the surface number density, $\Sigma$, we integrate equation (\ref{eq24}) over $z$ to get \begin{equation}\label{eq25} \Sigma (R)=\frac{1}{2\pi}\frac{n_{tot}}{\gamma}R\sqrt{\frac{1}{1-(R/R_{c})^{\frac{2s}{3}}}}. \end{equation} In all subsequent sections, we adopt $\mu =0.61$. \section{VIRIAL FACTOR}\label{s3} \subsection{Disc-like configurations}\label{ss31} In the last section we saw that in class C we have a disc-like distribution for the BLR. There are now two questions: what is the geometric shape of the clumpy disc, and is its thickness small compared to its radial size? In this section, we will answer these questions and explore the role of various physical parameters on the shape and thickness of the clumpy disc. According to equation (\ref{eq24}) we consider the curved surface $ z=h_{g}(R)$ as the boundary between the clumpy and non-clumpy regions. By plotting the scale height profile as a function of the radial distance in Figure \ref{figure3}, we show the geometric shape of the clumpy disc. In Figure \ref{figure3}, we see that there is a positive correlation between the radial distance and the scale height. Panels a and b show the role of the central luminosity and the black hole mass on the thickness of the clumpy disc respectively. 
From these we see that with an increase in the central luminosity the disc thickness increases, and that an increase in the black hole mass leads to a decrease in the thickness. This is because as $l$ increases, radiative pressure dominates and pushes the clouds away from the midplane, whereas an increase in the black hole mass leads to an increase in the gravitational attraction pulling the clouds toward the midplane. Panels c and d show that there is similar behaviour for $h_{g}$ as a function of $\sigma_{z}$ and $s$: with an increase in either of them, $h_{g}$ increases as well. Figure \ref{figure3} also shows that, as suggested by some authors (e.g., \citealp{Dumont90b,Goad12}), the clumpy disc is flared (bowl-shaped). The thickness of the clumpy disc is small compared to the radial size ($ h_{g} \approx 0.1R$). This is important. Obviously, as the central luminosity declines, the clouds exposed to this radiation reprocess less energy. As a result the broad emission lines are weak in LLAGNs. However, in addition to this, the small thickness of the clumpy disc means that it covers only a small solid angle, so that little of the central radiation is captured. The fraction of radiation captured is given by $ d\Omega /4\pi \approx \Theta^{2}/2 \approx h_{g}^{2}/2R^{2}$, which is of the order of $ 10^{-3}$, where $ d\Omega $ is the solid angle covered by the clumpy disc and $\Theta \approx h_{g}/R$. Therefore, all the clouds located in the BLR of an LLAGN can only receive a fraction $\sim 0.001 $ of the central radiation, which itself is of the order of $ 10^{-4}-10^{-5}$ of the Eddington luminosity, and this leads to the presence of very weak broad emission lines in the spectra of LLAGNs. 
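As a rough numerical check of the flaring and of the $d\Omega/4\pi$ estimate above, the sketch below evaluates the scale height of equation (\ref{eq23}) with $\Omega_{k,mid}=\sqrt{GM/R^{3}}$, together with the captured fraction $\Theta^{2}/2$. The black hole mass, $\sigma_{z}$ and $R_{c}$ used here are illustrative assumptions, not the panel values of Figure \ref{figure3}.

```python
import math

G = 6.674e-8       # gravitational constant, cm^3 g^-1 s^-2 (cgs)
M_SUN = 1.989e33   # solar mass in g
LT_DAY = 2.59e15   # one light day in cm

def scale_height(R, sigma_z, M, R_c, s=1.5):
    """Scale height of the clumpy disc, equation (23):
    h_g = (sigma_z / Omega_k,mid) * sqrt(1 / (1 - (R/R_c)^(2s/3))),
    valid for R < R_c (the disc-like, class C region)."""
    omega_k = math.sqrt(G * M / R**3)   # Keplerian angular velocity in the midplane
    return (sigma_z / omega_k) * math.sqrt(1.0 / (1.0 - (R / R_c)**(2.0 * s / 3.0)))

def covering_fraction(h_over_R):
    """Fraction of central radiation intercepted by the clumpy disc:
    dOmega/4pi ~ Theta^2/2, with Theta ~ h_g/R (small-angle approximation)."""
    return h_over_R**2 / 2.0

# Illustrative (assumed) parameters: M = 1e7 M_sun, sigma_z = 100 km/s, R_c = 20 lt-d
M = 1e7 * M_SUN
h_in = scale_height(1 * LT_DAY, 1e7, M, 20 * LT_DAY)
h_out = scale_height(10 * LT_DAY, 1e7, M, 20 * LT_DAY)
```

With these numbers the aspect ratio $h_{g}/R$ grows outward (a flared disc) while remaining small, and the covering fraction is of the order of $10^{-3}$, consistent with the estimate in the text.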
In the very lowest luminosity cases ($ L=10^{-8} - 10^{-9} L_{Edd}$), the small thickness of the clumpy disc together with the faintness of the central source can cause the broad emission lines to fall below observational detection thresholds, and this can explain the lack of detection of any broad emission lines in such faint objects. Figures \ref{figure4}, \ref{figure5} and \ref{figure6} respectively clarify the effect of $l$, $s$ and $n_{tot}$ on the surface number density $\Sigma $ and the volume number density $n$, which are plotted versus the radius $R$. In these three figures, the first panels present the behaviour of $\Sigma $ versus $R$ and the second panels show the variations of $n$ with radius at $ z=0, 0.2 $ and $ 0.5 $ light day (lt-d). Values of the fixed parameters are specified in the captions. In these figures, the profiles of the surface number density show that most of the clouds are located in the outer parts of the BLR. However the profiles of the volume number density are somewhat different. Whilst in the midplane most of the clouds are close to the central black hole, at $ z=0.2$ lt-d the value of $n$ in the central parts is almost zero and, with increasing radius, it rises to a maximum and then gradually declines at larger radial distances. At $ z = 0.5$ lt-d the behaviour of the curve is similar to that at $z = 0.2$ lt-d but the peak of the curve moves towards larger radii. Note that, considering the positive correlation between $ h_{g}$ and $R$, there is no contradiction between the behaviour of $\Sigma $ and $n$ as functions of $R$. Panel 4a shows that as $l$ increases, the ratio of the number of clouds in the outer regions of the BLR to that in the inner regions increases as well. On the other hand, by assuming strong coupling between the clouds and the ambient medium, we have $\langle v_{\phi} \rangle=w_{\phi} \propto v_{k,mid}$ and $\langle v_{R} \rangle=w_{R} \propto v_{k,mid}$. 
Consequently, we conclude that, with increasing $l$, the number of slowly moving clouds in the outer parts increases and the number of rapidly moving clouds in the inner parts is reduced. In other words, as obtained by \citet{Netzer10}, we expect that with increasing $l$ the width of the broad emission lines, $\textsc{FWHM}$, decreases. Panel 4b shows that an increase in $l$ leads to a reduction in the number of clouds near the midplane. This is because they are redistributed to higher altitudes. As is shown in Figure \ref{figure5}, the effect of the density index is similar to that of the central luminosity. Finally, in Figure \ref{figure6} we see that an increase in the total number of clouds leads, as expected, to an increase in the values of $\Sigma $ and $n$. \begin{figure} \centering \includegraphics[width=0.79\columnwidth]{panel7a.eps} \includegraphics[width=0.79\columnwidth]{panel7b.eps} \includegraphics[width=0.79\columnwidth]{panel7c.eps} \caption{The panels (a), (b) and (c) show the virial factor as a function of the inclination angle $\theta_{0}$, the $z$-component of the velocity dispersion $\sigma_{z}$, and the width of the broad emission line $\textsc{FWHM}$ respectively. Values of the other fixed parameters are listed on each panel.} \label{figure7} \end{figure} \begin{figure} \centering \includegraphics[width=0.79\columnwidth]{panel8a.eps} \includegraphics[width=0.79\columnwidth]{panel8b.eps} \includegraphics[width=0.79\columnwidth]{panel8c.eps} \includegraphics[width=0.79\columnwidth]{panel8d.eps} \caption{The panels (a), (b), (c) and (d) show the virial factor as a function of the bolometric luminosity ($L$), the column density ($N_{0}$), $\beta$ and the density index ($s$) respectively. 
Values of the other fixed parameters are listed on each panel.} \label{figure8} \end{figure} \subsection{Calculation of the virial factor for the disc-like structure}\label{ss32} We can derive the virial factor, $f$, for the disc-like clumpy structure as a function of the kinematic parameters of the BLR and the inclination angle (the angle between the observer's line of sight and the axis of symmetry of the thin disc). In Appendix \ref{a2}, we will show that the averaged squared line-of-sight velocity $\langle v_{n}^{2} \rangle _{avr}$ can be written as \begin{equation}\label{eq26} \langle v_{n}^{2} \rangle _{avr}=\frac{1+\beta}{2}\langle v_{R}^{2} \rangle \sin^{2} \theta_{0}+\langle v_{z}^{2} \rangle \cos^{2} \theta_{0}, \end{equation} where $\theta_{0}$ is the inclination angle and $\beta = \langle v_{\phi}^{2} \rangle/\langle v_{R}^{2} \rangle $ is taken to be constant. As in section (\ref{s2}), if we assume $\langle v_{z} \rangle =0$, we have $\langle v_{z}^{2} \rangle=\sigma_{z}^{2}$, which is taken to be constant. However, $\langle v_{R}^{2} \rangle $ has to be derived by solving equation (\ref{eq16}). Dividing equation (\ref{eq16}) by $ n$, we can write \begin{equation}\label{eq27} \frac{\partial \langle v^{2}_{R} \rangle}{\partial R}+\left[\frac{\partial \ln (n)}{\partial R}+\frac{1-\beta}{R}\right]\langle v^{2}_{R}\rangle =-\Omega_{k,mid}^{2}R\left[1-\left(\frac{R}{R_{c}}\right)^{\frac{2s}{3}}\right], \end{equation} where $\partial \ln (n)/\partial R $ is derived by the substitution of $ n(R,z)$ from equation (\ref{eq24}). Equation (\ref{eq27}), as a first order differential equation, then becomes \begin{equation}\label{eq28} \frac{\partial \langle v^{2}_{R} \rangle}{\partial R}+\left[\frac{1-2\beta}{2R}+\frac{z^{2}}{h_{g}^{3}}\frac{dh_{g}}{dR}\right]\langle v^{2}_{R}\rangle =-\Omega_{k,mid}^{2}R\left[1-\left(\frac{R}{R_{c}}\right)^{\frac{2s}{3}}\right] . 
\end{equation} Finding the integrating factor, we can write the solution of equation (\ref{eq28}) as \[\langle v^{2}_{R} \rangle =-GMR^{\frac{2\beta -1}{2}}\exp\left(\frac{z^{2}}{2h_{g}^{2}}\right)\int R^{-\frac{3+2\beta}{2}}\left[1-\left(\frac{R}{R_{c}}\right)^{\frac{2s}{3}}\right]\] \begin{equation}\label{eq29} \times \exp\left(-\frac{z^{2}}{2h_{g}^{2}}\right)dR. \end{equation} In this integral, the range of variation of $\exp(-z^{2}/2h_{g}^{2})$ as a function of $R$ is given by $ [\partial \exp(-z^{2}/2h_{g}^{2})/\partial R]\Delta R \approx \exp(-z^{2}/2h_{g}^{2})(z^{2}/h_{g}^{3})(h_{g}/R)\Delta R $, which is of the order of unity. This is because, in a thin disc, we have $ z \approx h_{g}$ and $\Delta R \approx R $. On the other hand, by assuming $R_{out}\approx 10R_{in}$, $\beta=3 $ and $ s=3/2$, we see that, as $R$ increases from $R_{in}$ to $R_{out}$, the value of the other terms inside the integral, $R^{-(3+2\beta)/2}[1-(R/R_{c})^{2s/3}]$, becomes smaller by a factor of 1000. We therefore assume that the value of $\exp(-z^{2}/2h_{g}^{2})$ remains constant and, by taking it out of the integral, $\langle v_{R}^{2} \rangle $ is given by \begin{equation}\label{eq30} \langle v_{R}^{2}\rangle = \frac{GM}{R}\left[\frac{2}{1+2\beta}+\frac{6}{4s-6\beta -3}\left(\frac{R}{R_{c}}\right)^{\frac{2s}{3}}\right]+c_{0}, \end{equation} where $ c_{0}$ is the constant of integration, calculated as follows. As we discussed in subsection (\ref{ss22}), for the thin-disc structure we have $R<R_{c}$. As a result, in order to find the value of $ c_{0}$, we suppose that $\langle v_{R}^{2} \rangle + \langle v_{\phi}^{2} \rangle = (1+\beta)\langle v_{R}^{2} \rangle \approx 2\textsc{FWHM}^{2}$ at $R=0.5R_{c}$. 
Finally, by substituting the constant of integration into equation (\ref{eq30}), $\langle v_{R}^{2} \rangle $ can be expressed as \[\langle v_{R}^{2}\rangle = \frac{GM}{R}\left\{\left[\frac{2}{1+2\beta}+\frac{6}{4s-6\beta -3}\left(\frac{R}{R_{c}}\right)^{2s/3}\right]-\left[\frac{4}{1+2\beta}\right.\right.\] \begin{equation}\label{eq31} \left. \left. +\frac{12}{4s-6\beta -3}\left(\frac{1}{2}\right)^{2s/3}\left(\frac{R}{R_{c}}\right)\right]+\frac{2}{1+\beta}\frac{R}{GM}\textsc{FWHM}^{2}\right\}. \end{equation} By substituting $\langle v_{R}^{2}\rangle $ into equation (\ref{eq26}), the virial factor, defined by $ f=GM/R\langle v_{n}^{2} \rangle _{avr}$, is given by \[f=\left\{\left[\left(\frac{1+\beta}{1+2\beta}+\frac{3+3\beta}{4s-6\beta -3}\left(\frac{R}{R_{c}}\right)^{2s/3}\right)-\left(\frac{2+2\beta}{1+2\beta}\right.\right.\right.\] \[\left.\left.+\frac{6+6\beta}{4s-6\beta -3}\left(\frac{1}{2}\right)^{2s/3}\left(\frac{R}{R_{c}}\right)\right)+\frac{R}{GM}\textsc{FWHM}^{2}\right]\sin^{2}\theta_{0}\] \begin{equation}\label{eq32} \left.+\frac{R}{GM}\sigma_{z}^{2}\cos^{2}\theta_{0}\right\}^{-1} . \end{equation} Finally, assuming $R \approx R_{out} \approx 1000R_{Sch}$, the virial factor becomes \[f=\left\{\left[\left(\frac{1+\beta}{1+2\beta}+\frac{3+3\beta}{4s-6\beta -3}\right)\chi^{-\frac{2s}{3}}-\left(\frac{2+2\beta}{1+2\beta}\right.\right.\right.\] \[\left.\left.\left.\left.+\frac{6+6\beta}{4s-6\beta -3}\left(\frac{1}{2}\right)^{\frac{2s}{3}}\right)\chi^{-1}+2000\left(\frac{\textsc{FWHM}}{c}\right)^{2}\right]\sin^{2}\theta_{0}\right.\right.\] \begin{equation}\label{eq33} \left.+2000\left(\frac{\sigma_{z}}{c}\right)^{2}\cos^{2}\theta_{0}\right\}^{-1}, \end{equation} where $\chi = R_{c}/R_{out}$ is defined by equation (\ref{eq19}). Figures (\ref{figure7}) and (\ref{figure8}) show the variation of the virial factor as a function of the various parameters. In spite of all the approximations we have used, we see that the value of $f$ is of the order of unity. 
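As a numeric illustration (not a reproduction of the panel values in Figures \ref{figure7} and \ref{figure8}), equation (\ref{eq33}) can be evaluated directly; the $\textsc{FWHM}$, $\sigma_{z}$, $\chi$ and inclination values below are assumptions chosen only to show that $f$ comes out of order unity and anticorrelates with $\theta_{0}$.

```python
import math

C_KMS = 2.998e5  # speed of light in km/s

def virial_factor(theta0_deg, fwhm_kms, sigma_z_kms, chi, s=1.5, beta=3.0):
    """Virial factor f from equation (33), evaluated at R ~ R_out ~ 1000 R_Sch,
    following the printed grouping of terms. All velocity inputs in km/s."""
    t1 = ((1 + beta) / (1 + 2 * beta)
          + (3 + 3 * beta) / (4 * s - 6 * beta - 3)) * chi**(-2 * s / 3)
    t2 = ((2 + 2 * beta) / (1 + 2 * beta)
          + (6 + 6 * beta) / (4 * s - 6 * beta - 3) * 0.5**(2 * s / 3)) / chi
    kin = 2000.0 * (fwhm_kms / C_KMS)**2          # 2000 (FWHM/c)^2 term
    theta0 = math.radians(theta0_deg)
    inv_f = ((t1 - t2 + kin) * math.sin(theta0)**2
             + 2000.0 * (sigma_z_kms / C_KMS)**2 * math.cos(theta0)**2)
    return 1.0 / inv_f

# Assumed parameters: FWHM = 5000 km/s, sigma_z = 1000 km/s, chi = 4, s = 3/2, beta = 3
f_face_on = virial_factor(10, 5000, 1000, 4.0)
f_edge_on = virial_factor(80, 5000, 1000, 4.0)
```

With these assumed values $f$ decreases monotonically with inclination, in line with the trend shown in the first panel of Figure \ref{figure7}.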
This is in agreement with previous works (e.g., \citealp{Onken04,Woo10,Graham11,Grier13}). Figure (\ref{figure7}) shows that the virial factor changes significantly with the inclination angle $\theta_{0}$, the width of the broad emission lines $\textsc{FWHM}$, and the $z$-component of the velocity dispersion $\sigma_{z}$. From the first panel of Figure (\ref{figure7}), it can be seen that as $\theta_{0}$ increases from $0^{\circ}$ to $40^{\circ}$ (type 1 AGNs), the value of $f$ rapidly falls from nearly 9.0 to 1.0 and, with increasing $\theta_{0}$ from $40^{\circ}$ to $90^{\circ}$ (type 2 AGNs), it gradually decreases from nearly 1.0 to 0.5. The negative correlation between $ f $ and $ \theta_{0} $ is similar to the finding derived by \citet{Pancoast14} for five Seyfert galaxies: Arp 151, Mrk 1310, NGC~5548, NGC~6814 and SBS 1116+583A. The second panel of Figure (\ref{figure7}) shows similar behaviour for the virial factor as a function of $\textsc{FWHM}$, with $ 0.5 \lesssim f \lesssim 6.5 $. The anticorrelation between $f$ and $\textsc{FWHM}$ has been confirmed by \citet{Brotherton15}. Finally, the third panel shows that the value of $ f $ lies nearly between 1.4 and 1.8. However, unlike the first two panels, the slope of the curve in the $ f - \sigma_{z} $ diagram is shallow for lower values of $ \sigma_{z} $ and steep for higher values. From Figure (\ref{figure8}) we see that with increasing $\beta$, $s$ and $N_{0}$ the value of the virial factor, $ f $, decreases, and with increasing $L$ it increases. We note, however, that the range of variation of $ f $ is very small (of the order of 0.01 for the variation of $ L $ and 0.001 for the variation of the three other quantities). In other words, $f$ is relatively insensitive to the variation of $L$, $\beta$, $s$ and $N_{0}$. The insensitivity of the virial factor to the bolometric luminosity is in agreement with the results found by \citet{Netzer10}. 
They found that while cloud orbits are strongly affected by radiation pressure, there is a relatively small change in $r_{BLR}\textsc{FWHM}^{2}$. This means that radiation pressure does not change the value of the virial factor significantly. \section{CONCLUSIONS}\label{s5} In this work, considering the clouds as a collisionless ensemble of particles, we employed the cylindrical form of the Jeans equations calculated in section 2 to describe a geometric model for their distribution in the BLR. The effective forces in this study are the Newtonian gravity of the black hole, the isotropic radiative force arising from the central source, and the drag force in the linear regime. Taking them into account, we showed that there are three classes of BLR configuration: (A) non-disc, (B) disc-wind and (C) pure disc structure (see Figure \ref{figure1}). We also found that the distribution of BLR clouds in the brightest quasars belongs to class A, in the dimmer quasars and brighter Seyfert galaxies it belongs to class B, and in the fainter Seyfert galaxies and all LLAGNs (LINERs) it belongs to class C. We then derived the virial factor, $ f $, for disc-like structures and found a negative correlation for $f$ as a function of the inclination angle, the width of the broad emission line and the $z$-component of the velocity dispersion. We also found $ 1.0 \lesssim f \lesssim 9.0 $ for type 1 AGNs and $ 0.5 \lesssim f \lesssim 1.0 $ for type 2 AGNs. Moreover, we saw that $ f $ varies approximately from 0.5 to 6.5 for different values of $\textsc{FWHM}$ and from 1.4 to 1.8 for different values of $ \sigma_{z} $. We also showed that $ f $ does not change significantly with variations of the bolometric luminosity, the column density of each cloud, the density index and $ \beta = \langle v_{\phi}^{2} \rangle/\langle v_{R}^{2} \rangle $; the maximum change in the value of $ f $ is of the order of 0.01. 
In the introduction, we mentioned that since each group takes a different sample of AGNs, they find different values for the average virial factor, $ \langle f \rangle $. These different values lead to significant uncertainties in the estimation of the black hole mass. On the other hand, in this paper we saw that $ f $ changes significantly with the inclination angle $ \theta_{0} $ and $ \textsc{FWHM} $ (Figure \ref{figure7}). Therefore, in order to obtain a more accurate estimation of the black hole mass, we suggest that observational campaigns divide a sample of objects into a few subsamples based on the values of $ \theta_{0} $ and $ \textsc{FWHM} $ of the objects and then determine the value of $ \langle f \rangle $ for each subsample separately. In this way we will have several values of $ \langle f \rangle $. Finally, given the values of $ \theta_{0} $ and $ \textsc{FWHM} $ of each object with unknown black hole mass, we can use the appropriate value of $ \langle f \rangle $ in the virial theorem to obtain a more accurate estimate of the black hole mass. \section*{Acknowledgements} I am very grateful to the referee, Jian-Min Wang, for his very useful comments which improved the manuscript. I also thank Scott Tremaine for his useful suggestions that clarified some points about the extended form of the collisionless Boltzmann equation.
\section{Introduction} \setcounter{footnote}{8} Moments after the Big Bang, a brief period of nucleosynthesis created the first elements and their isotopes \citep{HoyTay64,Pee66,WagFowHoy67}, including hydrogen (H), deuterium (D), helium-3 ($^{3}$He), helium-4 ($^{4}$He), and a small amount of lithium-7 ($^{7}$Li). The creation of these elements, commonly referred to as Big Bang nucleosynthesis (BBN), was concluded in $\lesssim15$ minutes and currently offers our earliest reliable probe of cosmology and particle physics (for a review, see \citealt{Ste07,Ioc09,Ste12,Cyb15}). The amount of each primordial nuclide that was made during BBN depends most sensitively on the expansion rate of the Universe and the number density ratio of baryons-to-photons. Assuming the Standard Model of cosmology and particle physics, the expansion rate of the Universe during BBN is driven by photons, electrons, positrons, and 3 neutrino families. Furthermore, within the framework of the Standard Model, the baryon-to-photon ratio at the time of BBN (i.e. minutes after the Big Bang) is identical to the baryon-to-photon ratio at recombination ($\sim400\,000$ years after the Big Bang). Thus, the abundances of the primordial nuclides for the Standard Model can be estimated from observations of the Cosmic Microwave Background (CMB) radiation, which was recently recorded with exquisite precision by the \textit{Planck} satellite \citep{Efs15}. Using the \textit{Planck} CMB observations\footnote{The primordial abundances listed here use the TT+lowP+lensing measure of the baryon density, $100\,\Omega_{\rm B,0}\,h^{2}({\rm CMB})=2.226\pm0.023$, (i.e. 
the second data column of Table~4 from \citealt{Efs15}).}, the predicted Standard Model abundances of the primordial elements are (68 per cent confidence limits; see Section~\ref{sec:dh}): \begin{eqnarray} Y_{\rm P}&=&0.2471\pm0.0005\nonumber\\ 10^{5}\,({\rm D/H})_{\rm P}&=&2.414\pm0.047\nonumber\\ 10^{5}\,({\rm ^{3}He/H})_{\rm P}&=&1.110\pm0.022\nonumber\\ A(^{7}{\rm Li/H})_{\rm P}&=&2.745\pm0.021\nonumber \end{eqnarray} where $Y_{\rm P}$ is the fraction of baryons consisting of $^{4}$He, $A(^{7}{\rm Li/H})_{\rm P}\equiv\log_{10}(^{7}{\rm Li/H})_{\rm P}+12$, and D/H, $^{3}$He/H and $^{7}$Li/H are the number abundance ratios of deuterium, helium-3 and lithium-7 relative to hydrogen, respectively. To test the Standard Model, the above predictions are usually compared to direct observational measurements of these abundances in near-primordial environments. High precision measures of the primordial $^{4}$He mass fraction are obtained from low metallicity \textrm{H}\,\textsc{ii}\ regions in nearby star-forming galaxies. Two analyses of the latest measurements, including an infrared transition that was not previously used, find $Y_{\rm P}~=~0.2551\pm0.0022$ \citep{IzoThuGus14}, and $Y_{\rm P}~=~0.2449\pm0.0040$ \citep{AveOliSki15}. These are mutually inconsistent, presumably due to some underlying difference between the analysis methods. The primordial $^{7}{\rm Li/H}$ ratio is deduced from the most metal-poor stars in the halo of the Milky Way. The latest determination \citep{Asp06,Aok09,Mel10,Sbo10,Spi15}, $A(^{7}{\rm Li})=2.199\pm0.086$, implies a $\gtrsim6\sigma$ deviation from the Standard Model value (see \citealt{Fie11} for a review). The source of this discrepancy is currently unknown. The abundance of $^{3}$He has only been measured in Milky Way \textrm{H}\,\textsc{ii}\ regions \citep{BanRooBal02} and in solar system meteorite samples \citep{BusBauWie00,BusBauWie01}. At this time, it is unclear if these measures are representative of the primordial value. 
However, there is a possibility that $^{3}$He might be detected in emission from nearby, quiescent metal-poor \textrm{H}\,\textsc{ii}\ regions with future, planned telescope facilities \citep{Coo15}. The primordial abundance of deuterium, (D/H)$_{\rm P}$, can be estimated using quasar absorption line systems \citep{Ada76}, which are clouds of gas that absorb the light from an unrelated background quasar. In rare, quiescent clouds of gas the $-82~{\rm km~s}^{-1}$ isotope shift of D relative to H can be resolved, allowing a measurement of the column density ratio \textrm{D}\,\textsc{i}/\textrm{H}\,\textsc{i}. The most reliable measures of (D/H)$_{\rm P}$\ come from near-pristine damped Lyman-$\alpha$ systems (DLAs). As discussed in \citet{PetCoo12a} and \citet{Coo14}, metal-poor DLAs exhibit the following properties that facilitate a high precision and reliable determination of the primordial deuterium abundance: (1) The Lorentzian damped Ly$\alpha$\ absorption line uniquely determines the total column density of neutral H atoms along the line-of-sight. (2) The array of weak, high order \textrm{D}\,\textsc{i}\ absorption lines depend only on the total column density of neutral D atoms along the line-of-sight. Provided that these absorption lines fall on the linear regime of the curve-of-growth, the derived $N$(\textrm{D}\,\textsc{i}) should not depend on the gas kinematics or the instrument resolution. In addition, the assumption that D/H=\textrm{D}\,\textsc{i}/\textrm{H}\,\textsc{i}\ is justified in these systems; the ionization correction is expected to be $\lesssim0.1$~per~cent \citep{Sav02,CooPet16}. Furthermore, galactic chemical evolution models suggest that most of the deuterium atoms in these almost pristine systems are yet to be cycled through many generations of stars; the correction for astration (i.e. the processing of gas through stars) is therefore negligible (see the comprehensive list of references provided by \citealt{Cyb15,Dvo16}). 
Using a sample of 5 quasar absorption line systems that satisfy a set of strict criteria, \citet{Coo14} recently estimated that the primordial abundance of deuterium is log$_{10}$\,(D/H)$_{\rm P}$~=~$-4.597\pm0.006$, or expressed as a linear quantity, $10^{5}\,({\rm D/H})_{\rm P} = 2.53\pm0.04$. These 5 systems exhibit a D/H plateau over at least a factor of $\sim10$ in metallicity, and this plateau was found to be in good agreement with the expected value for the cosmological model derived by \textit{Planck} assuming the Standard Model of particle physics. In this paper, we build on this work and present a new determination of the primordial abundance of deuterium obtained from the lowest metallicity DLA currently known. In Section~\ref{sec:obs}, we present the details of our observations and data reduction procedures. Our data analysis is almost identical to that described in \citet{Coo14}, and we provide a summary of this procedure in Section~\ref{sec:analysis}. In Section~\ref{sec:chemcomp}, we report the chemical composition of this near-pristine DLA. In Section~\ref{sec:dh}, we present new calculations of BBN that incorporate the latest nuclear cross sections, discuss the main results of our analysis, and highlight the cosmological implications of our findings. We summarize our conclusions in Section~\ref{sec:conc}. \section{Observations and Data Reduction} \label{sec:obs} In this paper, we present high quality echelle observations of the quasar J1358+0349 ($z_{\rm em}\simeq2.894$, Right Ascension$=13\rahr58\ramin03.\rasec97$, Declination$=+03\decdeg49\decmin36.\decsec0$), which was discovered with a low resolution ($R\sim2000$) spectrum acquired by the Sloan Digital Sky Survey (SDSS). This SDSS spectrum revealed strong \textrm{H}\,\textsc{i}\ absorption at a redshift $z_{\rm abs}=2.8528$ with no apparent absorption at the wavelengths of the corresponding metal lines, indicating the presence of a very metal-poor DLA \citep{Pen10}. 
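As a quick arithmetic check, the numbers quoted in this section are mutually consistent: the logarithmic and linear forms of the \citet{Coo14} D/H measurement agree (propagating the symmetric error as $\sigma_{\rm lin}=\ln(10)\,({\rm D/H})\,\sigma_{\log}$), and the same quadrature combination of errors recovers the $\gtrsim6\sigma$ lithium discrepancy quoted in the introduction.

```python
import math

# D/H: log form -4.597 +/- 0.006 versus linear form 2.53 +/- 0.04 (in units of 1e-5)
log_dh, sig_log = -4.597, 0.006
dh = 10**log_dh                        # linear number ratio
sig_dh = math.log(10) * dh * sig_log   # first-order error propagation

def tension(x1, e1, x2, e2):
    """Difference between two values in units of the quadrature-combined 1-sigma error."""
    return abs(x1 - x2) / math.hypot(e1, e2)

# Predicted A(7Li) = 2.745 +/- 0.021 versus observed 2.199 +/- 0.086 (values from the text)
li_sigma = tension(2.745, 0.021, 2.199, 0.086)
```

The lithium comparison gives a deviation just above $6\sigma$, matching the statement in the introduction.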
\citet{Pen10} reobserved this quasar with the Echellette Spectrograph and Imager (ESI), which is mounted on the Keck II telescope. These medium resolution observations ($R\sim5300$, corresponding to a velocity full width at half maximum $v_{\rm FWHM}\simeq57~{\rm km~s}^{-1}$) confirmed that this DLA is among the most metal-poor systems currently known, with an estimated metallicity\footnote{Throughout this paper, we adopt the notation [X/Y] to represent the relative number density of elements $X$ and $Y$ on a logarithmic and solar abundance scale. Explicitly, [X/Y]~$=~\log_{10}(N({\rm X})/N({\rm Y}))-\log_{10}(n({\rm X})/n({\rm Y}))_{\odot}$.} ${\rm [Fe/H]}=-3.03\pm0.11$. We confirm the low metallicity with the higher resolution data presented here; we find ${\rm [Fe/H]}=-3.25\pm0.11$ (see Section~\ref{sec:chemcomp}), assuming a solar abundance $\log_{10}({\rm Fe/H})_{\odot}=-4.53$ \citep{Asp09}. Identifying DLAs where the \textrm{D}\,\textsc{i}\ Lyman series absorption lines are well-resolved from the much stronger \textrm{H}\,\textsc{i}\ Lyman series is one of the primary difficulties of finding DLAs where D/H can be measured. The probability of resolving these features can be increased by finding gas clouds with simple kinematics, which are more common at the lowest metallicity \citep{Led06,Mur07,Pro08,Nel13,JorMurTho13,CooPetJor15}; in general, the most metal-poor systems exhibit simple and quiescent kinematics. Given the low metallicity of the DLA towards J1358+0349, based on the ESI spectra, we acquired two high-quality, high resolution spectra of this quasar with the aim of measuring D/H. We describe these observations below. \subsection{HIRES observations} We observed J1358+0349 with the High Resolution Echelle Spectrometer (HIRES; \citealt{Vog94}) on the Keck I telescope on 2013 May 6 in good seeing conditions ($\sim0.7''$~FWHM) for a total of 21,000~s divided equally into $7\times3000~{\rm s}$ exposures. 
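The [X/Y] notation defined in the footnote is a simple ratio of ratios, which the helper below illustrates. The column densities in the example are hypothetical placeholders; only the solar value $\log_{10}({\rm Fe/H})_{\odot}=-4.53$ is taken from the text.

```python
def x_over_y(log_N_X, log_N_Y, log_solar_ratio):
    """[X/Y] = log10(N(X)/N(Y)) - log10(n(X)/n(Y))_sun (footnote definition)."""
    return (log_N_X - log_N_Y) - log_solar_ratio

# Hypothetical column densities for illustration only:
# log N(Fe II) = 13.25, log N(H I) = 20.50, solar log10(Fe/H) = -4.53
feh = x_over_y(13.25, 20.50, -4.53)   # -> [Fe/H] = -2.72
```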
We used the blue-sensitive ultraviolet cross-disperser to maximize the efficiency near the DLA Lyman limit. We used the C1 decker ($7.0''\times0.861''$), which provides a nominal instrument resolution of $R~\simeq~48,000$ ($v_{\rm FWHM}\simeq6.4~{\rm km~s}^{-1}$) for a uniformly illuminated slit. By measuring the widths of $670$ ThAr wavelength calibration lines\footnote{Ideally, O$_{2}$ telluric absorption should be used to determine the instrument resolution, since the broadening of these lines should closely represent the instrument resolution of the quasar absorption spectrum; unlike the sky and ThAr lamp emission lines, the quasar light does not uniformly illuminate the slit. However, the telluric O$_{2}$ molecular absorption band near 6300\,\AA\ was too weak to reliably measure the instrument FWHM.}, we determined the instrument resolution to be $v_{\rm FWHM}=6.17\pm0.02~{\rm km~s}^{-1}$, which is somewhat lower than the nominal value. All frames were binned $2\times2$ during read-out. The science exposures were bracketed by a ThAr wavelength calibration frame. The final data cover the wavelength range 3480\,\AA\,--\,6344\,\AA, with small gaps in the ranges 4397\,\AA\,--\,4418\,\AA\ and 5397\,\AA\,--\,5423\,\AA\ due to the gaps between the three HIRES detectors. \subsection{UVES observations} The HIRES data confirmed the very low metallicity of the DLA, and revealed several resolved \textrm{D}\,\textsc{i}\ absorption lines, suggesting that this system would be ideal to estimate the primordial deuterium abundance. 
To increase the signal-to-noise (S/N) ratio of the data, we observed J1358+0349 for a total of 40,384\,s with the Very Large Telescope (VLT) Ultraviolet and Visual Echelle Spectrograph (UVES; \citealt{Dek00}) in service mode.\footnote{Our observations were carried out on 2014 March 28 ($3\times3495\,{\rm s}$), 2014 May 27 ($3\times3495\,{\rm s}$), 2014 March 24 ($4\times3495\,{\rm s}$), 2014 April 30 ($1\times3495\,{\rm s}$, $1\times1939\,{\rm s}$).} We used dichroic 1, with the HER\_5 filter in the blue arm, and the SHP700 filter in the red arm. The echelle grating in the blue arm provided a central wavelength of 3900\,\AA, whilst the grating in the red arm had a central wavelength of 5640\,\AA. The UVES data cover the wavelength range 3450\,\AA\,--\,6648\,\AA, with small gaps in the ranges 4530\,\AA\,--\,4622\,\AA\ and 5601\,\AA\,--\,5675\,\AA. All exposures were binned $2\times2$ at the time of read-out. We used the $0.9''$ slit to match closely the nominal resolution provided by the HIRES observations (the nominal UVES values are $R~\simeq~46,000$, $v_{\rm FWHM}~\simeq~6.5~{\rm km~s}^{-1}$). By fitting $268$ ThAr emission lines, we derived an instrumental resolution of $v_{\rm FWHM}=6.39\pm0.04~{\rm km~s}^{-1}$ for a uniformly illuminated slit with our setup. Our value is in good agreement with the nominal UVES instrument resolution.\footnote{We note that the telluric O$_{2}$ molecular absorption band near 6300\,\AA\ was too weak, like the HIRES data, to reliably measure the instrument FWHM.} \subsection{Data Reduction} The HIRES and UVES data described above provide complete wavelength coverage from the DLA Lyman limit ($\sim$3520\,\AA) to 6648\,\AA\ (1725\,\AA\ in the rest-frame of the DLA). 
The data were reduced with the HIRESRedux and UVESRedux\footnote{These reduction packages can be obtained from\\ http://www.ucolick.org/$\sim$xavier/HIRedux/index.html} software packages, maintained by J.~X.~Prochaska (for a description of the reduction algorithms, see \citealt{BerBurPro15}). The standard reduction steps were followed. First, the bias level was subtracted from all frames using the overscan region. The pixel-to-pixel variations were then removed using an archived image, where the detector was uniformly illuminated. The orders were defined using a quartz lamp with an identical slit and setup as the science exposures. A ThAr lamp was used to model the regions of constant wavelength across the detector (e.g. \citealt{Kel03}). Using this model, the sky background was subtracted from the science exposure. The spectrum of the quasar was extracted using an optimal extraction algorithm, and mapped to a vacuum, heliocentric wavelength scale with reference to the ThAr exposure. Each echelle order was corrected for the echelle blaze function, resampled onto a $2.5~{\rm km~s}^{-1}$ pixel scale, and combined using the UVES\_\textsc{popler} software.\footnote{UVES\_\textsc{popler} can be downloaded from\\ http://astronomy.swin.edu.au/$\sim$mmurphy/UVES\_popler/} Since the HIRES and UVES data were acquired with slightly different instrument resolutions, we separately combined the UVES and HIRES data. Deviant pixels and ghosts were manually removed, and an initial estimate of the quasar continuum was applied. The data were flux calibrated using the SDSS discovery spectrum as a reference. Specifically, the UVES and HIRES data were convolved with the SDSS instrument resolution, and then resampled onto the wavelength scale of the SDSS spectrum to determine the sensitivity function. The sensitivity function was then applied to the non-convolved UVES and HIRES data, with an extrapolation to blue wavelengths where the SDSS spectrum does not extend. 
The final HIRES spectrum has a S/N near the DLA Ly$\alpha$\ absorption line of ${\rm S/N}\simeq30$, and a ${\rm S/N}\simeq16$ near the Lyman limit of the DLA. The equivalent values for UVES are ${\rm S/N}\simeq40$ and ${\rm S/N}\simeq11$, respectively. \section{Analysis Method} \label{sec:analysis} \begin{figure*} \centering {\includegraphics[angle=0,width=140mm]{fig_1}}\\ \caption{ \textit{Top panels}: The flux calibrated \textrm{H}\,\textsc{i}\ Ly$\alpha$\ absorption profile (black histogram) is shown for the DLA at $z_{\rm abs}=2.853054$ towards the quasar J1358$+$0349. The best-fitting quasar continuum model (blue long-dashed curves) and the best-fitting absorption profile (red line) are overlaid. The green dashed line indicates the fitted zero-level of the data. The spectrograph used to take the data is indicated in the upper left corner of each panel. \textit{Bottom panels}: Same as the top panels, but with the quasar continuum normalized, and the data are plotted in the rest-frame of the DLA. The absorption feature that is fit near a rest wavelength of 1206.5\,\AA\ is a combination of the Si\,\textsc{iii}\ absorption from the DLA and an unrelated blend. } \label{fig:lya} \end{figure*} Our analysis method is identical to that outlined by \citet{Coo14}. In this section, we summarize the main aspects of this procedure. We use the Absorption LIne Software (\textsc{alis}) package to provide a simultaneous fit to the emission spectrum of the quasar and the absorption lines of the DLA.\footnote{\textsc{alis} is available for download at the following website:\\https://github.com/rcooke-ast/ALIS} \textsc{alis} uses a chi-squared minimization procedure to deduce the model parameter values that best fit the data, weighted by the quasar error spectrum. Our line fitting procedure was applied at the same time to both the UVES and HIRES data, to find the model that fitted \textit{both} sets of data best.
We simultaneously fit the \textrm{H}\,\textsc{i}\ and \textrm{D}\,\textsc{i}\ Lyman series absorption lines, all of the significantly detected metal absorption lines, the zero-levels of the HIRES and UVES data, the continuum in the neighborhood of all absorption lines, the relative velocity offset between the HIRES and UVES data, and the instrument resolution of both datasets. The continuum is approximated by a low order Legendre polynomial (typically of order $\lesssim4$, except near Ly$\alpha$\ where we use a polynomial of order 8). To allow for relative differences in the quasar continuum between the HIRES and UVES data, we apply a constant or linear scaling to the HIRES data, and the parameters of this scaling are allowed to vary during the minimization procedure. The portion of the Ly$\alpha$\ absorption profile where the optical depth is $\tau\gtrsim1$ provides most of the power to determine the total \textrm{H}\,\textsc{i}\ column density; when the quasar flux recovers to $\gtrsim50$ per cent of the continuum, the absorption profile flattens and becomes increasingly sensitive to the continuum level rather than the \textrm{H}\,\textsc{i}\ absorption. We therefore fit every pixel in the core of the Ly$\alpha$\ absorption until the Lorentzian wings of the profile are 50 per cent of the continuum (i.e. $\tau\gtrsim0.7$; in this case, all pixels within $\pm1300~{\rm km~s}^{-1}$). During the analysis, we fit all of the contaminating absorption features within this velocity window instead of masking the affected pixels. Outside this velocity window, we include pixels in the fit that we deem are free of contamination. The best-fitting model of the Ly$\alpha$\ absorption feature is overlaid on the HIRES and UVES data in Figure~\ref{fig:lya}. 
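The 50 per cent flux threshold and the quoted optical depth are two statements of the same cut, since the transmitted flux fraction is $e^{-\tau}$; a one-line check (illustrative):

```python
import math

# Transmitted flux fraction for a given optical depth: F/F0 = exp(-tau).
tau_cut = 0.7
flux_fraction = math.exp(-tau_cut)  # ~0.50, i.e. 50 per cent of the continuum
# Conversely, a pixel at exactly 50 per cent of the continuum has tau = ln(2).
tau_at_half = math.log(2.0)
```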
\begin{table*} \centering \begin{minipage}[c]{0.6\textwidth} \caption{\textsc{Best fitting model parameters for the DLA at $z_{\rm abs}=2.853054$ towards the QSO J1358$+$0349}} \hspace{-0.6cm}\begin{tabular}{@{}crrccccc} \hline \multicolumn{1}{c}{Comp.} & \multicolumn{1}{c}{$z_{\rm abs}$} & \multicolumn{1}{c}{$b_{\rm turb}$} & \multicolumn{1}{c}{$\log N$\/(H\,{\sc i})} & \multicolumn{1}{c}{$\log {\rm (D\,\textsc{i}/H\,\textsc{i})}$} & \multicolumn{1}{c}{$\log N$\/(N\,{\sc i})} & \multicolumn{1}{c}{$\log N$\/(N\,{\sc ii})} & \multicolumn{1}{c}{$\log N$\/(N\,{\sc iii})}\\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{(km~s$^{-1}$)} & \multicolumn{1}{c}{(cm$^{-2}$)} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{(cm$^{-2}$)} & \multicolumn{1}{c}{(cm$^{-2}$)} & \multicolumn{1}{c}{(cm$^{-2}$)}\\ \hline 1 & $2.852874$ & $4.7$ & $20.16$ & $-4.582^{\rm a}$ & $12.61$ & \ldots$^{\rm b}$& \ldots$^{\rm b}$ \\ & $\pm 0.000002$ &$\pm 0.2$ & $\pm 0.02$ & $\pm 0.012$ & $\pm 0.10$ & & \\ 2 & $2.853004$ & $3.9$ & \ldots$^{\rm b}$ & \ldots$^{\rm b}$ & \ldots$^{\rm b}$ & $13.25^{\rm c}$ & $13.32^{\rm c}$ \\ & $\pm 0.000003$ &$\pm 0.3$ & & & & $\pm0.04$ & $\pm0.06$ \\ 3 & $2.853054$ & $2.5$ & $20.27$ & $-4.582^{\rm a}$ & $12.23$ & \ldots$^{\rm b}$& $13.33$ \\ & $\pm 0.000003$ &$\pm 0.5$ & $\pm 0.02$ & $\pm 0.012$ & $\pm0.24$ & & $\pm0.06$ \\ 4 & $2.85372$ & $14.2$ & $18.23$ & $-4.582^{\rm a}$ & \ldots$^{\rm b}$ & \ldots$^{\rm b}$ & $12.60$ \\ & $\pm 0.00001$ &$\pm 1.4$ & $\pm 0.07$ & $\pm 0.012$ & & & $\pm0.13$ \\ Total & \ldots & \ldots & $20.524$ & $-4.582^{\rm a}$ & $12.77$ & $13.25$ & $13.67$ \\ & & & $\pm 0.006$ & $\pm 0.012$ & $\pm0.11$ & $\pm 0.04$ & $\pm0.02$ \\ \hline \end{tabular} \smallskip\smallskip\smallskip \begin{tabular}{@{}ccccccc} \hline \multicolumn{1}{c}{Comp.} & \multicolumn{1}{c}{$\log N$\/(O\,{\sc i})} & \multicolumn{1}{c}{$\log N$\/(Al\,{\sc ii})} & \multicolumn{1}{c}{$\log N$\/(Si\,{\sc ii})} & \multicolumn{1}{c}{$\log N$\/(Si\,{\sc iii})} & 
\multicolumn{1}{c}{$\log N$\/(S\,{\sc ii})} & \multicolumn{1}{c}{$\log N$\/(Fe\,{\sc ii})}\\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{(cm$^{-2}$)} & \multicolumn{1}{c}{(cm$^{-2}$)} & \multicolumn{1}{c}{(cm$^{-2}$)} & \multicolumn{1}{c}{(cm$^{-2}$)} & \multicolumn{1}{c}{(cm$^{-2}$)} & \multicolumn{1}{c}{(cm$^{-2}$)}\\ \hline 1 & $14.23$ & $11.27$ & $12.78$ & \ldots$^{\rm b}$ & \ldots$^{\rm b}$ & $12.31$ \\ & $\pm 0.02$ & $\pm 0.07$ & $\pm 0.03$ & & & $\pm 0.18$ \\ 2 & \ldots$^{\rm b}$ & $11.93$ & $12.94$ & $12.89$ & $13.02$ & $12.51$ \\ & & $\pm0.02$ & $\pm0.04$ & $\pm0.12$ & $\pm0.10$ & $\pm0.11$ \\ 3 & $13.89$ & \ldots$^{\rm b}$ & $12.57$ & $12.65$ & \ldots$^{\rm b}$ & \ldots$^{\rm b}$ \\ & $\pm 0.02$ & & $\pm0.08$ & $\pm0.10$ & & \\ 4 & $12.86$ & \ldots$^{\rm b}$ & \ldots$^{\rm b}$ & $12.19$ & \ldots$^{\rm b}$ & \ldots$^{\rm b}$ \\ & $\pm 0.12$ & & & $\pm0.04$ & & \\ Total & $14.41$ & $12.01$ & $13.27$ & $13.14$ & $13.02$ & $12.74$ \\ & $\pm0.01$ & $\pm0.02$ & $\pm0.01$ & $\pm0.07$ & $\pm0.10$ & $\pm0.10$ \\ \hline \end{tabular} \smallskip $^{\rm a}${Forced to be the same for all components.}\\ \hspace{0.5cm}$^{\rm b}${Absorption is undetected for this ion in this component.}\\ \hspace{0.5cm}$^{\rm c}${Since the N\,\textsc{ii}\ and N\,\textsc{iii}\ absorption lines arise from more highly ionized gas, we tie their total Doppler parameter, and allow it to vary independently of the Doppler parameter of the other absorption lines at the redshift of this component. The total Doppler parameter for these higher stages of N ionization is $b=10.5\pm0.9\,{\rm km~s}^{-1}$.}\\ \label{tab:compstruct} \end{minipage} \end{table*} \begin{figure*}[ht] \centering {\includegraphics[angle=0,width=160mm]{fig_2}}\\ \caption{ A selection of the metal absorption lines associated with the DLA at $z_{\rm abs}=2.853054$ towards J1358$+$0349 that are used in our analysis. The best-fitting model (red line) is derived from a simultaneous fit to both the UVES and HIRES data.
However, in these panels we only show the data (black histogram) and corresponding model for the dataset with the higher S/N near the absorption line. In all panels, the best-fitting zero-level of the data (short green dashed line) has been removed, and the continuum has been normalized (long blue dashed line). Note that we have used a different y-axis scale for the top row of panels to emphasize the weakest absorption features. The red tick marks above the spectrum correspond to the locations of the absorption components of the annotated ion (see Table~\ref{tab:compstruct}). The green tick marks in the N\,\textsc{iii}\,$\lambda989$ panel are for a blend with Si\,\textsc{ii}\,$\lambda989$, the latter of which is largely determined from the multitude of other Si\,\textsc{ii}\ absorption lines. The absorption at $-25~{\rm km~s}^{-1}$ in the N\,\textsc{ii}\,$\lambda1083$ panel is assumed to be an unrelated blend.} \label{fig:metals} \end{figure*} Our spectrum includes 16 metal absorption lines from the elements C, N, O, Al, Si, S and Fe in a range of ionization stages (C\,\textsc{ii}, C\,\textsc{iv}, N\,\textsc{i}, N\,\textsc{ii}, N\,\textsc{iii}, O\,\textsc{i}, Al\,\textsc{ii}, Si\,\textsc{ii}, Si\,\textsc{iii}, Si\,\textsc{iv}, S\,\textsc{ii}\ and Fe\,\textsc{ii}). The component structure of our absorption model (see Table~\ref{tab:compstruct}) is set by the unblended, narrow metal absorption lines of species that are the dominant ionization stage in neutral gas. The metal absorption lines that are used in our analysis are presented in Figure~\ref{fig:metals}. We find that the neutral N\,\textsc{i}\ and O\,\textsc{i}\ lines, which accurately trace the \textrm{D}\,\textsc{i}\ bearing gas \citep[][see also, \citealt{FieSte71,SteWerGel71}]{CooPet16}, are reproduced with just two principal absorption components.
The strong O\,\textsc{i}\,$\lambda1302$ absorption line also exhibits a much weaker absorption feature, comprising $\sim3$ per cent of the total O\,\textsc{i}\ column density, and is redshifted by $v\simeq+50\,{\rm km~s}^{-1}$ relative to the two main components; this feature is also detected in the strong C\,\textsc{ii}\,$\lambda1334$ and Si\,\textsc{ii}\,$\lambda1260$ absorption lines (not shown). The first and higher ions, such as N\,\textsc{ii}, Al\,\textsc{ii}, Si\,\textsc{ii}, and S\,\textsc{ii}, require an additional absorption component that is slightly blueshifted by $v\simeq-4\,{\rm km~s}^{-1}$ relative to the systemic redshift $z_{\rm abs}=2.853054$, and is presumably due to ionized gas. We explicitly fit to the \textrm{D}\,\textsc{i}/\textrm{H}\,\textsc{i}\ ratio by requiring that all \textrm{D}\,\textsc{i}\ absorption components (i.e. components 1, 3, and 4 in Table~\ref{tab:compstruct}) have the same D/H ratio. Note that the subdominant \textrm{D}\,\textsc{i}\ absorption component (component 4, located at $+50~{\rm km~s}^{-1}$ relative to the systemic redshift of the DLA) is not resolved from the \textrm{H}\,\textsc{i}\ absorption; the absorption properties of this component are only determined by the \textrm{H}\,\textsc{i}\ and O\,\textsc{i}\ absorption. The initial starting value of the logarithmic \textrm{D}\,\textsc{i}/\textrm{H}\,\textsc{i}\ ratio was randomly generated on the interval $(-4.8,-4.4)$. We assume that the absorption lines of all species are represented by a Voigt profile, comprising contributions from both turbulent and thermal broadening. The standard assumption is that all gas constituents in a given absorption component will share a common turbulent Doppler parameter and a constant kinetic temperature. 
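Under this assumption, the total Doppler parameter of each species combines the shared turbulent term with a mass-dependent thermal term, $b^{2}=b_{\rm turb}^{2}+2k_{\rm B}T/m$, so \textrm{D}\,\textsc{i}\ lines are thermally narrower than \textrm{H}\,\textsc{i}\ lines at the same temperature. A short numerical illustration (the temperature below is an assumed value for illustration only, not a fitted quantity):

```python
import math

K_B = 1.380649e-23       # Boltzmann constant, J/K
AMU = 1.66053906660e-27  # atomic mass unit, kg

def doppler_b(b_turb_kms, temp_K, mass_amu):
    """Total Doppler parameter (km/s): b^2 = b_turb^2 + 2kT/m."""
    b_therm = math.sqrt(2.0 * K_B * temp_K / (mass_amu * AMU)) / 1.0e3
    return math.sqrt(b_turb_kms ** 2 + b_therm ** 2)

# Component 3 has b_turb = 2.5 km/s; assume T = 10^4 K for illustration.
b_HI = doppler_b(2.5, 1.0e4, 1.00794)  # hydrogen
b_DI = doppler_b(2.5, 1.0e4, 2.01410)  # deuterium: narrower thermal width
```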
As we discuss in \citet{Coo14}, at the current level of precision, a Voigt profile that is broadened according to the above description is probably insufficient to accurately model the \textrm{H}\,\textsc{i}, \textrm{D}\,\textsc{i}, and metal absorption lines simultaneously; in reality, there is a distribution of turbulence and temperature along the line of sight. To circumvent this model limitation, we tie the component redshifts and turbulent Doppler parameters of all ions, and allow the thermal broadening to be specified separately for the \textrm{D}\,\textsc{i}\ and \textrm{H}\,\textsc{i}\ absorption. This prescription allows the kinematics of the \textrm{H}\,\textsc{i}, \textrm{D}\,\textsc{i}, and metal absorption lines to be deduced almost independently. We also stress that, as discussed in \citet{Coo14}, weak unblended \textrm{D}\,\textsc{i}\ absorption lines do not depend on the form of the Voigt profile; the equivalent widths of weak \textrm{D}\,\textsc{i}\ absorption lines uniquely determine the \textrm{D}\,\textsc{i}\ column density. Similarly, the absorption profile of the \textrm{H}\,\textsc{i}\ damped Ly$\alpha$\ absorption line is independent of the turbulence and kinetic temperature used for the Voigt profile fitting. Our HIRES and UVES data of the Lyman series absorption lines, together with the best-fitting model, are presented in Figures~\ref{fig:lyseriesa} and \ref{fig:lyseriesb}. In our analysis, we only use the \textrm{H}\,\textsc{i}\ absorption lines that exhibit either a clean blue or clean red wing. Similarly, we only consider the \textrm{D}\,\textsc{i}\ absorption lines that are free of unrelated contaminating absorption. These include \textrm{D}\,\textsc{i}\ Ly6, Ly7, Ly9, and Ly13; of these, only Ly9 and Ly13 are weak, unsaturated absorption lines. We also note that \textrm{D}\,\textsc{i}\ Ly13 is barely resolved from the \textrm{H}\,\textsc{i}\ Ly14 absorption (see bottom panels of Fig.~\ref{fig:lyseriesb}). 
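The statement that weak \textrm{D}\,\textsc{i}\ equivalent widths uniquely determine the column density, independently of the adopted broadening, can be verified numerically: holding the velocity-integrated optical depth fixed (it is proportional to $Nf\lambda$) and varying the Doppler parameter leaves the equivalent width of an unsaturated line essentially unchanged, whereas a saturated line responds strongly. A constant-free sketch in velocity units:

```python
import math

def equivalent_width(A, b):
    """Equivalent width (km/s) of a Gaussian absorption line with
    velocity-integrated optical depth A (km/s) and Doppler parameter b (km/s)."""
    tau0 = A / (b * math.sqrt(math.pi))  # peak optical depth
    dv, v, W = 0.02, -250.0, 0.0
    while v < 250.0:
        W += (1.0 - math.exp(-tau0 * math.exp(-(v / b) ** 2))) * dv
        v += dv
    return W

# Weak (unsaturated) line: W is set by A alone, i.e. by N f lambda.
W_weak_narrow = equivalent_width(0.1, 2.5)
W_weak_broad = equivalent_width(0.1, 10.0)
# Saturated line: W depends strongly on the (uncertain) Doppler parameter.
W_sat_narrow = equivalent_width(100.0, 2.5)
W_sat_broad = equivalent_width(100.0, 10.0)
```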
Since the \textrm{H}\,\textsc{i}\ Ly14 absorption is well constrained by the host of other \textrm{H}\,\textsc{i}\ Lyman series lines, we deem the \textrm{D}\,\textsc{i}\ equivalent width of Ly13 (particularly from the HIRES data) to be well-determined. However, the DLA system that we analyze here is certainly less ideal for the determination of D/H than our previously reported cases in \citet{PetCoo12a} and \citet{Coo14}. In this new system, many of the weak \textrm{D}\,\textsc{i}\ absorption lines are blended with unrelated absorption features (presumably contamination from low redshift Ly$\alpha$\ absorption),\footnote{This is one of the unpredictable, and inherent difficulties associated with measuring the D/H ratio in $z\sim3$ quasar absorption line systems.} resulting in fewer unsaturated \textrm{D}\,\textsc{i}\ lines. However, we are still able to constrain the value of the D/H ratio within tight limits, thanks to the high S/N of our data near the Lyman limit of the DLA, and the relatively well-determined value of the \textrm{H}\,\textsc{i}\ column density. Initially, the instrumental FWHM was allowed to vary freely, with no prior (as implemented in \citealt{Coo14}). In this case, the fitted value of the instrumental FWHM was larger than that allowed by the widths of the ThAr arc lines (see Section~\ref{sec:obs}), implying that the DLA absorption lines are too structured to permit a reliable estimate of the FWHM. Thereafter, we fixed the instrumental FWHM to be equal to the widths of the ThAr emission lines. \begin{figure*} \centering {\includegraphics[angle=0,width=160mm]{fig_3}}\\ \caption{ The black histogram shows our HIRES data (left panels) and UVES data (right panels), covering the \textrm{H}\,\textsc{i}\ and \textrm{D}\,\textsc{i}\ Lyman series absorption lines from Ly$\alpha$--Ly7 (top to bottom panels, respectively). Our best-fitting model is overlaid with the solid red line. 
The plotted data have been corrected for the best-fitting zero-level (short green dashed line), and are normalized by the best-fitting continuum model (long blue dashed line). Tick marks above the spectrum indicate the absorption components for \textrm{H}\,\textsc{i}\ (red ticks), and \textrm{D}\,\textsc{i}\ (green ticks). } \label{fig:lyseriesa} \end{figure*} \begin{figure*} \centering {\includegraphics[angle=0,width=160mm]{fig_4}}\\ \caption{ Same as Fig.~\ref{fig:lyseriesa}, for the \textrm{H}\,\textsc{i}\ and \textrm{D}\,\textsc{i}\ transitions Ly8--Ly15. Note that the leftmost set of red tick marks in the bottom panels indicate the \textrm{H}\,\textsc{i}\ Ly15 absorption components, while the central red tick marks in these panels indicate \textrm{H}\,\textsc{i}\ Ly14 absorption. } \label{fig:lyseriesb} \end{figure*} Finally, the relative velocity shift between the HIRES and UVES data is determined during the $\chi^{2}$-minimization process, with a best fitting value of $0.20\pm0.12~{\rm km~s}^{-1}$. We also fit a wavelength independent correction to the zero-level of each spectrum. This approximation also accounts for the fraction of the quasar light that is not covered by the DLA absorption. The best-fit values\footnote{This parameter is largely driven by the trough of the Ly$\alpha$\ absorption.} for the zero-level are $0.016\pm0.003$ (HIRES) and $0.003\pm0.003$ (UVES). Our analysis was performed blindly, such that the $N$(\textrm{D}\,\textsc{i})/$N$(\textrm{H}\,\textsc{i}) ratio was only revealed after our profile analysis had been finalized, and the minimum $\chi^{2}$ had been reached; no changes were made to the data reduction or analysis after the results were unblinded. We then performed $2000$ Monte Carlo simulations to ensure that the global minimum $\chi^{2}$ had been found. Each Monte Carlo simulation was initialized with the best-fitting model parameters, perturbed by twice the covariance matrix of the parameter values. 
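The Monte Carlo re-starts can be generated by drawing from a multivariate normal distribution centred on the best-fitting parameters with twice the covariance matrix; a two-parameter sketch (the parameter names and covariance values here are illustrative placeholders, not the actual fit covariance):

```python
import math
import random

random.seed(42)

# Illustrative best-fitting values and covariance for two parameters
# (e.g. log10(D I/H I) and log N(H I)); NOT the actual fit covariance.
best = [-4.582, 20.524]
cov = [[0.012 ** 2, 2.0e-5],
       [2.0e-5, 0.006 ** 2]]

# Cholesky factor of 2*C for a 2x2 covariance matrix.
c11, c12, c22 = 2 * cov[0][0], 2 * cov[0][1], 2 * cov[1][1]
l11 = math.sqrt(c11)
l21 = c12 / l11
l22 = math.sqrt(c22 - l21 ** 2)

def perturbed_start():
    """One Monte Carlo starting point: best fit + a draw from N(0, 2C)."""
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    return [best[0] + l11 * z1, best[1] + l21 * z1 + l22 * z2]

# Each draw would then seed an independent chi-squared minimization.
draws = [perturbed_start() for _ in range(20000)]
var0 = sum((d[0] - best[0]) ** 2 for d in draws) / len(draws)
```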
The final parameter values listed in Table~\ref{tab:compstruct} correspond to the model with $\chi^{2}/{\rm dof}=7770/8678$, which provides the global minimum $\chi^{2}$\footnote{As discussed in \citet{Coo14}, the $\chi^{2}$ value reported here should not be used for a statistical analysis, since: (1) correlations between pixels are not accounted for; and (2) the selected wavelength regions used for fitting tend to be those with smaller statistical fluctuations.}. \section{DLA Chemical Composition} \label{sec:chemcomp} The chemistry of the DLA at $z_{\rm abs}=2.853054$ towards J1358$+$0349 is remarkable for several reasons. On the basis of six O\,\textsc{i}\ lines, we determine the average metallicity of the DLA to be [O/H]~$= -2.804\pm0.015$, assuming a solar O abundance of $\log\,({\rm O/H})_{\odot}=-3.31$ \citep{Asp09}. This cloud is therefore the most pristine DLA currently known (see \citealt{Coo11b}). Furthermore, under our assumption that \textrm{D}\,\textsc{i}/\textrm{H}\,\textsc{i}\ is constant between the two main components, the O abundance of the strongest \textrm{H}\,\textsc{i}\ absorption (component 3 in Table~\ref{tab:compstruct}) is [O/H]~$= -3.07\pm0.03$. We list the absolute and relative element abundances of this DLA in Table~\ref{tab:abund}. Due to the presence of ionized gas (see Section~\ref{sec:analysis}), we quote upper limits on the abundances of Al, Si, S, and Fe; the first ions of these elements are the dominant stage of ionization in neutral (\textrm{H}\,\textsc{i}) gas, but are also present in ionized (\textrm{H}\,\textsc{ii}) gas. We also note that the [N/O] ratio is well-determined in this DLA, since both N\,\textsc{i}\ and O\,\textsc{i}\ trace the \textrm{H}\,\textsc{i}\ bearing gas due to charge transfer reactions \citep{FieSte71,SteWerGel71}. Our value of [N/O] is consistent with, or slightly lower than, the primary N/O plateau \citep{IzoThu99,Cen03,vZHay06,Pet08,PetLedSri08,PetCoo12b,Zaf14}.
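The quoted abundances follow directly from the total column densities in Table~\ref{tab:compstruct} and the \citet{Asp09} solar scale; as an illustrative check of the bookkeeping:

```python
def ratio_vs_solar(logN_X, logN_Y, solar_X, solar_Y):
    """[X/Y] = log(N_X/N_Y) - log(X/Y)_solar, with solar values on the
    12 + log(X/H) scale."""
    return (logN_X - logN_Y) - (solar_X - solar_Y)

SOLAR_O = 8.69  # Asplund et al. (2009), 12 + log(O/H)
SOLAR_N = 7.83
SOLAR_H = 12.0

# Total column densities from Table 1: log N(O I), log N(N I), log N(H I).
O_H = ratio_vs_solar(14.41, 20.524, SOLAR_O, SOLAR_H)  # [O/H] = -2.804
N_O = ratio_vs_solar(12.77, 14.41, SOLAR_N, SOLAR_O)   # [N/O] = -0.78
```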
In the final column of Table~\ref{tab:abund}, we also list the relative element abundances of component 1 ($z_{\rm abs}=2.852874$); this absorption component probably arises from predominantly neutral gas, since the higher stages of ionization are not detected in this component (see Table~\ref{tab:compstruct}). Therefore, if the metals are well-mixed in this near-pristine DLA,\footnote{Note that chemical variations have not been observed in other low metallicity DLAs \citep{Pro03,Coo11b}.} then component 1 should reflect the chemistry of this system. Relative to a typical metal-poor DLA \citep{Coo11b}, we find that this absorption component is somewhat enhanced in oxygen relative to Al, Si, and Fe. It is not unexpected that the lighter elements, such as C and O, exhibit an enhancement relative to the heavier elements (e.g. Fe) in the lowest metallicity DLAs \citep{Coo11a,CooMad14}; this could be a signature of the (washed out?) chemical enrichment from the first generation of stars \citep[e.g.][]{UmeNom03}. 
\begin{table} \begin{center} \caption{\textsc{Chemical composition of the DLA at $z_{\rm abs}=2.853054$ towards J1358$+$0349}} \hspace{-0.6cm}\begin{tabular}{@{}lcccc} \hline \multicolumn{1}{c}{X} & \multicolumn{1}{c}{log\,$\epsilon$(X)$_{\odot}\,^{a,b}$} & \multicolumn{1}{c}{[X/H]$^{c}$} & \multicolumn{1}{c}{[X/O]$^{c}$} & \multicolumn{1}{c}{[X/O]$_{1}\,^{d}$}\\ \hline N & $7.83$ & $-3.58\pm0.11$ & $-0.78\pm0.11$ & $-0.76\pm0.10$ \\ O & $8.69$ & $-2.804\pm0.015$ & \ldots & \ldots \\ Al & $6.44$ & $<-2.95$ & $<-0.15$ & $-0.71\pm0.07$ \\ Si & $7.51$ & $<-2.764$ & $<+0.04$ & $-0.27\pm0.04$ \\ S & $7.14$ & $<-2.64$ & $<+0.16$ & \ldots \\ Fe & $7.47$ & $<-3.25$ & $<-0.45$ & $-0.70\pm0.18$ \\ \hline \end{tabular} \label{tab:abund} \end{center} $^{\rm a}${log\,$\epsilon$(X) = 12 + log\,$N({\rm X})/N({\rm H}).$}\\ \hspace{0.5cm}$^{\rm b}${\citet{Asp09}.}\\ \hspace{0.5cm}$^{\rm c}${Limits are quoted for the first ions due to the presence of ionized gas.}\\ \hspace{0.5cm}$^{\rm d}${The final column lists the element abundance ratios of the mostly neutral absorption component at $z_{\rm abs}=2.852874$ (i.e. component number 1).}\\ \end{table} \section{The Deuterium Abundance} \label{sec:dh} The near-pristine gas in the DLA reported here is a highly suitable environment for measuring the primordial abundance of deuterium (see also \citealt{FumOmePro11} for the most metal-poor Lyman Limit system). However, as discussed in Section~\ref{sec:analysis}, the structure of the absorption lines and the unfortunate level of unrelated contamination limit the \textit{accuracy} with which the deuterium abundance can be measured in this system. 
The measured value of \textrm{D}\,\textsc{i}/\textrm{H}\,\textsc{i}\ in this DLA, expressed as a logarithmic and linear quantity, is: \begin{equation} \log_{10}\,({\rm D\,\textsc{i}/H\,\textsc{i}}) = -4.582\pm0.012 \end{equation} \begin{equation} 10^{5}~{\rm D\,\textsc{i}/H\,\textsc{i}} = 2.62\pm0.07 \end{equation} which is consistent with the inverse variance weighted mean value of the five other high precision measurements reported by \citet{Coo14}, $10^{5}~{\rm D\,\textsc{i}/H\,\textsc{i}} = 2.53\pm0.04$. The \textrm{D}\,\textsc{i}/\textrm{H}\,\textsc{i}\ measurement precision obtained from this new DLA is comparable to the systems analyzed by \citet{Coo14}, reflecting the high S/N of our data and the well-determined value of the total \textrm{H}\,\textsc{i}\ column density. Despite the very low metallicity of this system, we also detect weak absorption from N\,\textsc{i}\ and N\,\textsc{ii}, resulting in an ion ratio $\log({\rm N\,\textsc{ii}/N\,\textsc{i}})=0.48\pm0.12$. As recently highlighted by \citet{CooPet16}, charge transfer ensures that this ion ratio is sensitive to the relative ionization of deuterium and hydrogen in DLAs, and can be used to assess if an ionization correction must be applied to the measured \textrm{D}\,\textsc{i}/\textrm{H}\,\textsc{i}\ ratio to recover the true D/H abundance. Using Equation~28 from \citet{CooPet16}, we estimate that the D/H ionization correction for this system is: \begin{equation} {\rm IC(D/H)}\equiv\log_{10}{\rm (D/H)} - \log_{10}\,N({\rm D\,\textsc{i}})/N({\rm H\,\textsc{i}}) \end{equation} \begin{equation} {\rm IC(D/H)}=(-4.9\pm1.0)\times10^{-4} \end{equation} which includes a 6 per cent uncertainty in the ionization correction relation, as recommended by \citet{CooPet16}. Since this correction is a factor of $\sim25$ below the precision of this single measurement, we do not apply this correction to our results. 
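The conversion between the logarithmic and linear forms of the measurement, and the statement that the ionization correction is a factor of $\sim25$ below the measurement precision, are simple to verify (illustrative Python):

```python
import math

LN10 = math.log(10.0)

# Measured log10(D I/H I) and its 1-sigma uncertainty.
log_dh, sig_log = -4.582, 0.012

# Linear value and error (for small errors, sigma_lin = ln(10) * sigma_log * lin).
dh_lin = 10.0 ** (log_dh + 5.0)    # in units of 1e-5
sig_lin = LN10 * sig_log * dh_lin  # ~0.07

# Ionization correction from Cooke & Pettini (2016), in dex,
# compared with the measurement precision.
ic_dex = 4.9e-4
precision_ratio = sig_log / ic_dex  # ~25
```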
\subsection{Metallicity Evolution} In what follows, we only consider the six highest quality, and self-consistently analyzed D/H abundance measurements; this sample includes the new measurement that we report herein, and the sample of five measurements previously analyzed by \citet{Coo14}. These measures are presented as a function of [O/H] metallicity in Fig.~\ref{fig:measures}, and are listed in Table~\ref{tab:dhmeasures}. For other recent D/H measures, and a more complete list of literature measurements, see \cite{Rie15} and \citet{Bal15}. A visual inspection of Figure~\ref{fig:measures} may suggest that there is a mild evolution (decline) of D/H with metallicity, given that the value deduced here for the lowest metallicity DLA is the highest of the six high-precision measures. However, we caution that the trend is not statistically significant, given the small size of the current sample. Specifically, assuming a linear evolution of the D/H abundance with metallicity, we find: \begin{equation} \label{eqn:linearevol} \log_{10}\,({\rm D/H}) = (-4.583\pm0.010) - (2.8\pm2.0)\times10^{3}({\rm O/H}) \end{equation} where ${\rm (O/H)}=10^{\rm [O/H]-3.31}\equiv N$(O\,\textsc{i})/$N$(\textrm{H}\,\textsc{i}). The $p$-value of a non-evolving D/H ratio (rather than a linear evolution with O/H) is 0.15, indicating that our null hypothesis (the D/H abundance is constant over the metallicity range of our sample) can only be rejected at the $1.4 \sigma$ significance level. It is intriguing that the tentative decline of D/H with increasing metallicity is in the same sense as expected from galactic chemical evolution. On the other hand, published models of the astration of D (see \citealt{Cyb15} for a list of references) do not predict any significant evolution over the metallicity range relevant here. 
For example, the recent galactic chemical evolution models of \citet{Wei16} predict very minor corrections for astration at the metallicities of the DLAs considered here \citep[see also][]{Rom06,Dvo16}. Specifically, the D/H astration correction is estimated to be 0.33 per cent and 0.023 per cent (+0.0015 and +0.0001 in the log) from the least to the most metal-poor DLA listed in Table~\ref{tab:dhmeasures}. These (systematic) upward corrections to D/H are significantly smaller than the random errors associated with the six measures of D/H. For comparison, converting Equation~16 of \citet{Wei16} into the form of our Equation~\ref{eqn:linearevol}, we estimate a slope of $\approx-140$ for their fiducial model, which is a factor of $\sim20$ shallower than the slope estimated using the observational data (see Equation~\ref{eqn:linearevol}). This suggests that astration is not responsible for the mild evolution of D/H with metallicity (if there is one at all over the range of O/H values of our sample). Another possibility is that deuterium may be preferentially depleted onto dust grains \citep{Jur82,Dra04,Dra06}. This effect has been seen in the local interstellar medium of the Milky Way \citep{Woo04,ProTriHow05,Lin06,EliProLop07,LalHebWal08,ProSteFie10}. However, unlike the Milky Way, the DLAs that we investigate here are very low metallicity ([Fe/H]~$<-2.0$); even the most refractory elements in such DLAs exhibit negligible dust depletions \citep{Pet97,Vla04,Ake05}, and very low metallicity DLAs are not expected to harbor a significant amount of dust \citep[see][and references therein]{MurBer16}. Ultimately, this issue will be clarified by extending the number of precision measures of D/H over a wider range of metallicity than covered by the present sample.
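The linear-evolution fit of Equation~\ref{eqn:linearevol}, and the significance of the slope relative to a constant D/H, can be reproduced from the six measurements in Table~\ref{tab:dhmeasures} with a standard inverse-variance weighted least-squares fit (a sketch of the statistics only; the profile fitting itself requires \textsc{alis}):

```python
import math

# [O/H] and log10(D I/H I) +/- 1-sigma error, from Table 3.
OH = [-1.771, -2.416, -2.804, -2.335, -1.922, -1.650]
logdh = [-4.589, -4.597, -4.582, -4.588, -4.601, -4.619]
err = [0.026, 0.018, 0.012, 0.012, 0.009, 0.026]

x = [10.0 ** (m - 3.31) for m in OH]  # (O/H) = N(O I)/N(H I)
w = [1.0 / e ** 2 for e in err]

# Weighted least squares: log10(D/H) = a + b * (O/H).
S = sum(w)
Sx = sum(wi * xi for wi, xi in zip(w, x))
Sy = sum(wi * yi for wi, yi in zip(w, logdh))
Sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
Sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, logdh))
delta = S * Sxx - Sx ** 2
a = (Sxx * Sy - Sx * Sxy) / delta  # intercept, ~ -4.583 +/- 0.010
b = (S * Sxy - Sx * Sy) / delta    # slope, ~ -2.8e3 +/- 2.0e3
sig_a = math.sqrt(Sxx / delta)
sig_b = math.sqrt(S / delta)

# Null model: constant D/H at the inverse-variance weighted mean.
mean = Sy / S
sig_mean = 1.0 / math.sqrt(S)
chi2_const = sum(wi * (yi - mean) ** 2 for wi, yi in zip(w, logdh))
chi2_lin = sum(wi * (yi - a - b * xi) ** 2
               for wi, xi, yi in zip(w, x, logdh))
# One extra parameter: the improvement is chi-squared distributed with 1 dof.
dchi2 = chi2_const - chi2_lin
p_value = math.erfc(math.sqrt(dchi2 / 2.0))  # ~0.15, i.e. ~1.4 sigma
n_sigma = math.sqrt(dchi2)
```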
\begin{figure*} \centering {\includegraphics[angle=0,width=85mm]{fig_5a} \hspace{6mm}\includegraphics[angle=0,width=85mm]{fig_5b}}\\ \caption{ We plot the current sample of high quality primordial D/H abundance measurements (symbols with error bars) as a function of the oxygen abundance. The green symbol (with the lowest value of [O/H]) corresponds to the new measurement reported here, and the blue symbols are taken from \citet{Coo14}. The red dashed and dotted horizontal lines indicate the 68 and 95 per cent confidence interval on the weighted mean value of the six high precision D/H measures listed in Table~\ref{tab:dhmeasures}. The right axes show the conversion between D/H and $\Omega_{\rm B,0}\,h^{2}$\ for the Standard Model. The conversion shown in the left panel uses the recent theoretical determination of the $d(p, \gamma)^3$He reaction rate (and its error) by \citet{Mar16}, while the right panel uses an empirical $d(p, \gamma)^3$He rate and error based on the best available experimental data (see \citet{NolBur00} and \citet{NolHol11} for a critical assessment of the available experimental data). In both panels, the gray horizontal band shows the Standard Model D/H abundance based on our BBN calculations (see text) and the universal baryon density determined from the CMB temperature fluctuations \citep{Efs15}. The dark and light shades of gray represent the 68 and 95 per cent confidence bounds, respectively, including the uncertainty in the conversion of $\Omega_{\rm B,0}\,h^{2}$\ to D/H (0.83 per cent for the left panel and 2.0 per cent for the right panel). The Standard Model value displayed in the left panel is 0.02~dex lower in $\log_{10}$(D/H) than that shown in Figure~5 of \citet{Coo14}. This shift is largely due to the updated \textit{Planck} results \citep{Efs15}, and the updated theoretical $d(p,\gamma)^{3}{\rm He}$ reaction rate \citep{Mar16}. 
} \label{fig:measures} \end{figure*} \begin{table*} \begin{center} \caption{\textsc{precision d/h measures considered in this paper}} \hspace{-0.6cm}\begin{tabular}{@{}lccccc} \hline \multicolumn{1}{c}{QSO} & \multicolumn{1}{c}{$z_{\rm em}$} & \multicolumn{1}{c}{$z_{\rm abs}$} & \multicolumn{1}{c}{log~$N$(\textrm{H}\,\textsc{i})/cm$^{-2}$} & \multicolumn{1}{c}{[O/H]$^{\rm a}$} & \multicolumn{1}{c}{log~\textrm{D}\,\textsc{i}/\textrm{H}\,\textsc{i}}\\ \hline HS\,0105$+$1619 & $2.652$ & $2.53651$ & $19.426\pm0.006$ & $-1.771\pm0.021$ & $-4.589\pm0.026$ \\ Q0913$+$072 & $2.785$ & $2.61829$ & $20.312\pm0.008$ & $-2.416\pm0.011$ & $-4.597\pm0.018$ \\ SDSS~J1358$+$0349 & $2.894$ & $2.85305$ & $20.524\pm0.006$ & $-2.804\pm0.015$ & $-4.582\pm0.012$ \\ SDSS~J1358$+$6522 & $3.173$ & $3.06726$ & $20.495\pm0.008$ & $-2.335\pm0.022$ & $-4.588\pm0.012$ \\ SDSS~J1419$+$0829 & $3.030$ & $3.04973$ & $20.392\pm0.003$ & $-1.922\pm0.010$ & $-4.601\pm0.009$ \\ SDSS~J1558$-$0031 & $2.823$ & $2.70242$ & $20.75\pm0.03$ & $-1.650\pm0.040$ & $-4.619\pm0.026$ \\ \hline \end{tabular} \label{tab:dhmeasures} $^{\rm a}${We adopt the solar value log\,(O/H) + 12 = 8.69 \citep{Asp09}.}\\ \end{center} \end{table*} \subsection{Implications for Cosmology} As discussed above, the six self-consistently analyzed D/H abundance measurements that we consider here are statistically consistent with being drawn from the same value. Hereafter, we assume that all six measures provide a reliable estimate of the primordial abundance of deuterium, $({\rm D/H})_{\rm P}$. From the weighted mean of these independent values we deduce our best estimate of the primordial deuterium abundance: \begin{equation} \label{eqn:dhp} \log_{10}\,({\rm D/H})_{\rm P} = -4.5940\pm0.0056 \end{equation} or, expressed as a linear quantity: \begin{equation} 10^{5}\,({\rm D/H})_{\rm P} = 2.547\pm0.033. 
\end{equation} To compare our determination of $({\rm D/H})_{\rm P}$ with the latest \textit{Planck} CMB results, we computed a series of detailed BBN calculations that include the latest nuclear physics input. Our simulation suite is identical to that described by \citet{NolBur00}, but includes updates to: (1) The neutron lifetime from \citet{PDG14}; (2) new experimental cross section measurements for $d(d,n)^{3}{\rm He}$, $d(d,p)^{3}{\rm H}$ \citep{Gre95,Leo06}, and $^{3}{\rm He}(\alpha,\gamma)^{7}{\rm Be}$ \citep{CybFieOli08,Ade11}; and (3) new theoretical cross section calculations of $p(n,\gamma)d$ \citep{Rup00} and $d(p,\gamma)^3\mathrm{He}$ \citep{Mar16}. For further details on all but $d(p,\gamma)^3\mathrm{He}$, see \citet{NolHol11}. The $d(p,\gamma)^3\mathrm{He}$ reaction rate can now be reliably computed with a precision of about 1 per cent, compared with current laboratory measurements that have an uncertainty of $\gtrsim7$ per cent. Our previous work used the $d(p,\gamma)^3\mathrm{He}$ reaction rate calculated by \citet{Mar05}. Recently, \citet{Mar16} have published a revised calculation, which includes a $\sim 2.5$ per cent relativistic correction that had previously been found to be large in $d(n,\gamma)^3\mathrm{H}$. The new calculation also includes a quantitative error estimate that is better than 1 per cent at most energies and incorporates wave functions that have been extensively tested for accuracy. We use the numerical uncertainty quoted by \citet{Mar16}, and do not use laboratory data to inform the theoretical rate (see e.g. \citealt{Coc15}); at BBN energies, the laboratory data predominantly consist of one experiment that has relatively low precision and is in moderate conflict with the calculation. For comparison, we also consider how the output nucleosynthesis is altered if we use the empirical $d(p,\gamma)^3\mathrm{He}$ reaction rate instead of the theoretical rate (see below). 
Although we use the numerical uncertainty quoted by \citet{Mar16}, it should be pointed out that no quantitative estimate exists for further uncertainties in the construction of the nucleon-nucleon potential and current operators, which could be of similar size. We have attempted to account for some of this with a 0.5 per cent correlated error on all points of the curve. We now briefly summarize our BBN calculations, and direct the reader to \citet{NolBur00} for further details. First, the calculations are initialized with a Gaussian random realization of each cross section measurement or (in the cases of $p(n,\gamma)d$ and $d(p,\gamma)^3\mathrm{He}$) calculation. The distributions of point-to-point errors and of the (usually larger) normalization errors shared by all points from a given experiment are sampled independently. Then a continuous, piecewise polynomial is fit to the sampled cross sections. The thermal reaction rates at BBN temperatures are calculated for each realization, using the sampled and fitted cross sections. These rates are used as input into a BBN code, along with a Gaussian random realization of the neutron lifetime, and the output nucleosynthesis is stored. At a given value of the expansion rate (parameterized by the number of neutrino species, $N_{\nu}$\footnote{Our BBN model includes the effect of incomplete neutrino decoupling, which makes $N_\mathrm{eff} \neq 3$ at recombination for the Standard Model, as a small additive correction to the $Y_\mathrm{P}$ yield. BBN yields away from the Standard Model are computed by rescaling the neutrino energy density during BBN by a factor $N_\nu/3$. We then assume that the expansion rate at recombination is governed by an effective number of neutrino species, $N_\mathrm{eff} = 3.046 N_\nu /3$ (\citealt{Man05}; see also \citealt{Gro15}).
To the best of our knowledge, no detailed calculation of neutrino weak decoupling has been published for expansion rates equivalent to $N_\nu \neq 3$.}) and the baryon-to-photon density ratio ($\eta_{10}$, in units of $10^{-10}$), we perform 24\,000 Monte Carlo realizations, a number deemed sufficient to provide smooth $2\sigma$ confidence contours as a function of $\eta_{10}$ \citep[see][]{NolBur00}. This procedure provides a thorough accounting of the current error budget for primordial nucleosynthesis calculations. We computed the resulting nucleosynthesis over the range $1.8\le{N_{\nu}}\le4$ (in steps of $0.2$) and $0.477\le\log_{10}\,\eta_{10}\le1.0$ (in steps of $\sim0.026$), and interpolated this two-dimensional grid with a cubic spline. Our interpolated grid of values is accurate to within $0.1$ per cent. For a given $N_{\rm eff}$\ and $\Omega_{\rm B,0}\,h^{2}$, the final distribution of D/H values is Gaussian in shape, and offers an uncertainty on (D/H)$_{\rm P}$ of $\lesssim1$ per cent over the full parameter grid; for the Standard Model, the uncertainty of the primordial deuterium abundance is $\sim0.83$ per cent when using the theoretical $d(p,\gamma)^3\mathrm{He}$ reaction rate. For convenience, we also provide the following simple fitting formula that describes how the D/H abundance depends on $\eta_{10}$ and $N_{\rm eff}$: \begin{equation} \label{eqn:dhconv} 10^{5}\,({\rm D/H})_{\rm P} = 2.47\,(1\pm0.01)\,(6/\eta_{\rm D})^{1.68} \end{equation} where \begin{equation} \eta_{\rm D} = \eta_{10} - 1.08\,(S-1)\,(1.1\,\eta_{10}-1) \end{equation} \begin{equation} \label{eqn:dhconvb} S = \Big(1+\frac{7\Delta N_{\nu}}{43}\Big)^{1/2} \end{equation} and $N_{\rm eff} = 3.046\,(1 + \Delta N_{\nu}/3)$. This functional form is a slightly modified version of the form introduced by \citet{KneSte04}, and is accurate to within 0.4 per cent over the range $2.3\le{N_{\rm eff}}\le3.7$ and $5.4\le\eta_{10}\le6.6$.
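The fitting formula above is straightforward to evaluate directly. The sketch below (an illustration, not the paper's BBN code) encodes Equations~\ref{eqn:dhconv}--\ref{eqn:dhconvb}; note that at $\eta_{10}=6$ and the Standard Model value $N_{\rm eff}=3.046$ it returns the central normalization $2.47\times10^{-5}$ by construction:

```python
def dh_primordial(eta10, neff):
    """Primordial D/H from the fitting formula of the text.

    Central value only (the +/-1 per cent fit uncertainty is dropped);
    valid to ~0.4 per cent for 2.3 <= neff <= 3.7 and 5.4 <= eta10 <= 6.6.
    """
    delta_nnu = 3.0 * (neff / 3.046 - 1.0)        # N_eff = 3.046 (1 + dN_nu/3)
    s = (1.0 + 7.0 * delta_nnu / 43.0) ** 0.5     # expansion-rate factor S
    eta_d = eta10 - 1.08 * (s - 1.0) * (1.1 * eta10 - 1.0)
    return 2.47e-5 * (6.0 / eta_d) ** 1.68

print(dh_primordial(6.0, 3.046))  # 2.47e-05 for the Standard Model at eta10 = 6
```

As expected, the predicted abundance rises for lower baryon density and for faster expansion (larger $N_{\rm eff}$).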
The uncertainty quoted in Equation~\ref{eqn:dhconv} includes both the 0.4 per cent uncertainty in the form of the fitting function as well as the uncertainty in the BBN calculation. To convert the baryon-to-photon ratio into a measurement of the cosmic density of baryons, we use the conversion $\eta_{10}=(273.78\pm0.18)\times\Omega_{\rm B,0}\,h^{2}$ \citep{Ste06}, which assumes a primordial helium mass fraction $Y_{\rm P}=0.2471\pm0.0005$ (see Equation 43-44 from \citealt{LopTur99}) and a present day CMB photon temperature $T_{\gamma,0}=2.72548\pm0.00057$ \citep{Fix09}. Using the weighted mean value of the primordial deuterium abundance (Equation~\ref{eqn:dhp}), we estimate the cosmic density of baryons for the Standard Model: \begin{equation} \label{eqn:obhhbbn} 100\,\Omega_{\rm B,0}\,h^{2}({\rm BBN}) = 2.156\pm0.017\pm0.011 \end{equation} where the first error term includes the uncertainty in the measurement and analysis, and the second error term provides the uncertainty in the BBN calculations. This level of precision is comparable to or somewhat better than that achieved by the latest data release from the \textit{Planck} team \citep{Efs15}. The value of $\Omega_{\rm B,0}\,h^{2}({\rm BBN})$ reported here (Equation~\ref{eqn:obhhbbn}) differs from the one reported by \citet{Coo14} in two ways: (1) Our new measure of $\Omega_{\rm B,0}\,h^{2}({\rm BBN})$ is lower by 0.00046 (i.e. a $\sim2.1$ per cent change); and (2) the measurement uncertainty is now the dominant term of the total error budget, whereas the earlier estimate was dominated by the uncertainty in the BBN calculations. The reduced uncertainty here results from using the \citet{Mar16} $d(p,\gamma)^3\mathrm{He}$ cross section and its estimated $\sim1$ per cent error. 
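The chain from the measured $({\rm D/H})_{\rm P}$ to the baryon density can be sketched by inverting the fitting formula for the Standard Model case ($\Delta N_{\nu}=0$, so $\eta_{\rm D}=\eta_{10}$) and applying the conversion $\eta_{10}=273.78\,\Omega_{\rm B,0}\,h^{2}$. Since the fitting function is accurate only to $\sim0.4$ per cent, this reproduces Equation~\ref{eqn:obhhbbn} approximately rather than exactly (the quoted value comes from the full interpolated BBN grid):

```python
# Invert the central fitting formula 10^5 (D/H)_P = 2.47 (6/eta10)^1.68,
# then convert eta10 to the baryon density via eta10 = 273.78 * Omega_B h^2.
dh_p = 2.547e-5                                  # measured (D/H)_P
eta10 = 6.0 * (2.47e-5 / dh_p) ** (1.0 / 1.68)   # Standard Model inversion
obh2 = 100.0 * eta10 / 273.78                    # 100 * Omega_B h^2

print(f"eta10 = {eta10:.3f},  100 Omega_B h^2 = {obh2:.3f}")
# eta10 = 5.891,  100 Omega_B h^2 = 2.152
```

The result, $100\,\Omega_{\rm B,0}\,h^{2}\approx2.15$, agrees with the quoted $2.156$ to within the accuracy of the fitting function.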
Previously, we used the \citet{Mar05} calculation, which lacked a quantitative error estimate.\footnote{Our previous estimate of the $d(p,\gamma)^3\mathrm{He}$ cross section uncertainty was based on experimental cross section measurements below the BBN energy range (with an error of 7 per cent). Note that both the \citet{Mar05} and \citet{Mar16} calculations agree closely with these low energy experimental data \citep{NolHol11}.} The new calculation also reduces the D yield slightly through a combination of a better electromagnetic current operator and more careful attention to the wave function precision.\footnote{\citet{Mar16} also present BBN calculations based on their new cross sections, using the Parthenope code \citep{Pis08}. At the Planck baryon density, they now find $(\mathrm{D/H})_\mathrm{P} = 2.46\times 10^{-5}$ after a small change to their code (Marcucci 2016, private communication). Using either the \citet{Mar16} rate or the \citet{Ade11} rate for $d(p,\gamma)^3\mathrm{He}$, there is a consistent 2\%\ difference between their BBN code and ours.} The \citet{Mar16} cross section calculation results in a change to both the normalization and shape of the D/H abundance as a function of $\eta_{10}$; for the Standard Model, the primordial D/H abundance is shifted by $2.6$ per cent, and the uncertainty of this reaction rate is reduced by a factor of $\sim4$ relative to that used by \citet{Coo14}. The Standard Model value of the cosmic baryon density obtained from our BBN analysis is somewhat lower than that extracted from the temperature fluctuations of the CMB, $100\,\Omega_{\rm B,0}\,h^{2}({\rm CMB})=2.226\pm0.023$ \citep[][see gray bands in Fig.~\ref{fig:measures}]{Efs15}\footnote{This value of $\Omega_{\rm B,0}\,h^{2}$\ corresponds to the TT+lowP+lensing analysis (i.e. the second data column of Table~4 from \citealt{Efs15}).}. This difference corresponds to a $2.3\sigma$ discrepancy between BBN and the CMB for the Standard Model. 
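The $2.3\sigma$ figure follows from combining the quoted uncertainties in quadrature; a minimal check, assuming independent Gaussian errors:

```python
# BBN: 100 Omega_B h^2 = 2.156 +/- 0.017 (measurement) +/- 0.011 (nuclear)
# CMB: 100 Omega_B h^2 = 2.226 +/- 0.023 (Planck TT+lowP+lensing)
bbn, bbn_meas, bbn_nucl = 2.156, 0.017, 0.011
cmb, cmb_err = 2.226, 0.023

sigma_tot = (bbn_meas**2 + bbn_nucl**2 + cmb_err**2) ** 0.5
significance = (cmb - bbn) / sigma_tot
print(f"{significance:.1f} sigma")  # 2.3 sigma
```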
If we consider the \textit{Planck} fits that include high-$l$ polarization, the significance of the disagreement becomes $2.7\sigma$ (TT,TE,EE+lowP+lensing), or $3\sigma$ in combination with external data (TT,TE,EE+lowP+lensing+ext). We also note that the central value of $\Omega_{\rm B,0}\,h^{2}$\ derived from the \textit{Planck} CMB is robust; the \textit{Planck} team consider a series of one parameter extensions to the base $\Lambda$CDM model and in all cases, the uncertainty on $\Omega_{\rm B,0}\,h^{2}$\ is inflated but the central value remains unchanged. By considering a deviation in the Standard Model expansion rate of the Universe, as parameterized by $N_{\rm eff}$, the significance of the disagreement between CMB and BBN is reduced to the $1.5\sigma$ level.\footnote{The disagreement becomes more significant ($2.4\sigma$) if we consider the \textit{Planck} TT,TE,EE+lowP analysis.} This comparison is shown in Fig.~\ref{fig:neffobhh} for the \textit{Planck} TT+lowP analysis \citep[for similar comparisons between CMB and BBN constraints, see][]{Efs14,Efs15,Coo14,NolSte15,Cyb15}. If we assume that $N_{\rm eff}$\ and $\Omega_{\rm B,0}\,h^{2}$\ do not change from BBN to recombination, the combined confidence bounds on the baryon density and the effective number of neutrino families are (95 per cent confidence limits): \begin{eqnarray} 100\,\Omega_{\rm B,0}\,h^{2} &=& 2.235\pm0.071\\ N_{\rm eff} &=& 3.44\pm0.45. \end{eqnarray} \begin{figure} \centering {\includegraphics[angle=0,width=80mm]{fig_6}}\\ \caption{ Comparing the expansion rate (parameterized by $N_{\rm eff}$) and the cosmic density of baryons ($\Omega_{\rm B,0}\,h^{2}$) from BBN (blue contours) and CMB (gray contours). The dark and light shades illustrate the 68\%\ and 95\%\ confidence contours, respectively. } \label{fig:neffobhh} \end{figure} The aforementioned disagreement between the CMB and BBN has emerged as a result of the improved reaction rate calculation reported recently by \citet{Mar16}. 
To show the change introduced by this new rate, we have repeated our BBN calculations using an empirically derived $d(p,\gamma)^3\mathrm{He}$ rate, in place of the theoretical rate. We use all published data that are credible as absolute cross sections \citep{Gri62,Sch97,Ma97,Cas02},\footnote{ Of these, only \citet{Ma97} probe the key energy range of late-BBN deuterium burning; see \citet{NolBur00} and \citet{NolHol11} for further details.} and generate Monte Carlo realizations of these experimental data, as described above. Our BBN calculations, combined with our measurement of the primordial D/H abundance (Equation~\ref{eqn:dhp}), return a Standard Model value of the cosmic baryon density: \begin{equation} \label{eqn:obhhbbne} 100\,\Omega_{\rm B,0}\,h^{2}({\rm BBN}) = 2.260\pm0.018\pm0.029 \end{equation} which is in somewhat better agreement with the \citet{Efs15} value, albeit with a much larger nuclear error (i.e. the second error term in Equation~\ref{eqn:obhhbbne}).\footnote{The data-driven Monte Carlo procedure that we use here has greater freedom to match $S$-factor data than the widely used quadratic fit of \citet{Ade11}, resulting in a somewhat lower $d(p,\gamma)^3\mathrm{He}$ rate. Adopting the \citet{Ade11} $S$-factor curve would change Equation~\ref{eqn:obhhbbne} to $100\,\Omega_{\rm B,0}\,h^{2}({\rm BBN}) = 2.225\pm0.018\pm0.033$.} In the right panel of Fig.~\ref{fig:measures}, we compare our D/H measurements to the Standard Model deuterium abundance based on the \citet{Efs15} baryon density and our calculations that use the empirical $d(p,\gamma)^3\mathrm{He}$ rate. Using the empirical rate shifts the Standard Model value of the primordial D/H abundance upwards by $\sim8$ per cent, and inflates the corresponding uncertainty by a factor of $\sim1.5$. At present, it is difficult to tell how seriously to interpret the discrepancy between BBN and the CMB. 
Doubling the estimated nuclear error in Equation~\ref{eqn:obhhbbn} still leaves us with a $2\sigma$ disagreement (assuming $N_\mathrm{eff} = 3.046)$. This doubling would require a $\sim4$ per cent error on $d(p,\gamma)^3\mathrm{He}$, which seems a large overestimate relative to the $\sim1$ per cent errors quoted by \citet{Mar16}\footnote{Similarly, other relevant reaction rates, such as $d+d$, have been measured in the laboratory with high precision and are unlikely to contribute significantly to the error budget.}. Alternatively, the CMB and BBN would agree exactly if the \citet{Mar16} rate was scaled downwards by $\sim10$ per cent \citep[see e.g.][]{DiV14,Efs15}; however, a significant change to the rate normalization is unlikely, given the accuracy with which rates can now be calculated for a three-body system \citep{Kie08}. It is helpful that the lack of empirical information on $d(p,\gamma)^3\mathrm{He}$ at BBN energies is currently being addressed by the LUNA collaboration \citep{Gus14}. However, if they achieve high precision, their result seems unlikely to fit well with both cosmology and nuclear theory simultaneously. \section{Summary and Conclusions} \label{sec:conc} Several probes of cosmology have now pinned down the content of the Universe with exquisite detail. In this paper, we build on our previous work to obtain precise measurements of the primordial deuterium abundance by presenting high quality spectra of a DLA at $z_{\rm abs}=2.852054$ towards the quasar J1358$+$0349, taken with both the UVES and HIRES instruments. Our primary conclusions are as follows:\\ \noindent ~~(i) The absorption system reported here is the most metal-poor DLA currently known, with an average oxygen abundance [O/H]~$= -2.804\pm0.015$. Furthermore, in one of the absorption components, we estimate [O/H]~$= -3.07\pm0.03$. This environment is therefore ideally suited to estimate the primordial abundance of deuterium. 
On the other hand, we have found an unusual amount of unrelated absorption that contaminates many of the weak, high order, \textrm{D}\,\textsc{i}\ absorption lines. Consequently, the accuracy in the determination of the D/H ratio achieved for this system is not as high as the best cases reported by \citet[][J1419$+$0829]{PetCoo12a} and \citet[][J1358$+$6522]{Coo14}, see Table~\ref{tab:dhmeasures}. \smallskip \noindent ~~(ii) Using an identical analysis strategy to that described in \citet{Coo14}, we measure a D/H abundance of $\log_{10}\,({\rm D\,\textsc{i}/H\,\textsc{i}}) = -4.582\pm0.012$ for this near-pristine DLA. We estimate that this abundance ratio should be adjusted by $(-4.9\pm1.0)\times10^{-4}$~dex to account for \textrm{D}\,\textsc{ii}\ charge transfer recombination with \textrm{H}\,\textsc{i}. This ionization correction is a factor of $\sim25$ less than the D/H measurement precision of this system, and confirms that ${\rm D\,\textsc{i}/H\,\textsc{i}}\cong{\rm D/H}$ in DLAs. \smallskip \noindent ~~(iii) On the basis of six high precision and self-consistently analyzed D/H abundance measurements, we report tentative evidence for a decrease of the D/H abundance with increasing metallicity. If confirmed, this modest decrease of the D/H ratio could provide an important opportunity to study the chemical evolution of deuterium in near-pristine environments. \smallskip \noindent ~~(iv) A weighted mean of these six independent D/H measures leads to our best estimate of the primordial D/H abundance, $\log_{10}\,({\rm D/H})_{\rm P} = -4.5940\pm0.0056$. We combine this new determination of (D/H)$_{\rm P}$ with a suite of detailed Monte Carlo BBN calculations. These calculations include updates to several key cross sections, and propagate the uncertainties of the experimental and theoretical reaction rates. 
We deduce a value of the cosmic baryon density $100\,\Omega_{\rm B,0}\,h^{2}({\rm BBN}) = 2.156\pm0.017\pm0.011$, where the first error term represents the D/H measurement uncertainty and the second error term includes the uncertainty of the BBN calculations. \smallskip \noindent ~~(v) The above estimate of $\Omega_{\rm B,0}\,h^{2}$(BBN) is comparable in precision to the recent determination of $\Omega_{\rm B,0}\,h^{2}$\ from the CMB temperature fluctuations recorded by the \textit{Planck} satellite. However, using the best available BBN reaction rates, we find a $2.3\sigma$ difference between $\Omega_{\rm B,0}\,h^{2}$(BBN) and $\Omega_{\rm B,0}\,h^{2}$(CMB), assuming the Standard Model value for the effective number of neutrino species, $N_{\rm eff}=3.046$. Allowing $N_{\rm eff}$\ to vary, the disagreement between BBN and the CMB can be reduced to the $1.5\sigma$ significance level, resulting in a bound on the effective number of neutrino families, $N_{\rm eff} = 3.44\pm0.45$. \smallskip \noindent ~~(vi) By replacing the theoretical $d(p,\gamma)^{3}{\rm He}$ cross section with the current best empirical estimate, we derive a baryon density $100\,\Omega_{\rm B,0}\,h^{2}({\rm BBN}) = 2.260\pm0.034$, which agrees with the \textit{Planck} baryon density for the Standard Model. However, this agreement is partly due to the larger error estimate for the nuclear data. Forthcoming experimental measurements of the crucial $d(p,\gamma)^{3}{\rm He}$ reaction rate by the LUNA collaboration will provide important additional information regarding this discrepancy, since the empirical rate currently rests mainly on a single experiment, and absolute cross sections often turn out in hindsight to have underestimated errors. The theory of few-body nuclear systems is now precise enough that a resolution in favor of the current empirical rate would present a serious problem for nuclear physics. 
\smallskip Our study highlights the importance of expanding the present small statistics of high precision D/H measurements, in combination with new efforts to achieve high precision in the nuclear inputs to BBN. We believe that precise measurements of the primordial D/H abundance should be considered an important goal for the future generation of echelle spectrographs on large telescopes, optimized for wavelengths down to the atmospheric cutoff. This point is discussed further in Appendix~\ref{app:future}. \section*{Acknowledgements} We are grateful to the staff astronomers at the VLT \&\ Keck Observatories for their assistance with the observations, and to Jason X. Prochaska and Michael Murphy for providing some of the software that was used to reduce the data. We thank Gary Steigman for interesting discussions, and useful comments on an early draft. We also thank an anonymous referee who provided helpful suggestions that improved the presentation of this work. RJC is currently supported by NASA through Hubble Fellowship grant HST-HF-51338.001-A, awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5- 26555. RAJ acknowledges support from an NSF Astronomy and Astrophysics Postdoctoral Fellowship under award AST-1102683. We thank the Hawaiian people for the opportunity to observe from Mauna Kea; without their hospitality, this work would not have been possible. RJC thanks JBC for his impeccable timing and invaluable insight.
\section*{Introduction} The ample cone of a del Pezzo surface $Y$ (or rather the associated dual polyhedron) was studied classically by, among others, Gosset, Schoute, Kantor, Coble, Todd, Coxeter, and Du Val. For a brief historical discussion, one can consult the remarks in \S11.x of \cite{Coxeter}. From this point of view, the lines on $Y$ are the main object of geometric interest, as they are the walls of the ample cone or the vertices of the dual polyhedron. The corresponding root system (in case $K_Y^2 \leq 6$) only manifests itself geometrically by allowing del Pezzo surfaces with rational double points, or equivalently smooth surfaces $Y$ with $-K_Y$ nef and big but not ample. This is explicitly worked out in Part II of Du Val's series of papers \cite{duV}. On the other hand, the root system, or rather its Weyl group, appears for a smooth del Pezzo surface as a group of symmetries of the ample cone, a fact which (in a somewhat different guise) was already known to Cartan. Perhaps the culmination of the classical side of the story is Du Val's 1937 paper \cite{duV2}, where he also systematically considers the blowup of $\mathbb{P}^2$ at $n\geq 9$ points. In modern times, Manin explained the appearance of the Weyl group by noting that the orthogonal complement to $K_Y$ in $H^2(Y;\mathbb{Z})$ is a root lattice $\Lambda$. Moreover, given any root of $\Lambda$, in other words an element $\beta$ of square $-2$, there exists a deformation of $Y$ for which $\beta = \pm[C]$, where $C$ is a smooth rational curve of self-intersection $-2$. For modern expositions of the theory, see for example Manin's book \cite{Manin} or Demazure's account in \cite{777}. In general, it seems hard to study an arbitrary rational surface $Y$ without imposing some extra conditions. One very natural condition is that $-K_Y$ is effective, i.e.\ that $-K_Y = D$ for an effective divisor $D$. 
In case the intersection matrix of $D$ is negative definite, such pairs $(Y,D)$ arise naturally in the study of minimally elliptic singularities: the case where $D$ is a smooth elliptic curve corresponds to the case of simple elliptic singularities, the case where $D$ is a nodal curve or a cycle of smooth rational curves meeting transversally corresponds to the case of cusp singularities, and the case where $D$ is reduced but has one component with a cusp, two components with a tacnode or three components meeting at a point, corresponds to triangle singularities. From this point of view, the case where $D$ is a cycle of rational curves is the most plentiful. The systematic study of such surfaces in case the intersection matrix of $D$ is negative definite dates back to Looijenga's seminal paper \cite{Looij}. However, for various technical reasons, most of the results of that paper are proved under the assumption that the number of components in the cycle is at most $5$. Some of the main points of \cite{Looij} are as follows: Denote by $R$ the set of elements in $H^2(Y; \mathbb{Z})$ of square $-2$ which are orthogonal to the components of $D$ and which are of the form $\pm [C]$, where $C$ is a smooth rational curve disjoint from $D$, for some deformation of the pair $(Y,D)$. In terms of deformations of singularities, the set $R$ is related to the possible rational double point singularities which can arise as deformations of the dual cusp to the cusp singularity corresponding to $D$. Looijenga noted that, in general, there exist elements in $H^2(Y; \mathbb{Z})$ of square $-2$ which are orthogonal to the components of $D$ but which do not lie in $R$. Moreover, reflections in elements of the set $R$ give symmetries of the ``generic" ample cone (which is the same as the ample cone in case there are no smooth rational curves on $Y$ disjoint from $D$). 
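To make the role of these reflections concrete, here is a toy numerical sketch (illustrative only; the lattice and the root below are hypothetical choices, not data from any particular pair $(Y,D)$). For a class $\beta$ of square $-2$, the reflection $s_\beta(x) = x + (x\cdot\beta)\,\beta$ is an integral involution preserving the intersection form; reflections of this type in elements of $R$ are the symmetries of the generic ample cone discussed above.

```python
# Toy lattice: H^2 of the blowup of P^2 at two points, with basis
# (h, e1, e2) and intersection form diag(1, -1, -1).
Q = [[1, 0, 0], [0, -1, 0], [0, 0, -1]]

def dot(x, y):
    """Intersection pairing x . y with respect to the form Q."""
    return sum(x[i] * Q[i][j] * y[j] for i in range(3) for j in range(3))

def reflect(x, beta):
    """Reflection s_beta(x) = x + (x . beta) beta, for beta of square -2."""
    assert dot(beta, beta) == -2
    c = dot(x, beta)
    return [x[i] + c * beta[i] for i in range(3)]

beta = [0, 1, -1]   # the class e1 - e2, of square -2
x = [3, 1, 2]       # an arbitrary class

y = reflect(x, beta)
print(y)                          # s_beta swaps e1 and e2: [3, 2, 1]
print(dot(y, y) == dot(x, x))     # the form is preserved: True
print(reflect(y, beta) == x)      # s_beta is an involution: True
```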
Finally, still under the assumption of at most 5 components, any isometry of $H^2(Y; \mathbb{Z})$ which preserves the positive cone, the classes $[D_i]$ and the set $R$, preserves the generic ample cone. This paper, which is an attempt to see how much of \cite{Looij} can be generalized to the case of arbitrarily many components, is motivated by a question raised by the recent work of Gross, Hacking and Keel \cite{GHK} on, among other matters, the global Torelli theorem for pairs $(Y,D)$ where $D$ is an anticanonical cycle on the rational surface $Y$. In order to formulate this theorem in a fairly general way, one would like to characterize the isometries $f$ of $H^2(Y; \mathbb{Z})$, preserving the positive cone and fixing the classes $[D_i]$, which preserve the ample cone of $Y$. It is natural to ask if, at least in the generic case, the condition that $f(R)=R$ is sufficient. In this paper, we give various criteria on $R$ which ensure that, if an isometry $f$ of $H^2(Y; \mathbb{Z})$ preserves the positive cone, the classes $[D_i]$ and the set $R$, then $f$ preserves the generic ample cone. Typically, one needs a hypothesis which says that $R$ is large. For example, one such hypothesis is that there is a subset of $R$ which spans a negative definite codimension one subspace of the orthogonal complement to the components of $D$. In theory, at least under various extra hypotheses, such a result gives a necessary and sufficient condition for an isometry to preserve the generic ample cone. In practice, however, the determination of the set $R$ in general is a difficult problem, which seems close in its complexity to the problem of describing the generic ample cone of $Y$.
Finally, we show that some assumptions on $(Y,D)$ are necessary, by giving examples where $R=\emptyset$, so that the condition that an isometry $f$ preserves $R$ is automatic, and of isometries $f$ such that $f$ preserves the positive cone, the classes $[D_i]$ and (vacuously) the set $R$, but $f$ does not preserve the generic ample cone. We do not yet have a good understanding of the relationship between preserving the ample cone and preserving the set $R$. An outline of this paper is as follows. The preliminary Section 1 reviews standard methods for constructing nef classes on algebraic surfaces and applies this to the study of when the normal surface obtained by contracting a negative definite anticanonical cycle on a rational surface is projective. In Section 2, we analyze the ample cone and generic ample cone of a pair $(Y,D)$ and show that the set $R$ defined by Looijenga is exactly the set of elements $\beta$ in $H^2(Y; \mathbb{Z})$ of square $-2$ which are orthogonal to the components of $D$ such that reflection about $\beta$ preserves the generic ample cone. Much of the material of \S2 overlaps with results in \cite{GHK}, proved there by somewhat different methods. Section 3 is devoted to giving various sufficient conditions for an isometry $f$ of $H^2(Y; \mathbb{Z})$ to preserve the generic ample cone, including the one described above. Section 4 gives examples of pairs $(Y,D)$ satisfying the sufficient conditions of \S3 where the number of components of $D$ and the multiplicity $-D^2$ are arbitrarily large, as well as examples showing that some hypotheses on $(Y,D)$ are necessary. \medskip \noindent\textbf{Acknowledgements.} It is a pleasure to thank Mark Gross, Paul Hacking and Sean Keel for access to their manuscript \cite{GHK} and for extremely stimulating correspondence and conversations about these and other matters, and Radu Laza for many helpful discussions. \medskip \noindent\textbf{Notation and conventions.} We work over $\mathbb{C}$. 
If $X$ is a smooth projective surface with $h^1(\mathcal{O}_X) = h^2(\mathcal{O}_X) = 0$ and $\alpha \in H^2(X; \mathbb{Z})$, we denote by $L_\alpha$ the corresponding holomorphic line bundle, i.e.\ $c_1(L_\alpha) = \alpha$. Given a curve $C$ or divisor class $G$ on $X$, we denote by $[C]$ or $[G]$ the corresponding element of $H^2(X; \mathbb{Z})$. Intersection pairing on curves or divisors, or on elements in the second cohomology of a smooth surface (viewed as a canonically oriented $4$-manifold) is denoted by multiplication. \section{Preliminaries} Throughout this paper, $Y$ denotes a smooth rational surface with $-K_Y = D = \sum_{i=1}^rD_i$ a (reduced) cycle of rational curves, i.e.\ each $D_i$ is a smooth rational curve and $D_i$ meets $D_{i\pm 1}$ transversally, where $i$ is taken mod $r$, except for $r=1$, in which case $D_1=D$ is an irreducible nodal curve. We note, however, that many of the results in this paper can be generalized to the case where $D\in |-K_Y|$ is not assumed to be a cycle. The integer $r=r(D)$ is called the \textsl{length} of $D$. An \textsl{orientation} of $D$ is an orientation of the dual graph (with appropriate modifications in case $r=1$). We shall abbreviate the data of the surface $Y$ and the oriented cycle $D$ by $(Y,D)$ and refer to it as an \textsl{anticanonical pair}. If the intersection matrix $(D_i\cdot D_j)$ is negative definite, we say that $(Y,D)$ is a \textsl{negative definite anticanonical pair}. \begin{definition}\label{defcurves} An irreducible curve $E$ on $Y$ is an \textsl{exceptional curve} if $E\cong \mathbb{P}^1$, $E^2 = -1$, and $E \neq D_i$ for any $i$. An irreducible curve $C$ on $Y$ is a \textsl{$-2$-curve} if $C\cong \mathbb{P}^1$, $C^2 = -2$, and $C \neq D_i$ for any $i$. Let $\Delta_Y$ be the set of all $-2$-curves on $Y$, and let $\mathsf{W}({\Delta_Y})$ be the group of integral isometries of $H^2(Y; \mathbb{R})$ generated by the reflections in the classes in the set $\Delta_Y$.
\end{definition} \begin{definition} Let $\Lambda = \Lambda(Y,D) \subseteq H^2(Y; \mathbb{Z})$ be the orthogonal complement of the lattice spanned by the classes $[D_i]$. Fixing the identification $\operatorname{Pic}^0D \cong \mathbb{G}_m$ defined by the orientation of the cycle $D$, we define the \textsl{period map} $\varphi_Y \colon \Lambda \to \mathbb{G}_m$ via: if $\alpha \in \Lambda$ and $L_\alpha$ is the corresponding line bundle, then $\varphi_Y(\alpha) \in \mathbb{G}_m$ is the image of the line bundle of multi-degree $0$ on $D$ defined by $L_\alpha|D$. Clearly $\varphi_Y$ is a homomorphism. \end{definition} By \cite{Looij}, \cite{FriedmanScattone}, \cite{Fried2}, we have: \begin{theorem}\label{surjper} The period map is surjective. More precisely, given $Y$ as above and given an arbitrary homomorphism $\varphi \colon \Lambda \to\mathbb{G}_m$, there exists a deformation of the pair $(Y,D)$ over a smooth connected base, which we can take to be a product of $\mathbb{G}_m$'s, such that the monodromy of the family is trivial and there exists a fiber of the deformation, say $(Y', D')$ such that, under the induced identification of $\Lambda(Y',D')$ with $\Lambda$, $\varphi_{Y'} = \varphi$. \qed \end{theorem} For future reference, we recall some standard facts about negative definite curves on a surface: \begin{lemma}\label{negdef} Let $X$ be a smooth projective surface and let $G_1, \dots , G_n$ be irreducible curves on $X$ such that the intersection matrix $(G_i\cdot G_j)$ is negative definite. Let $F$ be an effective divisor on $X$, not necessarily reduced or irreducible, and such that, for all $i$, $G_i$ is not a component of $F$. \begin{enumerate} \item[\rm{(i)}] Given $r_i \in \mathbb{R}$, if $(F + \sum_ir_iG_i) \cdot G_j = 0$ for all $j$, then $r_i \geq 0$ for all $i$, and, for every subset $I$ of $\{1, \dots, n\}$, if $\bigcup_{i\in I}G_i$ is a connected curve such that $F\cdot G_j \neq 0$ for some $j\in I$, then $r_i > 0$ for $i\in I$. 
\item[\rm{(ii)}] Given $s_i, t_i \in \mathbb{R}$, if $[F] + \sum_is_i[G_i] = \sum_it_i[G_i]$, then $F=0$ and $s_i = t_i$ for all $i$. \qed \end{enumerate} \end{lemma} The following general result is also well-known: \begin{proposition}\label{constructnef} Let $X$ be a smooth projective surface and let $G_1, \dots , G_n$ be irreducible curves on $X$ such that the intersection matrix $(G_i\cdot G_j)$ is negative definite. (We do not, however, assume that $\bigcup_iG_i$ is connected.) Then there exists a nef and big divisor $H$ on $X$ such that $H\cdot G_j = 0$ for all $j$ and, if $C$ is an irreducible curve such that $C \neq G_j$ for any $j$, then $H\cdot C >0$. In fact, the set of nef and big $\mathbb{R}$-divisors which are orthogonal to $\{G_1, \dots, G_n\}$ is a nonempty open subset of $\{G_1, \dots, G_n\}^\perp \otimes \mathbb{R}$. \end{proposition} \begin{proof} Fix an ample divisor $H_0$ on $X$. Since $(G_i\cdot G_j)$ is negative definite, there exist $r_i\in \mathbb{Q}$ such that $(\sum_ir_iG_i) \cdot G_j = -(H_0\cdot G_j)$ for every $j$, and hence $(H_0 + \sum_ir_iG_i) \cdot G_j = 0$. By Lemma~\ref{negdef}, $r_i > 0$ for every $i$. There exists an $N > 0$ such that $Nr_i \in \mathbb{Z}$ for all $i$. Then $H = N(H_0 + \sum_ir_iG_i)$ is an effective divisor satisfying $H\cdot G_j = 0$ for all $j$. If $C$ is an irreducible curve such that $C \neq G_j$ for any $j$, then $H_0 \cdot C > 0$ and $G_i \cdot C \geq 0$ for all $i$, hence $H\cdot C >0$. In particular $H$ is nef. Finally $H$ is big since $H^2 = NH\cdot(H_0 + \sum_ir_iG_i) = N(H\cdot H_0) > 0$, as $H_0$ is ample. To see the final statement, we apply the above argument to an ample $\mathbb{R}$-divisor $x$ (i.e.\ an element in the interior of the ample cone) to see that $x + \sum_ir_iG_i$ is a nef and big $\mathbb{R}$-divisor orthogonal to $\{G_1, \dots, G_n\}$. 
Since $x + \sum_ir_iG_i$ is simply the orthogonal projection $p$ of $x$ onto $\{G_1, \dots, G_n\}^\perp \otimes \mathbb{R}$, and $p\colon H^2(X; \mathbb{R}) \to \{G_1, \dots, G_n\}^\perp \otimes \mathbb{R}$ is an open map, the image of the interior of the ample cone of $X$ is then a nonempty open subset of $\{G_1, \dots, G_n\}^\perp \otimes \mathbb{R}$ consisting of nef and big $\mathbb{R}$-divisors orthogonal to $\{G_1, \dots, G_n\}$. \end{proof} Applying the above construction to $X=Y$ and $D_1, \dots, D_r$, we can find a nef and big divisor $H$ such that $H\cdot D_j = 0$ for all $j$ and such that, if $C$ is an irreducible curve such that $C \neq D_j$ for any $j$, then $H\cdot C >0$. \begin{proposition} Let $(Y,D)$ be a negative definite anticanonical pair and let $H$ be a nef and big divisor such that $H\cdot D_j = 0$ for all $j$ and such that, if $C$ is an irreducible curve such that $C \neq D_j$ for any $j$, then $H\cdot C >0$. Suppose in addition that $\mathcal{O}_Y(H)|D = \mathcal{O}_D$, i.e.\ that $\varphi_Y([H]) =1$. Then the $D_i$ are not fixed components of $|H|$. Hence, if $\overline{Y}$ denotes the normal complex surface obtained by contracting the $D_i$, then $H$ induces an ample divisor $\overline{H}$ on $\overline{Y}$ and $|3\overline{H}|$ defines an embedding of $\overline{Y}$ in $\mathbb{P}^N$ for some $N$. \end{proposition} \begin{proof} Consider the exact sequence $$0 \to \mathcal{O}_Y(H-D) \to \mathcal{O}_Y(H) \to \mathcal{O}_D \to 0.$$ Looking at the long exact cohomology sequence, as $$H^1(Y; \mathcal{O}_Y(H-D)) = H^1(Y;\mathcal{O}_Y(H) \otimes K_Y)$$ is Serre dual to $H^1(Y; \mathcal{O}_Y(-H)) = 0$, by Mumford vanishing, there exists a section of $\mathcal{O}_Y(H)$ which is nowhere vanishing on $D$, proving the first statement. The second follows from Nakai-Moishezon and the third from general results on linear series on anticanonical pairs \cite{Fried1}. 
\end{proof} \begin{remark} By the surjectivity of the period map (Theorem~\ref{surjper}), for any negative definite anticanonical pair $(Y,D)$ and any nef and big divisor $H$ on $Y$ such that $H\cdot D_j = 0$ for all $j$ and $H\cdot C > 0$ for all curves $C\neq D_i$, there exists a deformation of the pair $(Y,D)$ such that the divisor corresponding to $H$ has trivial restriction to $D$. More generally, one can consider deformations such that $\varphi_Y([H])$ is a torsion point of $\mathbb{G}_m$. In this case, if $\overline{Y}$ is the normal surface obtained by contracting $D$, then $\overline{Y}$ is projective. Note that this implies that the set of pairs $(Y,D)$ such that $\overline{Y}$ is projective is Zariski dense in the moduli space. However, as the set of torsion points is not dense in $\mathbb{G}_m$ in the classical topology, the set of projective surfaces $\overline{Y}$ will not be dense in the classical topology. \end{remark} \section{Roots and nodal classes} \begin{definition} Let $\mathcal{C}=\mathcal{C}(Y)$ be the positive cone of $Y$, i.e.\ $$\mathcal{C} = \{x\in H^2(Y; \mathbb{R}): x^2 >0\}.$$ Then $\mathcal{C}$ has two components, and exactly one of them, say $\mathcal{C}^+=\mathcal{C}^+(Y)$, contains the classes of ample divisors. We also define $$\mathcal{C}^+_D = \mathcal{C}^+_D(Y) = \{x\in \mathcal{C}^+: x \cdot [D_i] \geq 0 \text{ for all $i$}\}.$$ Let $\overline{\mathcal{A}}(Y)\subseteq \mathcal{C}^+ \subseteq H^2(Y; \mathbb{R})$ be the closure of the ample (nef, K\"ahler) cone of $Y$ in $\mathcal{C}^+$. By definition, $\overline{\mathcal{A}}(Y)$ is closed in $\mathcal{C}^+$ but not in general in $H^2(Y; \mathbb{R})$. \end{definition} \begin{definition} Let $\alpha \in H^2(Y; \mathbb{Z}), \alpha \neq 0$.
The \textsl{ oriented wall $W^\alpha$ associated to $\alpha$} is the set $\{x\in \mathcal{C}^+: x\cdot \alpha =0\}$, i.e.\ the intersection of $\mathcal{C}^+$ with the orthogonal space to $\alpha$, together with the preferred half space defined by $x\cdot \alpha \geq 0$. If $C$ is a curve on $Y$, we write $W^C$ for $W^{[C]}$. A standard result (see for example \cite{FriedmanMorgan}, II (1.8)) shows that, if $I$ is a subset of $H^2(Y; \mathbb{Z})$ and there exists an $N\in \mathbb{Z}^+$ such that $-N \leq \alpha^2 < 0$ for all $\alpha \in I$, then the collection of walls $\{W^\alpha: \alpha \in I\}$ is locally finite on $\mathcal{C}^+$. Finally, we say that $W^\alpha$ is a \textsl{face} of $\overline{\mathcal{A}}(Y)$ if $\partial \overline{\mathcal{A}}(Y)\cap W^\alpha$ contains an open subset of $W^\alpha$ and $x\cdot \alpha \geq 0$ for all $x\in \overline{\mathcal{A}}(Y)$. \end{definition} \begin{lemma} $\overline{\mathcal{A}}(Y)$ is the set of all $x\in \mathcal{C}^+$ such that $x\cdot [D_i]\geq 0$, $x\cdot [E] \geq 0$ for all exceptional curves $E$ and $x\cdot [C] \geq 0$ for all $-2$-curves $C$. Moreover, if $\alpha$ is the class associated to an exceptional or $-2$-curve, or $\alpha =[D_i]$ for some $i$ such that $D_i^2< 0$ then $W^\alpha$ is a face of $\overline{\mathcal{A}}(Y)$. If $\alpha, \beta$ are two such classes, $W^\alpha = W^\beta$ $\iff$ $\alpha =\beta$. \end{lemma} \begin{proof} For the first claim, it is enough to show that, if $G$ is an irreducible curve on $Y$ with $G^2 < 0$, then $G$ is either $D_i$ for some $i$, an exceptional curve or a $-2$-curve. This follows immediately from adjunction since, if $G\neq D_i$ for any $i$, then $G\cdot D \geq 0$ and $-2 \leq 2p_a(G) -2 = G^2 - G \cdot D < 0$, hence $p_a(G) =0$ and either $G^2 = -2$, $G \cdot D =0$, or $G^2 = G \cdot D =-1$. 
The last two statements follow from the openness statement in Proposition~\ref{constructnef} and the fact that no two distinct classes of the types listed above are multiples of each other. \end{proof} As an alternate characterization of the classes in the previous lemma, we have: \begin{lemma}\label{numeric} Let $H$ be a nef divisor such that $H\cdot D >0$. \begin{enumerate} \item[\rm(i)] If $\alpha \in H^2(Y; \mathbb{Z})$ with $\alpha ^2 = \alpha \cdot [K_Y] = -1$, then $\alpha \cdot [H] \geq 0$ $\iff$ $\alpha$ is the class of an effective curve. In particular, the wall $W^\alpha$ does not pass through the interior of $\overline{\mathcal{A}}(Y)$ (cf.\ \cite{FriedmanMorgan}, p.\ 332 for a more general statement). \item[\rm(ii)] If $\beta \in H^2(Y; \mathbb{Z})$ with $\beta ^2= -2$, $\beta\cdot [D_i] = 0$ for all $i$, $\beta \cdot [H] \geq 0$, and $\varphi_Y(\beta) =1$, then $\pm \beta$ is the class of an effective curve, and $\beta$ is effective if $\beta \cdot [H] > 0$. \end{enumerate} Hence the ample cone $\overline{\mathcal{A}}(Y)$ is the set of all $x\in \mathcal{C}^+$ such that $x\cdot [D_i]\geq 0$ and $x\cdot\alpha \geq 0$ for all classes $\alpha$ and $\beta$ as described in {\rm(i)} and {\rm(ii)} above, where in case {\rm(ii)} we assume in addition that $\beta$ is effective, or equivalently that $\beta \cdot [H] > 0$ for some nef divisor $H$. \end{lemma} \begin{proof} (i) Clearly, if $\alpha$ is the class of an effective curve, then $\alpha \cdot [H] \geq 0$ since $H$ is nef. Conversely, assume that $\alpha ^2 = \alpha \cdot [K_Y] = -1$ and that $\alpha \cdot [H] \geq 0$. By Riemann-Roch, $\chi(L_\alpha) = 1$. Hence either $h^0(L_\alpha) > 0$ or $h^2(L_\alpha) > 0$. But $h^2(L_\alpha) = h^0(L_\alpha^{-1}\otimes K_Y)$ and $[H] \cdot (-\alpha - [D] ) < 0$, by assumption. Thus $h^0(L_\alpha) > 0$ and hence $\alpha$ is the class of an effective curve. 
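Here, since $\chi(\mathcal{O}_Y) = 1$, the Riemann-Roch computation reads $$\chi(L_\alpha) = \chi(\mathcal{O}_Y) + \tfrac{1}{2}\left(\alpha^2 - \alpha\cdot [K_Y]\right) = 1 + \tfrac{1}{2}\left(-1 - (-1)\right) = 1.$$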
\smallskip \noindent (ii) As in (i), $H \cdot (-\beta - [D] ) < 0$, and hence $h^0(L_\beta^{-1}\otimes K_Y) = 0$. Thus $h^2(L_\beta) =0$. Suppose that $h^0(L_\beta) = 0$. Then, by Riemann-Roch, $\chi(L_\beta) = 0$ and hence $h^1(L_\beta) = 0$. Hence $h^1(L_\beta^{-1}\otimes K_Y) = 0$. Since $\varphi_Y(\beta) =1$, $L_\beta^{\pm 1}|D = \mathcal{O}_D$. Thus there is an exact sequence $$0 \to L_\beta^{-1}\otimes \mathcal{O}_Y(-D) \to L_\beta^{-1}\to\mathcal{O}_D \to 0.$$ Since $H^1(L_\beta^{-1}\otimes K_Y) =H^1(L_\beta^{-1}\otimes \mathcal{O}_Y(-D)) =0$, the map $H^0(L_\beta^{-1})\to H^0(\mathcal{O}_D)$ is surjective and hence $-\beta$ is the class of an effective curve. \end{proof} It is natural to make the following definition: \begin{definition} Let $\alpha \in H^2(Y; \mathbb{Z})$. Then $\alpha$ is a \textsl{numerical exceptional curve} if $\alpha ^2 = \alpha \cdot [K_Y] = -1$. The numerical exceptional curve $\alpha$ is \textsl{effective} if $h^0(L_\alpha) > 0$, i.e.\ if $\alpha = [G]$, where $G$ is an effective curve. \end{definition} A minor variation of the proof of Lemma~\ref{numeric} shows: \begin{lemma}\label{remarkafternumeric} Let $H$ be a nef and big divisor such that $H\cdot G > 0$ for every irreducible curve $G$ not equal to $D_i$ for any $i$, and let $\alpha$ be a numerical exceptional curve. \begin{enumerate} \item[\rm{(i)}] Suppose that $[H]\cdot\alpha \geq 0$. Then either $[H]\cdot\alpha > 0$ and $\alpha$ is effective or $H\cdot D = [H]\cdot\alpha =0$ and $\alpha$ is an integral linear combination of the $[D_i]$. \item[\rm{(ii)}] If $(Y,D)$ is negative definite and $\alpha$ is an integral linear combination of the $[D_i]$, then either some component $D_i$ is a smooth rational curve of self-intersection $-1$ or $K_Y^2=-1$, $\alpha = [K_Y]$, and hence $\alpha$ is not effective. \item[\rm{(iii)}] If no component $D_i$ is a smooth rational curve of self-intersection $-1$, then $\alpha$ is effective $\iff$ $[H] \cdot \alpha > 0$.
\end{enumerate} \end{lemma} \begin{proof} (i) As in the proof of Lemma~\ref{numeric}, either $\alpha$ or $-\alpha -[D]$ is the class of an effective divisor. If $-\alpha -[D]$ is the class of an effective divisor, then $0\leq [H]\cdot (-\alpha -[D]) \leq 0$, so that $[H]\cdot \alpha=H \cdot D = 0$. In particular $(Y,D)$ is negative definite. Moreover, if $G$ is an effective divisor with $[G] = -\alpha -[D]$, then every component of $G$ is equal to some $D_i$, hence $[G]$ and therefore $\alpha = -[G]-[D]$ are integral linear combinations of the $[D_i]$. \medskip \noindent (ii) Suppose that $\alpha$ is an integral linear combination of the $[D_i]$ but that no $D_i$ is a smooth rational curve of self-intersection $-1$. We shall show that $K_Y^2=-1$ and $\alpha = [K_Y]$. First suppose that $K_Y^2=-1$. Then $\bigoplus_i\mathbb{Z}\cdot [D_i] = \mathbb{Z}\cdot [K_Y] \oplus L$, where $L$, the orthogonal complement of $[K_Y]$ in $\bigoplus_i\mathbb{Z}\cdot [D_i]$, is even and negative definite. Thus $\alpha = a[K_Y] + \beta$, with either $\beta = 0$ or $\beta^2 \leq -2$, and $\alpha ^2 = -a^2 + \beta^2$. Hence, if $\alpha ^2 = \alpha \cdot [K_Y] = -1$, the only possibility is $\beta = 0$ and $a=1$. If $K_Y^2 < -1$, $D$ is reducible, and no $D_i$ is a smooth rational curve of self-intersection $-1$, then $D_i^2\leq -2$ for all $i$, and either $D_i^2 \leq -4$ for some $i$ or there exist $i\neq j$ such that $D_i^2 = D_j^2=-3$. In this case, it is easy to check that, for all integers $a_i$ such that $a_i \neq 0$ for some $i$, $(\sum_ia_iD_i)^2 < -1$. This contradicts $\alpha^2 =-1$. (If $K_Y^2 < -1$ and $D$ is irreducible, then $\alpha = a[D]$ for some $a\in \mathbb{Z}$, and $\alpha^2 = a^2K_Y^2 \neq -1$, again a contradiction.) \medskip \noindent (iii) If $[H]\cdot \alpha > 0$, then $\alpha$ is effective by (i). If $[H]\cdot \alpha< 0$, then clearly $\alpha$ is not effective. Suppose that $[H]\cdot \alpha =0$; we must show that, again, $\alpha$ is not effective. Suppose that $\alpha=[G]$ is effective.
By the hypothesis on $H$, every component of $G$ is a $D_i$ for some $i$, so that $\alpha = \sum_ia_i[D_i]$ for some $a_i \in \mathbb{Z}$, $a_i \geq 0$. Let $I\subseteq \{1, \dots, r\}$ be the set of $i$ such that $a_i > 0$. Then $H\cdot D_i = 0$ for all $i\in I$. If $I = \{1, \dots, r\}$, then $(Y,D)$ is negative definite and we are done by (ii). Otherwise, $\bigcup_{i\in I}D_i$ is a union of chains of curves whose components $D_i$ satisfy $D_i ^2 \leq -2$. It is then easy to check that $\alpha^2 < -1$ in this case, a contradiction. Hence $\alpha$ is not effective. \end{proof} \begin{definition} Let $Y_t$ be a generic small deformation of $Y$, and identify $H^2( Y_t ;\mathbb{R})$ with $H^2( Y ;\mathbb{R})$. Define $\overline{\mathcal{A}}_{\text{\rm{gen}}}= \overline{\mathcal{A}}_{\text{\rm{gen}}}(Y)$ to be the ample cone $\overline{\mathcal{A}}(Y_t)$ of $Y_t$, viewed as a subset of $H^2( Y ;\mathbb{R})$. \end{definition} \begin{lemma}\label{describeA} With notation as above, \begin{enumerate} \item[\rm(i)] If there do not exist any $-2$-curves on $Y$, then $\overline{\mathcal{A}}(Y) = \overline{\mathcal{A}}_{\text{\rm{gen}}}$. More generally, $\overline{\mathcal{A}}_{\text{\rm{gen}}}$ is the set of all $x\in \mathcal{C}^+$ such that $x\cdot [D_i] \geq 0$ and $x\cdot \alpha \geq 0$ for all effective numerical exceptional curves $\alpha$. In particular, $$\overline{\mathcal{A}}(Y) \subseteq \overline{\mathcal{A}}_{\text{\rm{gen}}}.$$ \item[\rm(ii)] $\overline{\mathcal{A}}(Y) = \{ x\in \overline{\mathcal{A}}_{\text{\rm{gen}}}: x\cdot [C] \geq 0 \text{ for all $-2$-curves $C$} \}$. \end{enumerate} \end{lemma} \begin{proof} Let $Y$ be a surface with no $-2$-curves (such surfaces exist and are generic by the surjectivity of the period map, Theorem~\ref{surjper}). Fix a nef divisor $H$ on $Y$ with $H\cdot D >0$.
Then $\overline{\mathcal{A}}(Y)$ is the set of all $x\in \mathcal{C}^+$ such that $x\cdot [D_i] \geq 0$ and $x\cdot [E] \geq 0$ for all exceptional curves $E$, and this last condition is equivalent to $x\cdot \alpha \geq 0$ for all $\alpha\in H^2(Y; \mathbb{Z})$ such that $\alpha^2 =\alpha\cdot [K_Y] =-1$ and $\alpha \cdot [H] \geq 0$, by Lemma~\ref{numeric}. This condition is independent of the choice of $Y$, since we can choose the divisor $H$ to be ample and to vary in a small deformation; hence the first part of (i) follows, and the remaining statements are clear. \end{proof} In fact, the argument above shows: \begin{lemma}\label{definv} The set of effective numerical exceptional curves and the set $\overline{\mathcal{A}}_{\text{\rm{gen}}}$ are locally constant, and hence are invariant in a global deformation with trivial monodromy under the induced identifications. \qed \end{lemma} \begin{lemma}\label{reflect} If $C$ is a $-2$-curve on $Y$, then the wall $W^C$ meets the interior of $\overline{\mathcal{A}}_{\text{\rm{gen}}}$, and in fact $r_C( \overline{\mathcal{A}}_{\text{\rm{gen}}}) = \overline{\mathcal{A}}_{\text{\rm{gen}}}$, where $r_C\colon H^2(Y; \mathbb{R}) \to H^2(Y; \mathbb{R})$ is reflection in the class $[C]$. Hence $\overline{\mathcal{A}}(Y)$ is a fundamental domain for the action of the group $\mathsf{W}({\Delta_Y})$ on $\overline{\mathcal{A}}_{\text{\rm{gen}}}$, where $\mathsf{W}({\Delta_Y})$ is the group generated by the reflections in the set $\Delta_Y$ of classes of $-2$-curves on $Y$. \end{lemma} \begin{proof} Clearly, if $r_C( \overline{\mathcal{A}}_{\text{\rm{gen}}}) = \overline{\mathcal{A}}_{\text{\rm{gen}}}$, then $W^C$ meets the interior of $\overline{\mathcal{A}}_{\text{\rm{gen}}}$. To see that $r_C( \overline{\mathcal{A}}_{\text{\rm{gen}}}) = \overline{\mathcal{A}}_{\text{\rm{gen}}}$, assume first more generally that $\beta\in \Lambda$ is any class with $\beta^2 = -2$, and let $r_\beta$ be the corresponding reflection.
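Explicitly, since $\beta^2 = -2$, the reflection is given by $$r_\beta(x) = x + (x\cdot \beta)\beta, \qquad x\in H^2(Y; \mathbb{R}),$$ an integral isometry fixing $\beta^\perp$ pointwise and sending $\beta$ to $-\beta$.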
Then $r_\beta$ permutes the set of $\alpha \in H^2(Y; \mathbb{Z})$ such that $\alpha^2 =\alpha\cdot [K_Y] =-1$, but does not necessarily preserve the condition that $\alpha$ is effective, i.e.\ that $\alpha \cdot [H] \geq 0$ for some nef divisor $H$ on $Y$ with $H\cdot D >0$. However, for $\beta = [C]$, there exists by Proposition~\ref{constructnef} a nef and big divisor $H_0$ such that $H_0 \cdot C = 0$ and $H_0\cdot D > 0$. Hence $[H_0]$ is invariant under $r_C$, and so $r_C$ permutes the set of $\alpha \in H^2(Y; \mathbb{Z})$ such that $\alpha^2 =\alpha\cdot [K_Y] =-1$ and $\alpha \cdot [H_0] \geq 0$. Thus $r_C$ permutes the set of effective numerical exceptional curves and hence the faces of $\overline{\mathcal{A}}_{\text{\rm{gen}}}$, so that $r_C( \overline{\mathcal{A}}_{\text{\rm{gen}}}) = \overline{\mathcal{A}}_{\text{\rm{gen}}}$. Since $\overline{\mathcal{A}}(Y) \subseteq \overline{\mathcal{A}}_{\text{\rm{gen}}}$ is given by (ii) of Lemma~\ref{describeA}, the final statement is then a general result in the theory of reflection groups (cf.\ \cite{Bour}, V \S3). \end{proof} \begin{remark}\label{monodromyinvar} (i) The argument for the first part of Lemma~\ref{reflect} essentially boils down to the following: let $\overline{Y}$ be the normal surface obtained by contracting $C$. Then the reflection $r_C$ is the monodromy associated to a generic smoothing of the singular surface $\overline{Y}$, and the cone $\overline{\mathcal{A}}_{\text{\rm{gen}}}$ is invariant under monodromy. \smallskip \noindent (ii) If $E$ is an exceptional curve, then $W^E$ is a face of $\overline{\mathcal{A}}(Y)$. For a generic $Y$ (i.e.\ no $-2$-curves), Lemma~\ref{reflect} then says that the set of exceptional curves on $Y$ is invariant under the reflection group generated by the reflections in all classes of square $-2$ which become the classes of a $-2$-curve under some specialization. A somewhat more involved statement holds in the nongeneric case.
\end{remark} \begin{lemma}\label{permapinvar} With $\mathsf{W}({\Delta_Y})$ as in Definition~\ref{defcurves}, for all $w\in \mathsf{W}({\Delta_Y})$ and all $\alpha \in \Lambda$, $\varphi_{Y}(w(\alpha)) = \varphi_{Y}(\alpha)$. \end{lemma} \begin{proof} This is clear since $\varphi_Y([C]) =1$, hence $\varphi_{Y}(r_C(\alpha)) = \varphi_{Y}(\alpha)$ for all $\alpha \in \Lambda$. \end{proof} \begin{lemma}\label{Weyltrans} Suppose that $C=\sum_ia_iC_i$, where the $C_i$ are $-2$-curves, $a_i\in \mathbb{Z}$, $C^2 = -2$, the support of $C$ is connected, and $(C_i\cdot C_j)$ is negative definite. Then there exists an element $w$ in the group generated by reflections in the $[C_i]$ such that $w([C]) = [C_i]$ for some $i$. \end{lemma} \begin{proof} This follows from the well-known fact that, if $R$ is an irreducible root system such that all roots have the same length, then the Weyl group $\mathsf{W}(R)$ acts transitively on the set of roots. \end{proof} \begin{theorem}\label{mainprop} Let $\beta \in \Lambda$ with $\beta^2 = -2$. Then the following are equivalent: \begin{enumerate} \item[\rm(i)] Let $Y_1$ be a deformation of $Y$ with trivial monodromy such that $\varphi_{Y_1}(\beta) = 1$. Then, with $\mathsf{W}({\Delta_{Y_1}})$ as in Definition~\ref{defcurves}, there exists $w\in \mathsf{W}({\Delta_{Y_1}})$ such that $w(\beta)=[C]$, where $C$ is a $-2$-curve on $Y_1$. In particular, if $Y_1$ is generic subject to the condition that $\varphi_{Y_1}(\beta) = 1$ (i.e.\ if $\operatorname{Ker} \varphi_{Y_1} = \mathbb{Z} \cdot \beta$), then $\pm \beta = [C]$ for a $-2$-curve $C$. \item[\rm(ii)] The wall $W^\beta$ meets the interior of $\overline{\mathcal{A}}_{\text{\rm{gen}}}$. \item[\rm(iii)] If $r_\beta$ is reflection in the class $\beta$, then $r_\beta(\overline{\mathcal{A}}_{\text{\rm{gen}}}) = \overline{\mathcal{A}}_{\text{\rm{gen}}}$.
\end{enumerate} \end{theorem} \begin{proof} Lemma~\ref{reflect} implies that (i) $\implies$ (iii) in case $Y=Y_1$ and $\beta = [C]$ where $C$ is a $-2$-curve. The case where $w(\beta) = [C]$ follows easily from this since, for all $w\in \mathsf{W}({\Delta_{Y_1}})$, $w\circ r_\beta \circ w^{-1} = r_{w(\beta)}$. Lemma~\ref{definv} then handles the case where $Y_1$ is replaced by a general deformation $Y$. Also, clearly (iii) $\implies$ (ii). So it is enough to show that (ii) $\implies$ (i). In fact, by Lemma~\ref{Weyltrans}, it is enough to show that, if $Y$ is any surface such that $\varphi_Y(\beta) = 1$ and $W^\beta$ meets the interior of $\overline{\mathcal{A}}_{\text{\rm{gen}}}$, then there exists a $w\in \mathsf{W}({\Delta_Y})$ such that $w(\beta) = [\sum_ia_iC_i]$ where $a_i\in \mathbb{Z}^+$, the $C_i$ are curves disjoint from $D$, and $\bigcup_iC_i$ is connected. By hypothesis, there exists an $x$ in the interior of $\overline{\mathcal{A}}_{\text{\rm{gen}}}$ such that $x\cdot \beta =0$. In particular, $x\cdot [D_i] >0$ for all $i$. We can assume that $x=[H]$ is the class of a divisor $H$. After replacing $x$ by $w(x)$ and $\beta$ by $w(\beta)$ for some $w\in \mathsf{W}({\Delta_Y})$, we can assume that $x$ (and hence $H$) lies in $\overline{\mathcal{A}}(Y)$, so that $H$ is a nef and big divisor with $H\cdot D_i > 0$ for all $i$, and we still have $\varphi_Y(\beta) = 1$ by Lemma~\ref{permapinvar}. By Lemma~\ref{numeric}, possibly after replacing $\beta$ by $-\beta$, $\beta = [\sum_ia_iC_i]$ where the $C_i$ are irreducible curves and $a_i \in \mathbb{Z}^+$. Since $\beta\cdot [H] =\sum_ia_i(C_i\cdot H)=0$, $C_i\cdot H \geq 0$, and $D_j\cdot H > 0$, it follows that $C_i\cdot H = 0$ for all $i$ and that no $C_i$ is equal to $D_j$ for any $j$. Hence the $C_i$ are curves meeting each $D_j$ in at most finitely many points and $\sum_ia_i(C_i\cdot D_j)=0$, so that $C_i\cap D_j =\emptyset$. Finally each $(C_i)^2 < 0$ by Hodge index, and so each $C_i$ is a $-2$-curve.
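Indeed, $C_i\cdot [K_Y] = -C_i\cdot D = 0$, so adjunction gives $2p_a(C_i) - 2 = C_i^2 + C_i\cdot [K_Y] = C_i^2$; since $C_i^2 < 0$, the only possibility is $p_a(C_i) = 0$ and $C_i^2 = -2$.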
Moreover the $C_i$ span a negative definite lattice, and in particular their classes are independent. From this, the statement about the connectedness of $\bigcup_iC_i$ is clear. \end{proof} \begin{definition} Let $R=R_Y$ be the set of all $\beta \in \Lambda$ such that $\beta ^2 = -2$ and such that there exists some deformation of $Y$ for which $\beta$ becomes the class of a $-2$-curve. Following \cite{GHK}, we call $R$ the set of \textsl{Looijenga roots} (or briefly \textsl{roots}) of $Y$. Note that $R$ only depends on the deformation type of $Y$. The definition of $R$ is slightly ill-posed, since we have not specified an identification of the cohomologies of the fibers along the deformation. In particular, if $\beta = [C]$ is a $-2$-curve on $Y$, then by (i) of Remark~\ref{monodromyinvar}, if $Y'$ is a nearby deformation of $Y$, then a general smoothing of the ordinary double point on the contraction of $C$ on $Y$ has monodromy which sends $[C]$ to $-[C]$, and hence $-\beta \in R$ as well. To avoid this issue, it is simpler to define $R$ to be the set of $\beta \in \Lambda$, $\beta^2=-2$, which satisfy either of the equivalent conditions (ii), (iii) of Theorem~\ref{mainprop}. Given $Y$, let $\Delta_Y$ be the set of classes of $-2$-curves on $Y$ and $\mathsf{W}({\Delta_Y})$ the reflection group generated by $\Delta_Y$. Finally set $R^{\text{\rm{nod}}}$, the set of \textsl{nodal classes}, to be $\mathsf{W}({\Delta_Y})\cdot \Delta_Y$. Then $R^{\text{\rm{nod}}} \subseteq R$. \end{definition} \begin{corollary}\label{preserveamp} \text{\rm{(i)}} If $f\colon H^2(Y; \mathbb{Z}) \to H^2(Y; \mathbb{Z})$ is an integral isometry preserving the classes $[D_i]$ such that $f(\overline{\mathcal{A}}_{\text{\rm{gen}}}) = \overline{\mathcal{A}}_{\text{\rm{gen}}}$, then $f(R) = R$. 
\smallskip \noindent \text{\rm{(ii)}} If $\mathsf{W}(R)$ is the reflection group generated by reflections in the elements of $R$, then $\mathsf{W}(R) \cdot R = R$ and $w(\overline{\mathcal{A}}_{\text{\rm{gen}}}) = \overline{\mathcal{A}}_{\text{\rm{gen}}}$ for all $w\in \mathsf{W}(R)$. \qed \end{corollary} \begin{remark} A result similar to Theorem~\ref{mainprop} classifies the elements of $H^2(Y;\mathbb{Z})$ which are represented by the class of a smoothly embedded $2$-sphere of self-inter\-section $-2$ in terms of the ``super $P$-cell" of \cite{FriedmanMorgan}. \end{remark} In \cite{Looij}, for the case where the length $r(D) \leq 5$, Looijenga defines a subset $R_L$ of $\Lambda$ by starting with a particular configuration $B$ of elements of square $-2$ (a \textsl{root basis} in the terminology of \cite{Looij}), and setting $R_L = \mathsf{W}(B)\cdot B$, where $\mathsf{W}(B)$ is the reflection group generated by $B$. In fact, the set $R_L$ is just the set $R$ of Looijenga roots: \begin{proposition} In the above notation, $R_L = R$. \end{proposition} \begin{proof} It is easy to see from the construction of \cite[I \S2]{Looij} that $B \subseteq R$. Hence $R_L \subseteq R$. Conversely, if $\alpha \in R$, then, by (ii) of Corollary~\ref{preserveamp}, $r_\alpha(\overline{\mathcal{A}}_{\text{\rm{gen}}}) = \overline{\mathcal{A}}_{\text{\rm{gen}}}$. It then follows from \cite[Proposition I (4.7)]{Looij} that $r_\alpha \in \mathsf{W}(B)$. By a general result in the theory of reflection groups \cite[V \S3.2, Thm.\ 1(iv)]{Bour}, $r_\alpha = r_\beta$ for some $\beta \in R_L$. Thus $\alpha =\pm \beta$, so that $\alpha \in R_L$. Hence $R\subseteq R_L$, and therefore $R_L = R$. \end{proof} \begin{example}\label{irredex} Let $(Y,D)$ be the blowup of $\mathbb{P}^2$ at $N \geq 10$ points on an irreducible nodal cubic curve. We let $h$ be the pullback of the class of a line on $\mathbb{P}^2$ and $e_1, \dots, e_N$ be the classes of the exceptional curves. 
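Thus $h^2 = 1$, $e_i^2 = -1$, $h\cdot e_i = 0$, $e_i\cdot e_j = 0$ for $i\neq j$, and $$[K_Y] = -3h + \sum_{i=1}^Ne_i.$$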
\smallskip \noindent (i) Let $\alpha = -3h + \sum_{i=1}^{10}e_i$. Then $\alpha^2 = \alpha \cdot [K_Y] = -1$, so that $\alpha$ is a numerical exceptional curve. But there exists a nef and big divisor $H$ (for example $h$) such that $\alpha \cdot [H] < 0$, so that $\alpha$ is not effective. Hence, $\alpha \cdot x \leq 0$ for all $x\in \overline{\mathcal{A}}(Y)=\overline{\mathcal{A}}_{\text{\rm{gen}}}$, since $W^\alpha$ does not pass through the interior of $\overline{\mathcal{A}}_{\text{\rm{gen}}}$. Note that $W^\alpha$ is never a face of $\overline{\mathcal{A}}_{\text{\rm{gen}}}$. For $N=10$, $W^{-\alpha}$ is a face of $\overline{\mathcal{A}}_{\text{\rm{gen}}}$, but this is no longer the case for $N \geq 11$. Thus the condition $\alpha \cdot [H] \geq 0$ for some $H$ such that $H\cdot D > 0$ is necessary for $\alpha$ to be effective. More generally, let $f = 3h -\sum_{i=1}^9e_i$ and set $\alpha = kf + e_{10}$ (the case above corresponds to $k=-1$). As above, $\alpha$ is a numerical exceptional curve. For $k\leq -1$, $h\cdot \alpha < 0$, and hence $\alpha$ is not effective. For $k\geq 1$, $\alpha$ is effective but it is not the class of an exceptional curve: for all $x\in \overline{\mathcal{A}}_{\text{\rm{gen}}}$, $x\cdot f > 0$, and $x\cdot e_{10}\geq 0$. Hence $x\cdot \alpha > 0$ for all $x \in \overline{\mathcal{A}}_{\text{\rm{gen}}}$. Thus $W^\alpha$ is not a face of $\overline{\mathcal{A}}_{\text{\rm{gen}}}$ and so $\alpha$ is not the class of an exceptional curve. \smallskip \noindent (ii) With $\alpha$ any of the classes as above, suppose that $N \geq 11$ and $k\neq 0$ and set $\beta = \alpha -e_{11}$. Then $\beta^2 =-2$ and $\beta \cdot [K_Y] = 0$. 
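Indeed, $\beta^2 = \alpha^2 - 2(\alpha\cdot e_{11}) + e_{11}^2 = -1 - 0 - 1 = -2$, and $\beta\cdot [K_Y] = \alpha\cdot [K_Y] - e_{11}\cdot [K_Y] = -1 - (-1) = 0$.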
However, $$r_\beta(e_{11}) = e_{11} + (e_{11}\cdot \beta)\beta = \alpha.$$ Since $W^{e_{11}}$ is a face of $\overline{\mathcal{A}}_{\text{\rm{gen}}}$ and $W^\alpha$ is not a face of $\overline{\mathcal{A}}_{\text{\rm{gen}}}$, $r_\beta( \overline{\mathcal{A}}_{\text{\rm{gen}}})\neq \overline{\mathcal{A}}_{\text{\rm{gen}}}$. Hence $\beta$ does not satisfy any of the equivalent conditions of Theorem~\ref{mainprop}, so that $\beta \notin R$. \end{example} \begin{remark}\label{bestposs} In the situation of the example above, it is well-known that if $D$ is irreducible, $N \leq 9$ (i.e.\ $D^2\geq 0$), and there are no $-2$-curves on $Y$, then every numerical exceptional curve is the class of an exceptional curve, so (i) above is best possible. A generalization is given in Proposition~\ref{nonneg} below. We shall show in Proposition~\ref{minusone} that the example in (ii) is best possible as well. \end{remark} The numerical exceptional curves given in (i) of Example~\ref{irredex} were known to Du Val. In fact, he showed that they are essentially the only numerical exceptional curves in case $Y$ is the blowup of $\mathbb{P}^2$ at $10$ points (\cite{duV2}, pp.\ 46--47): \begin{proposition} Suppose that $(Y,D)$ is the blowup of $\mathbb{P}^2$ at $10$ points lying on an irreducible cubic, that $Y$ is generic in the sense that there are no $-2$-curves on $Y$, and that $\alpha$ is a numerical exceptional curve. Then there exists an exceptional curve $E$ on $Y$ and an integer $k$ such that $\alpha$ is the class of $k(D + E) + E$. \end{proposition} \begin{proof} Suppose that $\alpha$ is a numerical exceptional curve on $Y$. Then, since $K_Y^2=-1$, $\lambda = \alpha +[D] = \alpha -[K_Y]$ satisfies: $\lambda^2 = \lambda \cdot \alpha = \lambda \cdot [K_Y] = 0$. In particular, $\lambda \in \Lambda$. Conversely, given an isotropic vector $\lambda \in \Lambda$, if we set $\alpha = \lambda + [K_Y]$, then $\alpha$ is a numerical exceptional curve.
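Indeed, since $\lambda \in \Lambda$ is orthogonal to $[K_Y]$, $$\alpha^2 = \lambda^2 + 2(\lambda\cdot [K_Y]) + K_Y^2 = 0 + 0 - 1 = -1, \qquad \alpha\cdot [K_Y] = \lambda\cdot [K_Y] + K_Y^2 = -1.$$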
Any isotropic vector $\lambda \in \Lambda$ can be uniquely written as $n\lambda_0$, where $n \in \mathbb{Z}$ and $\lambda_0$ is primitive and lies in $\overline{\mathcal{C}^+}$. Note that $H^2(Y; \mathbb{Z}) = \mathbb{Z}[K_Y] \oplus \Lambda$ and that $\Lambda = U \oplus (-E_8)$ (both sums orthogonal). An easy exercise shows that, if $\operatorname{Aut}^+(\Lambda)$ is the group of integral isometries $A$ of $\Lambda$ such that $A(\mathcal{C}^+\cap \Lambda) = \mathcal{C}^+\cap \Lambda$, i.e.\ $A$ has real spinor norm equal to $1$, then every $A \in \operatorname{Aut}^+(\Lambda)$ extends uniquely to an integral isometry of $H^2(Y; \mathbb{Z})$ fixing $[K_Y]$ and hence $[D]$, and moreover that $\operatorname{Aut}^+(\Lambda)$ acts transitively on the set of (nonzero) primitive isotropic vectors in $\overline{\mathcal{C}^+} \cap \Lambda$. Hence there exists an $A \in \operatorname{Aut}^+(\Lambda)$ such that $A(\lambda_0) = f$, in the notation of Example~\ref{irredex}. If we continue to denote by $A$ the extension of $A$ to an isometry of $H^2(Y; \mathbb{Z})$, then $A(\alpha) = nf + [K_Y] = (n-1) f + e_{10}$, since $f = -[K_Y] + e_{10}$. It follows that $\alpha = (n-1)\lambda_0 + A^{-1}(e_{10})$. Using Proposition~\ref{minusone} below, $A^{-1}$ preserves the walls of the ample cone of $Y$, and thus $A^{-1}(e_{10}) =e$ is the class of an exceptional curve $E$, and $\lambda_0 =A^{-1}(f) = A^{-1}([D] + e_{10}) = [D] + E$. Hence, setting $k=n-1$, $\alpha$ is the class of $k(D + E) + E$ as claimed. \end{proof} The proof above shows the following: \begin{corollary} Let $(Y,D)$ be the blowup of $\mathbb{P}^2$ at $10$ points lying on an irreducible cubic and such that there are no $-2$-curves on $Y$, let $\alpha$ be a numerical exceptional curve on $Y$, and let $\lambda = \alpha -[K_Y]$. Then: \begin{enumerate} \item[\rm{(i)}] $\alpha$ is effective $\iff$ $\lambda \in (\overline{\mathcal{C}^+}-\{0\}) \cap \Lambda$. 
\item[\rm{(ii)}] $\alpha$ is not effective $\iff$ $\lambda \in (-\overline{\mathcal{C}^+}) \cap \Lambda$. \item[\rm{(iii)}] $\alpha$ is the class of an exceptional curve $\iff$ $\lambda$ is a primitive isotropic vector in $\overline{\mathcal{C}^+} \cap \Lambda$. Thus there is a bijection from the set of exceptional curves on $Y$ to the set of primitive isotropic vectors in $\overline{\mathcal{C}^+} \cap \Lambda$. \qed \end{enumerate} \end{corollary} \begin{remark} In the above situation, let $\mathsf{W}$ be the group generated by the reflections in the classes $e_1-e_2, \dots, e_9-e_{10}, h-e_1-e_2-e_3$, which are easily seen to be Looijenga roots. A classical argument (usually called Noether's inequality) shows that, if $\lambda_0$ is a primitive integral isotropic vector in $\Lambda$ lying in $\overline{\mathcal{C}^+}$, then there exists $w\in \mathsf{W}$ such that $w(\lambda_0) = f = 3h -\sum_{i=1}^9 e_i$, in the notation of Example~\ref{irredex}. Thus, $\mathsf{W}$ acts transitively on the set of such vectors. Using standard results about the affine Weyl group of $E_8$, it is then easy to see that $\mathsf{W} = \operatorname{Aut}^+(\Lambda)$. This was already noted by Du Val in \cite{duV2}. \end{remark} \section{Roots and the ample cone} By Corollary~\ref{preserveamp}, if $f\colon H^2(Y; \mathbb{Z}) \to H^2(Y; \mathbb{Z})$ is an integral isometry preserving the classes $[D_i]$ such that $f(\overline{\mathcal{A}}_{\text{\rm{gen}}}) = \overline{\mathcal{A}}_{\text{\rm{gen}}}$, then $f(R) = R$. In this section, we find criteria for when the converse holds. We begin with the following: \begin{lemma}\label{containopen} Let $f\colon H^2(Y; \mathbb{Z}) \to H^2(Y; \mathbb{Z})$ be an integral isometry preserving $\mathcal{C}^+$ and the classes $[D_i]$. If $f(\overline{\mathcal{A}}_{\text{\rm{gen}}}) \cap \overline{\mathcal{A}}_{\text{\rm{gen}}}$ contains an open set, then $f(\overline{\mathcal{A}}_{\text{\rm{gen}}}) = \overline{\mathcal{A}}_{\text{\rm{gen}}}$.
\end{lemma} \begin{proof} Choosing $x\in f(\overline{\mathcal{A}}_{\text{\rm{gen}}}) \cap \overline{\mathcal{A}}_{\text{\rm{gen}}}$ corresponding to an ample divisor, it is easy to see that $f(\overline{\mathcal{A}}_{\text{\rm{gen}}})$ and $\overline{\mathcal{A}}_{\text{\rm{gen}}}$ have the same set of walls, hence are equal. \end{proof} Next we deal with the case where one component of $D$ is a smooth rational curve of self-intersection $-1$. \begin{lemma}\label{excepcomp} Suppose that $D$ is reducible and that $D_r^2=-1$. Let $(\overline{Y}, \overline{D})$ be the anticanonical pair obtained by contracting $D_r$. Then any isometry $f$ of $H^2(Y;\mathbb{Z})$ preserving the classes $[D_i]$, $1\leq i\leq r$, defines an isometry $\bar{f}$ of $H^2(\overline{Y};\mathbb{Z})$ preserving the classes $[\overline{D}_i]$, $1\leq i\leq r-1$, and conversely. Moreover, $f$ preserves $\overline{\mathcal{A}}_{\text{\rm{gen}}}(Y)$ $\iff$ $\bar{f}$ preserves $\overline{\mathcal{A}}_{\text{\rm{gen}}}(\overline{Y})$, and $R_Y$ is naturally identified with the roots $R_{\overline{Y}}$ of $\overline{Y}$. \end{lemma} \begin{proof} The first statement is clear. Identifying $H^2(\overline{Y}; \mathbb{Z})$ with $[D_r]^\perp \subseteq H^2(Y; \mathbb{Z})$, it is clear that $\overline{\mathcal{A}}_{\text{\rm{gen}}}(Y) \cap [D_r]^\perp =\overline{\mathcal{A}}_{\text{\rm{gen}}}(\overline{Y})$. Hence, if $f$ preserves $\overline{\mathcal{A}}_{\text{\rm{gen}}}(Y)$, then $\bar{f}$ preserves $\overline{\mathcal{A}}_{\text{\rm{gen}}}(\overline{Y})$.
Since a divisor $\overline{H}$ on $\overline{Y}$ is ample $\iff$ $N \overline{H} - D_r$ is ample on $Y$ for all $N \gg 0$ (identifying $\overline{H}$ with its pullback to $Y$), it follows that, if $\bar{f}$ preserves $\overline{\mathcal{A}}_{\text{\rm{gen}}}(\overline{Y})$, then $f(\overline{\mathcal{A}}_{\text{\rm{gen}}}(Y)) \cap \overline{\mathcal{A}}_{\text{\rm{gen}}}(Y)$ contains an open set, and hence $f(\overline{\mathcal{A}}_{\text{\rm{gen}}}(Y)) = \overline{\mathcal{A}}_{\text{\rm{gen}}}(Y)$ by Lemma~\ref{containopen}. It follows from this and from Theorem~\ref{mainprop} that $R_Y$ is naturally identified with $R_{\overline{Y}}$ (or directly from the definition by noting that there is a bijection from the set of deformations of $(Y,D)$ to those of $(\overline{Y}, \overline{D})$). \end{proof} Henceforth, then, we shall always assume if need be that no component of $D$ is a smooth rational curve of self-intersection $-1$. We turn to the straightforward case where $(Y,D)$ is not negative definite: \begin{proposition}\label{nonneg} Suppose that $(Y,D)$ and $(Y', D')$ are two anticanonical pairs with $r(D) = r(D')$ and such that neither pair is negative definite. If $f\colon H^2(Y; \mathbb{Z}) \to H^2(Y'; \mathbb{Z})$ is an integral isometry with $f([D_i]) = [D_i']$ for all $i$, then $f(\overline{\mathcal{A}}_{\text{\rm{gen}}}(Y)) = \overline{\mathcal{A}}_{\text{\rm{gen}}}(Y')$ and hence $f(R_Y) = R_{Y'}$. Moreover, $$R_Y = \{\beta \in \Lambda(Y,D): \beta^2 =-2\}.$$ \end{proposition} \begin{proof} By Lemma~\ref{excepcomp}, we may assume that no $D_i$ has self-intersection $-1$. The statement that the cycle is not negative definite is then equivalent to the statement that either $D_j^2 \geq 0$ for some $j$ or $D_i^2 = -2$ for all $i$ and $r\geq 2$. In the first case, $D_j$ is nef and $D_j \cdot D > 0$. Hence, if $\alpha$ is a numerical exceptional curve such that $\alpha\cdot [D_i] \geq 0$, then $\alpha$ is effective by Lemma~\ref{numeric}.
Thus $\overline{\mathcal{A}}_{\text{\rm{gen}}}(Y)$ is the set of all $x\in \mathcal{C}^+_D(Y)$ such that $x\cdot \alpha \geq 0$ for all numerical exceptional curves $\alpha$ such that $\alpha\cdot [D_i] \geq 0$. Since $f(\alpha)^2 = \alpha^2$, $f([D_i]) = [D_i']$, and $f(\alpha) \cdot [K_{Y'}] = \alpha \cdot [K_Y]$, it follows that $f(\overline{\mathcal{A}}_{\text{\rm{gen}}}(Y)) = \overline{\mathcal{A}}_{\text{\rm{gen}}}(Y')$. Applying this to reflection in a class $\beta$ of square $-2$ in $\Lambda(Y,D)$ then implies that $\beta \in R_Y$. The case where $D_i^2 = -2$ for every $i$ is similar, using the nef divisor $D = \sum_iD_i$ with $D^2 = 0$. If $\alpha$ is a numerical exceptional curve, then $\alpha$ is effective since $(-\alpha + [K_Y]) \cdot [D] = \alpha \cdot [K_Y] = -1$. The rest of the argument proceeds as before. \end{proof} \begin{remark} If $D$ is irreducible and not negative definite (i.e.\ $D^2 \geq 0$) and there are no $-2$-curves on $Y$, then, as is well-known and noted in Remark~\ref{bestposs}, every numerical exceptional curve is the class of an exceptional curve. However, if $D$ is reducible but not negative definite, then, even if there are no $-2$-curves on $Y$, there may well exist numerical exceptional curves which are not effective, and effective numerical exceptional curves which are not the class of an exceptional curve. \end{remark} From now on we assume that $D$ is negative definite. The case $K_Y^2=-1$ can also be handled by straightforward methods, as noted in \cite{Looij}. (See also \cite{FriedmanMorgan}, II(2.7)(c) in case $D$ is irreducible.) \begin{proposition}\label{minusone} Let $(Y,D)$ and $(Y', D')$ be two negative definite anticanonical pairs with $r(D) = r(D')$ and $K_Y^2 = K_{Y'}^2 = -1$. Let $f\colon H^2(Y; \mathbb{Z}) \to H^2(Y'; \mathbb{Z})$ be an isometry such that $f([D_i]) = [D_i']$ for all $i$ and $f (\mathcal{C}^+(Y)) = \mathcal{C}^+(Y')$. 
Then $f(\overline{\mathcal{A}}_{\text{\rm{gen}}}(Y)) = \overline{\mathcal{A}}_{\text{\rm{gen}}}(Y')$. Moreover, $$R_Y = \{\beta \in \Lambda(Y,D): \beta^2 = -2\},$$ and hence $f(R_Y) = R_{Y'}$. \end{proposition} \begin{proof} Since $(Y,D)$ is negative definite, no component of $D$ is a smooth rational curve of self-intersection $-1$. Fix a nef and big divisor $H$ such that $H\cdot D_i =0$ for all $i$ and $H\cdot G > 0$ for every irreducible curve $G \neq D_i$. If $\alpha$ is a numerical exceptional curve, $(\alpha -[K_Y])^2 = (\alpha + [D])^2 = 0$. By Lemma~\ref{remarkafternumeric}, $\alpha$ is effective $\iff$ $[H] \cdot \alpha > 0$ $\iff$ $[H] \cdot (\alpha + [D]) > 0$. By the Light Cone Lemma (cf.\ \cite{FriedmanMorgan}, p.\ 320), this last condition is equivalent to: $\alpha + [D] \in \overline{\mathcal{C}^+}-\{0\}$. Since this condition is clearly preserved by an isometry $f$ as in the statement of the proposition, we see that $f(\overline{\mathcal{A}}_{\text{\rm{gen}}}(Y)) = \overline{\mathcal{A}}_{\text{\rm{gen}}}(Y')$. The final statement then follows as in the proof of Proposition~\ref{nonneg}. \end{proof} \begin{remark} The hypothesis $K_Y^2 =-1$ implies that $r(D) \leq 10$, so there are only finitely many examples of the above type. For $r(D) = 10$, there is essentially just one combinatorial possibility for $(Y,D)$ neglecting the orientation (cf.\ \cite{FriedmanMiranda}, (4.7), where it is easy to check that this is the only possibility). For $r(D) = 9$, however, there are two different possibilities for the combinatorial type of $(Y,D)$ (again ignoring the orientation). Begin with an anticanonical pair $(\overline{Y}, \overline{D})$, where $\overline{Y}$ is a rational elliptic surface and $\overline{D}=\overline{D}_0 + \cdots + \overline{D}_8$ is a fiber of type $\widetilde{A}_8$ (or $I_9$ in Kodaira's notation).
There is a unique such rational elliptic surface $\overline{Y}$ and its Mordell-Weil group has order $3$ (see for example \cite{MirandaPersson}). In particular, possibly after relabeling the components, there is an exceptional curve meeting $\overline{D}_i$ $\iff$ $i=0,3,6$. It is easy to see that blowing up a point on a component $\overline{D}_i$ meeting an exceptional curve leads to a different combinatorial possibility for an anticanonical pair $(Y,D)$ with $K_Y^2 =-1$ and $r(D) = 9$ than blowing up a point on a component $\overline{D}_i$ which does not meet an exceptional curve. \end{remark} We turn now to the case where $(Y,D)$ is negative definite but with no assumption on $K_Y^2$. \begin{definition} A point $x\in \mathcal{C}^+ \cap \Lambda$ is \textsl{$R$-distinguished} if there exists a codimension one negative definite subspace $V$ of $\Lambda \otimes \mathbb{R}$ spanned by elements of $R$ such that $x\in V^\perp$. Note that the definition only depends on the deformation type of the pair $(Y,D)$. \end{definition} \begin{remark} Clearly, if $V$ is a codimension one negative definite subspace of $\Lambda \otimes \mathbb{R}$ spanned by elements of $R$, then $V$ is defined over $\mathbb{Q}$ and $V^\perp \cap (\Lambda \otimes \mathbb{R})$ is a one-dimensional subspace of $H^2(Y; \mathbb{R})$ defined over $\mathbb{Q}$ and spanned by an $h\in H^2(Y; \mathbb{Z})$ with $h^2 > 0$, $h\cdot [D_i] =0$, and $h\cdot \beta = 0$ for all $\beta \in R\cap V$. Hence, if $h\in \mathcal{C}^+\cap \Lambda$, then $h$ is $R$-distinguished. Also, if the rank of $\Lambda$ is one, then $\{0\}$ is a codimension one negative definite subspace of $\Lambda \otimes \mathbb{R}$, and hence every point of $\mathcal{C}^+ \cap \Lambda$ is $R$-distinguished. However, as we shall see, there exist deformation types $(Y,D)$ with no $R$-distinguished points. 
\end{remark} The following is also clear: \begin{lemma} Let $(Y,D)$ and $(Y', D')$ be two anticanonical pairs with $r(D) = r(D')$ and let $f\colon H^2(Y; \mathbb{Z}) \to H^2(Y'; \mathbb{Z})$ be an isometry such that $f([D_i]) = [D_i']$ for all $i$, $f (\mathcal{C}^+(Y)) = \mathcal{C}^+(Y')$, and $f(R_Y) = R_{Y'}$. Then, if $x$ is an $R_Y$-distinguished point of $\mathcal{C}^+(Y) \cap \Lambda(Y,D)$, $f(x)$ is an $R_{Y'}$-distinguished point of $\mathcal{C}^+(Y') \cap \Lambda(Y',D')$. \qed \end{lemma} Our goal now is to prove: \begin{theorem}\label{disttheorem} Let $(Y,D)$ and $(Y', D')$ be two anticanonical pairs with $r(D) = r(D')$ and let $f\colon H^2(Y; \mathbb{Z}) \to H^2(Y'; \mathbb{Z})$ be an isometry such that $f([D_i]) = [D_i']$ for all $i$, $f (\mathcal{C}^+(Y)) = \mathcal{C}^+(Y')$, and $f(R_Y) = R_{Y'}$. If there exists an $R$-distinguished point of $\mathcal{C}^+ \cap \Lambda$, then $f(\overline{\mathcal{A}}_{\text{\rm{gen}}}(Y)) = \overline{\mathcal{A}}_{\text{\rm{gen}}}(Y')$. \end{theorem} We begin by showing: \begin{proposition}\label{aprop} Let $x$ be an $R$-distinguished point of $\mathcal{C}^+ \cap \Lambda$. Then $x\in \overline{\mathcal{A}}_{\text{\rm{gen}}}$. Moreover, if $\alpha$ is a numerical exceptional curve and $\alpha$ is not in the span of the $[D_j]$, then $\alpha$ is effective $\iff$ $\alpha \cdot x \geq 0$. \end{proposition} \begin{proof} It is enough by Lemma~\ref{definv} to check this on some (global) deformation of $(Y,D)$ with trivial monodromy. By Theorem~\ref{surjper}, we can assume that $$\operatorname{Ker} \varphi_Y = V \cap \Lambda,$$ where $V$ is as in the definition of $R$-distinguished. In particular, if $C \in \Delta_Y$, i.e.\ $C$ is a $-2$-curve on $Y$, then $[C] \in V$. It follows from (i) of Theorem~\ref{mainprop} that every $\beta \in V\cap R$ is a sum of elements of $\Delta_Y$, so that $\Delta_Y$ spans $V$ over $\mathbb{Q}$.
Thus there exist $-2$-curves $C_1, \dots, C_k$ such that $V$ is spanned by the classes $[C_i]$, and the intersection matrix $(C_i\cdot C_j)$ is negative definite. The classes $[C_1], \dots, [C_k], [D_1], \dots, [D_r]$ span a negative definite sublattice of $H^2(Y; \mathbb{Z})$. By Lemma~\ref{constructnef} there exists a nef and big divisor $H$ such that $H$ is perpendicular to the curves $C_1, \dots, C_k, D_1, \dots, D_r$. Clearly, then, $[H] \in \overline{\mathcal{A}}(Y) \subseteq \overline{\mathcal{A}}_{\text{\rm{gen}}}$ and $[H] = tx$ for some $t\in \mathbb{R}^+$. Hence $x\in \overline{\mathcal{A}}_{\text{\rm{gen}}}$ as well. Note that $[H]^\perp$ is spanned over $\mathbb{Q}$ by $[C_1], \dots, [C_k], [D_1], \dots, [D_r]$. Since $x\in \overline{\mathcal{A}}(Y)$, if $\alpha$ is effective, $x\cdot \alpha \geq 0$. Conversely, suppose that $\alpha$ is a numerical exceptional curve with $x\cdot \alpha \geq 0$ and that $\alpha$ is not effective. Then $-\alpha + [K_Y]=[G]$, where $G$ is effective, and $H\cdot (-\alpha + [K_Y]) = -\alpha \cdot [H] \leq 0$. Hence $(-\alpha + [K_Y]) \cdot [H] = 0$. \begin{claim} $-\alpha + [K_Y] = \sum_ia_i[C_i] + \sum_jb_j[D_j]$ where the $a_i, b_j \in \mathbb{Z}$. \end{claim} \begin{proof}[Proof of the claim] In any case, since $-\alpha + [K_Y]$ is perpendicular to $[H]$, there exist $a_i, b_j \in \mathbb{Q}$ such that $-\alpha + [K_Y] = \sum_ia_i[C_i] + \sum_jb_j[D_j]$. Write $-\alpha + [K_Y]= [G] = \sum_in_i[C_i] + \sum_j m_j[D_j] +[F]$, where $n_i, m_j \in \mathbb{Z}$ and $F$ is an effective curve not containing $C_i$ or $D_j$ in its support for any $i,j$. By (ii) of Lemma~\ref{negdef}, $F=0$, $a_i = n_i$ and $b_j= m_j$ for all $i,j$. Hence $a_i, b_j \in \mathbb{Z}$. \end{proof} Since $-\alpha + [K_Y]$ is an integral linear combination of the $[C_i]$ and $[D_j]$, the same holds for $\alpha$. Then $\alpha = \sum_ic_i[C_i] + \sum_jd_j[D_j]$ with $c_i, d_j\in \mathbb{Z}$. But $\alpha ^2 =-1 = (\sum_ic_iC_i)^2 + (\sum_jd_jD_j)^2$. 
Both terms are non-positive, and so $(\sum_ic_iC_i)^2 \geq -1$. But if $\sum_ic_iC_i \neq 0$, then $(\sum_ic_iC_i)^2 \leq -2$. Thus $\sum_ic_iC_i =0$ and $\alpha$ lies in the span of the $[D_j]$. Conversely, if $\alpha$ is not in the span of the $[D_j]$ and $\alpha \cdot x \geq 0$, then $\alpha$ is the class of an effective curve. \end{proof} \begin{proof}[Proof of Theorem~\ref{disttheorem}] It follows from Proposition~\ref{aprop} that, if $x\in \mathcal{C}^+(Y) \cap \Lambda(Y,D)$ is $R_Y$-distinguished, then $\overline{\mathcal{A}}_{\text{\rm{gen}}}(Y)$ is the set of all $y\in \mathcal{C}^+_D(Y)$ such that $\alpha \cdot y \geq 0$ for all $\alpha$ a numerical exceptional curve on $Y$, not in the span of the $[D_i]$, such that $\alpha \cdot x \geq 0$. Let $f$ be an isometry satisfying the conditions of the theorem. Then $f(x)$ is $R_{Y'}$-distinguished, and $f(\overline{\mathcal{A}}_{\text{\rm{gen}}}(Y))$ is clearly the set of all $y\in \mathcal{C}^+_{D'}(Y')$ such that $\alpha \cdot y \geq 0$ for all $\alpha$ a numerical exceptional curve on $Y'$, not in the span of the $[D_i']$, such that $\alpha \cdot f(x) \geq 0$. Again by Proposition~\ref{aprop}, this set is exactly $\overline{\mathcal{A}}_{\text{\rm{gen}}}(Y')$. \end{proof} Theorem~\ref{disttheorem} covers all of the cases in \cite{Looij} except for the case of $5$ components: By inspection of the root diagrams on pp.\ 275--277 of \cite{Looij}, the complement of any trivalent vertex spans a negative definite codimension one subspace, except in the case of $5$ components. To give a direct argument along the above lines which also handles this case (and all of the other cases in \cite{Looij}), we recall the basic setup there: There exists a subset $B= \{\beta_1, \dots, \beta_n\} \subseteq R$ such that $B$ is a basis for $\Lambda\otimes \mathbb{R}$, and there exist $n_i\in \mathbb{Z}^+$ such that $(\sum_in_i\beta_i) \cdot \beta_j > 0$ for all $j$ (compare also \cite{Looijpre} (1.18)). 
In particular, note that the intersection matrix $(\beta_i\cdot \beta_j)$ is non-singular. Finally, by the classification of Theorem (1.1) in \cite{Looij}, there exists a deformation of $(Y,D)$ for which $\beta_i = [C_i]$ is the class of a $-2$-curve for all $i$. (With some care, this explicit argument could be avoided by appealing to the surjectivity of the period map and (i) of Theorem~\ref{mainprop}.) \begin{theorem} Let $(Y,D)$ and $(Y', D')$ be two anticanonical pairs satisfying the hypotheses of the preceding paragraph, both negative definite, with $r(D) = r(D')$, and let $f\colon H^2(Y; \mathbb{Z}) \to H^2(Y'; \mathbb{Z})$ be an isometry such that $f([D_i]) = [D_i']$ for all $i$, $f (\mathcal{C}^+(Y)) = \mathcal{C}^+(Y')$, and $f(R_Y) = R_{Y'}$. Then $f(\overline{\mathcal{A}}_{\text{\rm{gen}}}(Y)) = \overline{\mathcal{A}}_{\text{\rm{gen}}}(Y')$. \end{theorem} \begin{proof} (Sketch) With notation as in the paragraph preceding the statement of the theorem, let $h =\sum_in_i\beta_i$ have the property that $h\cdot \beta_i > 0$. By the arguments used in the proof of Theorem~\ref{disttheorem}, it is enough to show that $h\in \overline{\mathcal{A}}_{\text{\rm{gen}}}$ and that, if $\alpha$ is a numerical exceptional curve and $\alpha$ is not in the span of the $[D_j]$, then $\alpha$ is effective $\iff$ $\alpha \cdot h \geq 0$. Also, it is enough to prove this for some deformation of $(Y,D)$, so we can assume $\beta_i = [C_i]$ is the class of a $-2$-curve for all $i$, hence that $h$ is the class of $H=\sum_in_iC_i$. By construction, $H\cdot C_j > 0$ for every $j$, hence $H$ is nef and big. By Lemma~\ref{remarkafternumeric}, it is enough to show that, if $G$ is an irreducible curve not equal to $D_i$ for any $i$, then $H\cdot G > 0$. Since $H$ is nef, it suffices to rule out the case $H\cdot G =0$, in which case $G^2 < 0$. As $G\neq D_j$ for any $j$, then $G$ is either a $-2$-curve or an exceptional curve. 
The case where $G$ is a $-2$-curve is impossible since then $G$ is orthogonal to the span of the $[C_i]$, but the $[C_i]$ span $\Lambda$ over $\mathbb{Q}$ and the intersection form is nondegenerate. So $G=E$ is an exceptional curve disjoint from the $C_i$. If $(\overline{Y}, \overline{D})$ is the anticanonical pair obtained by contracting $E$, then the $[C_i]$ define classes in $\overline{\Lambda} = \Lambda(\overline{Y}, \overline{D})$. Since the intersection form $(C_i\cdot C_j)$ is nondegenerate, the rank of $\overline{\Lambda}$ is at least that of the rank of $\Lambda$. It is easy to check that the classes of $\overline{D}_1, \dots, \overline{D}_r$ are linearly independent: if say $E$ meets $D_1$, then the intersection matrix of $\overline{D}_2, \dots, \overline{D}_r$ is still negative definite, and then (ii) of Lemma~\ref{negdef} (with $F = \overline{D}_1$ and $G_1, \dots, G_n = \overline{D}_2, \dots, \overline{D}_r$) shows that the classes of $\overline{D}_1, \dots, \overline{D}_r$ are linearly independent. Hence the rank of $H^2(\overline{Y}; \mathbb{Z})$ is greater than or equal to the rank of $H^2(Y; \mathbb{Z})$, which contradicts the fact that $\overline{Y}$ is obtained from $Y$ by contracting an exceptional curve. \end{proof} \section{Some examples} \begin{example} We give a series of examples satisfying the hypotheses of Theorem~\ref{disttheorem} where the number of components and the multiplicities are arbitrarily large. Let $(\overline{Y}, \overline{D})$ be the anticanonical pair obtained by making $k+6$ infinitely near blowups starting with the double point of a nodal cubic. Thus $\overline{D} = \overline{D}_0 + \cdots + \overline{D}_{k+6}$, where $\overline{D}_0^2 = -k$, $\overline{D}_i^2 = -2$, $1\leq i\leq k+5$, and $\overline{D}_{k+6}^2 = -1$. Now blow up $N \geq 1$ points $p_1, \dots, p_N$ on $\overline{D}_{k+6}$, and let $(Y,D)$ be the resulting anticanonical pair. 
Note that $(Y,D)$ is negative definite as long as $k\geq 3$ or $k=2$ and $N\geq 2$. Clearly $r(D) = k+7$ and $K_Y^2 = 3-k-N$. It follows that $\Lambda = \Lambda(Y,D)$ has rank $N$. If $E_1, \dots, E_N$ are the exceptional curves corresponding to $p_1, \dots, p_N$, then the classes $[E_i] - [E_{i+1}]$ span a negative definite root lattice of type $A_{N-1}$ in $\Lambda$. By making all of the blowups infinitely near to the first point, we see that all of the classes $[E_i] - [E_{i+1}]$ lie in $R$. Hence $(Y,D)$ satisfies the hypotheses of Theorem~\ref{disttheorem}. \end{example} Next we turn to examples where the rank of $\Lambda$ is small. The case where the rank of $\Lambda$ is $1$ is covered by Theorem~\ref{disttheorem}, as well as the case where the rank of $\Lambda$ is $2$ and $R\neq \emptyset$. Note that, conjecturally at least, the case where $R\neq \emptyset$ should be related to the question of whether the dual cusp singularity deforms to an ordinary double point. It is easy to construct examples where the rank of $\Lambda$ is $2$ and with $R\neq \emptyset$: begin with an anticanonical pair $(\hat{Y}, \hat{D})$ where the rank of $\Lambda(\hat{Y}, \hat{D})$ is $1$, locate a component $\hat{D}_i$ such that there exists an exceptional curve $E$ on $\hat{Y}$ with $E \cdot \hat{D}_i = 1$, and blow up a point of $\hat{D}_i$ to obtain a new anticanonical pair $(Y,D)$ together with exceptional curves $E,E'$ (where we continue to denote by $E$ the pullback to $Y$ and by $E'$ the new exceptional curve), such that $[E] -[E'] \in R$. So our interest is in finding examples where $R=\emptyset$. \begin{remark} In case the rank of $\Lambda$ is $2$ and $R\neq \emptyset$, it is easy to see that either $(\overline{\mathcal{A}}_{\text{\rm{gen}}} \cap \Lambda)/\mathbb{R}^+$ is a closed (compact) interval or $\overline{\mathcal{A}}_{\text{\rm{gen}}} \cap \Lambda =\mathcal{C}^+\cap \Lambda$ (and in fact both cases arise). 
In either case, there is at most one wall $W^\beta$ with $\beta \in R$ passing through the interior of $\overline{\mathcal{A}}_{\text{\rm{gen}}} \cap \Lambda$, and hence either $R=\emptyset$ or $R =\{\pm \beta\}$. \end{remark} \begin{example} We give an example where the rank of $\Lambda$ is $2$ and there are no $\beta \in \Lambda$ such that $\beta^2 = -2$, in particular $R=\emptyset$, hence the condition $f(R) = R$ is automatic for every isometry $f$, and of an isometry $f$ which preserves $\mathcal{C}^+$ and the classes $[D_i]$ but not the generic ample cone. Let $(\overline{Y}, \overline{D})$ be the anticanonical pair obtained by making $9$ infinitely near blowups starting with the double point of a nodal cubic. Thus $\overline{D} = \overline{D}_0 + \cdots + \overline{D}_9$, where $\overline{D}_0 = 3H -2E_1 -\sum_{i=2}^9E_i$, $\overline{D}_i = E_i - E_{i+1}$, $1\leq i \leq 8$, and $\overline{D}_9 = E_9$. Make two more blowups, one at a point $p_{10}$ on $\overline{D}_9$, and one at a point $p_{11}$ on $\overline{D}_4$. This yields an anticanonical pair $(Y,D)$ with $D_0 = 3H -2E_1 -\sum_{i=2}^9E_i$, $D_i = E_i - E_{i+1}$, $i>0$ and $i\neq 4$, and $D_4 = E_4 - E_5 - E_{11}$. Thus $$(-d_0, \dots, -d_9) = (3,2,2,2,3,2,2,2,2,2),$$ i.e.\ $D$ is of type $\displaystyle \begin{pmatrix} 3&3\\3&5\end{pmatrix}$, with dual cycle $\displaystyle \begin{pmatrix} 6&8\\0&0\end{pmatrix}$ in the notation of \cite{FriedmanMiranda}. Set \begin{align*} G_1 &= 5H - 2\sum_{i=1}^4E_i - \sum_{i=5}^{10}E_i -E_{11};\\ G_2 &= 10H-5\sum_{i=1}^4E_i-\sum_{i=5}^{10}E_i -4E_{11}. \end{align*} It is straightforward to check that $(G_i\cdot D_j) = 0$ for $i=1,2$ and $0\leq j \leq 9$. Hence $G_1, G_2 \in \Lambda$. Also, $$G_1^2 = 2; \qquad G_2^2 = -22; \qquad G_1\cdot G_2 = 0.$$ The corresponding quadratic form $$q(n,m) = (nG_1 + mG_2)^2 = 2n^2 - 22m^2$$ has discriminant $-44=-2^2\cdot 11$. 
Note that this is consistent with the fact that the discriminant of the dual cycle is $$\det \begin{pmatrix} -6&2\\2&-8\end{pmatrix} = 44.$$ It is easy to see that $G_1$ and $G_2$ are linearly independent mod $2$ and hence span a primitive lattice, which must therefore equal $\Lambda$. First we claim that there is no element of $\Lambda$ of square $-2$. This is equivalent to the statement that there is no solution in integers to the equation $n^2 - 11m^2 = -1$, i.e.\ that the fundamental unit in $\mathbb{Z}[\sqrt{11}]$ has norm $1$. But clearly if there were an integral solution to $n^2 - 11m^2 = -1$, then since $-11\equiv 1 \bmod 4$, we could write $-1$ as a sum of squares mod $4$, which is impossible. In fact, the fundamental unit in $\mathbb{Z}[\sqrt{11}]$ is $10 + 3\sqrt{11}$. Thus, if $R$ is the set of roots for $(Y,D)$, then $R=\emptyset$. In particular, any isometry $f$ trivially satisfies: $f(R) = R$. Finally, we claim that there is an isometry $f$ of $H^2(Y; \mathbb{Z})$ such that $f([D_i]) = [D_i]$ for all $i$ and $f(\mathcal{C}^+) = \mathcal{C}^+$, but such that $f$ does not preserve the generic ample cone. Note that the unit group $U$ of $\mathbb{Z}[\sqrt{11}]$ acts as a group of isometries on $\Lambda$, and hence acts as a group of isometries (with $\mathbb{Q}$-coefficients) of the lattice $H^2(Y; \mathbb{Q}) = (\Lambda \otimes \mathbb{Q}) \oplus \bigoplus_i\mathbb{Q}[D_i]$, fixing the classes $[D_i]$. Also, any isometry of $\Lambda$ which is trivial on the discriminant group $\Lambda^{\vee}/\Lambda$ extends to an integral isometry of $H^2(Y; \mathbb{Z})$ fixing the $[D_i]$. Concretely, the discriminant form $\Lambda^{\vee}/\Lambda \cong \mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/22\mathbb{Z}$. If $\mu = 10 + 3\sqrt{11}$, then it is easy to check that the automorphism of $\Lambda$ corresponding to $\mu^2 = 199+ 60\sqrt{11}$ acts trivially on $\Lambda^{\vee}/\Lambda$ and hence defines an isometry $f$ of $H^2(Y; \mathbb{Z})$ fixing the $[D_i]$. 
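As an informal sanity check on the lattice arithmetic above (not part of the argument), the intersection numbers $G_1^2 = 2$, $G_2^2 = -22$, $G_1\cdot G_2 = 0$, the orthogonality $G_i\cdot D_j = 0$, and the unit computation $(10+3\sqrt{11})^2 = 199 + 60\sqrt{11}$ can all be verified mechanically. The following sketch works in the standard basis $H, E_1, \dots, E_{11}$ of $H^2(Y;\mathbb{Z})$ with intersection form $\operatorname{diag}(1,-1,\dots,-1)$:

```python
# Illustrative sanity check of the arithmetic in this example; the helper
# names (dot, vec, E) are ad hoc, not from the paper.

def dot(a, b):
    # intersection pairing: H^2 = 1, E_i^2 = -1, mixed products 0
    return a[0] * b[0] - sum(x * y for x, y in zip(a[1:], b[1:]))

def vec(h, e):
    # class h*H + sum_i e[i-1] * E_i
    return [h] + list(e)

# G_1 = 5H - 2(E_1+...+E_4) - (E_5+...+E_10) - E_11
G1 = vec(5, [-2] * 4 + [-1] * 6 + [-1])
# G_2 = 10H - 5(E_1+...+E_4) - (E_5+...+E_10) - 4*E_11
G2 = vec(10, [-5] * 4 + [-1] * 6 + [-4])

assert dot(G1, G1) == 2
assert dot(G2, G2) == -22
assert dot(G1, G2) == 0

def E(i):
    e = [0] * 11
    e[i - 1] = 1
    return vec(0, e)

# D_0 = 3H - 2E_1 - (E_2+...+E_9); D_i = E_i - E_{i+1} for i != 4;
# D_4 = E_4 - E_5 - E_11
D = [vec(3, [-2] + [-1] * 8 + [0, 0])]
for i in range(1, 10):
    Di = [a - b for a, b in zip(E(i), E(i + 1))]
    if i == 4:
        Di = [a - c for a, c in zip(Di, E(11))]
    D.append(Di)

assert all(dot(G, Dj) == 0 for G in (G1, G2) for Dj in D)

# Unit arithmetic in Z[sqrt(11)]: (10 + 3*sqrt(11))^2 = 199 + 60*sqrt(11),
# and a brute-force search confirms n^2 - 11 m^2 = -1 has no small solutions.
a, b = 10, 3
assert (a * a + 11 * b * b, 2 * a * b) == (199, 60)
assert all(n * n - 11 * m * m != -1 for n in range(200) for m in range(200))
```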
Then $f$ acts freely on $(\mathcal{C}^+\cap \Lambda)/\mathbb{R}^+$, which is just a copy of $\mathbb{R}$ (and $f$ acts on it via translation). But the intersection of the generic ample cone with $\Lambda$ has the nontrivial wall $W^{E_{11}}$, so that the intersection cannot be all of $\mathcal{C}^+\cap \Lambda$. It then follows that $f^{\pm1}$ does not preserve the generic ample cone. Explicitly, let $(\hat{Y}, \hat{D})$ be the surface obtained by contracting $E_{11}$ and let $\hat{G}_1 = 4G_1-G_2 = 10H - 3\sum_{i=1}^{10}E_i$ be the pullback of the positive generator of $\Lambda(\hat{Y}, \hat{D})$. Thus $\hat{G}_1$ is nef and big, so that $\hat{G}_1\in \overline{\mathcal{A}}_{\text{\rm{gen}}}$. Clearly $\hat{G}_1\in W^{E_{11}}$. If $A = \displaystyle \begin{pmatrix} a & 11b\\b&a\end{pmatrix}$ is the isometry of $\Lambda$ corresponding to multiplication by the unit $a + b\sqrt{11}$, then $A(G_1) = aG_1 +bG_2$, $A(G_2) = 11bG_1 + aG_2$, and $A( \hat{G}_1) = (4a-11b)G_1 + (4b-a)G_2$. Thus $$E_{11} \cdot A( \hat{G}_1) = (4a-11b) + 4(4b-a) = 5b,$$ hence $E_{11} \cdot A( \hat{G}_1) < 0$ if $b< 0$. Taking $f^{-1}$, which corresponds to $199- 60\sqrt{11}$, we see that $f^{-1}(\hat{G}_1)\notin \overline{\mathcal{A}}_{\text{\rm{gen}}}$. \end{example} \begin{example} In this example, the rank of $\Lambda$ is $2$ and $R=\emptyset$, but there exist infinitely many $\beta \in \Lambda$ such that $\beta^2 = -2$. The condition $f(R) = R$ is again automatic for every isometry $f$, and reflection about every $\beta \in \Lambda$ with $\beta^2=-2$ is an isometry which preserves $\mathcal{C}^+$ and the classes $[D_i]$ but not the generic ample cone. As in the previous example, let $(\overline{Y}, \overline{D})$ be the anticanonical pair obtained by making $9$ infinitely near blowups starting with the double point of a nodal cubic. 
Thus $\overline{D} = \overline{D}_0 + \cdots + \overline{D}_9$, where $\overline{D}_0 = 3H -2E_1 -\sum_{i=2}^9E_i$, $\overline{D}_i = E_i - E_{i+1}$, $1\leq i \leq 8$, and $\overline{D}_9 = E_9$. Make two more blowups, one at a point $p_{10}$ on $\overline{D}_9$, and one at a point $p_{11}$ on $\overline{D}_0$. This yields an anticanonical pair $(Y,D)$ with $D_0 = 3H -2E_1 -\sum_{i=2}^9E_i-E_{11}$ and $D_i = E_i - E_{i+1}$, $1\leq i\leq 9$. Thus $$(-d_0, \dots, -d_9) = (4,2,2,2,2,2,2,2,2,2),$$ i.e.\ $D$ is of type $\displaystyle \begin{pmatrix} 4\\9\end{pmatrix}$, with dual cycle $\displaystyle \begin{pmatrix} 12\\1\end{pmatrix}$ in the notation of \cite{FriedmanMiranda}. Set \begin{align*} G_1 &= 10H - 3\sum_{i=1}^{10}E_i ;\\ G_2 &= 3H- \sum_{i=1}^{10}E_i +E_{11}. \end{align*} It is straightforward to check that $(G_i\cdot D_j) = 0$ for $i=1,2$ and $0\leq j \leq 9$. Hence $G_1, G_2 \in \Lambda$. Also, $$G_1^2 = 10; \qquad G_2^2 = -2 ; \qquad G_1\cdot G_2 = 0.$$ The corresponding quadratic form $$q(n,m) = (nG_1 + mG_2)^2 = 10n^2 - 2m^2$$ has discriminant $-20=-2^2\cdot 5$. Note that this is consistent with the fact that the discriminant of the dual cycle is $$\det \begin{pmatrix} -12&2\\2&-2\end{pmatrix} = 20.$$ It is easy to see that $G_1$ and $G_2$ are linearly independent mod $2$ and hence span a primitive lattice, which must therefore equal $\Lambda$. To give a partial description of $\overline{\mathcal{A}}_{\text{\rm{gen}}} \cap \Lambda$, note that (as for $\hat{G}_1$ in the previous example) $G_1$ is the pullback to $Y$ of a positive generator for $\Lambda(\hat{Y}, \hat{D})$, where $\hat{Y}$ denotes the surface obtained by contracting $E_{11}$. Thus $G_1$ is nef and big, so that $G_1\in \overline{\mathcal{A}}_{\text{\rm{gen}}}$ and also $G_1\in W^{E_{11}}$. Hence $$\mathcal{C}^+\cap \Lambda = \{nG_1+ mG_2: 5n^2 - m^2 > 0, n>0\},$$ i.e.\ $n>0$ and $-n\sqrt{5} < m < n\sqrt{5}$. The condition $E_{11} \cdot (nG_1+ mG_2) \geq 0$ gives $m\leq 0$. 
To get a second inequality on $n$ and $m$, let $$E' = 5H - 4E_{11} - \sum_{i=1}^{10}E_i.$$ Then $(E')^2 = E' \cdot K_Y = -1$, and $H\cdot E' > 0$. Hence $E'$ is effective. (In fact one can show that $E'$ is generically the class of an exceptional curve.) Thus, for all $nG_1+ mG_2 \in \overline{\mathcal{A}}_{\text{\rm{gen}}}$, $$E' \cdot (nG_1+ mG_2) = 20n + 9m \geq 0,$$ hence $$\overline{\mathcal{A}}_{\text{\rm{gen}}} \cap \Lambda \subseteq \{nG_1 + mG_2: n > 0, -{\textstyle\frac{20}{9}}n \leq m \leq 0\}.$$ Next we describe the classes $\beta\in \Lambda$ with $\beta^2 = -2$. The element $\beta = aG_1 + bG_2\in \Lambda$ satisfies $\beta^2 = -2$ $\iff$ $5a^2 -b^2 =-1$, i.e.\ $\iff$ $b+ a\sqrt{5}$ is a unit in the (non-integrally closed) ring $\mathbb{Z}[\sqrt{5}]$. For example, the class $G_2$ corresponds to $1$; as we have seen, the wall $W^{G_2} = W^{E_{11}}$. The fundamental unit of norm $1$ in $\mathbb{Z}[\sqrt{5}]$ is easily checked to be $9 + 4\sqrt{5}$. However, since we are only concerned with walls which are rays in the fourth quadrant $\{(nG_1+ mG_2): n > 0, m< 0\}$, we shall consider instead $\pm(9-4\sqrt{5})$, and shall choose the sign corresponding to $\beta = 4G_1 - 9G_2$. Note that $$\beta\cdot (nG_1+ mG_2) = 40n + 18m = 0 \iff E'\cdot (nG_1+ mG_2) = 0.$$ Hence $W^\beta = W^{E'}$. Moreover, for every $\gamma\in \Lambda$ such that $\gamma^2 = -2$ and such that the wall $W^\gamma$ passes through the fourth quadrant, either $W^\gamma =W^\beta$ or the corresponding ray $W^\gamma$ lies below $W^\beta$. Thus, for every $\gamma \in \Lambda$ with $\gamma^2 = -2$, $r_\gamma$ does not preserve $\overline{\mathcal{A}}_{\text{\rm{gen}}} \cap \Lambda$. Hence $R=\emptyset$.
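The numerical claims in this example — that $\beta = 4G_1 - 9G_2$ has square $-2$, that the linear forms cutting out $W^\beta$ and $W^{E'}$ are proportional, and the unit computation in $\mathbb{Z}[\sqrt{5}]$ — can be checked mechanically. The following illustrative sketch (helper names are ad hoc) works directly in $\Lambda$ with Gram matrix $\operatorname{diag}(10,-2)$ in the basis $G_1, G_2$:

```python
# Illustrative check of the arithmetic in this example, in the basis G_1, G_2
# of Lambda with Gram matrix diag(10, -2).

def pair(u, v):
    # intersection of u = (n, m) and v = (n', m') in the G_1, G_2 basis
    return 10 * u[0] * v[0] - 2 * u[1] * v[1]

# beta = 4 G_1 - 9 G_2 is a (-2)-class: 10*16 - 2*81 = -2
beta = (4, -9)
assert pair(beta, beta) == -2

# beta . (n G_1 + m G_2) = 40n + 18m, while E' . (n G_1 + m G_2) = 20n + 9m,
# so the two linear forms are proportional and W^beta = W^{E'}.
for n in range(-5, 6):
    for m in range(-5, 6):
        assert pair(beta, (n, m)) == 2 * (20 * n + 9 * m)

# Classes of square -2 correspond to units b + a*sqrt(5) of norm 1:
# 9 + 4*sqrt(5) has norm 81 - 80 = 1, and (a, b) = (4, 9) solves 5a^2 - b^2 = -1.
assert 9 * 9 - 5 * 4 * 4 == 1
assert 5 * 4 * 4 - 9 * 9 == -1
```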
Note that, aside from the isometries $r_\beta$, where $\beta^2 = -2$, one can also construct isometries of infinite order preserving $\mathcal{C}^+$ and the classes $[D_i]$ which do not preserve $\overline{\mathcal{A}}_{\text{\rm{gen}}}$ using multiplication by fundamental units in $\mathbb{Z}[\sqrt{5}]$, as in the previous example. \end{example} \begin{remark} The exceptional curve $E'$ used in the above example is part of a general series of such curves. For $n\geq 0$, let $Y$ be the blowup of $\mathbb{P}^2$ at $2n+1$ points $p_0, \dots, p_{2n}$, with corresponding exceptional curves $E_0, \dots, E_{2n}$, and consider the divisor $$A= nH - (n-1)E_0 - \sum_{i=1}^{2n}E_i.$$ Then $A^2 = A\cdot K_Y = -1$, and it is easy to see that there exist $p_0, \dots, p_{2n}$ such that $A$ is the class of an exceptional curve. In fact, if $\mathbb{F}_1$ is the blowup of $\mathbb{P}^2$ at $p_0$, then $\Sigma = nH - (n-1)E_0$ is very ample on $\mathbb{F}_1$ and, for an anticanonical divisor $D\in |-K_{\mathbb{F}_1}| = |3H-E_0|$, $\Sigma \cdot D = 2n+1$. From this it is easy to see that we can choose the points $p_1, \dots, p_{2n}$ to lie on the image of $D$ in $\mathbb{P}^2$, and hence we can arrange the blowup $Y$ to have (for example) an irreducible anticanonical nodal curve. \end{remark}
\section{Introduction} \label{sec:intro} Stochastic kinetic models, most naturally represented by Markov jump processes (MJPs), can be used to model a wide range of real-world phenomena including the evolution of biological systems such as intra-cellular processes \citep{GoliWilk05,Wilkinson09}, predator-prey interaction \citep{BWK08,ferm2008,GoliWilk11} and epidemics \citep{bailey1975,oneill1999,boys2007}. The focus of this paper is to perform exact and fully Bayesian inference for the parameters governing the MJP, using discrete time course observations that may be incomplete and subject to measurement error. A number of recent attempts to address the inference problem have been made. For example, a data augmentation approach was adopted by \cite{BWK08} and applied to discrete (and error-free) observations of a Lotka-Volterra process. The particle marginal Metropolis-Hastings (PMMH) algorithm of \cite{andrieu09} has been applied by \cite{GoliWilk11} and \cite{sherlock2014} to estimate the parameters in model auto-regulatory networks. The PMMH algorithm offers a promising approach, as it permits a joint update of the parameters and latent process, thus alleviating mixing problems associated with strong correlations. Moreover, the simplest approach is ``likelihood-free'' in the sense that only forward simulations from the MJP are required. These simulations can be readily obtained by using, for example, Gillespie's direct method \citep{Gillespie77}. The PMMH scheme requires running a sequential Monte Carlo (SMC) scheme (such as the bootstrap particle filter of \cite{gordon93}) at every iteration. Given the potential for huge computational burden, improvements to the overall efficiency of PMMH for MJPs have been the focus of \cite{Goli14}.
The latter propose a delayed acceptance analogue of PMMH, (daPMMH), that uses approximations to the MJP such as the chemical Langevin equation (CLE) and linear noise approximation (LNA) \citep{kampen2001} to screen out parameter draws that are likely to be rejected by the sampler. It should be noted that the simplest likelihood free implementations of both PMMH and daPMMH are likely to perform poorly unless the noise in the measurement error process dominates the intrinsic stochasticity in the MJP. Essentially, in low measurement error cases, only a small number of simulated trajectories will be given reasonable weight inside the SMC scheme, leading to highly variable estimates of marginal likelihood used by the PMMH scheme to construct the acceptance probability. Intolerably long mixing times ensue, unless computational budget permits a large number of particles to be used. In the special case of error-free observation, the algorithm will be impracticable for models of realistic size and complexity, since in this case, trajectories must ``hit'' the observations. The development of efficient schemes for generating MJP trajectories that are conditioned on the observations, henceforth referred to as MJP bridges, is therefore of paramount importance. Whilst there is considerable work on the construction of bridges for continuous valued Markov (diffusion) processes \citep{DurhGall02,DelHu06,Fearnhead08,StramYan07,schauer14,delmoral14}, seemingly little has been done for discrete state spaces. The approach taken by \cite{BWK08} linearly interpolates the hazard function between observation times but requires full and error-free observation of the system of interest. \cite{FanShe08} consider an importance sampling algorithm for finite state Markov processes where informative observations are dealt with by sampling reaction times from a truncated exponential distribution and reaction type probabilities are weighted by the probability of reaching the next observation. 
\cite{hajiaghayi14} improve the performance of particle-based Monte Carlo algorithms by analytically marginalising waiting times. The method requires a user-specified potential to push trajectories towards the observation. Our novel contribution is an MJP bridge obtained by sampling a jump process with a conditioned hazard that is derived by approximating the expected number of reaction events between observations, given the observations themselves. The resulting hazard is time dependent, however, we find that a simple implementation based on exponential waiting times between proposed reaction events works well in practice. We also adapt the recently proposed bridge particle filter of \cite{delmoral14} to our problem. Their scheme works by generating forward simulations from the process of interest, and reweighting at a set of intermediate times at which resampling may also take place. A look ahead step in the spirit of \cite{lin2013} prunes out trajectories that are inconsistent with the next observation. The implementation requires an approximation to the (unavailable) transition probability governing the MJP. \cite{delmoral14} used a flexible Gaussian process to approximate the unavailable transition density of a diffusion process. Here, we take advantage of two well known continuous time approximations of the MJP by considering use of the transition density under a discretisation of the CLE or the tractable transition density under the LNA. The methods we propose are simple to implement and are not restricted to finite state spaces. In section~2, a review of the basic structure of the problem is presented, showing how the Markov process representation of a reaction network is constructed. In section~3, we consider the problem of sampling conditioned MJPs and give three viable solutions to this problem. In section~4, it is shown how the recently proposed particle MCMC algorithms \citep{andrieu09} may be applied to this class of models. 
It is also shown how the bridge constructs introduced in section~3 can be used with a pMCMC scheme. The methodology is applied to a number of applications in section~5 before conclusions are drawn in section~6. \section{Stochastic kinetic models} \label{sec:stochkin} We consider a reaction network involving $u$ species $\mathcal{X}_1, \mathcal{X}_2,\ldots,\mathcal{X}_u$ and $v$ reactions $\mathcal{R}_1,\mathcal{R}_2,\ldots,\mathcal{R}_v$, with a typical reaction denoted by $\mathcal{R}_i$ and written using standard chemical reaction notation as \begin{align*} \mathcal{R}_i:\quad p_{i1}\mathcal{X}_1+p_{i2}\mathcal{X}_2+\cdots+p_{iu}\mathcal{X}_u &\longrightarrow q_{i1}\mathcal{X}_1+q_{i2}\mathcal{X}_2+\cdots+q_{iu}\mathcal{X}_u \end{align*} Let $X_{j,t}$ denote the number of molecules of species $\mathcal{X}_j$ at time $t$, and let $X_t$ be the $u$-vector $X_t = (X_{1,t},X_{2,t},\linebreak[1] \ldots,\linebreak[0] X_{u,t})'$. The dynamics of this model can be described by a vector of rates (or hazards) of the reactions together with a matrix which describes the effect of each reaction on the state. We therefore define a rate function $h_i(X_t,c_i)$, giving the overall hazard of a type $i$ reaction occurring, and we let this depend explicitly on the reaction rate constant $c_i$, as well as the state of the system at time $t$. We model the system with a Markov jump process (MJP), so that for an infinitesimal time increment $dt$, the probability of a type $i$ reaction occurring in the time interval $(t,t+dt]$ is $h_i(X_t,c_i)dt$. When a type $i$ reaction does occur, the system state changes discretely, via the $i$th row of the so called net effect matrix $A$, a $v\times u$ matrix with $(i,j)$th element given by $q_{ij}-p_{ij}$. In what follows, for notational convenience, we work with the stoichiometry matrix defined as $S=A'$. 
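As a concrete illustration of these definitions, the net effect and stoichiometry matrices can be built directly from the $p_{ij}$ and $q_{ij}$; a minimal sketch using the standard stochastic Lotka-Volterra reactions (which reappear in section~5):

```python
import numpy as np

# Reactant and product coefficient matrices (p_ij and q_ij) for the standard
# stochastic Lotka-Volterra network:
#   R1: X1 -> 2X1,   R2: X1 + X2 -> 2X2,   R3: X2 -> 0.
p = np.array([[1, 0],
              [1, 1],
              [0, 1]])
q = np.array([[2, 0],
              [0, 2],
              [0, 0]])
A = q - p      # v x u net effect matrix, (i,j)th element q_ij - p_ij
S = A.T        # stoichiometry matrix S = A'
```

Each row of $A$ (column of $S$) gives the state change induced by one occurrence of the corresponding reaction.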
Under the standard assumption of mass action kinetics, the hazard function for a particular reaction of type $i$ takes the form of the rate constant multiplied by a product of binomial coefficients expressing the number of ways in which the reaction can occur, that is, \[ h_i(X_t,c_i) = c_i\prod_{j=1}^u \binom{X_{j,t}}{p_{ij}}. \] Values for $c=(c_1,c_2,\ldots,c_v)'$ and the initial system state $X_0=x_0$ complete the specification of the Markov process. Although this process is rarely analytically tractable for interesting models, it is straightforward to forward-simulate exact realisations of this Markov process using a discrete event simulation method. This is because, if the current time and state of the system are $t$ and $X_t$ respectively, then the time to the next event will be exponential with rate parameter \[ h_0(X_t,c)=\sum_{i=1}^v h_i(X_t,c_i), \] and the event will be a reaction of type $\mathcal{R}_i$ with probability $h_i(X_t,c_i)/h_0(X_t,c)$ independently of the waiting time. Forwards simulation of process realisations in this way is typically referred to as \emph{Gillespie's direct method} in the stochastic kinetics literature, after \cite{Gillespie77}. See \cite{Wilkinson06} for further background on stochastic kinetic modelling. \index{Gillespie algorithm} The primary goal of this paper is that of inference for the stochastic rate constants $c$, given potentially noisy observations of the system state at a set of discrete times. \cite{GoliWilk11} demonstrated that it is possible to use a particle marginal Metropolis-Hastings (PMMH) scheme for such problems, using only the ability to forward simulate from the system of interest and evaluate the density associated with the observation error process. This ``likelihood free'' implementation uses the bootstrap particle filter of \cite{gordon93}. As noted by \cite{GoliWilk11}, the efficiency of this algorithm is likely to be very poor when observations are highly informative. 
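Gillespie's direct method translates directly into code; a minimal sketch (the birth-death rates below are illustrative):

```python
import numpy as np

def gillespie(x0, S, hazard, c, t_end, rng):
    """Gillespie's direct method: exact forward simulation of the MJP.
    S is the u x v stoichiometry matrix (columns are net effects) and
    hazard(x, c) returns the v-vector h(x, c)."""
    t, x = 0.0, np.asarray(x0, dtype=float)
    times, states = [t], [x.copy()]
    while True:
        h = hazard(x, c)
        h0 = h.sum()                          # combined hazard h_0(x, c)
        if h0 <= 0.0:                         # absorbing state: nothing can fire
            break
        t += rng.exponential(1.0 / h0)        # waiting time ~ Exp(h_0)
        if t > t_end:
            break
        j = rng.choice(len(h), p=h / h0)      # reaction type with prob h_i / h_0
        x = x + S[:, j]                       # apply the net effect of reaction j
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)

# Illustration on a linear birth-death system (hypothetical rates):
# R1: X -> 2X with hazard c1*x, R2: X -> 0 with hazard c2*x.
S_bd = np.array([[1.0, -1.0]])
hazard_bd = lambda x, c: np.array([c[0] * x[0], c[1] * x[0]])
ts, xs = gillespie([50], S_bd, hazard_bd, (0.5, 0.6), 5.0, np.random.default_rng(42))
```

The two draws per event (an exponential waiting time and a discrete reaction type) mirror the description above exactly.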
Moreover, in the special case of error-free observation, the algorithm will be computationally intractable for models of realistic size and complexity. We therefore consider three constructions for generating realisations of conditioned jump processes, for use in a PMMH scheme. These constructs rely on cheap yet accurate approximations of the MJP, and we therefore consider two candidate approximations in the next section. \subsection{SKM approximations} \subsubsection{Chemical Langevin equation} Over an infinitesimal time interval, $(t,t+dt]$, the reaction hazards will remain constant almost surely. The occurrence of reaction events can therefore be regarded as the occurrence of events of a Poisson process with independent realisations for each reaction type. Therefore, the mean and variance of the change in the MJP over the infinitesimal time interval can be calculated as \[ \operatorname{E}(dX_t)=S\,h(X_t,c)dt, \qquad \operatorname{Var}(dX_t)= S\operatorname{diag}\{h(X_t,c)\}S'dt. \] The It\^o stochastic differential equation (SDE) that has the same infinitesimal mean and variance as the true MJP is therefore \begin{equation} dX_t = S\,h(X_t,c)dt + \sqrt{S\operatorname{diag}\{h(X_t,c)\}S'}\,dW_t, \label{cle} \end{equation} where (without loss of generality) $\sqrt{S\operatorname{diag}\{h(X_t,c)\}S'}$ is a $u\times u$ matrix and $W_t$ is a $u$-vector of standard Brownian motion. Equation \eqref{cle} is the SDE most commonly referred to as the chemical Langevin equation (CLE), and can be shown to approximate the SKM increasingly well in high concentration scenarios \citep{Gillespie00}. The CLE can rarely be solved analytically, and it is common to work with a discretisation such as the Euler-Maruyama discretisation: \[ \Delta X_t\equiv X_{t+\Delta t}-X_{t} = S\,h(X_t,c)\Delta t + \sqrt{S\operatorname{diag}\{h(X_t,c)\}S'\Delta t}\,Z \] where $Z$ is a standard multivariate Gaussian random variable. 
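A single Euler-Maruyama step of the CLE can be sketched as follows (the matrix square root is taken via an eigendecomposition; the birth-death hazard and rates are illustrative):

```python
import numpy as np

def cle_euler_step(x, c, S, hazard, dt, rng):
    """One Euler-Maruyama step of the CLE:
    x + S h dt + sqrt(S diag{h} S' dt) Z, with Z standard Gaussian."""
    h = hazard(x, c)
    drift = S @ h * dt
    cov = S @ np.diag(h) @ S.T * dt
    # symmetric square root of the (positive semi-definite) diffusion matrix
    w, V = np.linalg.eigh(cov)
    sqrt_cov = V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T
    return x + drift + sqrt_cov @ rng.standard_normal(len(x))

# illustrative birth-death step: hazards (c1*x, c2*x), stoichiometry (1, -1)
S_bd = np.array([[1.0, -1.0]])
hazard_bd = lambda x, c: np.array([c[0] * x[0], c[1] * x[0]])
x1 = cle_euler_step(np.array([100.0]), (0.5, 0.5), S_bd, hazard_bd, 0.1,
                    np.random.default_rng(0))
```

With equal birth and death rates the drift vanishes, so repeated one-step draws from a fixed state have mean $x$ and variance $(c_1+c_2)x\Delta t$.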
For a more formal discussion of the CLE and its derivation, we refer the reader to \cite{Gillespie92b} and \cite{Gillespie00}. \subsubsection{Linear noise approximation} The linear noise approximation (LNA) further approximates the MJP by linearising the drift and noise terms of the CLE. The LNA generally possesses a greater degree of numerical and analytic tractability than the CLE. For example, the LNA solution involves (numerically) integrating a set of ODEs for which standard routines, such as the \texttt{lsoda} package \citep{petzold83}, exist. The LNA can be derived in a number of more or less formal ways \citep{kurtz1970,elf2003,Komorowski09}. Our brief derivation follows the approach of \cite{Wilkinson06} to which we refer the reader for further details. We begin by replacing the hazard function $h(X_t,c)$ in equation~(\ref{cle}) with the rescaled form $\Omega f(X_t /\Omega,c)$ where $\Omega$ is the volume of the container in which the reactions are taking place. Note that the LNA approximates the CLE increasingly well as $\Omega$ and $X_t$ become large, that is, as the system approaches its thermodynamic limit. The CLE then becomes \begin{equation}\label{lna1} d X_t = \Omega S f(X_t /\Omega,c)dt + \sqrt{\Omega S\textrm{diag}\{f(X_t /\Omega,c)\}S'}\,dW_t. \end{equation} We now write the solution $X_t$ of the CLE as a deterministic process plus a residual stochastic process \citep{kampen2001}, \begin{equation}\label{lna0} X_t = \Omega z_{t}+\sqrt{\Omega}M_{t}. \end{equation} We then Taylor expand the rate function around $z_t$ to give \begin{equation}\label{lna3} f(z_t+M_t /\sqrt{\Omega},c) = f(z_t,c)+\frac{1}{\sqrt{\Omega}}F_t M_t + O(\Omega^{-1}) \end{equation} where $F_t$ is the $v\times u$ Jacobian matrix with $(i,j)$th element $\partial f_{i}(z_t,c) / \partial z_{j,t}$ and we suppress the dependence of $F_t$ on $z_t$ and $c$ for simplicity. 
Substituting (\ref{lna0}) and (\ref{lna3}) into equation~(\ref{lna1}) and collecting terms of $O(1)$ and $O(1/\sqrt{\Omega})$ give the ODE satisfied by $z_t$ and the SDE satisfied by $M_t$, respectively, as \begin{align} dz_{t}&=S\,f(z_{t},c)dt \label{lna4}\\ dM_{t}&=S\, F_t M_t dt + \sqrt{S\,\textrm{diag}\{f(z_t,c)\}S'}\,dW_t . \label{lna5} \end{align} Equations~(\ref{lna0}), (\ref{lna4}) and (\ref{lna5}) give the linear noise approximation of the CLE and, in turn, an approximation of the Markov jump process model. For fixed or Gaussian initial conditions, that is $M_{0}\sim \textrm{N}(m_{0},V_{0})$, the SDE in (\ref{lna5}) can be solved explicitly to give \[ M_{t}|c \sim \textrm{N}\left(G_{t}\, m_{0}\,,\,G_{t}\,\Psi_{t}\,G_{t}'\right) \] where $G_t$ and $\Psi_t$ satisfy the coupled ODE system given by \begin{align} dG_{t}&= S\,F_{t}G_{t}dt; \quad G_{0}=I_{u\times u}, \\ d\Psi_{t} &= G_{t}^{-1}S\,\textrm{diag}\{f(z_t,c)\}S' \left(G_{t}^{-1}\right)'dt; \quad \Psi_0 = V_0. \end{align} Hence we obtain \[ X_{t} \sim \textrm{N}\left(\Omega\,z_{t}+\sqrt{\Omega}\,G_{t}\, m_{0} \,,\, \Omega\,G_{t}\,\Psi_{t}\,G_{t}' \right). \] In what follows we assume, without loss of generality, that $\Omega=1$. \section{Sampling conditioned MJPs} \label{sec:cond} We suppose that interest lies in the Markov jump process over an interval $(0,t]$ denoted by $\mathbf{X}(t)=\{X_{s}\,|\, 0< s \leq t\}$. In fact, it is convenient to denote by $\mathbf{X}(t)$ the collection of reaction times and types over the interval $(0,t]$, which in turn gives the sample path of each species over this interval. Suppose further that the initial state $x_{0}$ is a known fixed value and that (a subset of components of) the process is observed at time $t$ subject to Gaussian error, giving a single observation $y_{t}$ on the random variable \[ Y_{t}=P'X_{t}+\varepsilon_{t}\,,\qquad \varepsilon_{t}\sim \textrm{N}\left(0,\Sigma\right). 
\] Here, $Y_{t}$ is a length-$p$ vector, $P$ is a constant matrix of dimension $u\times p$ and $\varepsilon_{t}$ is a length-$p$ Gaussian random vector. We denote the density linking $Y_{t}$ and $X_{t}$ as $p(y_{t}|x_{t})$. For simplicity, we assume in this section that both $\Sigma$ and the values of the rate constants $c$ are known, and drop them from the notation where possible. Our goal is to generate trajectories from $\mathbf{X}(t)|x_{0},y_{t}$ with associated probability function \begin{align*} \pi(\mathbf{x}(t)|x_{0},y_{t})&= \frac{p(y_{t}|x_{t})\pi(\mathbf{x}(t)|x_{0})}{p(y_{t}|x_{0})}\\ & \propto p(y_{t}|x_{t})\pi(\mathbf{x}(t)|x_{0}) \end{align*} where $\pi(\mathbf{x}(t)|x_{0})$ is the probability function associated with $\mathbf{x}(t)$. Although $\pi(\mathbf{x}(t)|x_{0},y_{t})$ will typically be intractable, simulation from $\pi(\mathbf{x}(t)|x_{0})$ is straightforward (via Gillespie's direct method), suggesting the construction of a numerical scheme such as Markov chain Monte Carlo or importance sampling. In keeping with the pMCMC methods described in section~4, we focus on the latter. \begin{algorithm}[t] \caption{Myopic importance sampling}\label{mimp} \begin{enumerate} \item For $i=1,2,\ldots ,N$: \begin{itemize} \item[(a)] Draw $\mathbf{x}(t)^i\sim \pi(\mathbf{x}(t)|x_{0})$ using Gillespie's direct method. \item[(b)] Construct the unnormalised weight $\tilde{w}^{i}=p(y_{t}|x_{t}^i)$. \end{itemize} \item Normalise the weights: $w^{i}=\tilde{w}^{i} / \sum_{i=1}^{N}\tilde{w}^{i}$. \item Resample (with replacement) from the discrete distribution on $\big\{\mathbf{x}(t)^1,\ldots,\mathbf{x}(t)^N\big\}$ using the normalised weights as probabilities. \end{enumerate} \end{algorithm} The simplest importance sampling strategy (given in Algorithm~\ref{mimp}) proposes from $\pi(\mathbf{x}(t)|x_{0})$ and weights by $p(y_{t}|x_{t})$. 
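Algorithm~\ref{mimp} can be sketched as follows for the linear birth-death system used later ($\mathcal{X}\rightarrow 2\mathcal{X}$ at rate $c_1 x$, $\mathcal{X}\rightarrow\emptyset$ at rate $c_2 x$); the rates, observation and noise scale are illustrative:

```python
import numpy as np

def myopic_is(x0, c, t_end, y, sigma, N, rng):
    """Myopic importance sampling sketch for the linear birth-death system;
    weights are the Gaussian observation density p(y | x_t)."""
    def simulate(x):
        # forward simulation (Gillespie's direct method), returning x_t;
        # for this model the reaction-type probability does not depend on x
        t = 0.0
        while True:
            h0 = (c[0] + c[1]) * x
            if h0 <= 0.0:
                return x
            t += rng.exponential(1.0 / h0)
            if t > t_end:
                return x
            x += 1 if rng.random() < c[0] / (c[0] + c[1]) else -1

    x_ends = np.array([simulate(x0) for _ in range(N)], dtype=float)
    w_tilde = np.exp(-0.5 * ((y - x_ends) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    p_hat = w_tilde.mean()            # unbiased estimate of p(y_t | x_0)
    w = w_tilde / w_tilde.sum()       # normalised weights
    resampled = rng.choice(x_ends, size=N, p=w)
    return p_hat, w, resampled

p_hat, w, resampled = myopic_is(50, (0.5, 0.5), 1.0, 50.0, 5.0, 200,
                                np.random.default_rng(1))
```

The average unnormalised weight `p_hat` is the Monte Carlo estimate of the normalising constant discussed next.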
If desired, an unweighted sample can easily be obtained by resampling (with replacement) from the discrete distribution over trajectory draws using the normalised weights as probabilities. Plainly, taking the average of the unnormalised weights gives an unbiased estimate of the normalising constant \[ p(y_{t}|x_{0})=\textrm{E}_{\mathbf{X}(t)|x_{0}}\left(p(y_{t}|X_{t})\right). \] This strategy is likely to work well provided that $y_{t}$ is not particularly informative. The proposal mechanism is independent of the observation $y_{t}$ and as $\Sigma$ is reduced, the variance of the importance weights increases. In an error-free scenario, with $y_{t}\equiv x_{t}$, the unnormalised weights take the value 1 if $x_{t}^{i}=x_{t}$ and are 0 otherwise. Hence, in this extreme scenario, only trajectories that ``hit'' the observation have non-zero weight. In order to circumvent these problems, in section~\ref{sec:condhaz} we derive a novel proposal mechanism based on an approximation of the expected number of reaction events over the interval of interest, conditioned on the observation. In addition, in section~\ref{sec:bpf} we adapt a recently proposed bridge particle filter \citep{delmoral14} to our problem. \subsection{Conditioned hazard} \label{sec:condhaz} We suppose that we have simulated as far as time $s$ and derive an approximation of the expected number of reaction events over the interval $(s,t]$. Let $\Delta R_{s}$ denote the number of reaction events over the interval of length $t-s=\Delta s$. We approximate $\Delta R_{s}$ by assuming a constant reaction hazard over $\Delta s$. A normal approximation to the corresponding Poisson distribution then gives \[ \Delta R_{s}\sim \textrm{N}\left(h(x_s,c)\Delta s\,,\,H(x_s,c)\Delta s\right) \] where $H(x_s,c)=\textrm{diag}\{h(x_s,c)\}$. Under the Gaussian observation regime we have that \[ Y_{t}|X_{s}=x_{s} \sim \textrm{N}\left(P'\left(x_{s}+S\,h(x_s,c)\Delta s\right)\,,\,P'S\,H(x_s,c)S'P\Delta s +\Sigma \right). 
\] Hence, the joint distribution of $\Delta R_{s}$ and $Y_{t}$ can then be obtained approximately as \[ \begin{pmatrix} \Delta R_{s} \\ Y_{t} \end{pmatrix} \sim \textrm{N}\left\{\begin{pmatrix} h(x_s,c)\Delta s \\ P'\left(x_{s}+S\,h(x_s,c)\Delta s\right)\end{pmatrix}\,,\, \begin{pmatrix} H(x_s,c)\Delta s & H(x_s,c)S'P\Delta s\\ P'S\,H(x_s,c)\Delta s & P'S\,H(x_s,c)S'P\Delta s +\Sigma\end{pmatrix}\right\}. \] Taking the expectation of $\Delta R_{s}|Y_{t}=y_{t}$ using standard multivariate normal theory, and dividing the resulting expression by $\Delta s$ gives an approximate conditioned hazard as \begin{align} &h^{*}(x_s,c|y_{t})=h(x_s,c) \nonumber \\ &\qquad+H(x_s,c)S'P\left(P'S\,H(x_s,c)S'P\Delta s +\Sigma\right)^{-1}\left(y_{t}-P'\left[x_{s}+S\,h(x_s,c)\Delta s\right]\right). \label{haz} \end{align} A proposed path $\mathbf{x}(t)^{*}$ can then be produced by sampling reaction events according to an inhomogeneous Poisson process with rate given by (\ref{haz}). An importance sampling scheme based on this proposal mechanism can then be obtained. Although the conditioned hazard in (\ref{haz}) depends on the current time $s$ in a nonlinear way, a simple implementation ignores this time dependence, giving exponential waiting times between proposed reaction events. Algorithm~\ref{condMJP} describes the mechanism for generating $\mathbf{x}(t)^{*}$. \begin{algorithm}[t] \caption{Approximate conditioned MJP generation}\label{condMJP} \begin{enumerate} \item Set $s=0$ and $x^{*}_{s}=x_{0}$. \item Calculate $h^{*}(x_{s}^{*},c|y_{t})$ and the combined hazard $h^{*}_{0}(x_{s}^{*},c|y_{t})=\sum_{i=1}^v h_{i}^{*}(x_{s}^{*},c_i|y_{t})$. \item Simulate the time to the next event, $\tau\sim \textrm{Exp}\{h^{*}_{0}(x_{s}^{*},c|y_{t})\}$. \item Simulate the reaction index, $j$, as a discrete random quantity with probabilities proportional to $h_{i}^{*}(x_{s}^{*},c_{i}|y_{t})$, $i=1,\ldots ,v$. \item Put $x_{s+\tau}^{*}=x_{s}^{*}+S^{j}$ where $S^{j}$ denotes the $j$th column of $S$. 
Put $s:=s+\tau$. \item Output $x_{s}^{*}$ and $s$. If $s<t$, return to step 2. \end{enumerate} \end{algorithm} To calculate the importance weights, we first note that $\pi(\mathbf{x}(t)|x_{0})$ can be written explicitly by considering the generation of all reaction times and types over $(0,t]$. To this end, we let $r_{j}$ denote the number of reaction events of type $\mathcal{R}_{j}$, $j=1,\ldots,v$, and define $n_{r}=\sum_{j=1}^{v}r_{j}$ as the total number of reaction events over the interval. Reaction times (assumed to be in increasing order) and types are denoted by $(t_{i},\nu_{i})$, $i=1,\ldots ,n_{r}$, $\nu_{i}\in \{1,\ldots ,v\}$ and we take $t_{0}=0$ and $t_{n_{r}+1}=t$. \cite{Wilkinson06} gives $\pi(\mathbf{x}(t)|x_{0})$, also known as the complete data likelihood over $(0,t]$, as \begin{align*} \pi(\mathbf{x}(t)|x_{0})&=\left\{\prod_{i=1}^{n_{r}}h_{\nu_{i}}\left(x_{t_{i-1}},c_{\nu_{i}}\right)\right\} \exp\left\{-\sum_{i=0}^{n_{r}}h_{0}\left(x_{t_{i}},c\right)\left[t_{i+1}-t_{i}\right]\right\}\\ &= \left\{\prod_{i=1}^{n_{r}}h_{\nu_{i}}\left(x_{t_{i-1}},c_{\nu_{i}}\right)\right\} \exp\left\{-\int_{0}^{t}h_{0}\left(x_{t},c\right)\,dt\right\}. \end{align*} We let $q(\mathbf{x}(t)|x_{0},y_{t})$ denote the complete data likelihood under the approximate jump process with hazard $h^{*}(x_{s}^{*},c|y_{t})$, so that the importance weight for a path $\mathbf{x}(t)$ is given by \begin{align} &p(y_{t}|x_{t})\frac{\pi(\mathbf{x}(t)|x_{0})}{q(\mathbf{x}(t)|x_{0},y_{t})}\nonumber \\ &\qquad =p(y_{t}|x_{t})\left\{\prod_{i=1}^{n_{r}}\frac{h_{\nu_{i}}\left(x_{t_{i-1}},c_{\nu_{i}}\right)}{h^{*}_{\nu_{i}}\left(x_{t_{i-1}},c_{\nu_{i}}|y_{t}\right)}\right\} \exp\left\{-\sum_{i=0}^{n_{r}}\left[h_{0}\left(x_{t_{i}},c\right)-h^{*}_{0}\left(x_{t_{i}},c|y_{t}\right)\right]\left[t_{i+1}-t_{i}\right]\right\}. 
\label{weight} \end{align} When the inhomogeneous Poisson process approximation is sampled exactly, the importance weight in (\ref{weight}) becomes \begin{align*} & p(y_{t}|x_{t})\left\{\prod_{i=1}^{n_{r}}\frac{h_{\nu_{i}}\left(x_{t_{i-1}},c_{\nu_{i}}\right)}{h^{*}_{\nu_{i}}\left(x_{t_{i-1}},c_{\nu_{i}}|y_{t}\right)}\right\} \exp\left\{-\int_{0}^{t}\left[h_{0}\left(x_{t},c\right)-h^{*}_{0}\left(x_{t},c|y_{t}\right)\right]\,dt\right\}\\ &\qquad =p(y_{t}|x_{t})\frac{d\mathbb{P}}{d\mathbb{Q}}\left(\mathbf{x}(t)\right) \end{align*} where the last term is seen to be the Radon-Nikodym derivative of the true Markov jump process ($\mathbb{P}$) with respect to the inhomogeneous Poisson process approximation ($\mathbb{Q}$) and measures the closeness of the approximating process to the true process. Algorithm~\ref{impCondMJP} gives an importance sampling algorithm that uses an approximate implementation of the inhomogeneous Poisson process approximation. Note that in the special case of no error, the importance weight in step 1(b) has $p(y_{t}|x_{t}^{i})$ replaced with an indicator function assigning the value 1 if $x_{t}^{i}=x_{t}$ and 0 otherwise. Upon completion of the algorithm, an equally weighted sample approximately distributed according to $\pi(\mathbf{x}(t)|x_{0},y_{t})$ is obtained. The average unnormalised weight can be used to (unbiasedly) estimate the normalising constant $p(y_{t}|x_{0})$. \begin{algorithm}[t] \caption{Importance sampling with conditioned hazard}\label{impCondMJP} \begin{enumerate} \item For $i=1,2,\ldots ,N$: \begin{itemize} \item[(a)] Draw $\mathbf{x}(t)^i\sim q(\mathbf{x}(t)|x_{0},y_{t})$ using Algorithm~\ref{condMJP}. \item[(b)] Construct the unnormalised weight \[ \tilde{w}^{i}=p(y_{t}|x_{t}^{i})\frac{\pi(\mathbf{x}(t)^{i}|x_{0})}{q(\mathbf{x}(t)^{i}|x_{0},y_{t})} \] whose form is given by (\ref{weight}). \end{itemize} \item Normalise the weights: $w^{i}=\tilde{w}^{i} / \sum_{i=1}^{N}\tilde{w}^{i}$. 
\item Resample (with replacement) from the discrete distribution on $\big\{\mathbf{x}(t)^1,\ldots,\mathbf{x}(t)^N\big\}$ using the normalised weights as probabilities. \end{enumerate} \end{algorithm} \subsection{Bridge particle filter} \label{sec:bpf} \cite{delmoral14} considered the problem of sampling continuous time, continuous valued Markov processes and proposed a bridge particle filter to weight forward trajectories based on an approximation to the unknown transition probabilities at each reweighting step. Here, we adapt their method to our problem. We note that when using the bridge particle filter to sample MJP trajectories, it is possible to obtain a likelihood free scheme. Without loss of generality, we adopt an equispaced partition of $[0,t]$ as \[ 0=t_{0}<t_{1}<\cdots < t_{n}=t. \] This partition is used to determine the times at which resampling may take place. Introduce the weight functions \[ W_{k}(x_{t_{k-1}:t_{k}})=\frac{q(y_{t}|x_{t_{k}})}{q(y_{t}|x_{t_{k-1}})} \] where $q(y_{t}|x_{t_{k}})$, $k=0,\ldots ,n$ are positive functions. Note that \[ \frac{q(y_{t}|x_{0})}{q(y_{t}|x_{t})}\prod_{k=1}^{n}W_{k}(x_{t_{k-1}:t_{k}})=1 \] and write $\pi(\mathbf{x}(t)|x_{0},y_{t})$ as \begin{align} \pi(\mathbf{x}(t)|x_{0},y_{t})&\propto p(y_{t}|x_{t})\pi(\mathbf{x}(t)|x_{0})\frac{q(y_{t}|x_{0})}{q(y_{t}|x_{t})}\prod_{k=1}^{n}W_{k}(x_{t_{k-1}:t_{k}})\nonumber \\ &\propto p(y_{t}|x_{t})\frac{q(y_{t}|x_{0})}{q(y_{t}|x_{t})}\prod_{k=1}^{n}W_{k}(x_{t_{k-1}:t_{k}})\pi(\mathbf{x}(t_{k-1}:t_{k})|x_{t_{k-1}})\nonumber \\ &\propto \prod_{k=1}^{n}W_{k}(x_{t_{k-1}:t_{k}})\pi(\mathbf{x}(t_{k-1}:t_{k})|x_{t_{k-1}}) \label{target} \end{align} where $\pi(\mathbf{x}(t_{k-1}:t_{k})|x_{t_{k-1}})$ denotes the probability function associated with $\mathbf{X}(t_{k-1}:t_{k})=\{X_{s}\,|\, t_{k-1}< s \leq t_{k}\}$ and the last line (\ref{target}) follows by taking $q(y_{t}|x_{t})$ to be $p(y_{t}|x_{t})$ and absorbing $q(y_{t}|x_{0})$ into the proportionality constant. 
The form of (\ref{target}) suggests a sequential Monte Carlo (SMC) scheme (also known as a particle filter) where at time $t_{k-1}$ each particle (trajectory) $\mathbf{x}(t_{k-1})^{i}$ is extended by simulating from $\pi(\mathbf{x}(t_{k-1}:t_{k})|x_{t_{k-1}}^{i})$ and incrementally weighted by $W_{k}(x_{t_{k-1}:t_{k}})$. Intuitively, by ``looking ahead'' to the observation, trajectories that are not consistent with $y_{t}$ are given small weight and should be pruned out with a resampling step. \cite{delmoral14} suggest an adaptive resampling procedure so that resampling is only performed if the effective sample size (ESS) falls below some fraction of the number of particles, say $\beta$. The ESS is defined \citep{liu1995} as a function of the weights $w^{1:N}$ by \begin{equation}\label{ess} ESS\left(w^{1:N}\right)=\frac{\left(\sum_{i=1}^{N}w^{i}\right)^{2}}{\sum_{i=1}^{N}\left(w^{i}\right)^{2}}\,. \end{equation} It remains to choose sensible functions $q(y_{t}|x_{t_{k}})$ to be used to construct the weights. We propose to use the density associated with $Y_{t}|X_{t_{k}}=x_{t_{k}}$ under the CLE or LNA: \begin{align*} q_{CLE}(y_{t}|x_{t_{k}})&=\textrm{N}\left(y_{t};\,P'\left[x_{t_{k}}+S\,h(x_{t_{k}},c)(t-t_{k})\right]\,,\,P'S\,H(x_{t_{k}},c)S'P(t-t_{k})+\Sigma\right),\\ q_{LNA}(y_{t}|x_{t_{k}})&=\textrm{N}\left(y_{t};\,P'\left[z_{t}+G_{t-t_{k}}\, (x_{t_{k}}-z_{t_{k}})\right] \,,\, P'G_{t-t_{k}}\,\Psi_{t-t_{k}}\,G_{t-t_{k}}'P+\Sigma\right). \end{align*} Note that due to the intractability of the CLE, we propose to use a single step of the Euler-Maruyama approximation. Comments on the relative merits of each scheme are given in Section~\ref{sec:eff}. Algorithm~\ref{bridgePF} gives the sequence of steps necessary to implement the bridge particle filter. The average unnormalised weight obtained at time $t$ can be used to estimate the normalising constant $p(y_{t}|x_{0})$: \[ \widehat{p}(y_{t}|x_{0})\propto \frac{1}{N}\sum_{i=1}^{N}\tilde{w}_{n}^{i}\,. 
\] \begin{algorithm}[t] \caption{Bridge particle filter}\label{bridgePF} \begin{enumerate} \item Initialise. For $i=1,2,\ldots ,N$: \begin{itemize} \item[(a)] Set $x_{0}^{i}=x_{0}$ and put $w_{0}^{i}=1/N$. \end{itemize} \item Perform sequential importance sampling. For $k=1,2,\ldots ,n$ and $i=1,2,\ldots ,N$: \begin{itemize} \item[(a)] If $ESS(w_{k-1}^{1:N})<\beta N$ draw indices $a_{k}^{i}$ from the discrete distribution on $\{1,\ldots,N\}$ with probabilities given by $w_{k-1}^{1:N}$ and put $w_{k}^{i}=1/N$. Otherwise, put $a_{k}^{i}=i$. \item[(b)] Draw $\mathbf{x}(t_{k-1}:t_{k})^{i}\sim \pi(\cdot|x_{t_{k-1}}^{a_{k}^{i}})$ using the Gillespie algorithm initialised at $x_{t_{k-1}}^{a_{k}^{i}}$. \item[(c)] Construct the unnormalised weight \[ \tilde{w}_{k}^{i}=\tilde{w}_{k-1}^{i}\frac{q(y_{t}|x_{t_{k}}^{i})}{q(y_{t}|x_{t_{k-1}}^{a_{k}^{i}})} \] \item[(d)] Normalise the weights: $w_{k}^{i}=\tilde{w}_{k}^{i} / \sum_{i=1}^{N}\tilde{w}_{k}^{i}$. \end{itemize} \item Let $b_{n}^{i}=i$ and define $b_{k}^{i}=a_{k+1}^{b_{k+1}^{i}}$ recursively. Resample (with replacement) from the discrete distribution on $\big\{(\mathbf{x}(0:t_{1})^{b_{1}^{i}},\ldots,\mathbf{x}(t_{n-1}:t)^{i}),i=1,\ldots,N\big\}$ using the normalised weights as probabilities. \end{enumerate} \end{algorithm} We now consider some special cases of Algorithm~\ref{bridgePF}. For unknown $x_{0}$ with prior probability mass function $\pi(x_{0})$, the target becomes \[ \pi(\mathbf{x}(t),x_{0}|y_{t})\propto \pi(x_{0})q(y_{t}|x_{0})\prod_{k=1}^{n}W_{k}(x_{t_{k-1}:t_{k}})\pi(\mathbf{x}(t_{k-1}:t_{k})|x_{t_{k-1}}) \] which suggests that step 1(a) should be replaced by sampling particles $x_{0}^{i}\sim \pi(x_{0})$. The contribution $q(y_{t}|x_{0})$ could either be absorbed into the final weight (taking care to respect the ancestral lineage of the trajectory), or an initial look ahead step could be performed by resampling amongst the $x_{0}^{i}$ with weights proportional to $q(y_{t}|x_{0}^{i})$. 
If the latter strategy is adopted and no additional resampling steps are performed, the algorithm reduces to the auxiliary particle filter of \cite{pitt1999}, where particles are pre-weighted by $q(y_{t}|x_{0})$ and propagated through myopic forward simulation. If no resampling steps are performed at any time, the algorithm reduces to the myopic importance sampling strategy described in Algorithm~\ref{mimp}. In the error-free scenario, the target can be written as \[ \pi(\mathbf{x}(t)|x_{t},x_{0})\propto\frac{q(x_{t}|x_{0})}{q(x_{t}|x_{t_{n-1}})} \pi(\mathbf{x}(t_{n-1}:t)|x_{t_{n-1}}) \prod_{k=1}^{n-1}W_{k}(x_{t_{k-1}:t_{k}})\pi(\mathbf{x}(t_{k-1}:t_{k})|x_{t_{k-1}}) \] where the incremental weight functions are redefined as \[ W_{k}(x_{t_{k-1}:t_{k}})=\frac{q(x_{t}|x_{t_{k}})}{q(x_{t}|x_{t_{k-1}})}\,. \] The form of the target suggests that at time $t_{n-1}$, particle trajectories should be propagated via $\pi(\mathbf{x}(t_{n-1}:t)|x_{t_{n-1}})$ and weighted by $q(x_{t}|x_{0})/q(x_{t}|x_{t_{n-1}})$, provided that the trajectory ``hits'' the observation $x_t$, otherwise a weight of 0 should be assigned. Hence, unlike in the continuous state space scenario considered by \cite{delmoral14}, the algorithm is likelihood free, in the sense that $\pi(\mathbf{x}(t_{n-1}:t)|x_{t_{n-1}})$ need not be evaluated. \subsection{Comments on efficiency} \label{sec:eff} Application of Algorithm~\ref{impCondMJP} requires calculation of the conditioned hazard function in (\ref{haz}) after every reaction event. The cost of this calculation will therefore be dictated by the number of observed components $p$, given that a $p\times p$ matrix must be inverted. In practice, however, for many systems of interest it is unlikely that all components will be observed, and we anticipate that $p\ll u$, where $u$ is the number of species in the system. 
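The conditioned hazard calculation of (\ref{haz}) is a direct translation of the multivariate normal conditioning; a minimal sketch (the birth-death hazard and numerical values are illustrative), with components truncated at zero as discussed below:

```python
import numpy as np

def conditioned_hazard(x, c, y, s, t, S, hazard, P, Sigma):
    """Approximate conditioned hazard h*(x_s, c | y_t), truncated at zero."""
    ds = t - s
    h = hazard(x, c)
    H = np.diag(h)
    A = P.T @ S @ H @ S.T @ P * ds + Sigma          # p x p matrix to invert
    resid = y - P.T @ (x + S @ h * ds)
    h_star = h + H @ S.T @ P @ np.linalg.solve(A, resid)
    return np.maximum(h_star, 0.0)

# illustrative birth-death check: full observation of the single species,
# an observation above the forward mean should boost the birth hazard and
# shrink the death hazard
hazard_bd = lambda x, c: np.array([c[0] * x[0], c[1] * x[0]])
h_star = conditioned_hazard(np.array([50.0]), (0.5, 0.5), np.array([60.0]),
                            0.0, 1.0, np.array([[1.0, -1.0]]), hazard_bd,
                            np.array([[1.0]]), np.array([[1.0]]))
```

Note that in this example the birth and death adjustments cancel, so the combined hazard is unchanged while the type probabilities tilt towards the observation.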
The construction of the conditioned hazard is based on the assumption that the hazard function is constant over the remaining time interval $(s,t]$ and that the number of reactions over this interval is approximately Gaussian. The performance of the construct is therefore likely to diminish if applied over time horizons during which the reaction hazards vary substantially. We also note that the elements of the conditioned hazard are not guaranteed to be positive and we therefore truncate each hazard component at zero. We implement the bridge particle filter in Algorithm~\ref{bridgePF} with the weight functions obtained either through the CLE or the LNA. To maximise statistical efficiency, we require that $q(y_{t}|x_{t_{k}})\approx p(y_{t}|x_{t_{k}})$. Given the analytic intractability of the CLE, we obtain $q$ via a single step of an Euler-Maruyama scheme. Whilst a single Euler-Maruyama step is computationally cheap, we anticipate that applying it over large time intervals (where non-linear dynamics are observed) is likely to be unsatisfactory. The tractability of the LNA has been recently exploited \citep{Komorowski09,fearnhead12,Goli14} and shown to give a reasonable approximation to the MJP for a number of reaction networks. However, use of the LNA requires the solution of a system of $u(u+1)/2 +2u$ coupled ODEs. For most stochastic kinetic models of interest, the solution to the LNA ODEs will not be analytically tractable. Whilst good numerical ODE solvers are readily available, the bridge particle filter is likely to require a full numerical solution over the time interval of interest for each particle (except in the special error-free case where only a single solution is required). Both the CLE and LNA replace the intractable transition probability with a Gaussian approximation. 
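As a concrete (and hypothetical) illustration of the numerical solution of the LNA ODEs, the deterministic path, fundamental matrix and scaled variance can be integrated with a standard solver for the linear birth-death system of section~\ref{sec:bdmodel}; here the fundamental matrix ODE is written in terms of the drift Jacobian $S\,F_t$:

```python
import numpy as np
from scipy.integrate import solve_ivp

def lna_moments(x0, c1, c2, t):
    """Integrate the LNA ODEs for the linear birth-death system
    (X -> 2X at rate c1*x, X -> 0 at rate c2*x); u = 1, so z, G and Psi
    are scalars. Returns the LNA mean and variance of X_t (m0 = 0, V0 = 0)."""
    S = np.array([1.0, -1.0])                # stoichiometry (u x v, flattened)

    def rhs(t_, state):
        z, G, Psi = state
        f = np.array([c1 * z, c2 * z])       # rescaled hazards f(z, c)
        F = np.array([c1, c2])               # Jacobian df/dz (v x u, flattened)
        dz = S @ f                           # dz/dt = S f(z, c)
        dG = (S @ F) * G                     # fundamental matrix: dG/dt = (S F) G
        beta = S @ np.diag(f) @ S            # S diag{f(z, c)} S'
        dPsi = beta / G ** 2                 # dPsi/dt = G^{-1} beta (G^{-1})'
        return [dz, dG, dPsi]

    sol = solve_ivp(rhs, (0.0, t), [float(x0), 1.0, 0.0], rtol=1e-8, atol=1e-10)
    z, G, Psi = sol.y[:, -1]
    return z, G ** 2 * Psi                   # mean z_t and variance G_t Psi_t G_t'
```

For this linear network the LNA mean and variance coincide with the exact MJP moments, which gives a direct numerical check.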
Moreover, the approximations may be light tailed relative to the target, and erstwhile valid trajectories may be pruned out by the resampling procedure. Tempering the approximations by raising $q(y_t|x_{t_{k}})$ to a power $\gamma$ ($0<\gamma<1$) may alleviate this problem at the expense of choosing an appropriate value for the additional tuning parameter $\gamma$. We assess the empirical performance of each scheme in Section~\ref{sec:app}. \section{Bayesian inference} \label{sec:bayes} Consider a time interval $[0,T]$ over which a Markov jump process $\mathbf{X}=\{X_{t}\,|\, 0\leq t \leq T\}$ is not observed directly, but observations (on a regular grid) $\mathbf{y}=\{y_{t}\,|\, t=0,1,\ldots ,T\}$ are available and assumed conditionally independent (given $\mathbf{X}$) with conditional probability distribution obtained via the observation equation, \begin{equation}\label{obs_eq} Y_{t}=P'X_{t}+\varepsilon_{t},\qquad \varepsilon_{t}\sim \textrm{N}\left(0,\Sigma\right),\qquad t=0,1,\ldots ,T. \end{equation} As in Section~\ref{sec:cond}, we take $Y_{t}$ to be a length-$p$ vector, $P$ is a constant matrix of dimension $u\times p$ and $\varepsilon_{t}$ is a length-$p$ Gaussian random vector. We assume that primary interest lies in the rate constants $c$ where, in the case of unknown measurement error variance, the parameter vector $c$ is augmented to include the parameters of $\Sigma$. Bayesian inference may then proceed through the marginal posterior density \begin{equation} p(c|\mathbf{y})\propto p(c)p(\mathbf{y}|c)\label{jp} \end{equation} where $p(c)$ is the prior density ascribed to $c$ and $p(\mathbf{y}|c)$ is the marginal likelihood. Since the posterior in (\ref{jp}) will be intractable in practice, samples are usually generated from (\ref{jp}) via MCMC. 
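For synthetic-data experiments, observations from (\ref{obs_eq}) can be generated from a simulated path, which is piecewise constant between event times; in this sketch `Sigma_chol` denotes a Cholesky factor of $\Sigma$ and is an assumption of the implementation:

```python
import numpy as np

def observe_on_grid(path_times, path_states, P, Sigma_chol, T, rng):
    """Generate y_t = P'x_t + eps_t on the grid t = 0, 1, ..., T from a
    simulated MJP path (event times and post-event states), using the fact
    that the path is piecewise constant between events."""
    ys = []
    for t in range(T + 1):
        idx = np.searchsorted(path_times, t, side="right") - 1
        x_t = path_states[idx]                          # state at time t
        eps = Sigma_chol @ rng.standard_normal(P.shape[1])
        ys.append(P.T @ x_t + eps)
    return np.array(ys)
```

Setting `Sigma_chol` to the zero matrix recovers the error-free observation regime.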
A further complication is the intractability of the marginal likelihood term, and we therefore adopt the particle marginal Metropolis-Hastings scheme of \cite{andrieu10} which has been successfully applied to stochastic kinetic models in \cite{GoliWilk11} and \cite{Goli14}. \subsection{Particle marginal Metropolis-Hastings} \label{sec:pmmh} Since interest lies in the marginal posterior in (\ref{jp}), we consider the special case of the particle marginal Metropolis-Hastings (PMMH) algorithm \citep{andrieu10} which can be seen as a pseudo-marginal MH scheme \citep{beaumont03,andrieu09b}. Under some fairly mild conditions (for which we refer the reader to \cite{delmoral04} and \cite{andrieu10}), a sequential Monte Carlo scheme targeting the probability associated with the conditioned MJP, $\pi(\mathbf{x}|\mathbf{y},c)$, can be implemented to give an unbiased estimate of the marginal likelihood. We write this estimate as $\widehat{p}(\mathbf{y}|c,u)$ where $u$ denotes all random variables generated by the SMC scheme according to some density $q(u|c)$. We now consider a target of the form \[ \widehat{p}(c,u|\mathbf{y})\propto \widehat{p}(\mathbf{y}|c,u)q(u|c)p(c) \] for which marginalisation over $u$ gives \begin{align*} \int\widehat{p}(c,u|\mathbf{y})\,du&\propto p(c)\textrm{E}_{u|c}\left\{\widehat{p}(\mathbf{y}|c,u)\right\}\\ &\propto p(c)p(\mathbf{y}|c). \end{align*} Hence, an MH scheme targeting $\widehat{p}(c,u|\mathbf{y})$ with proposal kernel $q(c^{*}|c)q(u^{*}|c^{*})$ accepts a move from $(c,u)$ to $(c^{*},u^{*})$ with probability \[ \frac{\widehat{p}\big(\mathbf{y}|c^{*},u^{*}\big)p\big(c^{*}\big)}{\widehat{p}\big(\mathbf{y}|c,u\big)p\big(c\big)} \times \frac{q\big(c|c^{*}\big)} {q\big(c^{*}|c\big)}. \] We see that the values of $u$ need never be stored and it should now be clear that the scheme targets the correct marginal distribution $p(c|\mathbf{y})$. 
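The PMMH loop itself is compact; a sketch, assuming a user-supplied callable `estimate_loglik` that returns the log of the marginal likelihood estimate produced by an SMC scheme (all names and numerical values here are illustrative), with a Gaussian random walk proposal on $\log(c)$:

```python
import numpy as np

def pmmh(log_prior, estimate_loglik, c0, n_iters, rw_sd, rng):
    """Pseudo-marginal MH sketch with a Gaussian random walk on log(c).
    log_prior is the log prior density of c; estimate_loglik(c) wraps the
    SMC scheme and returns the log of the marginal likelihood estimate."""
    log_c = np.log(np.asarray(c0, dtype=float))
    ll, lp = estimate_loglik(np.exp(log_c)), log_prior(np.exp(log_c))
    chain, accepted = [np.exp(log_c)], 0
    for _ in range(n_iters):
        log_c_new = log_c + rw_sd * rng.standard_normal(len(log_c))
        ll_new, lp_new = estimate_loglik(np.exp(log_c_new)), log_prior(np.exp(log_c_new))
        # log acceptance ratio; the sum(log c) terms are the Jacobian of the
        # log transform, since the prior is specified on the c scale
        log_alpha = (ll_new + lp_new + log_c_new.sum()) - (ll + lp + log_c.sum())
        if np.log(rng.random()) < log_alpha:
            log_c, ll, lp = log_c_new, ll_new, lp_new
            accepted += 1
        chain.append(np.exp(log_c))
    return np.array(chain), accepted / n_iters

# illustrative run: a deterministic "estimate" and an Exp(1) prior
chain, acc_rate = pmmh(lambda c: -np.sum(c),
                       lambda c: -0.5 * np.sum(np.log(c) ** 2),
                       [1.0], 300, 0.5, np.random.default_rng(3))
```

Validity of the pseudo-marginal argument above requires the likelihood estimator to be unbiased on the natural (not log) scale; the sketch simply calls whatever estimator is supplied.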
\subsection{Implementation} \label{sec:implement} Algorithms~\ref{impCondMJP} and \ref{bridgePF} can readily be applied to give an SMC scheme targeting $\pi(\mathbf{x}|\mathbf{y},c)$. In each case, an initialisation step should be performed where a weighted sample $\{(x_{0}^{i},w_{0}^{i}),i=1,\ldots ,N\}$ is obtained by drawing values $x_{0}^{i}$ from some prior with mass function $\pi(x_0)$ and assigning weights proportional to $p(y_{0}|x_{0}^{i},c)$. If desired, resampling could be performed so that the algorithm is initialised with an equally weighted sample drawn from $\pi(x_{0}|y_{0},c)$. Algorithms~\ref{impCondMJP} and \ref{bridgePF} can then be applied sequentially, for times $t=1,2,\ldots,T$, simply by replacing $x_{0}$ with $x_{t-1}^{i}$. After assimilating all information, an unbiased estimate of the marginal likelihood $p(\mathbf{y}|c)$ is obtained as \begin{equation}\label{margll} \widehat{p}(\mathbf{y}|c)=\widehat{p}(y_{0}|c)\prod_{t=1}^{T}\widehat{p}(y_{t}|\mathbf{y}(t-1),c) \end{equation} where $\mathbf{y}(t-1)=\{y_{s}\,|\, s=0,1,\ldots,t-1\}$ and we have dropped $u$ from the notation for simplicity. The product in (\ref{margll}) can be obtained from the output of Algorithms~\ref{impCondMJP} and \ref{bridgePF}. For example, when using the conditioned hazard approach, (\ref{margll}) is simply the product over $t$ of the average unnormalised weight obtained in step 1(b). Use of Algorithms~\ref{impCondMJP} and \ref{bridgePF} in this way gives SMC schemes that fall into a class of auxiliary particle filters \citep{pitt1999}. We refer the reader to \cite{pitt12} for a theoretical treatment of the use of an auxiliary particle filter inside an MH scheme. The mixing of the PMMH scheme is likely to depend on the number of particles used in the SMC scheme. Whilst the method can be implemented using just $N=1$ particle, the corresponding estimator of the marginal likelihood will be highly variable, and the impact of this on the PMMH algorithm will be a poorly mixing chain. 
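On the log scale, the estimate (\ref{margll}) is a running sum of the logarithms of the average unnormalised weights. A small sketch (our own helper, not the authors' code), computed stably via the log-sum-exp trick:

```python
import numpy as np

def log_marginal_likelihood(unnorm_log_weights):
    """Given an array of shape (T+1, N) holding the unnormalised
    log-weights produced at each assimilation step of the SMC scheme,
    return log p-hat(y|c): the sum over t of log((1/N) * sum_i w_t^i),
    evaluated stably by subtracting the per-row maximum before exponentiating."""
    lw = np.asarray(unnorm_log_weights, dtype=float)
    n = lw.shape[1]
    m = lw.max(axis=1, keepdims=True)
    # log-mean-exp of each row, then sum the per-step contributions
    step = m[:, 0] + np.log(np.exp(lw - m).sum(axis=1)) - np.log(n)
    return step.sum()
```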
As noted by \cite{andrieu09b}, the mixing efficiency of the PMMH scheme decreases as the variance of the estimated marginal likelihood increases. This problem can be alleviated at the expense of greater computational cost by increasing $N$. This therefore suggests an optimal value of $N$ and finding this choice is the subject of \cite{sherlock2013} and \cite{doucet13}. The former show that for a ``standard asymptotic regime'' $N$ should be chosen so that the variance in the noise in the estimated log-posterior is around 3, but find that for low dimensional problems a smaller value (around 2) is optimal. We therefore recommend performing an initial pilot run of PMMH to obtain an estimate of the posterior mean (or median) parameter value, and a (small) number of additional sampled values. The value of $N$ should then be chosen so that the variance of the noise in the estimated log-posterior is (ideally) in the range $[2,4]$. Since all parameter values must be strictly positive we adopt a proposal kernel corresponding to a random walk on $\log(c)$, with Gaussian innovations. We take the innovation variance to be $\lambda \widehat{\textrm{Var}}(\log(c))$ and follow the practical advice of \cite{sherlock2013} by tuning $\lambda$ to give an acceptance rate of around 15\%. \section{Applications} \label{sec:app} In order to examine the empirical performance of the methods proposed in section~\ref{sec:cond}, we consider three examples. These are a simple (and tractable) birth-death model, the stochastic Lotka-Volterra model examined by \citet{BWK08} and a systems biology model of bacterial motility regulation \citep{Wilkinson11}. \subsection{Birth-Death} \label{sec:bdmodel} The birth-death reaction network takes the form \[ \mathcal{R}_{1}:\, \mathcal{X}_{1} \longrightarrow 2\mathcal{X}_{1},\quad \mathcal{R}_{2}:\,\mathcal{X}_{1} \longrightarrow \emptyset \] with birth and death reactions shown respectively. 
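This network can be simulated exactly with Gillespie's direct method; a minimal Python sketch (function name and interface are ours):

```python
import numpy as np

def gillespie_bd(x0, c1, c2, t_end, rng):
    """Exact simulation of the birth-death network
    R1: X -> 2X (hazard c1*x),  R2: X -> 0 (hazard c2*x),
    via Gillespie's direct method; returns the state at time t_end."""
    x, t = x0, 0.0
    while x > 0:                          # x = 0 is absorbing
        h1, h2 = c1 * x, c2 * x
        h0 = h1 + h2
        t += rng.exponential(1.0 / h0)    # waiting time to the next event
        if t > t_end:
            break                         # next event falls beyond t_end
        x += 1 if rng.uniform() * h0 < h1 else -1
    return x
```

For a linear birth-death process the sample mean of repeated draws agrees with the analytic mean $x_{0}e^{(c_{1}-c_{2})t}$, which gives a convenient sanity check on the simulator.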
The stoichiometry matrix is given by \[ S = \begin{pmatrix} 1 & -1 \end{pmatrix} \] and the associated hazard function is \[ h(x_t,c) = (c_{1}x_{t}, c_{2}x_{t})' \] where $x_t$ denotes the state of the system at time $t$. The CLE is given by \[ dX_{t}=\left(c_{1}-c_{2}\right)X_{t}\,dt + \sqrt{\left(c_{1}+c_{2}\right)X_{t}}\,dW_{t} \] which can be seen as a degenerate case of a Feller square-root diffusion \citep{feller52}. For reaction networks of reasonable size and complexity, the CLE will be intractable. To explore the effect of working with a numerical approximation of the CLE inside the bridge particle filter, we adopt the Euler-Maruyama approximation which gives (for a fixed initial condition $x_0$) an approximation to the transition density as \[ X_{t}|X_{0}=x_{0}\sim \textrm{N}\left(x_{0}+\left(c_{1}-c_{2}\right)x_{0}\,t\,,\,\left(c_{1}+c_{2}\right)x_{0}\,t\right). \] The ODE system governing the LNA with initial conditions $z_{0}=x_{0}$, $m_{0}=0$ and $V_{0}=0$ can be solved analytically to give \[ X_{t}|X_{0}=x_{0}\sim \textrm{N}\left(x_{0} e^{(c_{1}-c_{2})t}\,,\,x_{0}\frac{(c_{1}+c_{2})}{(c_{1}-c_{2})} e^{(c_{1}-c_{2})t} \left[e^{(c_{1}-c_{2})t} - 1\right]\right). \] We consider an example in which $c=(0.5,1)$ and $x_{0}=100$ are fixed. To provide a challenging scenario we took $x_{t}$ to be the upper 99\% quantile of $X_{t}|X_{0}=100$. To assess the performance of each algorithm as observations become increasingly sparse in time, we took $t\in\{0.1,0.5,1\}$. Algorithms~\ref{mimp} (denoted MIS), \ref{impCondMJP} (denoted CH) and \ref{bridgePF} (denoted BPF-CLE or BPF-LNA) were run with $N\in\{10,50,100,500\}$ to give a set of $m=5000$ estimates of the transition probability $\pi(x_{t}|x_{0})$ and we denote this set by $\widehat{\pi}^{1:m}_{N}(x_{t}|x_{0})$. The bridge particle filter also requires specification of the intermediate time points at which resampling could take place. 
For simplicity, we took an equispaced partition of $[0,t]$ with a time step of 0.02 for $t=0.1$, and $0.05$ for $t\in\{0.5,1\}$. We found that these gave a good balance between statistical efficiency and CPU time. \begin{table}[t] \centering \begin{tabular}{|l|c|c|c|c|} \hline Method & $N$ & $t=0.1$ & $t=0.5$ & $t=1$ \\ \hline MIS & 10 & 300, 293, 6.2$\times 10^{-4}$ &171, 168, 3.5$\times 10^{-4}$ &151, 149, 3.0$\times 10^{-4}$ \\ & 50 &1340, 1190, 1.2$\times 10^{-4}$ &827, 773, 7.0$\times 10^{-5}$ &682, 639, 5.8$\times 10^{-5}$ \\ & 100 &2331, 1921, 6.4$\times 10^{-5}$ &1488, 1308, 3.5$\times 10^{-5}$ &1364, 1203, 3.2$\times 10^{-5}$ \\ & 500 &4776, 3771, 1.2$\times 10^{-5}$ &4196, 3230, 6.8$\times 10^{-6}$ &3901, 3004, 6.1$\times 10^{-6}$ \\ \hline CH & 10 &4974, 3264, 1.6$\times 10^{-5}$ &4985, 2998, 7.8$\times 10^{-6}$ &4990, 3581, 2.4$\times 10^{-6}$ \\ & 50 &5000, 4395, 4.6$\times 10^{-6}$ &5000, 4546, 1.2$\times 10^{-6}$ &5000, 4508, 9.7$\times 10^{-7}$ \\ & 100 &5000, 4689, 2.4$\times 10^{-6}$ &5000, 4668, 8.5$\times 10^{-7}$ &5000, 4798, 3.8$\times 10^{-7}$ \\ & 500 &5000, 4921, 7.7$\times 10^{-7}$ &5000, 4943, 1.6$\times 10^{-7}$ &5000, 4939, 1.2$\times 10^{-7}$ \\ \hline BPF-CLE & 10 &2581, 349, 5.7$\times 10^{-4}$ &2412, 556, 7.7$\times 10^{-5}$ &2745, 532, 1.7$\times 10^{-5}$ \\ & 50 &4982, 2137, 6.3$\times 10^{-5}$ &4920, 3391, 4.9$\times 10^{-6}$ &4925, 3236, 4.0$\times 10^{-6}$ \\ & 100 &5000, 3519, 1.9$\times 10^{-5}$ &4998, 3979, 2.8$\times 10^{-6}$ &4999, 4106, 4.1$\times 10^{-6}$ \\ & 500 &5000, 3841, 1.5$\times 10^{-5}$ &5000, 4756, 6.7$\times 10^{-7}$ &5000, 4780, 3.2$\times 10^{-6}$ \\ \hline BPF-LNA & 10 &2634, 403, 4.3$\times 10^{-4}$ &2514, 636, 6.9$\times 10^{-5}$ &2843, 1102, 2.4$\times 10^{-5}$ \\ & 50 &4963, 2748, 3.2$\times 10^{-5}$ &4926, 3198, 6.0$\times 10^{-6}$ &4949, 3625, 2.8$\times 10^{-6}$ \\ & 100 &5000, 3612, 1.5$\times 10^{-5}$ &4998, 4055, 2.5$\times 10^{-6}$ &5000, 4016, 1.9$\times 10^{-6}$ \\ & 500 &5000, 3643, 1.4$\times 
10^{-5}$ &5000, 4655, 8.8$\times 10^{-7}$ &5000, 4771, 5.4$\times 10^{-7}$ \\ \hline \end{tabular} \caption{$\sum_{i=1}^{m}I(\widehat{\pi}_{N}(x_{t}|x_{0})>0)$, $\textrm{ESS}(\widehat{\pi}^{1:m}_{N}(x_{t}|x_{0}))$ and $\textrm{MSE}(\widehat{\pi}^{1:m}_{N}(x_{t}|x_{0}))$, based on 5000 runs of MIS, CH, BPF-CLE and BPF-LNA. For MIS, the expected number of non-zero estimates (as obtained analytically) is reported. In all cases, $x_{0}=100$ and $x_{t}$ is the upper 99\% quantile of $X_{t}|X_{0}=100$.}\label{tab:tabBD} \end{table} To compare the algorithms, we report the number of non-zero normalising constant estimates $\sum_{i=1}^{m}I(\widehat{\pi}_{N}(x_{t}|x_{0})>0)$, the effective sample size $\textrm{ESS}(\widehat{\pi}^{1:m}_{N}(x_{t}|x_{0}))$ whose form is defined in (\ref{ess}) and mean-squared error $\textrm{MSE}(\widehat{\pi}^{1:m}_{N}(x_{t}|x_{0}))$ given by \[ \textrm{MSE}(\widehat{\pi}^{1:m}_{N}(x_{t}|x_{0}))=\frac{1}{m}\sum_{i=1}^{m} \left[\widehat{\pi}^{i}_{N}(x_{t}|x_{0})-\pi(x_{t}|x_{0})\right]^{2} \] where $\pi(x_{t}|x_{0})$ can be obtained analytically \citep{bailey1964}. The results are summarised in Table~\ref{tab:tabBD}. Use of the conditioned hazard and bridge particle filters (CH, BPF-CLE and BPF-LNA) comprehensively outperforms the myopic importance sampler (MIS). For example, for the $t=1$ case, an order of magnitude improvement is observed when comparing BPF (CLE or LNA) with MIS in terms of mean squared error. We see a reduction in mean squared error of two orders of magnitude when comparing MIS with CH, across all experiments, and performance (across all metrics) of MIS with $N=500$ is comparable with the performance of CH when $N=10$. BPF-LNA generally outperforms BPF-CLE, although the difference is small. Running the BPF schemes generally requires twice as much computational effort as MIS, whereas CH is roughly three times slower than MIS. Even when this additional cost is taken into account, MIS cannot be recommended in this example. 
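These summaries are straightforward to compute from the $m$ replicate estimates. A Python sketch (our own helper), assuming the standard weighted-sample form $\textrm{ESS}=(\sum_{i}\widehat{\pi}^{i})^{2}/\sum_{i}(\widehat{\pi}^{i})^{2}$ for the effective sample size, since (\ref{ess}) itself is defined earlier in the paper:

```python
import numpy as np

def summarise_estimates(est, truth):
    """Given m replicate estimates of a transition probability and its
    true value, return the number of non-zero estimates, the effective
    sample size (sum e)^2 / sum(e^2), and the mean-squared error."""
    e = np.asarray(est, dtype=float)
    n_nonzero = int((e > 0).sum())
    ess = e.sum() ** 2 / (e ** 2).sum()
    mse = float(np.mean((e - truth) ** 2))
    return n_nonzero, ess, mse
```

With this convention a set of identical non-zero estimates attains the maximal ESS of $m$, while a single non-zero estimate gives an ESS of 1, matching the qualitative pattern of the tables.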
Naturally, the performance of BPF will depend on the accuracy of the normal approximations used by the CLE and LNA. In particular, we expect these approximations to be unsatisfactory when species numbers are low. Moreover, when the conditioned jump process exhibits nonlinear dynamics, we expect the Euler approximation to be particularly poor. We therefore repeated the experiments of Table~\ref{tab:tabBD} with $N=500$, $x_{0}=10$, and $x_{t}$ as the lower 1\% quantile of $X_{t}|X_{0}=10$. Results are reported in Table~\ref{tab:tabBD2}. We see that in this case, MIS outperforms BPF and the performance of BPF-CLE worsens as $t$ increases, suggesting that a single step of the Euler approximation is unsatisfactory for $t>0.1$. Use of the conditioned hazard on the other hand appears fairly robust to different choices of $x_{0}$, $x_{t}$ and $t$. \begin{table}[t] \centering \begin{tabular}{|l|c|c|c|} \hline Method & $t=0.1$ & $t=0.5$ & $t=1$ \\ \hline MIS &5000, 4747, 7.3$\times 10^{-5}$ &4998, 4426, 3.1$\times 10^{-5}$ &4999, 4500, 3.7$\times 10^{-5}$ \\ CH &5000, 4979, 8.7$\times 10^{-6}$ &5000, 4963, 2.3$\times 10^{-6}$ &5000, 4965, 2.58$\times 10^{-6}$ \\ BPF-CLE &5000, 4131, 3.9$\times 10^{-4}$ &5000, 3013, 1.4$\times 10^{-3}$ &5000, 3478, 1.8$\times 10^{-3}$ \\ BPF-LNA &5000, 3946, 3.6$\times 10^{-4}$ &5000, 3667, 1.4$\times 10^{-4}$ &5000, 3639, 1.3$\times 10^{-4}$ \\ \hline \end{tabular} \caption{$\sum_{i=1}^{m}I(\widehat{\pi}_{N}(x_{t}|x_{0})>0)$, $\textrm{ESS}(\widehat{\pi}^{1:m}_{N}(x_{t}|x_{0}))$ and $\textrm{MSE}(\widehat{\pi}^{1:m}_{N}(x_{t}|x_{0}))$, based on 5000 runs of MIS, CH, BPF-CLE and BPF-LNA. For MIS, the expected number of non-zero estimates (as obtained analytically) is reported. 
In all cases, $N=500$, $x_{0}=10$ and $x_{t}$ is the lower 1\% quantile of $X_{t}|X_{0}=10$.}\label{tab:tabBD2} \end{table} \subsection{Lotka-Volterra} \label{sec:lvmodel} We consider a simple model of predator and prey interaction comprising three reactions: \[ \mathcal{R}_{1}:\, \mathcal{X}_{1} \longrightarrow 2\mathcal{X}_{1},\quad \mathcal{R}_{2}:\, \mathcal{X}_{1}+\mathcal{X}_{2} \longrightarrow 2\mathcal{X}_{2},\quad \mathcal{R}_{3}:\, \mathcal{X}_{2} \longrightarrow \emptyset. \] Denote the current state of the system by $X=(X_{1},X_{2})'$ where we have dropped dependence of the state on $t$ for notational simplicity. The stoichiometry matrix is given by \[ S = \left(\begin{array}{rrr} 1 & -1 & 0\\ 0 & 1 & -1 \end{array}\right) \] and the associated hazard function is \[ h(X,c) = (c_{1}X_{1}, c_{2}X_{1}X_{2}, c_{3}X_{2})'. \] We consider three synthetic datasets consisting of 51 observations at integer times on prey and predator levels generated from the stochastic kinetic model using Gillespie's direct method and corrupted with zero mean Gaussian noise. The observation equation (\ref{obs_eq}) is therefore \[ Y_{t} = X_{t}+\varepsilon_{t}, \] where $X_{t}=(X_{1,t},X_{2,t})'$ and $\varepsilon_{t}\sim\textrm{N}(0,\sigma^{2}I_{2})$, with $I_{2}$ the $2\times 2$ identity matrix. We took $\sigma=10$ to construct the first dataset ($\mathcal{D}_{1}$), $\sigma=5$ to construct the second ($\mathcal{D}_{2}$) and $\sigma=1$ to give the third synthetic dataset ($\mathcal{D}_{3}$). In all cases we assumed $\sigma^{2}$ to be known. True values of the rate constants $(c_{1},c_{2},c_{3})'$ were taken to be 0.5, 0.0025, and 0.3 following \cite{BWK08}. We took the initial latent state as $x_{0}=(71,79)'$, assumed known for simplicity. Independent proper Uniform $U(-8,8)$ priors were ascribed to each $\log(c_i)$, denoted by $\theta_{i}$, $i=1,2,3$, and we let $\theta=(\theta_{1},\theta_{2},\theta_{3})'$ be the quantity for which inferences are to be made. 
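The synthetic-data construction just described can be sketched as follows (a Python illustration with our own function names; the experiments in the paper are coded in C). A guard against unbounded prey growth after predator extinction, which is not part of the model, is included purely so that the sketch always terminates:

```python
import numpy as np

def gillespie_lv(x, c, t_end, rng):
    """Advance the Lotka-Volterra network from state x = (prey, predator)
    to time t_end using Gillespie's direct method."""
    x1, x2 = x
    t = 0.0
    effects = [(1, 0), (-1, 1), (0, -1)]    # net effect of R1, R2, R3 (cols of S)
    while True:
        if x1 > 10000:                      # safety guard: prey blow-up after
            return x1, x2                   # predator extinction (not in model)
        h = np.array([c[0] * x1, c[1] * x1 * x2, c[2] * x2])
        h0 = h.sum()
        if h0 == 0.0:                       # both species extinct
            return x1, x2
        t += rng.exponential(1.0 / h0)      # waiting time to next event
        if t > t_end:
            return x1, x2
        j = rng.choice(3, p=h / h0)         # pick reaction R_{j+1}
        x1 += effects[j][0]
        x2 += effects[j][1]

def synthetic_dataset(c=(0.5, 0.0025, 0.3), x0=(71, 79), sigma=10.0,
                      n_obs=51, rng=None):
    """n_obs noisy observations at integer times: y_t = x_t + N(0, sigma^2 I)."""
    rng = rng or np.random.default_rng()
    x, data = x0, []
    for _ in range(n_obs):
        data.append(np.array(x) + sigma * rng.standard_normal(2))
        x = gillespie_lv(x, c, 1.0, rng)    # unit inter-observation time
    return np.array(data)
```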
For brevity, we refer to the likelihood-free PMMH scheme (based on forward simulation only) as PMMH-LF, and the scheme based on the conditioned hazard proposal mechanism as PMMH-CH. As the ODEs governing the LNA solution are intractable, we focus on the CLE implementation of the bridge particle filter and refer to this scheme as PMMH-BPF. A pilot run of PMMH-LF was performed for each dataset to give an estimate of the posterior variance $\widehat{\textrm{Var}}(\theta)$, posterior median and 3 additional sampled $\theta$ values. We denote the variance of the noise in the log posterior by $\tau^{2}$ and chose the number of particles $N$ for each scheme so that $\tau^{2}\approx 2$ at the estimated posterior median and $\tau^{2}<4$ at the remaining sampled $\theta$ values (where possible). We updated $\theta$ using a Gaussian random walk with an innovation variance given by $\lambda \widehat{\textrm{Var}}(\theta)$, with the scaling parameter $\lambda$ optimised empirically, using minimum effective sample size (ESS$_{\textrm{min}}$) over each parameter chain. PMMH-BPF requires specification of a set of intermediate times at which resampling could be triggered. We found that resampling every 0.2 time units worked well. We also found that tempering the CLE approximation by raising each contribution $q(y_{t}|x_{t_{k}})$ to the power $\gamma$ performed better than using the CLE approximation directly (with $\gamma=1$). We took $\gamma=0.5, 0.2$ and $0.1$ for each dataset $\mathcal{D}_{1}$, $\mathcal{D}_{2}$ and $\mathcal{D}_{3}$ respectively. All schemes were run for $10^{5}$ iterations, except for PMMH-LF when using dataset $\mathcal{D}_{1}$, whose computational cost necessitated a shorter run of 50,000 iterations. All algorithms are coded in C and run on a desktop computer with a 3.4GHz clock speed. \begin{table}[t] \begin{center} \begin{tabular}{|l|cccccc|} \hline & $N$ & $\tau^{2}$ & Acc. 
rate& ESS($\theta_1,\theta_2,\theta_3$) & Time (s) & ESS$_{\textrm{min}}/s$\\ \hline & \multicolumn{6}{|c|}{$\mathcal{D}_{1}$ ($\sigma=10$)} \\ PMMH-LF &230 &2.0 &0.15 &(3471, 3465, 3760) &17661 & 0.196 \\ PMMH-CH &50 &2.1 &0.14 &(3178, 3153, 3095) &18773 & 0.165 \\ PMMH-BPF &220 &2.0 &0.16 &(3215, 2994, 3121) &27874 &0.107 \\ \hline & \multicolumn{6}{|c|}{$\mathcal{D}_{2}$ ($\sigma=5$)} \\ PMMH-LF &440 &2.0 &0.15 &(3482, 3845, 3784) &33808 &0.103 \\ PMMH-CH &35 &2.0 &0.15 &(3581, 3210, 3204) &13341 &0.240 \\ PMMH-BPF &250 &1.9 &0.17 &(3779, 3887, 4110) &33436 &0.113 \\ \hline & \multicolumn{6}{|c|}{$\mathcal{D}_{3}$ ($\sigma=1$)} \\ PMMH-LF &25000 &1.9 & 0.18 &(2503, 2746, 2472) & 1277834 &0.00193 \\ PMMH-CH &55 &1.9 &0.14 &(2861, 2720, 2844) &22910 &0.118 \\ PMMH-BPF &3000 &1.8 &0.18 &(3732, 3990, 4221) &290000 &0.0129 \\ \hline \end{tabular} \caption{Lotka-Volterra model. Number of particles $N$, variance of the noise in the log-posterior ($\tau^{2}$) at the posterior median, acceptance rate, effective sample size (ESS) of each parameter chain and wall clock time in seconds and minimum (over each parameter chain) ESS per second.}\label{tab:tabLV} \end{center} \end{table} \begin{figure}[t] \centering \psfrag{thet1}[][][0.8][0]{$\theta_1$} \psfrag{thet2}[][][0.8][0]{$\theta_2$} \psfrag{thet3}[][][0.8][0]{$\theta_3$} \includegraphics[width=5cm,height=15cm,angle=-90]{densLV} \caption{Lotka-Volterra model. Marginal posterior distributions based on synthetic data generated using $\sigma^{2}=10$ (solid), $\sigma^{2}=5$ (dashed) and $\sigma^{2}=1$ (dotted). Values of each $\theta_{i}$ that produced the data are indicated.} \label{fig:margpost} \end{figure} Figure~\ref{fig:margpost} shows the marginal posterior distributions for each dataset and Table~\ref{tab:tabLV} summarises the overall efficiency of each PMMH scheme. When using PMMH-CH, relatively few particles are required (ranging from 35--55) even as noise in the observation process reduces. 
Although PMMH-BPF required fewer particles than PMMH-LF, as $\sigma$ is reduced, increasing numbers of particles are required by both schemes to optimise overall efficiency. We measure overall efficiency by comparing minimum effective sample size scaled by wall clock time (ESS$_{\textrm{min}}/s$). When using $\mathcal{D}_{1}$ ($\sigma=10$), there is little difference in overall efficiency between each scheme although PMMH-LF is to be preferred. For dataset $\mathcal{D}_{2}$ ($\sigma=5$), PMMH-BPF and PMMH-LF give comparable performance whilst PMMH-CH outperforms PMMH-LF by a factor of 2.3. For $\mathcal{D}_{3}$ ($\sigma=1$) PMMH-CH and PMMH-BPF outperform PMMH-LF by factors of 61 and 6.7 respectively. Computational cost precluded the use of PMMH-LF on a dataset with $\sigma<1$; however, our experiments suggest that PMMH-CH can be successfully applied to synthetic data with $\sigma=0.1$ by using just $N=50$ particles. Finally, we note that PMMH-CH appears to outperform PMMH-BPF, and, whereas the latter requires choosing appropriate intermediate resampling times and a tempering parameter $\gamma$, PMMH-CH requires minimal tuning. Therefore, in the following example, we focus on the PMMH-CH scheme. \subsection{Motility regulation} \label{sec:motmodel} We consider here a simplified model of a key cellular decision made by the Gram-positive bacterium \emph{Bacillus subtilis} \citep{sonenshein2002}. This decision is whether or not to grow flagella and become motile \citep{kearns2005}. The \emph{B.\ subtilis}\ sigma factor $\sigma^D$ is key for the regulation of motility. Many of the genes and operons encoding motility-related proteins are governed by this $\sigma$ factor, and so understanding its regulation is key to understanding the motility decision. The gene for $\sigma^D$ is embedded in a large operon containing several other motility-related genes, known as the \emph{fla/che} operon. 
The \emph{fla/che} operon itself is under the control of another $\sigma$ factor, $\sigma^A$, but is also regulated by other proteins. In particular, transcription of the operon is strongly repressed by the protein \emph{CodY}, which is encoded upstream of \emph{fla/che}. \emph{CodY} inhibits transcription by binding to the \emph{fla/che} promoter. Since \emph{CodY} is upregulated in good nutrient conditions, this is thought to be a key mechanism for motility regulation. As previously mentioned, many motility-related genes are under the control of $\sigma^D$. For simplicity we focus here on one such gene, \emph{hag}, which encodes the protein \emph{flagellin} (or \emph{Hag}), the key building block of the flagella. It so happens that \emph{hag} is also directly repressed by \emph{CodY}. The regulation structure can be encoded as follows. \[ \begin{array}{ll} \mathcal{R}_{1}:\, \textrm{\sf codY} \longrightarrow \textrm{\sf codY}+\textrm{\sf CodY}, & \quad\mathcal{R}_{2}:\, \textrm{\sf CodY}\longrightarrow \emptyset,\\ \mathcal{R}_{3}:\, \textrm{\sf flache}\longrightarrow \textrm{\sf flache}+\textrm{\sf SigD},&\quad \mathcal{R}_{4}:\, \textrm{\sf SigD} \longrightarrow \emptyset,\\ \mathcal{R}_{5}:\, \textrm{\sf SigD\_hag} \longrightarrow \textrm{\sf SigD}+\textrm{\sf hag}+\textrm{\sf Hag} &\quad \mathcal{R}_{6}:\, \textrm{\sf Hag} \longrightarrow \emptyset,\\ \mathcal{R}_{7}:\, \textrm{\sf SigD}+\textrm{\sf hag} \longrightarrow \textrm{\sf SigD\_hag}, &\quad \mathcal{R}_{8}:\, \textrm{\sf SigD\_hag}\longrightarrow \textrm{\sf SigD}+\textrm{\sf hag},\\ \mathcal{R}_{9}:\, \textrm{\sf CodY}+\textrm{\sf flache}\longrightarrow \textrm{\sf CodY\_flache},&\quad \mathcal{R}_{10}:\, \textrm{\sf CodY\_flache} \longrightarrow \textrm{\sf CodY}+\textrm{\sf flache},\\ \mathcal{R}_{11}:\, \textrm{\sf CodY}+\textrm{\sf hag} \longrightarrow \textrm{\sf CodY\_hag} &\quad \mathcal{R}_{12}:\, \textrm{\sf CodY\_hag} \longrightarrow \textrm{\sf CodY}+\textrm{\sf hag}. 
\end{array} \] Following \cite{Wilkinson11}, we assume that three rate constants are uncertain, namely $c_{3}$ (governing the rate of production of $\textrm{\sf SigD}$), $c_{9}$ and $c_{10}$ (governing the rate at which $\textrm{\sf CodY}$ binds or unbinds to the $\textrm{\sf flache}$ promoter). Values of the rate constants are taken to be \[ c=(0.1,0.0002,1,0.0002,1.0,0.0002,0.01,0.1,0.02,0.1,0.01,0.1)' \] and initial values of $(\textrm{\sf codY},\textrm{\sf CodY},\textrm{\sf flache},\textrm{\sf SigD},\textrm{\sf SigD\_hag},\textrm{\sf hag},\textrm{\sf Hag},\textrm{\sf CodY\_flache},\textrm{\sf CodY\_hag})$ are \[ x_{0}=(1,10,1,10,1,1,10,1,1)'. \] Gillespie's direct method was used to simulate 3 synthetic datasets consisting of 51 observations on $\textrm{\sf SigD}$ only, with inter-observation times of $\Delta t=1,2,5$ time units. A full realisation from the motility model that was used to construct each dataset is shown in Figure~\ref{fig:motdat}. The assumed initial conditions and parameter choices give inherently discrete time series. To provide a challenging (but unrealistic) scenario for the PMMH-CH scheme we assume that error-free observations are available. We adopt independent proper Uniform priors on the log scale: \begin{align*} \log(c_{3})&\sim \textrm{U}\left(\log\{0.01\},\log\{100\}\right)\\ \log(c_{9})&\sim \textrm{U}\left(\log\{0.0002\},\log\{2\}\right)\\ \log(c_{10})&\sim \textrm{U}\left(\log\{0.001\},\log\{10\}\right) \end{align*} which cover two orders of magnitude either side of the ground truth. We ran PMMH-CH for $10^{5}$ iterations, after determining (from short pilot runs) suitable numbers of particles for each dataset, and a scaling $\lambda$ for use in the Gaussian random walk proposal kernel. 
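The proposal kernel is the same Gaussian random walk on the log-rates used for the Lotka-Volterra example; a minimal sketch (function name is ours), with $\lambda$ and $\widehat{\textrm{Var}}(\theta)$ supplied from pilot runs as described above:

```python
import numpy as np

def propose_log_rates(theta, lam, var_hat, rng):
    """One draw from the random-walk proposal theta* ~ N(theta, lam * V-hat),
    where theta = log(c); the kernel is symmetric, so its ratio cancels
    in the PMMH acceptance probability."""
    cov = lam * np.atleast_2d(var_hat)
    return rng.multivariate_normal(np.asarray(theta, dtype=float), cov)
```

Working on the log scale automatically keeps every proposed rate constant strictly positive.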
\begin{figure}[t] \centering \psfrag{t}[][][0.8][0]{$t$} \psfrag{sigD}[][][0.8][0]{$\textrm{\sf SigD}$} \psfrag{hag}[][][0.8][0]{$\textrm{\sf hag}$} \psfrag{Hag}[][][0.8][0]{$\textrm{\sf Hag}$} \psfrag{flache}[][][0.8][0]{$\textrm{\sf flache}$} \psfrag{cody}[][][0.8][0]{$\textrm{\sf codY}$} \psfrag{CodY}[][][0.8][0]{$\textrm{\sf CodY}$} \psfrag{sigDhag}[][][0.8][0]{$\textrm{\sf SigD\_hag}$} \psfrag{CodYhag}[][][0.8][0]{$\textrm{\sf CodY\_hag}$} \psfrag{CodYflache}[][][0.8][0]{$\textrm{\sf CodY\_flache}$} \includegraphics[width=4cm,height=15cm,angle=-90]{mot1} \includegraphics[width=4cm,height=15cm,angle=-90]{mot2} \includegraphics[width=4cm,height=15cm,angle=-90]{mot3} \caption{A typical realisation of the motility model.} \label{fig:motdat} \end{figure} \begin{figure}[t] \centering \psfrag{log(c3)}[][][0.8][0]{$\log(c_{3})$} \psfrag{log(c9)}[][][0.8][0]{$\log(c_{9})$} \psfrag{log(c10)}[][][0.8][0]{$\log(c_{10})$} \includegraphics[width=5cm,height=15cm,angle=-90]{densMOT} \caption{Motility regulation model. Marginal posterior distributions based on synthetic data with inter-observation times of $\Delta t=1$ (solid), $\Delta t=2$ (dashed) and $\Delta t=5$ (dotted). Values of each $\log(c_{i})$ that produced the data are indicated.} \label{fig:margpostMOT} \end{figure} \begin{table}[t] \begin{center} \begin{tabular}{|l|cccccc|} \hline & $N$ & $\tau^{2}$ & Acc. rate& ESS($\theta_1,\theta_2,\theta_3$) & Time (s) & ESS$_{\textrm{min}}/s$\\ \hline $\Delta t=1$ & 400 &1.99 &0.10 &(1635, 2156, 1625) &6933 &0.23 \\ $\Delta t=2$ & 600 &2.05 &0.10 &(1870, 1215, 1518) &6950 &0.17 \\ $\Delta t=5$ & 1200 &2.01 &0.06 &(797, 791, 673) &13628 &0.05 \\ \hline \end{tabular} \caption{Motility regulation model. 
Number of particles $N$, variance of the noise in the log-posterior ($\tau^{2}$) at the posterior median, acceptance rate, effective sample size (ESS) of each parameter chain and wall clock time in seconds and minimum (over each parameter chain) ESS per second.}\label{tab:tabMR} \end{center} \end{table} Figure~\ref{fig:margpostMOT} shows the marginal posterior distributions for each dataset. We see that despite observing levels of $\textrm{\sf SigD}$ only, sampled parameter values are consistent with the ground truth. Table~\ref{tab:tabMR} summarises the overall efficiency of PMMH-CH when applied to each dataset. We see that as the inter-observation time $\Delta t$ increases, larger numbers of particles are required to maintain a variance in log-posterior of around 2 at the estimated posterior median. Despite using increased particle numbers, statistical efficiency, as measured by effective sample size, appears to reduce as $\Delta t$ is increased. We observed that parameter chains were more likely to ``stick'' (and note the decreasing acceptance rate) leading to reduced ESS. This is not surprising given the assumptions used to derive the conditioned hazard, and we expect its performance to diminish as inter-observation time increases. \section{Discussion and conclusions} \label{sec:disc} This paper considered the problem of performing inference for the parameters governing Markov jump processes in the presence of informative observations. Whilst it is possible to construct particle MCMC schemes for such models given time course data that may be incomplete and subject to error, the simplest ``likelihood-free'' implementation is likely to be computationally intractable, except in high measurement error scenarios. To circumvent this issue, we have proposed a novel method for simulating from a conditioned jump process, by approximating the expected number of reactions between observation times to give a conditioned hazard. 
We find that a simple implementation of this approach, with exponential waiting times between proposed reaction events, works extremely well in a number of scenarios, and even in challenging multivariate settings. It should be noted, however, that the assumptions underpinning the construct are likely to be invalidated as inter-observation time increases. We compared this approach with a bridge particle filter adapted from \cite{delmoral14}. Implementation of this approach requires the ability to simulate from the model and access to an approximation of the (unavailable) transition probabilities. The overall efficiency of the scheme depends on the accuracy and computational cost of the approximation. Use of the LNA inside the bridge particle filter appears promising, although the requirement of solving a system of ODEs for each particle, whose dimension increases quadratically with the number of species, is likely to be a barrier to its successful application in high dimensional systems. Using a numerical approximation to the CLE offers a cheaper but less accurate alternative. The bridge particle filter (based on either the LNA or CLE) requires specification of appropriate intermediate resampling times and, when the approximations are likely to be light tailed relative to the jump process transition probability, a tempering parameter. Use of the conditioned hazard on the other hand requires minimal tuning. This approach was successfully applied to the problem of inferring the rate constants governing a Lotka-Volterra system and a simple model of motility regulation. Improvements to the proposed algorithms remain of interest and are the subject of ongoing research. For example, when using the bridge particle filter, it may be possible to specify a resampling regime dynamically, based on the expected time to the next reaction event, evaluated at, for example, the LNA mean. 
An exact implementation of the conditioned hazard approach and the potential improvement it may offer is also of interest, especially for systems with finite state space, which would permit a thinning approach \citep{lewis1979} to reaction event simulation. \bibliographystyle{apalike}
\section{Introduction} In the present paper we continue to study \textit{critical configurations} of six infinite nonintersecting right circular cylinders touching the unit sphere. We call a cylinder configuration \textit{critical} if for each small deformation $t$ that preserves the radii of the cylinders, either \begin{itemize} \item[(T1)] some cylinders start to intersect, or else \item[(T2)] the distances between all of them increase, but by no more than $$\sim\left\Vert t\right\Vert ^{2}\ ,$$ or stay zero for some of them. The norm $\left\Vert t\right\Vert $ is defined by formula $\left( \ref{norm}\right) $ below. \end{itemize} \noindent A critical configuration is called a \textsl{locally maximal} configuration if all its deformations are of the first type. Any other critical configuration is called a \textsl{saddle configuration}, and the deformations of type (T2) are then called the \textsl{unlocking} deformations. This definition of a critical configuration is close in spirit to the definition of a critical point of a Morse function, but is adapted to our case, where the function in question is a non-smooth minimax function; compare with Definition 4.6 in \cite{KKLS}. \vskip.1cm For example, let $C_{6}$ be the configuration of six nonintersecting cylinders of radius $1,$ parallel to the $z$ direction in $\mathbb{R}^{3}$ and touching the unit ball centered at the origin. One of the results of \cite{OS} is that the configuration $C_{6}$ is a saddle point configuration: there is a deformation of $C_{6}$ along which the unit cylinders cease touching each other; thus, the configuration $C_{6}$ can be unlocked. We note here that the structure of the critical point $C_{6}$ is complicated; in particular, the distance function $D$ (the minimum of the distances between the cylinders) is not even continuous at $C_{6},$ and the limits $\lim_{\mathbf{m}\rightarrow C_{6}}D\left( \mathbf{m}\right) ,$ $\mathbf{m}\in M^{6}$, depend on the direction from which the point $C_{6}$ is approached. 
Here $M^{6}$ is the relevant configuration space, see the precise definition in Section \ref{sectConfManifold}. \vskip .1cm We have constructed in \cite{OS} the deformation $C_{6,x}$ of the configuration $C_6$. Moving along $C_{6,x}$, the common radius of the cylinders grows as $x$ decreases from $1$ to $1/2$ ($x=1$ corresponds to the initial configuration $C_6$). For $x=1/2$ we obtain the configuration $C_{\mathfrak{m}}$, see Figure \ref{confCm}, for which the radius reaches its maximum value $\frac{1}{8}\left( 3+\sqrt{33}\right)$. \begin{figure}[H] \centering \includegraphics[scale=0.24]{FigCMAXcyl2.jpg} \vspace{-.4cm} \caption{Configuration $C_{\mathfrak{m}}$} \label{confCm} \end{figure} \vskip .1cm In \cite{OS-C6} we have shown that the configuration $C_{\mathfrak{m}}$ is a sharp local maximum of the distance function. \vskip .1cm In the present paper we study the configuration $O_{6},$ comprised of the following six cylinders of radius one: \vspace{-.2cm} \begin{itemize} \item two cylinders are parallel to the $Oz$ axis and touch the sphere $\mathbb{S}^{2}$ at points $\left(\pm 1,0,0\right) $ on the $Ox$ axis; \vspace{-.3cm} \item two cylinders are parallel to the $Ox$ axis and touch the sphere $\mathbb{S}^{2}$ at points $\left( 0,\pm 1,0\right) $ on the $Oy$ axis; \vspace{-.3cm} \item two cylinders are parallel to the $Oy$ axis and touch the sphere $\mathbb{S}^{2}$ at points $\left( 0,0,\pm 1\right) $ on the $Oz$ axis. \end{itemize} The letter `O' in the name of the configuration refers, presumably, to the fact that the points at which the cylinders touch the sphere form the vertices of a regular octahedron. In a forthcoming publication \cite{OS-PC} we will give an interpretation of the configuration $O_6$ which rather relates it to the regular tetrahedron, and suggest a generalization for dual pairs of Platonic bodies. \vskip .1cm The configuration $O_6$ is centrally symmetric. 
There is a freedom in the definition of the configuration $O_6$: the two cylinders touching the sphere $\mathbb{S}^{2}$ at the points $\left(\pm 1,0,0\right) $ are parallel to the $z$-axis. Instead one can start with the two cylinders, touching the sphere $\mathbb{S}^{2}$ at the points $\left(\pm 1,0,0\right) $ but parallel to the $y$-axis; then add the two cylinders, touching the sphere $\mathbb{S}^{2}$ at the points $\left(0,\pm 1,0\right) $ but parallel to the $z$-axis, and the two cylinders, touching the sphere $\mathbb{S}^{2}$ at the points $\left(0,0,\pm 1\right) $ but parallel to the $x$-axis. However, this configuration is obtained from $O_6$ by a rotation around any of the coordinate axes through the angle $\pi/2$ or $-\pi/2$. \vskip .1cm The configuration $O_6$ of cylinders is shown on Figure \ref{octahedrConfCyl} (the green unit ball is in the center). \begin{figure}[H] \vspace{-.8cm} \centering \includegraphics[scale=0.5]{O6Cylinders.jpeg} \vspace{-1cm} \caption{Configuration $O_6$ of cylinders} \label{octahedrConfCyl} \end{figure} \vskip .1cm To visualize configurations of cylinders it is convenient to replace each cylinder by its unique generator (a line parallel to the axis of the cylinder) touching the sphere $\mathbb{S}^{2}$. We define the value of the distance function on a configuration to be the minimum of the distances between these tangent lines. \vskip .1cm The configuration $O_6$ of tangent lines is shown on Figure \ref{octahedrConf}. \begin{figure}[th] \vspace{-.4cm} \centering \includegraphics[scale=0.7]{OctahedralConfiguration.jpg} \vspace{-1cm} \caption{Configuration $O_6$ of tangent lines} \label{octahedrConf} \end{figure} \vskip .1cm \noindent About the configuration $O_6$ W. Kuperberg asked \cite{K} whether it can be unlocked, i.e.\ whether one can deform it in such a way that all the distances between the cylinders become positive.
Our result is that the configuration $O_{6}$ is a sharp local maximum of the distance function $D$, and therefore is not unlockable. Moreover, it is rigid, that is, any continuous deformation which does not increase the radii of the cylinders reduces to a global rotation of the three-dimensional space. \vskip .1cm In the process of the proof we will, in particular, show that the 15-dimensional tangent space to $M^{6}\,\operatorname{mod}SO\left( 3\right) $ at $O_{6}$ contains a 6-dimensional subspace along which the function $D\left( \mathbf{m}\right)$ decays quadratically, while along any other tangent direction it decays linearly. \vskip .1cm As in the case of the configuration $C_{\mathfrak{m}}$, it turns out that it is sufficient to study the variations of the distances up to the second order. \vskip .1cm For the configuration $O_6$ we distinguish the twelve distances between the cylinders which are not parallel. Let $\widetilde{D}\left( \mathbf{m}\right)$ be the minimum of these twelve distances. We prove that the configuration $O_{6}$ is a sharp local maximum already of the function $\widetilde{D}$. \vskip .1cm We first show that there are three convex linear dependencies $\lambda_a$, $a=1,2,3$, between the differentials of the twelve distances. We thus have a six-dimensional linear subspace $E$ of the tangent space on which all twelve differentials vanish. \vskip .1cm It so happens that in our coordinates the groups of coordinates entering these linear combinations are disjoint. \vskip .1cm For the configuration $C_{\mathfrak{m}}$ there is one convex linear dependency between the differentials, see \cite{OS-C6}, and the restriction of the same linear combination $Q$ of the second differentials to the subspace on which the differentials vanish is negatively defined. We have shown in \cite{OS-C6} that these conditions are sufficient for the local maximality.
In the present paper we prove a generalization (of the above result for the configuration $C_{\mathfrak{m}}$) which allows us to make a conclusion about the local maximality of the configuration $O_6$. In this modification the negativity of the form $Q$ is replaced by the non-existence of non-trivial solutions of the system of three inequalities $Q_a>0$, $a=1,2,3$, where $Q_a=\lambda_a(Q_1,\dots,Q_{12})\vert_E$. If there existed a convex linear combination of the forms $Q_a$, $a=1,2,3$, with negatively defined restriction on $E$, we could simply refer to the assertion made in \cite{OS-C6}. However, this is not the case, see Section \ref{threeforms}. \vskip .1cm Let $\mathcal{Z}$ be the configuration space of six unit cylinders touching a unit ball. On his mathoverflow page \cite{K2} W. Kuperberg asks, among other questions: \begin{itemize} \item Is the space $\mathcal{Z}$ connected? \item In particular, within $\mathcal{Z}$, is a continuous transition between the configurations $C_6$ and $O_6$ possible? \end{itemize} In the process of our proof of the local maximality of the configuration $O_6$, we establish that the configuration $O_6$ is an isolated point in the space $\mathcal{Z}$ mod $SO(3)$, which implies a negative answer to these questions, see Corollary \ref{cornonconne}, Subsection \ref{configO6}. So a modified question arises: \begin{itemize} \item How many components does the space $\mathcal{Z}$ have? \end{itemize} The configuration $C_{\mathfrak{m}}$, in contrast to the configuration $O_6$, is not mirror symmetric. We make several conjectures concerning the components of the space $\mathcal{Z}$ mod $O(3)$ and mod $SO(3)$. \vskip .1cm The paper is organized as follows. In the next section we recall the notation concerning the manifold $M^{6}$, formulate the maximality result and discuss the connected components of the space $\mathcal{Z}$. Section \ref{maxconfO6} contains the calculation of the necessary differentials.
In Section \ref{genforms} we establish analytic results needed for the proofs of the local maximality of the configuration $O_6$. \section{Preliminaries} A cylinder $\varsigma$ touching the unit sphere $\mathbb{S}^{2}$ has a unique generator (a line parallel to the axis of the cylinder) $\iota(\varsigma)$ touching $\mathbb{S}^{2}$. We will usually represent a configuration $\{\varsigma _{1},\dots,\varsigma_{L}\}$ of cylinders touching the unit sphere by the configuration $\{\iota(\varsigma_{1}),\dots,\iota(\varsigma_{L})\}$ of tangent to $\mathbb{S}^{2}$ lines. The manifold of all such six-tuples will be denoted by $M^{6}.$ \vskip .1cm Let $\varsigma^{\prime},\varsigma^{\prime\prime}$ be two equal cylinders of radius $r$ touching $\mathbb{S}^{2},$ which also touch each other, while $\iota^{\prime},\iota^{\prime\prime}$ are the corresponding tangents to $\mathbb{S}^{2}.$ If $d=d_{\iota^{\prime}\iota^{\prime\prime}}$ is the distance between $\iota^{\prime},\iota^{\prime\prime}$ then we have \[ r=\frac{d}{2-d}\ .\] The study of the manifold of six-tuples of cylinders of equal radii, some of which are touching, is thus equivalent to the study of the manifold $M^{6}$ and the function $D$ on it, defined by \[ D\left( \iota_{1},...,\iota_{6}\right) =\min_{1\leq i<j\leq6}d_{\iota _{i}\iota_{j}}\ .\] \subsection{Configuration manifold} \label{sectConfManifold} Here we collect the notation of \cite{OS}. \vskip.2cm Let $\mathbb{S}^{2}\subset\mathbb{R}^{3}$ be the unit sphere, centered at the origin. For every $x\in\mathbb{S}^{2}$ we denote by $TL_{x}$ the set of all (unoriented) tangent lines to $\mathbb{S}^{2}$ at $x.$ We denote by $M$ the manifold of tangent lines to $\mathbb{S}^{2}.$ We represent a point in $M$ by a pair $\left( x,\xi\right) $, where $\xi$ is a unit tangent vector to $\mathbb{S}^{2}$ at $x,$ though such a pair is not unique: the pair $\left( x,-\xi\right) $ is the same point in $M.$ \vskip .1cm We shall use the following coordinates on $M$.
Let $\mathbf{x,y,z}$ be the standard coordinate axes in $\mathbb{R}^{3}$. Let $R_{\mathbf{x}}^{\alpha}$, $R_{\mathbf{y} }^{\alpha}$ and $R_{\mathbf{z}}^{\alpha}$ be the counterclockwise rotations about these axes by an angle $\alpha$, viewed from the tips of axes. We call the point $\mathsf{N}=\left( 0,0,1\right) $ the North pole, and $\mathsf{S}=\left( 0,0,-1\right) $ -- the South pole. By \textit{meridians} we mean geodesics on $\mathbb{S}^{2}$ joining the North pole to the South pole. The meridian in the plane $\mathbf{xz}$ with positive $\mathbf{x}$ coordinates will be called Greenwich. The angle $\varphi$ will denote the latitude on $\mathbb{S}^{2},$ $\varphi\in\left[ -\frac{\pi}{2},\frac{\pi} {2}\right] ,$ and the angle $\varkappa\in\lbrack0,2\pi)$ -- the longitude, so that Greenwich corresponds to $\varkappa=0.$ Every point $x\in\mathbb{S}^{2}$ can be written as $x=\left( \varphi_{x},\varkappa_{x}\right)$. \vskip .1cm Finally, for each $x\in\mathbb{S}^{2}$, we denote by $R_{x}^{\alpha}$ the rotation by the angle $\alpha$ about the axis joining $\left( 0,0,0\right) $ to $x,$ counterclockwise if viewed from its tip, and by $\left( x,\uparrow\right) $ we denote the pair $\left( x,\xi_{x}\right) ,$ $x\neq\mathsf{N,S,}$ where the vector $\xi_{x}$ points to the North. We also abbreviate the notation $\left( x,R_{x}^{\alpha}\uparrow\right) $ to $\left( x,\uparrow_{\alpha }\right) $. \vskip.2cm Let $u=\left( x^{\prime},\xi^{\prime}\right) ,$ $v=\left( x^{\prime\prime},\xi^{\prime\prime}\right) $ be two lines in $M$. We denote by $d_{uv}$ the distance between $u$ and $v$; clearly $d_{uv}=0$ iff $u\cap v\neq\varnothing.$ If the lines $u,v$ are not parallel then the square of $d_{uv}$ is given by the formula \begin{equation}\label{formdist} d_{uv}^{2}=\frac{\det^{2}[\xi^{\prime},\xi^{\prime\prime},x^{\prime\prime }-x^{\prime}]}{1-(\xi^{\prime},\xi^{\prime\prime})^{2}}\ ,\end{equation} where $(\ast,\ast)$ is the scalar product. 
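Formula (\ref{formdist}) can be checked numerically. The following Python sketch (an illustration, not part of the paper) implements it for lines given by a point and a unit direction vector, and tests it on a pair of skew lines lying in the horizontal planes $z=0$ and $z=1$, whose distance is clearly $1$:

```python
# Numerical check of the distance formula for two non-parallel lines
# u = (x1, xi1), v = (x2, xi2): d^2 = det[xi1, xi2, x2 - x1]^2 / (1 - <xi1, xi2>^2).
# The direction vectors are assumed to be unit vectors.

def det3(a, b, c):
    """Determinant of the 3x3 matrix with rows a, b, c."""
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

def dist2(x1, xi1, x2, xi2):
    """Squared distance between the non-parallel lines x1 + t*xi1 and x2 + s*xi2."""
    w = tuple(p - q for p, q in zip(x2, x1))
    dot = sum(p * q for p, q in zip(xi1, xi2))
    return det3(xi1, xi2, w) ** 2 / (1.0 - dot ** 2)

# The x-axis and the line through (0, 0, 1) in direction (1, 1, 0)/sqrt(2):
# the lines lie in the planes z = 0 and z = 1, so their distance equals 1.
s = 2 ** -0.5
d2 = dist2((0, 0, 0), (1, 0, 0), (0, 0, 1), (s, s, 0))
assert abs(d2 - 1.0) < 1e-12
```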
\vskip.2cm We are studying the critical points of the function \[ D\left( \mathbf{m}\right) =\min_{1\leq i<j\leq N}d_{u_{i}u_{j}}\ ,\] on the manifold $M^{N}$ of $N$-tuples \begin{equation} \mathbf{m}=\left\{ u_{1},...,u_{N}\,:\, u_{i}\in M\, ,\, i=1,...,N\right\} .\end{equation} The norm which we mentioned in the Introduction is defined by \begin{equation} \left\Vert u-v\right\Vert =\left\Vert x^{\prime}-x^{\prime\prime}\right\Vert +\min\left\{ \left\Vert \xi^{\prime}-\xi^{\prime\prime}\right\Vert ,\left\Vert \xi^{\prime}+\xi^{\prime\prime}\right\Vert \right\} .\label{norm} \end{equation} \subsection{Configuration $O_{6}$}\label{configO6} We denote by $e_i$ the standard orthonormal basis in $\mathbb{R}^3$, $$e_1\equiv e_x=(1,0,0)\ ,\ e_2\equiv e_y=(0,1,0)\ ,\ e_3\equiv e_z=(0,0,1)\ .$$ Let $\varrho$ be the rotation of order 3, which cyclically permutes the vectors $e_i$, $$\varrho\colon e_1\mapsto e_2\mapsto e_3\mapsto e_1\ ,$$ $\mathcal{I}$ the central reflection, $$\mathcal{I}v=-v\ ,\ v\in \mathbb{R}^3\ ,$$ and $r$ the rotation around the axis $Ox$ by the angle $\pi$, $$r\colon e_1\mapsto e_1\ ,\ e_2\mapsto -e_2\ ,\ e_3\mapsto -e_3\ .$$ The maps $\varrho$, $\mathcal{I}$ and $r$ generate the group $\mathbb{A}_4\times C_2$ where $\mathbb{A}_4$ is the alternating group on four letters, generated by $\varrho$ and $r$, and $C_2$ is the cyclic group of order 2, generated by $\mathcal{I}$. \vskip .1cm Let $\ell_{1}^+$ be the line in the direction $e_3$ touching the unit sphere at the point $e_1$ and let $\ell_{2}^+=\varrho\ell_{1}^+$, $\ell_{3}^+=\varrho^2\ell_{1}^+$. The images of the lines $\ell_j^+$, $j=1,2,3$, under the central reflection $\mathcal{I}$ will be denoted by $\ell_j^-$, $$\ell_j^-=\mathcal{I}\ell_j^+\ ,\ j=1,2,3\ .$$ The six lines $\ell_j^+$, $\ell_j^-$, $j=1,2,3$, form the configuration $O_6$. The symmetry group of the configuration $O_6$ is $\mathbb{A}_4\times C_2$. \vskip .1cm Let $O_6(t)$ be a deformation of the configuration $O_6$.
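As an independent sanity check (a Python sketch, not part of the paper), one can build the six tangent lines explicitly and verify that all twelve distances between non-parallel lines equal $1$; by the relation $r=d/(2-d)$, the common distance $d=1$ corresponds to touching cylinders of radius $1$:

```python
# Build the six tangent lines of the configuration O_6 and check that the
# twelve distances between non-parallel lines are all equal to 1.
from itertools import combinations

def det3(a, b, c):
    return (a[0]*(b[1]*c[2] - b[2]*c[1])
            - a[1]*(b[0]*c[2] - b[2]*c[0])
            + a[2]*(b[0]*c[1] - b[1]*c[0]))

def dist(p1, v1, p2, v2):
    """Distance between two non-parallel lines, formula (formdist)."""
    w = tuple(a - b for a, b in zip(p2, p1))
    dot = sum(a*b for a, b in zip(v1, v2))
    return (det3(v1, v2, w)**2 / (1 - dot**2)) ** 0.5

def rho(v):
    """Cyclic rotation e1 -> e2 -> e3 -> e1, i.e. (x, y, z) -> (z, x, y)."""
    return (v[2], v[0], v[1])

# ell_1^+ passes through e1 in the direction e3; apply rho and the central reflection.
lines = []
p, v = (1, 0, 0), (0, 0, 1)
for _ in range(3):
    lines.append((p, v))                        # ell_j^+
    lines.append((tuple(-c for c in p), v))     # ell_j^- = I(ell_j^+), same direction
    p, v = rho(p), rho(v)

pairs = [(u, w) for u, w in combinations(lines, 2) if u[1] != w[1]]
assert len(pairs) == 12                         # the twelve non-parallel pairs
for (p1, v1), (p2, v2) in pairs:
    assert abs(dist(p1, v1, p2, v2) - 1.0) < 1e-12
```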
We have fifteen pairwise distances between the lines in $O_6(t)$. There are three distances $d_{\ell_j^+(t),\ell_j^-(t)}$, $j=1,2,3$, which do not have a well defined limit when $t\to 0$ because the lines $\ell_j^+$ and $\ell_j^-$ are parallel. The remaining twelve distances do have a well defined limit, equal to 1, when $t\to 0$. We shall first study these twelve distances. More generally, let $\mathbf{m}$ be a point in a small enough neighborhood of $O_{6}$ and let $\widetilde{\ell}_j^+$, $\widetilde{\ell}_j^-$, $j=1,2,3$ be the positions of perturbed lines $\ell_j^+$, $\ell_j^-$, $j=1,2,3$. Let \begin{equation}\label{oprtid}\widetilde{D}(\mathbf{m}):=\min_{1\leq i<j\leq 3,\epsilon,\epsilon'=\pm} \left(d_{\widetilde{\ell}_i^\epsilon ,\widetilde{\ell}_j^{\epsilon' }} \right)\ .\end{equation} \begin{theorem}\label{localmaxO6} The configuration $O_{6}\ $is a point of a sharp local maximum of the function $\widetilde{D}$: for any point $\mathbf{m}$ in a vicinity of $O_{6}$ we have \[ \widetilde{D}\left( \mathbf{m}\right) <1=\widetilde{D}\left( O_{6}\right) \ .\] \end{theorem} We have $\widetilde{D}(O_6)=D(O_6)=1$ and $D(\mathbf{m})\leq \widetilde{D}(\mathbf{m})$. This implies that the configuration $O_6$ is locally maximal. \begin{corollary} The configuration $O_{6}\ $is a point of a sharp local maximum of the function $D$. \end{corollary} In the process of the proof of Theorem \ref{localmaxO6}, we will see that there exists a 6-dimensional subspace $L_{quadr}$ in the tangent space to $M^{6}\,\operatorname{mod}SO\left( 3\right) $ at $O_{6}$, such that for any $l\in L_{quadr}$, $\left\Vert l\right\Vert =1$, we have \[ -c_u \left\Vert l \right\Vert t^{2}\leq \widetilde{D}\left( O_{6}+tl\right) -\widetilde{D}\left( O_{6}\right) \leq -c_d \left\Vert l \right\Vert t^{2} \] for $t$ small enough. 
Here $c_d$ and $c_u$ are some constants, $0<c_d \leq c_u <+\infty$, and $O_{6}+tl\in M^{6}\,\operatorname{mod}SO\left( 3\right) $ stands for the exponential map applied to the tangent vector $tl$. \vskip .1cm For the tangent vectors outside $L_{quadr}$ we have \[ -c_u^{\prime}\left( l\right) t\leq \widetilde{D}\left( O_{6}+tl\right) -\widetilde{D}\left( O_{6}\right) \leq -c_d^{\prime}\left( l\right) t,\] where now $c_d^{\prime}\left( l\right)$ and $c_u^{\prime}\left( l\right)$ are some positive valued functions of $l$, $0<c_d^{\prime}\left( l\right) \leq c_u^{\prime}\left( l\right)<+\infty$. \subsection{Connected components} In \cite{OS-C6} we have shown that the configuration $C_\mathfrak{m}$ is a local maximum. Combined with Theorem \ref{localmaxO6}, this leads to the following conclusion. \begin{corollary}\label{cornonconne} The configuration space $\mathcal{Z}$ of six unit cylinders touching a unit ball is not connected.\end{corollary} \begin{proof} Theorem \ref{localmaxO6} implies that the configuration $O_6$ is an isolated point in the space $\mathcal{Z}$ mod $SO(3)$. Hence the configurations $$\gamma(\varphi)=C_{6}\bigl(\varphi,\delta\left( \varphi\right) ,\varkappa\left( \varphi\right) \bigr)\ ,\ \varphi\in\left[ 0;\arcsin \left(\frac{\sqrt{5}}{4}\right)\right]\ ,$$ constructed in \cite{OS}, belong to another component of $\mathcal{Z}$ (at $\varphi=\arcsin \left(\frac{\sqrt{5}}{4}\right)$ the function $D\bigl( \gamma(\varphi)\bigr)$ gets back its initial value 1, see the end of Section 5 in \cite{OS}). \end{proof} \vskip .3cm \noindent{\bf Remark.} In contrast to the configuration $O_6$, the configuration $C_\mathfrak{m}$ is not congruent to its mirror image. To show this, we need several definitions. \vskip .1cm A triple $\mathcal{T}$ of straight lines is said to be in a generic position if there is no plane parallel to all three lines.
A triple $\mathcal{T}$ in a generic position defines an orientation of the space $\mathbb{R}^3$, or, if an orientation of $\mathbb{R}^3$ is given, a sign $\sigma (\mathcal{T})$. This sign is defined as follows. There is a unique hyperboloid $\mathcal{H}(\mathcal{T})$ of one sheet, passing through the three straight lines of $\mathcal{T}$. Let the equation of the hyperboloid $\mathcal{H}(\mathcal{T})$, in its principal axes, be $z^2=ax^2+by^2-r^2$. The hyperboloid $\mathcal{H}(\mathcal{T})$ has two families of rulings and the three lines of $\mathcal{T}$ belong to the same family. So, viewed from a remote point on the $z$-axis (that is, a point $(0,0,\mathsf{z})$ with $\mathsf{z}$ big enough), we will see a picture of the three lines of $\mathcal{T}$ isotopic to the one shown on Figure \ref{SkewTriples}. The isotopy class of the picture will not change if we are looking from the minus $\infty$ of the $z$-axis, that is, from a point $(0,0,-\mathsf{z})$ with $\mathsf{z}$ big enough. The sign $\sigma (\mathcal{T})$ associated to the triple $\mathcal{T}$ is $+$ if we see the left picture; for the right picture the sign is $-$. \begin{figure}[H] \vspace{.1cm} \centering \includegraphics[scale=0.3]{SkewTriples.jpg} \caption{Three skew lines} \label{SkewTriples} \end{figure} \vskip .1cm There is a way, see e. g. \cite{VV}, to calculate the sign $\sigma (\mathcal{T})$ without determining the hyperboloid $\mathcal{H}(\mathcal{T})$. This is done as follows. First one associates a sign to a pair of oriented skew straight lines, as shown on Figure \ref{Skewlines+-}. \begin{figure}[H] \vspace{.1cm} \centering \includegraphics[scale=0.3]{Skewlines+-.jpg} \caption{Oriented lines} \label{Skewlines+-} \end{figure} \vskip .1cm Given a triple $\mathcal{T}$, equip the lines of $\mathcal{T}$ arbitrarily with an orientation. Then the sign $\sigma (\mathcal{T})$ is equal to the product of the signs for the three pairs of these oriented lines. 
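In coordinates this recipe is easy to implement. The sketch below (an illustration; which sign corresponds to which picture depends on the chosen orientation of $\mathbb{R}^3$) takes the sign of a pair of oriented lines $(x',\xi')$, $(x'',\xi'')$ to be the sign of $\det[\xi',\xi'',x''-x']$, computes the sign of a triple as the product over the three pairs, and checks on three rulings of a hyperboloid that the product does not depend on the chosen orientations, since reversing one line flips exactly two of the three pairwise signs:

```python
# Sign of a generic triple of skew lines as the product of pairwise signs.
# Each oriented line is (point, direction); pair sign = sign det[v1, v2, p2 - p1].
from itertools import product
from math import cos, sin, pi

def det3(a, b, c):
    return (a[0]*(b[1]*c[2] - b[2]*c[1])
            - a[1]*(b[0]*c[2] - b[2]*c[0])
            + a[2]*(b[0]*c[1] - b[1]*c[0]))

def pair_sign(l1, l2):
    (p1, v1), (p2, v2) = l1, l2
    d = det3(v1, v2, tuple(a - b for a, b in zip(p2, p1)))
    assert d != 0, "lines are not in generic position"
    return 1 if d > 0 else -1

def triple_sign(t):
    return pair_sign(t[0], t[1]) * pair_sign(t[0], t[2]) * pair_sign(t[1], t[2])

def flip(line, eps):
    p, v = line
    return (p, v) if eps == 1 else (p, tuple(-c for c in v))

# Three rulings from one family of the hyperboloid x^2 + y^2 - z^2 = 1.
triple = [((cos(t), sin(t), 0.0), (-sin(t), cos(t), 1.0))
          for t in (0.0, 2*pi/3, 4*pi/3)]

s = triple_sign(triple)
# Reversing the orientations of any of the lines leaves the product unchanged.
for signs in product((1, -1), repeat=3):
    assert triple_sign([flip(l, e) for l, e in zip(triple, signs)]) == s
```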
\vskip .1cm The configuration $C_\mathfrak{m}$ has a three-fold axis of symmetry. The action of the cyclic group $\mathcal{C}_3$ on the set of six cylinders of $C_\mathfrak{m}$ has two orbits of length three. These are the brown and the red triplets of cylinders on Figure \ref{confCm}. The sign of both triples is negative. For the configuration, obtained by a central symmetry from the configuration $C_\mathfrak{m}$, the sign of both reflected triples is positive. Therefore, the configuration $C_\mathfrak{m}$ is not congruent to its mirror image. Let us for the moment call $C_\mathfrak{m}^+$ the configuration on Figure \ref{confCm}, and $C_\mathfrak{m}^-$ its mirror image. \vskip .1cm Let $\mathcal{Z}(\mathsf{\geq R})$, respectively $\mathcal{Z}(\mathsf{> R})$, denote the configuration space, mod $SO(3)$, of $6$ cylinders of radius greater than or equal to $\mathsf{R}$, respectively greater than $\mathsf{R}$, touching the unit ball. \vskip .1cm In the space $\mathcal{Z}(\mathsf{\geq 1})$, the configurations $C_\mathfrak{m}^+$ and $C_\mathfrak{m}^-$ are in the same connected component: one can move from $C_\mathfrak{m}^+$ to $C_6$ and then return to $C_\mathfrak{m}^-$. \vskip .1cm \noindent{\bf Conjecture.} The configurations $C_\mathfrak{m}^+$ and $C_\mathfrak{m}^-$ belong to different connected components of the space $\mathcal{Z}(\mathsf{>1})$. \vskip .1cm The following observation might be helpful in testing this conjecture. There are twenty different triples of tangent lines in the configuration $C_\mathfrak{m}^+$. Among them there are twelve positive triples and eight negative triples. Therefore, in a motion from $C_\mathfrak{m}^+$ to $C_\mathfrak{m}^-$ some triples have to pass through a non-generic position; the formulas for the distances between the cylinders slightly simplify when there is a non-generic triple. \vskip .1cm Also, one can ask the following natural questions. Let $\mathcal{D}$ be a configuration of non-overlapping cylinders of the same radius.
Supply each cylinder with an orientation. Let $\mathsf{p}$ be a path in the space $\mathcal{Z}(\mathsf{\geq R})$ or $\mathcal{Z}(\mathsf{> R})$ which starts and ends with the configuration $\mathcal{D}$. In general, the path $\mathsf{p}$ might permute the cylinders or change their orientation. The first question is: what is the group of permutations and orientation changes induced by all possible such paths? \vskip .1cm We conjecture that for the configuration $C_\mathfrak{m}$ the only permutations of oriented cylinders which can be achieved by a motion in the space $\mathcal{Z}(\mathsf{\geq 1})$ are the rigid rotations from the dihedral group $\mathbb{D}_3$. \vskip .1cm The group generated by permutations and orientation changes of six cylinders is the wreath product $\mathbb{S}_6\wr\, \mathcal{C}_2$. It would be interesting to know the maximal radius $\mathsf{R}$ of cylinders for which the whole group $\mathbb{S}_6\wr \mathcal{C}_2$ is realizable by paths in $\mathcal{Z}(\mathsf{\geq R})$. \vskip .1cm Let us say that a subgroup $\mathcal{H}$ of $\mathbb{S}_6\wr\, \mathcal{C}_2$ is path-realizable if there exists a configuration $\mathcal{D}$ of six non-overlapping cylinders in $\mathcal{Z}(\mathsf{\geq R})$ for some $\mathsf{R}$, such that the elements of $\mathbb{S}_6\wr\, \mathcal{C}_2$, realizable by paths in $\mathcal{Z}(\mathsf{\geq R})$, form the subgroup $\mathcal{H}$. Which subgroups of $\mathbb{S}_6\wr\, \mathcal{C}_2$ are path-realizable? For example, for twelve spheres of radius slightly larger than one, touching the unit sphere, it is known that the subgroup $\mathbb{A}_{12}$ of $\mathbb{S}_{12}$ is path-realizable, see Appendix to Chapter 1 in \cite{CS}. What is the maximal radius $\mathsf{R}$ for each path-realizable subgroup of $\mathbb{S}_6\wr\, \mathcal{C}_2$? Does this maximal radius $\mathsf{R}$ depend on the connected component, to which the configuration $\mathcal{D}$ belongs, of the space $\mathcal{Z}(\mathsf{\geq R})$?
Of course, these questions can be asked about any number of cylinders, not necessarily six. \section{Criticality of $O_{6}$}\label{maxconfO6} We shall study the deformed configuration $O_6(t)$, formed by the tangent lines $$\ell_{j}^\epsilon(t)=R_{\varrho^2 e_j}^{a_j^\epsilon\, t}\, R_{\varrho e_j\phantom{^2}}^{b_j^\epsilon\, t}\, R_{e_j\phantom{^2}}^{c_j^\epsilon\, t}\, \ell_{j}^\epsilon\ ,\ j=1,2,3\ ,\ \epsilon\in \{+,-\}\ ,$$ where we write $R_{e_1}^\alpha$ (respectively, $R_{e_2}^\alpha$ and $R_{e_3}^\alpha$) for $R_{\mathbf{x}}^{\alpha}$ (respectively, $R_{\mathbf{y}}^{\alpha}$ and $R_{\mathbf{z}}^{\alpha}$). \vskip .1cm To fix the rotational symmetry we keep the tangent line $\ell_1^+$ at its place, that is, $a_1^+=0$, $b_1^+=0$ and $c_1^+=0$. \vskip .1cm We shall be studying the variations of twelve distances appearing in the function $\widetilde{D}$, defined by the formula (\ref{oprtid}). \subsection{First differentials}\label{secdif} First we calculate (directly) the deformations of the squares of distances in the first order. For brevity, we shall write $[d^2_{u,v}]_j$ for the coefficient at $t^j$ in the function $d^2_{u(t),v(t)}$ where $u,v\in O_6(t)$. 
Here is the result: $$\begin{array}{c} [d^2_{\ell_1^+,\ell_2^-}]_1=-2b_{2 }^-\ ,\ [d^2_{\ell_1^+,\ell_2^+}]_1=2b_{2 }^+\ ,\\[1em] \lbrack d^2_{\ell_1^+,\ell_3^-}\rbrack_1=2a_{3 }^-\ ,\ [d^2_{\ell_1^+,\ell_3^+}]_1=-2a_{3 }^+\ ,\end{array}$$ \begin{equation}\label{disjvar} \begin{array}{c} \lbrack d^2_{\ell_1^-,\ell_2^-}\rbrack_1=2(b_{2 }^- -a_{1 }^-)\ ,\ [d^2_{\ell_1^-,\ell_2^+}]_1=2(a_{1 }^- -b_{2 }^+)\ ,\\[1em] \lbrack d^2_{\ell_1^-,\ell_3^-} \rbrack_1=2(b_{1 }^- -a_{3 }^-)\ ,\ [d^2_{\ell_1^-,\ell_3^+}]_1=2(a_{3 }^+ -b_{1 }^-)\ , \end{array}\end{equation} $$ \begin{array}{c} \lbrack d^2_{\ell_3^-,\ell_2^-}\rbrack_1=2(b_{3 }^- -a_{2 }^-)\ ,\ [d^2_{\ell_3^-,\ell_2^+}]_1=2(a_{2 }^+ -b_{3 }^-)\ ,\\[1em] \lbrack d^2_{\ell_3^+,\ell_2^-}\rbrack_1=2(a_{2 }^- -b_{3 }^+)\ ,\ [d^2_{\ell_3^+,\ell_2^+}]_1=2(b_{3 }^+ -a_{2 }^+)\ . \end{array}$$ The expressions in the first two lines are shorter because the tangent line $\ell_1^+$ does not move. \vskip .1cm The above differentials are not independent.
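These coefficients can be double-checked numerically (a Python sketch for independent verification, not part of the proof): realize the rotations $R_{\mathbf{x}}^{\alpha}$, $R_{\mathbf{y}}^{\alpha}$, $R_{\mathbf{z}}^{\alpha}$ as right-handed rotation matrices, deform the tangent lines as above and differentiate the squared distances at $t=0$ by central differences. For instance:

```python
# Finite-difference check of several first-order coefficients from the table above.
from math import cos, sin

def Rx(a): return [[1, 0, 0], [0, cos(a), -sin(a)], [0, sin(a), cos(a)]]
def Ry(a): return [[cos(a), 0, sin(a)], [0, 1, 0], [-sin(a), 0, cos(a)]]
def Rz(a): return [[cos(a), -sin(a), 0], [sin(a), cos(a), 0], [0, 0, 1]]

def apply(M, v):
    return tuple(sum(M[i][j] * v[j] for j in range(3)) for i in range(3))

def det3(a, b, c):
    return (a[0]*(b[1]*c[2] - b[2]*c[1]) - a[1]*(b[0]*c[2] - b[2]*c[0])
            + a[2]*(b[0]*c[1] - b[1]*c[0]))

def dist2(l1, l2):
    (p1, v1), (p2, v2) = l1, l2
    w = tuple(a - b for a, b in zip(p2, p1))
    dot = sum(a*b for a, b in zip(v1, v2))
    return det3(v1, v2, w)**2 / (1 - dot**2)

# rho^2 e_j, rho e_j, e_j for j = 1, 2, 3 give the rotation axes (z,y,x), (x,z,y), (y,x,z).
AXES = {1: (Rz, Ry, Rx), 2: (Rx, Rz, Ry), 3: (Ry, Rx, Rz)}
BASE = {(1, +1): ((1, 0, 0), (0, 0, 1)), (2, +1): ((0, 1, 0), (1, 0, 0)),
        (3, +1): ((0, 0, 1), (0, 1, 0))}
for j in (1, 2, 3):
    p, v = BASE[(j, +1)]
    BASE[(j, -1)] = (tuple(-c for c in p), v)

def line(j, eps, abc, t):
    """ell_j^eps(t): apply R_{e_j}^{c t}, then R_{rho e_j}^{b t}, then R_{rho^2 e_j}^{a t}."""
    Ra, Rb, Rc = AXES[j]
    a, b, c = abc
    p, v = BASE[(j, eps)]
    for M in (Rc(c*t), Rb(b*t), Ra(a*t)):   # the rightmost rotation acts first
        p, v = apply(M, p), apply(M, v)
    return (p, v)

def deriv(j1, e1, abc1, j2, e2, abc2, h=1e-6):
    f = lambda t: dist2(line(j1, e1, abc1, t), line(j2, e2, abc2, t))
    return (f(h) - f(-h)) / (2*h)

zero = (0.0, 0.0, 0.0)                       # ell_1^+ is kept fixed
a1m, b1m, c1m = 0.7, -0.3, 0.5
a2m, b2m, c2m = 0.4, -0.6, 0.2
a3p, b3p, c3p = 0.9, 0.1, -0.8
assert abs(deriv(1, +1, zero, 2, -1, (a2m, b2m, c2m)) - (-2*b2m)) < 1e-6
assert abs(deriv(1, +1, zero, 3, +1, (a3p, b3p, c3p)) - (-2*a3p)) < 1e-6
assert abs(deriv(1, -1, (a1m, b1m, c1m), 2, -1, (a2m, b2m, c2m))
           - 2*(b2m - a1m)) < 1e-6
```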
All the linear relations between them are linear combinations of the following three relations: \begin{eqnarray}\label{depe1}&[d^2_{\ell_1^+,\ell_2^-}]_1+[d^2_{\ell_1^+,\ell_2^+}]_1+[d^2_{\ell_1^-,\ell_2^-}]_1+[d^2_{\ell_1^-,\ell_2^+}]_1 =0\ ,&\\[1em] \label{depe2}&[d^2_{\ell_1^+,\ell_3^-}]_1+[d^2_{\ell_1^+,\ell_3^+}]_1+[d^2_{\ell_1^-,\ell_3^-}]_1+[d^2_{\ell_1^-,\ell_3^+}]_1 =0\ ,\\[1em] \label{depe3} &\lbrack d_{\ell_{2}^{-},\ell_{3}^{-}}^{2}]_{1}+[d_{\ell_{2}^{-},\ell_{3}^{+} }^{2}]_{1}+[d_{\ell_{2}^{+},\ell_{3}^{-}}^{2}]_{1}+[d_{\ell_{2}^{+},\ell _{3}^{+}}^{2}]_{1}=0\ .& \end{eqnarray} It follows that the distances will not decrease in the first order only if $$\begin{array}{c}0\geq b_{2 }^- \geq a_{1 }^- \geq b_{2 } ^+\geq 0\ ,\\[.8em] 0\geq a_{3 } ^+ \geq b_{1 } ^- \geq a_{3 }^-\geq 0\ ,\\[1.em] b_{3 }^- \geq a_{2 }^- \geq b_{3 }^+\geq a_{2 }^+\geq b_{3 }^-\ .\end{array}$$ Therefore we must have \begin{eqnarray}\label{rela1}&b_{2 }^- = a_{1 }^- = b_{2 }^+= 0\ ,&\\[.6em] \label{rela1b}&a_{3 }^+ = b_{1 }^- = a_{3 }^-= 0\ ,&\\[.6em] \label{rela2} &b_{3 }^- = a_{2 }^- = b_{3 }^+= a_{2 }^+=\omega\ ,&\end{eqnarray} where we have denoted by $\omega$ the common value of the four equal coefficients. In the remaining regime, defined by the nine relations (\ref{rela1})--(\ref{rela2}), the linear contributions to the 12 distances vanish. \paragraph{Remark.} As mentioned in the Introduction, the important object in the questions concerning local maxima of a minimum of several analytic functions is the vector subspace $E$ of the tangent space at a given configuration, on which all the differentials vanish. In our situation the subspace $E$ is given by the equations (\ref{rela1})--(\ref{rela2}). \vskip .1cm One may wonder if a consideration of the remaining three distances $d_{\ell_{1}^{+},\ell_{1}^{-}}$, $d_{\ell_{2}^{+},\ell_{2}^{-}}$ and $d_{\ell_{3}^{+},\ell_{3}^{-}}$ could simplify the analysis (being 2 at $t=0$, these distances can drop to 0 under an infinitesimal deformation).
The answer is negative: it turns out that these three distances keep their value 2 under the infinitesimal variations from the subspace $E$. For example, the positions of the tangent lines $\ell_{2}^+(t)$ and $\ell_{2}^-(t)$ along a deformation from $E$ are $$\ell_{2}^+(t)=R_{e_1}^{\omega t}\, R_{e_2}^{c_2^+ (t)}\, \ell_{2}^+\ ,\ \ell_{2}^-(t)=R_{e_1}^{\omega t}\, R_{e_2}^{c_2^- (t)}\, \ell_{2}^-\ .$$ The distance between the tangent lines $\ell_{2}^+(t)$ and $\ell_{2}^-(t)$ is the same as the distance between the tangent lines $\check{\ell}_{2}^+(t):=R_{e_2}^{c_2^+ (t)}\, \ell_{2}^+$ and $\check{\ell}_{2}^-(t):=R_{e_2}^{c_2^- (t)}\, \ell_{2}^-$. But the tangent lines $\check{\ell}_{2}^+(t)$ and $\check{\ell}_{2}^-(t)$ stay parallel to the plane $Oxz$ and the distance between them remains equal to 2. \vskip .1cm One may be tempted to think that each of the three groups of equalities (\ref{rela1})--(\ref{rela2}) is responsible for leaving fixed exactly one of the three distances $d_{\ell_{1}^{+},\ell_{1}^{-}}$, $d_{\ell_{2}^{+},\ell_{2}^{-}}$ and $d_{\ell_{3}^{+},\ell_{3}^{-}}$. This is however not the case. \subsection{Second differentials}\label{secdifb} We now consider the same combinations (\ref{depe1})--(\ref{depe3}) but for the coefficients of $t^2$. Clearly, these combinations will contain only six parameters: $\omega$ and all $c_{j}^{\epsilon}$, $j=1,2,3,$ $\epsilon\in\{+,-\}$, except $c_{1 }^{+},$ which is fixed to be zero.
Explicitly (this is again a direct calculation) these combinations read \begin{equation} \begin{array}{rrl} \Upsilon_1&:=&\frac{1}{2}\left( [d^2_{\ell_1^+,\ell_2^-}]_2+[d^2_{\ell_1^+,\ell_2^+}]_2+[d^2_{\ell_1^-,\ell_2^-}]_2+[d^2_{\ell_1^-,\ell_2^+}]_2\right)\\[1em] &=&c_{1 }^- c_{2 }^+ -(c_{1 }^-)^2-c_{1 }^- c_{2 }^-+2c_{1 }^- \omega-2\omega^2\ .\end{array} \label{55}\end{equation} \begin{equation} \begin{array}{rrl} \Upsilon_2&:=&\frac{1}{2}\left( [d^2_{\ell_1^+,\ell_3^-}]_2+[d^2_{\ell_1^+,\ell_3^+}]_2+[d^2_{\ell_1^-,\ell_3^-}]_2+[d^2_{\ell_1^-,\ell_3^+}]_2\right)\\[1em] &=&c_{1 }^- c_{3 }^+ - (c_{3 }^-)^2 - c_{1 }^- c_{3 }^- -(c_{3 }^+)^2\ ,\end{array} \label{56}\end{equation} \begin{equation} \begin{array}{rrl} \Upsilon_3&:=&\frac{1}{2}\left( [d^2_{\ell_3^-,\ell_2^-}]_2+ [d^2_{\ell_3^-,\ell_2^+}]_2+[d^2_{\ell_3^+,\ell_2^-}]_2+ [d^2_{\ell_3^+,\ell_2^+}]_2\right)\\[1em] &=&c_{2 }^- c_{3 }^+ +c_{2 }^+ c_{3 }^--c_{2 }^- c_{3 }^- -c_{2 }^+ c_{3 }^+ -(c_{2 }^-)^2-(c_{2 }^+)^2\ .\end{array} \label{57}\end{equation} The distances will not decrease in the second order only if \begin{equation}\Upsilon_1\geq 0\ ,\ \Upsilon_2\geq 0\ ,\ \Upsilon_3\geq 0\ .\label{22}\end{equation} We will show now that the system $\left( \ref{22}\right) $ has only zero solution. We rewrite it in the form \begin{equation}\label{combsecor1}\omega^2+(\omega-c_{1 }^-)^2\leq c_{1 }^- (c_{2 }^+ - c_{2 }^-)\ ,\end{equation} \begin{equation}\label{combsecor2}(c_{3 }^-)^2+(c_{3 }^+)^2\leq c_{1 }^- (c_{3 }^+ - c_{3 }^-)\ ,\end{equation} \begin{equation}\label{combsecor3}(c_{2 }^-)^2+(c_{2 }^+)^2\leq (c_{2 }^- -c_{2 }^+) (c_{3 }^+ - c_{3 }^-)\ .\end{equation} The left hand sides are non-negative. Taking the product of the inequalities (\ref{combsecor1}) and (\ref{combsecor2}), we find $$(c_{1 }^-)^2 (c_{2 }^+ - c_{2 }^-)(c_{3 }^+ - c_{3 }^-)\geq 0\ .$$ Assume that $c_{1 }^-\neq 0$. Then $(c_{2 }^+ - c_{2 }^-)(c_{3 }^+ - c_{3 }^-)\geq 0$. 
But (\ref{combsecor3}) implies that $(c_{2 }^+ - c_{2 }^-)(c_{3 }^+ - c_{3 }^-)\leq 0$. Therefore $(c_{2 }^+ - c_{2 }^-)(c_{3 }^+ - c_{3 }^-)= 0$. Now it follows from (\ref{combsecor3}) that $c_{2 }^-=c_{2 }^+=0$. Then (\ref{combsecor1}) implies that $c_{1 }^- = 0$. Thus, we have checked that $c_{1 }^- $ must be 0. Now (\ref{combsecor1}) implies that \begin{equation}\label{doprel1}\omega=0\ ,\end{equation} (\ref{combsecor2}) implies that \begin{equation}\label{doprel2}c_{3 }^-=c_{3 }^+=0\end{equation} and then (\ref{combsecor3}) implies that \begin{equation}\label{doprel3}c_{2 }^-=c_{2 }^+=0\ .\end{equation} The above computations show that along any path with tangent vector in the 6-dimensional subspace (\ref{rela1})--(\ref{rela2}) our function decays as $t^{2}$. \vskip .1cm Together, the equalities (\ref{rela1})--(\ref{rela2}) and (\ref{doprel1})--(\ref{doprel3}) show that the order $t^1$ coefficients of all the functions $a_j^\epsilon (t)$, $b_j^\epsilon (t)$ and $c_j^\epsilon (t)$ vanish. \vskip .1cm This is not, however, the end of the story, since we do not have uniform estimates on the lengths of all the paths entering into our argument. In general, it is possible that a $\mathcal{C}^{\infty}$ function decays along any analytic path starting at the origin, yet it increases along a non-analytic path, as the following example shows. \vskip .1cm \noindent{\bf Example.} Let us draw two graphs on the plane $\mathbb{R}^2$, of the functions $f_1(x)=e^{-1/x}$ and $f_2(x)=e^{-2/x}$ for $x\geq 0$. An example is provided by an arbitrary $\mathcal{C}^{\infty}$ function in a vicinity of the origin in $\mathbb{R}^2$ which increases in the horn between the graphs of $f_1$ and $f_2$ and decreases otherwise: for any analytic path starting at the origin there is an initial time interval during which the path does not enter the interior of the horn. \vskip .1cm So to complete the argument we use Theorem \ref{lq2b} of Section \ref{genforms}.
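The statement that the system (\ref{22}) admits only the zero solution can also be probed numerically (a sampling sketch, not a replacement for the argument above): for a random non-zero choice of the six parameters, at least one of the forms $\Upsilon_1,\Upsilon_2,\Upsilon_3$ must come out negative.

```python
# Random sampling check: for non-zero parameter values at least one of the
# quadratic forms Upsilon_1, Upsilon_2, Upsilon_3 (formulas (55)-(57)) is negative.
import random

def upsilons(w, c1m, c2p, c2m, c3p, c3m):
    u1 = c1m*c2p - c1m**2 - c1m*c2m + 2*c1m*w - 2*w**2
    u2 = c1m*c3p - c3m**2 - c1m*c3m - c3p**2
    u3 = c2m*c3p + c2p*c3m - c2m*c3m - c2p*c3p - c2m**2 - c2p**2
    return u1, u2, u3

random.seed(0)
for _ in range(10000):
    x = [random.uniform(-1, 1) for _ in range(6)]
    assert min(upsilons(*x)) < 0   # some form is strictly negative at a non-zero point
```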
\subsection{Three forms}\label{threeforms} A straightforward calculation shows that each of the forms $\Upsilon_{1},\Upsilon_{2},\Upsilon_{3}$ has matrix rank three. \vskip .1cm If there existed a negatively defined strictly convex combination of the three forms $\Upsilon_{1},\Upsilon_{2},\Upsilon_{3}$ then we could directly refer to Theorem 2, Section 5 of \cite{OS-C6} to finish the proof of Theorem \ref{localmaxO6}. Besides, the existence of such a combination would give an easier proof of the statement that the system $\Upsilon_{1}\geq 0$, $\Upsilon_{2}\geq 0$ and $\Upsilon_{3}\geq 0$ admits only the trivial solution. However, such a combination does not exist, as we will now show. \begin{proposition} There is no negatively defined convex linear combination of the three forms $\Upsilon_{1},\Upsilon_{2}$ and $\Upsilon_{3}$. \end{proposition} \begin{proof} Let $\widetilde{\omega}=\omega-\frac{1}{2}c_{1 }^-$. We have $$\Upsilon_1=\widetilde{\Upsilon}_1-2\widetilde{\omega}^2\ ,$$ where $$\widetilde{\Upsilon}_1=c_{1 }^- c_{2 }^+ -\frac{1}{2}(c_{1 }^-)^2-c_{1 }^- c_{2 }^-\ .$$ The variable $\widetilde{\omega}$ is not involved in the forms $\Upsilon_2$ and $\Upsilon_3$. Therefore, a convex combination of the forms $(-\Upsilon_1)$, $(-\Upsilon_2)$ and $(-\Upsilon_3)$ is positively defined on the subspace with coordinates $\{\widetilde{\omega}, c_{1 }^- , c_{2 }^+ ,c_{2 }^- ,c_{3 }^+ ,c_{3 }^-\}$ if and only if the same convex combination of the forms $(-\widetilde{\Upsilon}_1)$, $(-\Upsilon_2)$ and $(-\Upsilon_3)$ is positively defined on the five-dimensional subspace with coordinates $\{ c_{1 }^- , c_{2 }^+ ,c_{2 }^- ,c_{3 }^+ ,c_{3 }^-\}$. Thus it is sufficient to consider only this five-dimensional space.
\vskip .1cm Assume that a combination $$\Upsilon:= -\bigl( \widetilde{\Upsilon}_{1}+\alpha\Upsilon_{2}+\beta\Upsilon_{3}\bigr)\ ,$$ where $\alpha,\beta>0$, is positively defined; without loss of generality we fix the coefficient of the form $(-\widetilde{\Upsilon}_{1})$ to be 1. \vskip .1cm In the coordinates $\{ c_{1 }^- , c_{2 }^+ ,c_{2 }^- ,c_{3 }^+ ,c_{3 }^-\}$ the form $\Upsilon$ has the following Gram matrix: $$\frac{1}{2}\left(\begin{array}{rrrrr} 1&-1&1&-\alpha&\alpha\\ -1&2\beta&0&\beta&-\beta\\ 1&0&2\beta&-\beta&\beta\\ -\alpha&\beta&-\beta&2\alpha&0\\ \alpha&-\beta&\beta&0&2\alpha \end{array}\right)\ .$$ \vskip .3cm By the Sylvester criterion, the positivity of the form $\Upsilon$ is equivalent to the following system of inequalities \begin{equation}\label{sylvsyst}\begin{array}{c} 2\beta-1>0\ ,\ 4\beta(\beta-1)>0\ ,\ -4\beta(2\alpha-4\alpha\beta+\alpha^2\beta+\beta^2)>0\ ,\\[1em] -16\alpha\beta(\alpha-3\alpha\beta+\alpha^2\beta+\beta^2)>0\ .\end{array}\end{equation} Taking into account that the coefficients $\alpha$ and $\beta$ are positive, the system (\ref{sylvsyst}) reduces to the system $$\beta>1\ ,\ 2\alpha-4\alpha\beta+\alpha^2\beta+\beta^2<0\ ,\ \alpha-3\alpha\beta+\alpha^2\beta+\beta^2<0\ ,$$ which is inconsistent: already the first and the third inequalities are incompatible. Indeed, consider the left hand side $$\mathsf{m}:=\alpha-3\alpha\beta+\alpha^2\beta+\beta^2$$ of the third inequality as a polynomial in $\alpha$. Since its leading coefficient $\beta$ is positive, the quadratic polynomial $\mathsf{m}$ can take a negative value only if its roots are real. However the discriminant of the polynomial $\mathsf{m}$ is $$-(\beta-1)^2(4\beta-1)\ ,$$ which is negative for $\beta>1$. \end{proof} \section{Sufficient condition}\label{genforms} Let $\{F_{1}\left( x\right) ,\dots,F_{m}\left( x\right)\}$ be a family of functions $\mathsf{U}\to \mathbb{R}$, where $\mathsf{U}\subset\mathbb{R}^n$ is a neighborhood of the origin $0\in\mathbb{R}^n$, such that $F_u(0)=0$, $u=1,\dots ,m$.
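Returning to the proof of the Proposition above: the discriminant factorization is elementary but easy to miscompute, so a quick numerical cross-check may be reassuring (a pure Python sketch; the sample values of $\beta$ are our own choices).

```python
# m(alpha) = beta*alpha^2 + (1 - 3*beta)*alpha + beta^2, viewed as a quadratic in alpha.
# Its discriminant (1 - 3*beta)^2 - 4*beta^3 should equal -(beta - 1)^2 * (4*beta - 1).
def discriminant(beta):
    return (1.0 - 3.0 * beta) ** 2 - 4.0 * beta ** 3

def factored(beta):
    return -((beta - 1.0) ** 2) * (4.0 * beta - 1.0)

for beta in (0.5, 1.0, 1.5, 2.0, 10.0):
    assert abs(discriminant(beta) - factored(beta)) < 1e-9
    if beta > 1.0:
        # No real roots and a positive leading coefficient: m > 0, so the
        # third inequality of the reduced system indeed fails for beta > 1.
        assert discriminant(beta) < 0.0
```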
We assume that the number of functions is not greater than the number of variables, $m\leq n$. \vskip .1cm For the configurations of tangent lines in Theorem \ref{localmaxO6} the functions $F_u$ are the differences between the squares of distances in the perturbed and non-perturbed configurations. \vskip .1cm We are studying the function \[ {\sf F}\left( x\right) :=\min\left\{ F_{1}\left( x\right) ,\dots,F_{m}\left(x\right) \right\} \ .\] In \cite{OS-C6} we have proved the local maximality of the configuration $C_{\mathfrak{m}}$. For the configuration $C_{\mathfrak{m}}$ there is exactly one convex linear dependency between the differentials of the functions $F_u(x)$, $u=1,\dots ,m$, at the origin. We have given in \cite{OS-C6} a sufficient condition ensuring that the point $\mathbf{0}\in\mathbb{R}^{n}$ is a sharp local maximum of the function ${\sf F}\left( x\right)$. \vskip .1cm As we have seen in Section \ref{maxconfO6}, the space of linear dependencies between $dF_u(0)$, $u=1,\dots ,m$, is three-dimensional and has a basis consisting of three convex dependencies. Moreover, in our coordinates the groups of coordinates entering these linear combinations are disjoint. \vskip .1cm In this section we establish an analytic result, Theorem \ref{lq2b}, needed to complete the proof of Theorem \ref{localmaxO6}. Theorem \ref{lq2b} is a sufficient condition, applicable to the configuration $O_6$, which ensures that the point $\mathbf{0}\in\mathbb{R}^{n}$ is a sharp local maximum of the function ${\sf F}\left( x\right)$. Theorem \ref{lq2b} is a generalization of Theorem 2, Section 5 in \cite{OS-C6}. \subsection{Notation}\label{formsO6} We recall some notation from \cite{OS-C6}. Until the end of the section, summation over repeated indices is assumed. \vskip .1cm We denote by $l_{uj}$ and $q_{ujk}$ the coefficients of the linear and quadratic parts of the function $F_u(x)$, $u=1,\dots,m$, $$F_u(x)=l_{uj}x^j+q_{ujk}x^j x^k+o(2)\ ,$$ where $o(2)$ stands for higher order terms.
\vskip .1cm Let $\xi^j:=dx^j$, $j=1,\dots,n$, be the coordinates, corresponding to the coordinate system $x^1,\dots,x^n$, in the tangent space to $\mathbb{R}^n$ at the origin. We define the linear and quadratic forms $l_u(\xi)\equiv l_{uj}\xi^j$ and $q_u(\xi)\equiv q_{ujk}\xi^j\xi^k$ on the tangent space $T_0\mathbb{R}^n$. \vskip .1cm Let $E$ be the subspace in $T_0\mathbb{R}^n$ defined as the intersection of kernels of the linear forms $l_u(\xi)$, $$E=\bigcap_{u=1}^m\; \ker l_u(\xi)\ .$$ \vskip .1cm Let $\mu=\{\mu^1,\dots ,\mu^m\}$ be a linear dependency between the linear parts of the functions $F_u(x)$, $u=1,\dots,m$, that is, $$\mu^u l_{uj}=0\ \ \text{for all}\ j=1,\ldots,n\ .$$ We denote by $\mathfrak{q}[\mu ]$ the corresponding quadratic form on the space $E$ defined by $$\mathfrak{q}[\mu ]=\mu^u q_{ujk}\xi^j\xi^k\vert_E\ .$$ \subsection{Positively defined families of quadratic forms} We shall say that a family $\{\mathfrak{Q}_1,\ldots ,\mathfrak{Q}_L\}$ of quadratic forms on a real vector space $\mathbb{V}$ is positively defined if the following condition holds \begin{equation}\label{mima2}\begin{array}{l} \text{the system of inequalities}\ \ \mathfrak{Q}_u(x)\leq 0\ ,\ u=1,\ldots ,L,\\[.6em] \text{admits only the trivial solution $x=0$}\ . \end{array} \end{equation} Also, we say that a family $\{\mathfrak{Q}_1,\ldots ,\mathfrak{Q}_L\}$ of quadratic forms on a space $\mathbb{V}$ is negatively defined if the family $\{ -\mathfrak{Q}_1,\ldots ,-\mathfrak{Q}_L\}$ is positively defined. \vskip .1cm The notion of a positively defined family of quadratic forms generalizes the notion of a positively defined quadratic form (it corresponds to $L=1$). \vskip .1cm Let $$\mathfrak{Q}(x):=\max \bigl(\mathfrak{Q}_1(x),\ldots ,\mathfrak{Q}_L(x)\bigr)\ .$$ The condition (\ref{mima2}) is satisfied if and only if the constant $$\mathfrak{v}:=\min_{\left\Vert x\right\Vert =1} \bigl(\mathfrak{Q}(x)\bigr)$$ is positive, $\mathfrak{v}>0$.
Because of the homogeneity we have $$\mathfrak{Q}(x)\geq \mathfrak{v} \left\Vert x\right\Vert^2\ \text{for any}\ x\in\mathbb{R}^n.$$ So we can reformulate the positivity of a family in the following form. \begin{definition}\label{depofa} A family $\{\mathfrak{Q}_1,\ldots ,\mathfrak{Q}_L\}$ of quadratic forms is positively defined iff there exists a positive constant $\mathfrak{v}>0$ such that for any $x\in\mathbb{R}^n$ \begin{equation}\label{suschac} \text{there exists $a_\circ \in\{1,\dots ,L\}$ such that $\mathfrak{Q}_{a_\circ}(x)\geq \mathfrak{v} \left\Vert x\right\Vert^2$.} \end{equation} We shall say that such a family $\{\mathfrak{Q}_1,\ldots ,\mathfrak{Q}_L\}$ is $\mathfrak{v}$-positively defined. \end{definition} As for $L=1$, the positivity of a family of quadratic forms is an open condition in the following sense. \begin{lemma}\label{inchava} The condition (\ref{mima2}) is stable under small perturbations of the forms of the family. \end{lemma} \begin{proof} Assume that a family $\{\mathfrak{Q}_1,\ldots ,\mathfrak{Q}_L\}$ of quadratic forms is positively defined and let $\mathfrak{v}$ be the constant from Definition \ref{depofa}. \vskip .1cm Let $\mathfrak{P}_u$, $u=1,\ldots ,L$, be an arbitrary family of quadratic forms. There exists a positive constant $\mathfrak{w}$ such that $$\vert\mathfrak{P}_u(x)\vert\leq \mathfrak{w} \left\Vert x\right\Vert^2\ ,\ u=1,\ldots ,L\ .$$ Given $x\in\mathbb{R}^n$, let $a_{\circ}$ be the index defined by (\ref{suschac}). For a positive $\epsilon$ we have $$\mathfrak{Q}_{a_{\circ}}(x)+\epsilon\,\mathfrak{P}_{a_{\circ}}(x) \geq \mathfrak{Q}_{a_{\circ}}(x)- \epsilon\,\vert\mathfrak{P}_{a_{\circ}}(x)\vert \geq \left( \mathfrak{v}-\epsilon \mathfrak{w}\right) \left\Vert x\right\Vert^2\ ,$$ therefore, the family $\{\mathfrak{Q}_u+\epsilon\,\mathfrak{P}_u\, ,\, u=1,\ldots ,L\}$ satisfies the condition of Definition \ref{depofa} for $\epsilon$ small enough.
\end{proof} \subsection{Analytic theorem}\label{anareO6} The particularity of the situation analyzed in Subsections \ref{secdif} and \ref{secdifb} can be described as follows. The family of functions $\{ F_1(x),\dots,F_m(x)\}$ splits into several subfamilies $$\mathcal{F}_1=\{ F_{1,1}(x),\dots,F_{1,m_1}(x)\}\ ,\ \dots\ ,\ \mathcal{F}_L=\{ F_{L,1}(x),\dots,F_{L,m_L}(x)\}$$ such that: \begin{itemize} \item[(A)] In each subfamily $ \mathcal{F}_a$ there is exactly one linear dependency $\lambda_a$ between the linear parts of the functions in the subfamily, and this dependency is convex. \item[(B)] The set of variables $x^1,\ldots ,x^n$ is a union of disjoint sets $\mathcal{X}_a$, $a=1,\dots,L$, and a set $\mathcal{Y}$ with the following property: the linear parts of functions from the subfamily $\mathcal{F}_a$ depend only on the variables from the set $\mathcal{X}_a$ for each $a=1,\dots,L$. \item[(C)] The family of quadratic forms $\mathfrak{q}[\lambda_a ]$, $a=1,\dots ,L$, is negatively defined on the subspace $E$. \end{itemize} \noindent We shall use the following notation for the variables from the subsets $\mathcal{X}_a$ and $\mathcal{Y}$: $$ \mathcal{X}_a=\{ x_a^1,\dots ,x_a^{d_a}\}\ ,\ a=1,\dots,L\ ,\ \ \text{and}\ \ \mathcal{Y}=\{ y^1,\dots ,y^{d}\}\ .$$ In particular, $d_1+\ldots +d_L+d=n$. The variables $\{ y^1,\dots ,y^{d}\}$ do not enter the linear parts of functions $\{ F_1(x),\dots,F_m(x)\}$.
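Condition (C) involves a negatively defined family of quadratic forms in the sense of Definition \ref{depofa}, whose defining constant $\mathfrak{v}$ can be approximated by sampling the unit sphere. A small Python sketch for an illustrative two-form positively defined family on $\mathbb{R}^2$ (the forms are our own toy example, unrelated to the forms $\Upsilon_a$ of the paper):

```python
import math
import random

# Toy family: Q1(x, y) = x^2 - y^2, Q2(x, y) = y^2 - x^2/2.
# Neither form is positive definite on its own, yet at every nonzero point
# at least one of them is positive, so the family is positively defined.
def Q1(x, y):
    return x * x - y * y

def Q2(x, y):
    return y * y - 0.5 * x * x

rng = random.Random(0)
v = min(
    max(Q1(math.cos(t), math.sin(t)), Q2(math.cos(t), math.sin(t)))
    for t in (rng.uniform(0.0, 2.0 * math.pi) for _ in range(100_000))
)
assert v > 0.0  # the family is v-positively defined
# The exact minimum over the unit circle is 1/7, attained where Q1 = Q2.
assert abs(v - 1.0 / 7.0) < 0.01
```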
\begin{lemma}\label{inchavab} The conditions {\rm (A)} and {\rm (C)} are invariant under an arbitrary analytic change of variables, preserving the origin, such that the linear parts transform inside each group $\mathcal{X}_a$ of variables \begin{equation}\label{chavar}x_a^j=(A_a)^j_k\tilde{x}_a^k+ o(1)\ .\end{equation} Here each of the matrices $A_a$, $a=1,\ldots ,L$, is non-degenerate.\end{lemma} \begin{proof} This is a straightforward generalization of the proof of Lemma 3, Section 5, in \cite{OS-C6}.\end{proof} \vskip .3cm We now formulate our analytic theorem. \begin{theorem}\label{lq2b} Under the conditions {\rm (A)}, {\rm (B)} and {\rm (C)}, the origin is a strict local maximum of the function ${\sf F}(x)$. \end{theorem} \vskip .1cm \noindent{\bf Remarks.} \vskip .3cm \noindent{\bf 1.} Our particular case of the configuration $O_6$ corresponds to $L=3$; we have the following subfamilies of functions $$\begin{array}{c} \mathcal{F}_1=\left\{ d^2_{\ell_1^+,\ell_2^-}\ ,\ d^2_{\ell_1^+,\ell_2^+}\ ,\ d^2_{\ell_1^-,\ell_2^-}\ ,\ d^2_{\ell_1^-,\ell_2^+}\right\}\ ,\\[1em] \mathcal{F}_2=\left\{ d^2_{\ell_1^+,\ell_3^-}\ ,\ d^2_{\ell_1^+,\ell_3^+}\ ,\ d^2_{\ell_1^-,\ell_3^-}\ ,\ d^2_{\ell_1^-,\ell_3^+}\right\}\ ,\\[1em] \mathcal{F}_3=\left\{ d_{\ell_{2}^{-},\ell_{3}^{-}}^{2}\ ,\ d_{\ell_{2}^{-},\ell_{3}^{+}}^{2}\ ,\ d_{\ell_{2}^{+},\ell_{3}^{-}}^{2}\ ,\ d_{\ell_{2}^{+},\ell_{3}^{+}}^{2}\right\}\ .\end{array}$$ Each subfamily contains four functions. \vskip .1cm The property (A) refers to formulas (\ref{depe1})--(\ref{depe3}). \vskip .1cm For the property (B) see expressions (\ref{disjvar}). \vskip .1cm The property (C) is the statement about three quadratic forms $\Upsilon_{1},\Upsilon _{2},\Upsilon_{3}$ defined by formulas $(\ref{55})$--$(\ref{57})$: we have checked in Subsection \ref{secdif} that $\min\left( \Upsilon_{1},\Upsilon_{2},\Upsilon_{3}\right)\,\rule[-.22cm]{0.1mm}{.54cm}_{\; E} <0$ everywhere except the origin.
\vskip .3cm \noindent{\bf 2.} Similarly to the case of the configuration $C_{\mathfrak{m}}$, it follows from the proof that the function ${\sf F}(x)$ decays quadratically at zero along any direction in $E$ and decays linearly along any direction outside $E$. \vskip .1cm As for Theorem 2, Section 5, from \cite{OS-C6}, we need the assertion of Theorem \ref{lq2b} for a family of analytic functions $F_j$; however, a careful analysis shows that the assertion of Theorem \ref{lq2b} holds for functions $F_j$ of the class $\mathcal{C}^{3}$. \vskip .1cm In \cite{OS-C6} we have given two different proofs of Theorem 2. Here we present an analogue of the first proof in \cite{OS-C6}. A proof of Theorem \ref{lq2b} generalizing the second proof of Theorem 2 in \cite{OS-C6} can be given as well, but it looks less natural and more involved, so we have decided to omit it. \subsection{Proof}\label{fpfO6} We proceed as in the first proof of Theorem 2 in \cite{OS-C6}. Performing, if necessary, a suitable change of variables, satisfying the conditions of Lemma \ref{inchavab}, we may assume that each subfamily $ \mathcal{F}_a$, $a=1,\dots,L$, consists of linear functions, except for the first one, $$\begin{array}{c}F_{a,1}=-\sum_{i=2}^{m_a}\lambda^i_{a} x_a^i+q_a(x)+o(2)\ \ \text{where}\ \lambda^i_{a}>0\ \ \text{for}\ i=2,\dots ,m_a\ ,\\[.4em] F_{a,2}=x_a^2\ ,\ \dots \ ,\ F_{a,m_a}=x_a^{m_a}\ .\end{array}$$ The set of variables is split into two disjoint parts, $$\mathcal{V}_1:=\{x_a^t\}_{t=2,\dots,m_a}^{a=1,\dots,L}\ \text{and}\ \mathcal{V}_2:=\{x_a^1\}_{a=1,\ldots ,L}\sqcup \mathcal{Y}\ .$$ We rename, for convenience, the variables from $\mathcal{V}_2$: $ \mathcal{V}_2=\{\mathsf{y}^1,\ldots ,\mathsf{y}^{K}\}$. We identify the points of the tangent subspace $E\subset T_0\mathbb{R}^n$ (in a small enough neighborhood $U$ of the origin) with the plane defined by $x_a^t=0$, $x_a^t\in \mathcal{V}_1$, and coordinatize the space $E$ by the variables $\mathsf{y}\in\mathcal{V}_2$. 
\vskip .1cm We need to prove that in the set $E^+$, defined by the system of inequalities $x_a^t\geq 0$, $x_a^t\in \mathcal{V}_1$, the function ${\sf{F}}^{(1)}(x)=\min_{1\leq a\leq L} (F_{a,1}(x))$ has a sharp local maximum at the origin. We make a substitution, allowed in $E^+$, $x_a^t=(z_a^t)^2$, $x_a^t\in \mathcal{V}_1$. Our functions have the form $$F_{a,1}=\mathfrak{Q}_a+\psi_a\ ,\ \text{where}\ \ \mathfrak{Q}_a=-\sum_{i=2}^{m_a}\lambda^i_{a} (z_a^i)^2+\mathfrak{q}_a(\mathsf{y})\ \text{and}\ \psi_a=o(2)\ .$$ The family $\{ \mathfrak{q}_a\}$ is negatively defined on $E$ hence the family $\{ \mathfrak{Q}_a\}$ is negatively defined on the space $\tilde{\mathbb{R}}^n$ with the coordinates $z_a^t$ for $x_a^t\in \mathcal{V}_1$, and $\mathsf{y}\in\mathcal{V}_2$: if $\mathfrak{Q}_a\geq 0$, $1\leq a\leq L$, then $\mathfrak{q}_a(\mathsf{y})\geq \sum_{i=2}^{m_a}\lambda^i_{a} (z_a^i)^2\geq 0$, $1\leq a\leq L$, implying that $\mathsf{y}={\bf 0}$ and, consequently, $z_a^i=0$ for $1\leq a\leq L$, $2\leq i\leq m_a$. \vskip .1cm Since the family $\{ \mathfrak{Q}_a\}$ is negatively defined, there exists a positive constant $\mathfrak{v}>0$ such that for any $x\in\tilde{\mathbb{R}}^n$ there exists $a_\circ (x) \in\{1,\dots ,L\}$ for which $$\mathfrak{Q}_{a_\circ (x)}(x)\leq -2\mathfrak{v} \left\Vert x\right\Vert^2\ .$$ Due to the order of smallness of the functions $\psi_a$ there exists a neighborhood $U$ of the origin in $\tilde{\mathbb{R}}^n$, in which $$\vert\psi_a(x)\vert \leq \mathfrak{v} \left\Vert x\right\Vert^2\ \ \text{for any}\ a=1,\ldots ,L\ .$$ Therefore, for any $x\in U$ we have ${\sf{F}}^{(1)}(x)\leq F_{a_\circ (x),1}(x)\leq -\mathfrak{v} \left\Vert x\right\Vert^2$. \rule{1.2ex}{1.2ex} \vskip .4cm\noindent {\textbf{Acknowledgements.} Part of the work of S. S.
has been carried out in the framework of the Labex Archimede (ANR-11-LABX-0033) and of the A*MIDEX project (ANR-11-IDEX-0001-02), funded by the Investissements d'Avenir French Government program managed by the French National Research Agency (ANR). Part of the work of S. S. has been carried out at IITP RAS. The support of the Russian Science Foundation (project No. 14-50-00150) is gratefully acknowledged by S. S. The work of O. O. was supported by the Program of Competitive Growth of Kazan Federal University and by the grant RFBR 17-01-00585.} \vspace{-.2cm}
\section{Introduction} One ultimate goal for the community of financial mathematics is to characterize the sophisticated investment environment using tractable probabilistic or stochastic models. For example, the market trend is usually described by some random factors such as Markov chains. In particular, the so-called regime-switching model is widely accepted and usually proposed to capture the influence on the behavior of the market caused by transitions of the macroeconomic system or by macroscopic readjustment and regulation. For instance, the empirical results by Ang and Bekaert~\cite{AngBeK02b} illustrate the existence of two regimes characterized by different levels of volatility. It is well known that default events modulated by the regime-switching process have an impact on the distress state of the surviving securities in the portfolio. More specifically, by an empirical study of the corporate bond market over 150 years, Giesecke et al.~\cite{GieSchStr11} suggest the existence of three regimes corresponding to high, middle, and low default risk. With finitely many economic regimes, Capponi and Figueroa-L\'opez~\cite{CapLop14a} investigate the classical utility maximization problem from terminal wealth based on a defaultable security, and Capponi, Figueroa-L\'opez and Nisen~\cite{CapLopNis14b} obtain a Poisson series representation for the arbitrage-free price process of vulnerable contingent claims. On the other hand, the importance of considering defaultable underlying assets has attracted a lot of attention, especially after the systemic failures caused by recent global financial crises. Some recent developments extend the early models of a single defaultable security to default contagion effects on portfolio allocations. The study of these mutual contagion effects opens the door to possible answers to some empirical puzzles such as the high mark-to-market variations in prices of credit-sensitive assets.
For example, Kraft and Steffensen~\cite{Kraf} discuss the contagion effects on defaultable bonds. Callegaro, Jeanblanc and Runggaldier~\cite{Call12} consider an optimal investment problem with multiple defaultable assets which depend on a partially observed exogenous factor process. Jiao, Kharroubi and Pham~\cite{Jiao13} study the model in which multiple jumps and default events are allowed. Recently, Bo and Capponi~\cite{Bo16} examine the optimal portfolio problem of a power utility investor who allocates the wealth between credit default swaps and a money market, for which the contagion risk is modeled via interacting default intensities. Apart from the celebrated Merton model of utility maximization, there has been an increasing interest in the risk-sensitive stochastic control criterion in portfolio management during recent years; see, e.g., Davis and Lleo~\cite{DavisLIeo04} for an overview of the theory and practice of risk-sensitive asset management. In a typical risk-sensitive portfolio optimization problem, the investor maximizes the long run growth rate of the portfolio, adjusted by a measure of volatility. In particular, the classical utility maximization from terminal wealth can be transformed to the risk-sensitive control criterion by introducing a change of measure and a so-called risk-sensitive parameter which characterizes the degree of risk tolerance of investors; see, e.g., Bielecki and Pliska~\cite{BiePliska99} and Nagai and Peng~\cite{PengNagai}. We name only a small portion of the vast literature: for instance, the risk-sensitive criterion has been linked to the dynamic version of Markowitz's mean-variance optimization by Bielecki and Pliska~\cite{BiePliska99}, and to differential games by Fleming~\cite{Fleming06} and, more recently, by Bayraktar and Yao~\cite{BayraktarYao13}, who establish the connection to zero-sum stochastic differential games using BSDEs and the weak dynamic programming principle.
Hansen et al.~\cite{HansenNoa} further connect the risk-sensitive objective to a robust criterion in which perturbations are characterized by the relative entropy. Bayraktar and Cohen~\cite{BayraktarCohen16} later examine a risk-sensitive control version of the lifetime ruin probability problem. Despite the many existing works on risk-sensitive control, and on optimal investment with credit risk or regime switching separately, the risk-sensitive portfolio allocation problem featuring both default risk and regime switching remains open. Our paper aims to fill this gap and considers an interesting case in which the default contagion effect can depend on regime states, possibly infinitely many. Among recent related work, it is worth noting that in the default-free market with finitely many regime states, Andruszkiewicz, Davis and Lleo~\cite{AndDavLIeo} study the existence and uniqueness of the solution to the risk-sensitive asset maximization problem, and provide an ODE for the optimal value function, which may be efficiently solved numerically. Meanwhile, Das, Goswami and Rana~\cite{DasGosRan} consider a risk-sensitive portfolio optimization problem with multiple stocks modeled as a multi-dimensional jump diffusion whose coefficients are modulated by an age-dependent semi-Markov process. They also establish the existence and uniqueness of classical solutions to the corresponding HJB equations. In the context of theoretical stochastic control, we also note that Kumar and Pal~\cite{KumarPal} derive the dynamic programming principle for a class of risk-sensitive control problems for pure jump processes with near-monotone cost. To model hybrid diffusions, Nguyen and Yin~\cite{NguyenYin} propose a switching diffusion system with countably infinite states. The existence and uniqueness of the solution to the hybrid diffusion with past-dependent switching are obtained.
Back to the practical implementation in financial markets with stochastic factors, the regime-switching model, or continuous time Markov chain, is frequently used to approximate the dynamics of time-dependent market parameters or factors. The continuous state space of the parameter or factor is usually discretized, which leads to infinitely many states of the approximating Markov chain (see, e.g., Ang and Timmermann~\cite{AngTim}). This mainly motivates us to consider countably many regime states in this work, and it is shown that the resulting technical difficulties can eventually be resolved using an appropriate approximation by counterparts with finitely many states. Therefore, our analytical conclusions for regime switching can potentially provide theoretical foundations for the numerical treatment of risk-sensitive portfolio optimization with defaults and stochastic factor processes. Our contributions are twofold. From the modeling perspective, the correlated stocks are subject to credit events, and in particular, the dynamics of the defaultable stocks, namely the drift, the volatility and the default intensity coefficients, all depend on the macroeconomic regimes. As defaults can occur sequentially, the default contagion is modeled in the sense that the default intensities of surviving names are affected simultaneously by default events of other stocks as well as by the current regime state. This setup enables us to analyze the joint complexity rooted in the investor's risk sensitivity, the regime changes and the default contagion among stocks. From the mathematical perspective, the resulting dynamic programming equation (DPE) can be viewed as a recursive infinite-dimensional nonlinear dynamical system in terms of default states. The depth of the recursion equals the number of stocks in the portfolio.
Our recipe for studying this new type of recursive dynamical system can be summarized in the following scheme. First, we truncate the countably infinite state space of the regime-switching process and consider the recursive DPE with a finite state space only. Second, for the finite state case, the existence and uniqueness of the solutions of the recursive DPE are analyzed via a backward recursion, namely from the state in which all stocks are defaulted toward the state in which all stocks are alive. It is worth noting that no boundedness constraint is imposed on the trading strategies or control variables, in contrast to Andruszkiewicz, Davis and Lleo~\cite{AndDavLIeo} and Kumar and Pal~\cite{KumarPal}. As a price to pay, the nonlinearities of the HJB dynamical systems are not globally Lipschitz continuous. To overcome this new challenge, we develop a truncation technique by proving a comparison theorem based on the theory of monotone dynamical systems documented in Smith~\cite{smith08}. Then, we establish a unique classical solution of the recursive DPE by showing that the solution of the truncated system has a uniform (strictly positive) lower bound independent of the truncation level. This also enables us to characterize the optimal admissible feedback trading strategy in the verification theorem. Next, when the states are relaxed to be countably infinite, the results in the finite state case can be applied to construct a sequence of risk-sensitive control problems approximating the original one, and uniform estimates are obtained to conclude that the sequence of associated smooth value functions converges to the classical solution of the original recursive DPE. We also contribute to the existing literature by exploring the construction and approximation of the optimal feedback strategy in rigorous verification theorems. The rest of the paper is organized as follows.
Section \ref{sec:model} describes the credit market model with default contagion and regime switching. Section \ref{risksens} formulates the risk-sensitive stochastic control problem and introduces the corresponding DPE. We analyze the existence and uniqueness of the classical global solution of recursive infinite-dimensional DPEs and develop rigorous verification theorems in Section \ref{sec:mainres}. For completeness, some auxiliary results and proofs are relegated to Appendix~\ref{app:proof1}. \section{The Model} \label{sec:model} We consider a model of the financial market consisting of $N\geq1$ defaultable stocks and a risk-free money market account on a given complete filtered probability space $(\Omega,{\mathcal G},{\mathbb{G}},\Px)$. Let $Y=(Y(t))_{t\in[0,T]}$ be a regime-switching process which will be introduced precisely later. The global filtration $\mathbb{G}=\mathbb{F} \vee{\mathbb{H}}$ augmented by all $\Px$-null sets satisfies the usual conditions. The filtration $\mathbb{F} =({\mathcal{F}}_t)_{t\in[0,T]}$ is jointly generated by the regime-switching process $Y$ and an independent $d$-dimensional ($d\geq1$) Brownian motion denoted by $W=(W_j(t);\ j=1,\ldots,d)_{t\in[0,T]}^{\top}$. We use $\top$ to denote the transpose operator. The time horizon of the investment is given by $T>0$. The price process of the money market account $B(t)$ satisfies $dB(t)= r(Y(t))B(t)dt$, where $r(Y(t))\geq0$ is the interest rate modulated by the regime-switching process $Y$. The filtration $\mathbb{H}$ is generated by an $N$-dimensional default indicator process $Z=(Z_j(t);\ j=1,\ldots,N)_{t\in[0,T]}$ which takes values in ${\cal S}:=\{0,1\}^N$. The default indicator process $Z$ is linked to the default times of the $N$ defaultable stocks via $\tau_j := \inf\{t\geq0;\ Z_j(t)=1\}$ for $j=1,\ldots,N$. The filtration $\mathbb{H}=({\mathcal{H}}_t)_{t\in[0,T]}$ is defined by ${\cal H}_t=\bigvee_{j=1}^N{\sigma(Z_j(s);\ s\leq t)}$.
Hence $\mathbb{H}$ contains all information about default events until the terminal time $T$. The market model is specified in detail in the following subsections. \subsection{Regime-Switching Process}\label{sub:RSP} The regime-switching process is described by a continuous time (conservative) Markov chain $Y=(Y(t))_{t\in[0,T]}$ with countable state space $\mathbb{Z}_+:=\mathbb{N}\setminus\{0\}=\{1,2,\ldots\}$. The generator of the Markov chain $Y$ is given by the $Q$-matrix $Q=(q_{ij})_{i,j\in\mathbb{Z}_+}$. This yields that $q_{ii}\leq0$ for $i\in\mathbb{Z}_+$, $q_{ij}\geq0$ for $i\neq j$, and $\sum_{j=1}^{\infty}q_{ij}=0$ for $i\in\mathbb{Z}_+$ (i.e., $\sum_{j\neq i}q_{ij}=-q_{ii}$ for $i\in\mathbb{Z}_+$). \subsection{Credit Risk Model} The joint process $(Y,Z)$ of the regime-switching process and the default indicator process is a Markov process on the state space $\mathbb{Z}_+\times\mathcal{S}$. Moreover, at time $t$, the default indicator process transits from a state $Z(t):=(Z_1(t),\ldots,Z_{j-1}(t),Z_j(t),Z_{j+1}(t),\ldots,Z_N(t))$ in which the obligor $j$ is alive ($Z_j(t)=0$) to the neighbor state ${Z}^j(t):=(Z_1(t),\ldots,Z_{j-1}(t),1-Z_j(t),Z_{j+1}(t),\ldots,Z_N(t))$ in which the obligor $j$ has defaulted at a strictly positive stochastic rate $\lambda_{j}(Y(t),Z(t))$. We assume that $Y$ and $Z_1,\ldots,Z_N$ will not jump simultaneously. Therefore, the default intensity of the $j$-th stock may change either if any other stock in the portfolio defaults (contagion effect), or if there are regime switchings. Our default model thus belongs to the rich class of interacting intensity models, introduced by Frey and Backhaus~\cite{FreyBackhaus04}. We set $\lambda(i,z)=(\lambda_j(i,z);\ j=1,\ldots,N)^{\top}$ for $(i,z)\in\mathbb{Z}_+\times{\cal S}$. \subsection{Price Processes} The price process of the $N$ defaultable stocks is denoted by the vector process $\tilde{P}=(\tilde{P}_j(t);\ j=1,\ldots,N)_{t\in[0,T]}^{\top}$.
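The regime-switching mechanism of Subsection \ref{sub:RSP}, together with the pre-default price dynamics introduced just below, translates directly into a simulation: exponential holding times with rate $-q_{ii}$, jump probabilities $q_{ij}/(-q_{ii})$, and an Euler-Maruyama step for the price with the regime frozen over each step. A minimal single-stock ($N=d=1$) Python sketch, in which the truncated 3-state $Q$-matrix and all coefficients are hypothetical placeholders, not values from the paper:

```python
import math
import random

def simulate_ctmc(Q, y0, T, rng):
    """Simulate a continuous-time Markov chain with generator Q (rows sum to zero)
    on states 0..len(Q)-1, started at y0, over the horizon [0, T]."""
    t, y, path = 0.0, y0, [(0.0, y0)]
    while True:
        rate = -Q[y][y]                          # total jump intensity out of y
        if rate <= 0.0:                          # absorbing state
            break
        t += rng.expovariate(rate)               # exponential holding time
        if t >= T:
            break
        w = [Q[y][j] if j != y else 0.0 for j in range(len(Q))]
        y = rng.choices(range(len(Q)), weights=w)[0]
        path.append((t, y))
    return path

def regime_at(path, t):
    """Regime in force at time t for a piecewise-constant path [(t_k, y_k), ...]."""
    y = path[0][1]
    for s, state in path:
        if s <= t:
            y = state
    return y

def euler_predefault(path, mu, lam, sigma, p0, T, n_steps, rng):
    """Euler-Maruyama scheme for the one-stock pre-default SDE
    dP = P [ (mu(Y) + lambda(Y)) dt + sigma(Y) dW ], default state frozen."""
    dt, p = T / n_steps, p0
    for k in range(n_steps):
        y = regime_at(path, k * dt)
        dw = rng.gauss(0.0, math.sqrt(dt))
        p *= 1.0 + (mu[y] + lam[y]) * dt + sigma[y] * dw
    return p

rng = random.Random(42)
Q = [[-1.0, 0.7, 0.3],                           # hypothetical 3-regime generator
     [0.5, -1.2, 0.7],
     [0.2, 0.8, -1.0]]
path = simulate_ctmc(Q, 0, T=1.0, rng=rng)
mu = {0: 0.05, 1: 0.02, 2: 0.03}                 # illustrative coefficients only
lam = {0: 0.01, 1: 0.10, 2: 0.05}
sigma = {0: 0.2, 1: 0.4, 2: 0.3}
p_T = euler_predefault(path, mu, lam, sigma, p0=100.0, T=1.0, n_steps=250, rng=rng)
```

This is the kind of finite-state truncation that our approximation arguments later make rigorous; the sketch makes no claim about convergence rates.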
Here the price process of the $j$-th stock is given by, for $t\in[0,T]$, \begin{equation}\label{eq:pricedef} \tilde{P}_j(t)=(1-Z_j(t))P_j(t), \ \ \ j = 1,\ldots,N, \end{equation} where $P=(P_j(t);\ j=1,\ldots,N)_{t\in[0,T]}^{\top}$ represents the pre-default price of the $N$ stocks. In particular, the price of the $j$-th stock is given by the pre-default price $P_j(t)$ up to ${\tau_j}-$, jumps to $0$ at the default time ${\tau_j}$ and remains at $0$ afterwards. The pre-default price process $P$ of the $N$ defaultable stocks is assumed to satisfy \begin{align}\label{eq:P} dP(t) = {\rm diag}(P(t)) [(\mu(Y(t))+\lambda(Y(t),Z(t))) dt + \sigma(Y(t))dW(t)], \end{align} where ${\rm diag}(P(t))$ is the $N\times N$ diagonal matrix with diagonal elements $P_i(t)$. For each $i\in\mathbb{Z}_+$, $\mu(i)$ is an $\R^N$-valued column vector and $\sigma(i)$ is an $\R^{N\times d}$-valued matrix such that $\sigma(i)\sigma(i)^\top$ is positive definite. By Eqs.~\eqref{eq:pricedef} and \eqref{eq:P} and integration by parts, the price dynamics of the defaultable stocks satisfies \begin{align}\label{eq:tildeP} d\tilde{P}(t) = {\rm diag}(\tilde{P}(t-)) [\mu(Y(t))dt + \sigma(Y(t))dW(t)-dM(t)]. \end{align} Here, $M=(M_j(t);\ j=1,\ldots,N)_{t\in[0,T]}^{\top}$ is a pure jump $\mathbb{G}=(\G_t)_{t\in[0,T]}$-martingale given by \begin{align}\label{eq:taui} M_j(t)&:= Z_j(t) - \int_0^{t\wedge\tau_j}\lambda_j(Y(s),Z(s))ds,\ \ \ \ \ \ t\in[0,T]. \end{align} By the construction of the default indicator process $Z$ in Bo and Capponi~\cite{BoCapponi18}, it can be seen that $W$ is also a $\mathbb{G}$-Brownian motion using the condition (M.2a) in Section 6.1.1 of Chapter 6 in Bielecki and Rutkowski~\cite{BieRut04}. \section{Dynamic Optimization Problem} \label{risksens} In this section, we formally derive the dynamic programming equation (DPE) associated with the risk-sensitive stochastic control problem.
We first reformulate the risk-sensitive portfolio optimization problem in an equivalent form in Section \ref{sec:wealth}. The corresponding DPE will be derived and analyzed in Section \ref{sec:DPE}. \subsection{Formulation of Portfolio Optimization Problem} \label{sec:wealth} Let us first introduce the setup and formulate the risk-sensitive portfolio optimization problem. For $t\in[0,T]$, let $\phi_B(t)$ represent the number of shares of the risk-free asset and let $\phi_j(t)$ denote the number of shares of the $j$-th stock at time $t$ held by the investor. The resulting wealth process is given by \begin{align*} X^{\phi}(t) = \sum_{j=1}^N\phi_j(t)\tilde{P}_j(t) + \phi_B(t)B(t),\ \ \ t\in[0,T]. \end{align*} Using the price representation~\eqref{eq:pricedef} of stocks, the above wealth process can be rewritten as: \begin{align*} X^{\phi}(t)=\sum_{j=1}^N\phi_j(t) {(1-Z_j(t))} P_j(t)+\phi_B(t)B(t),\ \ \ t\in[0,T]. \end{align*} For a given positive wealth process, we can consider the fractions of wealth invested in the stocks and money market account as follows: for $j=1,\ldots,N$, let us define $\tilde{\pi}_j(t)=\frac{\phi_j(t)\tilde{P}_j(t-)}{X^{\phi}(t-)}$ and $\tilde{\pi}_B(t)=1-\tilde{\pi}(t)^{\top}e_N$, where $\tilde{\pi}(t)=(\tilde{\pi}_i(t);\ i=1,\ldots,N)^{\top}$, and $e_N = \big(\underbrace{1,1,\ldots,1}_{N\ \text{ones}}\big)^{\top}$. Since the price of the $j$-th stock jumps to zero at its default time, the fraction of wealth held by the investor in this stock is zero afterwards. In particular, the following equality holds: {$\tilde{\pi}_j(t)=(1-Z_j(t-))\tilde{\pi}_j(t)$ for $j=1,\ldots,N$}.
Therefore, the self-financing condition leads to wealth dynamics in the following form: $X^{\tilde{\pi}}(0)=x\in\R_+:=(0,\infty)$, and \begin{align}\label{eq:wealth} dX^{\tilde{\pi}}(t) &= X^{\tilde{\pi}}(t-)\tilde{\pi}(t)^{\top}{\rm diag}(\tilde{P}(t-))^{-1}d\tilde{P}(t) + X^{\tilde{\pi}}(t)(1-\tilde{\pi}(t)^{\top}e_N)\frac{dB(t)}{B(t)}\\ &=X^{\tilde{\pi}}(t)\big[r(Y(t))+\tilde{\pi}(t)^{\top}(\mu(Y(t))-r(Y(t))e_N)\big]dt+ X^{\tilde{\pi}}(t-)\tilde{\pi}(t)^{\top}[\sigma(Y(t))dW(t)-dM(t)].\nonumber \end{align} We next introduce the definition of the set of all admissible controls used in the paper. \begin{definition}\label{def:add-con} The admissible control set $\tilde{\cal U}$ is a class of $\mathbb{G}$-predictable feedback strategies $\tilde{\pi}(t)=(\tilde{\pi}_j(t);\ j=1,\ldots,N)^{\top}$, $t\in[0,T]$, given by $\tilde{\pi}_j(t)=\pi_j(t,X^{\tilde{\pi}}(t-),Y(t-),Z(t-))$ such that SDE~\eqref{eq:wealth} admits a unique positive (strong) solution for $X^{\tilde{\pi}}(0)=x\in\R_+$ (i.e. the feedback strategies $\tilde{\pi}(t)$ should take values in $U:=(-\infty,1)^N$). Furthermore, the control $\tilde{\pi}=(\tilde{\pi}(t))_{t\in[0,T]}$ is required to make the positive process $\Gamma^{\tilde{\pi},\theta}=(\Gamma^{\tilde{\pi},\theta}(t))_{t\in[0,T]}$ defined later by \eqref{eq:Gam} to be a $\Px$-martingale. \end{definition} We will prove the martingale property of $\Gamma^{\tilde{\pi}^*,\theta}$ for a candidate optimal strategy $\tilde{\pi}^*$ by verifying the generalized Novikov's condition in Section~\ref{sec:mainres}. We consider the following {risk-sensitive} objective functional. For $\tilde{\pi}\in\tilde{\cal U}$, and given the initial values $(X(0),Y(0),Z(0))=(x,i,z)\in\R_+\times\mathbb{Z}_+\times{\cal S}$, we define \begin{align}\label{eq:J0} {\cal J}(\tilde{\pi};T,x,i,z) := -\frac{2}{\theta}\log\Ex\left[\exp\left(-\frac{\theta}{2}\log X^{\tilde{\pi}}(T)\right)\right]=-\frac{2}{\theta}\log\Ex\left[(X^{\tilde{\pi}}(T))^{-\frac{\theta}{2}}\right]. 
\end{align} The investor aims to maximize the objective functional ${\cal J}$ over all strategies $\tilde{\pi}\in\tilde{\cal U}$. We focus only on the case $\theta\in(0,\infty)$, corresponding to a risk-sensitive investor. The case $\theta\in(-2,0)$ is ignored, as it corresponds to risk-seeking behavior, which is rarely encountered in practice. The objective functional \eqref{eq:J0} has been considered in the existing literature (see, e.g., Bielecki and Pliska~\cite{BiePliska99}) for dynamic asset allocation in the presence of market risk; however, the setting with both default risk and regime switching remains open, and this motivates the present work. In our setting, Eq.~(1.1) in Bielecki and Pliska~\cite{BiePliska99} reads: for $\theta$ close to $0$, \begin{align}\label{eq:rem-000} {\cal J}(\tilde{\pi};T,x,i,z)=\Ex\left[\log\left(X^{\tilde{\pi}}(T)\right)\right]-\frac{\theta}{4}{\rm Var}\left(\log(X^{\tilde{\pi}}(T))\right)+o(\theta^2), \end{align} where the $o(\theta^2)$ term will typically depend on the terminal horizon $T$. Then ${\cal J}(\tilde{\pi};T,x,i,z)$ may be interpreted as the growth rate of the investor's wealth minus a penalty term proportional to the variance of the realized rate, with an error of order $\theta^2$. This establishes a link between the risk-sensitive control problem and robust decision making. A risk-sensitive investor designs a decision rule which protects him against large deviations of the growth rate from its expectation, and achieves this by choosing larger values of the parameter $\theta$. We next rewrite the objective functional as the exponential of an integral criterion (similar to Nagai and Peng~\cite{PengNagai}, and Capponi et al.~\cite{CappPascucci}), which will turn out to be convenient for the analysis of the dynamic programming equation.
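When the log-wealth is Gaussian, the mean--variance expansion \eqref{eq:rem-000} is in fact exact: if $\log X^{\tilde{\pi}}(T)\sim N(m,s^2)$, then ${\cal J}=m-\frac{\theta}{4}s^2$ with no remainder. A minimal numerical sketch (the values of $\theta$, $m$, $s$ below are purely illustrative and not taken from the paper) checks this identity by Gauss--Hermite quadrature:

```python
import numpy as np

# Illustrative (assumed) values: risk sensitivity and Gaussian log-wealth
# parameters, log X(T) ~ N(m, s^2).
theta, m, s = 0.8, 0.05, 0.30

# Gauss-Hermite quadrature for E[X^{-theta/2}] = E[exp(-(theta/2) log X)].
t, w = np.polynomial.hermite.hermgauss(60)        # nodes/weights, weight e^{-x^2}
log_wealth = m + s * np.sqrt(2.0) * t             # change of variables
E = np.sum(w * np.exp(-0.5 * theta * log_wealth)) / np.sqrt(np.pi)

J = -(2.0 / theta) * np.log(E)                    # risk-sensitive criterion
J_mean_var = m - (theta / 4.0) * s ** 2           # mean-variance expansion
```

For non-Gaussian terminal wealth the two quantities differ by the $o(\theta^2)$ remainder in \eqref{eq:rem-000}; here they agree to quadrature accuracy.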
For all $\tilde{\pi}\in\tilde{\cal U}$, the wealth process solving SDE~\eqref{eq:wealth} is given by \begin{align*} X^{\tilde{\pi}}(T)=&x\exp\Bigg\{\int_0^T\big[r(Y(s))+\tilde{\pi}^{\top}(s)({\mu}(Y(s))-r(Y(s))e_N)\big]ds+\int_0^T\tilde{\pi}^{\top}(s)\sigma(Y(s))dW(s)\nonumber\\ &-\frac{1}{2}\int_0^T\left\|\sigma(Y(s))^{\top}\tilde{\pi}(s)\right\|^2ds+\sum_{j=1}^N\int_0^T\log(1-\tilde{\pi}_j(s))dM_j(s)\nonumber\\ &+\sum_{j=1}^N\int_0^{T\wedge\tau_j}\lambda_j(Y(s),Z(s))\big[\tilde{\pi}_j(s)+\log(1-\tilde{\pi}_j(s))\big]ds\Bigg\}, \end{align*} and consequently \begin{align}\label{eq:solution} \left(X^{\tilde{\pi}}(T)\right)^{-\frac{\theta}{2}} &=x^{-\frac{\theta}{2}}\Gamma^{\tilde{\pi},\theta}(T)\exp\left(\frac{\theta}{2}\int_0^TL(\tilde{\pi}(s);Y(s),Z(s))ds\right), \end{align} where, for $(\pi,i,z)\in U\times\mathbb{Z}_+\times{\cal S}$, the risk sensitive function $L(\pi;i,z)$ is defined by \begin{align}\label{eq:L0} L(\pi;i,z)&:= -r(i)-\pi^{\top}(\mu(i)-r(i)e_N)+\frac{1}{2}\left(1+\frac{\theta}{2}\right)\left\|\sigma(i)^{\top}\pi\right\|^2\nonumber\\ &\quad-\sum_{j=1}^N(1-z_j)\left[\frac{2}{\theta}+\pi_j-\frac{2}{\theta}(1-\pi_j)^{-\frac{\theta}{2}}\right]\lambda_j(i,z). \end{align} Here, the positive density process is given by, for $t\in[0,T]$, \begin{align}\label{eq:Gam} \Gamma^{\tilde{\pi},\theta}(t)&:={\cal E}(\Pi^{\tilde{\pi},\theta})_t,\\ \Pi^{\tilde{\pi},\theta}(t)&:=-\frac{\theta}{2}\int_0^t\tilde{\pi}(s)^{\top}\sigma(Y(s))dW(s)+\sum_{j=1}^N\int_0^t\{(1-\tilde{\pi}_j(s))^{-\frac{\theta}{2}}-1\}dM_j(s),\nonumber \end{align} where ${\cal E}(\cdot)$ denotes the stochastic exponential. As $\tilde{\pi}\in\tilde{\cal U}$, we have that $\Gamma^{\tilde{\pi},\theta}=(\Gamma^{\tilde{\pi},\theta}(t))_{t\in[0,T]}$ is a $\Px$-martingale. 
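The risk-sensitive function $L$ in \eqref{eq:L0} is elementary to evaluate numerically. The sketch below (with hypothetical two-stock, single-regime parameters) also confirms the sanity check $L(0;i,z)=-r(i)$: at $\pi=0$ both the quadratic term and the jump term vanish.

```python
import numpy as np

def L(pi, r, mu, sigma, lam, z, theta):
    """Risk-sensitive function L(pi; i, z), for a fixed regime i.

    pi    : (N,) fractions of wealth in the stocks, each entry < 1
    r     : short rate r(i)
    mu    : (N,) drift vector mu(i)
    sigma : (N, d) volatility matrix sigma(i)
    lam   : (N,) default intensities lambda_j(i, z)
    z     : (N,) default indicators in {0, 1}
    theta : risk-sensitivity parameter, theta > 0
    """
    pi = np.asarray(pi, dtype=float)
    quad = 0.5 * (1.0 + 0.5 * theta) * np.sum((sigma.T @ pi) ** 2)
    jump = np.sum((1.0 - z) * lam * (2.0 / theta + pi
                  - (2.0 / theta) * (1.0 - pi) ** (-0.5 * theta)))
    return -r - pi @ (mu - r) + quad - jump

# Hypothetical two-stock data in a single regime (illustrative only).
r, theta = 0.03, 0.5
mu = np.array([0.08, 0.06])
sigma = np.array([[0.20, 0.00], [0.05, 0.15]])
lam = np.array([0.10, 0.20])
z = np.zeros(2)   # both stocks alive
```

For nonzero $\pi$ the quadratic term penalizes volatility exposure, while the last sum prices exposure to the default jumps.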
We can thus define the change of measure \begin{align}\label{eq:change-measure} \frac{d\Px^{\tilde{\pi},\theta}}{d\Px}\big|_{\G_t}=\Gamma^{\tilde{\pi},\theta}(t),\ \ \ \ \ \ t\in[0,T], \end{align} under which \begin{align}\label{eq:BMtheta} W^{\tilde{\pi},\theta}(t):=W(t)+\frac{\theta}{2}\int_0^t\sigma(Y(s))^{\top}\tilde{\pi}(s)ds,\ \ \ \ \ \ t\in[0,T] \end{align} is a $d$-dimensional Brownian motion, while under $\Px^{\tilde{\pi},\theta}$, for $j=1,\ldots,N$, it holds that \begin{align}\label{eq:Girjump} M_j^{\tilde{\pi},\theta}(t):=Z_j(t)-\int_0^{t\wedge\tau_j}(1-\tilde{\pi}_j(s))^{-\frac{\theta}{2}}\lambda_j(Y(s),Z(s))ds,\qquad t\in[0,T] \end{align} is a martingale. The definition of $\Px^{\tilde{\pi},\theta}$ enables us to rewrite the above {risk-sensitive} objective functional~\eqref{eq:J0} in an exponential form. From~\eqref{eq:solution}, we deduce that \begin{align*}\label{eq:J2} {\cal J}(\tilde{\pi};T,x,i,z) &= -\frac{2}{\theta}\log\Ex\left[\left(X^{\tilde{\pi}}(T)\right)^{-\frac{\theta}{2}}\right]=-\frac{2}{\theta}\log\Ex\left[x^{-\frac{\theta}{2}}\Gamma^{\tilde{\pi},\theta}(T)\exp\left(\frac{\theta}{2}\int_0^TL(\tilde{\pi}(s);Y(s),Z(s))ds\right)\right]\nonumber\\ &=\log x -\frac{2}{\theta}\log\Ex^{\tilde{\pi},\theta}\left[\exp\left(\frac{\theta}{2}\int_0^TL(\tilde{\pi}(s);Y(s),Z(s))ds\right)\right]=:\log x + \bar{{\cal J}}(\tilde{\pi};T,i,z). \end{align*} Here $\Ex^{\tilde{\pi},\theta}$ represents the expectation w.r.t. $\Px^{\tilde{\pi},\theta}$ defined by \eqref{eq:change-measure}. Thanks to the relationship between ${\cal J}$ and $\bar{{\cal J}}$, our original problem is equivalent to maximizing $\bar{{\cal J}}$ over $\tilde{\pi}\in\tilde{\cal U}$.
We can therefore reformulate the value function of the risk-sensitive control problem as: \begin{equation}\label{eq:value-fcn} V(T,i,z) = \sup_{\tilde{\pi}\in\tilde{\cal U}} \bar{{\cal J}}(\tilde{\pi};T,i,z)=-\frac{2}{\theta}\inf_{\tilde{\pi}\in\tilde{\cal U}} \log\Ex^{\tilde{\pi},\theta}\left[\exp\left(\frac{\theta}{2}\int_0^TL(\tilde{\pi}(s);Y(s),Z(s))ds\right)\right], \end{equation} for $(i,z)\in\mathbb{Z}_+\times{\cal S}$. \subsection{Dynamic Programming Equations} \label{sec:DPE} In this section, we first derive the dynamic programming equation (DPE) satisfied by the value function~\eqref{eq:value-fcn}, following the heuristic arguments of Birge et al.~\cite{BirBoCap17}. The rigorous verification that the solution of the DPE indeed coincides with the value function of our risk-sensitive control problem is postponed to the next section. Let $(t,i,z)\in[0,T]\times\mathbb{Z}_+\times{\cal S}$ and define \begin{equation}\label{eq:J} \bar{V}(t,i,z) :=-\frac{2}{\theta}\inf_{\tilde{\pi}\in\tilde{\cal U}}\log J(\tilde{\pi};t,i,z):= -\frac{2}{\theta}\inf_{\tilde{\pi}\in\tilde{\cal U}}\log\Ex_{t,i,z}^{\tilde{\pi},\theta}\left[\exp\left(\frac{\theta}{2}\int_t^TL(\tilde{\pi}(s);Y(s),Z(s))ds\right)\right], \end{equation} where $\Ex_{t,i,z}^{\tilde{\pi},\theta}[\cdot]:=\Ex^{\tilde{\pi},\theta}[\cdot|Y(t)=i,Z(t)=z]$. This yields the relation ${V}(T,i,z)=\bar{V}(0,i,z)$.
For $0\leq t<s\leq T$, the dynamic programming principle leads to \begin{equation}\label{eq:dpp} \bar{V}(t,i,z)= -\frac{2}{\theta}\inf_{\tilde{\pi}\in\tilde{\cal U}}\log\Ex_{t,i,z}^{\tilde{\pi},\theta}\left[\exp\left(-\frac{\theta}{2}\bar{V}(s,Y(s),Z(s))+\frac{\theta}{2}\int_t^sL(\tilde{\pi}(u);Y(u),Z(u))du\right)\right]. \end{equation} {Following the heuristic arguments of Birge et al.~\cite{BirBoCap17}, we obtain the following DPE satisfied by $\bar{V}$: for all $(t,i,z)\in[0,T)\times\mathbb{Z}_+\times{\cal S}$, \begin{align}\label{eq:dpe2} 0=&\frac{\partial \bar{V}(t,i,z)}{\partial t}-\frac{2}{\theta}\sum_{l\neq i}q_{il}\left[\exp\left(-\frac{\theta}{2}\big(\bar{V}(t,l,z)-\bar{V}(t,i,z)\big)\right)-1\right]\nonumber\\ &+\sup_{\pi\in U}H\left(\pi;i,z,(\bar{V}(t,i,z^j);\ j=0,1,\ldots,N)\right) \end{align} with terminal condition $\bar{V}(T,i,z)=0$ for all $(i,z)\in\mathbb{Z}_+\times{\cal S}$. In the above equation, the function $H$ is defined by, for $(\pi,i,z)\in U\times\mathbb{Z}_+\times{\cal S}$, \begin{align}\label{eq:H} H(\pi;i,z,\bar{f}(z)):=&-\frac{2}{\theta}\sum_{j=1}^N\left[\exp\left(-\frac{\theta}{2}(f({z}^j)-f(z))\right)-1\right](1-z_j)(1-\pi_j)^{-\frac{\theta}{2}}\lambda_j(i,z)\nonumber\\ &+r(i)+\pi^{\top}(\mu(i)-r(i)e_N)-\frac{1}{2}\left(1+\frac{\theta}{2}\right)\left\|\sigma(i)^{\top}\pi\right\|^2\nonumber\\ &+\sum_{j=1}^N\left[\frac{2}{\theta}+\pi_j-\frac{2}{\theta}(1-\pi_j)^{-\frac{\theta}{2}}\right](1-z_j)\lambda_j(i,z). \end{align} Here $\bar{f}(z)=(f(z^j);\ j=0,1,\ldots,N)$ for any measurable function $f(z)$, with the convention $z^0:=z$. Above, we used the notation ${z}^j:=(z_1,\ldots,z_{j-1},1-z_j,z_{j+1},\ldots,z_N)$ for $z\in{\cal S}$ and $j=1,\ldots,N$.} Eq.~\eqref{eq:dpe2} is in fact a recursive system of DPEs. We consider the following Cole-Hopf transform of the solution: \begin{align}\label{eq:exp-trnas} \varphi(t,i,z):=\exp\left(-\frac{\theta}{2}\bar{V}(t,i,z)\right),\qquad (t,i,z)\in[0,T]\times\mathbb{Z}_+\times{\cal S}.
\end{align} Then $\frac{\partial \varphi(t,i,z)}{\partial t}=-\frac{\theta}{2}\varphi(t,i,z)\frac{\partial \bar{V}(t,i,z)}{\partial t}$ for $(t,i,z)\in[0,T]\times\mathbb{Z}_+\times{\cal S}$. Plugging it into Eq.~\eqref{eq:dpe2}, and noting that $\exp\big(-\frac{\theta}{2}(\bar{V}(t,l,z)-\bar{V}(t,i,z))\big)=\varphi(t,l,z)/\varphi(t,i,z)$, we get that \begin{align}\label{eq:dpe3} 0=&\frac{\partial \varphi(t,i,z)}{\partial t}+\sum_{l\neq i}q_{il}\left[\varphi(t,l,z)-\varphi(t,i,z)\right]+\inf_{\pi\in U}\tilde{H}\left(\pi;i,z,(\varphi(t,i,z^j);\ j=0,1,\ldots,N)\right) \end{align} with terminal condition $\varphi(T,i,z)=1$ for all $(i,z)\in\mathbb{Z}_+\times{\cal S}$. In the above equation, the function $\tilde{H}$ is defined by \begin{align}\label{eq:tildeH} \tilde{H}(\pi;i,z,\bar{f}(z)) :=&\Bigg\{-\frac{\theta}{2}r(i)-\frac{\theta}{2}\pi^{\top}(\mu(i)-r(i)e_N)+\frac{\theta}{4}\left(1+\frac{\theta}{2}\right)\left\|\sigma(i)^{\top}\pi\right\|^2 \\ &+\sum_{j=1}^N\left(-1-\frac{\theta}{2}\pi_j\right)(1-z_j)\lambda_j(i,z)\Bigg\}f(z)+\sum_{j=1}^Nf(z^j)(1-z_j)(1-\pi_j)^{-\frac{\theta}{2}}\lambda_j(i,z).\nonumber \end{align} \section{Main Results and Verification Theorems}\label{sec:mainres} We analyze the existence of global solutions of the recursive system of DPEs \eqref{eq:dpe3} in a two-step procedure. First, we investigate the existence and uniqueness of classical solutions of Eq.~\eqref{eq:dpe3}, viewed as a dynamical system, when the Markov chain $Y$ takes values in a finite state space. Second, we study the countably infinite state case using approximation arguments. Let us introduce some notation which will be used frequently in this section. Let $n\in\mathbb{Z}_+$. For $x\in\mathbb{R}^n$, we write $x=(x_1,\ldots,x_n)^{\top}$. For any $x,y\in\R^n$, we write $x\leq y$ if $x_i\leq y_i$ for all $i=1,\ldots,n$, and $x<y$ if $x\leq y$ and there exists some $i\in\{1,\ldots,n\}$ such that $x_i<y_i$. Finally, $x\ll y$ means that $x_i<y_i$ for all $i=1,\ldots,n$. Recall that $e_{N}$ denotes the $N$-dimensional column vector whose entries are all ones.
For a general default state $z\in{\cal S}$, we introduce the representation $z=0^{j_1,\ldots,j_k}$ for distinct indices $j_1,\ldots,j_k$ belonging to $\{1,\ldots,N\}$, and $k\in\{0,1,\ldots,N\}$. Such a vector $z$ is obtained by flipping the entries $j_1,\ldots,j_k$ of the zero vector to one, i.e. $z_{j_1}=\cdots=z_{j_k}=1$, and $z_{j}=0$ for $j\notin\{j_1,\ldots,j_k\}$ (if $k=0$, we set $z=0^{j_1,\ldots,j_k}=0$). Clearly $0^{j_1,\ldots,j_{N}}=e_N^{\top}$. \subsection{Finite State Case of Regime-Switching Process}\label{sec:finite-states} In this section, we study the case where the regime-switching process $Y$ is defined on the finite state space $D_n=\{1,\ldots,n\}$, where $n\in\mathbb{Z}_+$ is a fixed number. The corresponding $Q$-matrix of the Markov chain $Y$ is given by $Q_n=(q_{ij})_{i,j\in D_n}$, satisfying $\sum_{j\in D_n}q_{ij}=0$ for $i\in D_n$ and $q_{ij}\geq0$ for $i\neq j$. It is worth noting that the $q_{ij}$, $i,j\in D_n$, here may differ from those given in Subsection~\ref{sub:RSP}; with slight abuse of notation, we keep the symbol $q_{ij}$ for convenience. Let $\varphi(t,z):=(\varphi(t,i,z);\ i=1,\ldots,n)^{\top}$ denote the column vector of solution values for $(t,z)\in[0,T]\times{\cal S}$. Then, we can rewrite Eq.~\eqref{eq:dpe3} as the following dynamical system: \begin{align}\label{eq:hjbeqn} \left\{ \begin{aligned} \frac{\partial \varphi(t,z)}{\partial t}+\big(Q_n+{\rm diag}(\nu(z))\big)\varphi(t,z)+G(t,\varphi(t,z),z)=&0,\quad (t,z)\in[0,T)\times{\cal S};\\ \varphi(T,z)=&e_n,\quad \text{for all }z\in{\cal S}. \end{aligned} \right.
\end{align} Here, the vector-valued function $G(t,x,z)=(G_i(t,x,z);\ i=1,\ldots,n)^{\top}$ is given by, for each $i\in D_n$ and $(t,x,z)\in[0,T]\times\R^n\times{\cal S}$, \begin{align} G_i(t,x,z)=&\inf_{\pi\in U}\Bigg\{\sum_{j=1}^N\varphi(t,i,z^j)(1-z_j)(1-\pi_j)^{-\frac\theta2}\lambda_j(i,z)\\ &+\bigg[\frac\theta4(1+\frac\theta2)\left\|\sigma(i)^{\top}\pi\right\|^2-\frac\theta2\pi^\top(\mu(i)-r(i)e_N)-\frac{\theta}{2}\sum_{j=1}^N\pi_j(1-z_j)\lambda_j(i,z)\bigg]x_i\Bigg\}.\nonumber \end{align} The coefficient vector $\nu(z)=(\nu_i(z);\ i=1,\ldots,n)^{\top}$ for $z\in{\cal S}$ is given by, for each $i\in D_n$, \begin{align}\label{eq:nuz} \nu_i(z)=-\frac{\theta}{2}r(i)-\sum_{j=1}^N(1-z_j)\lambda_j(i,z). \end{align} The system \eqref{eq:hjbeqn} is recursive in the default states $z=0^{j_1,\ldots,j_k}\in{\cal S}$, $k=0,1,\ldots,N$, and its solvability can be analyzed recursively in these states. Therefore, our strategy for analyzing the system is based on a recursive procedure, starting from the default state $z=e_N^{\top}$ (i.e., all stocks have defaulted) and proceeding backward to the default state $z=0$ (i.e., all stocks are alive). \begin{itemize} \item[(i)] $k=N$ (i.e., all stocks have defaulted). In this default state, the investor clearly does not invest in the stocks, and hence the optimal fraction strategy is $\pi_1^*=\cdots=\pi_N^*=0$ by virtue of Definition~\ref{def:add-con}. Let $\varphi(t,e_N^{\top})=(\varphi(t,i,e_N^{\top});\ i=1,\ldots,n)^{\top}$. As a consequence, the dynamical system \eqref{eq:hjbeqn} can be written as \begin{align}\label{eq:hjben} \left\{ \begin{aligned} \frac d{dt}\varphi(t,e_N^{\top})=&-A^{(N)}\varphi(t,e_N^{\top}),\quad\text{ in }[0,T);\\ \varphi(T,e_N^{\top})=&e_n. \end{aligned} \right. \end{align} Here the matrix of coefficients is $A^{(N)}:=Q_n+{\rm diag}(\nu(e_N^{\top}))$.
\end{itemize} In order to establish the unique positive solution to the above dynamical system \eqref{eq:hjben}, we need the following auxiliary result. \begin{lemma}\label{lem:sol-hjben2} Let $g(t)=(g_i(t);\ i=1,\ldots,n)^{\top}$ satisfy the following dynamical system: \begin{align*} \left\{ \begin{aligned} \frac d{dt}g(t)=&Bg(t)\quad\text{ in }(0,T];\\ g(0)=&\xi. \end{aligned} \right. \end{align*} If $B=(b_{ij})_{n\times n}$ satisfies $b_{ij}\geq 0$ for $i\neq j$ and $\xi\gg0$, then we have $g(t)\gg0$ for all $t\in[0,T]$. \end{lemma} \noindent{\it Proof.}\quad Define $f(x)=Bx$ for $x\in\R^n$. By virtue of Proposition 1.1 of Chapter 3 in \cite{smith08}, it suffices to verify that $f:\R^n\to\R^n$ is of type $K$, i.e., for any $x,y\in\R^n$ satisfying $x\leq y$ and $x_i=y_i$ {for some $i\in\{1,\ldots,n\}$}, it holds that $f_i(x)\leq f_i(y)$. Notice that $b_{ij}\geq0$ for all $i\neq j$. Then, we have that \begin{align}\label{eq:111} f_i(x)&=(Bx)_i=\sum_{j=1}^nb_{ij}x_j=b_{ii}x_i+\sum_{j=1,j\neq i}^nb_{ij}x_j\nonumber\\ &=b_{ii}y_i+\sum_{j=1,j\neq i}^nb_{ij}x_j \leq b_{ii}y_i+\sum_{j=1,j\neq i}^nb_{ij}y_j=f_i(y), \end{align} and hence $f$ is of type $K$. This completes the proof of the lemma. \hfill$\Box$\\ The next result is a consequence of the previous lemma. \begin{lemma}\label{lem:sol-hjben} The dynamical system \eqref{eq:hjben} admits a unique solution, given by \begin{align}\label{eq:varphien} \varphi(t,e_N^{\top})= e^{A^{(N)}(T-t)}e_n=\sum_{i=0}^{\infty}\frac{(A^{(N)})^i(T-t)^i}{i!}e_n,\quad t\in[0,T], \end{align} where the $n\times n$-dimensional matrix $A^{(N)}= Q_n+{\rm diag}(\nu(e_N^{\top}))=Q_n-\frac{\theta}{2}{\rm diag}(r)$ with $r=(r(i);\ i=1,\ldots,n)^{\top}$. Moreover, it holds that $\varphi(t,e_N^{\top})\gg 0$ for all $t\in[0,T]$. \end{lemma} \noindent{\it Proof.}\quad The representation \eqref{eq:varphien} of the solution $\varphi(t,e_N^{\top})$ is immediate.
Note that $e_n\gg0$ and $q_{ij}\geq0$ for all $i\neq j$, as $Q_n=(q_{ij})_{n\times n}$ is the generator of the Markov chain. Then, in order to prove $\varphi(t,e_N^{\top})\gg0$ for all $t\in[0,T]$ using Lemma~\ref{lem:sol-hjben2}, it suffices to verify $[A^{(N)}]_{ij}\geq0$ for all $i\neq j$. Indeed, $[A^{(N)}]_{ij}=q_{ij}$ for all $i\neq j$, so the condition of Lemma~\ref{lem:sol-hjben2} is verified, which implies that $\varphi(t,e_N^{\top})\gg0$ for all $t\in[0,T]$. \hfill$\Box$\\ We next consider the general default case $z=0^{j_1,\ldots,j_{k}}$ with $0\leq k\leq N-1$, i.e. the stocks $j_1,\ldots,j_{k}$ have defaulted but the stocks $\{j_{k+1},\ldots,j_N\}:=\{1,\ldots,N\}\setminus\{j_1,\ldots,j_k\}$ remain alive. Then we have \begin{itemize} \item[(ii)] Because the stocks $j_1,\ldots,j_k$ have defaulted, the optimal fraction strategies for these stocks are $\pi_j^{(k,*)}=0$ for $j\in\{j_1,\ldots,j_{k}\}$ by virtue of Definition~\ref{def:add-con}. Let $\varphi^{(k)}(t)=(\varphi(t,i,0^{j_1,\ldots,j_{k}});\ i=1,\ldots,n)^{\top}$ and $\lambda^{(k)}_j(i)=\lambda_j(i,0^{j_1,\ldots,j_{k}})$ for $j\notin\{j_1,\ldots,j_k\}$ and $i=1,\ldots,n$. Then, the DPE \eqref{eq:hjbeqn} corresponding to this default state is given by \begin{align}\label{eq:hjbn-1} \left\{ \begin{aligned} \frac d{dt}\varphi^{(k)}(t)=&-A^{(k)}\varphi^{(k)}(t)-G^{(k)}(t,\varphi^{(k)}(t)),\quad\text{ in }[0,T);\\ \varphi^{(k)}(T)=&e_n. \end{aligned} \right. \end{align} Here, the $n\times n$-dimensional matrix $A^{(k)}$ is given by \begin{align}\label{eq:An-1} A^{(k)}={\rm diag}\left[\left(-\frac{\theta}{2} r(i)-\sum_{j\notin\{j_1,\ldots,j_{k}\}}\lambda_{j}^{(k)}(i);\ i=1,\ldots,n\right)\right]+Q_n.
\end{align} The vector-valued function $G^{(k)}(t,x)=(G^{(k)}_i(t,x);\ i=1,\ldots,n)^{\top}$ for $(t,x)\in[0,T]\times\R^{n}$ is given by, for $i\in D_n$, \begin{align}\label{eq:Gin-1} G^{(k)}_i(t,x):=&\inf_{\pi^{(k)}\in U^{(k)}}\left\{\sum_{j\notin\{j_1,\ldots,j_k\}} \varphi^{(k+1),j}(t,i)\big(1-\pi_{j}^{(k)}\big)^{-\frac{\theta}{2}}\lambda_{j}^{(k)}(i)+H^{(k)}(\pi^{(k)};i)x_i\right\}, \end{align} where, for $(\pi^{(k)},i)\in U^{(k)}\times D_n$, the function $H^{(k)}$ is given by \begin{align}\label{eq:Hk} H^{(k)}(\pi^{(k)};i):=&\frac{\theta}{4}\big(1+\frac{\theta}{2}\big)\left\|\sigma^{(k)}(i)^{\top}\pi^{(k)}\right\|^2 -\frac{\theta}{2}(\pi^{(k)})^{\top}\big(\mu^{(k)}(i)-r(i)e_{N-k}\big)\nonumber\\ &-\frac{\theta}{2}\sum_{j\notin\{j_1,\ldots,j_k\}}\pi_{j}^{(k)}\lambda_{j}^{(k)}(i). \end{align} The policy space of this state is $U^{(k)}=(-\infty,1)^{N-k}$, and $\varphi^{(k+1),j}(t,i):=\varphi(t,i,0^{j_1,\ldots,j_k,j})$ for $j\notin\{j_1,\ldots,j_k\}$ corresponds to the $i$-th element of the positive solution vector of Eq.~\eqref{eq:hjbeqn} at the default state $z=0^{j_1,\ldots,j_k,j}$. Here, for each $i=1,\ldots,n$, we have also used the notation $\pi^{(k)}=(\pi_j^{(k)};\ j\notin\{j_1,\ldots,j_k\})^{\top}$, $\sigma^{(k)}(i)=(\sigma_{j\kappa}(i);\ j\notin\{j_1,\ldots,j_k\},\kappa\in\{1,\ldots,d\})$, and $\mu^{(k)}(i)=(\mu_j(i);\ j\notin\{j_1,\ldots,j_k\})^{\top}$. \end{itemize} From the expression of $G_i^{(k)}(t,x)$ given by \eqref{eq:Gin-1}, it can be seen that the solution $\varphi^{(k)}(t)$ on $t\in[0,T]$ of DPE \eqref{eq:hjbeqn} at the default state $z=0^{j_1,\ldots,j_k}$ in fact depends on the solution $\varphi^{(k+1),j}(t)$ on $t\in[0,T]$ of DPE~\eqref{eq:hjbeqn} at the default state $z=0^{j_1,\ldots,j_k,j}$ for $j\notin\{j_1,\ldots,j_k\}$.
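The backward recursion bottoms out at the all-defaulted state, where Lemma~\ref{lem:sol-hjben} gives the matrix exponential formula \eqref{eq:varphien}. A minimal numerical sketch (the two-regime generator $Q_n$, horizon and short rates below are assumed purely for illustration) confirms the strict positivity $\varphi(t,e_N^{\top})\gg0$ and the terminal condition:

```python
import numpy as np
from scipy.linalg import expm

# Assumed two-regime data (illustrative only).
theta, T = 0.5, 1.0
Q = np.array([[-0.4,  0.4],
              [ 0.3, -0.3]])           # generator Q_n of the chain Y
r = np.array([0.02, 0.05])             # short rates r(i)

A_N = Q - 0.5 * theta * np.diag(r)     # A^(N) = Q_n - (theta/2) diag(r)
e_n = np.ones(2)

# phi(t, e_N^T) = exp(A^(N) (T - t)) e_n, evaluated on a time grid.
ts = np.linspace(0.0, T, 11)
phi = np.array([expm(A_N * (T - t)) @ e_n for t in ts])
```

Positivity hinges on the nonnegative off-diagonal entries of $A^{(N)}$, exactly as in Lemma~\ref{lem:sol-hjben2}; at $t=T$ the matrix exponential reduces to the identity and the terminal value $e_n$ is recovered.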
In particular, when $k=N-1$, the solution $\varphi^{(k+1),j}(t)=\varphi(t,e_N^{\top})\gg0$ is the solution to \eqref{eq:hjbeqn} at the default state $z=e_N^{\top}$ (i.e., $k=N$), which has been obtained in Lemma~\ref{lem:sol-hjben}. This suggests solving DPE~\eqref{eq:hjbeqn} backward recursively in terms of the default states $z=0^{j_1,\ldots,j_k}$. Thus, in order to study the existence and uniqueness of a positive (classical) solution to the dynamical system \eqref{eq:hjbn-1}, we first assume that \eqref{eq:hjbeqn} admits a unique positive (classical) solution $\varphi^{(k+1),j}(t)$ on $t\in[0,T]$ for $j\notin\{j_1,\ldots,j_k\}$. We begin with an estimate on $G^{(k)}(t,x)$, presented in the following lemma. \begin{lemma}\label{lem:Gkesti} For each $k=0,1,\ldots,N-1$, assume that DPE~\eqref{eq:hjbeqn} admits a unique positive (classical) solution $\varphi^{(k+1),j}(t)$ on $t\in[0,T]$ for $j\notin\{j_1,\ldots,j_k\}$. Then, for any $x,y\in\R^n$ satisfying $x,y\geq\varepsilon e_n$ with $\varepsilon>0$, there exists a positive constant $C=C(\varepsilon)$, depending only on $\varepsilon>0$, such that \begin{align}\label{eq:Gkesti} \left\|G^{(k)}(t,x)-G^{(k)}(t,y)\right\|\leq C\left\|x-y\right\|. \end{align} Here $\|\cdot\|$ denotes the Euclidean norm. \end{lemma} \noindent{\it Proof.}\quad It suffices to prove that, for each $i=1,\ldots,n$, $|G^{(k)}_i(t,x)-G^{(k)}_i(t,y)|\leq C(\varepsilon)\|x-y\|$ for any $x,y\in\R^n$ satisfying $x,y\geq\varepsilon e_n$ with $\varepsilon>0$, where $C(\varepsilon)>0$ is independent of time $t$. By the recursive assumption, $\varphi^{(k+1),j}(t)$ on $t\in[0,T]$ is the unique positive (classical) solution to \eqref{eq:hjbeqn} for $j\notin\{j_1,\ldots,j_k\}$. Hence it is continuous on $[0,T]$, which implies the existence of a constant $C_0>0$, independent of $t$, such that $\sup_{t\in[0,T]}\|\varphi^{(k+1),j}(t)\|\leq C_0$ for $j\notin\{j_1,\ldots,j_k\}$.
Thus, by \eqref{eq:Gin-1}, and since $H^{(k)}(0;i)=0$ for all $i\in D_n$ by \eqref{eq:Hk}, it follows that, for all $(t,x)\in[0,T]\times\R^n$, \begin{align}\label{eq:gless} G^{(k)}_i(t,x)\leq&\left[\sum_{j\notin\{j_1,\ldots,j_k\}} \varphi^{(k+1),j}(t,i)(1-\pi_{j}^{(k)})^{-\frac{\theta}{2}}\lambda_{j}^{(k)}(i)+H^{(k)}(\pi^{(k)};i)x_i\right]\Bigg|_{\pi^{(k)}=0}\nonumber\\ =&\sum_{j\notin\{j_1,\ldots,j_k\}} \varphi^{(k+1),j}(t,i)\lambda_{j}^{(k)}(i)\leq C_0 \sum_{j\notin\{j_1,\ldots,j_k\}}\lambda_{j}^{(k)}(i). \end{align} On the other hand, as $\sigma^{(k)}(i)\sigma^{(k)}(i)^{\top}$ is positive definite (being a principal submatrix of $\sigma(i)\sigma(i)^{\top}$), there exists a constant $\delta>0$ such that $\left\|\sigma^{(k)}(i)^{\top}\pi^{(k)}\right\|^2\geq\delta\left\|\pi^{(k)}\right\|^2$ for all $i\in D_n$. Hence, the following estimate holds: \begin{align}\label{eq:esti1} &H^{(k)}(\pi^{(k)};i)\geq\frac{\theta}{4}(1+\frac{\theta}{2})\delta\left\|\pi^{(k)}\right\|^2-\frac{\theta}{2}\left(\left\|\mu^{(k)}(i)-r(i)e_{N-k}\right\|+\sum_{j\notin\{j_1,\ldots,j_k\}} \lambda_{j}^{(k)}(i)\right)\left\|\pi^{(k)}\right\|. \end{align} We next take the positive constant \[ C_1:=2\frac{\left\|\mu^{(k)}(i)-r(i)e_{N-k}\right\|+\sum_{j\notin\{j_1,\ldots,j_k\}}\lambda_j^{(k)}(i)}{(1+\frac\theta2)\delta}. \] For all $\pi^{(k)}\in\{\pi^{(k)}\in U^{(k)};\ \|\pi^{(k)}\|\geq C_1\}$, it holds that \begin{align}\label{eq:large0} H^{(k)}(\pi^{(k)};i)\geq 0,\qquad i\in D_n.
\end{align} Then, for all $\pi^{(k)}\in\{\pi^{(k)}\in U^{(k)};\ \|\pi^{(k)}\|\geq C_1\}$ and all $x\geq\varepsilon e_n$, we deduce from \eqref{eq:esti1} and \eqref{eq:large0} that \begin{align*} &\sum_{j\notin\{j_1,\ldots,j_k\}} \varphi^{(k+1),j}(t,i)(1-\pi_{j}^{(k)})^{-\frac{\theta}{2}}\lambda_{j}^{(k)}(i)+H^{(k)}(\pi^{(k)};i)x_i\geq H^{(k)}(\pi^{(k)};i)x_i\\ &\qquad\geq H^{(k)}(\pi^{(k)};i)\varepsilon\\ &\qquad\geq\varepsilon\left[\frac\theta4(1+\frac\theta2)\delta\left\|\pi^{(k)}\right\|^2-\frac\theta2\left(\left\|\mu^{(k)}(i)-r(i)e_{N-k}\right\|+\sum_{j\notin\{j_1,\ldots,j_k\}}\lambda_{j}^{(k)}(i)\right) \left\|\pi^{(k)}\right\|\right]. \end{align*} We next choose another positive constant, depending on $\varepsilon>0$, as \[ C_2(\varepsilon):=\frac{C_1}2+\sqrt{\frac{C_1^2}4+\frac8{\varepsilon\theta(2+\theta)\delta}C_0\sum_{j\notin\{j_1,\ldots,j_k\}}\lambda_{j}^{(k)}(i)}. \] Then, for all $\pi^{(k)}\in\{\pi\in U^{(k)};\ \|\pi\|\geq C_2(\varepsilon)\}$ and all $x\geq\varepsilon e_n$, it holds that \begin{align}\label{eq:esti002} &\sum_{j\notin\{j_1,\ldots,j_k\}} \varphi^{(k+1),j}(t,i)(1-\pi_{j}^{(k)})^{-\frac{\theta}{2}}\lambda_{j}^{(k)}(i)+H^{(k)}(\pi^{(k)};i)x_i\geq C_0\sum_{j\notin\{j_1,\ldots,j_k\}}\lambda_{j}^{(k)}(i). \end{align} By \eqref{eq:gless}, we have that $G^{(k)}_i(t,x)\leq C_0\sum_{j\notin\{j_1,\ldots,j_k\}}\lambda_{j}^{(k)}(i)$ for all $(t,x)\in[0,T]\times\R^n$.
Thus, it follows from \eqref{eq:esti002} that \begin{align}\label{eq:G2} G^{(k)}_i(t,x)&=\inf_{\pi^{(k)}\in U^{(k)}}\left\{\sum_{j\notin\{j_1,\ldots,j_k\}} \varphi^{(k+1),j}(t,i)(1-\pi_{j}^{(k)})^{-\frac{\theta}{2}}\lambda_{j}^{(k)}(i)+H^{(k)}(\pi^{(k)};i)x_i\right\}\\ &=\inf_{\substack{\pi^{(k)}\in\{\pi\in U^{(k)}:\\ \|\pi\|\leq C_2(\varepsilon)\}}}\left\{\sum_{j\notin\{j_1,\ldots,j_k\}} \varphi^{(k+1),j}(t,i)(1-\pi_{j}^{(k)})^{-\frac{\theta}{2}}\lambda_{j}^{(k)}(i)+H^{(k)}(\pi^{(k)};i)x_i\right\}.\nonumber \end{align} By virtue of \eqref{eq:G2}, it holds that \begin{align}\label{eq:Gxy} G^{(k)}_i(t,x)&=\inf_{\substack{\pi^{(k)}\in\{\pi\in U^{(k)}:\\ \|\pi\|\leq C_2(\varepsilon)\}}}\Bigg\{\sum_{j\notin\{j_1,\ldots,j_k\}} \varphi^{(k+1),j}(t,i)(1-\pi_{j}^{(k)})^{-\frac{\theta}{2}}\lambda_{j}^{(k)}(i)\nonumber\\ &\qquad\qquad\qquad\qquad+H^{(k)}(\pi^{(k)};i)y_i+H^{(k)}(\pi^{(k)};i)(x_i-y_i)\Bigg\}\nonumber\\ &\leq\inf_{\substack{\pi^{(k)}\in\{\pi\in U^{(k)}:\\ \|\pi\|\leq C_2(\varepsilon)\}}}\Bigg\{\sum_{j\notin\{j_1,\ldots,j_k\}} \varphi^{(k+1),j}(t,i)(1-\pi_{j}^{(k)})^{-\frac{\theta}{2}}\lambda_{j}^{(k)}(i)\nonumber\\ &\qquad\qquad\qquad\qquad+H^{(k)}(\pi^{(k)};i)y_i\Bigg\}+C(\varepsilon)|x_i-y_i|\nonumber\\ &= G^{(k)}_i(t,y)+C(\varepsilon)|x_i-y_i|. \end{align} Here, the constant $C(\varepsilon)=\max_{i=1,\ldots,n}C^{(i)}(\varepsilon)$, where for $i\in D_n$, \begin{align}\label{eq:Cepsilon} C^{(i)}(\varepsilon)&:=\sup_{\substack{\pi^{(k)}\in\{\pi\in U^{(k)}:\\ \|\pi\|\leq C_2(\varepsilon)\}}}\left|H^{(k)}(\pi^{(k)};i)\right|. \end{align} Note that the constant $C^{(i)}(\varepsilon)$ given above is nonnegative and finite for each $i\in D_n$. By \eqref{eq:Gxy}, we get that $|G^{(k)}_i(t,x)-G^{(k)}_i(t,y)|\leq C(\varepsilon)\|x-y\|$ for any $x,y\in\R^n$ satisfying $x,y\geq\varepsilon e_n$ with $\varepsilon>0$, which completes the proof of the lemma.
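The reduction \eqref{eq:G2} of the infimum over the unbounded set $U^{(k)}$ to a bounded ball can be visualized in the simplest case of one surviving stock ($N-k=1$): the scalar objective below tends to $+\infty$ both as $\pi\to-\infty$ and as $\pi\uparrow1$, so a grid search over a truncated interval already locates an interior minimizer. All parameter values are hypothetical:

```python
import numpy as np

# Hypothetical scalar data for a single surviving stock (illustrative only).
theta, sig, excess, lam, phi_next, x = 0.5, 0.2, 0.05, 0.1, 1.0, 1.0

def H(pi):
    # Scalar analogue of H^(k)(pi; i): a quadratic in pi with H(0) = 0.
    return (0.25 * theta * (1.0 + 0.5 * theta) * (sig * pi) ** 2
            - 0.5 * theta * pi * excess - 0.5 * theta * pi * lam)

def objective(pi):
    # Integrand of the infimum defining G_i^(k): jump term plus H * x.
    return phi_next * (1.0 - pi) ** (-0.5 * theta) * lam + H(pi) * x

grid = np.linspace(-5.0, 0.99, 20001)
vals = objective(grid)
k_min = int(np.argmin(vals))
```

Since $H^{(k)}(0;i)=0$, the value at $\pi=0$ bounds the infimum from above, in line with \eqref{eq:gless}.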
\hfill$\Box$\\ We move on to study the existence and uniqueness of the global (classical) solution to the dynamical system \eqref{eq:hjbn-1}. To this end, we prepare the following comparison result for two dynamical systems under the type $K$ condition introduced in Smith~\cite{smith08}: \begin{lemma}\label{comparison} Let $g_{\kappa}(t)=(g_{\kappa i}(t);\ i=1,\ldots,n)^{\top}$ with $\kappa=1,2$ satisfy the following dynamical systems on $[0,T]$, respectively \begin{align*} \left\{ \begin{aligned} \frac d{dt}g_1(t)=&f(t,g_1(t))+\tilde{f}(t,g_1(t)),\ \text{ in }(0,T];\\ g_1(0)=&\xi_1, \end{aligned} \right.\qquad\qquad \left\{ \begin{aligned} \frac d{dt}g_2(t)=&f(t,g_2(t)),\ \text{ in }(0,T];\\ g_2(0)=&\xi_2. \end{aligned} \right. \end{align*} Here, the functions $f(t,x),\,\tilde{f}(t,x):[0,T]\times\R^n\to\R^n$ are assumed to be Lipschitz continuous w.r.t. $x\in\R^n$ uniformly in $t\in[0,T]$. The function $f(t,\cdot)$ satisfies the type $K$ condition for each $t\in[0,T]$ (i.e., for any $x,y\in\R^n$ satisfying $x\leq y$ and $x_i=y_i$ for some $i\in\{1,\ldots,n\}$, it holds that $f_i(t,x)\leq f_i(t,y)$ for each $t\in[0,T]$). If $\tilde{f}(t,x)\geq0$ for $(t,x)\in[0,T]\times\R^n$ and $\xi_1\geq\xi_2$, then $g_1(t)\geq g_2(t)$ for all $t\in[0,T]$. \end{lemma} \noindent{\it Proof.}\quad For $p>0$, let $g_{1}^{(p)}(t)=(g_{1i}^{(p)}(t);\ i=1,\ldots,n)^{\top}$ be the solution to the following dynamical system: \begin{equation} \left\{ \begin{aligned} \frac d{dt}g_{1}^{(p)}(t)=&f(t,g_{1}^{(p)}(t))+\tilde{f}(t,g^{(p)}_{1}(t))+\frac{1}{p}e_n,\ \text{ in }(0,T];\\ g_{1}^{(p)}(0)=&\xi_1+\frac{1}{p}e_n. \end{aligned} \right.
\end{equation} Then, for all $t\in(0,T]$, it holds that \begin{align*} \|g_{1}^{(p)}(t)-g_1(t)\|\leq&\|g_{1}^{(p)}(0)-g_1(0)\|+\int_0^t\big\|f(s,g_{1}^{(p)}(s))-f(s,g_1(s))\big\|ds\nonumber\\ &+\int_0^t\big\|\tilde{f}(s,g_{1}^{(p)}(s))-\tilde{f}(s,g_1(s))\big\|ds+\frac1p\int_0^t\|e_n\|ds\nonumber\\ \leq&\frac{1+T}p\|e_n\|+(C+\tilde{C})\int_0^t\big\|g_{1}^{(p)}(s)-g_1(s)\big\|ds. \end{align*} Here $C>0$ and $\tilde{C}>0$ are the Lipschitz constants of $f(t,x)$ and $\tilde{f}(t,x)$, respectively. Gronwall's lemma yields that $g_{1}^{(p)}(t)\to g_1(t)$ for all $t\in[0,T]$ as $p\to\infty$. We claim that $g_{1}^{(p)}(t)\gg g_2(t)$ for all $t\in[0,T]$. Suppose the claim does not hold. Then, since $g_{1}^{(p)}(0)\gg g_2(0)$ and $g_1^{(p)}(t),g_2(t)$ are continuous on $[0,T]$, there exists a $t_0\in(0,T]$ such that $g_{1}^{(p)}(s)\geq g_2(s)$ on $s\in[0,t_0]$ and $g_{1i}^{(p)}(t_0)=g_{2i}(t_0)$ for some $i\in\{1,\ldots,n\}$. Since $t_0>0$ and $g_1^{(p)}(t),g_2(t)$ are differentiable on $(0,T]$, it follows that \begin{align*} \frac d{dt}g_{1i}^{(p)}(t)\big|_{t=t_0}=\lim_{\epsilon\to0}\frac{g_{1i}^{(p)}(t_0)-g_{1i}^{(p)}(t_0-\epsilon)}{\epsilon} \leq\lim_{\epsilon\to0}\frac{g_{2i}(t_0)-g_{2i}(t_0-\epsilon)}{\epsilon}= \frac d{dt}g_{2i}(t)\big|_{t=t_0}. \end{align*} On the other hand, as $f(t,\cdot)$ satisfies the type $K$ condition for each $t\in[0,T]$ and $\tilde{f}(t,x)\geq0$ for all $(t,x)\in[0,T]\times\R^n$, for the above $i$, we also have that \begin{align} \frac d{dt}g_{1i}^{(p)}(t)\big|_{t=t_0}=&f_i(t_0,g_{1}^{(p)}(t_0))+\tilde{f}_i(t_0,g_{1}^{(p)}(t_0))+\frac1p\nonumber\\ >&f_i(t_0,g_{1}^{(p)}(t_0))\geq f_i(t_0,g_2(t_0))=\frac d{dt}g_{2i}(t)\big|_{t=t_0}. \end{align} This is a contradiction, and hence $g_{1}^{(p)}(t)\gg g_2(t)$ for all $t\in[0,T]$. Letting $p\to\infty$, it follows that $g_1(t)\geq g_2(t)$ for all $t\in[0,T]$.
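The mechanism behind Lemma~\ref{comparison} can be checked on a toy linear system: take $f(t,x)=Bx$ with nonnegative off-diagonal entries of $B$ (so that $f$ is of type $K$, as in Lemma~\ref{lem:sol-hjben2}) and a constant nonnegative perturbation $\tilde f\equiv c\,e_n$. The Euler-scheme sketch below (all coefficients assumed, for illustration) confirms the ordering $g_1(t)\geq g_2(t)$:

```python
import numpy as np

# Toy data: nonnegative off-diagonal entries make f(x) = B x of type K;
# the perturbation tilde_f = c * ones is nonnegative.
B = np.array([[-1.0,  0.5],
              [ 0.3, -0.8]])
c, T = 0.1, 1.0
n_steps = 1000
h = T / n_steps

g1 = np.ones(2)                 # xi_1
g2 = np.ones(2)                 # xi_2 = xi_1 here
gaps = []
for _ in range(n_steps):
    g1 = g1 + h * (B @ g1 + c)  # dg1/dt = f(g1) + tilde_f
    g2 = g2 + h * (B @ g2)      # dg2/dt = f(g2)
    gaps.append(g1 - g2)
gaps = np.array(gaps)
```

The gap $d=g_1-g_2$ evolves as $d\mapsto(I+hB)d+hc\,e_n$ at each step; for small $h$ the matrix $I+hB$ has nonnegative entries, so the ordering is preserved step by step, a discrete counterpart of the type-$K$ argument in the proof.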
\hfill$\Box$\\ Now we are ready to present the following existence and uniqueness result for the positive (classical) solution of Eq.~\eqref{eq:hjbn-1}. \begin{theorem}\label{thm:solutionk} For each $k=0,1,\ldots,N-1$, assume that DPE~\eqref{eq:hjbeqn} admits a unique positive (classical) solution $\varphi^{(k+1),j}(t)$ on $t\in[0,T]$ for $j\notin\{j_1,\ldots,j_k\}$. Then, there exists a unique positive (classical) solution $\varphi^{(k)}(t)$ on $t\in[0,T]$ of \eqref{eq:hjbeqn} at the default state $z=0^{j_1,\ldots,j_k}$ (i.e., Eq.~\eqref{eq:hjbn-1} admits a unique positive (classical) solution). \end{theorem} \noindent{\it Proof.}\quad For any constant $a\in(0,1]$, let us consider the truncated dynamical system given by \begin{align}\label{eq:truneqn} \left\{ \begin{aligned} \frac d{dt} \varphi_a^{(k)}(t)+A^{(k)}\varphi_a^{(k)}(t) + G_a^{(k)}(t,\varphi_a^{(k)}(t))=&0,\ \text{ in }[0,T);\\ \varphi^{(k)}_a(T)=&e_n. \end{aligned} \right. \end{align} Here $\varphi_a^{(k)}(t)=(\varphi_a^{(k)}(t,i);\ i=1,\ldots,n)^{\top}$ is the vector-valued solution and the $n\times n$-dimensional matrix $A^{(k)}$ is given by \eqref{eq:An-1}. The vector-valued function $G_a^{(k)}(t,x)$ is defined as \begin{align}\label{eq:Ga} G_a^{(k)}(t,x) := G^{(k)}(t,x\vee a e_n),\qquad (t,x)\in[0,T]\times\R^n. \end{align} Thanks to Lemma~\ref{lem:Gkesti}, there exists a positive constant $C=C(a)$, depending only on $a>0$, such that, for all $t\in[0,T]$, \begin{align}\label{eq:Lip-Ga} \big\|G_a^{(k)}(t,x)-G_a^{(k)}(t,y)\big\|\leq C\|x-y\|,\qquad x,y\in\R^n, \end{align} i.e., $G^{(k)}_a(t,x)$ is globally Lipschitz continuous w.r.t. $x\in\R^n$ uniformly in $t\in[0,T]$. By reversing time, let us consider $\tilde{\varphi}_a^{(k)}(t):=\varphi_a^{(k)}(T-t)$ for $t\in[0,T]$.
Then, $\tilde{\varphi}_a^{(k)}(t)$ satisfies the following dynamical system given by \begin{align}\label{eq:truneq2} \left\{ \begin{aligned} \frac{d}{dt}\tilde{\varphi}_a^{(k)}(t)=&A^{(k)}\tilde{\varphi}^{(k)}_a(t)+G^{(k)}_{a}(T-t,\tilde{\varphi}_a^{(k)}(t)),\ \text{ in }(0,T];\\ \tilde{\varphi}_a^{(k)}(0)=&e_n. \end{aligned} \right. \end{align} By virtue of the global Lipschitz continuity condition \eqref{eq:Lip-Ga}, for each $a\in(0,1]$, it follows that the system~\eqref{eq:truneq2} has a unique (classical) solution $\tilde{\varphi}_a^{(k)}(t)$ on $[0,T]$. In order to apply Lemma~\ref{comparison}, we rewrite the above system \eqref{eq:truneq2} in the following form: \begin{align}\label{eq:truneq3} \left\{ \begin{aligned} \frac{d}{dt}\tilde{\varphi}_a^{(k)}(t)=&f^{(k)}(\tilde{\varphi}^{(k)}_a(t))+\tilde{f}_a^{(k)}(t,\tilde{\varphi}_a^{(k)}(t)),\ \text{ in }(0,T];\\ \tilde{\varphi}_a^{(k)}(0)=&e_n. \end{aligned} \right. \end{align} Here, the Lipschitz continuous functions $f^{(k)}(x)=(f_i^{(k)}(x);\ i=1,\ldots,n)^{\top}$ and $\tilde{f}_a^{(k)}(t,x)=(\tilde{f}^{(k)}_{a,i}(t,x);\ i=1,\ldots,n)^{\top}$ on $(t,x)\in[0,T]\times\R^n$ are given respectively by \begin{align}\label{eq:f} f_i^{(k)}(x)&=\sum_{j=1}^nq_{ij}x_j-\left(\frac{\theta}{2} r(i)+\sum_{j\notin\{j_1,\ldots,j_{k}\}}\lambda_{j}^{(k)}(i)\right)x_i-\beta_i\{|x_i|\vee1\},\nonumber\\ \tilde{f}_{a,i}^{(k)}(t,x)&=G_{a,i}^{(k)}(T-t,x)+\beta_i\{|x_i|\vee1\},\quad i=1,\ldots,n. \end{align} The nonnegative constants $\beta_i$ for $i\in D_n$ are given by \begin{align}\label{eq:betai} \beta_i=&-\inf_{\pi^{(k)}\in U^{(k)}}H^{(k)}(\pi^{(k)};i), \end{align} where, for $i\in D_n$, $H^{(k)}(\pi^{(k)};i)$ is defined by \eqref{eq:Hk}. It is not difficult to see that $\beta_i$ is a nonnegative and finite constant for each $i\in D_n$ using \eqref{eq:Hk}.
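As a purely illustrative numerical aside (not part of the proof), the effect of the lower truncation in \eqref{eq:Ga} can be sanity-checked: composing a map that is singular at the origin with $x\mapsto x\vee ae_n$ produces a globally Lipschitz map, with a Lipschitz constant depending only on $a$. The stand-in map \texttt{G} below is hypothetical and merely mimics the singular behaviour of $G^{(k)}$.

```python
import numpy as np

# Hypothetical stand-in for G^{(k)}: a componentwise map with a
# singularity at the origin, mimicking the (1 - pi)^{-theta/2}-type
# terms that blow up as the solution approaches zero.
def G(x):
    return 1.0 / x  # singular at x = 0

def G_a(x, a):
    # Truncated version G_a(x) = G(x v a e_n): the lower cutoff at a > 0
    # removes the singularity entirely.
    return G(np.maximum(x, a))

rng = np.random.default_rng(0)
a = 0.2
lip = 1.0 / a**2  # Lipschitz constant of t -> 1/t on [a, infinity)

for _ in range(1000):
    x, y = rng.uniform(-1, 2, size=3), rng.uniform(-1, 2, size=3)
    lhs = np.linalg.norm(G_a(x, a) - G_a(y, a))
    rhs = lip * np.linalg.norm(x - y)
    # global Lipschitz bound C(a) = 1/a^2 on random samples
    assert lhs <= rhs + 1e-12
```

The key point, mirrored by \eqref{eq:Lip-Ga}, is that the constant $1/a^2$ depends only on the truncation level $a$, not on the sampled points.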
By the recursive assumption that $\varphi^{(k+1),j}(t)\gg0$ on $[0,T]$ for $j\notin\{j_1,\ldots,j_k\}$, for any $a\in(0,1]$, we have that, for each $i\in D_n$, and all $(t,x)\in[0,T]\times\R^n$, \begin{equation}\label{eq:Gapositive} \begin{split} &G^{(k)}_i(T-t,x\vee ae_n)\\ =&\inf_{\pi^{(k)}\in U^{(k)}}\left\{\sum_{j\notin\{j_1,\ldots,j_k\}}\varphi^{(k+1),j}(T-t,i)(1-\pi_{j}^{(k)})^{-\frac\theta2}\lambda_{j}^{(k)}(i)+H^{(k)}(\pi^{(k)};i)(x_i\vee a)\right\}\\ \geq&\{x_i\vee a\}\inf_{\pi^{(k)}\in U^{(k)}}H^{(k)}(\pi^{(k)};i)\geq-\beta_i\{|x_i|\vee 1\}. \end{split} \end{equation} Thus, from \eqref{eq:f}, it follows that, for all $(t,x)\in[0,T]\times\R^n$, \begin{align}\label{eq:onftilde} \tilde{f}_{a,i}^{(k)}(t,x)=G^{(k)}_i(T-t,x\vee ae_n)+\beta_i\{|x_i|\vee1\}\geq0,\quad i\in D_n. \end{align} We next verify that the vector-valued function $f^{(k)}(x)=(f_i^{(k)}(x);\ i=1,\ldots,n)^{\top}$ given by \eqref{eq:f} is of type $K$. Namely we need to verify that, for any $x,y\in\R^n$ satisfying $x\leq y$ and $x_{i_0}=y_{i_0}$ for some $i_0=1,\ldots,n$, it holds that $f_{i_0}^{(k)}(x)\leq f_{i_0}^{(k)}(y)$. 
In fact, by \eqref{eq:f}, we have that, for any $x,y\in\R^n$ satisfying $x\leq y$ and $x_{i_0}=y_{i_0}$ for some $i_0=1,\ldots,n$, \begin{align}\label{eq:condK} f_{i_0}^{(k)}(x)&=\sum_{j=1}^nq_{i_0j}x_j-\left(\frac{\theta}{2} r(i_0)+\sum_{j\notin\{j_1,\ldots,j_{k}\}}\lambda_{j}^{(k)}(i_0)\right)x_{i_0}-\beta_{i_0}\{|x_{i_0}|\vee1\}\nonumber\\ &=q_{i_0i_0}x_{i_0}-\left(\frac{\theta}{2} r(i_0)+\sum_{j\notin\{j_1,\ldots,j_{k}\}}\lambda_{j}^{(k)}(i_0)\right)x_{i_0}-\beta_{i_0}\{|x_{i_0}|\vee1\}+\sum_{j\neq i_0}q_{i_0j}x_j\nonumber\\ &=q_{i_0i_0}y_{i_0}-\left(\frac{\theta}{2} r(i_0)+\sum_{j\notin\{j_1,\ldots,j_{k}\}}\lambda_{j}^{(k)}(i_0)\right)y_{i_0}-\beta_{i_0}\{|y_{i_0}|\vee1\}+\sum_{j\neq i_0}q_{i_0j}x_j\nonumber\\ &\leq q_{i_0i_0}y_{i_0}-\left(\frac{\theta}{2} r(i_0)+\sum_{j\notin\{j_1,\ldots,j_{k}\}}\lambda_{j}^{(k)}(i_0)\right)y_{i_0}-\beta_{i_0}\{|y_{i_0}|\vee1\}+\sum_{j\neq i_0}q_{i_0j}y_j\nonumber\\ &=f_{i_0}^{(k)}(y), \end{align} where we used the fact that for all $j\neq i_0$, $q_{i_0j}\geq0$ as $Q_n=(q_{ij})_{n\times n}$ is the generator of the Markov chain $Y$ and hence $\sum_{j\neq i_0}q_{i_0j}x_j\leq \sum_{j\neq i_0}q_{i_0j}y_j$ for all $x\leq y$. Hence, using Proposition 1.1 of Chapter 3 in Smith \cite{smith08} and Lemma~\ref{lem:sol-hjben2}, we deduce that the following dynamical system \begin{align}\label{eq:truneq4} \left\{ \begin{aligned} \frac{d}{dt}{\psi}^{(k)}(t)=&f^{(k)}({\psi}^{(k)}(t)),\ \text{ in }(0,T];\\ {\psi}^{(k)}(0)=&e_n \end{aligned} \right. \end{align} admits a unique (classical) solution ${\psi}^{(k)}(t)=(\psi_i^{(k)}(t);\ i=1,\ldots,n)^{\top}$ on $t\in[0,T]$, and moreover it holds that ${\psi}^{(k)}(t)\gg0$ for $t\in[0,T]$. Let us set \begin{align}\label{eq:epsilonk} \varepsilon^{(k)}:=\min_{i=1,\ldots,n}\left\{\inf_{t\in[0,T]}\psi_i^{(k)}(t)\right\}. \end{align} The continuity of $\psi^{(k)}(t)$ in $t\in[0,T]$ and $\psi^{(k)}(t)\gg0$ for all $t\in[0,T]$ imply that $\varepsilon^{(k)}>0$.
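As an illustrative numerical check of the comparison mechanism of Lemma~\ref{comparison} (with hypothetical rates; this is not part of the proof), one can integrate a type $K$ field $f(x)=Qx-c\odot x$ built from a generator matrix $Q$ with nonnegative off-diagonal entries, together with its $1/p$-perturbed counterpart, and observe the componentwise domination:

```python
import numpy as np

# Hypothetical 3-state generator: rows sum to zero, nonnegative
# off-diagonal entries, so f below is of type K.
Q = np.array([[-1.0, 0.6, 0.4],
              [0.3, -0.8, 0.5],
              [0.2, 0.7, -0.9]])
c = np.array([0.5, 0.4, 0.6])

def f(x):                       # type-K vector field
    return Q @ x - c * x

def f_tilde(x):                 # nonnegative perturbation
    return 0.1 * np.abs(np.sin(x))

p = 100.0
dt, T = 1e-3, 1.0
g1 = np.ones(3) + 1.0 / p       # g1(0) = e_n + (1/p) e_n >> g2(0)
g2 = np.ones(3)
for _ in range(int(T / dt)):    # explicit Euler for both systems
    g1 = g1 + dt * (f(g1) + f_tilde(g1) + 1.0 / p)
    g2 = g2 + dt * f(g2)
    # strict componentwise domination persists along the flow
    assert np.all(g1 > g2)
```

Discretely, the difference $g_1-g_2$ is propagated by a matrix with nonnegative entries plus nonnegative source terms, which is exactly the role the type $K$ condition and $\tilde f\geq0$ play in the continuous-time argument.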
On the other hand, it follows from \eqref{eq:onftilde} that the vector-valued function $\tilde{f}_a^{(k)}(t,x)\geq0$ on $[0,T]\times\R^n$. Because the vector-valued function $f^{(k)}(x)$ is of type $K$, as shown in \eqref{eq:condK}, we can apply Lemma~\ref{comparison} to the dynamical systems \eqref{eq:truneq3} and \eqref{eq:truneq4} and derive that \begin{align}\label{eq:comparison0} \tilde{\varphi}_a^{(k)}(t)\geq {\psi}^{(k)}(t)\geq\varepsilon^{(k)}e_n,\qquad \forall\ t\in[0,T], \end{align} as $\tilde{\varphi}_a^{(k)}(0)={\psi}^{(k)}(0)=e_n$. Note that the positive constant $\varepsilon^{(k)}$ given by \eqref{eq:epsilonk} above is independent of the constant $a\in(0,1]$. We can therefore choose $a\in(0,\varepsilon^{(k)}\wedge1)$ and it holds that $G_a^{(k)}(T-t,\tilde{\varphi}_a^{(k)}(t))=G^{(k)}(T-t,\tilde{\varphi}_a^{(k)}(t)\vee ae_n)=G^{(k)}(T-t,\tilde{\varphi}_a^{(k)}(t))$ on $[0,T]$. By \eqref{eq:truneq2} with $a\in(0,\varepsilon^{(k)}\wedge1)$, it follows that $\varphi_a^{(k)}(t)=\tilde{\varphi}_a^{(k)}(T-t)\geq\varepsilon^{(k)}e_n$ on $[0,T]$ is the unique positive (classical) solution to the dynamical system \eqref{eq:hjbn-1}, and the proof of the theorem is complete. \hfill$\Box$\\ As an important implication of Theorem~\ref{thm:solutionk}, we present in the next proposition one of our major contributions to the existing literature: the characterization of the optimal strategy $\pi^{(k)}\in{U}^{(k)}$ at the default state $z=0^{j_1,\ldots,j_k}$ where $k=0,1,\ldots,N-1$. \begin{proposition}\label{coro:optimal-strategy} For each $k=0,1,\ldots,N-1$, assume that DPE~\eqref{eq:hjbeqn} admits a unique positive (classical) solution $\varphi^{(k+1),j}(t)$ on $t\in[0,T]$ for $j\notin\{j_1,\ldots,j_k\}$. Let $\varphi^{(k)}(t)=(\varphi^{(k)}(t,i);\ i=1,\ldots,n)^{\top}$ be the unique (classical) solution of DPE \eqref{eq:hjbn-1}.
Then, there exists a unique optimal feedback strategy $\pi^{(k,*)}=\pi^{(k,*)}(t,i)$ for $(t,i)\in[0,T]\times D_n$ which is given explicitly by \begin{align}\label{eq:optimal-strategy} \pi^{(k,*)}=&\pi^{(k,*)}(t,i)\\ =&\argmin_{\pi^{(k)}\in U^{(k)}}\left\{\sum_{j\notin\{j_1,\ldots,j_k\}} \varphi^{(k+1),j}(t,i)\big(1-\pi_{j}^{(k)}\big)^{-\frac{\theta}{2}}\lambda_{j}^{(k)}(i)+H^{(k)}(\pi^{(k)};i)\varphi^{(k)}(t,i)\right\}\nonumber\\ =&\argmin_{\substack{\pi^{(k)}\in\{\pi\in U^{(k)}:\\ \|\pi\|\leq C\}}}\Bigg\{\sum_{j\notin\{j_1,\ldots,j_k\}} \varphi^{(k+1),j}(t,i)\big(1-\pi_{j}^{(k)}\big)^{-\frac{\theta}{2}}\lambda_{j}^{(k)}(i)+H^{(k)}(\pi^{(k)};i)\varphi^{(k)}(t,i)\Bigg\},\nonumber \end{align} for some positive constant $C>0$. \end{proposition} \noindent{\it Proof.}\quad Let us first recall Eq.~\eqref{eq:hjbn-1}, i.e., \begin{align*} \left\{ \begin{aligned} \frac d{dt}\varphi^{(k)}(t)=&-A^{(k)}\varphi^{(k)}(t)-G^{(k)}(t,\varphi^{(k)}(t)),\quad\text{ in }[0,T);\\ \varphi^{(k)}(T)=&e_n. \end{aligned} \right. \end{align*} Theorem~\ref{thm:solutionk} above shows that the above dynamical system admits a unique positive (classical) solution $\varphi^{(k)}(t)$ on $[0,T]$ and moreover $\varphi^{(k)}(t)\geq \varepsilon^{(k)}e_n$ for all $t\in[0,T]$. Here $\varepsilon^{(k)}>0$ is given by~\eqref{eq:epsilonk}. Thus, by \eqref{eq:G2}, we have that there exists a positive constant $C(\varepsilon^{(k)})$, depending only on $\varepsilon^{(k)}>0$, such that, for each $i\in D_n$, \begin{align*} &G^{(k)}_i(t,\varphi^{(k)}(t,i))\nonumber\\ =&\inf_{\substack{\pi^{(k)}\in\{\pi\in U^{(k)}:\\ \|\pi\|\leq C(\varepsilon^{(k)})\}}}\Bigg\{\sum_{j\notin\{j_1,\ldots,j_k\}} \varphi^{(k+1),j}(t,i)(1-\pi_{j}^{(k)})^{-\frac{\theta}{2}}\lambda_{j}^{(k)}(i)+H^{(k)}(\pi^{(k)};i)\varphi^{(k)}(t,i)\Bigg\}.\nonumber \end{align*} Here, for each $i=1,\ldots,n$, the function $G_i^{(k)}(t,x)$ on $(t,x)\in[0,T]\times\R^n$ is given by \eqref{eq:Gin-1}.
Also for each $i=1,\ldots,n$, $\varphi^{(k+1),j}(t,i)$ on $t\in[0,T]$ is the $i$-th element of the positive (classical) solution $\varphi^{(k+1),j}(t)$ of \eqref{eq:hjbeqn} at the default state $z=0^{j_1,\ldots,j_k,j}$ for $j\notin\{j_1,\ldots,j_k\}$. Recall that the function $H^{(k)}(\pi^{(k)};i)$ for $(\pi^{(k)},i)\in U^{(k)}\times D_n$ is given by \eqref{eq:Hk}. Then, it is not difficult to see that, for each $i\in D_n$ and fixed $t\in[0,T]$, \[ h^{(k)}(\pi^{(k)},i):=\sum_{j\notin\{j_1,\ldots,j_k\}} \varphi^{(k+1),j}(t,i)(1-\pi_{j}^{(k)})^{-\frac{\theta}{2}}\lambda_{j}^{(k)}(i)+H^{(k)}(\pi^{(k)};i)\varphi^{(k)}(t,i) \] is continuous and strictly convex in $\pi^{(k)}\in\bar{U}^{(k)}$. Also notice that the space $\{\pi^{(k)}\in \bar{U}^{(k)};\ \|\pi^{(k)}\|\leq C(\varepsilon^{(k)})\}\subset\R^{N-k}$ is compact. Hence, there exists a unique minimizer $\pi^{(k,*)}=\pi^{(k,*)}(t,i)\in\bar{U}^{(k)}$. Moreover, note that $h^{(k)}(\pi^{(k)},i)=+\infty$ when $\pi^{(k)}\in\bar{U}^{(k)}\setminus U^{(k)}$ while $h^{(k)}(\pi^{(k)},i)<+\infty$ for all $\pi^{(k)}\in U^{(k)}$. Consequently, the minimizer in fact satisfies $\pi^{(k,*)}=\pi^{(k,*)}(t,i)\in U^{(k)}$ and admits the representation \eqref{eq:optimal-strategy} by taking $C=C(\varepsilon^{(k)})$, which completes the proof of the proposition. \hfill$\Box$\\ As one of our main results, we finally present and prove the verification theorem for the finite state space of the regime-switching process $Y$ in the next proposition. \begin{proposition}\label{prop:verithemfinite} Let $\varphi(t,z)=(\varphi(t,i,z);\ i\in D_n)^{\top}$ with $(t,z)\in[0,T]\times{\cal S}$ be the unique solution of DPE~\eqref{eq:hjbeqn}. For $(t,i,z)\in[0,T]\times D_n\times{\cal S}$, define \begin{align}\label{eq:pistar} \pi^*(t,i,z):={\rm diag}((1-z_j)_{j=1}^N)\argmin_{\pi\in U}\tilde{H}\left(\pi;i,z,(\varphi(t,i,z^j);\ j=0,1,\ldots,N)\right), \end{align} where $\tilde{H}(\pi;i,z,\bar{f}(z))$ is given by \eqref{eq:H}.
Let $\tilde{\pi}^*=(\tilde{\pi}^*(t))_{t\in[0,T]}$ with $\tilde{\pi}^*(t):=\pi^*(t,Y(t-),Z(t-))$. Then $\tilde{\pi}^*\in\tilde{\cal U}$ and it is the optimal feedback strategy, i.e., it holds that \begin{align}\label{optimeq} -\frac{2}{\theta}\log\Ex_{t,i,z}^{\tilde{\pi}^*,\theta}\left[\exp\left(\frac{\theta}{2}\int_t^TL(\tilde{\pi}^*(s);Y(s),Z(s))ds\right)\right]=\bar{V}(t,i,z)=-\frac2\theta\log\varphi(t,i,z). \end{align} \end{proposition} \begin{proof} From Proposition~\ref{coro:optimal-strategy}, it follows that $\tilde{\pi}^*$ is a bounded and predictable process taking values on $U$. We next prove that $\tilde{\pi}^*$ is uniformly bounded away from $1$. In fact, for fixed $(i,z,x)\in D_n\times\mathcal{S}\times(0,\infty)^{N+1}$, we have that $\tilde{H}\left(\pi;i,z,x\right)$ is strictly convex w.r.t. $\pi\in U$, thus $\Phi(i,z,x):=\argmin_{\pi\in U}\tilde{H}\left(\pi;i,z,x\right)$ is well-defined. Notice that $\Phi(i,z,\cdot)$ maps $(0,\infty)^{N+1}$ to $U$ and satisfies the first-order condition $\frac{\partial\tilde{H}}{\partial\pi_j}\left(\Phi(i,z,x);i,z,x\right)=0$ for $j=1,\ldots,N$. Then, the Implicit Function Theorem yields that $\Phi(i,z,x)$ is continuous in $x$. Further, for $j=1,\ldots,N,$ if $Z_j(t-)=0$, the first-order condition gives that \begin{align}\label{eq:pistarbelow} (1-\tilde\pi^*_j(t))^{-\frac\theta2-1}=&\bigg[\big(\mu_j(Y(t-))-r(Y(t-))\big)-\frac\theta2\left(1+\frac\theta2\right)\sum_{i=1}^N\big(\sigma^\top(Y(t-))\sigma(Y(t-))\big)_{ji}\tilde\pi^*_i(t)\nonumber\\ &+\frac\theta2\lambda_j(Y(t-),Z(t-))\bigg]\frac{\varphi(t,Y(t-),Z(t-))}{\lambda_j(Y(t-),Z(t-))\varphi(t,Y(t-),Z^j(t-))}. \end{align} Note that, for all $(i,z)\in D_n\times{\cal S}$, $\varphi(\cdot,i,z)$ has a strictly positive lower bound by \eqref{eq:comparison0}. Together with Proposition~\ref{coro:optimal-strategy}, this implies that there exists a constant $C>0$ such that $\sup_{t\in[0,T]}(1-\tilde\pi^*_j(t))^{-\frac\theta2-1}\leq C$ for all $j=1,\ldots,N$.
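A one-dimensional toy version of the map $\pi\mapsto h^{(k)}(\pi,i)$ (purely illustrative; all coefficients below are hypothetical stand-ins) exhibits the two features used here: strict convexity, and the blow-up of $(1-\pi)^{-\theta/2}$ at $\pi=1$ that keeps the minimizer uniformly away from the boundary.

```python
import numpy as np

# Hypothetical stand-ins for the solution values and rates.
theta, lam = 1.0, 0.8
phi_next, phi = 0.7, 0.5

def h(pi):
    # Stand-in for H^{(k)}(pi; i): a strictly convex quadratic.
    H = -0.3 * pi + 0.6 * pi**2
    # phi_next * (1 - pi)^{-theta/2} * lam blows up as pi -> 1.
    return phi_next * (1.0 - pi)**(-theta / 2.0) * lam + H * phi

grid = np.linspace(-2.0, 0.999, 200001)
vals = h(grid)
pi_star = grid[np.argmin(vals)]

# Strict convexity on the grid: all second differences are positive,
# so the grid minimizer approximates the unique argmin.
assert np.all(np.diff(vals, 2) > 0)
# The minimizer stays away from the singular boundary pi = 1.
assert pi_star < 0.99
```

The uniform positive lower bound on $\varphi$ plays the role of \texttt{phi} here: it is what prevents the quadratic part from pushing the minimizer arbitrarily close to the singularity.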
Hence, the estimate~\eqref{eq:pistarbelow} yields that $\tilde{\pi}^*$ is uniformly bounded away from $1$. Thus, the following generalized Novikov's condition holds: \begin{align}\label{eq:integral-cond} \Ex\left[\exp\left(\frac{\theta^2}{8}\int_0^T\left|\sigma(Y(t))^{\top}\tilde{\pi}^*(t)\right|^2dt+\sum_{j=1}^N\int_0^T\left|(1-\tilde{\pi}_j^*(t))^{-\frac{\theta}{2}}-1\right|^2\lambda_j(Y(t),Z(t))dt\right)\right]<+\infty. \end{align} The above Novikov's condition \eqref{eq:integral-cond} implies that $\tilde{\pi}^*$ is admissible. We next prove \eqref{optimeq}. Note that $\varphi(t,z)=(\varphi(t,i,z);\ i\in D_n)^{\top}$ with $(t,z)\in[0,T]\times{\cal S}$ is the unique classical solution of \eqref{eq:hjbeqn}, and that there exists a constant $C_L=C_L(n,i,z)>0$ such that $L(\pi;i,z)>-C_L$ for $(\pi,i,z)\in U\times D_n\times{\cal S}$. For $m\geq1$, set $L_m(\pi;i,z):=L(\pi;i,z)\wedge m$. Then $L_m$ is bounded and $L_m(\pi;i,z)\uparrow L(\pi;i,z)$ as $m\to\infty$. Therefore, for any admissible strategy $\tilde{\pi}\in\tilde{\cal U}$, It\^o's formula gives that, for $0\leq t<s\leq T$, \begin{align}\label{eq:itoveri0} &\Ex_{t,i,z}^{\tilde{\pi},\theta}\left[\varphi(s,Y(s),Z(s))\exp\left(\frac{\theta}{2}\int_t^sL_m(\tilde{\pi}(u);Y(u),Z(u))du\right)\right]\nonumber\\ &\quad =\varphi(t,i,z)+\Ex_{t,i,z}^{\tilde{\pi},\theta}\Bigg[\int_t^s\exp\left(\frac{\theta}{2}\int_t^uL_m(\tilde{\pi}(v);Y(v),Z(v))dv\right)\nonumber\\ &\quad\qquad\times\Bigg\{\frac{\partial\varphi(u,Y(u),Z(u))}{\partial t}+\sum_{l\neq Y(u)}q_{Y(u)l}\left(\varphi(u,l,Z(u))-\varphi(u,Y(u),Z(u))\right)\nonumber\\ &\qquad\qquad\quad+\tilde{H}\left(\tilde{\pi}(u);Y(u),Z(u),(\varphi(u,Y(u),Z^j(u));\ j=0,1,\ldots,N)\right)\Bigg\}du\Bigg]\nonumber\\ &\qquad\quad+\Ex_{t,i,z}^{\tilde{\pi},\theta}\Bigg[\int_t^s\exp\left(\frac{\theta}{2}\int_t^uL_m(\tilde{\pi}(v);Y(v),Z(v))dv\right)\varphi(u,Y(u),Z(u))\nonumber\\ &\qquad\qquad\qquad\qquad\times(L_m-L)(\tilde{\pi}(u);Y(u),Z(u))du\Bigg]\nonumber\\
&\quad\geq\varphi(t,i,z)+\Ex_{t,i,z}^{\tilde{\pi},\theta}\Bigg[\int_t^s\exp\left(\frac{\theta}{2}\int_t^uL_m(\tilde{\pi}(v);Y(v),Z(v))dv\right)\varphi(u,Y(u),Z(u))\nonumber\\ &\qquad\qquad\qquad\qquad\times(L_m-L)(\tilde{\pi}(u);Y(u),Z(u))du\Bigg]. \end{align} In the last inequality above, we dropped the first integral term in the expectation, which is nonnegative because $\varphi$ solves DPE~\eqref{eq:hjbeqn} and $\tilde{H}(\tilde{\pi}(u);\cdot)\geq\inf_{\pi\in U}\tilde{H}(\pi;\cdot)$. On the other hand, since $\varphi$ is bounded and positive, the retained integral satisfies, $\Px_{t,i,z}^{\tilde{\pi},\theta}$-a.s., for some constant $C_{\varphi}>0$, \begin{align*} &\int_t^s\exp\left(\frac{\theta}{2}\int_t^uL_m(\tilde{\pi}(v);Y(v),Z(v))dv\right)\varphi(u,Y(u),Z(u))(L_m-L)(\tilde{\pi}(u);Y(u),Z(u))du\nonumber\\ &\quad\geq-C_{\varphi}\int_t^s\exp\left(\frac{\theta}{2}\int_t^u[L(\tilde{\pi}(v);Y(v),Z(v))+C_L]dv\right)[L(\tilde{\pi}(u);Y(u),Z(u))+C_L]du\nonumber\\ &\quad=\frac{2 C_{\varphi}}{\theta}\left[1-e^{\frac{\theta}{2}C_L(s-t)}\exp\left(\frac{\theta}{2}\int_t^sL(\tilde{\pi}(u);Y(u),Z(u))du\right)\right]. \end{align*} Taking $s=T$ above and letting $m\to\infty$, it follows from the Dominated Convergence Theorem that \begin{align}\label{eq:itoveri} \Ex_{t,i,z}^{\tilde{\pi},\theta}\left[\varphi(T,Y(T),Z(T))\exp\left(\frac{\theta}{2}\int_t^TL(\tilde{\pi}(u);Y(u),Z(u))du\right)\right]\geq\varphi(t,i,z). \end{align} Since $\varphi(T,i,z)=1$ in \eqref{eq:itoveri}, we obtain that \begin{align}\label{infoverphi} \inf_{\tilde{\pi}\in\tilde{\cal U}}\Ex_{t,i,z}^{\tilde{\pi},\theta}\left[\exp\left(\frac{\theta}{2}\int_t^TL(\tilde{\pi}(u);Y(u),Z(u))du\right)\right]\geq\varphi(t,i,z). \end{align} On the other hand, from \eqref{eq:itoveri0} and \eqref{eq:pistar}, by taking $s=T$ and letting $m\to\infty$, it follows that \begin{align}\label{infoverphi2} \Ex_{t,i,z}^{\tilde{\pi}^*,\theta}\left[\exp\left(\frac{\theta}{2}\int_t^TL(\tilde{\pi}^*(u);Y(u),Z(u))du\right)\right]=\varphi(t,i,z).
\end{align} Because $\tilde{\pi}^*$ is admissible, i.e., $\tilde{\pi}^*\in\tilde{\cal U}$, we deduce from \eqref{infoverphi2} that \begin{align}\label{phioverinf} \varphi(t,i,z)\geq\inf_{\tilde{\pi}\in\tilde{\cal U}}\Ex_{t,i,z}^{\tilde{\pi},\theta}\left[\exp\left(\frac{\theta}{2}\int_t^TL(\tilde{\pi}(u);Y(u),Z(u))du\right)\right]. \end{align} Combining \eqref{infoverphi} and \eqref{phioverinf}, we have that \begin{align}\label{phi=inf} \varphi(t,i,z)=\inf_{\tilde{\pi}\in\tilde{\cal U}}\Ex_{t,i,z}^{\tilde{\pi},\theta}\left[\exp\left(\frac{\theta}{2}\int_t^TL(\tilde{\pi}(u);Y(u),Z(u))du\right)\right]. \end{align} The equality above is equivalent to $\varphi(t,i,z)=e^{-\frac\theta2\bar{V}(t,i,z)}$ due to \eqref{eq:J}. Hence, Eq.~\eqref{infoverphi2} together with \eqref{phi=inf} implies that \eqref{optimeq} holds, which ends the proof. \end{proof} \subsection{Countable State Case of Regime-Switching Process} This section focuses on the existence of classical solutions to the original DPE~\eqref{eq:dpe3} and the corresponding verification theorem when the state space of the Markov chain $Y$ is the countably infinite set $\mathbb{Z}_+=\{1,2,\ldots\}$. The truncation method used in the finite state case is no longer applicable in the case of $\mathbb{Z}_+$. Instead, we shall establish a sequence of appropriately approximating risk-sensitive control problems with finite state set $D_n^0:=D_n\cup\{0\}$ for $n\in\mathbb{Z}_+$. Building upon the results in the finite state case in Section~\ref{sec:finite-states}, and by establishing valid uniform estimates, we can arrive at the desired conclusion that the smooth value functions corresponding to the above approximating control problems converge, as $n$ goes to infinity, to the classical solution of \eqref{eq:dpe3} on the countably infinite state space $\mathbb{Z}_+$. Recall $D_n=\{1,2,\dots,n\}$ for the fixed $n\in\mathbb{Z}_+$.
We define the truncated counterpart of the regime-switching process $Y$ as: for $t\in[0,T]$, \begin{align}\label{eq:Yn} Y^{(n)}(t):=Y(t)\mathds{1}_{\{\tau_n>t\}},\qquad \tau_n^t:=\inf\{s\geq t;\ Y(s)\notin D_n\}, \end{align} where $\tau_n:=\tau_n^0$ for $n\in\mathbb{Z}_+$. By convention, we set $\inf\emptyset=+\infty$. Then, the process $Y^{(n)}=(Y^{(n)}(t))_{t\in[0,T]}$ is a continuous-time Markov chain with finite state space $D_n^0$. Here $0$ is understood as an absorbing state. The generator of $Y^{(n)}$ can therefore be given by the following $(n+1)$-dimensional square matrix: \begin{align}\label{eq:An} A_n:=\left[\begin{matrix} 0 & 0 & \dots & 0 \\ q^{(n)}_{10} & q_{11} & \dots & q_{1n} \\ q^{(n)}_{20} & q_{21} & \dots & q_{2n} \\ \vdots & \vdots & \vdots & \vdots \\ q^{(n)}_{n0} & q_{n1} & \dots & q_{nn} \end{matrix}\right], \end{align} where $q^{(n)}_{m0}=-\sum_{i=1}^nq_{mi}=\sum_{i>n}q_{mi}$ for all $m\in D_n$. Thus, $Y^{(n)}$ is conservative. Here $q_{ij}$ for $i,j=1,\ldots,n$ are the same as given in Subsection~\ref{sub:RSP}. Since $0$ is an absorbing state, we assign values to the model coefficients at this state. More precisely, we set $r(0)=0$, $\mu(0)=0$, $\lambda(0,z)=\frac\theta2e_N^{\top}$ for all $z\in{\cal S}$, and $\sigma(0)\sigma(0)^\top=\frac4{2+\theta}I_{N}$. Here $I_N$ denotes the $N$-dimensional identity matrix. Then, it follows from \eqref{eq:L0} and Taylor's expansion that $L(\pi;0,z)=\|\pi\|^2+\sum_{j=1}^N(1-z_j)[(1-\pi_j)^{-\frac{\theta}{2}}-1-\frac\theta2\pi_j]\geq0$ for all $(\pi,z)\in U\times{\cal S}$. We next introduce the approximating risk-sensitive control problems where regime-switching processes take values on $D_n^0$. To this end, define $\tilde{\cal U}_n$ as the admissible control set $\tilde{\cal U}$, but with the regime-switching process $Y$ replaced by $Y^{(n)}$.
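The construction of $A_n$ in \eqref{eq:An} can be sketched numerically (the rate window below is hypothetical): mass sent by $Q$ to states above $n$ is redirected to the absorbing state $0$, so that each row of $A_n$ sums to zero and all off-diagonal entries are nonnegative.

```python
import numpy as np

def truncated_generator(Q_window, n):
    # Q_window holds the rates q_{ij} for i in {1..n}, j in {1..m} with
    # m >= n; the *full* rows of the original generator sum to zero,
    # the tail rates living in columns beyond n.
    A = np.zeros((n + 1, n + 1))   # states 0, 1, ..., n
    for m_ in range(1, n + 1):
        A[m_, 1:] = Q_window[m_ - 1, :n]
        # q^{(n)}_{m0} = -sum_{i=1..n} q_{mi} = sum_{i>n} q_{mi}
        A[m_, 0] = -Q_window[m_ - 1, :n].sum()
    return A                        # row 0 stays zero: 0 is absorbing

# Hypothetical 4-state window of a countable-state generator.
Q_window = np.array([[-2.0, 0.5, 0.3, 0.2, 1.0],
                     [0.4, -1.5, 0.2, 0.4, 0.5],
                     [0.1, 0.3, -1.0, 0.3, 0.3],
                     [0.2, 0.2, 0.1, -0.9, 0.4]])
A4 = truncated_generator(Q_window, 4)

assert np.allclose(A4.sum(axis=1), 0.0)   # Y^(n) is conservative
assert np.all(A4[0] == 0.0)               # state 0 is absorbing
assert np.all((A4 - np.diag(np.diag(A4))) >= 0.0)  # valid rates
```

This is exactly the bookkeeping behind \eqref{eq:An}: no probability mass is lost, it is merely parked in the absorbing state $0$.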
We then consider the following objective functional given by, for $\tilde{\pi}\in\tilde{\cal U}_n$ and $(t,i,z)\in[0,T]\times{D_n^0}\times{\cal S}$, \begin{align}\label{eq:Jn00} J_n(\tilde{\pi};t,i,z):=&\Ex_{t,i,z}^{\tilde{\pi},\theta}\left[\exp\left(\frac{\theta}{2}\int_t^{T\wedge{\tau_n^t}}L(\tilde{\pi}(s);Y(s),Z(s))ds\right)\right]\nonumber\\ =&\Ex_{t,i,z}^{\tilde{\pi},\theta}\left[\exp\left(\frac{\theta}{2}\int_t^{T\wedge{\tau_n^t}}L(\tilde{\pi}(s);Y^{(n)}(s),Z(s))ds\right)\right]. \end{align} Here, the risk-sensitive cost function $L(\pi;i,z)$ for $(\pi,i,z)\in U\times\mathbb{Z}_+\times{\cal S}$ is given by \eqref{eq:L0}. In order to apply the results in the finite state case obtained in Section~\ref{sec:finite-states}, we also need to propose the following objective functional given by, for $\tilde{\pi}\in\tilde{\cal U}_n$ and $(t,i,z)\in[0,T]\times{D_n^0}\times{\cal S}$, \begin{align}\label{eq:Jn} \tilde{J}_n(\tilde{\pi};t,i,z):=&\Ex_{t,i,z}^{\tilde{\pi},\theta}\left[\exp\left(\frac{\theta}{2}\int_t^{T}{L}(\tilde{\pi}(s);Y^{(n)}(s),Z(s))ds\right)\right]. \end{align} We will consider the auxiliary value function defined by \begin{align}\label{eq:Vnvalue} V_n(t,i,z):=-\frac{2}{\theta}\inf_{\tilde{\pi}\in\tilde{\cal U}_n}\log\tilde{J}_n(\tilde{\pi};t,i,z),\qquad (t,i,z)\in[0,T]\times D_n^0\times{\cal S}. \end{align} We have the following characterization of the value function $V_n$ which will play an important role in the study of the convergence of $V_n$ as $n\to\infty$. \begin{lemma}\label{lem:jn=tildeJn} It holds that $V_n(t,i,z)=-\frac{2}{\theta}\inf_{\tilde{\pi}\in\tilde{\cal U}_n}\log J_n(\tilde{\pi};t,i,z)$ for $(t,i,z)\in[0,T]\times D_n^0\times{\cal S}$. 
\end{lemma} \begin{proof} Using \eqref{eq:Jn00} and \eqref{eq:Jn}, we have that, for all $\tilde{\pi}\in\tilde{\mathcal{U}}_n$, \begin{align*} &\log\tilde{J}_n(\tilde{\pi};t,i,z)\\ =&\log\Ex_{t,i,z}^{\tilde{\pi},\theta}\left[\exp\left(\frac{\theta}{2}\int_t^{T}{L}(\tilde{\pi}(s);Y^{(n)}(s),Z(s))ds\right)\right]\nonumber\\ =&\log\Ex_{t,i,z}^{\tilde{\pi},\theta}\left[\exp\left(\frac{\theta}{2}\int_t^{T\wedge\tau^t_n}{L}(\tilde{\pi}(s);Y^{(n)}(s),Z(s))ds +\frac{\theta}{2}\int_{T\wedge\tau^t_n}^T{L}(\tilde{\pi}(s);Y^{(n)}(s),Z(s))ds\right)\right]\nonumber\\ =&\log\Ex_{t,i,z}^{\tilde{\pi},\theta}\left[\exp\left(\frac{\theta}{2}\int_t^{T\wedge\tau^t_n}{L}(\tilde{\pi}(s);Y^{(n)}(s),Z(s))ds +\frac{\theta}{2}\int_{T\wedge\tau^t_n}^T{L}(\tilde{\pi}(s);0,Z(s))ds\right)\right]\nonumber\\ \geq & \log\Ex_{t,i,z}^{\tilde{\pi},\theta}\left[\exp\left(\frac{\theta}{2}\int_t^{T\wedge\tau^t_n}L(\tilde{\pi}(s);Y^{(n)}(s),Z(s))ds\right)\right]\nonumber\\ =&\log J_n(\tilde{\pi};t,i,z)\geq\inf_{\tilde{\pi}\in\tilde{\mathcal{U}}_n}\log J_n(\tilde{\pi};t,i,z), \end{align*} where we used the nonnegativity of ${L}({\pi};0,z)$ for all $(\pi,z)\in U\times{\cal S}$. As $\theta>0$, we obtain from \eqref{eq:Vnvalue} that \begin{align}\label{V<J} V_n(t,i,z)&\leq-\frac2\theta\inf_{\tilde{\pi}\in\tilde{\mathcal{U}}_n}\log J_n(\tilde{\pi};t,i,z). \end{align} On the other hand, for any $\tilde{\pi}\in\tilde{\mathcal{U}}_n$, define $\hat{\pi}(s)=\tilde{\pi}(s)\mathds{1}_{\{s\leq\tau^t_n\}}$ for $s\in[0,T]$. It is clear that $\hat{\pi}\in\tilde{\mathcal{U}}_n$, and it holds that $\Gamma^{\hat{\pi},\theta}(t,T):=\frac{\Gamma^{\hat{\pi},\theta}(T)}{\Gamma^{\hat{\pi},\theta}(t)} =\frac{\Gamma^{\tilde{\pi},\theta}(T\wedge\tau^t_n)}{\Gamma^{\tilde{\pi},\theta}(t)} =:\Gamma^{\tilde{\pi},\theta}(t,T\wedge\tau^t_n)$.
Hence \begin{align*} \log J_n(\tilde{\pi};t,i,z) &=\log\Ex_{t,i,z}\left[\Gamma^{\tilde{\pi},\theta}(t,T)\exp\left(\frac{\theta}{2}\int_t^{T\wedge\tau^t_n}L(\tilde{\pi}(s);Y^{(n)}(s),Z(s))ds\right)\right]\nonumber\\ &=\log\Ex_{t,i,z}\left[\exp\left(\frac{\theta}{2}\int_t^{T\wedge\tau^t_n}L(\tilde{\pi}(s);Y^{(n)}(s),Z(s))ds\right) \Ex\left[\Gamma^{\tilde{\pi},\theta}(t,T)|\mathcal{F}_{T\wedge\tau^t_n}\right]\right]\nonumber\\ &=\log\Ex_{t,i,z}\left[\Gamma^{\tilde{\pi},\theta}(t,T\wedge\tau^t_n)\exp\left(\frac{\theta}{2}\int_t^{T\wedge\tau^t_n} L(\tilde{\pi}(s);Y^{(n)}(s),Z(s))ds\right)\right]\nonumber\\ &=\log\Ex_{t,i,z}^{\hat{\pi},\theta}\left[\exp\left(\frac{\theta}{2}\int_t^{T\wedge\tau^t_n}L(\hat{\pi}(s);Y^{(n)}(s),Z(s))ds\right)\right]\nonumber\\ &=\log\Ex_{t,i,z}^{\hat{\pi},\theta}\left[\exp\left(\frac{\theta}{2}\int_t^{T\wedge\tau^t_n}L(\hat{\pi}(s);Y^{(n)}(s),Z(s))ds+\frac{\theta}{2}\int_{T\wedge\tau^t_n}^TL(0;0,Z(s))ds\right)\right]\nonumber\\ &=\log\Ex_{t,i,z}^{\hat{\pi},\theta}\left[\exp\left(\frac{\theta}{2}\int_t^{T}{L}(\hat{\pi}(s);Y^{(n)}(s),Z(s))ds\right)\right]\nonumber\\ &=\log\tilde{J}_n(\hat{\pi};t,i,z)\geq\inf_{\tilde{\pi}\in\tilde{\mathcal{U}}_n}\log\tilde{J}_n(\tilde{\pi};t,i,z). \end{align*} The above inequality and the arbitrariness of $\tilde{\pi}$ jointly give that \begin{align}\label{J<V} -\frac2\theta\inf_{\tilde{\pi}\in\tilde{\mathcal{U}}_n}\log J_n(\tilde{\pi};t,i,z)\leq V_n(t,i,z). \end{align} Then, the desired result follows by combining \eqref{V<J} and \eqref{J<V} above. \end{proof} Lemma~\ref{lem:jn=tildeJn} together with Theorem~\ref{thm:solutionk} and Proposition~\ref{prop:verithemfinite} in Section \ref{sec:finite-states} for the finite state space of $Y$ imply the following conclusion: \begin{proposition}\label{prop:Vnmonotone00} Let $n\in\mathbb{Z}_+$. Recall the value function $V_n(t,i,z)$ defined by \eqref{eq:Vnvalue}. We define $\varphi_n(t,i,z):=\exp(-\frac\theta2V_n(t,i,z))$. 
Then $\varphi_n(t,i,z)$ is the unique solution of the recursive system of DPEs given by \begin{align}\label{eq:dpe4} 0=&\frac{\partial \varphi_n(t,i,z)}{\partial t}+\sum_{l\neq i,1\leq l\leq n}q_{il}\left(\varphi_n(t,l,z)-\varphi_n(t,i,z)\right)+q^{(n)}_{i0}(\varphi_n(t,0,z)-\varphi_n(t,i,z))\nonumber\\ &+\inf_{\pi\in U}\tilde{H}\left(\pi;i,z,(\varphi_n(t,i,z^j);\ j=0,1,\ldots,N)\right), \end{align} where $(t,i,z)\in[0,T)\times D_n^0\times{\cal S}$ and the terminal condition is given by $\varphi_n(T,i,z)=1$ for all $(i,z)\in D_n^0\times{\cal S}$. Moreover, it holds that $\varphi_n(t,i,z)\in[0,1]$ and it is decreasing in $n$ for all $(t,i,z)\in[0,T]\times D_n^0\times{\cal S}$. \end{proposition} \begin{proof} Notice that the state space of $Y^{(n)}$ is the finite set $D_n^0$. In view of the definition of the value function $V_n$ in \eqref{eq:Vnvalue}, applying Theorem~\ref{thm:solutionk} and Proposition~\ref{prop:verithemfinite} in Section~\ref{sec:finite-states} for the regime-switching process with finite state space shows that $\varphi_n(t,i,z)$ is the unique solution of the recursive system \eqref{eq:dpe4}. In order to verify that $\varphi_n(t,i,z)\in[0,1]$ and that it is decreasing in $n$, it is sufficient to prove that $V_n(t,i,z)\geq0$ and that it is nondecreasing in $n$. Thanks to Lemma~\ref{lem:jn=tildeJn}, the fact that $L(0;i,z)=-r(i)\leq0$ by \eqref{eq:L0}, and the admissibility of $\tilde{\pi}_0(t)\equiv0$ (i.e., $\tilde{\pi}_0\in\tilde{\cal U}_n$), we obtain \begin{align*} \inf_{\tilde{\pi}\in\tilde{\cal U}_n}\log J_n(\tilde{\pi};t,i,z)&\leq \log J_n(\tilde{\pi}_0;t,i,z) =\log\Ex_{t,i,z}^{\tilde{\pi}_0,\theta}\left[\exp\left(\frac{\theta}{2}\int_t^{T\wedge\tau_n^t}L(0;Y(s),Z(s))ds\right)\right]\nonumber\\ &=\log\Ex_{t,i,z}^{\tilde{\pi}_0,\theta}\left[\exp\left(-\frac{\theta}{2}\int_t^{T\wedge\tau_n^t}r(Y(s))ds\right)\right]\leq0, \end{align*} as the interest rate process is nonnegative.
This gives that $V_n(t,i,z)\geq0$ for all $(t,i,z)\in[0,T]\times{D_n^0}\times{\cal S}$. On the other hand, for any $\tilde{\pi}\in\tilde{\cal U}_n$, we define $\hat{\pi}(s):=\tilde{\pi}(s)\mathds{1}_{\{s\leq\tau^t_n\}}$ for $s\in[0,T]$. It is clear that $\hat{\pi}\in\tilde{\cal U}_n\cap \tilde{\cal U}_{n+1}$. Recalling the density process given by \eqref{eq:Gam}, we have that, for $\tilde{\pi},\hat{\pi}\in\tilde{\cal U}_n$, \begin{align*} \Gamma^{\tilde{\pi},\theta}&={\cal E}(\Pi^{\tilde{\pi},\theta}),\ \Pi^{\tilde{\pi},\theta}=-\frac{\theta}{2}\int_0^{\cdot}\tilde{\pi}(s)^{\top}\sigma(Y^{(n)}(s))dW(s)+\sum_{j=1}^N\int_0^{\cdot}\{(1-\tilde{\pi}_j(s))^{-\frac{\theta}{2}}-1\}dM_j(s);\nonumber\\ \Gamma^{\hat{\pi},\theta}&={\cal E}(\Pi^{\hat{\pi},\theta}),\ \Pi^{\hat{\pi},\theta}=-\frac{\theta}{2}\int_0^{\cdot}\hat{\pi}(s)^{\top}\sigma(Y^{(n)}(s))dW(s)+\sum_{j=1}^N\int_0^{\cdot}\{(1-\hat{\pi}_j(s))^{-\frac{\theta}{2}}-1\}dM_j(s). \end{align*} This shows that $\Gamma^{\hat{\pi},\theta}(s)=\Gamma^{\tilde{\pi},\theta}(s\wedge\tau^t_n)$ for $s\in[0,T]$. Then, we deduce from \eqref{eq:Jn00} that \begin{align} \log J_n(\tilde{\pi};t,i,z)=&\log\Ex_{t,i,z}^{\tilde{\pi},\theta}\left[\exp\left(\frac{\theta}{2}\int_t^{T\wedge \tau_n^t}L(\tilde{\pi}(s);Y(s),Z(s))ds\right)\right]\nonumber\\ \geq&\log\Ex_{t,i,z}^{\tilde{\pi},\theta}\left[\exp\left(\frac{\theta}{2}\int_t^{T\wedge \tau_n^t}L(\tilde{\pi}(s);Y(s),Z(s))ds+\frac{\theta}{2}\int_{T\wedge \tau_n^t}^{T\wedge \tau_{n+1}^t}L(0;Y(s),Z(s))ds\right)\right]\nonumber\\ =&\log\Ex_{t,i,z}^{\hat{\pi},\theta}\left[\exp\left(\frac{\theta}{2}\int_t^{T\wedge\tau_{n+1}^t}L(\hat{\pi}(s);Y(s),Z(s))ds\right)\right]\nonumber\\ =&\log J_{n+1}(\hat{\pi};t,i,z)\geq\inf_{\tilde{\pi}\in\tilde{\cal U}_{n+1}}\log J_{n+1}(\tilde{\pi};t,i,z). \end{align} Using \eqref{eq:Vnvalue} and Lemma~\ref{lem:jn=tildeJn}, it follows that $V_{n}(t,i,z)$ is nondecreasing in $n$ for fixed $(t,i,z)\in[0,T]\times D_n^0\times{\cal S}$.
Thus, the conclusion of the proposition holds. \end{proof} By virtue of Proposition~\ref{prop:Vnmonotone00}, for any $(t,i,z)\in[0,T]\times\mathbb{Z}_+\times{\cal S}$, we set $V^*(t,i,z):=\lim_{n\to\infty}V_n(t,i,z)$. Then, it holds that \begin{align}\label{eq:varphistar} \lim_{n\to\infty}\varphi_n(t,i,z)=\exp\left(-\frac\theta2V^*(t,i,z)\right)=:\varphi^*(t,i,z). \end{align} On the other hand, from Eq.~\eqref{eq:Vnvalue}, it is easy to see that $\varphi_n(t,0,z)=1$ for all $(t,z)\in[0,T]\times{\cal S}$. Then, Eq.~\eqref{eq:dpe4} above can be rewritten as: \begin{align}\label{eq:dpe5} \frac{\partial \varphi_n(t,i,z)}{\partial t}=&-q_{ii}\varphi_n(t,i,z)-\sum_{l\neq i,1\leq l\leq n}q_{il}\varphi_n(t,l,z)-\sum_{l>n}q_{il}\nonumber\\ &-\inf_{\pi\in U}\tilde{H}\left(\pi;i,z,(\varphi_n(t,i,z^j);\ j=0,1,\ldots,N)\right). \end{align} In terms of \eqref{eq:H}, we can conclude that, for $(\pi;i,z)\in U\times\mathbb{Z}_+\times{\cal S}$, $\tilde{H}(\pi;i,z,x)$ is concave in every component of $x\in[0,\infty)^{N+1}$, so is $\inf_{\pi\in U}\tilde{H}(\pi;i,z,x)$. We present the main result in this paper for the case of the countable state space. \begin{theorem}\label{thm:existD} Let $(t,i,z)\in[0,T]\times\mathbb{Z}_+\times{\cal S}$. Then, the limit function $\varphi^*(t,i,z)$ given in \eqref{eq:varphistar} above is a classical solution of the original DPE~\eqref{eq:dpe3}, i.e., it holds that \begin{align*} 0=&\frac{\partial \varphi^*(t,i,z)}{\partial t}+\sum_{l\neq i}q_{il}\left[\varphi^*(t,l,z)-\varphi^*(t,i,z)\right]+\inf_{\pi\in U}\tilde{H}\left(\pi;i,z,(\varphi^*(t,i,z^j);\ j=0,1,\ldots,N)\right) \end{align*} with terminal condition $\varphi^*(T,i,z)=1$ for all $(i,z)\in\mathbb{Z}_+\times{\cal S}$. \end{theorem} The proof of Theorem~\ref{thm:existD} will be split into proving a sequence of auxiliary lemmas first. We show the following result as a preparation. \begin{lemma}\label{lem:boundfordphi} Let $(i,z)\in \mathbb{Z}_+\times{\cal S}$. 
Then $(\frac{\partial\varphi_n(t,i,z)}{\partial t})_{n\geq i}$ is uniformly bounded in $t\in[0,T]$. \end{lemma} \begin{proof} We rewrite Eq.~\eqref{eq:dpe5} in the following form: \begin{align}\label{estm1} &\frac{\partial \varphi_n(t,i,z)}{\partial t}=-q_{ii}\varphi_n(t,i,z)-\sum_{l\neq i,1\leq l\leq n}q_{il}\varphi_n(t,l,z)-\sum_{l>n}q_{il}\nonumber\\ &\qquad-\inf_{\pi\in U}\hat{H}\left(\pi;i,z,(\varphi_n(t,i,z^j);\ j=0,1,\ldots,N)\right)+C(i,z)\varphi_n(t,i,z), \end{align} where, for $(i,z)\in\mathbb{Z}_+\times{\cal S}$, \begin{align}\label{estm2} C(i,z)=\bigg|\inf_{\pi\in U}\bigg\{&-\frac{\theta}{2}r(i)-\frac{\theta}{2}\pi^{\top}(\mu(i)-r(i)e_N)+\frac{\theta}{4}\left(1+\frac{\theta}{2}\right)\left\|\sigma(i)^{\top}\pi\right\|^2\nonumber\\ &+\sum_{j=1}^N\left(-1-\frac{\theta}{2}\pi_j\right)(1-z_j)\lambda_j(i,z)\bigg\}\bigg|, \end{align} and the nonnegative function \begin{align}\label{eq:hatH} \hat{H}(\pi;i,z,\bar{f}(z)):=\tilde{H}(\pi;i,z,\bar{f}(z))+C(i,z)f(z). \end{align} Because $\hat{H}(\pi;i,z,x)$ is concave in every component of $x\in[0,\infty)^{N+1}$, $\Phi(x):=\inf_{\pi\in U}\hat{H}(\pi;i,z,x)$ is also concave in every component of $x\in[0,\infty)^{N+1}$. It follows from Proposition~\ref{prop:Vnmonotone00} that $x^{(n)}:=(\varphi_n(t,i,z^j);\ j=0,1,\ldots,N)\in[0,1]^{N+1}$. Using Lemma~\ref{lem:conbound}, there exists a constant $C>0$ which is independent of $x^{(n)}$ such that $0\leq \Phi(x^{(n)})\leq C$ for all $n\in\mathbb{Z}_+$. Further, for fixed $(i,z)\in\mathbb{Z}_+\times{\cal S}$, \begin{align*} &\left|-q_{ii}\varphi_n(t,i,z)-\sum_{l\neq i,1\leq l\leq n}q_{il}\varphi_n(t,l,z)-\sum_{l>n}q_{il}+C(i,z)\varphi_n(t,i,z)\right| \leq-2q_{ii}+C(i,z). \end{align*} The desired result follows from Eq.~\eqref{estm1}. \end{proof} \begin{lemma}\label{lem:unfmconforphi} Let $(i,z)\in\mathbb{Z}_+\times{\cal S}$, then $(\varphi_n(t,i,z))_{n\geq i}$ (decreasingly) converges to $\varphi^*(t,i,z)$ uniformly in $t\in[0,T]$ as $n\to\infty$.
\end{lemma} \begin{proof} By Proposition~\ref{prop:Vnmonotone00}, Lemma~\ref{lem:boundfordphi}, and the Arzel\`a-Ascoli theorem, the sequence $(\varphi_n(\cdot,i,z))_{n\geq i}$ contains a uniformly convergent subsequence. Moreover, since Proposition~\ref{prop:Vnmonotone00} and \eqref{eq:varphistar} identify the (monotone, pointwise) limit of any such subsequence as $\varphi^*(\cdot,i,z)$, the whole sequence $\varphi_n(t,i,z)$ (decreasingly) converges to $\varphi^*(t,i,z)$ uniformly in $t\in[0,T]$ as $n\to\infty$. \end{proof} \begin{lemma}\label{lem:phinlobnd} Let $n\in\mathbb{Z}_+$. Consider the following linear system: for $(t,i,z)\in(0,T]\times D_n^0\times{\cal S}$, \begin{align}\label{eq:phin} \frac{\partial\phi_n(t,i,z)}{\partial t}=&(q_{ii}-C(i,z))\phi_n(t,i,z)+\sum_{l\neq i,1\leq l\leq n}q_{il}\phi_n(t,l,z),\nonumber\\ \phi_n(0,i,z)=&1, \end{align} where $C(i,z)$ is given by \eqref{estm2}. Then, there exists a measurable function $\phi^*(t,i,z)$ such that $\phi_n(t,i,z)\nearrow\phi^*(t,i,z)$ as $n\to\infty$ for each fixed $(t,i,z)$. Moreover, it holds that $0<\phi_n(T-t,i,z)\leq\varphi_n(t,i,z)\leq1$ for $(t,i,z)\in[0,T]\times D_n^0\times{\cal S}$. \end{lemma} \begin{proof} Let $(t,i,z)\in[0,T]\times D_n^0\times{\cal S}$ and define $g_n(t,i,z):=\varphi_n(T-t,i,z)$. It follows from Eq.~\eqref{estm1} that $g_n(\cdot,i,z)\in C^1((0,T])\cap C([0,T])$ for each fixed $(i,z)$ and satisfies that \begin{align}\label{g_n} \frac{\partial g_n(t,i,z)}{\partial t}=&(q_{ii}-C(i,z))g_n(t,i,z)+\sum_{l\neq i,1\leq l\leq n}q_{il}g_n(t,l,z)+\sum_{l>n}q_{il}\nonumber\\ &+Q(t,i,z,g_n(t,i,z)),\nonumber\\ g_n(0,i,z)=&1, \end{align} where $Q(t,i,z,x):=\inf_{\pi\in U}\hat{H}\left(\pi;i,z,x,g_n(t,i,z^1),\ldots,g_n(t,i,z^N)\right)$ for $x\in[0,\infty)$. We have from \eqref{eq:hatH} that $Q(t,i,z,x)\geq0$ for all $(t,x)\in[0,T]\times[0,\infty)$. Then $\sum_{l>n}q_{il}+Q(t,i,z,x)\geq0$. Note that the linear part of Eq.~\eqref{g_n} satisfies the $K$-type condition.
Then, using the comparison result of Lemma~\ref{comparison}, we obtain that $g_n(t,i,z)\geq\phi_n(t,i,z)$, and hence $\varphi_n(t,i,z)\geq\phi_n(T-t,i,z)$. Moreover, we deduce from Lemma~\ref{lem:sol-hjben2} that $\phi_n(t,i,z)>0$. By virtue of Eq.~\eqref{eq:phin}, we have that $\phi_{n+1}(t,i,z)$ with $(t,i,z)\in[0,T]\times D_{n+1}^0\times{\cal S}$ satisfies that \begin{equation}\label{eq:phin+1} \begin{split} \frac{\partial\phi_{n+1}(t,i,z)}{\partial t}=&(q_{ii}-C(i,z))\phi_{n+1}(t,i,z)+\sum_{l\neq i,1\leq l\leq n}q_{il}\phi_{n+1}(t,l,z)\\ &+q_{i,n+1}\phi_{n+1}(t,n+1,z),\\ \phi_{n+1}(0,i,z)=&1. \end{split} \end{equation} Because $q_{i,n+1}\phi_{n+1}(t,n+1,z)\geq0$ for $i\in D_n^0$, Lemma~\ref{comparison} shows that $\phi_{n+1}(t,i,z)\geq\phi_n(t,i,z)$ for all $(t,i,z)\in[0,T]\times{D_n^0}\times{\cal S}$. Therefore, there exists a measurable function $\phi^*(t,i,z)$ such that $\phi_n(t,i,z)\nearrow\phi^*(t,i,z)$ as $n\to\infty$ for each fixed $(t,i,z)\in[0,T]\times\mathbb{Z}_+\times{\cal S}$. \end{proof} \begin{lemma}\label{lem:lobndphistar} Let $(i,z)\in\mathbb{Z}_+\times{\cal S}$. Then, there exists a positive constant $\delta=\delta(i,z)$ such that $\varphi^*(t,i,z)\geq\delta$ for all $t\in[0,T]$. \end{lemma} \begin{proof} From Lemma~\ref{lem:phinlobnd}, we have that $\varphi_n(t,i,z)\geq\phi_n(T-t,i,z)$. Letting $n\rightarrow\infty$ and using Lemma~\ref{lem:unfmconforphi}, it follows that $\varphi^*(t,i,z)\geq\phi^*(T-t,i,z)\geq\phi_i(T-t,i,z)$. As $\phi_i(t,i,z)>0$ is continuous in $t\in[0,T]$, there exists a positive constant $\delta=\delta(i,z)$ such that $\inf_{t\in[0,T]}\phi_i(t,i,z)\geq\delta$. Therefore $\varphi^*(t,i,z)\geq\delta$ for all $t\in[0,T]$. \end{proof} We can finally conclude the proof of Theorem~\ref{thm:existD} using all of the previous results.
\noindent{\it Proof of Theorem~\ref{thm:existD}.}\quad We first prove that there exists a measurable function $\tilde{\varphi}(t,i,z)$ on $(t,i,z)\in[0,T]\times\mathbb{Z}_+\times{\cal S}$ such that $\lim_{n\to\infty}\frac{\partial\varphi_n(t,i,z)}{\partial t}=\tilde{\varphi}(t,i,z)$ for $(t,i,z)\in[0,T]\times\mathbb{Z}_+\times{\cal S}$. In fact, note that for $(t,i,z)\in[0,T]\times D_n^0\times{\cal S}$, $0\leq\varphi_{n+1}(t,i,z)\leq\varphi_{n}(t,i,z)\leq1$ for $n\in\mathbb{Z}_+$. Then \begin{align*} \sum_{l\neq i,1\leq l\leq n}q_{il}\varphi_n(t,l,z)+\sum_{l>n}q_{il}\geq\sum_{l\neq i,1\leq l\leq n+1}q_{il}\varphi_{n+1}(t,l,z)+\sum_{l>n+1}q_{il}. \end{align*} Together with \eqref{eq:varphistar}, this yields that $q_{ii}\varphi_n(t,i,z)\nearrow q_{ii}\varphi^*(t,i,z)$ as $n\to\infty$, and \begin{align}\label{eq:conver11} \sum_{l\neq i,1\leq l\leq n}q_{il}\varphi_n(t,l,z)+\sum_{l>n}q_{il}\searrow&\sum_{l\neq i,l\geq 1}q_{il}\varphi^*(t,l,z). \end{align} On the other hand, let $\Phi(x):=\inf_{\pi\in U}\tilde{H}(\pi;i,z,x)$ for $x\in[0,\infty)^{N+1}$. Then $\Phi:[0,\infty)^{N+1}\to\R$ is concave in every component of $x$. Let $x^*(t):=(\varphi^*(t,i,z^j);\ j=0,1,\ldots,N)$ and $x^{(n)}(t):=(\varphi_n(t,i,z^j);\ j=0,1,\ldots,N)$ for $n\in\mathbb{Z}_+$. Then $0\leq x^*(t)\leq x^{(n)}(t)$ for $n\in\mathbb{Z}_+$ and $\lim_{n\to\infty}x^{(n)}(t)=x^*(t)$ using \eqref{eq:varphistar}. Moreover, Lemma~\ref{lem:lobndphistar} gives that $\delta\leq x^*(t)\leq1<2$ componentwise. It follows from Lemma~\ref{lem:conconver} that $\lim_{n\to\infty}\Phi(x^{(n)}(t))=\Phi(x^*(t))$. Thus, by virtue of Eq.~\eqref{eq:dpe5}, as $n\to\infty$, one has \begin{align}\label{eq:expresstildevarphi} &\frac{\partial\varphi_n(t,i,z)}{\partial t}\to\tilde{\varphi}(t,i,z):=-q_{ii}\varphi^*(t,i,z)-\sum_{l\neq i,l\geq 1}q_{il}\varphi^*(t,l,z)-\Phi\left(x^*(t)\right).
\end{align} We next prove that for $(i,z)\in\mathbb{Z}_+\times{\cal S}$, $\frac{\partial\varphi_n(t,i,z)}{\partial t}\rightrightarrows\tilde{\varphi}(t,i,z)$ in $t\in[0,T]$ as $n\to\infty$. Here $\rightrightarrows$ denotes the uniform convergence. Eqs.~\eqref{eq:dpe5} and \eqref{eq:expresstildevarphi} first give that, for $(t,i,z)\in[0,T]\times D_n^0\times{\cal S}$, \begin{align}\label{eq:I-II-III} \frac{\partial \varphi_n(t,i,z)}{\partial t}-\tilde{\varphi}(t,i,z)&=B_1^{(n)}(t,i,z)-B_2^{(n)}(t,i,z)-B_3^{(n)}(t,i,z), \end{align} where \begin{align}\label{eq:Bn} B_1^{(n)}(t,i,z) &:= -q_{ii}(\varphi_n(t,i,z)-\varphi^*(t,i,z)),\nonumber\\ B_2^{(n)}(t,i,z) &:= \sum_{l\neq i,1\leq l\leq n}q_{il}(\varphi_n(t,l,z)-\varphi^*(t,l,z))+\sum_{l>n}q_{il}(1-\varphi^*(t,l,z)),\nonumber\\ B_3^{(n)}(t,i,z) &:= \Phi(x^{(n)}(t))-\Phi(x^*(t)). \end{align} Here $\Phi(x):=\inf_{\pi\in U}\tilde{H}(\pi;i,z,x)$ for $x\in[0,\infty)^{N+1}$, $x^{(n)}(t):=(\varphi_n(t,i,z^j);\ j=0,1,\ldots,N)$, and $x^{*}(t):=(\varphi^*(t,i,z^j);\ j=0,1,\ldots,N)$. Lemma~\ref{lem:unfmconforphi} guarantees that $\varphi_n(t,i,z)\rightrightarrows\varphi^*(t,i,z)$ in $t\in[0,T]$ as $n\to\infty$, and hence $B_1^{(n)}(t,i,z)\rightrightarrows0$ in $t\in[0,T]$ as $n\to\infty$. On the other hand, for any small $\varepsilon>0$, since $\sum_{l\neq i}q_{il}<\infty$, there exists $n_1\geq1$ such that $\sum_{l>n_1,l\neq i}q_{il}<\frac\varepsilon2$. Since, for all $1\leq l\leq n_1$, $\varphi_n(t,l,z)\rightrightarrows\varphi^*(t,l,z)$ in $t\in[0,T]$ as $n\to\infty$, there exists $n_2\geq1$ such that $\sup_{t\in[0,T]}\sum_{l\neq i,1\leq l\leq n_1}q_{il}(\varphi_n(t,l,z)-\varphi^*(t,l,z))\leq\frac\varepsilon2$ for $n>n_2$.
Hence, for all $n>n_1\vee n_2$, noting that $0\leq\varphi^*(t,i,z)\leq\varphi_n(t,i,z)\leq1$, it holds that \begin{equation}\label{II} \begin{split} |B_2^{(n)}(t,i,z)|=&\sum_{l\neq i,1\leq l\leq n_1}q_{il}(\varphi_n(t,l,z)-\varphi^*(t,l,z))+\sum_{l\neq i,n_1<l\leq n}q_{il}(\varphi_n(t,l,z)-\varphi^*(t,l,z))\\ &+\sum_{l>n}q_{il}(1-\varphi^*(t,l,z))\leq\frac\varepsilon2+\sum_{l>n_1}q_{il}\leq \frac\varepsilon2+\frac\varepsilon2=\varepsilon. \end{split} \end{equation} Thus, we deduce that $B_2^{(n)}(t,i,z)\rightrightarrows0$ in $t\in[0,T]$ as $n\to\infty$. We have from Lemma~\ref{lem:conbound} that for all $x\in\mathds{R}^{N+1}$ satisfying $0\leq x\leq 2$, $0\leq\Phi(x)\leq C$ for some constant $C>0$. As for $j=0,1,\ldots,N$, $\varphi_n(t,i,z^j)\rightrightarrows\varphi^*(t,i,z^j)$ in $t\in[0,T]$ as $n\rightarrow\infty$, Lemma~\ref{lem:lobndphistar} yields that there exists a constant $\delta>0$ such that $1\geq\varphi_n(t,i,z^j)\geq\varphi^*(t,i,z^j)\geq\delta>0$ for all $t\in[0,T]$. Further, there exists $\lambda^j_n(t)\in[0,1]$ such that $\varphi_n(t,i,z^j)=(1-\lambda^j_n(t))\varphi^*(t,i,z^j)+2\lambda^j_n(t)$. In turn, $\lambda^j_n(t)=\frac{\varphi_n(t,i,z^j)-\varphi^*(t,i,z^j)}{2-\varphi^*(t,i,z^j)}$, and hence for all $j=0,1,\ldots,N$, $\lambda^j_n(t)\rightrightarrows0$ in $t\in[0,T]$ as $n\rightarrow\infty$. Similar to that in \eqref{concaveexpansion1}, we can derive that \begin{align}\label{eq:infdiffer} \Phi(x^{(n)}(t))\geq \Phi(x^*(t))\prod_{j=0}^N(1-\lambda^j_n(t))+\Lambda^{(n)}_1(t). \end{align} As with the first term on the r.h.s. of the inequality \eqref{eq:infdiffer}, every term in $\Lambda^{(n)}_1(t)$ is a product of $N+1$ factors, at least one of which is of the form $\lambda^j_n(t)$, while the remaining factors are nonnegative and bounded by $1\vee C$. Due to the fact that $\lambda^j_n(t)\rightrightarrows0$ in $t\in[0,T]$ as $n\to\infty$, we have that $\Lambda^{(n)}_1(t)\rightrightarrows0$ in $t\in[0,T]$ as $n\to\infty$.
Moreover, it follows from \eqref{eq:infdiffer} that \begin{align}\label{eq:infdiffer2} &\left(1-\prod_{j=0}^N(1-\lambda^j_n(t))\right)\Phi(x^*(t))-\Lambda^{(n)}_1(t)\geq\Phi(x^*(t))-\Phi(x^{(n)}(t))=-B_3^{(n)}(t,i,z). \end{align} It is not difficult to see that the l.h.s. of the inequality \eqref{eq:infdiffer2} tends to $0$ uniformly in $t\in[0,T]$ as $n\rightarrow\infty$. On the other hand, there exists $\tilde{\lambda}^j_n(t)\in[0,1]$ such that $\varphi^*(t,i,z^j)=(1-\tilde{\lambda}^j_n(t))\varphi_n(t,i,z^j)+0\cdot\tilde{\lambda}^j_n(t)$, and in turn $\tilde{\lambda}^j_n(t)=\frac{\varphi_n(t,i,z^j)-\varphi^*(t,i,z^j)}{\varphi_n(t,i,z^j)}\rightrightarrows0$ in $t\in[0,T]$ as $n\to\infty$, since $\varphi_n(t,i,z^j)\geq\delta>0$. It follows that \begin{align}\label{eq:infdiffer1} &\left(1-\prod_{j=0}^N(1-\tilde{\lambda}^j_n(t))\right)\Phi(x^{(n)}(t))-\Lambda^{(n)}_2(t)\geq\Phi(x^{(n)}(t))-\Phi(x^{*}(t))=B_3^{(n)}(t,i,z), \end{align} where the form of $\Lambda^{(n)}_2(t)$ is similar to that of $\Lambda^{(n)}_1(t)$, but it is related to $\tilde{\lambda}^j_n(t)$ for $j=0,1,\ldots,N$. As in \eqref{eq:infdiffer2}, the l.h.s. of the inequality~\eqref{eq:infdiffer1} tends to $0$ uniformly in $t\in[0,T]$ as $n\to\infty$. Hence, it follows from \eqref{eq:infdiffer2} and \eqref{eq:infdiffer1} that $B_3^{(n)}(t,i,z)\rightrightarrows0$ in $t\in[0,T]$ as $n\rightarrow\infty$. Thus, we have proved that for $(i,z)\in\mathbb{Z}_+\times{\cal S}$, $\frac{\partial\varphi_n(t,i,z)}{\partial t}\rightrightarrows\tilde{\varphi}(t,i,z)$ in $t\in[0,T]$ as $n\to\infty$. Finally, we show that, for $(i,z)\in\mathbb{Z}_+\times{\cal S}$, $\varphi^*(T,i,z)-\varphi^*(t,i,z)=\int_t^T\tilde{\varphi}(s,i,z)ds$ for $t\in[0,T]$. For $n\in\mathbb{Z}_+$, it follows from Proposition~\ref{prop:Vnmonotone00} that $\varphi_n(\cdot,i,z)\in C^1([0,T))\cap C([0,T])$ for $(i,z)\in D_n^0\times{\cal S}$.
This implies that \begin{equation}\label{differ} \begin{split} \varphi^*(T,i,z)-\varphi^*(t,i,z)&=\varphi^*(T,i,z)-\varphi^*(t,i,z)-(\varphi_n(T,i,z)-\varphi_n(t,i,z))\\ &\quad+\int_t^T\frac{\partial\varphi_n(s,i,z)}{\partial t}ds. \end{split} \end{equation} Lemma~\ref{lem:unfmconforphi} ensures that $\varphi^*(T,i,z)-\varphi^*(t,i,z)-(\varphi_n(T,i,z)-\varphi_n(t,i,z))\to0$ as $n\to\infty$. From Lemma~\ref{lem:boundfordphi} and the uniform convergence of $\frac{\partial\varphi_n(t,i,z)}{\partial t}$ to $\tilde{\varphi}(t,i,z)$ in $t\in[0,T]$, it follows that $\tilde{\varphi}(t,i,z)$ is continuous in $t\in[0,T]$ and $\int_t^T\frac{\partial\varphi_n(s,i,z)}{\partial t}ds\to\int_t^T\tilde{\varphi}(s,i,z)ds$ as $n\to\infty$. Letting $n\to\infty$ in \eqref{differ} thus gives $\varphi^*(T,i,z)-\varphi^*(t,i,z)=\int_t^T\tilde{\varphi}(s,i,z)ds$ for each $t\in[0,T]$; since $\tilde{\varphi}(\cdot,i,z)$ is continuous, $\frac{\partial\varphi^*(t,i,z)}{\partial t}=\tilde{\varphi}(t,i,z)$ holds for all $t\in[0,T]$. Hence, $\varphi^*(t,i,z)$ is indeed a classical solution of the original DPE \eqref{eq:dpe3}. \hfill$\Box$\\ The verification argument for the case of the countable state space $\mathbb{Z}_+=\{1,2,\ldots\}$ is presented in the next key proposition. Before stating it, we impose the following mild conditions on the model coefficients: \begin{itemize} \item[({C.1})] There exist positive constants $c_1$, $c_2$, $\delta$ and $K$ such that $c_1\|\xi\|^2\leq\xi^\top\sigma(i)\sigma(i)^\top\xi\leq c_2\|\xi\|^2$ for all $\xi\in\R^N$ and $i\in\mathbb{Z}_+$, $\delta\leq\lambda(i,z)\leq K$ for all $(i,z)\in\mathbb{Z}_+\times{\cal S}$, and $r(i)+\|\mu(i)\|\leq K$ for all $i\in\mathbb{Z}_+$. \end{itemize} The first condition on $\sigma(i)$ is the uniform ellipticity of the volatility matrix $\sigma(i)$ of the stocks. \begin{proposition}\label{prop:verivalue} Let the condition {\rm(C.1)} hold. Let $\varphi^*(t,i,z)$ with $(t,i,z)\in[0,T]\times\mathbb{Z}_+\times{\cal S}$ be given by \eqref{eq:varphistar}.
Then, for all $(t,i,z)\in[0,T]\times\mathbb{Z}_+\times{\cal S}$, \begin{align}\label{veriphistar} \varphi^*(t,i,z)=\inf_{\tilde{\pi}\in\tilde{\cal U}}\Ex_{t,i,z}^{\tilde{\pi},\theta}\left[\exp\left(\frac{\theta}{2}\int_t^TL(\tilde{\pi}(s);Y(s),Z(s))ds\right)\right]. \end{align} \end{proposition} \begin{proof} From Proposition~\ref{prop:verithemfinite} and Lemma~\ref{lem:jn=tildeJn}, it follows that, for $n\in\mathbb{Z}_+$, \begin{align*} \varphi_n(t,i,z)=&\inf_{\tilde{\pi}\in\tilde{\cal U}_n}\Ex_{t,i,z}^{\tilde{\pi},\theta}\left[\exp\left(\frac{\theta}{2}\int_t^TL(\tilde{\pi}(s);Y^{(n)}(s),Z(s))ds\right)\right]\nonumber\\ =&\inf_{\tilde{\pi}\in\tilde{\cal U}_n}\Ex_{t,i,z}^{\tilde{\pi},\theta}\left[\exp\left(\frac{\theta}{2}\int_t^{T\wedge \tau^t_n}L(\tilde{\pi}(s);Y(s),Z(s))ds\right)\right]. \end{align*} Then, for any $\varepsilon>0$, there exists $\tilde{\pi}^\varepsilon\in\tilde{\mathcal{U}}_n$ such that \begin{align}\label{eq:varphin+epsilon} \varphi_n(t,i,z)+\varepsilon>\Ex_{t,i,z}^{\tilde{\pi}^\varepsilon,\theta}\left[\exp\left(\frac{\theta}{2}\int_t^{T\wedge \tau^t_n}L(\tilde{\pi}^\varepsilon(s);Y(s),Z(s))ds\right)\right]. \end{align} Define $\hat{\pi}^\varepsilon(t):=\tilde{\pi}^\varepsilon(t)\mathds{1}_{\{t\leq\tau_n\}}$ for $t\in[0,T]$. Then, it holds that $\hat{\pi}^\varepsilon\in\tilde{\mathcal{U}}$, and $\Gamma^{\hat{\pi}^\varepsilon,\theta}(t,T)=\Gamma^{\tilde{\pi}^\varepsilon,\theta}(t,T\wedge\tau^t_n)$ for $t\in[0,T]$. Also note that $L(0;i,z)=-r(i)\leq0$ for all $(i,z)\in\mathbb{Z}_+\times{\cal S}$.
Then, the inequality~\eqref{eq:varphin+epsilon} can be continued as \begin{align} \varphi_n(t,i,z)+\varepsilon>&\Ex_{t,i,z}^{\tilde{\pi}^\varepsilon,\theta}\left[\exp\left(\frac{\theta}{2}\int_t^{T\wedge \tau^t_n}L(\tilde{\pi}^\varepsilon(s);Y(s),Z(s))ds\right)\right]\nonumber\\ =&\Ex_{t,i,z}^{\hat{\pi}^\varepsilon,\theta}\left[\exp\left(\frac{\theta}{2}\int_t^{T\wedge \tau^t_n}L(\hat{\pi}^\varepsilon(s);Y(s),Z(s))ds\right)\right]\nonumber\\ \geq&\Ex_{t,i,z}^{\hat{\pi}^\varepsilon,\theta}\left[\exp\left(\frac{\theta}{2}\int_t^{T}L(\hat{\pi}^\varepsilon(s);Y(s),Z(s))ds\right)\right]\nonumber\\ \geq&\inf_{\tilde{\pi}\in\tilde{\cal U}}\Ex_{t,i,z}^{\tilde{\pi},\theta}\left[\exp\left(\frac{\theta}{2}\int_t^TL(\tilde{\pi}(s);Y(s),Z(s))ds\right)\right]. \end{align} Letting $n\to\infty$ and then $\varepsilon\to0$, we get \begin{align}\label{phistaroninf} \varphi^*(t,i,z)\geq\inf_{\tilde{\pi}\in\tilde{\cal U}}\Ex_{t,i,z}^{\tilde{\pi},\theta}\left[\exp\left(\frac{\theta}{2}\int_t^TL(\tilde{\pi}(s);Y(s),Z(s))ds\right)\right]. \end{align} On the other hand, using Theorem~\ref{thm:existD} and Proposition~\ref{prop:verithemfinite}, $\varphi^*(t,i,z)$ is strictly positive and $\varphi^*(t,i,z)\leq\varphi_n(t,i,z)\leq1$ for all $n\geq1$. Then, under the condition (C.1), by applying a similar argument to the proof of \eqref{eq:itoveri}, we have that, for any $\tilde{\pi}\in\tilde{\cal U}$, \begin{align*} &\Ex_{t,i,z}^{\tilde{\pi},\theta}\left[\varphi^*(T,Y(T),Z(T))\exp\left(\frac{\theta}{2}\int_t^TL(\tilde{\pi}(u);Y(u),Z(u))du\right)\right]\geq\varphi^*(t,i,z). \end{align*} Because $\varphi^*(T,i,z)=1$ for all $(i,z)\in\mathbb{Z}_+\times{\cal S}$, we deduce that \begin{align}\label{infonphistar} \inf_{\tilde{\pi}\in\tilde{\cal U}}\Ex_{t,i,z}^{\tilde{\pi},\theta}\left[\exp\left(\frac{\theta}{2}\int_t^TL(\tilde{\pi}(s);Y(s),Z(s))ds\right)\right]\geq\varphi^*(t,i,z).
\end{align} The equality \eqref{veriphistar} therefore follows by combining \eqref{phistaroninf} and \eqref{infonphistar}, which completes the proof. \end{proof} Similar to that in Proposition~\ref{prop:verithemfinite}, we can construct a candidate optimal $\mathbb{G}$-predictable feedback strategy $\tilde{\pi}^*$ by, for $t\in[0,T]$, \begin{align}\label{eq:optimaltildepis} \tilde{\pi}^*(t)&:={\rm diag}\left((1-Z_j(t-))_{j=1}^N\right)\nonumber\\ &\quad\times\argmin_{\pi\in U}\tilde{H}\left(\pi;Y(t-),Z(t-),(\varphi^*(t,Y(t-),Z^j(t-));\ j=0,1,\ldots,N)\right). \end{align} We first prove that $\tilde{\pi}^*$ can be characterized as the limit of a sequence of well-defined admissible strategies. \begin{lemma}\label{lem:approxpistar} Let the condition {\rm(C.1)} hold. There exists a sequence of strategies $(\tilde{\pi}^{(n,*)})_{n\in\mathbb{Z}_+}\subset\tilde{\mathcal{U}}$ such that $\lim_{n\to\infty}\tilde{\pi}^{(n,*)}(t)=\tilde{\pi}^*(t)$ for $t\in[0,T]$, $\Px$-a.s., and further $\lim_{n\to\infty}J(\tilde{\pi}^{(n,*)};t,i,z)=\varphi^*(t,i,z)$ for $(t,i,z)\in[0,T]\times\mathbb{Z}_+\times{\cal S}$, $\Px$-a.s. Here, the objective functional $J$ is defined in~\eqref{eq:J}. \end{lemma} \begin{proof} For fixed $(i,z,x)\in\mathbb{Z}_+\times\mathcal{S}\times(0,\infty)^{N+1}$, we have that $\tilde{H}\left(\pi;i,z,x\right)$ is strictly convex w.r.t. $\pi\in U$, and hence $\Phi(i,z,x):=\argmin_{\pi\in U}\tilde{H}\left(\pi;i,z,x\right)$ is well defined. Note that $\Phi(i,z,\cdot)$ maps $(0,\infty)^{N+1}$ to $U$ and satisfies the first-order condition $\frac{\partial\tilde{H}}{\partial\pi_j}\left(\Phi(i,z,x);i,z,x\right)=0$ for $j=1,\ldots,N$. Then, the Implicit Function Theorem yields that $\Phi(i,z,x)$ is continuous in $x$. Let $x^{(n)}(t):=(\varphi_n(t,Y^{(n)}(t-),Z^j(t-));\ j=0,1,\ldots,N)$.
It follows from Proposition~\ref{prop:verithemfinite} and Lemma~\ref{lem:jn=tildeJn} that, for $t\in[0,T]$, the strategy \begin{equation} \tilde{\pi}^{(n,*)}(t):={\rm diag}((1-Z_j(t-))_{j=1}^N)\Phi(Y(t-),Z(t-),x^{(n)}(t))\mathds{1}_{\{t\leq\tau_n\}}\nonumber \end{equation} belongs to $\tilde{\mathcal{U}}_n\cap\tilde{\mathcal{U}}$, and further it satisfies that \begin{align} \varphi_n(t,i,z)&=\Ex_{t,i,z}^{{\tilde\pi^{(n,*)}},\theta}\left[\exp\left(\frac{\theta}{2}\int_t^{T\wedge\tau^t_n}L(\tilde{\pi}^{(n,*)}(s);Y(s),Z(s))ds\right)\right]. \end{align} Lemma~\ref{lem:unfmconforphi} gives that $\lim_{n\to\infty}\|x^{(n)}(t)-x^*(t)\|=0$ for $t\in[0,T]$, $\Px$-a.s., where $x^*(t):=(\varphi^*(t,Y(t-),Z^j(t-));\ j=0,1,\ldots,N)$. We define the predictable process $\tilde{\pi}^*(t):={\rm diag}((1-Z_j(t-))_{j=1}^N)\Phi(Y(t-),Z(t-),x^*(t))$ for $t\in[0,T]$. By Lemma~\ref{lem:lobndphistar} and the continuity of $\Phi(i,z,\cdot)$, we obtain $\lim_{n\to\infty}\tilde{\pi}^{(n,*)}(t)=\tilde{\pi}^*(t)$ for $t\in[0,T]$, a.s. Moreover, it holds that \begin{align*} &J(\tilde{\pi}^{(n,*)};t,i,z)=\Ex_{t,i,z}^{{\tilde\pi^{(n,*)}},\theta}\left[\exp\left(\frac{\theta}{2}\int_t^TL(\tilde\pi^{(n,*)}(s);Y(s),Z(s))ds\right)\right]\nonumber\\ &\qquad=\Ex_{t,i,z}^{{\tilde\pi^{(n,*)}},\theta}\left[\exp\left(\frac{\theta}{2}\int_t^{T\wedge\tau^t_n}L(\tilde\pi^{(n,*)}(s);Y(s),Z(s))ds+\frac{\theta}{2}\int_{T\wedge\tau^t_n}^TL(0;Y(s),Z(s))ds\right)\right]\nonumber\\ &\qquad\leq\Ex_{t,i,z}^{{\tilde\pi^{(n,*)}},\theta}\left[\exp\left(\frac{\theta}{2}\int_t^{T\wedge\tau^t_n}L(\tilde\pi^{(n,*)}(s);Y(s),Z(s))ds\right)\right]=\varphi_n(t,i,z). \end{align*} Proposition~\ref{prop:verivalue} then yields that $\varphi^*(t,i,z)\leq J(\tilde{\pi}^{(n,*)};t,i,z)\leq\varphi_n(t,i,z)$ for $n\in\mathbb{Z}_+$. Together with Lemma~\ref{lem:unfmconforphi}, this verifies that $\lim_{n\to\infty}J(\tilde{\pi}^{(n,*)};t,i,z)=\varphi^*(t,i,z)$ for $(t,i,z)\in[0,T]\times\mathbb{Z}_+\times{\cal S}$, a.s.
\end{proof} \begin{proposition}\label{prop:admiss} Let the condition {\rm(C.1)} hold. Then, the optimal feedback strategy $\tilde{\pi}^*$ given by \eqref{eq:optimaltildepis} is admissible, i.e., $\tilde{\pi}^*\in\tilde{\cal U}$. \end{proposition} \begin{proof} Under the condition {\rm(C.1)}, it is not difficult to verify that there exists a constant $C>0$ such that $L(\pi;i,z)\geq-C$ for all $(\pi,i,z)\in U\times\mathbb{Z}_+\times\mathcal{S}$. Thanks to Proposition~\ref{prop:verivalue}, we have that \begin{align*} \varphi^*(t,i,z)&=\inf_{\tilde{\pi}\in\tilde{\cal U}}\Ex_{t,i,z}^{\tilde{\pi},\theta}\left[\exp\left(\frac{\theta}{2}\int_t^TL(\tilde{\pi}(s);Y(s),Z(s))ds\right)\right]\\ &\geq\inf_{\tilde{\pi}\in\tilde{\cal U}}\Ex_{t,i,z}^{\tilde{\pi},\theta}\left[\exp\left(-\frac{\theta}{2}\int_t^TCds\right)\right]=\exp\left(-\frac\theta2C(T-t)\right),\nonumber \end{align*} for $(t,i,z)\in[0,T]\times{\mathbb{Z}_+}\times{\cal S}$. Hence, for $t\in[0,T]$, \begin{align}\label{eq:xstarbound} x^*(t)=(\varphi^*(t,Y(t-),Z^j(t-));\ j=0,1,\ldots,N)\in[e^{-\frac\theta2C(T-t)},1]^{N+1}. \end{align} The continuity of $\Phi(i,z,x):=\argmin_{\pi\in U}\tilde{H}\left(\pi;i,z,x\right)$ gives that $\tilde{\pi}^*(t)$ is uniformly bounded in $t\in[0,T]$ by some constant $C_1>0$. Moreover, the first-order condition yields that, for all $j=1,\ldots,N$, if $Z_j(t-)=0$, \begin{align}\label{eq:pistarbelow2} (1-\tilde{\pi}^*_j(t))^{-\frac\theta2-1} =&\Bigg[(\mu_j(Y(t-))-r(Y(t-)))-\frac\theta2\left(1+\frac\theta2\right)\sum_{i=1}^N(\sigma(Y(t-))^\top\sigma(Y(t-)))_{ji}\tilde{\pi}^*_i(t)\nonumber\\ &\quad+\frac\theta2\lambda_j(Y(t-),Z(t-))\Bigg] \frac{\varphi^*(t,Y(t-),Z(t-))}{\lambda_j(Y(t-),Z(t-))\varphi^*(t,Y(t-),Z^j(t-))}\nonumber\\ \leq& C_2, \end{align} where we used the condition (C.1) and \eqref{eq:xstarbound}. Note that $\tilde{\pi}^*_j(t)=0$ if $Z_j(t-)=1$; hence $\tilde{\pi}^*$ is also uniformly bounded away from $1$.
This implies that the generalized Novikov's condition holds in the countably infinite state case, and hence $\tilde{\pi}^*$ is admissible. \end{proof} The above verification results (Propositions~\ref{prop:verivalue} and~\ref{prop:admiss}) can be seen as a uniqueness result for the dynamic programming equation. Under the condition (C.1), we can also establish an error estimate on the approximation of the sequence of strategies $\tilde{\pi}^{(n,*)}$ to the optimal strategy $\tilde{\pi}^{*}$ in terms of the objective functional $J$ (see~\eqref{eq:J}), which is given in the following lemma. \begin{lemma}\label{lem:errorestimate} Let $n\in\mathbb{Z}_+$. Under the condition {\rm(C.1)}, for $(t,i,z)\in[0,T]\times D_n\times{\cal S}$, there exists a constant $C>0$ which is independent of $n$ such that \begin{align*} \left|J(\tilde{\pi}^{(n,*)};t,i,z)-J(\tilde{\pi}^{(*)};t,i,z)\right|\leq C\left(1-\sum_{j=1}^na^{(n)}_{ij}(T-t)\right). \end{align*} Here $a^{(n)}_{ij}(T-t)=\delta_{ij}+(T-t)q_{ij}+\sum_{k=1}^\infty\sum_{1\leq l_1,\ldots,l_k\leq n}\frac{(T-t)^{k+1}}{(k+1)!}q_{il_1}q_{l_1l_2}\cdots q_{l_k j}$. \end{lemma} \begin{proof} By Proposition 4.5, $J(\tilde{\pi}^{(n,*)};t,i,z)\to \varphi^*(t,i,z)=J(\tilde{\pi}^*;t,i,z)$ as $n\to\infty$. On the other hand, it can be verified that there exist constants $\gamma\in(0,1)$ and $C_1>0$ such that $\tilde{\pi}^*(t)\in[-C_1,1-\gamma]^N$ for all $t\in[0,T]$, a.s. Then, using \eqref{eq:L0}, it follows that $L(\tilde{\pi}^*(t);Y(t),Z(t))\leq C_2$ a.s. for $t\in[0,T]$, where $C_2$ is a positive constant.
Therefore, by noting $\tilde{\pi}^*\in\tilde{\mathcal{U}}_n$, we have that \begin{align*} \varphi^*(t,i,z)&=\Ex_{t,i,z}^{\tilde{\pi}^*,\theta}\left[\exp\left(\frac{\theta}{2}\int_t^TL(\tilde{\pi}^*(s);Y(s),Z(s))ds\right)\right]\notag\\ &\geq\Ex_{t,i,z}^{\tilde{\pi}^*,\theta}\left[\exp\left(\frac{\theta}{2}\int_t^TL(\tilde{\pi}^*(s);Y(s),Z(s))ds\right)\mathbf{1}_{\{\tau^t_n>T\}}\right]\notag\\ &=\Ex_{t,i,z}^{\tilde{\pi}^*,\theta}\left[\exp\left(\frac{\theta}{2}\int_t^{T\wedge\tau^t_n}L(\tilde{\pi}^*(s);Y(s),Z(s))ds\right)\mathbf{1}_{\{\tau^t_n>T\}}\right]\notag\\ &=\Ex_{t,i,z}^{\tilde{\pi}^*,\theta}\left[\exp\left(\frac{\theta}{2}\int_t^{T\wedge\tau^t_n}L(\tilde{\pi}^*(s);Y(s),Z(s))ds\right)\right]\notag\\ &\qquad-\Ex_{t,i,z}^{\tilde{\pi}^*,\theta}\left[\exp\left(\frac{\theta}{2}\int_t^{T\wedge\tau^t_n}L(\tilde{\pi}^*(s);Y(s),Z(s))ds\right)\mathbf{1}_{\{\tau^t_n\leq T\}}\right]\notag\\ &\geq\varphi_n(t,i,z)-\Ex_{t,i,z}^{\tilde{\pi}^*,\theta}\left[e^{\frac{\theta C_2}{2}(T\wedge\tau^t_n-t)}\mathbf{1}_{\{\tau^t_n\leq T\}}\right]\notag\\ &\geq\varphi_n(t,i,z)-C_3\mathbb{P}_{t,i,z}^{\tilde{\pi}^*,\theta}(\tau^t_n\leq T), \end{align*} where $C_3:=e^{\frac{\theta C_2T}{2}}$ and $\varphi_n(t,i,z)$ is defined in Proposition~\ref{prop:Vnmonotone00}. Using the given inequality $\varphi^*(t,i,z)\leq J(\tilde{\pi}^{(n,*)};t,i,z)\leq\varphi_n(t,i,z)$ in the proof of Lemma \ref{lem:approxpistar}, under the condition (C.1), we arrive at \begin{align*} \left|J(\tilde{\pi}^{(n,*)};t,i,z)-J(\tilde{\pi}^{(*)};t,i,z)\right|&=J(\tilde{\pi}^{(n,*)};t,i,z)-\varphi^*(t,i,z)\leq\varphi_n(t,i,z)-\varphi^*(t,i,z)\nonumber\\ &\leq C_3\mathbb{P}_{t,i,z}^{\tilde{\pi}^*,\theta}(\tau^t_n\leq T). \end{align*} Note that, by Proposition 4.5, $Y$ is also a Markov chain with the generator $Q=(q_{ij})$ under $\mathbb{P}_{t,i,z}^{\tilde{\pi}^*,\theta}$. Then $\mathbb{P}_{t,i,z}^{\tilde{\pi}^*,\theta}(\tau^t_n\leq T)\to0$ as $n\to\infty$. 
On the other hand, $\tau^t_n$ is the absorption time of $(Y^{(n)}(s))_{s\in[t,T]}$, whose generator $A_n$ is given by \eqref{eq:An}. Hence, using Section~11.2.3 in Chapter 11 of~\cite{BieRut04}, we also have that $\Px_{t,i,z}^{\tilde{\pi}^*,\theta}(\tau^t_n\leq T)=1-\sum_{j=1}^na^{(n)}_{ij}(T-t)$. This completes the proof. \end{proof} We next provide an example in which the error estimate $1-\sum_{j=1}^na_{ij}^{(n)}(T-t)$ in Lemma~\ref{lem:errorestimate} admits a closed-form representation. Let us consider the following specific generator given by \begin{align*} Q=\left[\begin{matrix} -1 & \frac12 & \frac14 & \dots & \frac1{2^{n-1}} & \frac1{2^n} & \dots \\ \frac12 & -1 & \frac14 & \dots & \frac1{2^{n-1}} & \frac1{2^n} & \dots\\ \frac12 & \frac14 & -1 & \dots & \frac1{2^{n-1}} & \frac1{2^n} & \dots \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ \frac12 & \frac14 & \frac18 &\dots & \frac1{2^{n-1}} & -1 & \dots\\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ \end{matrix}\right]. \end{align*} Then, for any $l\leq n$, $\sum_{j=1}^nq_{lj}=\sum_{j=1}^{n-1}\frac1{2^j}-1=-\frac{1}{2^{n-1}}$. Since every row of the truncated generator sums to the same constant $-\frac1{2^{n-1}}$, the all-ones vector is an eigenvector of each of its matrix powers, and therefore, for any $i\leq n$, \begin{align*} \sum_{j=1}^na^{(n)}_{ij}(T-t)&=\sum_{k=0}^\infty\frac{(T-t)^k}{k!}\left(\frac{-1}{2^{n-1}}\right)^k=e^{-\frac{T-t}{2^{n-1}}}. \end{align*} It follows that, for $(t,i,z)\in[0,T]\times D_n\times{\cal S}$, we have the explicit error estimate \begin{eqnarray*} \left|J(\tilde{\pi}^{(n,*)};t,i,z)-J(\tilde{\pi}^{(*)};t,i,z)\right|\leq C\left(1-e^{-\frac{T-t}{2^{n-1}}}\right), \end{eqnarray*} where $C>0$ is independent of $n$. \begin{remark}\label{rem:qijt} It is also worth mentioning that the method used in this paper can be applied to treat the case where the regime-switching process $Y$ is a time-inhomogeneous Markov chain with a time-dependent generator given by $Q(t)=(q_{ij}(t))_{i,j\in\mathbb{Z}_+}$ for $t\in[0,T]$.
Here, for $t\in[0,T]$, $q_{ii}(t)\leq0$ for $i\in\mathbb{Z}_+$, $q_{ij}(t)\geq0$ for $i\neq j$, and $\sum_{j=1}^{\infty}q_{ij}(t)=0$ for $i\in\mathbb{Z}_+$ (i.e., $\sum_{j\neq i}q_{ij}(t)=-q_{ii}(t)$ for $i\in\mathbb{Z}_+$). Also, for $i,j\in\mathbb{Z}_+$, $t\mapsto q_{ij}(t)$ is continuous on $[0,T]$, and the infinite summation $\sum_{j\in\mathbb{Z}_+}q_{ij}(t)$ is uniformly convergent in $t\in[0,T]$. \end{remark} \noindent \textbf{Acknowledgements}: L. Bo is supported by Natural Science Foundation of China under grant 11471254 and the Key Research Program of Frontier Sciences of the Chinese Academy of Science under grant QYZDB-SSW-SYS009. X. Yu is supported by the Hong Kong Early Career Scheme under grant 25302116. The authors would like to thank two anonymous referees for their careful reading and helpful comments, which improved the presentation of this paper.
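As an editorial aside, the closed-form error estimate in the example above lends itself to a quick numerical sanity check. The sketch below is hypothetical and not part of the paper's analysis: the truncation size $n=4$ and horizon $T-t=1$ are arbitrary choices, and the matrix exponential defining the $a^{(n)}_{ij}$ is computed by a plain truncated Taylor series, which suffices here because the entries of the truncated generator are small.

```python
import math

def truncated_generator(n):
    # n-by-n truncation of the explicit generator from the example:
    # q_{lj} = 2^{-j} for j < l, q_{ll} = -1, q_{lj} = 2^{-(j-1)} for j > l
    # (indices l, j are 1-based, matching the matrix displayed in the text).
    Q = [[0.0] * n for _ in range(n)]
    for l in range(1, n + 1):
        for j in range(1, n + 1):
            if j < l:
                Q[l - 1][j - 1] = 2.0 ** (-j)
            elif j == l:
                Q[l - 1][j - 1] = -1.0
            else:
                Q[l - 1][j - 1] = 2.0 ** (-(j - 1))
    return Q

def mat_mult(A, B):
    m = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(m)]
            for i in range(m)]

def expm(A, terms=40):
    # Matrix exponential via the truncated Taylor series sum_k A^k / k!.
    m = len(A)
    S = [[float(i == j) for j in range(m)] for i in range(m)]
    term = [row[:] for row in S]
    for k in range(1, terms + 1):
        term = [[x / k for x in row] for row in mat_mult(term, A)]
        S = [[S[i][j] + term[i][j] for j in range(m)] for i in range(m)]
    return S

n, horizon = 4, 1.0   # arbitrary choices for the check
Q = truncated_generator(n)

# Step 1 of the example: every row of the truncation sums to -1/2^(n-1).
for row in Q:
    assert abs(sum(row) + 2.0 ** (-(n - 1))) < 1e-12

# Step 2: the all-ones vector is then an eigenvector of exp((T-t)Q), so
# sum_j a^{(n)}_{ij}(T-t) = exp(-(T-t)/2^(n-1)) for every row i.
P = expm([[horizon * q for q in row] for row in Q])
for row in P:
    assert abs(sum(row) - math.exp(-horizon / 2.0 ** (n - 1))) < 1e-9
```

The two assertions mirror the two displayed computations of the example: the constant row sums $-1/2^{n-1}$ of the truncated generator and, via the eigenvector observation, the identity $\sum_{j=1}^n a^{(n)}_{ij}(T-t)=e^{-(T-t)/2^{n-1}}$.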
\section{Introduction} \label{secintro} Solar flares result from the abrupt release of free magnetic energy that has previously been stored in the coronal magnetic field by flux emergence and surface motions \citep{Forbes06}. Most of the strongest flares are eruptive \citep[as reviewed by][]{SchrijverCospar09}. For the latter, the standard model attributes the flare energy release to magnetic reconnection that occurs in the wake of coronal mass ejections \citep{Shibata95,LinForbes00,Moore01,Priest02}. Several flare-related phenomena impact the solar atmosphere itself. To be specific, there are photospheric sunquakes \citep{Zhar11}, chromospheric ribbons \citep{Sch87}, coronal loop restructuring \citep{Warren11} and oscillation \citep{Nak99}, large-scale coronal propagation fronts \citep{Dela08}, and driving of sympathetic eruptions \citep{SchrijverTitle11}. In addition to solar effects, flare-related irradiance enhancements \citep{Woods04}, solar energetic particles \citep[SEPs, ][]{MassonKlein09} and coronal mass ejections \citep[CMEs, ][]{Vour10} constitute major drivers for space weather, and are responsible for various environmental hazards at Earth \citep{Schwenn06,Pulk07}. For all these reasons, it would be desirable to know whether or not there is a maximum for solar flare energies, and if so, what its value is. On the one hand, detailed analyses of modern data from the past half-century imply that solar flare energies range from $10^{28}$ to $10^{33}$ ergs, with a power-law distribution that drops above $10^{32}$ ergs \citep{Schrij12}. The maximum value there corresponds to an estimate for the strongest directly-observed flare from Nov 4, 2003. Saturated soft X-ray observations showed that this flare was above the X28 class, and model interpretations of radio observations of Earth's ionosphere suggested that it was X40 \citep{Brod05}.
Due to the limited time range of these observations, it is unclear whether or not the Sun has been, or will be, able to produce more energetic events. For example, the energy content of the first-ever observed solar flare on Sept 1, 1859 \citep{Car59,Hod59} has been thoroughly debated \citep{Mc01,Tsu03,Cliv04,Wolff12}. On the other hand, precise measurements on unresolved active Sun-like stars have revealed the existence of so-called superflares, even in slowly-rotating and isolated stars \citep{Schae00,Mae12}. Their energies have been estimated to be between a few $10^{33}$ ergs and more than $10^{36}$ ergs. Unfortunately, it is still unclear whether or not the Sun can produce such superflares, among other reasons because of the lack of reliable information on the starspot properties of such stars \citep{Ber05,Stras09}. So as to estimate flare energies, a method complementary to observing solar and stellar flares is to use solar flare models, and to constrain their parameters using observational properties of active regions, rather than those of the flares themselves. In the present paper, we perform such an analysis. Since analytical approaches are typically oversimplified for such a purpose, numerical models are likely to be required. Moreover, incorporating observational constraints not only precludes the use of 2D models, but also restricts the choice to models that have already proven to match various solar observations to some acceptable degree. We use a zero-$\beta$ MHD simulation of an eruptive flare \citep{Aula10,Aula12} that extends the standard flare model in 3D.
Dedicated analyses of the simulation, as recalled hereafter, have shown that this model successfully reproduced the time-evolution and morphological properties of active region magnetic fields after their early emergence stage, of coronal sigmoids from their birth to their eruption, of spreading chromospheric ribbons and sheared flare loops, of tear-drop shaped CMEs, and of large-scale coronal propagation fronts. We scale the model to observed solar values as follows: we incorporate observational constraints known from previously reported statistical studies regarding the magnetic flux of active regions, as well as the area and magnetic field strength of sunspot groups. This method allows one to identify the maximum flare energy for realistic but extreme solar conditions, and to predict the size of giant starspot pairs that are required to produce superflares. \section{The eruptive flare model} \label{sec1} \subsection{Summary of the non-dimensionalized model} The eruptive flare model was calculated numerically, using the {\em observationally driven high-order scheme magnetohydrodynamic} code \citep[OHM:][]{Aula05a}. The calculation was performed in the pressureless resistive MHD approximation, using non-dimensionalized units, in a $251 \times 251 \times 231$ non-uniform Cartesian mesh. Its uniform resistivity resulted in a magnetic Reynolds number of $R_m\sim10^3$. The simulation settings are thoroughly described in \citet{Aula10,Aula12}. In the model, the flare resulted from magnetic reconnection occurring at a nearly vertical current sheet, gradually developing in the wake of a coronal mass ejection. The reconnection led to the formation of ribbons and flare loops \citep{Aula12}. The CME itself was triggered by the ideal loss-of-equilibrium of a weakly twisted coronal flux rope \citep{Aula10}, corresponding to the torus instability \citep{KliTor06,DemAula10}.
During the eruption, a coronal propagation front developed at the edges of the expanding sheared arcades surrounding the flux rope \citep{Schrijver11}. Before it erupted, the flux rope and a surrounding sigmoid were progressively formed in the corona \citep{Aula10,Sav12}, above a slowly shearing and diffusing photospheric bipolar magnetic field. This pre-eruptive evolution was similar to that applied in past symmetric models \citep{vanBalle89,Amari03b}, which matched observations and simulations for active regions during their late flux emergence stage and their subsequent decay phase \citep[e.g.][]{vanDriel03,Arch04,Green11}. The magnetic field geometry of the modeled eruptive flare is shown in Fig.~\ref{fig1}. The left panel clearly shows the asymmetry of the model. A $27\%$ flux imbalance in the photosphere, in favor of the positive polarity, manifests itself as open magnetic field lines rooted in the positive polarity, at the side of the eruption. This asymmetry was set in the model so as to reproduce typical solar active regions, with a stronger (resp. weaker) leading (resp. trailing) polarity. In the right panels, the field of view corresponds to the size of the magnetic bipole $L^{\mbox{\rm{\tiny bipole}}}$, as used for physical scaling hereafter. If one assumes a sunspot field of $B_z^{\mbox{\rm{\tiny max}}}=3500$ G, then the isocontours that cover the widest areas correspond to $B_z^{\mbox{\rm{\tiny max}}}/5 =\pm 700$ G. Since this is the minimum magnetic field value for sunspot penumbrae \citep{Solanki06}, those isocontours correspond to the outer edge of the modeled sunspots. With these settings, the total sunspot area in the model is about half of the area of the field of view being shown in Fig.~\ref{fig1}, {\em right}. So with $B_z^{\mbox{\rm{\tiny max}}}=3500$ G the sunspot area is $f^{-1}\, (L^{\mbox{\rm{\tiny bipole}}})^2$, with $f\sim2$, while a lower value for $B_z^{\mbox{\rm{\tiny max}}}$ implies a higher value for $f$.
During the pre-eruptive energy storage phase, the combined effects of shearing motions and magnetic field diffusion in the photosphere eventually resulted in the development of magnetic shear along the polarity inversion line, over a length of about $L^{\mbox{\rm{\tiny bipole}}}$. This long length presumably brings the modeled flare energy close to its maximum possible value, given the distribution of photospheric flux \citep{Fal08,Moore12}. \subsection{Physical scalings} \begin{figure*} \sidecaption \includegraphics[width=12cm]{aulanier_f1} \caption{ Eruptive flare model. {\em [left:]} Projection view of randomly plotted coronal magnetic field lines. The grayscale corresponds to the vertical component of the photospheric magnetic field $B_z$. {\em [right:]} Photospheric bipole viewed from above. The pink (resp. cyan) isocontours stand for positive (resp. negative) values of $B_z^{\mbox{\rm{\tiny max}}}/1.1, 2, 3, 4, 5$. The yellow isocontour shows the polarity inversion line $B_z=0$. {\em [right-top:]} The grayscale for $B_z$ is the same as in the left panel. {\em [right-bottom:]} The grayscale shows the vertical component of the photospheric electric currents. Strong elongated white/black patches highlight flare ribbons. The red lines show representative post-reconnection flare loops, rooted in the flare ribbons. } \label{fig1} \end{figure*} The MHD model was calculated in a wide numerical domain of size $20\times20\times30$, with a magnetic permeability $\mu=1$, using dimensionless values $B_z^{\mbox{\rm{\tiny max}}}=8$ in the dominant polarity, and $L^{\mbox{\rm{\tiny bipole}}}=5$. These settings resulted in a dimensionless photospheric flux inside the dominant polarity of $\phi=42$ \citep{Aula10}, and a total pre-eruptive magnetic energy of $E^{\mbox{\rm{\tiny bipole}}}=225$. Throughout the simulation, a magnetic energy of $E^{\mbox{\rm{\tiny model}}} =19\%\, E^{\mbox{\rm{\tiny bipole}}}=42$ was released.
Only $5\%$ of this amount was converted into the kinetic energy of the CME. These numbers have been presented and discussed in \citet{Aula12}. The remaining $95\%\, E^{\mbox{\rm{\tiny model}}}$ of the magnetic energy release can then be attributed to the flare energy itself. It must be pointed out that the simulation did not cover the full duration of the eruption. Indeed, numerical instabilities eventually prevented us from pursuing it with acceptable diffusion coefficients. Nevertheless, the rate of magnetic energy decrease had started to drop before the end of the simulation, and the electric currents within the last reconnecting field lines were relatively weak. On the one hand, this means that the total energy release $E^{\mbox{\rm{\tiny model}}}$ is expected to be slightly higher, but presumably not by much. On the other hand, the relatively low $R_m$ value of the simulation implies that some amount of $E^{\mbox{\rm{\tiny model}}}$ should be attributed to large-scale diffusion, rather than to the flare reconnection. Because of these numerical concerns, we consider thereafter that the flare energy in the model was about $E=40$, but this number should not be taken as being precise. Also, within the pressureless MHD framework of the simulation, the model cannot address which part of this energy is converted into heating, and which remaining part results in particle acceleration. It is straightforward to scale the model numbers given above into physical units.
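The dimensionless energy budget quoted above can be summarized in a few lines of bookkeeping; the following Python sketch (an illustration only, not part of the simulation) shows how the quoted percentages round to the values used hereafter.

```python
# Dimensionless (code-unit) energy budget of the simulation, as quoted above.
E_bipole = 225.0            # total pre-eruptive magnetic energy
E_model = 0.19 * E_bipole   # released magnetic energy, quoted as ~42
E_kin = 0.05 * E_model      # CME kinetic energy, ~2
E_flare = 0.95 * E_model    # remainder attributed to the flare, quoted as E ~ 40
print(E_model, E_kin, E_flare)
```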
In the international system of units (SI), $\mu=4\pi\times10^{-7}$, the total magnetic flux $\phi$ and the total flare energy $E$ can then be written as \begin{eqnarray} \label{eqflux1} \phi &=& 42\, \Bigg{(}\frac{B_z^{\mbox{\rm{\tiny max}}}}{8~{\mbox{\rm T}}}\Bigg{)} \, \Bigg{(}\frac{L^{\mbox{\rm{\tiny bipole}}}}{5~{\mbox{\rm m}}}\Bigg{)}^2 \, {\mbox{\rm Wb}}\, , \\ \label{eqenergy1} E &=& \frac{40}{\mu}\, \Bigg{(}\frac{B_z^{\mbox{\rm{\tiny max}}}}{8~{\mbox{\rm T}}}\Bigg{)}^2 \, \Bigg{(}\frac{L^{\mbox{\rm{\tiny bipole}}}}{5~{\mbox{\rm m}}}\Bigg{)}^3 \, {\mbox{\rm J}}\, . \end{eqnarray} Rearranging these equations into commonly used solar units leads to: \begin{eqnarray} \label{eqflux} \phi &=& 0.52\times10^{22}\, \Bigg{(}\frac{B_z^{\mbox{\rm{\tiny max}}}}{{10^3~\mbox{\rm G}}}\Bigg{)} \, \Bigg{(}\frac{L^{\mbox{\rm{\tiny bipole}}}}{50~{\mbox{\rm Mm}}}\Bigg{)}^2 \, {\mbox{\rm Mx}}\, , \\ \label{eqenergy} E &=& 0.5\times10^{32}\, \Bigg{(}\frac{B_z^{\mbox{\rm{\tiny max}}}}{{10^3~\mbox{\rm G}}}\Bigg{)}^2 \, \Bigg{(}\frac{L^{\mbox{\rm{\tiny bipole}}}}{50~{\mbox{\rm Mm}}}\Bigg{)}^3 \, {\mbox{\rm erg}}\, . \end{eqnarray} While the power-law dependences in these equations come from the definitions of flux and energy, the numbers themselves directly result from the MHD simulation, and not from simple order of magnitude estimates. So Eqs.~(\ref{eqflux}) and (\ref{eqenergy}) enable us to calculate the model predictions for a wide range of photospheric magnetic fields and bipole sizes. The results are plotted in Fig.~\ref{fig2}. In this figure, the right vertical axis is the total sunspot area within the model, being given by $f^{-1}\, (L^{\mbox{\rm{\tiny bipole}}})^2$ using $f=2$. It is expressed in micro solar hemispheres \citep[hereafter written MSH as in][although other notations can be found in the literature]{Baumann05}. Hereafter all calculated energies (resp. fluxes) will almost always be given in multiples of $10^{32}$ ergs (resp.
$10^{22}$ Mx), for easier comparison between different values. Typical decaying active regions with $L^{\mbox{\rm{\tiny bipole}}}=200$ Mm, which contain faculae of $B_z^{\mbox{\rm{\tiny max}}}=100$ G, have $\phi=0.8\times10^{22}$ Mx and can produce moderate flares of $E=0.3\times10^{32}$ ergs. Also, $\delta$-spots with $L^{\mbox{\rm{\tiny bipole}}}=40$ Mm and $B_z^{\mbox{\rm{\tiny max}}}=1500$ G have a lower magnetic flux $\phi=0.5\times10^{22}$ Mx, but can produce flares twice as strong, with $E=0.6\times10^{32}$ ergs. These energies for typical solar active regions are in good agreement with those estimated from the total solar irradiance (TSI) fluence of several observed flares \citep{Kret11}. Other parameters can result in more or less energetic events. For example one can scale the model to the sunspot group from which the 2003 Halloween flares originated. Firstly, one can overplot our Fig.~\ref{fig1}, {\em right}, onto the center of Fig.~2 in \citet{Schrijver06} and thus find an approximate size of the main bipole which is involved in the flare, out of the whole sunspot group. This gives a bipole size of the order of $L^{\mbox{\rm{\tiny bipole}}} \sim 65$ Mm. Secondly, observational records lead to a peak sunspot magnetic field of $B_z^{\mbox{\rm{\tiny max}}}=3500$ G \citep{Living12}. These scalings lead to $\phi=3\times10^{22}$ Mx and $E=13\times10^{32}$ ergs. The modeled $\phi$ is about one third of the flux of the dominant polarity as measured in the whole active region \citep{Kaza10}. Comparing this modeled flare energy $E$ with that of extreme solar flares that originated from this same active region, we find that it is twice as strong as that of the Oct 28, 2003 X17 flare \citep{Schrij12}, and about the same as that of the Nov 4, 2003 X28-40 flare, as can be estimated from \citet{Kret11} and \citet[][Eq.~1]{Schrij12}.
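As a sanity check, the scalings of Eqs.~(\ref{eqflux}) and (\ref{eqenergy}) are simple enough to be evaluated numerically. The short Python sketch below (an illustration, not part of the model) reproduces the example values quoted above for a decaying active region, a $\delta$-spot, and the Halloween-flare bipole.

```python
# Scaling relations with prefactors taken directly from the MHD simulation.
def magnetic_flux(b_gauss, l_mm):
    """Flux in the dominant polarity [Mx], following Eq. (eqflux)."""
    return 0.52e22 * (b_gauss / 1e3) * (l_mm / 50.0) ** 2

def flare_energy(b_gauss, l_mm):
    """Magnetic energy released by the flare [erg], following Eq. (eqenergy)."""
    return 0.5e32 * (b_gauss / 1e3) ** 2 * (l_mm / 50.0) ** 3

# Decaying active region: 100 G faculae over a 200 Mm bipole.
print(magnetic_flux(100.0, 200.0) / 1e22)   # ~0.8 (10^22 Mx)
print(flare_energy(100.0, 200.0) / 1e32)    # ~0.3 (10^32 erg)

# Delta-spot: 1500 G over a 40 Mm bipole.
print(magnetic_flux(1500.0, 40.0) / 1e22)   # ~0.5
print(flare_energy(1500.0, 40.0) / 1e32)    # ~0.6

# Halloween-flare bipole: 3500 G over ~65 Mm.
print(flare_energy(3500.0, 65.0) / 1e32)    # ~13
```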
\section{Finding the upper limit on flare energy} \label{sec3} \begin{figure*} \centering \includegraphics[width=\textwidth]{aulanier_f2} \caption{ Magnetic flux in the dominant polarity of the bipole, and magnetic energy released during the flare, calculated as a function of the maximum magnetic field and the size of the photospheric bipole. The $\times$ and $+$ signs correspond to extreme solar values. The former is unrealistic and the latter must be very rare (see text for details). } \label{fig2} \end{figure*} \subsection{Excluding unobserved regions in the parameter space} We indicate in Fig.~\ref{fig2} the minimum and maximum sunspot magnetic fields as measured from spectro-polarimetric observations since 1957. They are respectively $700$ G in the penumbra, and $3500$ G in the umbra \citep{Solanki06,Pevtsov11}. The latter value is an extreme that has rarely been reported in sunspot observations, and it typically is observed in association with intense flaring activity \citep{Living12}. We also indicate the maximum area of sunspot groups, including both the umbras and the penumbras. They were measured from 1874 to 1976 \citep{Baumann05} and from 1977 to 2007 \citep{Hat08}. These sizes follow a log-normal distribution up to $3000$ MSH, but there are a few larger groups. The largest one was observed in April 1947, and its area was about $5400-6000$ MSH \citep{Nic48,Taylor89}. For illustration, we provide in Fig.~\ref{fig3} one image of this sunspot group and one of its surrounding faculae and filaments, as observed with the Meudon spectroheliograph. Interestingly, this sunspot group did not generate strong geomagnetic disturbances. This could either be due to a lack of strong enough magnetic shear in the filaments which were located between the sunspots, or to the lack of Earth-directed CMEs that could have been launched from this region. However, several other large sunspot groups, whose areas were at least $3500$ MSH, did generate major geomagnetic storms.
Among those are the March 1989 event, which led to the Quebec blackout \citep{Taylor89}, and the December 1128 event, which produced aurorae in Asia and which corresponds to the first reported sunspot drawing \citep{Willis01}. Therefore, we conservatively keep $6000$ MSH as the maximum value. The 1874-2007 dataset does not include the first observed flare, in September 1859. Nevertheless, \citet{Hod59} reported that the size of the sunspot group associated with this event was about $96$ Mm, and one can estimate from the drawing of \citet{Car59} that its total area was smaller than $6000$ MSH. The point marked by a thick $\times$ sign in Fig.~\ref{fig2} is defined by the intersection of the $3500$ G and the $6000$ MSH lines. The model states that its magnetic flux is $\phi=27\times10^{22}$ Mx. This modeled value is much higher than $8\times10^{22}$ Mx, which corresponds both to the dominant polarity for the Halloween flares \citep{Kaza10} and to the highest flux measured for single active regions, as observed during a sample of time-periods between 1998 and 2007 \citep{Parnell09}. The modeled flux for this largest sunspot group is nevertheless consistent with the maximum value of $20\times10^{22}$ Mx for an active region, as reported by \citet{Zhang10} in a very extensive survey, ranging from 1996 to 2008. It remains difficult to estimate the highest active region flux which ever occurred. Firstly, no magnetic field measurement is available for the April 1947 sunspot group. Secondly, the automatic procedure of \citet{Zhang10} can lead several active regions to be grouped into an apparently single region, while the method of \citet{Parnell09} in contrast tends to fragment active regions into several pieces. For reference, we therefore overplotted both the $\phi=8\times10^{22}$ and $2\times10^{23}$ Mx values in Fig.~\ref{fig2}.
The flare energy at the point $\times$, where the magnetic field and size of sunspot groups take their extreme values, is $E^{\times}=340\times 10^{32}$ ergs. This could a priori be considered as the maximum possible energy of a solar flare. In addition, it falls within the range of stellar superflare energies \citep{Mae12}. Nevertheless, we argue below that this point is unrealistic for observed solar conditions. \subsection{Taking into account the fragmentation of flux} \begin{figure*} \centering \includegraphics[width=.84\textwidth]{aulanier_f3} \caption{ The largest sunspot group ever reported since the end of the nineteenth century, as observed on April 5, 1947 in Ca~{\sc ii} K1v ({\em left}) and H$\alpha$ ({\em right}) by the Meudon spectroheliograph. } \label{fig3} \end{figure*} All large sunspot groups are highly fragmented, and display many episodes of flux emergence and dispersal. We argue that this fragmentation is the reason why scaling the model to the whole area of the largest sunspot group leads to an overestimate of the maximum flare energy. Firstly, sunspot groups incorporate several big sunspots, ranging from a few spots \citep[see e.g.][for February 2011]{Schrijver11} to half a dozen \citep[see e.g.][for September 1859 and October 2003 respectively]{Car59,Schrijver06} and up to more than ten \citep[see e.g.][for March 1989 and April 1947 respectively; see also Fig.~\ref{fig3}]{Wang91,Nic48}. Secondly, these groups typically have a magnetic flux imbalance \citep[e.g. $23\%$ for the October 2003 sunspot group][]{Kaza10}, because they often emerge within older active regions. This naturally creates new magnetic connections to distant regions on the Sun, in addition to possibly pre-existing ones. Thirdly, the magnetic shear tends to be concentrated along some segments only of the polarity inversion lines of a given group \citep{Fal08}.
This is also true for the April 1947 sunspot group, as evidenced by the complex distribution of small filaments (see Fig.~\ref{fig3}). This means that a given sunspot group is never energized as a whole. These three observational properties are actually consistent with the solar convection-driven breaking of large sub-photospheric flux tubes into a series of smaller deformed structures, as found in numerical simulations \citep{Fan03,JouveSub}. They show that these deformed structures should eventually emerge through the photosphere as grouped but distinct magnetic bipoles. These different bipoles should naturally possess various degrees of magnetic shear, and should not be fully magnetically connected to each other in the corona. So both observational and theoretical arguments suggest that only a few sunspots from a whole sunspot group should be involved in a given flare. Unfortunately, the fraction of area to be considered, and to be compared with the size of the bipole in the model, is difficult to estimate. Consider the Oct-Nov 2003 flares, for example. Our estimation of $L^{\mbox{\rm{\tiny bipole}}} \sim 65$ Mm, as given above, results in a modeled sunspot area of $700$ MSH (see Fig.~\ref{fig2}). This is about $27\%$ of the maximum area measured for the whole sunspot group, which peaked at $2600$ MSH on Oct 31. Another way to estimate this fraction is to measure the ratio between the magnetic flux swept by the flare ribbons, and that of the whole active region. \citet{Qiu07} and \citet{Kaza12} reported ratios of $25\%$ and $31\%$ for the Oct 28 flare, respectively. The same authors also reported on a dozen other events, for which one can estimate ratios ranging between $10\%$ and $30\%$, on average. These considerations lead us to conjecture that at most $30\%$ of the area of the largest observed sunspot group, as reported by \citet{Nic48} and \citet{Taylor89}, i.e. a maximum of $1800$ MSH, can be involved in a flare.
This is more than $2.5$ times the area of the bipole involved in the Halloween flares. In Fig.~\ref{fig2}, we therefore plot another point indicated by a thick $+$ sign, located at the intersection of the $3500$ G and the $1800$ MSH lines. In the model, this corresponds to $L^{\mbox{\rm{\tiny bipole}}}=105$ Mm. The flare energy at this point is $E^{+}=56\times 10^{32}$ ergs. Under the assumptions of the model, and considering that it probably corresponds to the most extreme observed solar conditions, $E^{+}$ should correspond to the upper limit on solar flare energy. \subsection{Numerical concerns} As for all numerical models, various limitations could play a role in changing the estimated maximum flare energy $E^{+}$. We mentioned above that the simulation did not cover the full duration of the eruption, because some numerical instabilities eventually developed. On the one hand, this means that our flare energies are slightly underestimated. But on the other hand, the low $R_m$ must lead to a weak large-scale diffusion. It should not be very strong, however, since the characteristic diffusion time at the scale of the modeled bipole can be estimated as $150$ times the duration of the simulation. Still, it ought to take away some fraction of the magnetic energy released during the simulation, so that our flare energies are slightly overestimated. Quantifying the relative importance of both effects is unfortunately hard to achieve. Moreover, applying a different spatial distribution of shear during the pre-flare energy storage phase could lead to a different amount of energy release \citep{Fal08}. But in our model, the shearing motions were extended all along the polarity inversion line in the middle of the flux concentrations. Therefore we argue that it will be difficult for different settings to produce significantly higher flare energies. Another concern is that our simulation produces a CME kinetic energy which is only $5\%$ of the flare energy.
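The correspondence between a sunspot-group area in MSH and the bipole size $L^{\mbox{\rm{\tiny bipole}}}$ used in Eq.~(\ref{eqenergy}) follows from the covering factor $f\sim2$ of the model (spot area $=f^{-1}\,(L^{\mbox{\rm{\tiny bipole}}})^2$) and from the definition $1~{\rm MSH}=10^{-6}\times 2\pi R_\odot^2$. The following Python sketch (an illustration, assuming $R_\odot=696$ Mm) recovers the two extreme points of Fig.~\ref{fig2}.

```python
import math

R_SUN_MM = 696.0                                  # solar radius [Mm] (assumed)
MSH_MM2 = 1e-6 * 2.0 * math.pi * R_SUN_MM ** 2    # 1 MSH ~ 3.04 Mm^2

def bipole_size(area_msh, f=2.0):
    """Bipole size L [Mm] whose spot area L^2 / f covers area_msh MSH."""
    return math.sqrt(f * area_msh * MSH_MM2)

def flare_energy(b_gauss, l_mm):
    """Released flare energy [erg], following Eq. (eqenergy)."""
    return 0.5e32 * (b_gauss / 1e3) ** 2 * (l_mm / 50.0) ** 3

# 'x' point: 3500 G combined with the largest reported group area, 6000 MSH.
L_x = bipole_size(6000.0)                 # ~190 Mm
E_x = flare_energy(3500.0, L_x) / 1e32    # ~340 (10^32 erg)

# '+' point: only 30% of that area (1800 MSH) involved in the flare.
L_p = bipole_size(1800.0)                 # ~105 Mm
E_p = flare_energy(3500.0, L_p) / 1e32    # ~56
print(L_x, E_x, L_p, E_p)
```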
But current observational energy estimates imply that the kinetic energy of a CME can be the same as \citep{Emslie05}, and up to three times higher than \citep{Emslie12}, the bolometric energy of its associated flare. This strong discrepancy cannot be attributed to the fact that our simulation was limited in time. Indeed, other 3D (resp. 2.5D) MHD models calculated by independent groups and codes predict that no more than $10\%$ (resp. $30\%$) of the total released magnetic energy is converted into the CME kinetic energy \citep{Amari03b,Jaco06,Lynch08,Ree10}. This means that it is unclear whether the relatively weaker CME kinetic energy in our model should be attributed to observational biases, or to numerical problems commonly shared by several groups and codes. In principle, the validity of the model can also be questioned because magnetic reconnection is ensured by resistivity, with a relatively low magnetic Reynolds number $R_m$ as compared to that of the solar corona. This may lead to different reconnection rates from those found in collisionless reconnection simulations \citep[see e.g.][]{Aunai11}. The reconnection rate is indeed important for the flare energy release in fully three-dimensional simulations of solar eruptions. In principle, slower (resp. faster) reconnection releases smaller (resp. larger) amounts of magnetic energy per unit time. Nevertheless, one might argue that the time-integrated energy release, during the whole flare, might not be very sensitive to the reconnection rate. However, the energy content which is available at a given time, within a given pair of pre-reconnecting magnetic field lines, strongly depends on how much time these field lines have had to stretch ideally \citep[as described in][]{Aula12}, and thus by how much their magnetic shear has decreased before they reconnect. This explains why the time-evolution of the eruption makes the reconnection rate important for the time-integrated energy release.
In our simulation, we measure the reconnection rate from the average Mach number $M$ of the reconnection inflow. During the eruption, it increases in time from $M\sim0.05$ to $M\sim0.2$ approximately. These reconnection rates are fortunately comparable to those obtained for collisionless reconnection. So we conjecture that the limited physics inside our modeled reconnecting current sheet should not have drastic consequences for the flare energies. Nevertheless, it should be noted that this result probably does not hold for other resistive MHD simulations that use very different $R_m$. We expect that our results are not extremely sensitive to these numerical concerns: the orders of magnitude that we find for flare energies are likely to be correct. But it is difficult at present to assert that we estimate flare energies with a precision better than several tens of percent, or even more. Therefore we conservatively round up the upper value $E^{+}$ to $6 \times10^{33}$ ergs. In the future, data-driven simulations which can explore the parameter space and which incorporate more physics will have to be developed to fine-tune the present analyses. \section{Summary and discussion} \label{secsum} So as to estimate the maximum possible energy of a solar flare, we used a dimensionless numerical 3D MHD simulation for solar eruptions \citep{Aula10,Aula12}. We had previously shown that this model successfully matches the observations of active region magnetic fields, of coronal sigmoids, of flare ribbons and loops, of CMEs, and of large-scale propagation fronts. We scaled the model parameters to physical values. Typical solar active region parameters resulted in typically observed magnetic fluxes \citep{Parnell09,Zhang10} and flare energies \citep{Kret11,Schrij12}. We then scaled the model using the largest measured sunspot magnetic field \citep{Solanki06}, and the area of the largest sunspot group ever reported, which developed in March-April 1947 \citep{Nic48,Taylor89}.
In addition, we took into account that observations show that large sunspot groups are always fragmented into several spots, and are never involved in a given flare as a whole. This partitioning can presumably be attributed to sub-photospheric convective motions. Since those motions are always present because of the solar internal structure \citep{Brun02}, it is difficult to imagine that the Sun will ever produce a large sunspot group consisting of a single pair of giant sunspots. Based on some approximate geometrical and reconnected magnetic flux estimations, we considered that only $30\%$ of the area of a given sunspot group can be involved in a flare. Keeping in mind the assumptions and limitations of the numerical model, these scalings resulted in a maximum flare energy of $\sim 6 \times10^{33}$ ergs. This is ten times the energy of the Oct 28, 2003 X17 flare, as reported in \citet{Schrij12}. In addition, this value is about six times higher than the maximum energy in TSI fluence that can be estimated from the SXR fluence of the Nov 4, 2003 X28-40 flare, using the scalings given by \citet{Kret11} and \citet{Schrij12}. Finally, it lies in the energy range of the weakest superflares that were reported by \citet{Mae12} for numerous slowly-rotating and isolated Sun-like stars. But it is several orders of magnitude smaller than that of strong stellar superflares. One could ask how frequently the Sun can produce a maximum flare like this. Observational records since 1874 reveal that the area of sunspot groups follows a sharp log-normal distribution \citep{Baumann05,Hat08}. Unfortunately, the statistics for sunspot groups larger than $3000$ MSH in area are too poor to estimate whether or not this distribution is valid up to $6000$ MSH. In addition, neither do all active regions or sunspot groups generate flares, nor do they always generate them at the maximum energy, as calculated by the model.
The reason must be that a solar eruption requires strong magnetic shear along a polarity inversion line, and current observations show that this does not occur in all solar active regions \citep{Fal08}. Consequently it is currently difficult to estimate the probability of appearance of the strongest flare that we found. We can only refer to \citet{Baumann05} and \citet{Hat08}, who reported that the size of sunspot groups follows a clear log-normal distribution up to $3000$ MSH, and to \citet{Cliv04} and \citet{Schrij12}, who argue that this upper limit on flare energy has never been reached in any observed solar flare, even including the Carrington event of Sept 1859. \begin{figure} \centering \includegraphics[width=0.405\textwidth,clip]{aulanier_f4} \caption{ Schematic representation of several modeled sunspot pairs on the solar disk, with their corresponding modeled flare energies. Note that our estimates imply that, on the real Sun, a given pair will often be embedded in a much larger sunspot group, from which only the bipole that is shown here will be involved in the flare. } \label{fig4} \end{figure} When the model is scaled to the strongest measured sunspot magnetic field, i.e. 3.5~kG, it can be used to calculate the size of the sunspot pair that is required to generate solar flares of various energies. We plot those in Fig.~\ref{fig4}. These scalings can also be used to relate stellar superflares to starspot sizes. But it should be noted that starspot magnetic fields are still difficult to measure reliably, and that current estimates put them in the range of $2-5$ kG \citep{Ber05}. With these scalings, a superflare of $10^{36}$ ergs requires a very large single pair of spots, whose extent is $48^\circ$ in longitude/latitude, at the surface of a Sun-like star. While such spots have been observed indirectly in non-Sun-like stars as well as in young fast-rotating Sun-like stars \citep{Ber05,Stras09}, they have never been reported on the Sun.
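Inverting Eq.~(\ref{eqenergy}) gives the bipole size required for a given flare energy, and dividing by $R_\odot$ converts it into an angular extent on the stellar disk. The Python sketch below (an illustration, assuming $R_\odot=696$ Mm and a $3.5$ kG spot field) recovers the $\sim48^\circ$ extent quoted above for a $10^{36}$ erg superflare.

```python
import math

R_SUN_MM = 696.0   # solar radius [Mm] (assumed)

def bipole_for_energy(e_erg, b_gauss):
    """Bipole size L [Mm] needed to release e_erg, inverting Eq. (eqenergy)."""
    return 50.0 * (e_erg / (0.5e32 * (b_gauss / 1e3) ** 2)) ** (1.0 / 3.0)

L = bipole_for_energy(1e36, 3500.0)       # ~590 Mm
extent_deg = math.degrees(L / R_SUN_MM)   # ~48 degrees on the stellar disk
print(L, extent_deg)
```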
\section{Conclusion} \label{secccl} We combined a numerical magnetohydrodynamic model for solar eruptions calculated with the OHM code and historical sunspot observations starting from the end of the nineteenth century. We concluded that the maximum energy of solar flares is about six times that of the strongest-ever directly-observed flare of Nov 4, 2003. One unaddressed question is whether or not the current solar convective dynamo can produce much larger sunspot groups, as required to produce even stronger flares according to our results. This seems unlikely, since such giant sunspot groups ``have not been recorded in four centuries of direct scientific observations and in millennia of sunrises and sunsets viewable by anyone around the world'', to quote \citet{Schrij12}. It can thus reasonably be assumed that, during the most recent few billion years while on the main sequence, the Sun has never produced, and never will produce, a flare more energetic than this upper limit. We thus conjecture that one condition for Sun-like stars to produce superflares is to host a dynamo that is much stronger than that of an aged Sun with a rotation period exceeding several days. On the one hand, our results suggest that we have not experienced the largest possible solar flare. But on the other hand, and unless the dynamo theory proves otherwise, our results also provide an upper limit for extreme space weather conditions, which does not exceed by much those related to past observed flares. \begin{acknowledgements} The MHD calculations were done on the quadri-core bi-Xeon computers of the Cluster of the Division Informatique de l'Observatoire de Paris (DIO). The historical Meudon spectroheliograph observations were digitized by I.~Bual\'e, and are available in the BASS2000 database. The work of MJ is funded by a contract from the AXA Research Fund. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} \label{sec:intro} It has long been found that Fanaroff-Riley type I radio galaxies (FRIs) are edge-darkened, while Fanaroff-Riley type II radio galaxies (FRIIs) are edge-brightened (Fanaroff \& Riley \cite{1974MNRAS.167P..31F}). For a given host galaxy luminosity, FRIs have lower radio luminosities than FRIIs (Owen \& Ledlow \cite{1994ASPC...54..319O}). The primary reason for this difference is still not clear. There are two scenarios to explain it: either different physical conditions in the ambient medium (Gopal-Krishna \& Wiita \cite{2000A&A...363..507G}), or a difference in the central engines, i.e., different accretion modes and/or jet formation processes (Ghisellini \& Celotti \cite{2001A&A...379L...1G}). About three decades ago, two different types of central engine were recognized based on analyses of powerful radio sources. Many powerful objects have strong optical and ultraviolet continua, for which one invokes copious and radiatively efficient accretion flows (quasars and broad line radio galaxies) (Begelman et al. \cite{1984RvMP...56..255B}). But many double radio sources, e.g. Cygnus A, lack this radiative signature, which can perhaps instead be explained by the mechanism of Blandford \& Znajek (\cite{1977MNRAS.179..433B}) for the electromagnetic extraction of the rotational energy of the black hole. Later on, using spectropolarimetric observations, quasar spectra were discovered in many radio galaxies (Antonucci \cite{1984ApJ...278..499A}), suggesting that all radio galaxies and radio quasars are powered by a similar central engine. While hidden quasars were detected in many radio galaxies, in some cases they were not (Singal \cite{1993MNRAS.262L..27S}).
Quasars hidden by dusty gas will re-radiate their absorbed energy in the infrared; therefore, extensive infrared spectroscopic observations were made to search more robustly for the hidden quasars in radio galaxies (e.g., Ogle et al. \cite{2006ApJ...647..161O}; Haas et al. \cite{2005A&A...442L..39H}; Leipski et al. \cite{2009ApJ...701..891L}, etc). The targets were selected by their diffuse radio flux density, to minimize any orientation biases. The Spitzer observations indicate that there are two types of central engine for radio galaxies, which do not correlate exactly with FR class. Ogle et al. (2006) showed that about half of the narrow-line FRII radio galaxies have a mid-IR luminosity at 15 $\rm \mu m$ of $\rm > 8\times10^{43}erg~s^{-1}$, indicating strong thermal emission from hot dust in the active galactic nucleus, just like the matched quasars. However, they also found that the other half do not. These MIR-weak sources do not contain a powerful accretion disk, and they may be fit with nonthermal, jet-dominated AGNs, where the jet is powered by a radiatively inefficient accretion flow or black hole spin energy, rather than energy extracted from an accretion disk. The mismatch with FR class was also found in FRIs. Leipski et al. (2009) reported that most FRIs lack a powerful type-1 AGN, but it is not tenable to generalize on associations between FRI galaxies and nonthermal-only AGNs, and a fraction of FRIs do have warm dust emission which could be attributed to hidden type-1 nuclei (Antonucci \cite{2002apsp.conf..151A}). The different central engine types in radio galaxies can be constrained from their IR luminosity. On the scales where the relativistic jets are produced, the central engines may differ deeply, which calls for investigations to understand how the accretion mode affects the innermost radio emission. Very long baseline interferometry (VLBI) is one of the most powerful tools to probe the jet properties at pc scales.
In this work, we combine VLBA and mid-infrared observations for a sample of radio galaxies to study the relation between the accretion process and pc-scale jet properties. Our sample is described in Section \ref{sec:2}, and the VLBA and MIR data in Section \ref{sec:3}. We present the results and discussion in Section \ref{sec:4}, and the conclusions in Section \ref{sec:5}. Throughout the paper, we use a cosmology with $\rm H_0 = 70~km~ s^{-1}\rm Mpc^{-1}$, $\Omega_{\rm m} = 0.30$, $\Omega_\Lambda = 0.70$. The spectral index $\alpha$ is defined as $f_{\nu}\propto\nu^{\alpha}$, in which $f_{\nu}$ is the flux density at frequency $\nu$. \section{Sample} \label{sec:2} To systematically study the relationship between the accretion mode and pc-scale jet properties, we selected a sample from the 3CRR\footnote{http://3crr.extragalactic.info/} catalogue (Laing et al. \cite{1983MNRAS.204..151L}). There are 173 sources in the 3CRR catalogue, including 43 quasars, 10 broad-line radio galaxies, and 120 narrow-line radio galaxies. The original 3CRR catalogue has a flux limit of 10 Jy at 178 MHz, and is the canonical low-frequency selected catalogue of bright radio sources. From the 3CRR sample, MIR observations have been well studied for a well-defined, radio flux-limited subsample of 50 radio galaxies with a flux density at $178$ MHz $>16.4$ Jy and a 5 GHz VLA core flux density $\geq$ 7 mJy (e.g. Ogle et al. \cite{2006ApJ...647..161O}; Haas et al. \cite{2005A&A...442L..39H}; Leipski et al. \cite{2009ApJ...701..891L}). The MIR emission enables us to explore the existence of hidden quasars, thus we use this subsample in our study. We carefully searched for VLBI observations of all these 50 objects, and found that 27 sources had already been observed with the VLBA. We observed the remaining 23 targets with the VLBA at 5 GHz. In two objects, the poor $uv$ data preclude us from making good images.
Moreover, for three of the 27 sources with archival VLBA observations, the VLBA data are not useful for making final images. After excluding these five sources, the final sample consists of 45 radio galaxies with MIR detections and VLBA observations either by us or from the archive. The essential information of the sample is listed in Table \ref{tab_1}, in which 30 sources are FRIIs, 11 sources are FRIs, and the remaining 4 sources are core-dominated (Laing et al. \cite{1983MNRAS.204..151L}). \begin{center} \begin{footnotesize} \setlength{\tabcolsep}{2pt} \begin{longtable}{lcccccccccccr} \caption{Sample of 3CRR radio galaxies.\label{tab_1}}\\ \hline\hline name &alias &ID & $z$ & FR &log $M_{\rm BH}$ & D & $f_{\textrm{VLA}}$ & $f_{\textrm{178}}$& $f_{\textrm{MIR}}$ &log $L_{\textrm{bol}}$ & Calibrator & Distance \\ & & & & & ($\rm M_{\odot}$)& ($\rm Mpc$)& ($\rm mJy$) & ($\rm Jy$) & ($\rm mJy$) & ($\rm erg~s^{-1}$) & & ($\rm deg$) \\ (1) & (2) & (3)& (4) & (5) & (6) & (7) &(8)&(9)& (10)& (11) &(12)&(13)\\ \endfirsthead \caption{Continued.}\\ \hline\hline name &alias &ID & $z$ & FR &log $M_{\rm BH}$ & D & $f_{\textrm{VLA}}$ & $f_{\textrm{178}}$& $f_{\textrm{MIR}}$ &log $L_{\textrm{bol}}$ & Calibrator & Distance \\ & & & & & ($\rm M_{\odot}$)& ($\rm Mpc$)& ($\rm mJy$) & ($\rm Jy$) & ($\rm mJy$) & ($\rm erg~s^{-1}$) & & ($\rm deg$) \\ (1) & (2) & (3)& (4) & (5) & (6) & (7) &(8)&(9)& (10)& (11) &(12)&(13)\\ \hline \endhead \hline \caption{continued on next page} \endfoot \hline \endlastfoot \hline 3C 31 & UGC00689 & Bo015 & 0.0167 & I & 8.65 & 72.4 & 92 & 18.3 & 17.19 &43.82& & \\ 3C 33 & & BG239 & 0.0595 & II & 8.50 & 266 & 24 & 59.3 & 75 & 45.20& & \\ 3C 47 & & BG239 & 0.425 & II & 9.20 & 2330 & 73.6 & 28.8 & 34.39 &46.32 & &\\ 3C 48$^c$ & & $...$ & 0.367 & C & 8.80 & 1960 & 896 & 60 & 110.91 &46.62& & \\ 3C 66B & UGC01841 & Bo015 & 0.0215 & I & 8.58 & 93.6 & 182 & 26.8 & 4.76 & 43.56& & \\ 3C 79 & & BG239 & 0.2559 & II & 8.80 & 1294 & 10 & 33.2 & 42.08 &
46.03& 0316+162 & 2.23 \\ 3C 84$^b$ && $...$ & 0.0177 & I & 8.89 & 76.8 & 59600 & 66.8 & 1146.04&45.30 & & \\ 3C 98 & 0356+10 & BG158 & 0.0306 & II & 8.21 & 134 & 9 & 51.4 & 48.8$^a$ &44.28 & & \\ 3C 109 && BT065 & 0.3056 & II & 9.30 & 1586 & 263 & 23.5 & 120.02 & 46.51& & \\ 3C 123$^b$ && $...$ & 0.2177 & II & 7.87 & 1078 & 100 & 206 & 2.8 &44.99 & & \\ 3C 138& & BC081 & 0.759 & C & 8.70 & 4700 & 94 & 24.2 & 15.1$^a$ & 46.12& & \\ 3C 147$^b$ && $...$ & 0.545 & C & 8.70 & 3142 & 2500 & 65.9 & 22.4$^a$ &46.03& & \\ 3C 173.1 && BG239 & 0.292 & II & 8.96 & 1510 & 7.4 & 16.8 & 0.6 & 44.67&0708+742 & 0.78 \\ 3C 192 & &BG239 & 0.0598 & II & 8.43 & 268 & 8 & 23 & 3.2 & 44.13& 0759+252 &1.19 \\ 3C 196 & &BG239 & 0.871 & II & 9.60 & 5570 & 7 & 74.3 & 22.9 & 46.68&0804+499 &1.82 \\ 3C 208 & &BH167 & 1.109 & II & 9.40 & 7510 & 51 & 18.3 & 5.8 & 46.38& & \\ 3C 212 & &BH057 & 1.049 & II & 9.20 & 7010 & 150 & 16.5 & 15.5 &46.68 & & \\ 3C 216$^b$& &$...$ & 0.668 & II & 7.00 & 4020 & 1050 & 22 & 28.7 &46.58& & \\ 3C 219& & BG239 & 0.1744 & II & 8.77 & 841.7 & 51 & 44.9 & 11.2 & 45.31& & \\ 3C 220.1 && BG239 & 0.61 & II & 8.40 & 3600 & 25 & 17.2 & 2.4 & 45.66&& \\ 3C 226 && BG239 & 0.82 & II & 8.05 & 5200 & 7.5 & 16.4 & 15.65 & 46.52&0943+105 &0.77 \\ 3C 228 && BG239 & 0.5524 & II & 8.27 & 3194 & 13.3 & 23.8 & 0.99 & 45.29& 0951+175& 3.15 \\ 3C 234 && BG239 & 0.1848 & II & 8.88 & 897.5 & 90 & 34.2 & 239 &46.39& & \\ 3C 254 & &BG239 & 0.734 & II & 9.30 & 4510 & 19 & 21.7 & 11.6$^a$ &46.01& & \\ 3C 263 && BG239 & 0.652 & II & 9.10 & 3910 & 157 & 16.6 & 29.8 &46.57& & \\ 3C 264 && BK125 & 0.0208 & I & 8.57 & 90.5 & 200 & 28.3 & 10.32 & 43.80& & \\ 3C 272.1$^b$ && $...$ & 0.0029 & I & 8.40 & 12 & 180 & 21.1 & 27.6&42.76 & & \\ 3C 274 & J1230+12 & W040 & 0.0041 & I & 8.86 & 18 & 4000 & 1144.5 & 42.96 &43.19& & \\ 3C 275.1& & BG239 & 0.557 & II & 8.30 & 3230 & 130 & 19.9 & 8.4 & 46.03& & \\ 3C 286$^b$ & & $...$ & 0.849 & C & 8.50 & 5400 & 5554 & 27.3 & 7.64$^a$ &45.97& & \\ 3C 288 && 
BG239 & 0.246 & I & 9.50 & 1240 & 30 & 20.6 & 0.6 & 44.55& & \\ 3C 300 && BG239 & 0.272 & II & 8.49 & 1390 & 9 & 19.5 & 0.7 & 44.67&1417+172&2.61 \\ 3C 309.1 & J1459+71 &BB233 & 0.904 & II & 9.10 & 5830 & 2350 & 24.7 & 17.2$^a$&46.29 & & \\ 3C 326 & 1549 & BG202 & 0.0895 & II & 8.23 & 409 & 13 & 22.2 & 0.39 &43.69& & \\ 3C 338 && BV017 & 0.0303 & I & 9.07 & 133 & 105 & 51.1 & 2.4 & 43.56&& \\ 3C 380 && BM157 & 0.691 & I & 9.40 & 4190 & 7447 & 64.7 & 40.4 & 46.72&& \\ 3C 382 && BT065 & 0.0578 & II & 8.75 & 258 & 188 & 21.7 & 114 & 45.33& & \\ 3C 386 && BG239 & 0.0177 & I & 8.57 & 76.8 & 120 & 26.1 & 2.47 & 43.20& & \\ 3C 388 && BG239 & 0.0908 & II & 8.81 & 415 & 62 & 26.8 & 0.84 & 43.96& & \\ 3C 390.3$^b$& & $...$ & 0.0569 & II & 8.92 & 254 & 330 & 51.8 & 164 & 45.44& & \\ 3C 401 && BG239 & 0.201 & II & 9.18 & 986 & 32 & 22.8 & 0.8 & 44.50& & \\ 3C 436 && BG239 & 0.2145 & II & 8.66 & 1060 & 19 & 19.4 & 1.5 & 44.76& & \\ 3C 438 & &BG239 & 0.29 & II & 8.74 & 1490 & 7.1 & 48.7 & 0.45 & 44.56& 2202+363&2.23 \\ 3C 452 & &BB199 & 0.0811 & II & 8.46 & 369 & 130 & 59.3 & 45 &45.24 & & \\ 3C 465 && V018 & 0.0293 & I & 8.77 & 128 & 270 & 41.2 & 3.17 & 43.63& & \\ \end{longtable} \end{footnotesize} \end{center} {\footnotesize Notes: Columns (1) - (2): source name and alias name; Column (3): VLBA project code, $^b,^c$ - VLBA measurements from Fomalont et al. (\cite{2000ApJS..131...95F}), and Worrall et al. 
(\cite{2004MNRAS.347..632W}), respectively; Columns (4) and (5): redshift and FR type, where I and II are the FR classes and C represents a core-dominated source; Column (6): black hole mass; Column (7): luminosity distance; Columns (8) - (11): the VLA core flux density at 5 GHz, the 178 MHz flux density, the mid-infrared flux density at 15 $\mu$m ($^a$ - at 24 $\mu$m), and the bolometric luminosity; Columns (12) - (13): phase calibrator for the phase-referencing observations, and its separation from the source.} \section{Data compilation} \label{sec:3} In this work, the VLBA and MIR data are essential to study the relationship between the accretion mode and pc-scale jets in radio galaxies; they are compiled from our own observations and from archival data. \subsection{VLBA observations and data reduction} \label{subsec:Vdata} The VLBA observations of our sample consist of three groups. In the first group, we performed VLBA observations at C band with a total observing time of 20 hours for 23 sources, split into three blocks for scheduling convenience on Feb. 13, 14, and 15, 2016 (program ID: BG239). For two of these 23 sources, we were not able to make images due to poor $uv$ data quality, thus this group finally consists of 21 objects. Among these 21 sources, thirteen radio galaxies could be self-calibrated with an observing time of 30 min each, while for the remaining eight sources phase referencing was required, with an on-source time of 40 min each. These sources and the related phase calibrators are listed in Table \ref{tab_1}. Group two has 16 radio galaxies, for which the VLBA observational data can be downloaded from the NRAO archive\footnote{https://archive.nrao.edu/archive/advquery.jsp} (see program IDs in Table \ref{tab_1}). For the remaining eight sources, the third group, the measurements of jet components can be obtained directly from the literature (Fomalont et al. \cite{2000ApJS..131...95F}; Worrall et al. \cite{2004MNRAS.347..632W}).
The data reduction was performed for the sources in groups one and two. The data were processed with AIPS in the standard way. Before fringe fitting, we corrected for the Earth orientation parameters, removed the dispersive delay from the ionosphere, and calibrated the amplitude using the system temperatures and gain curves. Phase calibration was then performed by correcting for the instrumental phase and delay using the pulse-calibration data, and removing the residual phase, delay, and rate for relatively strong targets by fringe fitting on the source itself. For weaker targets, the phase-referencing technique was used, applying the interpolated residual phase, delay, and rate solutions from the phase calibrator to the corresponding target. Imaging and model fitting were performed in DIFMAP, and the final results are given in Table \ref{tab_2}, in which the measurements of jet components adopted directly from the literature are also given for eight sources. Tentatively, we assume the brightest component to be the radio core in this work. The VLBA radio images for each object are shown in Figure \ref{fig:bg239} and Figure \ref{fig:other}, for groups one and two, respectively. All images are at 5 GHz, except for 3C 208, for which 8 GHz data are used since no 5 GHz data are available. \begin{center} \begin{footnotesize} \begin{longtable}{lcccccccr} \caption{Results for the radio galaxies \label{tab_2}}\\ \hline\hline Name & Comps. & FR & $S$ & $r$ & $\theta$ & $a$ & $b/a$ &log $T_{\rm B}$ \\ & & & (mJy) & (mas) & (deg)& (mas) & &(K) \\ (1) & (2) & (3) & (4) & (5) & (6) & (7)&(8) & (9) \\ \endfirsthead \caption{Continued.}\\ \hline\hline Name & Comps.
& FR & $S$ & $r$ & $\theta$ & $a$ & $b/a$ &log $T_{\rm B}$ \\ & & & (mJy) & (mas) & (deg)& (mas) & &(K) \\ (1) & (2) & (3) & (4) & (5) & (6) & (7)&(8) & (9) \\ \hline \endhead \hline \caption{continued on next page} \endfoot \hline \endlastfoot \hline 3C 31 & C & I &80.37 & 0.24 & 178.16 & 0.22 & 1.00 & 11.09 \\ & & & 15.97 & 0.85 & $-19.75$ & 0.14 & 1.00 & \\ & & & 3.09 & 7.79 & $-14.43$ & 1.55 & 1.00 & \\ & & & 5.02 & 3.00 & $-13.66$ & 1.18 & 1.00 & \\ & & & 1.16 & 11.45 & $-14.64$ & 1.40 & 1.00 & \\ 3C 33 & C & II & 20.89 & 0.07 & $-149.68$ & 0.24 & 1.00 & 10.46 \\ & & & 1.90 & 4.57 & $-155.97$ & 0.25 & 1.00 & \\ & & & 12.94 & 0.26 & 25.18 & 14.18 & 0.05 & \\ & & & 2.65 & 15.41 & $-158.56$ & 5.34 & 0.32 & \\ & & & 1.24 & 12.15 & 25.19 & 0.65 & 1.00 & \\ & & & 1.69 & 42.71 & 26.19 & 5.78 & 0.33 & \\ & & & 0.91 & 16.14 & 21.54 & 1.11 & 1.00 & \\ & & & 0.59 & 35.32 & 25.76 & 0.07 & 1.00 & \\ & & & 0.60 & 54.06 & 26.83 & 1.77 & 1.00 & \\ & & & 0.41 & 4.14 & 27.21 & 0.49 & 1.00 & \\ 3C 47 & C & II & 50.97 & 0.05 & 33.95 & 0.14 & 1.00 & 11.57 \\ & & & 5.52 & 2.06 & $-149.00$ & 0.24 & 1.00 & \\ & & & 5.13 & 7.46 & $-149.60$ & 13.71 & 0.08 & \\ & & & 0.78 & 20.88 & $-149.04$ & 1.09 & 1.00 & \\ & & & 1.31 & 4.49 & $-146.83$ & 0.41 & 1.00 & \\ & & & 1.04 & 12.30 & $-145.93$ & 0.75 & 1.00 & \\ & & & 1.10 & 1.58 & 22.11 & 0.42 & 1.00 & \\ & & & 0.80 & 7.01 & $-153.43$ & 0.48 & 1.00 & \\ & & & 0.24 & 25.96 & $-147.85$ & 0.62 & 1.00 & \\ 3C 48 & C & C &56.10 & & 171.00 & 2.20 & 0.18 & 9.93 \\ 3C 66B & C &I &137.32 & 0.31 & $-124.58$ & 0.03 & 1.00 & 13.05 \\ & & & 84.78 & 0.55 & 56.66 & 0.03 & 1.00 & \\ & & & 1.52 & 21.79 & 53.79 & 1.24 & 1.00 & \\ & & & 16.05 & 2.43 & 59.46 & 0.36 & 1.00 & \\ & & & 5.38 & 4.80 & 55.70 & 0.16 & 1.00 & \\ & & & 21.33 & 11.02 & 53.64 & 13.38 & 0.10 & \\ & & & 0.66 & 7.23 & 57.92 & 0.41 & 1.00 & \\ 3C 79 & C & II &27.10 & 0.02 & 108.07 & 0.46 & 1.00 & 10.16 \\ & & & 0.98 & 2.41 & $-72.30$ & 1.97 & 1.00 & \\ & & & 0.86 & 7.06 & $-71.87$ & 2.50 & 
1.00 & \\ & & & 0.32 & 14.89 & $-13.89$ & 0.12 & 1.00 & \\ & & & 0.30 & 24.28 & $-48.36$ & 0.16 & 1.00 & \\ 3C 84 & C & I &17752.00 & & 154.00 & 4.60 & 0.78 & 10.90 \\ & & & 5833.00 & & 170.00 & 6.60 & 0.14 & \\ & & & 3084.00 & & 161.00 & 5.70 & 0.28 & \\ 3C 98 & C & II &44.87 & 0.01 & 14.63 & 0.21 & 1.00 & 10.88 \\ 3C 109 & C & II & 221.48 & 0.05 & $-7.57$ & 0.22 & 1.00 & 11.75 \\ & & & 28.37 & 1.91 & 155.42 & 0.11 & 1.00 & \\ & & & 6.30 & 8.66 & 150.16 & 1.51 & 1.00 & \\ & & & 9.16 & 3.66 & 154.24 & 1.05 & 1.00 & \\ & & & 3.72 & 26.09 & 147.10 & 1.25 & 1.00 & \\ & & & 5.40 & 13.83 & 152.72 & 2.92 & 1.00 & \\ & & & 3.77 & 20.73 & 149.93 & 2.51 & 1.00 & \\ & & & 4.47 & 5.37 & 151.00 & 1.01 & 1.00 & \\ & & & 4.40 & 10.81 & 150.35 & 1.43 & 1.00 & \\ 3C 123 & C & II &111.00 & & 92.00 & 3.90 & 0.77 & 9.00 \\ 3C 138 & C & C & 130.47 & 0.04 & $-113.69$ & 0.60 & 1.00 & 10.90 \\ & & &76.55 & 1.64 & $-109.71$ & 0.16 & 1.00 & \\ & & & 47.50 & 3.94 & 112.07 & 3.51 &0.21 & \\ & & & 98.36 & 6.30 & $-92.35$ & 0.19 & 1.00 & \\ & & & 22.17 & 10.83 & 58.89 & 6.77 & 0.25 & \\ & & & 38.20 & 14.58 & 27.92 & 8.06 & 1.00 & \\ & & & 10.93 & 19.00 & 6.22 & 5.11 & 0.26 & \\ & & & 20.80 & 2.69 & $-199.01$ & 0.54 & 1.00 & \\ 3C 147 & C & C &882.00 & & 171.00 & 2.10 & 0.57 & 10.77 \\ & & & 506.00 & & 16.00 & 2.80 & 0.18 & \\ & & & 676.00 & & 146.00 & 5.00 & 0.28 & \\ & & & 222.00 & & & 1.40 & 0.43 & \\ 3C 173.1& C & II &14.81 &0.02 & $-178.81$ & 0.06 & 1.00 & 11.69 \\ 3C 192 & C & II &13.63 & 0.02 & $-150.73$ & 0.03 & 1.00 & 12.09 \\ 3C 196 & C & II &14.25 & 0.02 & $-69.89$ & 0.18 & 1.00 & 11.03 \\ 3C 208$^a$ & C &II & 72.62 & 0.02 & 170.04 & 0.15 & 0.88 & 11.65 \\ & & & 9.01 & 3.80 & $-96.16$ & 0.90 & 0.45 & \\ & & & 4.54 & 0.78 & $-95.66$ & 0.07 & 1.00 & \\ 3C 212 & C & II &118.40 & 0.04 & 125.47 & 0.16 & 1.00 & 12.14 \\ & & & 12.87 & 1.06 & $-40.29$ & 0.07 & 1.00 & \\ & & & 3.41 & 11.64 & $-35.84$ & 1.63 & 1.00 & \\ & & & 10.79 & 2.53 & $-43.52$ & 0.84 & 1.00 & \\ & & & 1.40 & 15.20 & 
$-36.59$ & 0.17 & 1.00 & \\ & & & 1.00 & 17.67 & $-35.88$ & 0.24 & 1.00 & \\ 3C 216 & C & II &620.00 & & 152.00 & 0.80 & 0.38 & 11.70 \\ & & & 85.00 & & 149.00 & 2.70 & 0.22 & \\ 3C 219 & C & II &44.47 & 0.10 & 42.98 & 0.09 & 1.00 & 11.73 \\ & & & 12.08 & 1.30 & $-137.30$ & 2.83 & 0.04 & \\ & & & 0.78 & 7.24 & $-141.25$ & 4.03 & 0.21 & \\ & & & 0.24 & 15.82 & $-140.63$ & 1.66 & 1.00 & \\ 3C 220.1 & C &II &22.26 & 0.16 & $-66.26$ & 0.58 & 0.32 & 10.35 \\ & & & 6.57 & 0.68 & 80.44 & 1.31 & 0.20 & \\ & & & 0.64 & 5.43 & 82.56 & 3.08 & 0.82 & \\ & & & 0.31 & 11.14 & 107.56 & 0.06 & 1.00 & \\ 3C 226 & C & II &17.22 & 0.03 & $-11.04$ & 0.03 & 1.00 & 12.66 \\ & & & 0.25 & 1.34 & 148.61 & 0.17 & 1.00 & \\ & & & 0.27 & 1.50 & $-25.26$ & 0.13 & 1.00 & \\ 3C 228 & C & II &18.41 & 0.03 & 1.36 & 0.05 & 1.00 & 12.10 \\ & & & 0.90 & 2.20 & $-173.66$ & 0.09 & 1.00 & \\ & & & 0.30 & 9.45 & $-167.45$ & 0.36 & 1.00 & \\ & & & 0.25 & 4.34 & $-164.92$ & 0.26 & 1.00 & \\ 3C 234 & C & II &19.71 & 0.19 & $-166.70$ & 0.28 & 1.00 & 10.39 \\ & & & 12.48 & 5.19 & 65.34 & 1.52 & 0.39 & \\ & & & 1.87 & 8.49 & 67.38 & 0.62 & 1.00 & \\ & & & 10.44 & 1.53 & 67.37 & 0.28 & 1.00 & \\ & & & 0.47 & 17.01 & 65.87 & 1.56 & 1.00 & \\ & & & 0.23 & 5.79 & $-108.60$ & 0.22 & 1.00 & \\ & & & 0.20 & 29.54 & 66.67 & 0.84 & 1.00 & \\ & & & 0.23 & 11.87 & 63.40 & 0.17 & 1.00 & \\ & & & 0.23 & 13.55 & 72.18 & 0.20 & 1.00 & \\ & & & 0.16 & 3.93 & 39.60 & 1.03 & 1.00 & \\ 3C 254 & C & II &18.19 & 0.11 & 86.87 & 0.02 & 1.00 & 12.99 \\ & & & 3.27 & 1.25 & $-71.87$ & 0.04 & 1.00 & \\ & & & 2.61 & 3.20 & $-71.01$ & 1.06 & 1.00 & \\ & & & 0.71 & 5.98 & $-66.40$ & 0.80 & 1.00 & \\ & & & 0.32 & 9.71 & $-70.11$ & 1.09 & 1.00 & \\ & & & 0.26 & 12.27 & $-72.26$ & 0.24 & 1.00 & \\ 3C 263 & C & II &111.60 & 0.10 & $-72.97$ & 0.01 & 1.00 & 13.38 \\ & & & 49.77 & 0.91 & 108.43 & 2.09 & 0.18 & \\ & & & 4.06 & 3.15 & 111.08 & 0.06 & 1.00 & \\ & & & 10.64 & & 113.84 & 3.31 & 0.16 & \\ & & & 0.36 & & 109.23 & 0.72 & 1.00 & \\ & & & 
1.69 & & 111.88 & 14.05 & 0.08 & \\ 3C 264 & C &I &159.07 & 0.08 & 174.75 & 0.13 & 1.00 & 11.84 \\ & & & 20.00 & 1.55 & 28.35 & 0.35 & 1.00 & \\ & & & 18.15 & 4.39 & 24.31 & 1.91 & 1.00 & \\ 3C 272.1 & C & I &187.00 & & & 1.10 & 0.64 & 10.23 \\ & & & 13.00 & & & 2.00 & 1.00 & \\ 3C 274 & C &I &850.85 & 0.14 & 84.64 & 0.40 & 1.00 & 11.58 \\ & & & 390.81 & 0.89 & $-82.20$ & 0.63 & 1.00 & \\ & & & 308.21 & 0.89 & 98.28 & 0.46 & 1.00 & \\ & & & 43.08 & 2.00 & 107.23 & 1.17 & 1.00 & \\ & & & 137.06 & 2.05 & -83.38 & 0.29 & 1.00 & \\ 3C 275.1 & C & II &262.91 & 0.19 & 144.58 & 0.72 & 0.08 & 12.03 \\ & & & 77.80 & 1.38 & $-25.10$ & 0.44 & 1.00 & \\ & & & 6.56 & 8.06 & $-18.32$ & 1.68 & 1.00 & \\ & & & 7.16 & 3.16 & $-15.05$ & 1.22 & 1.00 & \\ & & & 2.02 & 14.96 & $-19.14$ & 2.70 & 1.00 & \\ 3C 286 & C & C &1723.00 & & 33.00 & 4.60 & 0.78 & 10.41 \\ & & & 978.00 & & 61.00 & 7.40 & 0.50 & \\ & & & 192.00 & & 108.00 & 2.60 & 0.58 & \\ 3C 288 & C & I &20.18 & 0.20 & 74.08 & 0.26 & 1.00 & 10.52 \\ 3C 300 & C & II &25.12 & 0.02 & 106.06 & 0.50 & 0.10 & 11.06 \\ 3C 309.1 & C & II &287.32 & 0.38 & $-36.12$ & 0.16 & 1.00 & 12.46 \\ & & & 325.52 & 23.84 & 163.32 & 1.42 & 1.00 & \\ & & & 83.38 & 24.70 & 165.76 & 0.82 & 1.00 & \\ & & & 283.09 & 1.01 & 162.64 & 0.35 & 1.00 & \\ & & & 148.12 & 22.44 & 167.73 & 2.24 & 1.00 & \\ & & & 242.09 & 50.30 & 155.79 & 14.43 & 1.00 & \\ & & & 142.48 & 40.73 & 163.31 & 4.42 & 1.00 & \\ & & & 6.38 & 52.10 & 145.74 & 0.69 & 1.00 & \\ & & & 35.80 & 2.54 & 165.33 & 0.36 & 1.00 & \\ & & & 16.38 & 36.63 & 174.28 & 1.80 & 1.00 & \\ & & & 11.38 & 31.04 & 164.18 & 0.42 & 1.00 & \\ 3C 326 & C & II &35.75 & 0.04 & 133.83 & 0.04 & 1.00 & 12.28 \\ & & & 0.86 & 4.55 & $-37.95$ & 0.59 & 1.00 & \\ & & & 1.22 & 0.84 & $-66.60$ & 0.28 & 1.00 & \\ & & & 0.27 & 2.69 & 130.60 & 0.55 & 1.00 & \\ 3C 338 & C & I &90.81 & 0.12 & $-113.55$ & 0.16 & 1.00 & 11.42 \\ & & & 33.52 & 1.50 & 79.75 & 1.24 & 1.00 & \\ & & & 13.40 & 3.00 & $-94.21$ & 0.54 & 1.00 & \\ & & & 10.04 & 
6.02 & 78.11 & 1.45 & 1.00 & \\ & & & 7.34 & 10.46 & $-90.40$ & 2.15 & 1.00 & \\ & & & 5.50 & 10.33 & 86.11 & 1.26 & 1.00 & \\ & & & 7.25 & 20.47 & 91.12 & 2.66 & 1.00 & \\ 3C 380 & C & I &1038.99 & 0.61 & 149.70 & 1.37 & 0.15 & 11.87 \\ & & & 240.58 & 9.09 & $-31.75$ & 1.78 & 0.42 & \\ & & & 153.21 & 2.87 & $-26.57$ & 1.91 & 0.31 & \\ 3C 382 & C & II &120.30 & 0.43 & $-125.88$ & 0.06 & 1.00 & 12.42 \\ & & & 104.67 & 0.56 & 52.66 & 0.31 & 1.00 & \\ & & & 22.94 & 1.99 & 56.80 & 0.75 & 1.00 & \\ 3C 386 & C & I &17.19 & 0.05 & $-136.79$ & 0.33 & 1.00 & 10.07 \\ 3C 388 & C & II &35.55 & 0.16 & 48.61 & 0.19 & 1.00 & 10.92 \\ & & & 6.11 & 1.38 & $-114.17$ & 0.16 & 1.00 & \\ & & & 2.70 & 3.81 & $-118.78$ & 1.38 & 1.00 & \\ & & & 0.77 & 9.74 & $-117.36$ & 1.59 & 1.00 & \\ & & & 1.37 & 2.45 & 54.78 & 2.42 & 1.00 & \\ & & & 0.33 & 15.07 & $-124.03$ & 0.40 & 1.00 & \\ & & & 0.16 & 6.68 & $-118.28$ & 0.40 & 1.00 & \\ 3C 390.3 & C & II &463.00 & & 159.00 & 1.90 & 0.21 & 10.68 \\ & & & 261.00 & & 166.00 & 1.40 & 0.28 & \\ 3C 401 & C & II &20.94 & 0.10 & 36.42 & 0.21 & 1.00 & 10.69 \\ & & & 5.58 & 0.62 & $-169.43$ & 0.43 & 1.00 & \\ & & & 0.49 & 5.08 & $-157.05$ & 0.18 & 1.00 & \\ & & & 0.24 & 17.04 & $-164.52$ & 1.46 & 1.00 & \\ & & & 2.06 & 1.93 & $-156.70$ & 0.44 & 1.00 & \\ 3C 436 & C & II &16.79 & 0.62 & 3.21 & 0.70 & 0.43 & 9.92 \\ & & & 0.88 & 5.36 & $-3.76$ & 0.90 & 1.00 & \\ & & & 0.39 & 15.27 & $-28.93$ & 0.21 & 1.00 & \\ & & & 0.27 & 36.68 & $-16.86$ & 0.78 & 1.00 & \\ 3C 438 & C & II &16.00 & 0.03 & $-106.51$ & 0.11 & 1.00 & 11.19 \\ 3C 452 & C & II &95.31 & 0.26 & $-95.22$ & 0.84 & 1.00 & 10.04 \\ & & & 74.44 & 2.12 & 85.89 & 1.20 & 1.00 & \\ & & & 27.54 & 1.90 & $-89.84$ & 0.54 & 1.00 & \\ & & & 9.76 & 8.73 & $-88.61$ & 1.32 & 1.00 & \\ & & & 8.06 & 4.99 & $-87.66$ & 2.52 & 1.00 & \\ & & & 5.64 & 9.43 & 81.59 & 1.26 & 1.00 & \\ 3C 465 & C &I &90.03 & 0.49 & 135.88 & 0.30 & 1.00 & 10.87 \\ & & & 29.16 & 1.54 & $-59.42$ & 0.81 & 1.00 & \\ & & & 12.22 & 3.12 & $-56.69$ 
& 0.56 & 1.00 & \\ & & & 3.66 & 8.11 & $-49.55$ & 0.67 & 1.00 & \\ & & & 7.58 & 4.45 & $-56.07$ & 0.86 & 1.00 & \\ \end{longtable} \end{footnotesize} \end{center} {\footnotesize Notes: Column (1): source name, $^a$ - at 8 GHz; Column (2): components, C represents the radio core; Column (3): FR types I and II, C represents a core-dominated source; Column (4): flux density; Columns (5) - (6): component position and its position angle; Column (7): major axis; Column (8): axial ratio; Column (9): brightness temperature.} \subsection{Mid-infrared data} \label{subsec:Mdata} We collected the MIR data for our sample from the NASA/IPAC Extragalactic Database (NED)\footnote{http://ned.ipac.caltech.edu/}, which originally come from observations with either Spitzer IRS/MIPS or ISOCAM (e.g. Ogle et al. \cite{2006ApJ...647..161O}; Haas et al. \cite{2005A&A...442L..39H}; Leipski et al. \cite{2009ApJ...701..891L}; Temi et al. \cite{2005ApJ...622..235T}). In the sample, the flux density at 15 $\mu$m is available for 39 sources, and only the 24 $\mu$m flux density is available for the remaining six radio galaxies (i.e., 3C 98, 3C 254, 3C 309.1, 3C 138, 3C 147, and 3C 286, see Table \ref{tab_1}). \section{Results and discussion} \label{sec:4} \subsection{Accretion mode} \label{sec:4.1} FRIs have lower radio luminosities than FRIIs for a similar host galaxy luminosity (Owen \& Ledlow \cite{1994ASPC...54..319O}). FRIs and FRIIs show a clear dividing line in the radio-optical luminosity plane, which can be re-expressed as a line of constant ratio of the jet or disk accretion power to the Eddington luminosity. This implies that the accretion process plays a more important role in the FRI/FRII dichotomy than a different environment (Ghisellini \& Celotti \cite{2001A&A...379L...1G}). Quasars hidden by dusty gas re-radiate their absorbed energy in the infrared. Ogle et al.
(\cite{2006ApJ...647..161O}) investigated the MIR emission using the Spitzer survey of 3C objects, including radio galaxies and quasars, selected by the relatively isotropic lobe emission. They argued that most of the MIR-weak sources may not contain a powerful accretion disk. It is likely that in these nonthermal, jet-dominated AGNs, the jet is powered by a radiatively inefficient accretion flow or black hole spin energy, rather than by energy extracted from an accretion disk. Two different central engines are recognized for FRIs and FRIIs in their study, with a dividing 15 $\mu$m luminosity of $\rm 8\times10^{43} ~erg ~s^{-1}$; sources with luminosities above this value are suggested to contain a radiatively efficient accretion flow. Instead of a fixed dividing luminosity, the accretion mode is investigated here through the Eddington ratio $L_{\rm bol}/L_{\rm Edd}$, in which $L_{\rm bol}$ and $L_{\rm Edd}$ are the bolometric and Eddington luminosities, respectively. The black hole masses of 17 sources are collected from the literature (McLure et al. \cite{2006NewAR..50..782M}; Wu \cite{2009MNRAS.398.1905W}; McNamara et al. \cite{2011ApJ...727...39M}; Mingo et al. \cite{2014MNRAS.440..269M}). For the remaining 28 radio galaxies, the black hole masses were estimated using the relationship between the host galaxy absolute magnitude in the R band ($M_{\rm R}$) and the black hole mass provided by McLure et al. (\cite{2004MNRAS.351..347M}), \begin{equation} \log\left(\frac{M_{\rm BH}}{M_{\odot}}\right)=-0.5\,M_{\rm R}-2.74 \end{equation} in which $M_{\rm R}$ was calculated from the $R$ magnitude in the updated online 3CRR catalogue. In this work, the bolometric luminosity $L_{\rm bol}$ is calculated from the mid-infrared luminosity at either 15 or 24 $\mu$m, using the relations of Runnoe et al.
(\cite{2012MNRAS.426.2677R}), \begin{equation} \log L_{\rm bol}=(10.514\pm4.390)+(0.787\pm0.098)~\log({\nu}L_{\rm \nu,15\mu m}) \end{equation} \begin{equation} \log L_{\rm bol}=(15.035\pm4.766)+(0.688\pm0.106)~\log({\nu}L_{\rm \nu,24\mu m}) \end{equation} in which a spectral index of $\alpha_{\nu}=-1$ is used for the k-correction. We adopted a conventional value of $L_{\rm bol}/L_{\rm Edd}= 10^{-2}$ to separate the radiatively efficient and inefficient accretion modes (e.g., Hickox et al. \cite{2009ApJ...696..891H}). The relationship between the VLBA core luminosity at 5 GHz and the Eddington ratio is presented in Figure \ref{fig:F_vlba}. The rest-frame 5 GHz luminosity is estimated from the VLBA 5 GHz or 8 GHz (for 3C 208) core flux density using a spectral index of $\alpha=0$. While most FRII radio galaxies have higher Eddington ratios than FRIs, we found that there is no one-to-one correspondence between FR morphology and accretion mode. Eight of the thirty FRIIs ($26.7\%$) have low accretion rates with $L_{\rm bol}/L_{\rm Edd}< 10^{-2}$, while the remaining 22 objects ($73.3\%$) are in the high accretion mode with $L_{\rm bol}/L_{\rm Edd}\ge 10^{-2}$. In contrast, two of the eleven FRIs ($18.2\%$) are in the radiatively efficient mode, while the remaining $81.8\%$ are radiatively inefficient. There is a significant correlation between the VLBA core luminosity at 5 GHz and the Eddington ratio, with a Spearman correlation coefficient of $r=0.820$ at $\gg 99.99$ per cent confidence. This implies that higher accretion rates are likely able to produce more powerful jets. The correlation between the MIR luminosity at 15 $\mu$m and the VLBA 5 GHz core luminosity is also investigated in Figure \ref{fig:MIR_core}. The luminosity at 15 $\mu$m for six sources was estimated from the 24 $\mu$m luminosity using a spectral index of $\alpha_{\nu}=-1$.
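As a concrete illustration, the black hole mass, bolometric luminosity, and Eddington ratio estimates above can be chained together numerically. This is an illustrative sketch, not the code used to build Table \ref{tab_1}; it adopts the standard Eddington luminosity $L_{\rm Edd}=1.26\times10^{38}\,(M_{\rm BH}/M_{\odot})~\rm erg~s^{-1}$, uses only the central values of the fitted relations, and the input numbers are hypothetical:

```python
import math

def log_mbh_from_MR(M_R):
    """log(M_BH/M_sun) from host R-band absolute magnitude (McLure et al. 2004)."""
    return -0.5 * M_R - 2.74

def log_lbol_from_mir(log_nuLnu, band="15um"):
    """log L_bol [erg/s] from the MIR luminosity (Runnoe et al. 2012), central values only."""
    if band == "15um":
        return 10.514 + 0.787 * log_nuLnu
    if band == "24um":
        return 15.035 + 0.688 * log_nuLnu
    raise ValueError("band must be '15um' or '24um'")

def log_eddington_ratio(log_lbol, log_mbh):
    """log(L_bol/L_Edd), with L_Edd = 1.26e38 (M_BH/M_sun) erg/s (standard value)."""
    log_ledd = math.log10(1.26e38) + log_mbh
    return log_lbol - log_ledd

# Hypothetical input values, not taken from Table 1:
log_mbh = log_mbh_from_MR(-23.0)       # -> 8.76
log_lbol = log_lbol_from_mir(44.0)     # log nuL_nu(15um) = 44 -> 45.142
lam = log_eddington_ratio(log_lbol, log_mbh)
# Radiatively efficient here means L_bol/L_Edd >= 1e-2, i.e. lam >= -2
efficient = lam >= -2.0
```

This hypothetical source lands just above the $10^{-2}$ dividing line, i.e. in the radiatively efficient regime.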
A significant correlation is found between the two parameters, with a Spearman correlation coefficient of $r=0.849$ at $\gg 99.99$ per cent confidence. After excluding the common dependence on redshift, the partial Spearman rank correlation method (Macklin \cite{1982MNRAS.199.1119M}) shows that the significant correlation is still present, with a correlation coefficient of $r=0.635$ at $\gg 99.99$ per cent confidence. The linear fit gives \begin{equation} \log L_{\rm core,5GHz}=(0.951\pm0.083)~\log({\nu}L_{\rm \nu,15\mu m})-(0.263\pm3.655) \end{equation} In a flux-limited low-frequency radio survey like the 3CRR sample, the low-frequency emission is mostly dominated by the lobes, which are normally located at the jet ends and thus represent past jet activity. In contrast, the MIR, and especially the pc-scale VLBA core emission, originate instantaneously and contemporaneously from the central engine. The strong correlation therefore indicates a tight relation between the accretion disk and the jets, as found in various works (e.g., Cao \& Jiang \cite{1999MNRAS.307..802C}; Gu et al. \cite{2009MNRAS.396..984G}). In the framework of the unification scheme of AGNs, FRIs are unified with BL Lac objects (BL Lacs), and FRIIs with flat-spectrum radio quasars (FSRQs) (Antonucci \cite{1993ARA&A..31..473A}; Urry \& Padovani \cite{1995PASP..107..803U}). Blazars consist of BL Lacs and FSRQs, and are characterized by strong beaming due to jets pointing towards us at small viewing angles. The jets in FSRQs are found to have higher power and higher velocity than those in BL Lacs (e.g., Gu et al. \cite{2009MNRAS.396..984G}; Chen \cite{2018arXiv180305715C}). On the other hand, the Eddington ratios of BL Lacs are systematically lower than those of radio quasars, with a rough division at $L_{\rm bol}/L_{\rm Edd} \sim 0.01$, which implies that the accretion mode of BL Lacs may be different from that of radio quasars (e.g., Xu et al. \cite{2009ApJ...694L.107X}).
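The redshift-controlled test above relies on the partial Spearman rank correlation, $r_{AB\cdot C}=(r_{AB}-r_{AC}r_{BC})/\sqrt{(1-r_{AC}^2)(1-r_{BC}^2)}$ (Macklin \cite{1982MNRAS.199.1119M}), where Spearman coefficients are Pearson correlations of the ranks (with mid-ranks for ties). A minimal pure-Python sketch; the data below are synthetic, not the sample values:

```python
import math

def ranks(x):
    """Mid-ranks of a sequence (average ranks assigned to ties)."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and x[order[j + 1]] == x[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = math.sqrt(sum((ai - ma) ** 2 for ai in a))
    vb = math.sqrt(sum((bi - mb) ** 2 for bi in b))
    return cov / (va * vb)

def spearman(a, b):
    return pearson(ranks(a), ranks(b))

def partial_spearman(a, b, c):
    """Spearman correlation of a and b, controlling for the common variable c."""
    r_ab, r_ac, r_bc = spearman(a, b), spearman(a, c), spearman(b, c)
    return (r_ab - r_ac * r_bc) / math.sqrt((1 - r_ac ** 2) * (1 - r_bc ** 2))

# Synthetic example: two log-luminosities that both scale with redshift z
z = [0.02, 0.05, 0.1, 0.2, 0.3, 0.5, 0.7, 0.9]
L1 = [41.0 + 2 * zi + 0.3 * (i % 3) for i, zi in enumerate(z)]
L2 = [43.0 + 2 * zi + 0.3 * ((i + 1) % 3) for i, zi in enumerate(z)]
print(spearman(L1, L2), partial_spearman(L1, L2, z))
```

The raw Spearman coefficient of the synthetic pair is inflated by the shared redshift dependence; the partial coefficient removes that common trend, which is the same logic applied to the MIR-core luminosity correlation.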
The radio galaxies used in this study have the advantage of avoiding strong contamination of the VLBA core emission by the jet beaming effect, since the jet viewing angles are usually large in radio galaxies. Our result that higher accretion rates are likely associated with stronger jets is generally in agreement with the unification scheme. \subsection{Pc-scale Radio Morphology} \label{sec:4.2} It can be clearly seen from the high-resolution VLBA 5 GHz images in Figures \ref{fig:bg239} and \ref{fig:other} that there are various morphologies among our sample sources, including 10 core-only, 29 one-sided core-jet, and 6 two-sided core-jet structures. The two-sided core-jet structure is found in 3C 33, 3C 38, 3C 338, 3C 452 (see Figures \ref{fig:bg239} and \ref{fig:other}), 3C 147, and 3C 286 (Fomalont et al. \cite{2000ApJS..131...95F}). In this work, we do not distinguish the latter two categories, and refer to both as core-jet structures. The radio morphologies were further studied via the fraction of sources with a given structure among the 17 radio galaxies with inefficient accretion flows and the 28 with efficient ones. At low Eddington ratios ($<10^{-2}$), we found that six out of seventeen sources ($35.3\%$) exhibit a core-only structure, while the remaining sources ($64.7\%$) have core-jet morphologies. In contrast, at high Eddington ratios ($\ge10^{-2}$), core-only and core-jet structures are present in 3 ($10.7\%$) and 25 ($89.3\%$) sources, respectively. It thus seems that higher accretion rates are more likely associated with core-jet structures. Since a similar distribution of viewing angles is likely present in our sample of radio galaxies, the radio morphology may reflect jet properties such as strength and speed in the different accretion modes. A core-jet radio morphology likely indicates a relatively powerful jet moving at higher speed, whereas a naked core may indicate a relatively weaker jet with lower speed.
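The projected linear sizes used below follow from the adopted cosmology ($\rm H_0 = 70~km~s^{-1}~Mpc^{-1}$, $\Omega_{\rm m}=0.30$, $\Omega_\Lambda=0.70$): an angular extent in mas converts to pc through the angular diameter distance. A self-contained numerical sketch using simple trapezoidal integration of the comoving distance (an illustration, not the exact code used for the tables):

```python
import math

C_KMS = 299792.458                       # speed of light, km/s
H0, OM, OL = 70.0, 0.30, 0.70            # adopted flat LCDM cosmology
MAS_TO_RAD = math.pi / (180.0 * 3600.0 * 1000.0)

def comoving_distance_mpc(z, n=10000):
    """Line-of-sight comoving distance in a flat LCDM cosmology (trapezoidal rule)."""
    def inv_E(zp):
        return 1.0 / math.sqrt(OM * (1.0 + zp) ** 3 + OL)
    h = z / n
    s = 0.5 * (inv_E(0.0) + inv_E(z)) + sum(inv_E(i * h) for i in range(1, n))
    return (C_KMS / H0) * s * h

def angular_diameter_distance_mpc(z):
    return comoving_distance_mpc(z) / (1.0 + z)

def luminosity_distance_mpc(z):
    return comoving_distance_mpc(z) * (1.0 + z)

def mas_to_pc(theta_mas, z):
    """Projected linear size corresponding to an angular size theta_mas at redshift z."""
    d_a_pc = angular_diameter_distance_mpc(z) * 1.0e6
    return theta_mas * MAS_TO_RAD * d_a_pc

# e.g. a 10 mas jet extent at z = 0.1 corresponds to roughly 18 pc
```

As a sanity check, $z=0.0167$ gives a luminosity distance of about 72 Mpc, matching the value of 72.4 Mpc listed for 3C 31 in Table \ref{tab_1}.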
Based on our analysis, we find that the radiatively inefficient accretion flow may also be inefficient at producing powerful jets, yielding jets moving at lower speeds, while the radiatively efficient one shows a higher probability of forming strong jets with higher speeds. This is consistent with the correlation between the VLBA core luminosity at 5 GHz and the Eddington ratio shown in Figure \ref{fig:F_vlba}. In a broader framework, this is also consistent with the radio-quiet populations. LINERs and Seyferts can be regarded as analogues of the two accretion systems (Kewley et al. \cite{2006MNRAS.372..961K}). LINERs seem to have radio cores more optically thick than those of Seyferts, and their radio emission is mainly confined to a compact core or the base of a jet; thus the radiatively inefficient accretion flow is likely to host a more compact VLBI pc-scale core than the radiatively efficient one. The pc-scale VLBA projected linear size $l$ of a source is estimated as the largest distance among radio components for core-jet sources, and directly as the major axis for core-only galaxies. The distribution of pc-scale VLBA sizes is presented in Figure \ref{fig:size} for all sources except eight objects for which the size is not available in the literature. There is a broad range, with most sources at 1 - 100 pc, and the jet extends to about 300 - 400 pc in several core-jet objects. We find a significant correlation between the linear size and the Eddington ratio, with a correlation coefficient of $r=0.671$ at $\gg 99.99$ per cent confidence (see Figure \ref{fig:size}). This indicates that sources with higher accretion rates may have more extended jets, again supporting our result of more powerful jets in higher-accretion systems. \subsection{Brightness Temperature} \label{sec:4.3} From the high-resolution VLBA images, the rest-frame brightness temperature of the radio core $T_{\rm B}$ can be estimated with (Ghisellini et al.
\cite{1993ApJ...407...65G}) \begin{equation} T_{\rm B}=1.77\times10^{12}(1+z)\left(\frac{S_{\nu}}{\rm Jy}\right)\left(\frac{\nu}{\rm GHz}\right)^{-2}\left(\frac{\theta_{\rm d}}{\rm mas}\right)^{-2} ~~\rm K \end{equation} in which $z$ is the redshift, $S_{\nu}$ is the core flux density at frequency $\nu$, and $\theta_{\rm d}$ is the angular diameter, $\theta_{\rm d}=\sqrt{ab}$, with $a$ and $b$ being the major and minor axes, respectively. An important related parameter is the Doppler factor $\delta$, which can be constrained by \begin{equation} \delta=T_{\rm B}/T_{\rm B}^{'} \end{equation} in which $T_{\rm B}^{'}$ is the intrinsic brightness temperature. The core brightness temperature distribution is presented in Figure \ref{fig:T_B}. In our sample, the core brightness temperature ranges from $10^{9}$ to $10^{13.38}$ K with a median value of $10^{11.09}$ K (see also Table \ref{tab_2}). Most sources are in the range of $10^{10} - 10^{12}$ K, below the inverse Compton catastrophe limit of $10^{12}$ K (Kovalev et al. \cite{2005AJ....130.2473K}). Therefore, the beaming effect may not be systematically significant in our sample, although it may not be trivial in some cases, for example in 3C 263, the source with the highest $T_{\rm B}$. In comparison, the VLBA core brightness temperatures of blazars typically range between $10^{11}$ and $10^{13}$ K with a median value near $10^{12}$ K, and can even extend up to $5\times10^{13}$ K (Kovalev et al. \cite{2005AJ....130.2473K}, \cite{2009ApJ...696L..17K}). These results are basically in agreement with the unification scheme of AGNs, relating FRIs/FRIIs to BL Lacs/FSRQs. The strong beaming effect results in the high brightness temperatures of the radio cores in blazars, while it is less pronounced in radio galaxies because of the large jet viewing angles. We have analyzed the correlation of the brightness temperature with the Eddington ratio in Figure \ref{fig:T_B}.
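For concreteness, the brightness-temperature relation above can be evaluated directly; the sketch below uses hypothetical input values (a 1 Jy core of 1 mas angular size at 5 GHz and $z=0$), not measurements from our sample:

```python
def brightness_temperature(z, s_nu_jy, nu_ghz, theta_mas):
    """Rest-frame core brightness temperature in K, following
    T_B = 1.77e12 (1+z) (S_nu/Jy) (nu/GHz)^-2 (theta_d/mas)^-2."""
    return 1.77e12 * (1.0 + z) * s_nu_jy * nu_ghz**-2 * theta_mas**-2

# Hypothetical source: 1 Jy core, 1 mas diameter, observed at 5 GHz, z = 0
t_b = brightness_temperature(z=0.0, s_nu_jy=1.0, nu_ghz=5.0, theta_mas=1.0)
print(f"T_B = {t_b:.2e} K")  # about 7.1e10 K, below the 1e12 K inverse-Compton limit
```

Doubling the angular size lowers $T_{\rm B}$ by a factor of four, reflecting the $\theta_{\rm d}^{-2}$ dependence.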
There is no correlation between the two parameters, and the distribution of $T_{\rm B}$ is similar at high and low accretion rates. \subsection{Compared with VLA data} \label{sec:4.4} We collected the VLA 5 GHz flux densities of our sources from the 3CRR catalogue, and then compared the VLBA with the VLA flux density. The flux ratio of VLBA core to VLA core, and the ratio of VLBA total to VLA core, are plotted against the Eddington ratio in Figure \ref{fig:compac}. The flux ratio between VLBA and VLA can in principle give information on the source compactness, since they represent the source structure at different scales, with the former normally at pc scale and the latter at kpc scale. There are no correlations between the flux ratios and the Eddington ratio. The flux ratio covers more than one order of magnitude, and there is no systematic difference between the high and low accretion regimes. It is interesting to see that the VLBA core flux density is higher than the VLA core flux density in many sources. This is most likely due to variability. The effect is even more pronounced when considering the VLBA total flux density: in this case, the VLBA total flux is higher than the VLA core flux in the majority of objects, implying that variability may be common in our sample. \subsection{Core/lobe Flux Density Ratio} \label{sec:4.5} By comparing the VLBI pc-scale core flux density with the 178 MHz flux density, we investigate the present status of the core radio activity. It might be possible that the sources with weak MIR dust emission are only recently in a radiatively inefficient accretion mode, while the large-scale radio morphology was produced by a past radiatively efficient accretion mode. Therefore, their core/lobe flux density ratios are expected to be low. In previous works (e.g., Ogle et al. \cite{2006ApJ...647..161O}), the core-to-lobe luminosity ratio at the VLA is indeed lower in MIR-weak FRIIs than in MIR-luminous FRIIs.
The ratio of VLBA core to 178 MHz flux density is plotted against the MIR luminosity and the Eddington ratio in Figure \ref{fig:corelobe}. While a MIR luminosity at 15 $\rm \mu m$ of $\rm 8\times 10^{43}~ erg~ s^{-1}$ is adopted to distinguish the MIR-weak and MIR-luminous sources in Ogle et al. (\cite{2006ApJ...647..161O}), we further use the Eddington ratio to identify the accretion mode. The flux ratio of VLBA core to 178 MHz covers about two orders of magnitude, and there is no simple dependence of the flux ratio on either the Eddington ratio or the MIR luminosity. Similar behaviours are seen in the panels of the flux ratio versus MIR luminosity and versus Eddington ratio. Considering the high and low accretion rate regimes separately, there is no correlation between the radio flux ratio and the MIR luminosity or the Eddington ratio. The distribution of the flux ratio at high accretion rate is broader than that at low rate, which is mainly concentrated at lower flux ratios and does not extend to very high values. Interestingly, the FRIIs with low MIR luminosity (below $\rm 8\times 10^{43}~ erg~ s^{-1}$) or low accretion rate ($L_{\rm bol}/L_{\rm Edd}< 10^{-2}$) are exclusively at the lower end of the distribution of the radio flux ratio. In contrast, the two MIR-luminous or highly accreting FRIs are all at the high end. It is possible that the location of these sources is due to the recent brightening or weakening of the central engine (i.e., both accretion and jet), resulting in a higher or lower VLBA core luminosity, and thus a lower or higher flux ratio of VLBA core to 178 MHz. \section{SUMMARY} \label{sec:5} We investigated the role of the accretion mode in producing the VLBI jets by utilizing the VLBA and MIR data for a sample of 45 3CRR radio galaxies. The accretion mode is constrained from the Eddington ratio, which is estimated from the MIR-based bolometric luminosity and the black hole masses.
While most FRII radio galaxies have higher Eddington ratios than FRIs, we found that there is no one-to-one correspondence between the FR morphology and the accretion mode, with eight FRIIs at low accretion rate and two FRIs at high accretion rate. There is a significant correlation between the VLBA core luminosity at 5 GHz and the Eddington ratio. We found that higher accretion rates may be more likely related with the core-jet structure, and thus with more extended jets. These results imply that higher accretion rates are likely able to produce more powerful jets. There is a strong correlation between the MIR luminosity at 15 $\mu$m and the VLBA 5 GHz core luminosity, in favour of a tight relation between the accretion disk and the jets. In our sample, the core brightness temperature ranges from $10^{9}$ to $10^{13.38}$ K with a median value of $10^{11.09}$ K, indicating that the beaming effect may not be systematically significant. The exceptional cases, FRIs at high and FRIIs at low accretion rate, are exclusively at the high and low ends, respectively, of the distribution of the flux ratio of VLBA core to 178 MHz flux density. It is possible that the location of these sources is due to the recent brightening or weakening of the central engine (i.e., both accretion and jet). \section{ACKNOWLEDGEMENTS} \label{sec:rotate} We thank the anonymous referee for constructive comments that improved the manuscript. We thank Minhua Zhou, Mai Liao, and Jiawen Li for helpful discussions. Special thanks are given to Robert Antonucci for the initialization of the project and valuable discussions. This work is supported by the National Science Foundation of China (grants 11473054, U1531245, 11763002, and 11590784). This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
The VLBA experiment is sponsored by Shanghai Astronomical Observatory through the MoU with the NRAO. The VLBA is operated by the Long Baseline Observatory, a facility of the National Science Foundation managed by Associated Universities, Inc., under cooperative agreement with the National Science Foundation. \textit{Facility:} VLBA {\it Software:} IDL, AIPS, DIFMAP \newpage
\section{Introduction} Einstein's general relativity is a unique theory of gravity, and black holes are among its exact solutions, characterised by the no-hair theorem. The boundary of a black hole is known as the event horizon, a one-way surface: nothing can escape from it, including electromagnetic radiation. The existence of a singularity means that space-time ceases to exist there, signalling the breakdown of general relativity and requiring modifications of the theory. Sakharov \cite{Sakharov:1966} and Gliner \cite{Gliner:1966} proposed a way to resolve the singularity problem by considering a de Sitter core with equation of state $P=-\rho$, leading to a regular model without singularities. Such a core could replace the would-be singularity at the final stage of gravitational collapse. Using this idea, Bardeen \cite{Regular} gave the first black hole solution that possesses horizons but no singularity. This solution was later shown to be an exact solution of general relativity coupled with nonlinear electrodynamics (NLED) by Ayon-Beato and Garcia \cite{AGB,AGB1,ABG99}. Subsequently, significant efforts have been made to investigate regular black holes \cite{Ansoldi:2008jw,Lemos:2011dq,Zaslavskii:2009kp,Bronnikov:2000vy} (more recently refs. \cite{hc,lbev,Balart:2014cga,Xiang,singh,kumar:2019wpu,Singh:2019wpu,Kumar:2020bqf,dvs99,Tzikas:2018cvs,Singh:2020xju}), but most of these solutions are fundamentally based on Bardeen's proposal. Regular black holes have also been found in Einstein-Gauss-Bonnet gravity \cite{25,28,29,Singh20}, $f(R)$ gravity \cite{33}, quadratic gravity \cite{34}, $f(T)$ gravity \cite{35}, and noncommutative geometry \cite{27}, as well as rotating black hole solutions \cite{31,32} and $P$-$V$ criticality \cite{s1,s2,s3,s4,s5}.
The Einstein-Hilbert action in space-time coupled with NLED is expressed as \cite{AGB1,dvs99}, \begin{equation} I =\frac{1}{2 }\int d^{4}x\sqrt{-g}\Big[ \mathcal{R} +{\cal{L}}(F)\Big], \label{action1} \end{equation} where ${\cal R}$ is the Ricci scalar and ${\cal{L}}(F)$ is the Lagrangian density of the nonlinear field, given by \cite{AGB1,Singh:2020xju, dvs99} \begin{equation} {\cal{L}}(F)=\frac{3}{2sg^2}\left(\frac{\sqrt{2g^2F}}{1+\sqrt{2g^2F}}\right)^{5/2}, \end{equation} where $F$ is a function of $F_{ab}F^{ab}$, $F_{ab}$ is the electromagnetic field tensor, and $s$ is a parameter related to the mass and charge via $s=g/2M$. For spherically symmetric space-times, the only non-vanishing component of $F_{ab}$ is $F_{\theta\phi}$. Variation of the action in Eq. (\ref{action1}) with respect to the metric tensor $g_{ab}$ and the electromagnetic potential $A_a$ leads to \begin{eqnarray} &&R_{a b}-\frac{1}{2}g_{a b}R+\Lambda g_{a b}=T_{a b}\equiv2\left[\frac{\partial {\cal{L(F)}}}{\partial F}F_{a c}F_{b}^{c}-g_{a b}{\cal{L(F)}}\right],\\&& \nabla_{a}\left(\frac{\partial {\cal{L(F)}}}{\partial F}F^{a b}\right)=0,\qquad\qquad\qquad \nabla_{a}(* F^{ab})=0. \label{Field equation} \end{eqnarray} The spherically symmetric space-time admits the following black hole solution \begin{equation} ds^2=-\left(1-\frac{2M r^2}{(r^2+g^2)^{3/2}}\right)dt^2 +\frac{1}{\left(1-\frac{2M r^2}{(r^2+g^2)^{3/2}}\right)}dr^2+r^2d\Omega_2^2, \label{m1} \end{equation} where $d\Omega_2^2=d\theta^2+\sin^2\theta\, d\phi^2$ denotes the metric on a two-dimensional sphere, $g$ is a magnetic charge and $M$ is an integration constant related to the black hole mass. The solution becomes the Schwarzschild black hole solution in the absence of the magnetic charge. General relativity can be generalized into an effective theory in which the graviton acquires a mass, known as dRGT massive gravity.
It was introduced in the de Rham, Gabadadze and Tolley model \cite{drgt,1,2,3,4}, which adds a potential contribution to the Einstein-Hilbert action. The dRGT massive gravity is formulated such that the equation of motion does not contain higher-derivative terms, and consequently the ghost field vanishes. However, the construction of exact solutions in this theory is arduous, due to the nonlinear terms that lead to intricate calculations. Nevertheless, considerable efforts have been made to obtain spherically symmetric black holes in distinct massive gravities \cite{10,11,12,14,17,18,19,20,21,22,23,24,sgg}. Modifications of the dRGT model are based on the definition of the reference metric, and the most successful reference metric was suggested by Vegh \cite{vegh}. It is believed that dRGT massive gravity may provide a possible explanation for the accelerated expansion of the universe that does not require a cosmological constant, and it has received significant attention, including searches for black holes. Motivated by the work of Sakharov, Gliner and Bardeen, we present an exact black hole solution in dRGT massive gravity coupled to NLED \cite{AGB,AGB1,ABG99}. The NLED theory is richer than the Maxwell theory, and in the weak-field limit it reduces to Maxwell electrodynamics. It was shown that coupling gravity to NLED can remove black hole singularities. The obtained solution is regular everywhere, including $r\to 0$, and it reduces to the Schwarzschild massive black hole in the absence of the magnetic charge. We study the thermodynamics of the black hole, including the phase transition and the effect of the massive gravity parameter on it. We also study the thermodynamic behaviour, including the phase transition, by observing the nature of the free energy in the canonical and grand canonical ensembles. The remainder of this paper is organised as follows: the Bardeen black hole solution in dRGT massive gravity is obtained in Section 2.
This section also contains the relevant equations of Einstein theory coupled with NLED. The structure and location of the horizons of the Bardeen massive black holes are investigated in Section 3. Section 4 is devoted to the study of the thermodynamic properties of Bardeen massive black holes. We adopt the signature ($-$,+,+,+) for the metric and use the units $8\pi G = c = 1$. \section{Bardeen black holes solution in massive gravity} The Einstein-Hilbert action in the presence of the cosmological constant coupled with the dRGT massive gravity and NLED is given by \begin{equation} S=\int d^4x\sqrt{-g}\left[R+{\cal{L}}(F)+m^2_g\,\mathcal{U}(g,\phi^a)\right], \label{action} \end{equation} where $R$, $\mathcal{U}$ and $\phi ^a $ are the Ricci scalar, the potential for the graviton and the St\"uckelberg scalars, respectively. The potential $\mathcal{U}$ modifies the gravitational field through the graviton mass $m_g$. The effective potential $\mathcal{U}$ in four-dimensional spacetime is given as \begin{equation} \mathcal{U}(g,\phi^a)=\mathcal{U}_2+\alpha_3\mathcal{U}_3+\alpha_4\mathcal{U}_4, \label{pot} \end{equation} where $\alpha_3$ and $\alpha_4$ are dimensionless free parameters \cite{drgt,1,2,3,sgg} and \begin{eqnarray} &&\mathcal{U}_2\equiv [\mathcal{K}]^2-[\mathcal{K}^2]\\&&\mathcal{U}_3\equiv [\mathcal{K}]^3-[\mathcal{K}][\mathcal{K}^2]+2[\mathcal{K}^3]\\&&\mathcal{U}_4\equiv [\mathcal{K}]^4-6[\mathcal{K}]^2[\mathcal{K}^2]+8[\mathcal{K}][\mathcal{K}^3]+3[\mathcal{K}^2]^2-6[\mathcal{K}^4] \end{eqnarray} where $ \mathcal{K}_{b}^{a}=\delta_{b}^{a}-\sqrt{g^{a\sigma}\partial_\sigma\phi^c\partial_b\phi^d f_{cd}}$, $f_{ab}$ is a reference metric, and square brackets represent traces, i.e., $[\mathcal {K}]=\mathcal {K}_{a}^{a}$ and $[\mathcal {K}^{n}]= (\mathcal K^{n})_{a}^{a}$. The St\"uckelberg scalars $\phi^a$ are four scalar fields introduced to restore the general covariance of the theory.
It is observed that the interacting terms are symmetric polynomials of ${\cal K}$. The equation of motion does not contain higher-order derivative terms because of the chosen coefficients. We use the unitary gauge $\phi^a=x^{\mu}\delta^a_{\mu}$ \cite{vegh}. In this gauge, the observable metric tensor describes the five degrees of freedom of the massive graviton. Note that once the gauge is fixed, the St\"uckelberg scalars transform according to the coordinate transformation: since the unitary gauge is preferred, a coordinate transformation will break the gauge condition and then induce further changes in the St\"uckelberg scalars. The gravitational potential parameters $\alpha_3$ and $\alpha_4$ used in Eq. (\ref{pot}) are parametrized as follows, \begin{equation} \alpha_3=\frac{\alpha-1}{3},\qquad \alpha_4=\frac{\beta}{4}+\frac{1-\alpha}{12}. \end{equation} The equation of motion is obtained by varying the action (\ref{action}) with respect to $g_{ab}$, which gives \begin{eqnarray} &&R_{ab}-\frac{1}{2}g_{ab}R+m_{g}^2X_{ab}=T_{ab}\equiv 2\left[\frac{\partial {\cal{L(F)}}}{\partial F}F_{a c}F_{b}^{c}-g_{a b}{\cal{L(F)}}\right],\nonumber\\&& \nabla_{a}\left(\frac{\partial {\cal{L(F)}}}{\partial F}F^{a b}\right)=0, \label{efe} \end{eqnarray} where $X_{ab}$ is the effective energy-momentum tensor obtained by varying the potential ${\cal U}$ term with respect to $g_{ab}$ \cite{sgg} \begin{eqnarray} X_{ab}=&&\mathcal{K}_{ab}-\mathcal{K}g_{ab}-\alpha\left\{\mathcal{K}_{ab}^2-\mathcal{K}\mathcal{K}_{ab}+\frac{[\mathcal{K}]^2-[\mathcal{K}^2]}{2}g_{ab}\right\}\nonumber\\&&+3\beta\Big\{\mathcal{K}_{ab}^3-\mathcal{K}\mathcal{K}_{ab}^2+\frac{1}{2}\mathcal{K}_{ab}\left\{[\mathcal{K}]^2-[\mathcal{K}^2]\right\}-\frac{1}{6}g_{ab}\left\{[\mathcal{K}]^3-3[\mathcal{K}][\mathcal{K}^2]+2[\mathcal{K}^3]\right\}\Big\}. \end{eqnarray} An additional constraint, besides the modified Einstein equations, can be imposed by using the Bianchi identities, $\nabla^{a}X_{ab}=0$.
The line element of the spherically symmetric space-time is \begin{equation} ds^2=-f(r)dt^2 +\frac{1}{f(r)}dr^2+r^2d\Omega_2^2, \label{m1} \end{equation} with the following reference metric ansatz \cite{sgg} \begin{equation} f_{ab}=diag(0,0, c^2,c^2\sin^2\theta), \end{equation} where $c$ is a constant. With this choice of reference metric the action remains finite, since it contains only non-negative powers of $f_{ab}$ \cite{sgg}. The $(r,r)$ component of the modified Einstein equations (\ref{efe}) is \begin{eqnarray} \frac{ r f^\prime (r) +f(r)}{r^2}-\frac{1}{r^2}-m_g^2\left(\frac{\alpha(3r-c)(r-c)}{r^2}+\frac{3\beta(r-c)^2}{r^2}+\frac{3r-2c}{r}\right)=\frac{3Mg^2r^2}{(r^2+g^2)^{5/2}}. \label{16} \end{eqnarray} Equation (\ref{16}) admits the following solution for the metric function $f(r)$, \begin{equation} f(r)=1-\frac{2M r^2}{(r^2+g^2)^{3/2}}+\frac{\Lambda}{3} r^2+\gamma r+\zeta,\label{sol1} \end{equation} with \begin{eqnarray} &&\Lambda=3m_g^2(1+\alpha+\beta),\\&& \gamma=-cm_g^2{(1+2\alpha+3\beta)},\\&& \zeta=c^2m_g^2(\alpha+3\beta). \end{eqnarray} In the obtained solution (\ref{sol1}), the cosmological constant $\Lambda$ occurs naturally in the theory in terms of the graviton mass $m_g$, which thus plays the role of the cosmological constant. The solution (\ref{sol1}) reduces to the Bardeen black hole solution in the absence of the massive gravity parameter $(m_g=0)$ \cite{singh,dvs99,Tzikas:2018cvs}, and it reduces to the Schwarzschild black hole in the absence of both the massive gravity parameter and the magnetic charge. \noindent The horizons of the black hole are obtained from $f(r)=0$, \begin{eqnarray} 1-\frac{2M r^2}{(r^2+g^2)^{3/2}}+\frac{\Lambda}{3} r^2+\gamma r+\zeta=0. \end{eqnarray} This is a transcendental equation and cannot be solved analytically. A numerical analysis of $f(r)=0$, obtained by varying the magnetic charge $g$ with a fixed value of the massive gravity parameter $m_g^2c^2=1$, is depicted in Fig. \ref{fr1}.
The numerical analysis of $f(r) = 0$ reveals that it is possible to find non-vanishing values of $g$, $\alpha$, $\beta$ and $m_g^2c^2$ for which the metric function $f(r)$ has three real roots, which correspond to the Cauchy horizon, the event horizon and the cosmological horizon. The cosmological horizon $(r_c)$ is related to the graviton mass. \begin{figure*} [h] \begin{tabular}{c c c c} \includegraphics[width=0.75\linewidth]{f1.eps} \end{tabular} \caption{ Plot of $f(r)$ vs $r$ for different values of the magnetic charge $g$ with $\alpha=2,\,\beta=0.7$ and $m_g^2c^2=1$. } \label{fr1} \end{figure*} It is clear from Fig. \ref{fr1} that the size of the black hole increases with decreasing magnetic charge. The black hole has three horizons for $g<0.40$, {\it viz.} the Cauchy, event and cosmological horizons; two horizons for $g=0.40$, viz. the event and cosmological horizons; and only the cosmological horizon when $g>0.40$. Now let us study the nature of the singularity of the Bardeen massive black hole. It is useful to consider the curvature invariants, the Ricci square ($R_{ab}R^{ab}$) and the Kretschmann scalar ($R_{abcd}R^{abcd}$). The invariants are \begin{eqnarray} &&\lim_{r\to 0}R_{ab}R^{ab}=-12\Lambda+\frac{144M}{g^3}\left(\frac{\Lambda}{3}+\frac{M}{g^3}+\frac{ m^2 \gamma}{2} \right)+\frac{42m^2}{g^2}\left(\frac{m^2\zeta^2}{g^2}-\frac{2\Lambda\zeta}{3}+\frac{5\gamma^2}{12}+\frac{\zeta^2}{g^2} \right),\nonumber\\ &&\lim_{r\to 0}R_{abcd}R^{abcd}=\frac{4 \Lambda^2}{3}+\frac{48M}{g^3}\left(\frac{\Lambda}{3}+\frac{M}{g^3}+\frac{5 m^2 \zeta}{6g^2}\right)+\frac{7m^4 }{g^2}\left(\gamma^2 +\frac{6 \zeta^2}{g^2}+\frac{2g^2\zeta^2}{7}-\frac{4\zeta \Lambda}{3m^2}\right).\nonumber\\ \label{RR} \end{eqnarray} These invariants show that the black hole solution (\ref{sol1}) is regular everywhere, including the origin ($r=0$). The singularity of the solution is removed due to the presence of the Bardeen source (2).
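Since $f(r)=0$ is transcendental, the horizons must be located numerically; a minimal bisection sketch of this procedure is given below. The parameters $\alpha=2$, $\beta=0.7$, $m_g^2c^2=1$ mirror Fig. \ref{fr1}, but the mass $M=1$, the charge $g=0.2$ and the bracketing intervals are illustrative assumptions, chosen so that two real roots (the Cauchy and event horizons) lie in the brackets shown:

```python
def f(r, M=1.0, g=0.2, alpha=2.0, beta=0.7, mg2=1.0, c=1.0):
    """Metric function of Eq. (17), with Lambda, gamma, zeta from Eqs. (18)-(20)."""
    lam  = 3.0 * mg2 * (1.0 + alpha + beta)
    gam  = -c * mg2 * (1.0 + 2.0 * alpha + 3.0 * beta)
    zeta = c**2 * mg2 * (alpha + 3.0 * beta)
    return 1.0 - 2.0 * M * r**2 / (r**2 + g**2)**1.5 + lam / 3.0 * r**2 + gam * r + zeta

def bisect(func, a, b, tol=1e-12):
    """Locate a root of func in [a, b]; assumes func(a) and func(b) have opposite signs."""
    fa = func(a)
    while b - a > tol:
        mid = 0.5 * (a + b)
        if fa * func(mid) <= 0.0:
            b = mid
        else:
            a, fa = mid, func(mid)
    return 0.5 * (a + b)

# Illustrative brackets where f changes sign for M = 1, g = 0.2:
r_cauchy = bisect(f, 0.2, 0.5)
r_event  = bisect(f, 1.0, 1.5)
print(f"inner (Cauchy) horizon r = {r_cauchy:.4f}, event horizon r = {r_event:.4f}")
```

With $m_g=0$ and $g=0$ the same function reduces to the Schwarzschild form $f(r)=1-2M/r$, consistent with the limits noted above.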
\section{Thermodynamics of the black hole} \subsection{Canonical Ensemble} We investigate the thermodynamic properties of the Bardeen massive black hole in the canonical ensemble by considering a fixed charge $g$ of the black hole. One can determine the mass of the black hole from $ f(r)=0 $. The mass of the black hole in terms of the horizon radius $ r $ is given by \begin{eqnarray} M=\frac{(g^2+r^2)^{3/2}\left(1+r\gamma+\frac{\Lambda}{3}r^2+\zeta\right)}{2r^2}. \end{eqnarray} Substituting the values of $\Lambda$, $\gamma$ and $\zeta$ from Eqs. (18), (19) and (20) into Eq. (23), the mass of the Bardeen massive black hole becomes \begin{eqnarray} M=\frac{(g^2+r^2)^{3/2}}{2r^2}\left(1+m_g^2r^2(1+\alpha+\beta)+c^2 m_g^2(\alpha+3\beta)-cm_g^2r(1+2\alpha+3\beta)\right). \end{eqnarray} The mass of the Bardeen massive black hole reduces to the mass of the Bardeen black hole in the limit $m_g=0$ \cite{Tzikas:2018cvs} and to the mass of the Schwarzschild massive black hole when $g=0$ \cite{handi}. The black hole mass reduces to that of the Schwarzschild black hole in the limit $g=0$ and $m_g=0$. For convenience, one can take $ m_g^2c^2=1 $; under this condition on the dimensionless parameters, the black hole mass is positive for \begin{eqnarray} &&\alpha>-\,\frac{r^2(1+\beta)-r(1+3\beta)+(1+3\beta)}{(r-1)^2} \qquad\text{for}\,\, r\neq 1,\\&&\beta>-\,1\qquad\text{and $\alpha $ arbitrary for}\,\, r=1. \end{eqnarray} The temperature of the black hole, known as the Hawking temperature, is related to the surface gravity $\kappa$ by the relation $T=\kappa/2\pi $ \cite{Singh:2020rnm, Singh2018}.
The temperature $T$ of the black hole is \begin{equation} T=\frac{f'(r)}{4\pi}=\frac{r^2(1+2r\gamma+r^2\Lambda+\zeta)-g^2(2+r\gamma +2\zeta)}{4\pi r(g^2+r^2)}. \label{temp1} \end{equation} \begin{figure*} [h] \begin{tabular}{c c c c} \includegraphics[width=0.7\linewidth]{t1.eps} \end{tabular} \caption{ Plots of the temperature $T$ of the Bardeen massive black hole vs the horizon radius $r$ for different values of the magnetic charge $g$ with $\alpha=2,\,\beta=0.7$ and $m_g^2c^2=1$. The dotted curve shows the temperature of the Schwarzschild massive black hole.} \label{th0} \end{figure*} The temperature of the Bardeen black hole is recovered in the limit of vanishing graviton mass, $m_g =0$. The temperature is displayed in Fig. \ref{th0}, which shows that the temperature of the Bardeen massive black hole increases with the horizon radius $r$, attains a maximum value, then decreases and reaches a minimum value $T^{min}$, which differs for different values of the magnetic charge. After this, the temperature of the black hole increases monotonically with the horizon radius and coincides with that of the Schwarzschild massive black hole at $r=1.1$. Next, let us focus our attention on an important thermodynamic quantity, the entropy $S$ of the black hole in terms of the horizon radius, obtained by using the first law of thermodynamics $dM=TdS+\phi_gdg$ at fixed charge. The entropy of the Bardeen massive black hole is \begin{equation} S=\int\frac{dM}{T}=\pi r^2\left[\left(1-\frac{g^2}{r^2}\right)\sqrt{1+\frac{g^2}{r^2}}+\frac{3}{2}\frac{g^2}{r^2}\log(r+\sqrt{g^2+r^2})\right]. \end{equation} This entropy does not follow the area law in the presence of the magnetic charge; when $g=0$ it follows the standard Bekenstein-Hawking area law.
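As a consistency check on Eq. (24), the mass read off from $f(r_{\rm h})=0$ must make the metric function vanish at the horizon, and the Hawking temperature $T=f'(r)/4\pi$ can be evaluated with a numerical derivative. The sketch below uses the illustrative values $\alpha=2$, $\beta=0.7$, $m_g^2c^2=1$, $g=0.2$ and an assumed horizon radius $r_{\rm h}=1$:

```python
import math

def metric_pieces(g=0.2, alpha=2.0, beta=0.7, mg2=1.0, c=1.0):
    """Lambda, gamma, zeta of Eqs. (18)-(20)."""
    lam  = 3.0 * mg2 * (1.0 + alpha + beta)
    gam  = -c * mg2 * (1.0 + 2.0 * alpha + 3.0 * beta)
    zeta = c**2 * mg2 * (alpha + 3.0 * beta)
    return lam, gam, zeta

def horizon_mass(r, g=0.2, alpha=2.0, beta=0.7, mg2=1.0, c=1.0):
    """Black hole mass from f(r) = 0, Eq. (24)."""
    return (g**2 + r**2)**1.5 / (2.0 * r**2) * (
        1.0 + mg2 * r**2 * (1.0 + alpha + beta)
        + c**2 * mg2 * (alpha + 3.0 * beta)
        - c * mg2 * r * (1.0 + 2.0 * alpha + 3.0 * beta))

def f(r, M, g=0.2, **kw):
    """Metric function of Eq. (17)."""
    lam, gam, zeta = metric_pieces(g=g, **kw)
    return 1.0 - 2.0 * M * r**2 / (r**2 + g**2)**1.5 + lam / 3.0 * r**2 + gam * r + zeta

def hawking_temperature(r, M, h=1e-6, **kw):
    """T = f'(r) / (4 pi), via a central finite difference."""
    return (f(r + h, M, **kw) - f(r - h, M, **kw)) / (2.0 * h) / (4.0 * math.pi)

r_h = 1.0
M = horizon_mass(r_h)  # f(r_h, M) should vanish by construction
print(f"M = {M:.4f}, f(r_h) = {f(r_h, M):.1e}, T = {hawking_temperature(r_h, M):.4f}")
```

In the limit $m_g=0$, $g=0$ the same expression reproduces the Schwarzschild relation $M=r_{\rm h}/2$.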
Wald \cite{wald} has demonstrated that the black hole entropy obeys the area law, but in the case of regular black holes one does not obtain the correct form of the entropy using the first law of thermodynamics. Using the thermodynamic quantities associated with the black hole (mass, charge, temperature and entropy), one can easily show that these quantities do not satisfy the first law of thermodynamics, \begin{equation} dM\neq TdS+\phi_g\, dg. \end{equation} Ma {\it et al} \cite{Ma} modified the first law of black hole thermodynamics: when the black hole mass parameter $M$ is included in the energy-momentum tensor, the conventional form of the first law gets modified by an extra factor. The corrected quantities are obtained from the modified first law of thermodynamics \cite{ Ma, Maulif,Singh:2021rnm} \begin{equation} C_{M}dM=T\,dS + \phi_g\,dg, \label{mod} \end{equation} where the thermodynamic variables appearing in the above expression are given by \begin{eqnarray} &&\phi_{g}=\frac{(g^2+r^2)^{3/2}(3+3r\gamma+r^2\Lambda+3\zeta)}{6r^2},\nonumber\\ &&C_{M} ={4\pi}\int r^2 \frac{\partial T_0^0}{\partial M}\,dr=1-\frac{r^3}{(r^2+g^2)^{3/2}}. \label{thermo} \end{eqnarray} The entropy of the black hole is then \begin{equation} S=\int C_M \frac{dM}{T}=\pi r^2=\frac{A}{4}. \end{equation} One can also determine the locally thermodynamically stable state by checking the sign of the heat capacity. The heat capacity of the black hole at fixed charge is defined as \begin{equation} C=T\left(\frac{\partial S}{\partial T}\right). \label{hc1} \end{equation} Substituting the expressions for the temperature and entropy into Eq. (\ref{hc1}), the heat capacity becomes \begin{equation} C=\frac{2\pi(g^2+r^2)^{5/2}\left(r^2(1+2r\gamma+r^2\Lambda+\zeta)-g^2(2+r\gamma+2\zeta)\right)}{r\left(r^4(-1+r^2\Lambda-\zeta)+2g^4(1+\zeta)+g^2r^2(7+6r\gamma+3r^2\Lambda+7\zeta) \right)}. \end{equation} To analyse it, we plot the heat capacity in Fig.
\ref{sh1} for different values of the magnetic charge, which clearly exhibits that the heat capacity for a given value of the magnetic charge is discontinuous exactly at the critical radii $r_{c1}$ and $r_{c2}$. Further, it is noticeable that the black hole is thermodynamically stable for $r<r_{c1}$ and $r>r_{c2}$, whereas it is thermodynamically unstable for $r_{c1}<r<r_{c2}$. Moreover, the divergence of the heat capacity at the critical radii $r=r_{c1}$ and $r_{c2}$ indicates the occurrence of a phase transition \cite{handi}. The heat capacity is discontinuous at $r_{c1}=0.252$ and $r_{c2}=0.68$ for $g=0.10$. \begin{figure*} [h] \begin{tabular}{c c c c} \includegraphics[width=0.75\linewidth]{c1.eps} \end{tabular} \caption{ Plots of the specific heat $(C)$ vs the horizon radius $r$ of the black hole for different values of the magnetic charge $g$ with $\alpha=2,\,\beta=0.7$ and $m_g^2c^2=1$. The dotted curve shows the specific heat of the Schwarzschild massive black hole.} \label{sh1} \end{figure*} \noindent In the canonical ensemble, where there is no exchange of particles and the charge is fixed, one can consider the black hole to be a closed system. For this, we turn to the Helmholtz free energy \cite{Singh:2020rnm,sgg} \begin{equation} F =M-TS. \label{feh}\end{equation} Substituting the expressions for $M, T$ and $S$ into Eq. (\ref{feh}), the Helmholtz free energy of the Bardeen massive black hole becomes \begin{eqnarray} F&=&\frac{(g^2+r^2)^{3/2}(3+3r\gamma+r^2\Lambda+3\zeta)}{6r^2}-\frac{r\left(r^2(1+2r\gamma+r^2\Lambda+\zeta)\right)}{4(g^2+r^2)}\nonumber\\&&-\frac{r\left(-g^2(2+r\gamma+2\zeta)\right)}{4(g^2+r^2)}. \end{eqnarray} The condition for a globally thermodynamically stable black hole is $ F\leq 0 $. We analyse the stability of the black hole by studying the behaviour of the free energy, which is plotted in Fig. \ref{sh20} for different values of the magnetic charge $g$.
Here we see that the free energy has a local minimum and a local maximum corresponding to the extremal points of the Hawking temperature (see Fig. \ref{th0}). At these points the heat capacity flips sign (see Fig. \ref{sh1}). For $r>r_{c1}$, the free energy is an increasing function of the horizon radius $r$, becomes positive at large values of $r$, and attains its maximum value at $r_{c2}$. At $r=r_{c2}$ the slope of the free energy turns negative and the theory naturally exhibits the Hawking-Page phase transition. \begin{figure*} [h] \begin{tabular}{c c c c} \includegraphics[width=0.5\linewidth]{fr1.eps} \includegraphics[width=0.5\linewidth]{fr3.eps} \end{tabular} \caption{ Plots of the free energy $(F)$ vs the horizon radius $r$ of the black hole for different values of the magnetic charge $g$ with $\alpha=2,\,\beta=0.7$ and $m_g^2c^2=1$; the points mark the local minima and maxima. The dotted curve shows the Helmholtz free energy of the Schwarzschild massive black hole; the second plot is for $g=0.10$.} \label{sh20} \end{figure*} { Now we study the phase transition of the Bardeen massive black hole in the $T-S$ plane for a fixed value of the massive gravity parameter. The critical values can be obtained by solving \begin{equation} \left(\frac{\partial T}{\partial S}\right)_{m_{g}}=\left(\frac{\partial^2 T}{\partial S^2}\right)_{m_{g}}=0. \label{ts14} \end{equation} The temperature of the Bardeen massive black hole in terms of the entropy is written as \begin{eqnarray} T&=&\frac{1}{4\sqrt{\pi}(g^2\pi +S)}\Big(\frac{3m^2_g S^{3/2}(1+\alpha+\beta)}{\pi}+\nonumber\\&&~~\frac{cm^2_g\sqrt{\pi S}(1+2\alpha+3\beta)(\pi g^2-2S)}{\sqrt{\pi}}-\frac{1+c^2m^2_g(\alpha+3\beta)(S-2g^2\pi)}{S^{1/2}}\Big). \label{ts13} \end{eqnarray} Substituting Eq. (\ref{ts13}) into Eq. (\ref{ts14}), we find the critical points; the numerical results are presented in Table \ref{tr2}.
\begin{table}[ht] \begin{center} \begin{tabular}{ l | l | l | l l } \hline \hline \multicolumn{1}{c|}{ $m_g$} &\multicolumn{1}{c}{$S_c$} &\multicolumn{1}{|c|}{$g_c$} &\multicolumn{1}{c}{$T_c$} \\ \hline \,\,\,\,\,1~~ &~~0.768~~ & ~~0.156~~ & ~~0.017~~ \\ % \,\,\,\,\,2~~ &~~0.670~~ & ~~0.149~~ & ~~0.025~~ \\ % \,\,\,\,\,3~~ &~~0.651~~ & ~~0.148~~ & ~~0.072~~ \\ % \,\,\,\,\,4~~ &~~0.645~~ & ~~0.148~~ & ~~0.131~~ \\ \hline \hline \end{tabular} \caption{Critical temperature $T_c$, critical entropy $S_c$ and critical magnetic charge $g_c$ corresponding to different values of $m_g$, with fixed values $\alpha=2,\,\beta=0.7$.} \label{tr2} \end{center} \end{table} In order to obtain the phase transition of the black holes, we examine the free energy of the black hole to see the effect of the massive gravity parameter on the phase structure. \begin{figure*} [h] \begin{tabular}{c c c c} \includegraphics[width=0.65\linewidth]{ft1.eps} \end{tabular} \caption{ The plots of the free energy vs the temperature for $g<g_c$ with fixed values of $\alpha=2,\,\beta=0.7$. } \label{sh} \end{figure*} In the $F\--T$ plot we see that, for $g<g_c$, the small and large Bardeen massive black holes are stable but the intermediate black hole is unstable, since its heat capacity is negative (see Fig. \ref{sh1}). The appearance of the characteristic swallow tail in the $F\--T$ plot shows that the obtained values are the critical ones at which the phase transition takes place for $g<g_c$. It is worthwhile to mention that the critical values of the entropy and magnetic charge decrease, while the critical temperature increases, with the massive gravity parameter (see Table \ref{tr2}). } \subsection{Grand Canonical ensemble} Let us consider the Bardeen massive black hole in a grand canonical ensemble, where the black hole exchanges charge with the surroundings and the chemical potential $\phi_g$ is held fixed.
Its chemical potential $ \mu $ is specified as follows \begin{equation} \mu=\frac{(g^2+r^2)^{3/2}(3+3r\gamma+r^2\Lambda+3\zeta)}{6r^2} \end{equation} The temperature of the Bardeen massive black hole in terms of the chemical potential $\mu$ is written as \begin{eqnarray} T=\frac{1+2r\gamma+r^2\Lambda+\zeta+(2+r\gamma+2\zeta)\Big(1-\sqrt{1+\frac{16 \mu^2}{(3+3r\gamma+r^2\Lambda+3\zeta)^2}}\Big)}{4\pi r\Big(\sqrt{2}+\sqrt{1+\frac{16 \mu^2}{(3+3r\gamma+r^2\Lambda+3\zeta)^2}}\Big)} \end{eqnarray} \begin{figure*} [h] \begin{tabular}{c c c c} \includegraphics[width=0.75\linewidth]{t2.eps} \end{tabular} \caption{ Plot of the temperature $T$ as a function of horizon radius $r$ for different values of the chemical potential $\mu$. The dotted line shows the temperature of the Schwarzschild massive black hole. } \label{sh7} \end{figure*} In Fig. \ref{sh7} we show the variation of the temperature of the Bardeen massive black hole for different values of the chemical potential $\mu$. From Fig. \ref{sh7} it is obvious that the temperature decreases with increasing horizon radius $r$ and attains a minimum value for a fixed value of the chemical potential $\mu$. The minimum of the temperature occurs at $r=r_c$, where the heat capacity diverges (see Fig. \ref{sh8} (right)). We have chosen $\alpha=2,\,\, \beta =0.7$ as a simple choice of the parameters in this region and adopt the condition $m_g^2c^2=1$. In the grand canonical ensemble the corresponding free energy is the Gibbs free energy $G$, which is expressed as $G=M-T S-\mu g $. It can be seen from Fig. \ref{sh8} that the sign of the free energy is decided by the expression $ (1-\frac{\Lambda}{3}r^2+\zeta-\mu^2) $; that is, global thermodynamic stability exists when the following condition is satisfied. 
\begin{equation} \Lambda r^2\geq 3\left(1+\zeta-\mu^2\right) \end{equation} \begin{figure*} [h] \begin{tabular}{c c c c} \includegraphics[width=0.5\linewidth]{fr2.eps} \includegraphics[width=0.5\linewidth]{c2.eps} \end{tabular} \caption{ Plots of the Gibbs free energy and heat capacity vs horizon radius $r$ for different values of the chemical potential $\mu$ with $\alpha=2,\,\beta=0.7$ and $m_g^2c^2=1$. The dotted curve shows the free energy of the Bardeen black hole. } \label{sh8} \end{figure*} In addition, local thermodynamic stability can be verified by analysing the heat capacity $C$, which is shown in Fig. (\ref{sh8}). The condition of local thermodynamic stability is given by $\Lambda r^2>\left(1+\zeta-\mu^2\right)$. In the grand canonical ensemble both global and local stability depend on the parameters $ \Lambda$, $c_1 $ and also on $ \mu $, unlike the canonical case where the stability conditions depend only upon $ \Lambda $ and $ \zeta $. Once again both these parameters depend upon $ \alpha $ and $ \beta $. The Gibbs free energy as a function of temperature is plotted in Fig. \ref{sh9}. In this figure we notice that the Bardeen massive black hole is stable above the Hawking temperature, where the Gibbs free energy is negative. \begin{figure*} [h] \begin{tabular}{c c c c} \includegraphics[width=0.65\linewidth]{gt1.eps} \end{tabular} \caption{ The plots of the Gibbs free energy vs. temperature for $\mu<\mu_c$ with fixed values of $\alpha=2,\,\beta=0.7$ and $\mu_c=1.028$. } \label{sh9} \end{figure*} For further analysis of the stability of the black hole, Fig. \ref{sh8} shows that for a given value of the chemical potential $\mu$ the heat capacity is discontinuous exactly at the critical radius $r_c$. Further, we note that there is a flip of sign in the heat capacity around $r_c$. 
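The global and local stability thresholds just stated differ only by a factor of three. A minimal sketch encoding them (the parameter values in any call are illustrative, not fits to the black hole solution):

```python
# Hedged sketch of the two stability criteria for the grand canonical
# ensemble quoted in the text.
def globally_stable(Lam, r, zeta, mu):
    """Global stability: Lambda r^2 >= 3 (1 + zeta - mu^2)."""
    return Lam * r ** 2 >= 3.0 * (1.0 + zeta - mu ** 2)

def locally_stable(Lam, r, zeta, mu):
    """Local stability: Lambda r^2 > (1 + zeta - mu^2)."""
    return Lam * r ** 2 > 1.0 + zeta - mu ** 2

# Since the global threshold is three times the local one, for
# 1 + zeta - mu^2 > 0 there is a window of radii that are locally
# but not globally stable.
```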
Thus the Bardeen massive black hole is thermodynamically stable for $r > r_c$, whereas it is thermodynamically unstable for $r<r_c$, and there is a phase transition at $r=r_c$ from the unstable to the stable phase. \section{Conclusion} The dRGT massive gravity adds non-linear interaction terms as a correction to the Einstein-Hilbert action and reduces to general relativity in a particular limit. The dRGT massive gravity has received significant attention, including searches for black hole solutions. We note that because of the inclusion of the massive gravity term in the action, the Bardeen black hole solution is modified and the corresponding thermodynamic quantities are also changed. The temperature decreases with increasing horizon radius, attains a minimum value, and becomes negative at larger values of the magnetic charge. The temperature of the Bardeen massive black hole coincides with that of the Schwarzschild massive black hole at $r=1.1$ for the canonical ensemble and $r=1.4$ for the grand canonical ensemble. However, the black hole entropy does not obey the area law; a new quantity $C_M$ is defined, which is required for the consistency of the first law of thermodynamics with the area law. We studied the black hole thermodynamics in both canonical and grand canonical ensembles to analyse the thermodynamic quantities, including the phase transition. The stability of the black hole has been studied by observing the behaviour of the heat capacity and the free energy. The slope of the free energy becomes negative after the heat capacity diverges, and the Hawking-Page phase transition occurs naturally in both ensembles. In this study we constructed the exact solution of Bardeen black holes in the presence of dRGT massive gravity, which reduces to the Bardeen black hole when $m_g=0$ and to the $AdS$ massive black hole in the absence of charge. 
The resulting black hole solution is characterized by analysing its horizons, of which there are at most three: the inner, outer and cosmological horizons. We have also analysed the thermodynamic quantities, such as the black hole mass, Hawking temperature, entropy and free energy at the event horizon, in terms of the magnetic charge $g$ and the massive gravity parameter $m_g$ in both canonical and grand canonical ensembles. The thermodynamics of the black hole is modified due to the presence of the non-linear source. The local and global stability of the black hole for the case of the grand canonical ensemble have been studied by investigating the heat capacity and Gibbs free energy. The heat capacity flips sign at $r=r_c$, where the temperature is minimum. The positive heat capacity $C>0$ for $r>r_c$ allows the black hole to become thermodynamically stable, and such black holes are globally preferred, with negative free energy. We further analysed the stability in the canonical ensemble by studying the behaviour of the Helmholtz free energy. The free energy has a local minimum and a local maximum corresponding to the horizon radii where the specific heat diverges (see Fig. 4b), and these points can be identified as the extremal points of the Hawking temperature (see Fig.\ref{th0}). However, at very small horizon radius the Hawking temperature is negative and hence not physical for global stability. This is in accordance with the Hawking-Page phase transition in general relativity \cite{li}. \section*{Acknowledgement} The authors would like to thank Dr. Dharm Veer Singh for useful discussions.
\section{Proof of Theorem \ref{main}} Using \cite{L1} and \cite{L2}, we determined in \cite{BO2} the reflective index $i_r(M)$ of all irreducible Riemannian symmetric spaces $M$ of noncompact type and the reflective submanifolds $\Sigma$ in $M$ for which $i_r(M) = \mbox{codim}(\Sigma)$. Using duality between Riemannian symmetric spaces of noncompact type and of compact type, we obtain Table \ref{reflindex} for the reflective index $i_r(G)$ of all simply connected, compact simple Lie groups and the reflective submanifolds $\Sigma$ in $G$ for which $i_r(G) = \mbox{codim}(\Sigma)$. \begin{table}[h] \caption{The reflective index $i_r(G)$ of simply connected, compact simple Lie groups} \label{reflindex} {\footnotesize\begin{tabular}{ | p{2cm} p{3.5cm} p{2cm} p{2cm} p{2cm} |} \hline \rule{0pt}{4mm} \hspace{-1mm}$G$ & $\Sigma$ & $\dim(G)$ & $i_r(G)$ & Comments \\[1mm] \hline \rule{0pt}{4mm} \hspace{-2mm} $SU_2$ & $SU_2/S(U_1U_1)$ & $3$ & $1$ & \\ $SU_3$ & $SU_3/SO_3$ & $8$ & $3$ & \\ $SU_{r+1}$ & $S(U_rU_1)$ & $r(r+2)$ & $2r$ & $r \geq 4$ \\ $Spin_5$ & $Spin_4$, $SO_5/SO_2SO_3$ & $10$ & $4$ & \\ $Spin_{2r+1}$ & $Spin_{2r}$ & $r(2r+1)$ & $2r$ & $r \geq 3$ \\ $Sp_r$ & $Sp_{r-1}Sp_1$ & $r(2r+1)$ & $4r-4$ & $r \geq 3$\\ $Spin_{2r}$ & $Spin_{2r-1}$ & $r(2r-1)$ & $2r-1$ & $r \geq 3$ \\ $E_6$ & $F_4$ & $78$ & $26$ &\\ $E_7$ & $E_6U_1$ & $133$ & $54$ & \\ $E_8$ & $E_7Sp_1$ & $248$ & $112$ & \\ $F_4$ & $Spin_9$ & $52$ & $16$& \\ $G_2$ & $G_2/SO_4$ & $14$ & $6$& \\[1mm] \hline \end{tabular}} \end{table} Note that Table \ref{reflindex} leads to Table \ref{Liegroup} when replacing $i_r(G)$ with $i(G)$ and adding $\Sigma = SU_3$ in the row for $G_2$. The two problems we thus need to solve for each $G$ are: \begin{itemize} \item[(1)] prove that there exists no non-reflective totally geodesic submanifold $\Sigma$ in $G$ with $\mbox{codim}(\Sigma) < i_r(G)$; \item[(2)] determine all non-reflective submanifolds $\Sigma$ in $G$ with $\mbox{codim}(\Sigma) = i_r(G)$. 
\end{itemize} The following result is a crucial step towards the solution of the two problems: \begin{thm}[Ikawa, Tasaki \cite{IT}] \label{IkawaTasaki} A necessary and sufficient condition that a totally geodesic submanifold $\Sigma$ in a compact connected simple Lie group is maximal is that $\Sigma$ is a Cartan embedding or a maximal Lie subgroup. \end{thm} The Cartan embeddings are defined as follows. Let $G/K$ be a Riemannian symmetric space of compact type and $\sigma \in \mbox{Aut}(G)$ be an involutive automorphism of $G$ such that $\mbox{Fix}(\sigma)^o \subset K \subset \mbox{Fix}(\sigma)$, where \[ \mbox{Fix}(\sigma) = \{g \in G : \sigma(g) = g\} \] and $\mbox{Fix}(\sigma)^o$ is the identity component of $\mbox{Fix}(\sigma)$. By definition, the automorphism $\sigma$ fixes all points in $K$ and the identity component $K^o$ of $K$ coincides with $\mbox{Fix}(\sigma)^o$. The Cartan map of $G/K$ into $G$ is the smooth map \[ f : G/K \to G\ ,\ gK \mapsto \sigma(g)g^{-1}. \] The Cartan map $f$ is a covering map onto its image $\Sigma = f(G/K)$. Let $\theta \in \mbox{Aut}(G)$ be the involutive automorphism on $G$ defined by inversion, that is, \[ \theta : G \to G\ ,\ g \mapsto g^{-1}. \] We now define a third involutive automorphism $\rho \in \mbox{Aut}(G)$ by $\rho = \theta \circ \sigma$. By definition, we have \[ \rho(g) = \theta(\sigma(g)) = \sigma(g)^{-1} = \sigma(g^{-1}) \] for all $g \in G$. Moreover, for all $g \in G$ we have \begin{align*} \rho(f(gK)) & = \rho(\sigma(g)g^{-1}) = \sigma((\sigma(g)g^{-1})^{-1}) = \sigma(g\sigma(g)^{-1}) = \sigma(g\sigma(g^{-1})) \\ & = \sigma(g)\sigma^2(g^{-1}) = \sigma(g)g^{-1} = f(gK). \end{align*} Thus the automorphism $\rho$ fixes all points in $\Sigma$. The automorphisms $\sigma,\theta,\rho \in \mbox{Aut}(G)$ are involutive isometries of $G$, where $G$ is considered as a Riemannian symmetric space with a bi-invariant Riemannian metric. 
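Since what follows uses that $\rho$ is involutive, it may be worth recording the one-line check, which only uses that $\sigma$ is an involutive automorphism:

```latex
% Direct check that \rho = \theta \circ \sigma is involutive:
\begin{align*}
\rho(\rho(g)) = \sigma\bigl(\rho(g)^{-1}\bigr)
             = \sigma\bigl(\sigma(g^{-1})^{-1}\bigr)
             = \sigma(\sigma(g))
             = g ,
\end{align*}
% using \sigma(g^{-1})^{-1} = \sigma(g) (since \sigma is an
% automorphism) and \sigma^2 = \mathrm{id}.
```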
Geometrically, $\theta$ is the geodesic symmetry of $G$ at the identity $e \in G$ and its differential at $e$ is \[ d_e\theta : T_eG \to T_eG\ ,\ X \mapsto -X. \] The differential of $\sigma$ at $e$ is \[ d_e\sigma : T_eG \to T_eG\ ,\ X \mapsto \begin{cases} X & \mbox{if } X \in T_eK, \\ -X & \mbox{if } X \in \nu_eK, \end{cases} \] where $\nu_eK$ denotes the normal space of $K$ at $e$. This shows that $\sigma$ is the geodesic reflection of $G$ in the identity component $K^o$ of $K$. In particular, $K^o$ (and hence also $K$) is a totally geodesic submanifold of $G$. Since $\rho = \theta \circ \sigma$, the differential of $\rho$ at $e$ is \[ d_e\rho : T_eG \to T_eG\ ,\ X \mapsto \begin{cases} X & \mbox{if } X \in \nu_eK, \\ -X & \mbox{if } X \in T_eK, \end{cases} \] It follows that there exists a connected, complete, totally geodesic submanifold $N$ of $G$ with $e \in N$ and $T_eN = \nu_eK$. We saw above that $\rho$ fixes all points in $\Sigma$, which implies $\Sigma \subset N$ since $\Sigma$ is connected. Moreover, since $\dim(\Sigma) = \dim(G) - \dim(K) = \mbox{codim}(K) = \dim(N)$ and $\Sigma$ is complete we get $\Sigma = N$. It follows that $\Sigma$ is a totally geodesic submanifold of $G$. In fact, we have proved that both $K^o$ and $\Sigma$ are reflective submanifolds of $G$ which are perpendicular to each other at $e$. In view of Theorem \ref{IkawaTasaki} it therefore remains to investigate the maximal Lie subgroups of $G$. The connected maximal Lie subgroups of compact simple Lie groups are well known from classical theory. Due to connectedness we can equivalently consider maximal subalgebras of compact simple Lie algebras. In Table \ref{maxsubalgebra} we list the maximal subalgebras of minimal codimension in compact simple Lie algebras (see e.g.\ \cite{Ma}). 
\begin{table}[h] \caption{Maximal subalgebras ${\mathfrak{h}}$ of minimal codimension $d({\mathfrak{g}})$ in compact simple Lie algebras ${\mathfrak{g}}$} \label{maxsubalgebra} {\footnotesize\begin{tabular}{ | p{2cm} p{2cm} p{2cm} |} \hline \rule{0pt}{4mm} \hspace{-1mm}${\mathfrak{g}}$ & ${\mathfrak{h}}$ & $d({\mathfrak{g}})$ \\[1mm] \hline \rule{0pt}{4mm} \hspace{-2mm} ${\mathfrak{s}}{\mathfrak{u}}_{r+1}$ & ${\mathfrak{s}}{\mathfrak{u}}_r \oplus \mathbb{R}$ & $2r$ \\ ${\mathfrak{s}}{\mathfrak{o}}_{2r+1}$ & ${\mathfrak{s}}{\mathfrak{o}}_{2r}$ & $2r$ \\ ${\mathfrak{s}}{\mathfrak{p}}_r$ & ${\mathfrak{s}}{\mathfrak{p}}_{r-1} \oplus {\mathfrak{s}}{\mathfrak{p}}_1$ & $4r-4$ \\ ${\mathfrak{s}}{\mathfrak{o}}_{2r}$ & ${\mathfrak{s}}{\mathfrak{o}}_{2r-1}$ & $2r-1$ \\ ${\mathfrak{e}}_6$ & ${\mathfrak{f}}_4$ & $26$ \\ ${\mathfrak{e}}_7$ & ${\mathfrak{e}}_6 \oplus \mathbb{R}$ & $54$ \\ ${\mathfrak{e}}_8$ & ${\mathfrak{e}}_7 \oplus {\mathfrak{s}}{\mathfrak{p}}_1$ & $112$ \\ ${\mathfrak{f}}_4$ & ${\mathfrak{s}}{\mathfrak{o}}_9$ & $16$ \\ ${\mathfrak{g}}_2$ & ${\mathfrak{s}}{\mathfrak{u}}_3$ & $6$ \\[1mm] \hline \end{tabular}} \end{table} We can now finish the proof of Theorem \ref{main}. From Tables \ref{reflindex} and \ref{maxsubalgebra} we get $i_r(G) \leq d({\mathfrak{g}})$. Theorem \ref{IkawaTasaki} then implies $i(G) = i_r(G)$. Using Table \ref{reflindex} we obtain the column for $i(G)$ in Table \ref{Liegroup}. To find all $\Sigma$ in $G$ with $\mbox{codim}(\Sigma) = i(G)$ we first note that $i(G) < d({\mathfrak{g}})$ if and only if $G \in \{SU_2,SU_3\}$. In this case $\Sigma$ must be a Cartan embedding and hence a reflective submanifold. From Table \ref{reflindex} we obtain that $\Sigma = SU_2/S(U_1U_1)$ if $G = SU_2$ and $\Sigma = SU_3/SO_3$ if $G = SU_3$. Now assume that $i(G) = d({\mathfrak{g}})$. 
Then $\Sigma$ is either a Cartan embedding (and then $\Sigma$ is as in Table \ref{reflindex}) or a maximal connected subgroup $H$ of $G$ for which ${\mathfrak{h}}$ has minimal codimension $d({\mathfrak{g}})$ (and then ${\mathfrak{h}}$ is as in Table \ref{maxsubalgebra}). By inspection we see that such $H$ is reflective unless $G = G_2$, in which case we get the non-reflective totally geodesic submanifold $SU_3$ of $G_2$ satisfying $\mbox{codim}(SU_3) = 6 = i(G_2)$. This finishes the proof of Theorem \ref{main}. Regarding our conjecture $i(M) = i_r(M)$ if and only if $M \neq G_2^2/SO_4$, we list in Table \ref{summary} the irreducible Riemannian symmetric spaces of noncompact type for which the conjecture remains open. \begin{table}[h] \caption{The reflective index $i_r(M)$ for irreducible Riemannian symmetric spaces $M$ of noncompact type for which the conjecture $i(M) = i_r(M)$ is still open and reflective submanifolds $\Sigma$ of $M$ with $\mbox{codim}(\Sigma) = i_r(M)$} \label{summary} {\footnotesize\begin{tabular}{ | p{2.9cm} p{3.7cm} p{1.5cm} p{0.8cm} p{2cm} |} \hline \rule{0pt}{4mm} \hspace{-1mm}$M$ & $\Sigma$ & $\dim M$ & $i_r(M)$ & Comments \\[1mm] \hline \rule{0pt}{4mm} \hspace{-2mm} $SU^*_{2r+2}/Sp_{r+1}$ & $\mathbb{R} \times SU^*_{2r}/Sp_r$ & $r(2r+3)$ & $4r$ & $r \geq 3$ \\[1mm] \hline \rule{0pt}{4mm} \hspace{-2mm} $Sp_r(\mathbb{R})/U_r$ & $\mathbb{R} H^2 \times Sp_{r-1}(\mathbb{R})/U_{r-1}$ & $r(r+1)$ & $2r-2$ & $r \geq 6$\\ $SO^*_{4r}/U_{2r}$ & $SO^*_{4r-2}/U_{2r-1}$ & $2r(2r-1)$ & $4r-2$ & $r \geq 3$ \\ $Sp_{r,r}/Sp_rSp_r$ & $Sp_{r-1,r}/Sp_{r-1}Sp_r$ & $4r^2$ & $4r$ & $r \geq 3$ \\ $E_7^{-25}/E_6U_1$ & $E_6^{-14}/Spin_{10}U_1$ & $54$ & $22$ & \\[1mm] \hline \rule{0pt}{4mm} \hspace{-2mm} $Sp_{r,r+k}/Sp_rSp_{r+k}$ & $Sp_{r,r+k-1}/Sp_rSp_{r+k-1}$ & $4r(r+k)$ & $4r$ & $r \geq 3, k \geq 1$, $r > k+1$ \\ $SO^*_{4r+2}/U_{2r+1}$ &$SO^*_{4r}/U_{2r}$ & $2r(2r+1)$ & $4r$ & $r \geq 3$ \\[1mm] \hline \rule{0pt}{4mm} \hspace{-2mm} $E_6^6/Sp_4$ & $F_4^4/Sp_3Sp_1$ & 
$42$ & $14$ & \\[1mm] \hline \rule{0pt}{4mm} \hspace{-2mm} $E_7^7/SU_8$ & $\mathbb{R} \times E^6_6/Sp_4$ & $70$ & $27$ & \\[1mm] \hline \rule{0pt}{4mm} \hspace{-2mm} $E_8^8/SO_{16}$ & $\mathbb{R} H^2 \times E_7^7/SU_8$ & $128$ & $56$ & \\[1mm] \hline \rule{0pt}{4mm} \hspace{-2mm} $E_6^2/SU_6Sp_1$ & $F_4^4/Sp_3Sp_1$ & $40$ & $12$ & \\ $E_7^{-5}/SO_{12}Sp_1$ & $E_6^2/SU_6Sp_1$ & $64$ & $24$ & \\ $E_8^{-24}/E_7Sp_1$ & $E_7^{-5}/SO_{12}Sp_1$ & $112$ & $48$ & \\[1mm] \hline \end{tabular}} \end{table}
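The inequality $i_r(G) \leq d({\mathfrak{g}})$ obtained by comparing Tables \ref{reflindex} and \ref{maxsubalgebra} can be verified mechanically. A hedged sketch (the values are transcribed from the two tables; the encoding is ours and purely illustrative, not part of the proof):

```python
# Hedged cross-check of i_r(G) <= d(g), with values read off from the
# tables of reflective indices and of maximal-subalgebra codimensions.
exceptional = {             # G: (i_r(G), d(g))
    "E6": (26, 26), "E7": (54, 54), "E8": (112, 112),
    "F4": (16, 16), "G2": (6, 6),
}
classical = {               # G: (i_r, d) as functions of the rank r
    "SU_{r+1}":    (lambda r: 2 * r,     lambda r: 2 * r,     range(4, 12)),
    "Spin_{2r+1}": (lambda r: 2 * r,     lambda r: 2 * r,     range(3, 12)),
    "Sp_r":        (lambda r: 4 * r - 4, lambda r: 4 * r - 4, range(3, 12)),
    "Spin_{2r}":   (lambda r: 2 * r - 1, lambda r: 2 * r - 1, range(3, 12)),
}
low_rank = {"SU_2": (1, 2), "SU_3": (3, 4)}  # the only strictly smaller cases

def strict_cases():
    """Groups with i_r(G) strictly below d(g)."""
    pooled = {**exceptional, **low_rank}
    return sorted(G for G, (i_r, d) in pooled.items() if i_r < d)
```

Consistent with the argument above, the strict inequality occurs exactly for $SU_2$ and $SU_3$; for all other groups $i_r(G) = d({\mathfrak{g}})$.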
\section{\label{sec:level1}Introduction\\} The interface of tensile strained germanium ($\varepsilon$-Ge) grown on III-V substrates is currently being considered as the working tunnel-barrier in the channel of future high-performance and low-power consumption tunnel field-effect transistors (TFETs).\cite{Hudait2014,Clavel2015,Nguyen2015} These devices take advantage of band-to-band tunneling of charge carriers between the source and drain, and as a result, can overcome the limit for the subthreshold slope of thermionic devices,\cite{Ionescu11} thereby simultaneously improving the transistor switching speed (performance) and I$_\text{ON}$/I$_\text{OFF}$ current ratio (power efficiency). Concurrently, there is a large research effort dedicated to the integration of optical interconnects on a CMOS compatible platform,\cite{Miller09,Assefa10} allowing for highly efficient ultrafast inter- and intra-chip data communication. The latter requires efficient on-chip light sources, and $\varepsilon$-Ge grown on III-V substrates\cite{Pavarelli13} is being investigated for this purpose due to the tensile strain induced direct band gap of Ge. The operation of transistor devices depends crucially on the junctions at the border between device materials, and this dependence only becomes stronger as device dimensions continue to shrink.\cite{delAlamo11,Ferain11} From the perspective of optical devices, where electron-hole recombination is required in the active region for light emission, material interfaces also play a dominant role in the device operation\cite{Pavarelli13} by determining the barrier height for electron and hole confinement. 
Motivated by the technological importance, significant progress has been made in recent decades towards the understanding of solid-state interfaces and the resulting line up of energy bands between materials forming the interface.\cite{Peressi98,Margaritondo12,Agostini13,Brillson16} In terms of \textit{ab-initio} calculations of band alignments, the lattice-matched isovalent interfaces\cite{Vandewalle87,Vandenberg88,Christensen88,Peressi90,Hybertsen90,Hybertsen91} represent the simplest and most well studied category. Density functional theory (DFT) is typically employed, and idealized, atomically abrupt interfacial structures\cite{Harrison78,Peressi98} are often used. These theoretical works have shown that band alignments for these interfaces are predominantly derived from bulk properties of the adjoining materials.\cite{Vandewalle87,Christensen88} Hence, for isovalent interfaces, the interfacial structure does not have a significant effect. Band offsets (BOs) across pseudomorphic heterostructures exhibiting heterovalent bonding across the interface have also been studied, both experimentally\cite{Waldrop79,Kraut80,Biasol92,Dahmen93,Volodin14,Pavarelli13} and computationally.\cite{Martin80,Kunc81,Peressi91,Biasol92,Franciosi93,Peressi98,Pavarelli13} As for the isovalent interfaces, a large portion of the computational (atomistic modeling) studies also involve ideal, abrupt interfaces, although some focus has been given to atomic intermixing/diffusion across the interface.\cite{Peressi98} Unlike isovalent junctions, the interfacial structure can have a significant effect on the band offsets of heterovalent interfaces, to the point of inducing \textit{qualitative} modifications to the offsets, e.g. type-I to type-II or vice versa, in interfaces such as $\varepsilon$-Ge/In$_{0.3}$Ga$_{0.7}$As(001),\cite{Pavarelli13} and also for Ge/In$_{x}$Al$_{1-x}$As(001) in certain cases (see results section). 
In this paper, the sensitivity of BOs to interface structure in the lattice (mis)matched heterovalent ($\varepsilon$-)Ge/In$_{x}$Al$_{1-x}$As(001) interface is explained by a linear response electrostatic effect\cite{Resta89,Peressi98} which occurs as a result of changes in the position of polarized bonds (IV-III, or IV-V) relative to the abrupt interface. Local changes in the electrostatic potential step across the junction result from the local variations in valence charge density and the latter are in turn induced by variations in the stoichiometry of the interfacial region. Hence, this work extends previous theories of BO-interface structure relations\cite{Resta89,Peressi98} to the technologically important interface ($\varepsilon$-)Ge/In$_{x}$Al$_{1-x}$As(001). By explaining the qualitative changes in the band alignments that can be achieved for the same material interface by only changing the interface structure, this work also contributes to the understanding of how devices can be tailored by the interface. Lattice mismatching across the interface can also affect the band alignment. When a thin-film is grown pseudomorphically on a substrate with a different lattice constant the thin-film exhibits elastic strain so that it can match the lattice constant of the substrate, below a critical thickness such that stress is not large enough to cause plastic relaxation via e.g. dislocation formation.\cite{ChasonGuduru16} Epitaxial strain can be used to induce a direct bandgap in Ge, useful for silicon-compatible photonics.\cite{LiangBowers10} For strained, heterovalent interfaces such as ($\varepsilon$-)Ge/In$_x$Al$_{1-x}$As the dependence of the band alignment on tensile strain $\varepsilon$, which is varied by the substrate stoichiometry $x$, can be significant due to the reordering of conduction band valleys. 
Here, we study the band alignment over a range of cation stoichiometry $x$ and show that when combined with modifications of the interface structure (modifications which represent diffusion of group-III atoms into the Ge layer), transitions between type-I and type-II band alignments can be achieved in this interface. In this work, the variations of valence (VBO) and conduction (CBO) band offsets between Ge and In$_{x}$Al$_{1-x}$As, with respect to interfacial configuration, are investigated using first-principles atomistic simulations---which are detailed in the next section. In Sec.~\ref{results}, we consider a range of systematic structural modifications of the interface. Specifically, we consider (a) the Ge, As, and group-III stoichiometric balance of the mixed interfacial region for fixed substrate stoichiometries, (b) group-III composition of the In$_{x}$Al$_{1-x}$As substrate (for $x$ = 0.0 to 0.25) for fixed interfacial stoichiometries (Sec.~\ref{sec-abrupt-int}), (c) interdiffusion of species across the junction (Sec.~\ref{sec-intdiff}). For interdiffusion, we investigate the relative stability of diffused atoms in either material (Sec.~\ref{sec-Eform}). Finally, in Sec.~\ref{sec-analysis}, we rationalize the results by simple arguments and models based on the linear response electrostatic effect. Based on the results of the simulations and on the linear response analysis, we conclude (Sec.~\ref{conclusions}) that our simulations provide a picture consistent with existing experimental results. We predict that both type-I and type-II band offsets should be observable for this interface depending on the details of the interface structure. 
\section{Computational Methods} Optimized geometries of bulk and interface models are calculated using DFT within the local density approximation (LDA),\cite{PZ1981,PW1992} along with a plane wave basis set and norm-conserving pseudopotentials\cite{TM1991}, as implemented in the Quantum Espresso software suite.\cite{QE2009} A non-linear core correction is added to the In pseudopotential to treat the core-valence interaction.\cite{LouieFroyenCohen1982} A kinetic energy cutoff of 50 Ry is used for the plane wave basis set. Numerically converged Monkhorst-Pack\cite{MonkhorstPack1976} $k$-point grids are used for all supercells in this work. The macroscopic average along the $z$ axis (aligned to the (001) direction) of the planar average (parallel to the interfacial plane) of the self-consistent potential\cite{Baldereschi88} [$V$$^{m}$$(z)$] for bulk and interface cells is calculated within DFT.\cite{Giantomassi11} Interface models consist of 24 atomic layers oriented along the (001) direction; 11 monolayers for Ge, 11 monolayers for In$_{x}$Al$_{1-x}$As, and at least 1 mixed monolayer per periodic image of the supercell. Parallel to the interface, interface supercells have dimensions of ($2\times1$) in units of the (110) lattice parameter. The virtual crystal approximation (VCA) is used to approximate the In$_{x}$Al$_{1-x}$As cation alloy for each composition point. Bulk cells are used to calculate the bulk band edges relative to the respective $V$$^{m}$$(z)$ for each material, and interface cells are used to calculate the potential offset ($dV$, see below) between the slabs. All band offsets correspond to fully relaxed geometries for bulk In$_{x}$Al$_{1-x}$As and interface cells, while for Ge the bulk cells are biaxially strained along the (100) and (010) directions and allowed to relax along (001). 
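The macroscopic-averaging step applied to the planar-averaged potential can be sketched as follows. The one-period square window and the synthetic test profile below are illustrative stand-ins for the self-consistent DFT output; at a heterojunction a double convolution over the two material periods is used in practice.

```python
import numpy as np

# Hedged sketch of the macroscopic average V^m(z): the planar-averaged
# potential V(z) is convolved along z with a square window spanning one
# lattice period, which filters out the lattice-periodic oscillations.
def macroscopic_average(v_z, period_pts):
    """Sliding-window average of a planar-averaged potential over one
    period (period_pts grid points), with periodic boundary conditions."""
    window = np.ones(period_pts) / period_pts
    n = len(v_z)
    padded = np.concatenate([v_z, v_z, v_z])  # enforce periodicity
    out = np.convolve(padded, window, mode="same")
    return out[n:2 * n]                        # central (unpadded) copy

# Synthetic profile: a lattice-periodic oscillation on top of a constant
# offset; the macroscopic average should recover the offset alone.
z = np.linspace(0.0, 10.0, 1000, endpoint=False)  # 10 periods of L = 1.0
v = -3.0 + 0.5 * np.sin(2 * np.pi * z / 1.0)      # period L -> 100 points
v_m = macroscopic_average(v, 100)
```

In the interface cells, $dV$ is then read off as the difference between the flat plateaus of $V^{m}(z)$ on the two sides of the junction.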
Thus, the bulk cells represent biaxially tensile strained Ge grown on an In$_{x}$Al$_{1-x}$As substrate (with AlAs lattice matched to Ge) and the interface models represent the minimum energy bonding configuration between the slabs. In order to investigate the relative stability of diffused impurities in either Ge or AlAs, large cubic bulk cells with dimensions (3$\times$3$\times$3) in units of the (100) lattice parameter are used to calculate the formation energies\cite{VandewalNeugebauer04,Freysholdt14} of substitutional impurities in bulk Ge and bulk AlAs. \footnote{We do not consider In$_{x}$Al$_{1-x}$As with $x$ $>$ 0 for this purpose due to the inaccurate bond lengths resulting from the VCA approach which would lead to inaccurate impurity-host bonding energies.} These correspond to impurity defects present after growth as a result of diffusion of substrate species during growth of the material on the substrate. Thus, we consider a single Al (As) on a Ge site (Al(As)$_\text{Ge}$) in a 216 atom bulk Ge supercell, and a single Ge on an Al (As) site (Ge$_\text{Al(As)}$) in a 216 atom bulk AlAs cell. The formation energies are calculated as a function of the chemical potential $\mu$$_\alpha$ of each exchanged atom $\alpha$\cite{VandewalNeugebauer04,Freysholdt14} which are related to the bulk elemental phases to establish boundaries on $\mu$$_\alpha$. Thus, the formation energies can be calculated for As-rich and Al-rich conditions, where the range of variation of As and Al chemical potentials corresponds to the heat of formation of AlAs\cite{ZhangNorthrup91}. Thermodynamically stable configurations correspond to charge neutral interfacial bonding configurations between group-IV and group-III/V atoms\cite{Peressi98,Martin80,Kunc81,BylanderKleinman90} with no electric field building up across either material. 
In supercell simulations, the lack of a slab dipole is ensured when N$_{IV-V}$ = N$_{IV-III}$, where N$_{IV-V(III)}$ is the number of Ge-V(Ge-III) bonds per simulation cell. This constraint is imposed in all simulations in this work. Clearly, accurate band offsets require accurate calculations of the bulk band structures, which is precluded in DFT due to the well-known band gap problem.\cite{PerdewLevy82,ShamSchluter85,GodbySchluterSham88} The DFT+$GW$ approach corrects the energy levels using the $GW$ approximation to the electron self-energy\cite{Hedin1965,AryasetGunnars97,HybLouie86} providing sufficiently accurate bulk band structures for evaluating band alignments at semiconductor/oxide interfaces.\cite{Myrta10,Giantomassi11} In this work, differences of 0.17 eV between the VBO calculated with and without the $GW$ correction are found for the lattice matched Ge/AlAs(001) and for lattice mismatched $\varepsilon$-Ge/In$_{x}$Al$_{1-x}$As(001). For these reasons, all VBOs and CBOs calculated in this work are obtained using the DFT+$GW$ approach. 
This yields a first-order approximation to the quasiparticle band gaps from which band offsets are derived.\cite{Myrta10,Giantomassi11} Valence and conduction band offsets are computed using \begin{equation} \label{eq.1} \Delta E_V = E_{V,Ge} - E_{V,III-V} + \Delta(\delta E_V) + dV \ \end{equation} \begin{equation} \label{eq.2} \Delta E_C = E_{C,III-V} - E_{C,Ge} - \Delta(\delta E_C) - dV \ \end{equation} where $E_{V,Ge}$ ($E_{V,III-V}$) is the valence band maximum of Ge (In$_{x}$Al$_{1-x}$As) relative to $V^{m}$$(z)$ of the bulk cells, $E_{C,Ge}$ ($E_{C,III-V}$) is the DFT conduction band minimum of Ge (In$_{x}$Al$_{1-x}$As) relative to $E_{V,Ge}$ ($E_{V,III-V}$), $\delta$$E_V$ ($\delta$$E_C$) is the \textit{GW} correction to the valence band maximum (conduction band minimum) and $\Delta$($\delta$$E_{V/C}$) represents the difference between the materials in the \textit{GW} correction for the valence/conduction band edge (V/CBE). $dV$ is the offset in $V$$^{m}$$(z)$ across the interface. To obtain $dV$, the entire self-consistent potential was taken, however the change in $V^{m}$$(z)$ across the interface does not significantly involve the exchange-correlation potential, which is flat throughout the interface cell. For unstrained Ge, the conduction band minimum resides at the L point, while for sufficient biaxial strain $\varepsilon$$_{Ge}$ Ge exhibits a direct minimum gap. After relaxing the lattice constants of AlAs and InAs, and assuming a linear variation of the In$_{x}$Al$_{1-x}$As lattice constant with $x$, the corresponding change in cell parameters is applied to the relaxed Ge cell, resulting in $\varepsilon$$_{Ge}$ = 1.76\% when In content $x$ = 0.25. As discussed in the results section, the varying In content affects the ordering of the satellite valleys of both Ge and In, which has important implications for the band offsets. We note that no spin-orbit coupling (SOC) is included in these calculations. 
The effect of SOC is to split the heavy hole $p$ states (with $J$ = 1/2 angular momentum) near the top of the valence band. This splitting is 0.30 eV in Ge, and 0.275 eV in AlAs\cite{madelung2004}. As this difference of 0.025 eV is relatively small, we expect a correspondingly small effect on our calculated band offsets, which in this work always correspond to the offset between band extrema. Hence, we consider the gain in accuracy to be insufficient to justify the increased computational load of including relativistic terms, and we omit SOC. \begin{figure}[t] \includegraphics[width=1.0\columnwidth]{interface_BOlines} \caption{\label{fig:interface_BOlines} Atomic structure of the ideal, ordered, As-terminated Ge / In$_{x}$Al$_{1-x}$As interface. Black lines correspond to a schematic representation of the valence and conduction BOs (with and without $GW$ corrections) across the interface. Ge atoms are purple, As atoms are green, and In/Al atoms are blue.} \end{figure} \section{Results} \label{results} \subsection{Abrupt ordered interfaces} \label{sec-abrupt-int} In this section, we focus on the interface that is atomically abrupt and localized to a single mixed monolayer which resides precisely between the slabs forming the heterojunction. The mixed interfacial monolayer (MIML) consists of either Ge and As atoms, or of Ge and In/Al atoms. This is exemplified by the ordered interface shown in Fig.~\ref{fig:interface_BOlines}. No interdiffusion is considered at this stage. Considering this as a fixed interfacial configuration, the In$_{x}$Al$_{1-x}$ stoichiometry of the substrate is varied from $x$ = 0 to $x$ = 0.25 and the valence and conduction band offsets are tracked in steps of $\Delta$$x$ = 0.05. Varying the In content affects the lattice constant of the substrate which in turn changes the strain state of the Ge slab. 
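The dependence of the Ge strain on In content follows from the linear (Vegard) interpolation of the substrate lattice constant described in the Methods section. A minimal sketch is given below; room-temperature experimental lattice constants are assumed here, so the result differs slightly from the relaxed-LDA value $\varepsilon_{Ge} = 1.76\%$ at $x = 0.25$ quoted earlier.

```python
# Hedged sketch of the biaxial strain in a Ge film pseudomorphic on an
# In_xAl_{1-x}As substrate, assuming Vegard's law for the alloy lattice
# constant.  Experimental lattice constants in Angstrom (the paper itself
# uses relaxed LDA values, which differ slightly).
A_GE, A_ALAS, A_INAS = 5.658, 5.661, 6.058

def eps_ge(x):
    """In-plane strain of Ge grown pseudomorphically on In_xAl_{1-x}As."""
    a_sub = (1.0 - x) * A_ALAS + x * A_INAS   # Vegard's law
    return (a_sub - A_GE) / A_GE

print(f"eps_Ge(x=0.25) = {eps_ge(0.25):.2%}")  # prints eps_Ge(x=0.25) = 1.81%
```

At $x = 0$ the strain is nearly zero (AlAs is almost lattice matched to Ge), and it increases monotonically with In content.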
As $\varepsilon_{Ge}$ increases beyond 1.5\%, the conduction band satellite valleys are reordered in energy and Ge becomes a direct gap material. An analogous statement can be made for the conduction band valleys of In$_x$Al$_{1-x}$As. Our calculations show that for $x$ $\leq$ 0.20, the minimum energy valley in In$_x$Al$_{1-x}$As resides at the X point, while for larger proportions of In, In$_x$Al$_{1-x}$As exhibits a direct minimum gap at $\Gamma$. In recent experimental works, III-V substrate growth is immediately followed by cooling under an As$_{2}$ overpressure before transfer to a vacuum chamber for Ge growth,\cite{Clavel2015} such that the resulting heterostructure most likely corresponds to Ge grown on an As-terminated In$_{x}$Al$_{1-x}$As slab, rendering an interfacial layer consisting of Ge and As atoms. In other experimental studies, group-III precursors were introduced immediately prior to Ge growth,\cite{Cheng2012} or coverages of group-III and group-V atoms on the III-V surface prior to Ge growth were inferred from the observed surface reconstruction,\cite{Maeda1995} with significant group-III segregation into Ge observed after growth.\cite{Maeda1999} All of these studies taken together provide an impetus to study both III-terminated and V-terminated In$_{x}$Al$_{1-x}$As interfaced with Ge. As will be shown below, alternative interfacial stoichiometries can lead to interesting behavior in the form of qualitative changes to the band alignments. Hence, we study BOs for both cases, starting with the interface in which In$_x$Al$_{1-x}$As is As terminated (see Fig.~\ref{fig:interface_BOlines} for structure). The results are displayed in Fig.~\ref{fig:VBOs_CBOs_InxAl1-xAs_As-Ge-int}. \begin{figure}[t] \includegraphics[width=\columnwidth]{VBOs_CBOs_InxAl1-xAs_As-Ge-int} \caption{\label{fig:VBOs_CBOs_InxAl1-xAs_As-Ge-int} Valence and conduction band offsets calculated using DFT+$GW$ for the abrupt As-terminated $\varepsilon$-Ge/In$_x$Al$_{1-x}$As(001) interface. 
The In content is varied in steps of 0.05, up to 0.25 (where 0.00 corresponds to AlAs). Variations of In content lead to corresponding changes in Ge strain, denoted by $\varepsilon_{Ge}$. The band gaps are labelled by the satellite valleys of minimum (maximum) energy for the conduction (valence) bands for both materials; for $x$ $\leq$ 0.20, the minimum energy conduction valley is L for Ge and X for In$_x$Al$_{1-x}$As.} \end{figure} With the interfacial configuration fixed to that shown in Fig.~\ref{fig:interface_BOlines} (group-V-terminated), a relatively small change in the valence band alignment is observed as a function of In content; a 0.11 eV change in the VBO is found between $x$ = 0.00 and $x$ = 0.25. The CBO exhibits a larger change of 0.47 eV as a function of In content. Thus a type-I BO is observed for Ge interfaced with As-terminated In$_x$Al$_{1-x}$As, and modifications of the group-III composition of the substrate for 0.00 $\leq$ $x$ $\leq$ 0.25 (neglecting changes due to the randomized cation alloy) do not qualitatively change the band alignment. \begin{figure}[t] \includegraphics[width=\columnwidth]{VBOs_CBOs_InxAl1-xAs_III-Ge-int} \caption{\label{fig:VBOs_CBOs_InxAl1-xAs_III-Ge-int} Valence and conduction band offsets calculated using DFT+$GW$ for the abrupt III-terminated $\varepsilon$-Ge/In$_x$Al$_{1-x}$As interface. The In content is varied in steps of 0.05, up to 0.25 (where 0.00 corresponds to AlAs). Variations of In content lead to corresponding changes in Ge strain, denoted by $\varepsilon_{Ge}$. 
The band gaps are labelled by the satellite valleys of minimum (maximum) energy for the conduction (valence) bands for both materials; for $x$ $\leq$ 0.20, the minimum energy conduction valley is L for Ge and X for In$_x$Al$_{1-x}$As.} \end{figure} Changing to the group-III-terminated In$_x$Al$_{1-x}$As, in which the MIML consists of Ge and In/Al cation atoms, a stark contrast is observed in the band alignments compared to the As-terminated case. Fig.~\ref{fig:VBOs_CBOs_InxAl1-xAs_III-Ge-int} shows much larger VBOs and correspondingly smaller CBOs, with the CBO becoming negative (corresponding to the Ge CBE being higher in energy than that of In$_x$Al$_{1-x}$As, see Eq.~(\ref{eq.2})) for small values of $x$; a type-II band offset is calculated for $x$ $<$ 0.05, and so a type-II to type-I transition in the band alignment occurs as a function of In content for the abrupt III-terminated $\varepsilon$-Ge/In$_x$Al$_{1-x}$As(001) interface, with the BOs being type-I for $x$ $>$ 0.05. In addition to showing the BO dependence on substrate stoichiometry, this is also a strong indication of the high sensitivity of band alignments to interfacial stoichiometry for $\varepsilon$-Ge/In$_{x}$Al$_{1-x}$As(001) (compare Figs.~\ref{fig:VBOs_CBOs_InxAl1-xAs_As-Ge-int} and~\ref{fig:VBOs_CBOs_InxAl1-xAs_III-Ge-int} for a given value of $x$), and shows that the band alignment can also change from type-I to type-II when comparing As-terminated to III-terminated Ge/AlAs interfaces. This is in qualitative agreement with the results of Pavarelli {\it et al}.\cite{Pavarelli13} who reported an analogous change in the calculated band offset type for anion and cation dominated interface stoichiometries in the $\varepsilon$-Ge/In$_{0.3}$Ga$_{0.7}$As(001) interface. The results also indicate a comparable change in the VBO and CBO as a function of In content for both the III-terminated case and the As-terminated case, although a slightly larger change is seen for the III-terminated case. 
This is explained by the presence of group-III atoms at the interface. Since the III-terminated interface corresponds to a III-rich In$_x$Al$_{1-x}$As surface, the slightly larger variation of the VBO (0.15 eV) and CBO (0.5 eV) with respect to the In$_x$Al$_{1-x}$ stoichiometry arises because $x$ has a somewhat larger effect on the interface potential term $dV$, which is derived from the atomic potentials present in the interface supercell. \subsection{Interdiffusion} \label{sec-intdiff} In order to investigate the effects of interdiffusion of atomic species across the interface on band offsets, the position of the MIML was shifted up to 2 monolayers away from the abrupt interfacial layer separating the materials, either towards Ge or towards In$_x$Al$_{1-x}$As (Fig.~\ref{fig:GeOnAlAs_for_paper_ML012-2_AlGe}). This corresponds to a maximum thickness of $\sim$6 {\AA} over which atomic diffusion is considered (i.e. $\pm\sim$3 {\AA} from the ML0 position, see Fig.~\ref{fig:GeOnAlAs_for_paper_ML012-2_AlGe}), which is consistent with previous experimental reports of interface abruptness in comparable heterostructures.\cite{Clavel2015,Nguyen2015} However, such single-monolayer configurations are idealized; in the heterostructures present in experimental samples, interfacial configurations involving mixed depths of diffusing species throughout the interfacial region are much more likely to occur. As a first approximation, this can be investigated by linearly varying the stoichiometric balance of atoms between adjacent MIMLs (while always maintaining charge-neutral configurations) near the interface. For example, consider the Ge/AlAs heterojunction with an abrupt interface in which the MIML consists of Ge and Al atoms (corresponding to the band offsets on the far left for $x$ = 0 and $\varepsilon_{Ge}$ = 0 in Fig.~\ref{fig:VBOs_CBOs_InxAl1-xAs_III-Ge-int}). This position of the MIML is referred to as ML0 (see right panel of Fig.~\ref{fig:GeOnAlAs_for_paper_ML012-2_AlGe}). 
By using the VCA to linearly mix the atoms of ML0 and ML1 (see middle panel of Fig.~\ref{fig:GeOnAlAs_for_paper_ML012-2_AlGe}), an approximation to an interfacial configuration involving mixed diffusion depths of Al atoms into the Ge slab can be achieved. For the case of Al atoms diffusing from ML0 to ML1, the stoichiometric balance between the monolayers required to maintain neutrality results in the relation \vspace{5mm} \centerline{[Al$_{0.5-a}$Ge$_{0.5+a}$]$^\textrm{ML0}$ = [Al$_{a}$Ge$_{1-a}$]$^\textrm{ML1}$} \vspace{5mm} \noindent where [Al$_{a}$Ge$_{1-a}$]$^\textrm{ML0/1}$ is the composition of ML0/1, and $a$ is varied from 0 to 0.5. This is repeated for the case of Al atoms diffusing between ML1 and ML2, with $b$ used as the stoichiometry parameter instead of $a$ to avoid confusion. For the case of As atoms diffusing into Ge (see Sec.~\ref{sec-Asdiff}), the stoichiometric relation is analogous, with Al sites being replaced by As. This procedure is also repeated for the case of Ge atoms diffusing into In$_x$Al$_{1-x}$As (see Sec.~\ref{sec-Gediff}), which would more likely correspond to the scenario of a III-V slab grown on a Ge substrate.\cite{Chia2008} The stoichiometries of the endpoints (e.g. Al$_{0.5}$Ge$_{0.5}$ in ML0, ML1, or ML2, see Fig.~\ref{fig:GeOnAlAs_for_paper_ML012-2_AlGe}) are calculated using explicit atomistic models (see Fig.~\ref{fig:VBOs_CBOs_InxAl1-xAs_III-Ge-int}) and compared to the corresponding VCA results. As an additional assessment of the error associated with modeling the mixed layer stoichiometries with the VCA, the cluster expansion formalism is used to generate a special quasirandom structure (SQS)\cite{Wei90} representation of the mixed monolayer with $a$ = 0.5. 
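The mixing relation above simply redistributes a fixed Al content of half a monolayer between ML0 and ML1 while keeping each monolayer fully occupied, which is the charge-neutrality condition; a minimal numerical check:

```python
# Check that the VCA mixing relation
#   [Al(0.5-a) Ge(0.5+a)]^ML0 = [Al(a) Ge(1-a)]^ML1
# conserves the total Al content (0.5 ML) and keeps each monolayer fully
# occupied, i.e. the charge-neutrality condition used in the text.
def layer_pair(a):
    ml0 = {"Al": 0.5 - a, "Ge": 0.5 + a}
    ml1 = {"Al": a, "Ge": 1.0 - a}
    return ml0, ml1

for a in (0.0, 0.1, 0.25, 0.5):
    ml0, ml1 = layer_pair(a)
    assert abs(ml0["Al"] + ml1["Al"] - 0.5) < 1e-12   # Al content conserved
    assert abs(sum(ml0.values()) - 1.0) < 1e-12       # ML0 fully occupied
    assert abs(sum(ml1.values()) - 1.0) < 1e-12       # ML1 fully occupied
```

The same bookkeeping applies with Al replaced by As, or with $b$ as the parameter for mixing between ML1 and ML2.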
The ATAT code\cite{ATAT02,ATAT-mcsqs13} is used to generate the periodic monolayer cell which exhibits multisite correlation functions for $m$th nearest neighbor $k$-atom clusters $\bar{\Pi}_{k,m}^\text{2D}$ that match those of an infinite random two-dimensional binary alloy with 50/50 composition, for up to $m$ = 2 and $k$ = 3. This monolayer SQS is used to compare to the ordered monolayer structures, and to the end-point VCA stoichiometries. This is done for Al diffusing into Ge, and for Ge diffusing into AlAs. The results of these simulations are also compared to an analytical model based on a linear response theory for polar, heterovalent interfaces.\cite{Peressi98,Harrison78} This model is described in Sec.~\ref{sec-analysis}, and the results obtained from this model are shown as dashed lines in Figs.~\ref{fig:cat-diffusion_Ge-InxAl1-xAs}, ~\ref{fig:Ge-diffusion_GecatML_Ge-InxAl1-xAs}, and ~\ref{fig:Ge-diffusion_GeAsML_Ge-InxAl1-xAs}. \begin{figure} [t] \includegraphics[width=10cm,height=3.5cm]{GeOnAlAs_for_paper_ML012-2_AlGe} \caption{\label{fig:GeOnAlAs_for_paper_ML012-2_AlGe} (Left) Model of the MIML residing exactly between the materials forming the heterojunction; this position is labelled ML0. (Middle) MIML positioned one monolayer further from ML0, towards the Ge slab, with position labeled ML1. (Right) MIML positioned two monolayers towards the Ge slab, with position labeled ML2. For Ge diffusing into In$_{x}$Al$_{1-x}$As, we consider Ge atoms residing up to two monolayers away from the ML0 position, towards In$_{x}$Al$_{1-x}$As, with position labeled ML-2. Ge atoms are purple, group-V As atoms are green, and group-III In and Al atoms are blue. In this figure, the case of group-III atoms residing in the top In$_{x}$Al$_{1-x}$As layer is used as an example. 
For the cases of As atoms residing in the top layer, all green and blue atoms are exchanged.} \end{figure} \begin{figure*} { \includegraphics[width=\columnwidth]{VBOs_CBOs_AlAs_Al-diffusion_ML012} \includegraphics[width=\columnwidth]{VBOs_CBOs_InAlAs_InAl-diffusion_ML012}} { \includegraphics[width=\columnwidth]{VBOs_Al-interdiffusion_VCA_Ge-AlAs} \includegraphics[width=\columnwidth]{VBOs_InAl-interdiffusion_VCA_Ge-InAlAs}} \caption{\label{fig:cat-diffusion_Ge-InxAl1-xAs}In (a) and (b), band offsets are presented for explicit (ordered) models of group-III In and Al atoms in the Ge slab, with increasing distance from the ML0 position. In (c) and (d), VBOs are plotted for group-III atoms diffusing into Ge, using the VCA to approximate the stoichiometry of monolayers near the interfacial plane. Left panels ((a) and (c)) correspond to Ge/AlAs(001), right panels ((b) and (d)) correspond to $\varepsilon$-Ge/In$_{0.25}$Al$_{0.75}$As(001). The \textbf{{\texttimes}} symbols label VBOs calculated using the VCA. The \textbf{$\triangle$} symbols correspond to explicit models of the atomic configurations for endpoint interface stoichiometries, ordered along the (110) direction parallel to the interface. The \textbf{$\bigtriangledown$} symbols represent the SQS model for the mixed monolayer. The dashed lines correspond to the linear response model (described in Sec.~\ref{sec-analysis}) for polar interfaces, applied to diffusion. As group-III In$_{x}$Al$_{1-x}$ cations diffuse away from the substrate into Ge, the band offsets become increasingly type-II in character for Ge/AlAs(001), and change from type-I to type-II for $\varepsilon$-Ge/In$_{x}$Al$_{1-x}$As(001). 
Note that negative values of the CBO correspond to the Ge CBE residing at a higher energy than the AlAs CBE.} \end{figure*} \begin{figure*} { \includegraphics[width=\columnwidth]{VBOs_CBOs_AlAs_Ge-diffusion_GeAlML0-2} \includegraphics[width=\columnwidth]{VBOs_CBOs_InAlAs_Ge-diffusion_GeInAlML0-2}} { \includegraphics[width=\columnwidth]{VBOs_Ge-interdiffusion_VCA_GeAlML0-2} \includegraphics[width=\columnwidth]{VBOs_Ge-interdiffusion_VCA_GeInAlML0-2}} \caption{\label{fig:Ge-diffusion_GecatML_Ge-InxAl1-xAs}In (a) and (b), band offsets are presented for explicit models of Ge in a group-III layer of In$_{x}$Al$_{1-x}$As. In (c) and (d), VBOs are plotted for Ge diffusing from ML0 to ML-2 (the second group-III layer away from ML0), using the VCA to approximate the stoichiometry of monolayers near the interfacial plane. In$_{x}$Al$_{1-x}$As is III-terminated. Left panels ((a) and (c)) correspond to Ge/AlAs(001), and right panels ((b) and (d)) correspond to $\varepsilon$-Ge/In$_{0.25}$Al$_{0.75}$As(001). The \textbf{{\texttimes}} symbols label VBOs calculated using the VCA. The \textbf{$\triangle$} symbols correspond to explicit models of the atomic configurations for endpoint interface stoichiometries, ordered along the (110) direction parallel to the interface. The \textbf{$\bigtriangledown$} symbols represent the SQS model for the mixed monolayer. The dashed lines correspond to the linear response model (described in Sec.~\ref{sec-analysis}) for polar interfaces, applied to diffusion. 
For this range of diffusion, the band alignment remains type-II as a function of Ge diffusion distance into AlAs, and changes from type-I to type-II for Ge diffusion into In$_{0.25}$Al$_{0.75}$As.} \end{figure*} \begin{figure*} { \includegraphics[width=\columnwidth]{VBOs_CBOs_AlAs_Ge-diffusion_GeAsML0-2} \includegraphics[width=\columnwidth]{VBOs_CBOs_InAlAs_Ge-diffusion_GeAsML0-2}} { \includegraphics[width=\columnwidth]{VBOs_Ge-interdiffusion_VCA_GeAsML0-2} \includegraphics[width=\columnwidth]{VBOs_Ge-interdiffusion_InAlAs_VCA_GeAsML0-2}} \caption{\label{fig:Ge-diffusion_GeAsML_Ge-InxAl1-xAs}In (a) and (b), band offsets are presented for explicit models of Ge in an As layer of In$_{x}$Al$_{1-x}$As. In (c) and (d), VBOs are plotted for Ge diffusing from ML0 to ML-2 (the second As layer away from ML0), using the VCA to approximate the stoichiometry of monolayers near the interfacial plane. In$_{x}$Al$_{1-x}$As is As-terminated. Left panels ((a) and (c)) correspond to Ge/AlAs(001), and right panels ((b) and (d)) correspond to $\varepsilon$-Ge/In$_{0.25}$Al$_{0.75}$As(001). The \textbf{{\texttimes}} symbols label VBOs calculated using the VCA. The \textbf{$\triangle$} symbols correspond to explicit models of the atomic configurations for endpoint interface stoichiometries, ordered along the (110) direction parallel to the interface. The \textbf{$\bigtriangledown$} symbols represent the SQS model for the mixed monolayer. The dashed lines correspond to the linear response model (described in Sec.~\ref{sec-analysis}) for polar interfaces, applied to diffusion. 
For this range of diffusion, the band alignment remains type-I as a function of Ge diffusion distance into AlAs or In$_{0.25}$Al$_{0.75}$As, with the VBO significantly reduced in both cases.} \end{figure*} \begin{table} \caption{\label{tab:AsGe_BO} Calculated band offsets of Ge/AlAs(001) and $\varepsilon$-Ge/In$_{0.25}$Al$_{0.75}$As(001) for As-terminated In$_{x}$Al$_{1-x}$As, in which As atoms have diffused up to two monolayers into Ge. The MIML column refers to the position of the mixed interfacial monolayer, as defined by Fig.~\ref{fig:GeOnAlAs_for_paper_ML012-2_AlGe}. The values in brackets refer to estimates from the linear response model for polar interfaces\cite{Peressi98,Harrison78} applied to diffusion, as described in Sec.~\ref{sec-analysis}. All band offsets are in eV.} \begin{ruledtabular} \begin{tabular}{c|cc} MIML & VBO & CBO \\ \hline\\[-0.2cm] \multicolumn{3}{c}{Ge/AlAs(001)} \\[0.1cm] ML0 & 0.97 & 0.41 \\ ML1 & 0.74 (0.70) & 0.65 (0.61) \\ ML2 & 0.46 (0.43) & 0.92 (0.89) \\ \\[0.1cm] \multicolumn{3}{c}{$\varepsilon$-Ge/In$_{0.25}$Al$_{0.75}$As(001)} \\[0.1cm] ML0 & 0.86 & 0.75 \\ ML1 & 0.72 (0.62) & 0.90 (0.80) \\ ML2 & 0.48 (0.37) & 1.15 (1.04) \\ \end{tabular} \end{ruledtabular} \end{table} \subsubsection{In$_{x}$Al$_{1-x}$ diffusion into Ge} \label{sec-catdiff} For the case of Al atoms diffusing away from the Ge/AlAs(001) interface and into Ge, a linear change in the band offset is observed (panels (a) and (c) of Fig.~\ref{fig:cat-diffusion_Ge-InxAl1-xAs}). The band alignment is type-II for the case of the MIML residing at ML0 ([Al$_{0.5}$Ge$_{0.5}$]$^\textrm{ML0}$). Thus, for Ge films grown on Al-terminated AlAs(001), both the valence and conduction band edges of Ge reside above the corresponding band edges of AlAs, and for increasing diffusion depth of Al atoms into Ge the CBO becomes increasingly negative and the VBO increasingly positive. As a result, the band alignment is increasingly type-II over this range as a function of diffusion distance of Al. 
While a diffusion distance of up to 2 monolayers (corresponding to $\sim$3 {\AA}) into Ge is very short, a change in the band offsets of 0.50 eV (0.51 eV) is calculated for the explicit (VCA) models of the interface, which shows again the large sensitivity of band alignments to diffusion distance. Turning to the lattice-mismatched interface $\varepsilon$-Ge/In$_{0.25}$Al$_{0.75}$As(001), a qualitatively similar evolution of the VBO and CBO with respect to the diffusion distance of group-III (In and Al) atoms is calculated, compared to Ge/AlAs(001). The major difference compared to Ge/AlAs(001) is that the VBO for [(InAl)$_{0.5}$Ge$_{0.5}$]$^\textrm{ML0}$ is small enough to yield a type-I band alignment. A type-I to type-II transition in the band alignment is observed as a function of diffusion distance for this case, with the CBO for [(InAl)$_{0.5}$Ge$_{0.5}$]$^\textrm{ML1}$ being close to flat (0.01 eV) and [(InAl)$_{0.5}$Ge$_{0.5}$]$^\textrm{ML2}$ exhibiting a type-II band offset (Fig.~\ref{fig:cat-diffusion_Ge-InxAl1-xAs} (b)). Thus, for group-III cations diffusing across the interface and into $\varepsilon$-Ge, our calculations show that the effect can be large enough to change the character of the band alignment relative to the abrupt interface, even for a very short diffusion distance of two monolayers. From the perspective of device physics, this finding has significant consequences. For example, for devices involving sandwiches of $\varepsilon$-Ge between In$_{0.25}$Al$_{0.75}$As(001) layers, the trapping of both electrons and holes (required for optically active recombination in optoelectronic applications) will be highly dependent on the diffusion depth of In and Al atoms into the $\varepsilon$-Ge layer. 
As the abrupt $\varepsilon$-Ge/In$_{0.25}$Al$_{0.75}$As(001) interface exhibits a type-I band alignment, these calculations show that atomic-scale abruptness of this interface is required to achieve significant optical recombination in the $\varepsilon$-Ge layer, which hinders the use of this particular interface in optical devices. \subsubsection{As diffusion into Ge} \label{sec-Asdiff} Calculations of band offsets were also performed for the case of As-terminated In$_{x}$Al$_{1-x}$As, see Table~\ref{tab:AsGe_BO}.\footnote{For this case the As atoms of the MIML are moved into Ge in the manner described in Sec.~\ref{sec-intdiff}, where the mixed layer stoichiometries are related by [As$_{0.5-a}$Ge$_{0.5+a}$]$^\textrm{ML0}$ = [As$_{a}$Ge$_{1-a}$]$^\textrm{ML1}$ for As atoms diffusing from ML0 to ML1, and [As$_{0.5-b}$Ge$_{0.5+b}$]$^\textrm{ML1}$ = [As$_{b}$Ge$_{1-b}$]$^\textrm{ML2}$ for As atoms diffusing from ML1 to ML2.} The results show that band alignments are quite sensitive to diffusion distance into Ge. For the case of As atoms residing in ML1 ([As$_{0.5}$Ge$_{0.5}$]$^\textrm{ML1}$), the VBO is reduced by 0.23 eV compared to the abrupt (ML0) case, while the CBO correspondingly increases by 0.24 eV. For $\varepsilon$-Ge/In$_{0.25}$Al$_{0.75}$As(001), the VBO (CBO) decreases (increases) by 0.14 eV (0.15 eV). When As atoms have diffused to ML2 ([As$_{0.5}$Ge$_{0.5}$]$^\textrm{ML2}$), the BOs continue to move in the same direction, showing again a linear change with respect to interface stoichiometry as in the case of group-III diffusion into Ge (Sec.~\ref{sec-catdiff}), but with a slope of the opposite sign. The VBO and CBO of Ge/AlAs for the [As$_{0.5}$Ge$_{0.5}$]$^\textrm{ML2}$ case (0.46 eV and 0.92 eV, respectively) compare very well with recent BO measurements of the Ge/AlAs interface.\cite{Hudait2014} The calculated VBO (CBO) of $\varepsilon$-Ge/In$_{0.25}$Al$_{0.75}$As(001) for [As$_{0.5}$Ge$_{0.5}$]$^\textrm{ML2}$ is 0.48 eV (1.15 eV). 
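The near-linear change of the band offsets with diffusion distance noted above can be quantified directly from Table~\ref{tab:AsGe_BO}; e.g., a least-squares fit to the Ge/AlAs(001) VBOs gives a slope of about $-$0.26 eV per monolayer:

```python
# Least-squares linear fit of the Ge/AlAs(001) VBOs from Table I as a
# function of mixed-layer position (ML0, ML1, ML2), illustrating the
# near-linear trend of the band offsets with As diffusion distance into Ge.
def linear_fit(xs, ys):
    n = len(xs)
    xm = sum(xs) / n
    ym = sum(ys) / n
    slope = (sum((x - xm) * (y - ym) for x, y in zip(xs, ys))
             / sum((x - xm) ** 2 for x in xs))
    return slope, ym - slope * xm

ml = [0, 1, 2]                 # MIML position (monolayers from ML0)
vbo = [0.97, 0.74, 0.46]       # Ge/AlAs(001) VBOs from Table I (eV)
slope, intercept = linear_fit(ml, vbo)
print(f"VBO slope: {slope:.3f} eV/ML")  # -> VBO slope: -0.255 eV/ML
```

The CBO values in Table~\ref{tab:AsGe_BO} show the same magnitude of change per monolayer but with the opposite sign, consistent with the discussion above.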
The latter BOs also compare well with unpublished experimental XPS measurements~\cite{Hudait-private} of the band alignment of $\varepsilon$-Ge/In$_{0.25}$Al$_{0.75}$As(001).\footnote{The manuscript reporting this joint experimental and theoretical effort is currently under preparation. We therefore omit the band offset figures for this case (refer to Fig.~\ref{fig:VBOs_CBOs_InxAl1-xAs_As-Ge-int} for the explicit models of [As$_{0.5}$Ge$_{0.5}$]$^\textrm{ML0}$ in Ge/AlAs(001) and $\varepsilon$-Ge/In$_{0.25}$Al$_{0.75}$As(001)).} \subsubsection{Ge diffusion into In$_x$Al$_{1-x}$As} \label{sec-Gediff} Due to the solid solubility of Ge in GaAs,\cite{Bosi2010} Ge diffusing through the interface towards the overlayer is a common observation in III-V/Ge(001) heterostructures (i.e., III-V layers grown on Ge) such as GaAs/Ge(001). To a certain extent this diffusion and the overall interface quality can be controlled by growth conditions,\cite{Tanoto2006,Brammertz2008,Bosi2011,Sophia2015,Jia2016} as well as by thin interlayers of AlAs (or alloys thereof) between GaAs and Ge.\cite{Chia2008,Li2013,Qi2014,Chen2016} The latter technique can decrease interdiffusion due to the large Al-As bond energy and yield heterostructures with very sharp interfaces between the III-V region and Ge. However, diffusion cannot be completely suppressed, and Ge diffusion distances of a few nm to tens of nm into the AlAs region can be observed.\cite{Chia2008} Many factors motivate the investigation of heterostructures involving III-Vs grown on Ge, such as the potential for CMOS-compatible monolithic integration of optical devices\cite{Fitzgerald1992,Chilukuri2007} where graded GeSi alloys act as a buffer between the III-V overlayer and the Si substrate. 
Also, high-quality III-V/Ge interfaces with a type-I BO could offer advantages for photovoltaic technologies.\cite{Bosi2007,Guter2009,Qi2014} An understanding of the effects of Ge diffusion through the interface is imperative to the assessment of these potential applications. In this section, the effects of Ge diffusion into AlAs and In$_{0.25}$Al$_{0.75}$As on the BOs are studied for short diffusion distances. For the interface with In$_{0.25}$Al$_{0.75}$As, Ge is tensile strained to the III-V lattice constant, which models the top In$_{0.25}$Al$_{0.75}$As/$\varepsilon$-Ge interface of a confined $\varepsilon$-Ge region between a III-V overlayer and a III-V substrate, as is considered for optoelectronic applications.\cite{Pavarelli13} Only the positions of Ge atoms which satisfy electron counting rules\cite{Peressi98,Martin80,Pashley89} for interfacial bonding are considered for this investigation, as this prevents the accumulation of an electric field across each slab which would result in unstable interface structures. For this reason, only the position of Ge atoms corresponding to the second monolayer away from the ML0 position and towards In$_{x}$Al$_{1-x}$As (ML-2) is considered. In terms of the stoichiometric expression for the mixed layer as defined at the beginning of Sec.~\ref{sec-intdiff}, this corresponds to only [Ge$_{b}$(III)$_{1-b}$]$^\textrm{ML-2}$ ([Ge$_{b}$(As)$_{1-b}$]$^\textrm{ML-2}$) being considered and compared to the abrupt [(III)$_{0.5}$Ge$_{0.5}$]$^\textrm{ML0}$ ([As$_{0.5}$Ge$_{0.5}$]$^\textrm{ML0}$) interfaces for group-III (group-V) terminated In$_{x}$Al$_{1-x}$As. \begin{figure} { \includegraphics[width=1.0\columnwidth]{Eforms}} \caption{\label{fig:Eforms} Formation energies of substitutional impurities in Ge (purple lines) and impurities in AlAs (green lines), plotted as a function of chemical potential. 
Solid lines correspond to impurities which result in As bonding to Ge, while dashed lines correspond to impurities which result in Al bonding to Ge. In all cases, impurities which lead to As-Ge bonds have higher stability for the majority of the range of variation of the chemical potential, and particularly for As-rich conditions.} \end{figure} \begin{figure*} { \includegraphics[width=6.1225cm,height=4.75cm]{AlGeML0_V_and_rho} \includegraphics[width=5.8025cm,height=4.75cm]{AlGeML1_V_and_rho} \includegraphics[width=5.8025cm,height=4.75cm]{AlGeML2_V_and_rho}} \caption{\label{fig:Velectro_rho_Aldiff} Al diffusing into Ge. Top graphs show the planar and macroscopic average of the electrostatic potential plotted as a function of $z$ (normal to interfacial plane). Bottom graphs show the macroscopic average of the electronic charge density plotted as a function of $z$. Left panels (a) correspond to the abrupt interface ([Al$_{0.5}$Ge$_{0.5}$]$^\textrm{ML0}$), middle panels (b) are for [Al$_{0.5}$Ge$_{0.5}$]$^\textrm{ML1}$, and right panels (c) correspond to Al residing two monolayers into Ge ([Al$_{0.5}$Ge$_{0.5}$]$^\textrm{ML2}$). All values of $dV$ are in eV. Charge density variation $\Delta$$\rho$$^{m}$ is in $e$/{{\AA}}$^3$. Note that the potentials and charge densities are plotted along the same horizontal scale.} \end{figure*} \begin{figure*} { \includegraphics[width=6.1225cm,height=4.75cm]{AsGeML0_V_and_rho} \includegraphics[width=5.8025cm,height=4.75cm]{AsGeML1_V_and_rho} \includegraphics[width=5.8025cm,height=4.75cm]{AsGeML2_V_and_rho}} \caption{\label{fig:Velectro_rho_Asdiff} As diffusing into Ge. Top graphs show the planar and macroscopic average of the electrostatic potential plotted as a function of $z$ (normal to interfacial plane). Bottom graphs show the macroscopic average of the electronic charge density plotted as a function of $z$. 
Left panels (a) correspond to the abrupt interface ([As$_{0.5}$Ge$_{0.5}$]$^\textrm{ML0}$), middle panels (b) are for [As$_{0.5}$Ge$_{0.5}$]$^\textrm{ML1}$, and right panels (c) correspond to As residing two monolayers into Ge ([As$_{0.5}$Ge$_{0.5}$]$^\textrm{ML2}$). All values of $dV$ are in eV. Charge density variation $\Delta$$\rho$$^{m}$ is in $e$/{{\AA}}$^3$. Note that the potentials and charge densities are plotted along the same horizontal scale.} \end{figure*} \begin{figure*} { \includegraphics[width=8.9cm,height=6.6cm]{GeAlML-2_V_and_rho} \includegraphics[width=8.4cm,height=6.6cm]{GeAsML-2_V_and_rho}} \caption{\label{fig:Velectro_rho_Gediff} Ge diffusing into AlAs. Top graphs show the planar and macroscopic average of the electrostatic potential plotted as a function of $z$ (normal to interfacial plane). Bottom graphs show the macroscopic average of the electronic charge density plotted as a function of $z$. Left panels (a) correspond to the Ge atoms residing two monolayers into Al-terminated AlAs ([Ge$_{0.5}$Al$_{0.5}$]$^\textrm{ML-2}$), and right panels (b) correspond to Ge residing two monolayers into As-terminated AlAs ([Ge$_{0.5}$As$_{0.5}$]$^\textrm{ML-2}$). All values of $dV$ are in eV. Charge density variation $\Delta$$\rho$$^{m}$ is in $e$/{{\AA}}$^3$. Compare $\Delta$$\rho$$^{m}$ of the left panel to that of [Al$_{0.5}$Ge$_{0.5}$]$^\textrm{ML2}$ in Fig.~\ref{fig:Velectro_rho_Aldiff}. Compare $\Delta$$\rho$$^{m}$ of the right panel to that of [As$_{0.5}$Ge$_{0.5}$]$^\textrm{ML2}$ in Fig.~\ref{fig:Velectro_rho_Asdiff}. Note that the potentials and charge densities are plotted along the same horizontal scale.} \end{figure*} In contrast to the results of Secs.~\ref{sec-catdiff} and ~\ref{sec-Asdiff} involving In$_{x}$Al$_{1-x}$As atoms diffusing into Ge, the band alignments in this section vary by a larger amount for a given diffusion distance. 
For group-III terminated In$_{x}$Al$_{1-x}$As (see Fig.~\ref{fig:Ge-diffusion_GecatML_Ge-InxAl1-xAs}), an increase of 0.55 eV (0.56 eV) in the VBO is calculated for Ge diffusing to the ML-2 position in AlAs (In$_{0.25}$Al$_{0.75}$As). For AlAs, this almost results in a broken-gap band alignment, as the CBE of AlAs is only 0.09 eV above the VBE of Ge. The CBE of In$_{0.25}$Al$_{0.75}$As(001) is also close to the VBE of $\varepsilon$-Ge, but with a larger separation (0.12 eV) compared to Ge-AlAs. Assuming continued linearity of the BOs as a function of diffusion distance, broken-gap alignments are expected for these interfaces with a further diffusion of Ge into In$_{x}$Al$_{1-x}$As. This has important consequences for devices involving AlAs (with possibly small proportions of In) grown on Ge. In contrast, the VBO moves in the opposite direction for Ge atoms diffusing into AlAs (In$_{x}$Al$_{1-x}$As) through an As-terminated interface (see Fig.~\ref{fig:Ge-diffusion_GeAsML_Ge-InxAl1-xAs}). In this case the VBO \textit{decreases} by 0.75 eV (0.64 eV), which is qualitatively similar to, but significantly larger than, the maximum VBO variation of 0.51 eV (0.38 eV) for As atoms diffusing through the Ge/AlAs(001) ($\varepsilon$-Ge/In$_{0.25}$Al$_{0.75}$As(001)) interface. An apparent bowing of the VBO as a function of the mixed layer stoichiometry $b$ can be observed in panels (c) and (d) of Figs.~\ref{fig:Ge-diffusion_GecatML_Ge-InxAl1-xAs} and~\ref{fig:Ge-diffusion_GeAsML_Ge-InxAl1-xAs}, whereas this bowing effect is largely suppressed for the case of group-III atoms diffusing into Ge (Fig.~\ref{fig:cat-diffusion_Ge-InxAl1-xAs} (c) and (d)). 
The bowing effect in Figs.~\ref{fig:Ge-diffusion_GecatML_Ge-InxAl1-xAs} and~\ref{fig:Ge-diffusion_GeAsML_Ge-InxAl1-xAs} is likely an artifact of the VCA model of the interfacial region; the VCA cannot correctly capture local structural properties\cite{Jaros85} which translates to errors in bond lengths and interlayer distances. These errors are exacerbated when the mixed monolayer, represented by the VCA, bonds to the neighboring ionic layers within the III-V crystal. The difference in bond lengths between III-V bonds and Ge-III/V bonds makes an important contribution to the local potential within the III-V slab and this contribution is missed in the VCA representation of the mixed layers for Ge diffusing into AlAs or In$_{0.25}$Al$_{0.75}$As. This causes larger errors for the intermediate values of $b$ which involve two types of VCA `atoms' in the supercell, thus producing the bowing effect. For group-III atoms diffusing into Ge (see Sec.~\ref{sec-catdiff}), the VCA sites are now bonding to covalent rather than to ionic layers and the structural errors of the VCA are not so apparent. The origin of the apparent bowing effect is further investigated by comparing the VCA results to explicit models of the mixed monolayer which are statistically representative of a 2 dimensional 50/50 random alloy (see panel (c) of Figs.~\ref{fig:cat-diffusion_Ge-InxAl1-xAs}, ~\ref{fig:Ge-diffusion_GecatML_Ge-InxAl1-xAs}, and ~\ref{fig:Ge-diffusion_GeAsML_Ge-InxAl1-xAs}). For the cases of Ge diffusing into AlAs, the VBOs obtained by the SQS monolayer are much closer to those obtained by the ordered model, compared to those obtained by the VCA. This lends further credence to the possibility that the apparent bowing effect seen in the VCA VBOs is due to structural errors associated with the VCA, as opposed to a realistic effect. 
This is also the case for Al diffusing one monolayer into Ge (see Fig.~\ref{fig:Ge-diffusion_GecatML_Ge-InxAl1-xAs}), although for two monolayers of Al diffusion the VBO from the ordered model resides midway between the VCA and SQS result. In addition, the reduced bowing in this case (Fig.~\ref{fig:Ge-diffusion_GecatML_Ge-InxAl1-xAs}) indicates that the VCA is a better approximation for modeling mixed layers in materials with purely covalent bonding, compared to materials with some degree of ionic bonding. \subsubsection{Stability of diffused impurities} \label{sec-Eform} The formation energies\cite{VandewalNeugebauer04,Freysholdt14,ZhangNorthrup91} of substitutional impurities were calculated in bulk cells in order to establish the relative stability (under conditions of thermodynamic equilibrium) of diffused impurities present in either Ge or AlAs after growth on top of either an AlAs substrate or a Ge substrate, respectively. Thus, the formation energetics establish, as a function of growth conditions, which diffused impurities are more likely to be present in either material after growth. The formation energies $E_{form}(\mu_\alpha)$ are plotted as a function of $\mu_\alpha$ for each diffused impurity in Fig.~\ref{fig:Eforms}. For the majority of the range of $\mu_\alpha$, the substitutional impurities which result in bonds between As and Ge have consistently lower formation energies than impurities corresponding to Al bonding to Ge. In particular, these As-Ge bonding impurities have very low formation energies (note that negative formation energies correspond to impurities which form spontaneously under thermodynamic equilibrium, hence a kinetic process would be required to prevent their formation) under As-rich conditions. Such As-rich conditions have been used to realize high quality, abrupt interfaces,\cite{Clavel2015,Nguyen2015} hence we expect As-rich growth conditions to be favored for applications of these heterostructures. 
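The formation-energy bookkeeping can be sketched as follows. This is a minimal illustration of the standard substitutional-impurity formalism of the references cited above, not the paper's actual workflow: all supercell total energies are hypothetical placeholders, and only the linear dependence on the chemical potential of the exchanged species is the point.

```python
# Hedged sketch: formation energy of a substitutional impurity
# (Zhang-Northrup-type formalism, as in the references cited in the text).
# All total energies below are HYPOTHETICAL placeholder values in eV.

def formation_energy(e_defect, e_host, mu_removed, mu_added):
    """E_form for replacing one host atom (mu_removed) by one impurity (mu_added)."""
    return e_defect - e_host + mu_removed - mu_added

# The As chemical potential ranges from the As-rich limit (mu_As = 0,
# referenced to bulk As) down to the As-poor limit set by the heat of
# formation of AlAs; the text's calculated value is -2.27 eV.
dH_f_AlAs = -2.27
mu_As_rich, mu_As_poor = 0.0, dH_f_AlAs

e_host, e_defect, mu_Ge = -500.0, -498.0, 0.0  # hypothetical numbers
for mu_As in (mu_As_rich, mu_As_poor):
    e_f = formation_energy(e_defect, e_host, mu_Ge, mu_As)
    print(f"mu_As = {mu_As:+.2f} eV  ->  E_form(As_Ge) = {e_f:+.2f} eV")
```

As in Fig.~\ref{fig:Eforms}, $E_{form}$ varies linearly as the chemical potential sweeps between the element-rich limits, so the ordering of stabilities can flip only where the lines cross.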
We also mention that our calculated heat of formation of AlAs is $-$2.27 eV, which overestimates the magnitude compared to the experimental value of $-$1.53 eV\cite{Berger96semicon}. While this affects the formation energetics, our purpose is simply to establish an approximate, qualitative picture of the relative stabilities, and the error (0.74 eV) does not reverse the relative stability of As-Ge bonding impurities and Al-Ge bonding impurities under As-rich growth conditions. \subsection{Analysis --- Relation between band offsets and interface configurations} \label{sec-analysis} \subsubsection{Electrostatic potential, charge density, and interface diffusion} \label{sec-V_rho_diff} The changes in band offsets presented in Sec.~\ref{sec-intdiff} arise purely from changes in the $dV$ term in Eqs.~(\ref{eq.1}) and (\ref{eq.2}). This is equivalent to stating that the changes in band offsets arise purely from interfacial effects, specifically from changes in the interface dipole derived from the macroscopic average of the atomic potentials near the interface, $V^{m}(z)$ (where the growth orientation is aligned to the $z$ direction). There is no contribution from bulk properties in the band alignment variations observed when comparing different interface structures (for a given group-III stoichiometry of In$_{x}$Al$_{1-x}$As and strain state of Ge). 
In general, atomic mixing affects the electrostatic potential line up by changing the charge density profile across the interface $\rho(z)$.\cite{Harrison78,Brillson16} In fact, it can be shown that the electrostatic potential (Hartree potential $V_{H}$ + bare ionic $V_{ion}$) step across the interface is given by \begin{equation} 4\pi\,e^2\int\,z\rho^{m}(z)\text{d}z, \end{equation} and it is equivalent to the interface dipole ($e$ is the electronic charge, $\rho^{m}(z)$ is the macroscopic average of $\rho(z)$).\cite{Peressi98} Then, modifications to the local charge density arising from changes to the bonding configuration near the interface can provide either an enhancement or reduction of the interface dipole,\cite{Peressi98,McKinley92} depending on the polarity of bonds to the diffusing species, and their diffusion depth. For this reason, the VBO and CBO variations can be explained by electrostatic considerations involving the effect of positions of IV-III and IV-V bonds on the local potential. As a result of the valence charge carried by the diffusing atoms, we expect Ge-III bonds to contribute positively to $dV$ as a function of diffusion distance of III atoms into Ge, and Ge-V bonds to contribute negatively to $dV$ as a function of diffusion distance. This is indeed consistent with what we observe in Figs.~\ref{fig:Velectro_rho_Aldiff} and~\ref{fig:Velectro_rho_Asdiff}. The former shows the planar and macroscopic averages of $V_{H}$+$V_{ion}$, and $\rho^{m}(z)$ for the explicit models of [Al$_{0.5}$Ge$_{0.5}$]$^\textrm{ML0}$, [Al$_{0.5}$Ge$_{0.5}$]$^\textrm{ML1}$, and [Al$_{0.5}$Ge$_{0.5}$]$^\textrm{ML2}$ for the Ge-AlAs interface; the latter shows the same quantities for the [As$_{0.5}$Ge$_{0.5}$]$^\textrm{ML0}$, [As$_{0.5}$Ge$_{0.5}$]$^\textrm{ML1}$, and [As$_{0.5}$Ge$_{0.5}$]$^\textrm{ML2}$ interface configurations of Ge/AlAs(001). 
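The dipole expression above can be checked on a toy profile. The sketch below uses a model macroscopic charge density $\rho^{m}(z)$ built from two opposite Gaussian sheets (illustrative parameters, not the paper's DFT density; units with $e=1$ assumed), for which the step reduces to the ideal double-layer result $4\pi q d$.

```python
# Hedged numeric illustration of dV = 4*pi*e^2 * Int z*rho_m(z) dz
# for a TOY charge-density profile (two opposite Gaussian sheets).
import numpy as np

z = np.linspace(-10.0, 10.0, 4001)           # grid along the growth direction
dz = z[1] - z[0]
sigma, d, q = 0.8, 2.0, 0.05                 # width, sheet separation, charge/area

def gauss(z0):
    # normalized Gaussian centered at z0
    return np.exp(-(z - z0)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

rho_m = q * (gauss(+d / 2) - gauss(-d / 2))  # +q sheet at +d/2, -q sheet at -d/2

dV_numeric = 4 * np.pi * np.sum(z * rho_m) * dz   # dipole integral on the grid
dV_analytic = 4 * np.pi * q * d                   # ideal double layer
print(dV_numeric, dV_analytic)                    # the two agree closely
```

Widening the Gaussians or increasing the sheet separation changes the charge-transfer region but leaves the step fixed by the first moment of $\rho^{m}(z)$ alone, mirroring the discussion above.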
It can be seen that as Ge-III (Ge-V) bonds move away from the abrupt interfacial layer and into Ge the step in the electrostatic potential increases (decreases), while the region over which charge transfer occurs widens. A similar conclusion is reached by plotting $V^{m}(z)$ and $\rho^{m}(z)$ for the $\varepsilon$-Ge/In$_{0.25}$Al$_{0.75}$As(001) interfaces, which are not shown for brevity. For Ge diffusing into In$_{x}$Al$_{1-x}$As (see Fig.~\ref{fig:Velectro_rho_Gediff}), a larger change in $dV$ relative to the abrupt interfaces is observed compared to the change in $dV$ for Al and As atoms diffusing the same distance into Ge. This is not unexpected, given the relation between $dV$ and $\rho^{m}(z)$\cite{Peressi98}; the variations in density across the interface $\Delta\rho^{m}$ have a slightly larger amplitude for Ge diffusion into Al- or As-terminated AlAs compared to [Al$_{0.5}$Ge$_{0.5}$]$^\textrm{ML2}$ or [As$_{0.5}$Ge$_{0.5}$]$^\textrm{ML2}$ (compare to corresponding $\Delta\rho^{m}$ values in Figs.~\ref{fig:Velectro_rho_Asdiff} and~\ref{fig:Velectro_rho_Aldiff}), and this translates to a larger effect on the interface dipole for Ge diffusion. \subsubsection{Linear response theory applied to interface diffusion} \label{LRT} Here we derive a simple model to describe the relationship between interface dipole and mixed layer stoichiometry across a heterovalent interface. We follow the linear response theory approach\cite{Peressi98} put forward by Peressi {\it et al.}, which is based on the model for polar interfaces proposed by Harrison \textit{et al.}\cite{Harrison78}. Within this approach, the interface is treated as a perturbation of a periodic reference crystal and the potential lineup consists of isovalent (i.e., interface-independent) and heterovalent terms $dV$ = $dV_\text{iso}$ + $dV_\text{het}$. $dV_\text{het}$ is then obtained via the Poisson equation from the additional nuclear charges (carried by the perturbation) at each site. 
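The bookkeeping behind this approach can be previewed with a hedged toy calculation: each monolayer carries an excess nuclear charge relative to the reference crystal, and $dV_\text{het}$ is proportional to the sum of the charge accumulated from left to right across the interface. The prefactor is set to unity here and the layer charges are illustrative.

```python
# Hedged toy calculation (prefactor = 1): dV_het is proportional to the sum
# of the left-to-right accumulated excess nuclear charge per monolayer.

def dipole_sum(layer_charges):
    """Sum of left-to-right accumulated charge; proportional to dV_het."""
    total, accumulated = 0.0, 0.0
    for q in layer_charges:
        accumulated += q
        total += accumulated
    return total

# Al diffusing into Ge with mixed-layer stoichiometries a' = 0.1, a = 0.3:
# excess charges per layer are (a', a - a', 0.5 - a) followed by the bulk
# AlAs tail (-1, +1, ...), whose accumulated charge pairs off to zero.
a, a_prime = 0.3, 0.1
charges = [a_prime, a - a_prime, 0.5 - a, -1.0, +1.0, -1.0, +1.0]
print(dipole_sum(charges))  # ~0.9, i.e. 0.5 + a + a'
```

The abrupt (ML0) sequence $[0, 0, 0.5, -1, +1, \dots]$ gives 0.5 in the same units, so diffusing the mixed layers deeper adds exactly the linear-in-stoichiometry contributions derived next.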
For Al diffusing into Ge, we consider bulk Ge as the reference crystal. Hence, the additional nuclear charge per site along each monolayer across the interface is (schematically) \begin{equation} \resizebox{\columnwidth}{!}{$\dots -\underbrace{\text{Ge}}_{0}-\underbrace{\text{Ge}}_{0}- \left\langle -\underbrace{\text{Ge}_{1-a'}\text{Al}_{a'}}_{(a')} \right\rangle - \left\langle \underbrace{\text{Ge}_{1-(a-a')}\text{Al}_{(a-a')}}_{(a-a')} \right\rangle - \left\langle \underbrace{\text{Ge}_{0.5+a}\text{Al}_{0.5-a}}_{0.5-a} \right\rangle - \underbrace{\text{As}}_{-1} - \underbrace{\text{Al}}_{+1}- \cdots$} \end{equation} where 0 $\leq a' \leq a \leq 0.5$ is the mixed layer stoichiometry. The accumulated charge, found by summing adjacent sites from left to right, is then used to find the net contribution of interfacial charge to $dV_\text{het}$, which for Al diffusing into Ge becomes \begin{equation} dV_\text{het}(\text{AlGe}) = \frac{\pi e^2}{2a_0\epsilon}(0.5 + a + a'), \end{equation} where $e$ is the electron charge, $a_0$ the lattice constant of a GeAlAs alloy\footnote{Since Ge and AlAs are lattice matched this will be the bulk lattice constant of either Ge or AlAs} and $\epsilon$ the dielectric constant of the same alloy obtained as an average of the Ge and the AlAs dielectric constant. This is similar for the case of As diffusing into Ge, but with the opposite sign, \begin{equation} dV_\text{het}(\text{AsGe}) = -\frac{\pi e^2}{2a_0\epsilon}(0.5 + a + a'). \end{equation} For Ge diffusing into Al terminated AlAs, a similar line of reasoning results in the following interface contribution to the potential line up (again, $0 \leq b \leq 0.5$ is used here for the mixed stoichiometry instead of $a$) \begin{equation}\label{eq:GeinAlAs} dV_\text{het}(\text{GeAl}) = \frac{\pi e^2}{2a_0\epsilon}(0.5 + 2b), \end{equation} and for Ge diffusing into As terminated AlAs, \begin{equation}\label{eq:GeinAsAl} dV_\text{het}(\text{GeAs}) = -\frac{\pi e^2}{2a_0\epsilon}(0.5 + 2b). 
\end{equation} By considering $dV_\text{iso}$ as the average of the abrupt (ML0, $a$ = $a'$ = $b$ = 0) cases for Al and As terminated AlAs, the isovalent contribution to the VBO follows from Eq.~(\ref{eq.1}). $dV_\text{het}$ for either Al(+) or As($-$) terminated ML0 cases is then obtained from the difference $dV - dV_\text{iso} = dV_\text{het}$, giving a value for the proportionality constant $\pi\,e^{2}/(2a_0\epsilon)=$~0.27~eV in Ge/AlAs and 0.245 eV in $\varepsilon$-Ge/In$_{0.25}$Al$_{0.75}$As. The contribution due to diffusing away from the ML0 plane is then simply an additional 0.27 (0.245) eV per monolayer of diffusion through the Ge/AlAs ($\varepsilon$-Ge/In$_{0.25}$Al$_{0.75}$As) interface, which results in a linear relation between the VBO and stoichiometry of mixed layers containing diffused impurities.\footnote{We have also derived $\pi\,e^{2}/(2a_0\epsilon)$ from first principles by substituting into the expression the $a_0$ and $\epsilon$ values estimated from the LDA. In this case we obtained 0.29~eV and 0.27~eV, respectively, which would give comparable, though somewhat worse, agreement with the values from the simulations.} Results show that this model agrees qualitatively with VBOs obtained from supercell calculations. We observe generally a better agreement between this model and calculations involving explicit interface configurations (non-VCA) than with VBOs obtained from the VCA representation of the interfaces (with the exception of Al diffusing two monolayers into Ge). The latter is particularly true for Ge atoms diffusing into (In)AlAs, which again shows the weakness of using VCA to represent mixed layers in an ionically bonded material. \section{Conclusions} \label{conclusions} First-principles calculations of valence and conduction band offsets have been performed using the DFT+$GW$ approach. 
The $GW$ correction was applied to obtain accurate bulk bandgaps, while DFT within the LDA formulation provided the interfacial profile of the self-consistent potential from which the interface dipole $dV$ can be derived. By varying the stoichiometry of monolayers near the interface using the VCA, the atomic diffusion away from the abrupt interfacial layer can be modeled. Within this approach, the $dV$ term can be changed depending on the interlayer stoichiometry with a sensitivity large enough to, in some cases, change the character of the band alignment. The results of this work are qualitatively consistent with the linear response theory developed for semiconductor interfaces,\cite{Resta89} where for heterovalent interfaces the change in interface dipole (and hence the change in band offset) should be linear in the stoichiometries of mixed layers.\cite{Peressi98,Bratina94} We attribute the deviations from linearity that show an apparent bowing effect, especially for the cases of Ge diffusing into the III-V slab, to the structural errors associated with the VCA. The VCA has also been used to model the group-III alloy of the III-V slab, thus introducing a further error in the calculations of the band offsets. This error has been investigated in Ref.~\onlinecite{Greene-Diniz16} for In$_{0.5}$Ga$_{0.5}$As. By comparing with the most accurate SQS, it was found that most of the error in the VCA resides in the indirect L-point satellite valley band gap, while the minimum error of the VCA corresponds to the direct $\Gamma$-point band gap. As indicated by the early studies on SQS models, errors in band gaps obtained by averaging between the constituent binary materials generally follow the same trend for different III-V materials,\cite{Wei90} hence the trends in VCA errors should be transferable to In$_{x}$Al$_{1-x}$As. 
For $x=25\%$---the highest In content alloy studied in the present work---the band gap is direct at the $\Gamma$-point, and we expect the least amount of error. While this error is not negligible (likely $\lessapprox$ 0.1 eV), it is not large enough to change qualitative conclusions and trends of the present study. Future studies will involve a wider range of explicit models of disordered configurations for the interface, in which various SQS\cite{Zunger90,Wei90} representations of the interfacial layers will be compared against each other. This will shed more light on the band offset bowing effect and, by comparison, more accurately quantify the error in the band offsets when representing the mixed layer stoichiometries by the VCA. SQSs will also be used to model the group-III alloy of the III-V slab. While the importance of the interface structure for heterovalent interfaces, along with the associated departure from the band offset transitivity seen for many isovalent interfaces, is by now well established, this work shows that variations in the interface stoichiometry can be enough to dramatically change the band alignment characteristics for the lattice (mis)matched ($\varepsilon$-)Ge/In$_{x}$Al$_{1-x}$As(001) interface. Combined with the experimentally validated conduction and valence band offsets achievable from DFT+$GW$, these results show that, due to variations in the interface dipole, both type-I and type-II band offsets should be observable for this interface, depending on the details of the interface structure. Calculations of the formation energetics of diffused substitutional impurities indicate consistently greater stability of impurities which involve As bonding to Ge, for both materials comprising the interface. 
For the commonly used experimental approach of growing Ge on As-rich (nominally As-terminated) III-As substrates, from which atomically sharp interfaces can be achieved, these results are consistent with the observation of type-I band offsets for ($\varepsilon$-)Ge/In$_{x}$Al$_{1-x}$As(001) for 0 $\leq$ $x$ $\leq$ 0.25. \vfill \begin{acknowledgments} The authors thank J. C. Abreu for providing the SQS models. The authors are grateful for helpful discussions with J. C. Abreu, F. Murphy-Armando, T. J. Ochalski, D. Saladukha, M. B. Clavel, M. K. Hudait, and J. Kohanoff. The authors acknowledge the use of computational facilities at the Atomistic Simulation Centre---Queen's University Belfast. This work is financially supported by the Department for the Employment of Northern Ireland and InvestNI through the US-Ireland R\&D partnership programme (USI-073). \end{acknowledgments}
\section{Introduction} In a previous paper~\cite{Han:2015hba}, we presented a comprehensive analysis of the LHC signatures of the type II seesaw model of neutrino masses in the nondegenerate case of the triplet scalars. In this companion paper, another important signature---the pair and associated production of the neutral scalars---is explored in great detail. This is correlated to the pair production of the standard model (SM) Higgs boson, $h$, which has attracted considerable theoretical and experimental interest~\cite{Aad:2013wqa,Chatrchyan:2013lba} since its discovery~\cite{Aad:2012tfa,Chatrchyan:2012ufa}, because the pair production can be used to gain information on the electroweak symmetry breaking sector~\cite{Plehn:1996wb}. Since any new ingredients in the scalar sector can potentially alter the production and decay properties of the Higgs boson, a thorough examination of these properties offers a diagnostic tool for physics effects beyond the SM. The Higgs boson pair production has been well studied for collider phenomenology in the framework of the SM and beyond~\cite{Plehn:1996wb,Dawson:1998py,Djouadi:1999rca,Baur:2002qd,Asakawa:2010xj, Dolan:2012rv,Papaefstathiou:2012qe,Goertz:2013kp,Gupta:2013zza,Barr:2013tda, deFlorian:2013jea,Dolan:2013rja,Barger:2013jfa,Englert:2014uqa,Liu:2014rva,deLima:2014dta, Barr:2014sga}, and extensively studied in various new physics models~\cite{Dolan:2012ac,Arhrib:2009hc,Craig:2013hca,Hespel:2014sla,Kribs:2012kz, Cao:2013si,Nhung:2013lpa,Ellwanger:2013ova,Bhattacherjee:2014bca,Christensen:2012si, Wu:2015nba,Cao:2014kya,Han:2013sga,Gouzevitch:2013qca,No:2013wsa,Grober:2010yv, Gillioz:2012se,Liu:2013woa,Arhrib:2008pw,Heng:2013cya,Dawson:2012mk, Chen:2014xwa,Dib:2005re,Yang:2014gca,Chen:2014ask}, as well as in the effective field theory approach of anomalous couplings~\cite{Contino:2012xk,Nishiwaki:2013cma,Liu:2014rba,Dawson:2015oha} and effective operators~\cite{Azatov:2015oxa,Goertz:2014qta,Pierce:2006dh,Kang:2015nga,He:2015spf}. 
The pair production of the SM Higgs boson proceeds dominantly through the gluon fusion process~\cite{Plehn:1996wb,Djouadi:1999rca}, and has a cross section at the $14~{\rm TeV}$ LHC (LHC14) of about $18~\textrm{fb}$ at leading order~\cite{Plehn:1996wb}. \footnote{This number is modified to $33~\textrm{fb}$ at next-to-leading order~\cite{Dawson:1998py} and to $40~\textrm{fb}$ at next-to-next-to-leading order~\cite{deFlorian:2013jea}.} It can be utilized to measure the Higgs trilinear coupling. A series of studies have surveyed its observability in the $b\bar{b}\gamma\gamma$, $b\bar{b}\tau^+\tau^-$, $b\bar{b}W^+W^-$, $b\bar{b}b\bar{b}$, and $WW^*WW^*$ signal channels~\cite{Baglio:2012np,Dolan:2012rv,Gouzevitch:2013qca,Papaefstathiou:2012qe, Goertz:2013kp,deLima:2014dta,Barr:2014sga}. For the theoretical and experimental status of the Higgs trilinear coupling and pair production at the LHC, see Refs.~\cite{Baglio:2012np,Dawson:2013bba}. In summary, at the $14~{\rm TeV}$ LHC with an integrated luminosity of $3000~\textrm{fb}^{-1}$ (LHC14@3000), the trilinear coupling could be measured at an accuracy of $\sim 40\%$~\cite{Barger:2013jfa}, and thus leaves potential space for new physics. As we pointed out in Ref.~\cite{Han:2015hba}, in the negative scenario of the type II seesaw model where the doubly charged scalars $H^{\pm\pm}$ are the heaviest and the neutral ones $H^0/A^0$ the lightest, i.e., $M_{H^{\pm\pm}}>M_{H^\pm}>M_{H^0/A^0}$, the associated $H^0A^0$ production gives the same signals as the SM Higgs pair production while enjoying a larger cross section. The leading production channel is the Drell-Yan process $pp\to Z^*\to H^0A^0$, with a typical cross section $20$-$500~\textrm{fb}$ in the mass region {$130$-$300~{\rm GeV}$}. Additionally, there exists a sizable enhancement from the cascade decays of the heavier charged scalars, which also gives some indirect evidence for these particles. 
The purpose of this paper is to examine the importance of the $H^0A^0$ production with an emphasis on the contribution from cascade decays and to explore their observability. The paper is organized as follows. In Sec.~\ref{decay}, we summarize the relevant part of the type II seesaw and explore the decay properties of $H^0,~A^0$ in the negative scenario. Sections \ref{Eh} and \ref{signal} contain our systematical analysis of the impact of cascade decays on the $H^0/A^0$ production in the three signal channels, $b\bar{b}\gamma\gamma$, $b\bar{b}\tau^+\tau^-$, and $b\bar{b}\ell^+\ell^-\cancel{E}_T$. We discuss the observability of the signals and estimate the required integrated luminosity for a certain mass reach and significance. Discussions and conclusions are presented in Sec.~\ref{Dis}. In most cases, we will follow the notations and conventions in Ref.~\cite{Han:2015hba}. \section{Decay Properties of Neutral Scalars in the Negative Scenario} \label{decay} The type II seesaw and its various experimental constraints have been reviewed in our previous work \cite{Han:2015hba}. Here we recall the most relevant content that is necessary for our study of the decay properties of the scalars in this section and of their detection at the LHC in later sections. The type II seesaw model introduces an extra scalar triplet $\Delta$ of hypercharge two~\cite{typeII} on top of the SM Higgs doublet $\Phi$ of hypercharge unity. Writing $\Delta$ in matrix form, the most general scalar potential is \begin{eqnarray} \label{Vpotential} V(\Phi,\Delta)&=& m^2\Phi^\dagger\Phi+M^2\text{Tr}(\Delta^\dagger\Delta)+\lambda_1(\Phi^\dagger\Phi)^2 +\lambda_2\left(\text{Tr}(\Delta^\dagger\Delta)\right)^2 +\lambda_3\text{Tr}(\Delta^\dagger\Delta)^2\notag\\ &&+\lambda_4(\Phi^\dagger\Phi)\text{Tr}(\Delta^\dagger\Delta) +\lambda_5\Phi^\dagger\Delta\Delta^\dagger\Phi+\left(\mu \Phi^T i\tau^2\Delta^\dagger \Phi+\text{H.c.}\right). 
\end{eqnarray} As in the SM, $m^2 < 0$ is assumed to trigger spontaneous symmetry breaking, while $M^2 > 0$ sets the mass scale of the new scalars. The vacuum expectation value (vev) $v$ of $\Phi$ then induces via the $\mu$ term a vev $v_\Delta$ for $\Delta$. The components of equal charge (and also of identical $CP$ in the case of neutral components) in $\Delta$ and $\Phi$ then mix into physical scalars $H^\pm$; $A^0$; $H^0,~h$ and would-be Goldstone bosons $G^{\pm;0}$, with the mixing angles specified by (see, for instance, Refs.~\cite{Arhrib:2011uy,Aoki:2012jj}) \begin{align} \tan \theta_+ = \frac{\sqrt{2} v_{\Delta}}{v},~ \tan \alpha = \frac{2 v_{\Delta}}{v},~ \tan 2\theta_0 = \frac{2v_{\Delta}}{v} \frac{v^2(\lambda_4+\lambda_5)-2M_{\Delta}^2} {2v^2\lambda_1-M_{\Delta}^2-v_\Delta^2(\lambda_2+\lambda_3)}, \label{mixangles} \end{align} where an auxiliary parameter is introduced for convenience, \begin{align} M_\Delta^2=\frac{v^2\mu}{\sqrt{2}v_\Delta}. \end{align} To a good approximation, the SM-like Higgs boson $h$ has the mass $M_h \approx\sqrt{2\lambda_1}v$, the new neutral scalars $H^0,~A^0$ have an equal mass $M_{H^0}\approx M_{A^0} \approx M_{\Delta}$, and the new scalars of various charges are equidistant in squared masses: \begin{equation} M^2_{H^{\pm\pm}}-M^2_{H^{\pm}}\approx M^2_{H^{\pm}}-M^2_{H^0/A^0}\approx -\frac{1}{4}\lambda_5v^2. \label{massrelation} \end{equation} There are thus two scenarios of spectra, positive or negative, according to the sign of $\lambda_5$. For convenience, we define $\Delta M\equiv M_{H^\pm}-M_{H^0/A^0}$. 
\begin{figure}[!htbp] \begin{center} \includegraphics[width=0.45\linewidth]{BRHp1_new.pdf} \includegraphics[width=0.45\linewidth]{BRHpp1.pdf} \includegraphics[width=0.45\linewidth]{BRHp2.pdf} \includegraphics[width=0.45\linewidth]{BRHpp2.pdf} \includegraphics[width=0.45\linewidth]{BRHp3.pdf} \includegraphics[width=0.45\linewidth]{BRHpp3.pdf} \end{center} \caption{Branching ratios of $H^{\pm}$ and $H^{\pm\pm}$ versus $M_{\Delta}$ at some benchmark points of $\Delta M$ and $v_{\Delta}$: $(\Delta M,v_\Delta)=(5,0.01),~(10,0.01),~(5,0.001)~{\rm GeV}$, from the upper to the lower panels. \label{brhp}} \end{figure} In the rest of this section, we discuss the decay properties of the new scalars in the negative scenario with an emphasis on $H^0$ and $A^0$. The explicit expressions for the relevant decay widths can be found in Refs.~\cite{Djouadi:2005gj,Aoki:2011pz,Chabab:2014ara}. It has been shown that $H^0/A^0$ decays dominantly into neutrinos for $v_{\Delta}<10^{-4}~{\rm GeV}$~\cite{Perez:2008ha}, resulting in totally invisible final states. We will restrict ourselves to $v_{\Delta}\gg 10^{-4}~{\rm GeV}$ in this work, where $H^0/A^0$ dominantly decays into visible particles. Before we detail their decay properties, we give a brief account of the cascade decays of the charged scalars. The branching ratios of the cascade decays are controlled by the three parameters, $v_{\Delta}$, $\Delta M$, and $M_{\Delta}$. The cascade decays dominate in the moderate region of $v_{\Delta}$ and for $\Delta M$ not too small, where a minimum value of $\Delta M\sim2~{\rm GeV}$ appears around $v_{\Delta}\sim10^{-4}~{\rm GeV}$~\cite{Perez:2008ha, Aoki:2011pz, Han:2015hba,Melfo:2011nx}. In Fig.~\ref{brhp}, the branching ratios of $H^{\pm}$ and $H^{\pm\pm}$ are shown as a function of $M_{\Delta}$ at some benchmark points of $v_{\Delta}$ and $\Delta M$. 
Generally speaking, in the mass region $M_\Delta=130$-$300~{\rm GeV}$, the cascade decays are dominant for a relatively large mass splitting $\Delta M$ (as shown in the middle panel of Fig.~\ref{brhp}) or a relatively small $v_{\Delta}$ (in the lower panel). \subsection{$H^0$ decays} At tree level, $H^0$ can decay to $f\bar{f}~(f=q,l)$, $\nu\nu$, $W^{+}W^{-}$, $ZZ$, and $hh$. It can also decay to $gg$, $\gamma\gamma$, and $Z\gamma$ through radiative effects. Similarly, $A^0 \to f\bar{f}$, $\nu\nu$, $Zh$ at tree level, and it has the same decay modes as $H^0$ at the loop level. Since we have chosen $v_{\Delta}\gg 10^{-4}~{\rm GeV}$, the neutrino mode can be safely neglected for both $H^0$ and $A^0$. Previous work usually concentrated on the decoupling region where the neutral scalars $H^0/A^0$ are much heavier than the light $CP$-even Higgs $h$ and the scalar self-couplings $\lambda_i$ are taken to be zero for simplicity~\cite{Perez:2008ha}. In this case, the mixing angle $\theta_0\approx\alpha$, and the $H^0W^+W^-$ coupling [being proportional to $\sin(\alpha-\theta_0)$] tends to vanish. As a consequence, the $W$-pair mode is absent and the dominant channels are $H^0 \to hh$, $ZZ$ for a heavy $H^0$. In contrast, we take into account the effect of scalar self-interactions and focus on the nondecoupling regime, i.e., $H^0/A^0$ are not much heavier than $h$. For illustration, we choose the benchmark values $v_{\Delta}=10^{-3}~{\rm GeV}$, $\Delta M=5~{\rm GeV}$; then, $\lambda_5$ is determined by Eq.~(\ref{massrelation}) upon specifying $M_\Delta$. \footnote{As pointed out in Ref.~\cite{Aoki:2011pz}, varying $v_{\Delta}$ in the range $10^{-3}$-$1~{\rm GeV}$ would not change the branching ratios significantly.} To investigate the effect of the scalar self-interactions, we note the following features in the decays of $H^0$. 
1) The decay widths of $H^0 \to f\bar{f},~gg$ differ from those of $h$ only by a factor of $\sin^2\theta_0$, which leads to similar behavior for $H^0$ and $h$. 2) The only free parameter for the mixing between $H^0$ and $h$ is $\lambda_4$, because [as shown in Eq.~(\ref{mixangles})] the impact of $\lambda_{2,3}$ is suppressed by a small $v_{\Delta}$ and a relatively large mass difference between $M_{\Delta}$ and $M_h$ while $\lambda_1$ is fixed by $M_h$. 3) $\lambda_4$ enters the $H^0W^+W^-$ and $H^0ZZ$ couplings and thus affects the decays $H^0\to W^+W^-,~ZZ$. 4) The $H^0hh$ coupling simplifies for $v_{\Delta}\ll v$ such that the only free parameter in the decay $H^0\to hh$ is again $\lambda_4$. As a consequence of these features, we shall choose $\lambda_4$ as a free parameter and vary it in the range $[-1.0,1.0]$, and fix the couplings $\lambda_2=\lambda_3=0.1$ which are involved in loop-induced decays. \begin{figure}[!htbp] \begin{center} \includegraphics[width=0.43\linewidth]{BRH0bb.pdf} \includegraphics[width=0.45\linewidth]{BRH0tt.pdf} \end{center} \caption{Branching ratios of $H^0\to b\bar{b}$ and $H^0\to t\bar{t}$ as a function of $M_{H^0}$ for various values of $\lambda_4$. \label{brh0tt}} \end{figure} We first examine the branching ratios of $H^0\to f\bar{f}$. BR($H^0\to b\bar{b}$) and BR($H^0\to t\bar{t}$) are plotted in Fig.~\ref{brh0tt} for different mass regions of $H^0$. \footnote{The influence of $\lambda_4$ for light fermions $b,c,\tau,\mu$ and gluons is similar, so we only present BR($H^0\to b\bar{b}$) in Fig.~\ref{brh0tt}.} It is clear that the variation of BR($H^0\to b\bar{b}$) is more dramatic for $\lambda_4>0$. The maximum of BR($H^0\to b\bar{b}$) appears at $\lambda_4\approx0.5$. Obviously, BR($H^0\to b\bar{b}$) is a nonmonotonic function of $\lambda_4$, while BR($H^0\to t\bar{t}$) monotonically increases with $\lambda_4$. 
As will be discussed later, this different behavior in the two mass regions is due mainly to a zero in the $H^0ZZ$ coupling. \begin{figure}[!htbp] \begin{center} \includegraphics[width=0.45\linewidth]{BRH0WW.pdf} \includegraphics[width=0.45\linewidth]{BRH0hh.pdf} \includegraphics[width=0.45\linewidth]{BRH0ZZ1.pdf} \includegraphics[width=0.45\linewidth]{BRH0ZZ2.pdf} \end{center} \caption{Left: Branching ratios of $H^0\to W^+W^-,~ZZ$ as a function of $M_{H^0}$ in the mass region $130$-$300~{\rm GeV}$. Right: Branching ratios of $H^0\to hh,~ZZ$ as a function of $M_{H^0}$ in the mass region $200$-$1000~{\rm GeV}$. \label{brh0WW}} \end{figure} Now we study the bosonic decays $H^0\to W^+W^-,~ZZ,~hh$. In the left panel of Fig.~\ref{brh0WW}, we present the branching ratios of $H^0\to W^+W^-,~ZZ$ in the mass region $130$-$300~{\rm GeV}$. For most values of $\lambda_4$, BR($H^0\to W^+W^-$) increases with $M_{H^0}$ when $M_{H^0}<2M_{W}$, and varying $\lambda_4$ for $\lambda_4>0$ changes it considerably. $\lambda_4$ has a strong impact on BR($H^0\to W^+W^-$) in the mass region $2M_{Z}<M_{H^0}<2M_{h}$ where the decay channel dominates overwhelmingly for $\lambda_4<0$ but becomes negligible for $\lambda_4$ approaching about $0.5$. However, once the $H^0\to hh$ channel is opened, $H^0\to W^+W^-$ is suppressed significantly independent of $\lambda_4$. The decay $H^0\to ZZ$ cannot dominate when $M_{H^0}<2M_{W}$. In the mass region $2M_{Z}<M_{H^0}<2M_{h}$, it is complementary with the $W^+W^-$ channel, so their behavior is just opposite. More interestingly, there is a zero point for the $H^0ZZ$ coupling, which is proportional to $(v\sin\theta_0-4v_{\Delta}\cos\theta_0)$. According to Eq. (\ref{mixangles}), one obtains the corresponding $M_{\Delta}$ at the zero: \begin{equation} M_{\Delta}^0(ZZ)=\sqrt{2M_h^2-\frac{1}{2}(\lambda_4+\lambda_5)v^2}. 
\end{equation} Note that the above relation only holds for $\lambda_4+\lambda_5<2M_h^2/v^2\approx0.5$, since we are working in the scenario where $M_{\Delta}>M_h$. The existence of the zero coupling explains the presence of the nodes in BR($H^0\to ZZ$) for $\lambda_4\leq0$. \begin{figure}[!htbp] \begin{center} \includegraphics[width=0.45\linewidth]{BRH0AA.pdf} \includegraphics[width=0.44\linewidth]{BRH0ZA.pdf} \end{center} \caption{Branching ratios of $H^0\to \gamma\gamma,~Z\gamma$ as a function of $M_{H^0}$ for various sets of $\lambda_{2,4}$ values. \label{brh0AA}} \end{figure} In the right panel of Fig.~\ref{brh0WW}, BR($H^0\to hh,ZZ$) are shown in the mass region $200$-$1000~{\rm GeV}$. When $M_{H^0}>2M_{h}$, the dependence on $\lambda_4$ is simple: a larger $\lambda_4$ corresponds to a smaller BR($H^0\to hh$) and a larger BR($H^0\to ZZ$). It is clear that $\lambda_4$ has a more significant impact in the mass region $200$-$350~{\rm GeV}$, and varying $\lambda_4$ could change BR($H^0\to ZZ$) from $0$ to $0.9$. Once $M_{H^0}$ exceeds $2M_t$, BR($H^0\to hh,ZZ$) evolve smoothly as $M_{H^0}$ increases. There also exists a zero point for the $H^0hh$ coupling, which can be obtained as for the $ZZ$ channel: \begin{equation} M_{\Delta}^0(hh)=\sqrt{2(\lambda_4+\lambda_5)v^2-2M_h^2}~, \end{equation} which is valid for $\lambda_4+\lambda_5>3M_h^2/2v^2\approx0.375$. Finally, we investigate the loop-induced decays, $H^0\to \gamma\gamma,~Z\gamma$. In addition to the usual contributions from the top quark and $W$ boson, the new charged scalars $H^{\pm}$ and $H^{\pm\pm}$ also contribute to the decays. These new terms involve the $H^0H^+H^-$ and $H^0H^{++}H^{--}$ couplings, which are proportional to \begin{eqnarray} H^0H^+H^-&:& [(2\lambda_2+2\lambda_3-\lambda_5)\sin\alpha\cos\theta_0-(2\lambda_4+\lambda_5)\cos\alpha\sin\theta_0], \nonumber \\ H^0H^{++}H^{--}&:&(\lambda_2\sin\alpha\cos\theta_0-\lambda_4\cos\alpha\sin\theta_0). 
\end{eqnarray} One therefore has to consider the scalar self-couplings $\lambda_{2,3}$. For simplicity, we set $\lambda_2=\lambda_3$ and vary them from $-3.0$ to $3.0$. In Fig. \ref{brh0AA}, we display BR($H^0\to \gamma\gamma$) and BR($H^0\to Z\gamma$) versus $M_{H^0}$ for some typical sets of $\lambda_{2,4}$ values. Both branching ratios vary over three orders of magnitude in this parameter region. The resulting enhancement relative to $h\to\gamma\gamma$ in the SM is significant: the maximal enhancement reaches the level of $9\%$ for the $H^0\to \gamma\gamma$ channel at $M_{H^0}=130~{\rm GeV}$, and of $0.7\%$ for the $H^0\to Z\gamma$ channel at $M_{H^0}\approx140~{\rm GeV}$. \subsection{$A^0$ decays} \begin{figure}[!htbp] \begin{center} \includegraphics[width=0.43\linewidth]{BRA0bb.pdf} \includegraphics[width=0.45\linewidth]{BRA0tt.pdf} \end{center} \caption{Branching ratios of $A^0\to b\bar{b},~t\bar{t}$ as a function of $M_{A^0}$ for various values of $\lambda_4$. \label{bra0tt}} \end{figure} Similar to $H^0$, the decay widths of $A^0 \to f\bar{f},~gg$ differ from those of $h$ by a factor of $\sin^2\alpha$ with $\alpha$ being given in Eq. (\ref{mixangles}). Moreover, the only vertex which involves $\lambda_i$ is the $A^0Zh$ coupling, proportional to $(\cos\theta_0\sin\alpha-2\sin\theta_0\cos\alpha)$. As a consequence, one can only choose $\lambda_4$ as a free parameter to illustrate the influence of scalar interactions. In this section, we also vary $\lambda_4$ from $-1.0$ to $1.0$ and take the same benchmark values for $v_{\Delta}$ and $\Delta M$ as for the $H^0$ decays. In the left panel of Fig. \ref{bra0tt}, we present BR($A^0\to b\bar{b}$) as a function of $M_{A^0}$.~\footnote{As before, the influence of $\lambda_4$ on the $A^0\to f \bar{f},~gg$ channels is similar to that on the $b\bar{b}$ mode.} For a fixed value of $\lambda_4$, BR($A^0\to b\bar{b}$) decreases as $M_{A^0}$ increases.
The dependence of BR($A^0\to b\bar{b}$) on $\lambda_4$ is simple: the larger $\lambda_4$, the larger BR($A^0\to b\bar{b}$). Moreover, BR($A^0\to b\bar{b}$) can be dominant for $\lambda_4=1.0$ as long as $A^0\to Zh$ is not fully open. The right panel of Fig. \ref{bra0tt} shows BR($A^0\to t\bar{t}$), which is very similar to BR($H^0\to t\bar{t}$). \begin{figure}[!htbp] \begin{center} \includegraphics[width=0.44\linewidth]{BRA0Zh1.pdf} \includegraphics[width=0.45\linewidth]{BRA0Zh2.pdf} \end{center} \caption{Branching ratios of $A^0\to Zh$ as a function of $M_{A^0}$ for various values of $\lambda_4$. \label{bra0zh}} \end{figure} \begin{figure}[!htbp] \begin{center} \includegraphics[width=0.43\linewidth]{BRA0AA.pdf} \includegraphics[width=0.45\linewidth]{BRA0ZA.pdf} \end{center} \caption{Branching ratios of $A^0\to \gamma\gamma$ and $A^0\to Z\gamma$ as a function of $M_{A^0}$ for various values of $\lambda_4$. \label{bra0aa}} \end{figure} We then study the most important decay $A^0\to Zh$. In Fig. \ref{bra0zh}, we present BR($A^0\to Zh$) as a function of $M_{A^0}$ in the low-mass region ($130$-$300~{\rm GeV}$) and high-mass region ($300$-$1000~{\rm GeV}$), respectively. The evolution of BR($A^0\to Zh$) with $M_{A^0}$ and $\lambda_4$ is just opposite to that of $A^0\to b\bar{b}~(t\bar t)$ in the low- (high-) mass region. The variation of BR($A^0\to Zh$) with $\lambda_4$ is dramatic below the $Zh$ threshold. In particular, near the $Zh$ threshold BR($A^0\to Zh)\sim 1.0$ for $\lambda_4=-1.0$, while BR($A^0\to Zh$) tends to vanish for $\lambda_4=1.0$, which corresponds to the zero point of the $A^0Zh$ coupling: \begin{equation} M_{\Delta}^0(Zh)=\sqrt{(\lambda_4+\lambda_5)v^2-M_h^2}, \end{equation} with $\lambda_4+\lambda_5>2M_h^2/v^2\approx0.5$. BR($A^0\to Zh$) is totally dominant in the mass region between the $Zh$ and $t\bar t$ thresholds, and becomes comparable to BR($A^0\to t\bar{t}$) when $M_{A^0}>2M_t$.
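As a quick numerical cross-check of the three coupling-zero formulas for $H^0ZZ$, $H^0hh$, and $A^0Zh$, a minimal sketch (our own illustrative script, not part of the analysis; it assumes $M_h=125~{\rm GeV}$ and $v=246~{\rm GeV}$, and the function names are ours):

```python
import math

# Illustrative inputs: SM-like Higgs mass and vev in GeV
MH, V = 125.0, 246.0

def m_delta_zero_zz(lam45):
    """Zero of the H0ZZ coupling; requires lam4+lam5 < 2*MH^2/V^2 (~0.5)."""
    assert lam45 < 2 * MH**2 / V**2
    return math.sqrt(2 * MH**2 - 0.5 * lam45 * V**2)

def m_delta_zero_hh(lam45):
    """Zero of the H0hh coupling; requires lam4+lam5 > 3*MH^2/(2*V^2)."""
    assert lam45 > 3 * MH**2 / (2 * V**2)
    return math.sqrt(2 * lam45 * V**2 - 2 * MH**2)

def m_delta_zero_zh(lam45):
    """Zero of the A0Zh coupling; requires lam4+lam5 > 2*MH^2/V^2 (~0.5)."""
    assert lam45 > 2 * MH**2 / V**2
    return math.sqrt(lam45 * V**2 - MH**2)

# E.g. lam4 + lam5 = 0.6 puts the A0Zh node near the Zh threshold region
print(round(m_delta_zero_zh(0.6), 1))  # 143.8
```

The assertions encode the validity windows stated in the text for each formula.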
Finally, we study the one-loop-induced decays, $A^0\to \gamma\gamma,~Z\gamma$. These two channels can only be induced by the top quark in the loop since the $A^0W^+W^-$, $A^0H^+H^-$, and $A^0H^{++}H^{--}$ couplings are absent in the $CP$-conserving case. In Fig. \ref{bra0aa}, both BR($A^0\to \gamma\gamma$) and BR($A^0\to Z\gamma$) are displayed. For $M_{A^0}$ below the $Zh$ threshold, the variation of BR($A^0\to \gamma\gamma$) with $\lambda_4$ grows as $M_{A^0}$ increases. BR($A^0\to\gamma\gamma$) could reach $9\times10^{-4}$ for $M_{A^0}\approx 210~{\rm GeV}$ and $\lambda_4=1.0$, which is much smaller than the maximum of BR($H^0\to\gamma\gamma$). The variation of BR($A^0\to Z\gamma$) with $\lambda_4$ is slightly steeper, with a maximum of $1.2\times10^{-4}$ at $M_{A^0}\approx 215~{\rm GeV}$ and $\lambda_4=1.0$. \begin{figure}[!htbp] \begin{center} \includegraphics[width=0.45\linewidth]{BRH0.pdf} \includegraphics[width=0.45\linewidth]{BRA0.pdf} \end{center} \caption{Branching ratios of $H^0/A^0$ as a function of $M_{H^0/A^0}$ at the benchmark point in Eq.~(\ref{BP}). \label{brh0}} \end{figure} In the above, we have discussed the decay channels of $H^0$ and $A^0$ separately. We have shown that the scalar self-interactions have a large impact on their branching ratios. In Sec.~\ref{signal}, we will explore their LHC signatures. For this purpose, we choose the following benchmark values: \begin{equation}\label{BP} v_{\Delta}=0.001~{\rm GeV}, ~\Delta M=5~{\rm GeV}, ~\lambda_2=\lambda_3=0.1, ~\lambda_4=0.25. \end{equation} We set relatively small values of $v_{\Delta}$ and $\Delta M$ in order to obtain large cascade-decay rates of the charged scalars as well as a large enhancement of neutral scalar production. In Fig. \ref{brh0}, we display all relevant branching ratios versus $M_{H^0/A^0}$ for this benchmark model, which is to be simulated in Sec.~\ref{signal} for the LHC in the $b\bar{b}\gamma\gamma$, $b\bar{b}\tau^+\tau^-$, and $b\bar{b}W^+W^-$ signal channels.
\section{Production of Neutral Higgs from Cascade Decays}\label{Eh} We pointed out in Ref.~\cite{Han:2015hba} the importance of the associated $H^0A^0$ production in the nondegenerate case. To estimate the number of signal events, we simulated the signal channel $b\bar{b}\tau^+\tau^-$ at $M_{H^0/A^0}=130~{\rm GeV}$. We found that, with a much higher production cross section than the SM Higgs pair ($hh$) production, a $2.9\sigma$ excess in that signal channel is achievable for LHC14@300. In the present work, we are interested in the observability of the associated $H^0A^0$ production in the nondecoupling mass regime $(130$-$200~{\rm GeV})$. In Fig. \ref{cs} we first show the production cross sections for various scalar pairs at LHC14 versus $M_{\Delta}$ with a degenerate spectrum. As before, we incorporate the next-to-leading-order (NLO) QCD effects by multiplying all $q\bar{q}$ production channels by a $K$-factor of $1.3$~\cite{Dawson:1998py}. The $hh$ production through gluon-gluon fusion at NLO ($33~\textrm{fb}$) is also indicated (black dashed line) for comparison. One can see that the cross section for $H^0A^0$ is about $20$-$500~\textrm{fb}$ in the mass region $130$-$300~{\rm GeV}$, which is much larger than that of $hh$ production over most of the mass region and thus offers great discovery potential. \begin{figure}[!htbp] \begin{center} \includegraphics[width=0.45\linewidth]{CS1.pdf} \includegraphics[width=0.45\linewidth]{CS2.pdf} \end{center} \caption{Production cross sections for a pair of scalars at LHC14 versus $M_{\Delta}$ for a degenerate spectrum. The black dashed line is for the SM $hh$ production. \label{cs}} \end{figure} In general, the new scalars are nondegenerate for a nonzero $\lambda_5$. In the positive scenario where $H^{\pm\pm}$ are the lightest, the cascade decays of $H^\pm$ and $H^0/A^0$ can strengthen the observability of $H^{\pm\pm}$~\cite{Akeroyd:2011zza,Chun:2013vma}.
For the same reason, in the negative scenario where $H^0/A^0$ are the lightest, the charged scalars contribute instead to the production of $H^0/A^0$ through cascade decays such as $H^{\pm}\to H^0/A^0W^*$. In this work, we study these contributions in the same way as was done for the positive scenario in Refs.~\cite{Akeroyd:2011zza,Chun:2013vma}. We define the reference cross section $X_0$ for the standard Drell-Yan process \begin{equation} X_0=\sigma(pp\to Z^*\to H^0A^0), \end{equation} which is independent of the cascade decay parameters $v_{\Delta}$ and $\Delta M$. A detailed study of the $b\bar{b}\tau^+\tau^-$ signal for this process with $M_{\Delta}=130~{\rm GeV}$ can be found in Ref.~\cite{Han:2015hba}. Besides the above direct production, neutral scalars can also be produced from cascade decays of charged scalars. These extra production channels include $H^\pm H^0/A^0$, $H^+H^-$, $H^{\pm}H^{\mp\mp}$, and $H^{++}H^{--}$ followed by cascade decays of charged scalars. We consider first the associated $H^\pm H^0/A^0$ production followed by cascade decays of $H^\pm$, \begin{eqnarray} \nonumber pp\to W^*\to H^{\pm}H^0 \to H^0H^0 W^* &,&~~ pp\to W^*\to H^{\pm}H^0 \to A^0H^0 W^*,\\ pp\to W^*\to H^{\pm}A^0 \to H^0A^0 W^* &,&~~ pp\to W^*\to H^{\pm}A^0 \to A^0A^0 W^*, \end{eqnarray} resulting in three final states classified by a pair of neutral scalars: $A^0H^0$, $H^0H^0$, and $A^0A^0$. Since the last two originate only from cascade decays, any detection of these production channels would hint at the involvement of charged scalars.
Using the fact that \begin{eqnarray} \sigma(pp\to W^*\to H^\pm H^0)&\simeq&\sigma(pp\to W^*\to H^\pm A^0), \\ \mbox{BR}(H^\pm\to H^0 W^*)&\simeq&\mbox{BR}(H^\pm\to A^0 W^*), \end{eqnarray} as well as the narrow-width approximation, we calculate the production cross sections for these three final states: \begin{eqnarray} H^0A^0:X_1&=&2[\sigma(pp\to W^+\to H^+ H^0)+\sigma(pp\to W^-\to H^- H^0)]\times \mbox{BR}(H^\pm \to A^0 W^*), \\ H^0H^0:Y_1&=&[\sigma(pp\to W^+\to H^+ H^0)+\sigma(pp\to W^-\to H^- H^0)]\times \mbox{BR}(H^\pm \to H^0 W^*), \\ A^0A^0:Z_1&=&[\sigma(pp\to W^+\to H^+ A^0)+\sigma(pp\to W^-\to H^- A^0)]\times \mbox{BR}(H^\pm \to A^0 W^*). \end{eqnarray} The factor 2 in $X_1$ accounts for the equal contribution from the process with $H^0$ and $A^0$ interchanged. The relations $X_1=2Y_1=2Z_1$ actually hold for all four production channels, since for a given channel the same branching ratios (such as for $H^{\pm}\to H^0/A^0W^*$) are involved, \begin{equation}\label{XXX} X_i=2Y_i=2Z_i,~(i=1,2,3,4), \end{equation} where $X_i,~Y_i$, and $Z_i$ refer to the cross sections for $H^0A^0$, $H^0H^0$, and $A^0A^0$ production with the subscript $i=1,2,3,4$ denoting the production channels $H^\pm H^0/A^0$, $H^{+}H^{-}$, $H^{\pm}H^{\mp\mp}$, and $H^{++}H^{--}$, respectively. The relations imply that we may concentrate on the cross section of $H^0A^0$ production. Naively, one would expect the next most important channel to be $H^+H^-$ since it only involves two cascade decays: \begin{equation} X_2=2\sigma(pp\to \gamma^*/Z^*\to H^+H^-)\times \mbox{BR}(H^{\pm}\to H^0 W^*)\mbox{BR}(H^{\pm}\to A^0 W^*). \end{equation} However, as already mentioned in Ref. \cite{Akeroyd:2011zza}, a smaller coupling and destructive interference between the $\gamma^*$ and $Z^*$ exchanges make the cross section of $H^+H^-$ production an order of magnitude smaller than that of $H^0A^0$ even for a degenerate spectrum.
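As a sanity check of the relation $X_1=2Y_1=2Z_1$, a minimal numerical sketch (the cross sections and branching ratios below are placeholder values of our own choosing, picked only to exhibit the relation, not fit results):

```python
# Placeholder inputs (fb and dimensionless); only the relations matter here.
sigma_pH0 = 10.0   # sigma(pp -> W+ -> H+ H0) ~ sigma(pp -> W+ -> H+ A0)
sigma_mH0 = 6.0    # sigma(pp -> W- -> H- H0) ~ sigma(pp -> W- -> H- A0)
br_H = 0.4         # BR(H+- -> H0 W*), taken ~ BR(H+- -> A0 W*)
br_A = 0.4

X1 = 2 * (sigma_pH0 + sigma_mH0) * br_A   # H0 A0 final state (factor 2: H0<->A0)
Y1 = (sigma_pH0 + sigma_mH0) * br_H       # H0 H0 final state
Z1 = (sigma_pH0 + sigma_mH0) * br_A       # A0 A0 final state

print(X1, Y1, Z1)  # 12.8 6.4 6.4  (X1 = 2*Y1 = 2*Z1)
```

With the approximate equalities of the cross sections and branching ratios, the factor-of-two relation is manifest.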
Considering further suppression due to cascade decays, $X_2$ is not important for the enhancement of $H^0A^0$ production and can be safely neglected in the numerical analysis. \begin{figure}[!htbp] \begin{center} \includegraphics[width=0.45\linewidth]{X.pdf} \includegraphics[width=0.45\linewidth]{Xe.pdf} \end{center} \caption{Production cross sections for a pair of neutral scalars versus $M_{\Delta}$ at LHC14 and with $\Delta M=5~{\rm GeV}$, $v_{\Delta}=0.001~{\rm GeV}$. Left: The red solid (dashed) line corresponds to $X_0$ ($X$). Right: The red line corresponds to $H^0H^0/A^0A^0$ from cascade decays $Y/Z$, and the green line to $H^0A^0$ from cascade decays ($X_C$). The shaded regions are filled by scanning over $\Delta M$ and $v_\Delta$. \label{csx}} \end{figure} The contribution from $H^{\pm}H^{\mp\mp}$ is more important despite the fact that it involves three cascade decays: \begin{eqnarray} X_3&=& 2[\sigma(pp\to W^{-*}\to H^+H^{--})+\sigma(pp\to W^{+*}\to H^-H^{++})]\times\\ \nonumber &&\mbox{BR}(H^{\pm\pm}\to H^\pm W^*)\mbox{BR}(H^{\pm}\to H^0 W^*)\mbox{BR}(H^{\pm}\to A^0 W^*). \end{eqnarray} As shown in Fig.~\ref{cs}, $\sigma(pp\to W^*\to H^\pm H^{\mp\mp})$ is the largest for a degenerate mass spectrum. When cascade decays are dominant, the phase-space suppression of heavy charged scalars will be important. So we expect that the $H^0A^0$ production receives considerable enhancement from $H^{\pm}H^{\mp\mp}$ when the mass splitting is small and cascade decays are dominant. Finally, the last mechanism is $H^{++}H^{--}$, which involves four cascade decays: \begin{eqnarray}\nonumber X_4 &=&2\sigma(pp\to \gamma^*/Z^* \to H^{++}H^{--})\times\mbox{BR}(H^{\pm\pm}\to H^\pm W^*)^2\\ &&\times\mbox{BR}(H^{\pm}\to H^0 W^*)\mbox{BR}(H^{\pm}\to A^0 W^*)~. \end{eqnarray} This mechanism is also promising since the cross section of $H^{++}H^{--}$ production is slightly larger than $H^{0}A^0$ production for a degenerate mass spectrum. 
The phase-space suppression of $X_4$ is more severe than that of $X_3$, because a pair of the heaviest scalars $H^{\pm\pm}$ is produced. Summing over all four of the above channels yields the contribution to the $H^0A^0$ production from cascade decays, \begin{equation} X_C=X_1+X_2+X_3+X_4, \end{equation} and the total production cross section of $H^0A^0$ is then $X=X_0+X_C$. Using Eq.~(\ref{XXX}), the total cross sections for the pair production $H^0H^0/A^0A^0$, $Y=\sum_iY_i$, $Z=\sum_iZ_i$, are given by \begin{equation} Y=Z=\frac{1}{2}X_C. \end{equation} Since the enhancement from cascade decays requires both a mild phase-space suppression and large cascade-decay branching ratios, we work with the relatively small mass splitting and triplet vev given in Eq. (\ref{BP}). Figure \ref{csx} displays the cross sections of the $H^0A^0$, $H^0H^0$, and $A^0A^0$ production as a function of $M_{\Delta}$. As can be seen from the figure, the production of $H^0A^0$ can be enhanced by a factor of 3, while the $H^0H^0/A^0A^0$ production at the maximal enhancement can reach the level of $X_0$. This could make the detection of neutral scalar pair production very promising in the negative scenario. \section{LHC Signatures of Neutral Scalar Production}\label{signal} In this section we investigate the signatures of neutral scalar production at the LHC. From previous studies on the SM $hh$ production, we already know that the most promising signal is $b\bar{b}\gamma\gamma$, followed by $b\bar{b}\tau^+\tau^-$, while both semileptonic and dileptonic decays of $W$'s in the $b\bar{b}W^+W^-$ channel are challenging. In this work we analyze all three of the signals---$b\bar{b}\gamma\gamma$, $b\bar{b}\tau^+\tau^-$, and $b\bar{b}W^+W^-\to b\bar{b}\ell^+\ell^-2\nu$ ($\ell=e,~\mu$ for collider identification)---as well as their backgrounds based on the benchmark model presented in Eq. (\ref{BP}).
In Sec.~\ref{Eh} we discussed the Drell-Yan production of $H^0A^0$ and the enhanced pair and associated production of neutral scalars $H^0/A^0$ due to cascade decays of the charged scalars $H^\pm,~H^{\pm\pm}$. We are now ready to incorporate the branching ratios of $H^0/A^0$ decays for a specific signal channel. For instance, the cross sections for the $b\bar{b}\gamma\gamma$ signal channel can be written as \begin{eqnarray}\label{S0} S_0(b\bar{b}\gamma\gamma)& = & X_0\times\left[\mbox{BR}(H^0\to b\bar{b})\mbox{BR}(A^0\to\gamma\gamma) +\mbox{BR}(H^0\to\gamma\gamma)\mbox{BR}(A^0\to b\bar{b})\right],\\ S(b\bar{b}\gamma\gamma) & = & X\times\left[\mbox{BR}(H^0\to b\bar{b})\mbox{BR}(A^0\to\gamma\gamma) +\mbox{BR}(H^0\to\gamma\gamma)\mbox{BR}(A^0\to b\bar{b})\right]\\\nonumber &&+2Y\times\mbox{BR}(H^0\to b\bar{b})\mbox{BR}(H^0\to\gamma\gamma) +2Z\times\mbox{BR}(A^0\to b\bar{b})\mbox{BR}(A^0\to\gamma\gamma). \end{eqnarray} Here $S_0$ denotes the signal from the direct production $pp\to Z^*\to H^0A^0$ alone, and $S$ includes contributions from cascade decays. $S_{(0)}(b\bar{b}\tau^+\tau^-)$ has an expression similar to $S_{(0)}(b\bar{b}\gamma\gamma)$, while $S_{(0)}(b\bar{b}\ell^+\ell^-2\nu)$ is simpler since the decay mode $A^0\to W^+W^-$ is absent. \begin{figure}[!htbp] \begin{center} \includegraphics[width=0.45\linewidth]{CSbbaa.pdf} \includegraphics[width=0.44\linewidth]{CSbbtata.pdf} \includegraphics[width=0.44\linewidth]{CSbbll2v.pdf} \includegraphics[width=0.44\linewidth]{SoverS0.pdf} \end{center} \caption{Theoretical cross sections of $b\bar{b}\gamma\gamma$, $b\bar{b}\tau^+\tau^-$, and $b\bar{b}\ell^+\ell^-2\nu$ signal channels at LHC14. The red solid (dashed) line corresponds to the signal from $X_0$ ($X$), the green (blue) solid line corresponds to the signal from $Y~(Z)$, and the purple dashed line shows the total cross section $S$ for the signal. The SM $hh$ cross section is shown for comparison.
The lower right panel shows the enhancement factor $S/S_0$ in the three signal channels. \label{sgn}} \end{figure} The theoretical cross sections for the $b\bar{b}\gamma\gamma$, $b\bar{b}\tau^+\tau^-$, and $b\bar{b}\ell^+\ell^-2\nu$ signal channels are plotted in Fig.~\ref{sgn}. The cross section $S_0(b\bar{b}\gamma\gamma)/S_0(b\bar{b}\tau^+\tau^-)$ is larger than that of the SM $hh$ production until $M_{\Delta}=159/161~{\rm GeV}$; taking cascade enhancement into account pushes the corresponding $M_{\Delta}$ further to $179/197~{\rm GeV}$. $S_{0}(b\bar{b}\ell^+\ell^-2\nu)$ is always larger than that of $hh$ in the mass region $130$-$200~{\rm GeV}$, and interestingly, it remains roughly constant for $M_{\Delta}<160~{\rm GeV}$. The signal from $H^0H^0$ is comparable with $S_0$ in these three channels only for $M_{\Delta}<160~{\rm GeV}$, while in contrast the signal from $A^0A^0$ becomes dominant for the $b\bar{b}\gamma\gamma$ and $b\bar{b}\tau^+\tau^-$ channels when $M_{\Delta}>160~{\rm GeV}$. Therefore, we have a chance to probe the $A^0A^0$ pair production in these two channels. Also shown in Fig.~\ref{sgn} is the enhancement factor $S/S_0$ for the three signal channels at the benchmark point (\ref{BP}) as a function of $M_\Delta$, which will help us understand the simulation results. \subsection{$b\bar{b}\gamma\gamma$ signal channel} \label{sec:bbgg} In our simulation, the parton-level signal and background events are generated with {\bf\scriptsize MADGRAPH5}~\cite{MG5}. We perform parton shower and fast detector simulations with {\bf\scriptsize PYTHIA}~\cite{Sjostrand:2006za} and {\bf\scriptsize DELPHES3}~\cite{Delphes}. Finally, {\bf\scriptsize MADANALYSIS5}~\cite{Conte:2012fm} is used for data analysis and plotting. We take a flat $b$-tagging efficiency of 70\%, and mistagging rates of 10\% for $c$ jets and 1\% for light-flavor jets, respectively. Jet reconstruction is done using the anti-$k_T$ algorithm with a radius parameter of $R=0.5$.
We further assume a photon identification efficiency of $85\%$ and a jet-faking-photon rate of $1.2\times 10^{-4}$~\cite{Aad:2009wy}. The main SM backgrounds to the signal are as follows: \begin{eqnarray} b\bar{b}\gamma\gamma: p p &\to& b\bar{b}\gamma\gamma,\\ t\bar{t}h: p p &\to& t\bar{t}h \to b\ell^+\nu~\bar{b}\ell^-\nu~\gamma\gamma~(\ell^\pm~\mbox{missed}),\\ Zh: pp &\to& Zh \to b\bar{b} \gamma\gamma. \end{eqnarray} Among them, $b\bar{b}\gamma\gamma$ and $Zh$ are irreducible, while $t\bar{t}h$ is reducible and can be suppressed by vetoing the additional $\ell$'s with $p_T^{\ell}>20~{\rm GeV}$ and $|\eta_\ell|<2.4$. In addition, there exist many reducible sources of fake $b\bar{b}\gamma\gamma$: \begin{eqnarray} \nonumber pp\to b\bar{b}jj\nrightarrow b\bar{b}\gamma\gamma,pp\to b\bar{b}j\gamma \nrightarrow b\bar{b}\gamma\gamma, \ldots \\ pp\to c\bar{c}\gamma\gamma\nrightarrow b\bar{b}\gamma\gamma,pp\to j\bar{j}\gamma\gamma\nrightarrow b\bar{b}\gamma\gamma, \ldots, \end{eqnarray} where $x\nrightarrow y$ stands for a final-state $x$ misidentified as $y$. The remaining fake sources are subdominant and are thus not included in our simulation. The QCD corrections to the backgrounds are included via multiplicative $K$-factors of 1.10 and 1.33 for the leading-order cross sections of $t\bar{t}h$ and $Zh$ at LHC14~\cite{Dittmaier:2011ti}, respectively. The cross section of the $b\bar{b}\gamma\gamma$ background has been normalized to include fake sources and does not take NLO corrections into account.
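With the flat (mis)tag rates quoted above, the per-event weight of each fake source is simply a product of efficiencies; an illustrative sketch (the channel labels and variable names are our own, and combinatorial factors and kinematic selection effects are deliberately ignored):

```python
# Flat efficiencies/rates quoted in the text (detector-level, illustrative use)
eff_b, mis_c, mis_j = 0.70, 0.10, 0.01   # b-tag, c->b mistag, light-jet->b mistag
eff_gamma, fake_j_gamma = 0.85, 1.2e-4   # photon ID, jet-faking-photon rate

# Naive per-event probability that a final state is reconstructed as b bbar + 2 photons
weights = {
    "bb+aa (signal-like)": eff_b**2 * eff_gamma**2,
    "bb+jj -> bb+aa":      eff_b**2 * fake_j_gamma**2,
    "bb+ja -> bb+aa":      eff_b**2 * eff_gamma * fake_j_gamma,
    "cc+aa -> bb+aa":      mis_c**2 * eff_gamma**2,
    "jj+aa -> bb+aa":      mis_j**2 * eff_gamma**2,
}
for channel, w in weights.items():
    print(f"{channel}: {w:.2e}")
```

The hierarchy of these naive weights already suggests why only a subset of fake sources need be simulated, with the remainder subdominant.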
\begin{figure}[!htbp] \begin{center} \includegraphics[width=0.45\linewidth]{bbaa_PTb.pdf} \includegraphics[width=0.45\linewidth]{bbaa_PTa.pdf} \includegraphics[width=0.45\linewidth]{bbaa_DRbb.pdf} \includegraphics[width=0.45\linewidth]{bbaa_DRaa.pdf} \includegraphics[width=0.45\linewidth]{bbaa_Mbb.pdf} \includegraphics[width=0.45\linewidth]{bbaa_Maa.pdf} \includegraphics[width=0.45\linewidth]{bbaa_Mh0a0.pdf} \includegraphics[width=0.45\linewidth]{bbaa_Et.pdf} \end{center} \caption{Distributions of $p_T^{b,\gamma},~\Delta R_{bb,\gamma\gamma},~M_{bb,\gamma\gamma,H^0A^0}$, and $E_T$ for the signal $b\bar{b}\gamma\gamma$ and its backgrounds before applying any cuts at LHC14. \label{fig:bbaa}} \end{figure} The distributions of some kinematical variables before applying any cuts are shown in Fig. \ref{fig:bbaa}, where we assume $M_{\Delta}=130,~160,~190~{\rm GeV}$. In our analysis, we require that the final states include exactly one $b$-jet pair and one $\gamma$ pair and satisfy the following basic cuts: \begin{eqnarray} p_T^{b,\gamma}>30~{\rm GeV},~|\eta_{b,\gamma}|<2.4,~\Delta R_{bb,\gamma\gamma,b\gamma}>0.4, \end{eqnarray} where $\Delta R=\sqrt{(\Delta \phi)^2+(\Delta \eta)^2}$ is the particle separation, with $\Delta \phi$ and $\Delta \eta$ being the separation in the azimuthal angle and rapidity, respectively. Here we employ a tighter $p_T$ cut than is usually applied to suppress the QCD-electroweak $b\bar{b}\gamma\gamma$ background. The $b$-jet pair and $\gamma$ pair are then required to fall in the following windows on the invariant masses and fulfill the $\Delta R$ cut criteria: \begin{eqnarray} \Delta R_{bb}<2.5,&&~|M_{bb}-M_{\Delta}|<15~{\rm GeV},\\ \nonumber \Delta R_{\gamma\gamma}<2.5,&&~|M_{\gamma\gamma}-M_{\Delta}|<10~{\rm GeV}. 
\end{eqnarray} \begin{table} [!htbp] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline $M_{\Delta}=130~{\rm GeV}$ & $H^0A^0(S_0)$ & $b\bar{b}\gamma\gamma$ & $t\bar{t}h$ & $Zh$ & $S/B$ & $\mathcal{S}(S,B)$ \\ \hline Cross section at NLO& $8.01\times10^{-1}$ & $5.92\times10^3$~~ &1.18& $2.99\times10^{-1}$ & $1.39\times10^{-4}$ & $5.75\times10^{-1}$ \\ Basic cuts & $1.22\times10^{-1}$ & $4.16\times10^{1}$~~ & $1.03\times10^{-1}$ & $3.41\times10^{-2}$ & $2.92\times10^{-3}$ & 1.03 \\ Reconstruct scalars from $b$s & $6.99\times10^{-2}$ & 7.07 & $1.50\times10^{-2}$ & $9.61\times10^{-4}$ & $9.87\times10^{-3}$ & 1.44 \\ Reconstruct scalars from $\gamma$s & $5.28\times10^{-2}$ & $1.03\times10^{-1}$ & $1.08\times10^{-2}$ & $7.32\times10^{-4}$ & $4.63\times10^{-1}$ & 8.01 \\ Cut on $M_{H^0A^0}$ & $4.21\times10^{-2}$ & $2.04\times10^{-2}$ & $4.69\times10^{-3}$ & $3.23\times10^{-4}$ & 1.65 & $12.0$ \\ Cut on $E_T$ & $3.31\times10^{-2}$ & $6.58\times10^{-3}$ & $4.68\times10^{-3}$ & $2.27\times10^{-4}$ & 2.88 & $12.8$ \\ \hline Cascade enhanced & $1.51\times10^{-1}$ & $-$ & $-$ & $-$ & $13.1$ & $41.0$ \\ \hline \hline $M_{\Delta}=160~{\rm GeV}$ & $H^0A^0(S_0)$ & $b\bar{b}\gamma\gamma$ & $t\bar{t}h$ & $Zh$ & $S/B$ & $\mathcal{S}(S,B)$ \\ \hline Cross section at NLO & $5.10\times10^{-2}$ & $5.92\times10^3$~~ &1.18& $2.99\times10^{-1}$ & $8.61\times10^{-6}$ & $3.63\times10^{-2}$ \\ Basic cuts & $8.78\times10^{-3}$ & $4.16\times10^{1}$~~ & $1.03\times10^{-1}$ & $3.41\times10^{-2}$ & $2.10\times10^{-4}$ & $7.44\times10^{-2}$ \\ Reconstruct scalars from $b$s & $4.11\times10^{-3}$ & $5.06$ & $1.34\times10^{-2}$ & $2.36\times10^{-4}$ & $8.11\times10^{-4}$ & $9.99\times10^{-2}$ \\ Reconstruct scalars from $\gamma$s & $3.27\times10^{-3}$ & $3.42\times10^{-2}$ & $1.57\times10^{-5}$ & 0.00 & $9.56\times10^{-2}$ & $9.53\times10^{-1}$ \\ Cut on $M_{H^0A^0}$ & $2.57\times10^{-3}$ & $1.12\times10^{-2}$ & $1.18\times10^{-5}$ & 0.00 & $2.30\times10^{-1}$ & 1.28 \\ Cut on $E_T$ & 
$1.73\times10^{-3}$ & $3.95\times10^{-3}$ & $1.03\times10^{-5}$ & 0.00 & $4.37\times10^{-1}$ & 1.41 \\ \hline Cascade enhanced & $1.10\times10^{-2}$ & $-$ & $-$ & $-$ & 2.77 & 7.29 \\ \hline \hline $M_{\Delta}=190~{\rm GeV}$ & $H^0A^0(S_0)$ & $b\bar{b}\gamma\gamma$ & $t\bar{t}h$ & $Zh$ & $S/B$ & $\mathcal{S}(S,B)$ \\ \hline Cross section at NLO& $2.68\times10^{-3}$ & $5.92\times10^3$~~ &1.18& $2.99\times10^{-1}$ & $4.53\times10^{-7}$ & $1.91\times10^{-3}$ \\ Basic cuts & $5.33\times10^{-4}$ & $4.16\times10^{1}$~~ & $1.03\times10^{-1}$ & $3.41\times10^{-2}$ & $1.28\times10^{-5}$ & $4.52\times10^{-3}$ \\ Reconstruct scalars from $b$s & $2.27\times10^{-4}$ & 3.61 & $1.05\times10^{-2}$ & $1.24\times10^{-4}$ & $6.27\times10^{-5}$ & $6.53\times10^{-3}$ \\ Reconstruct scalars from $\gamma$s & $1.81\times10^{-4}$ & $2.47\times10^{-2}$ & $3.93\times10^{-6}$ & 0.00 & $7.34\times10^{-3}$ & $6.30\times10^{-2}$ \\ Cut on $M_{H^0A^0}$ & $1.55\times10^{-4}$ & $9.87\times10^{-3}$ & $3.93\times10^{-6}$ & 0.00 & $1.57\times10^{-2}$ & $8.52\times10^{-2}$ \\ Cut on $E_T$ & $8.35\times10^{-5}$ & $1.48\times10^{-3}$ & $3.93\times10^{-6}$ & 0.00 & $5.63\times10^{-2}$ & $1.18\times10^{-1}$ \\ \hline Cascade enhanced & $1.50\times10^{-3}$ & $-$ & $-$ & $-$ & 1.01 & 1.87 \\ \hline \end{tabular} \end{center} \caption{Evolution of signal and background cross sections (in $\textrm{fb}$) at LHC14 for the $b\bar{b}\gamma\gamma$ signal channel upon imposing the cuts one by one. For the cascade-enhanced signal only the cross section passing all the cuts is shown. The last two columns assume an integrated luminosity of $3000~\textrm{fb}^{-1}$. \label{tab:bbaacut}} \end{table} As shown in Fig. \ref{fig:bbaa}, the $\Delta R_{bb,\gamma\gamma}$ distributions of the signal are clearly more compact as they are more likely coming from the same particles. Thus the $\Delta R$ cuts can effectively suppress the background. More specific cuts are necessary for further analysis. 
A useful variable is the invariant mass of the neutral scalar pair $M_{H^0A^0}$, and the total transverse energy $E_T$ is also distinctive. The peak of $M_{H^0A^0}$ increases with $M_{\Delta}$, and similarly for $E_T$. For simplicity, we adopt cuts on $M_{H^0A^0}$ and $E_T$ that shift linearly with $M_{\Delta}$: \begin{equation} M_{H^0A^0}>2M_{\Delta}+90~{\rm GeV},~E_T>2M_{\Delta}-60~{\rm GeV}. \end{equation} For instance, we apply $M_{H^0A^0}>350~{\rm GeV}$, $E_T>200~{\rm GeV}$ at the benchmark point $M_{\Delta}=130~{\rm GeV}$. To estimate the observability quantitatively, we adopt the following significance measure: \begin{equation} \mathcal{S}(S,B)=\sqrt{2\left((S\cdot\mathcal{L}+B\cdot\mathcal{L}) \log\left(1+\frac{S}{B}\right)-S\cdot\mathcal{L}\right)}, \end{equation} which is more suitable than the usual definition of $S/\sqrt{B}$ or $S/\sqrt{S+B}$ for Monte Carlo analysis~\cite{Cowan:2010js}. Here $S$ and $B$ are the signal and background cross sections, and $\mathcal{L}$ is the integrated luminosity. The survival cross sections of the signal from the Drell-Yan process and of the backgrounds upon imposing cuts step by step are summarized in Table~\ref{tab:bbaacut} at the benchmark point (\ref{BP}) for $M_\Delta=130,~160,~190~{\rm GeV}$, respectively. For the cascade-enhanced signal, only the cross section passing all the cuts is shown. The last two columns in the table show the signal-to-background ratio $S/B$ and the statistical significance $\mathcal{S}(S,B)$. For $M_{\Delta}=130~{\rm GeV}$, the $b\bar{b}\gamma\gamma$ channel is very promising. Without (with) cascade enhancement, the final significance can reach 12.8 (41) for LHC14@3000, corresponding to 99 (453) events. For $M_{\Delta}=160~{\rm GeV}$, the channel becomes challenging since the cross section has decreased by a factor of 15.7 compared with the case of $M_{\Delta}=130~{\rm GeV}$.
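The significance measure above is straightforward to implement numerically; as a cross-check, a minimal sketch (our own script; the inputs are the post-$E_T$-cut cross sections from Table~\ref{tab:bbaacut} at $M_{\Delta}=130~{\rm GeV}$):

```python
import math

def significance(s, b, lumi):
    # S and B in the formula are event counts: cross section (fb) x luminosity (1/fb)
    S, B = s * lumi, b * lumi
    return math.sqrt(2 * ((S + B) * math.log(1 + S / B) - S))

# Post-E_T-cut cross sections (fb): signal, then bb+2photon, tth, Zh backgrounds
s0 = 3.31e-2
b = 6.58e-3 + 4.68e-3 + 2.27e-4
print(round(significance(s0, b, 3000), 1))  # 12.8, matching the table
```

Reassuringly, this reproduces the quoted significance of 12.8 for LHC14@3000 without cascade enhancement.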
However, the cuts applied are efficient at suppressing the SM backgrounds, and with cascade enhancement the significance could still reach 7.29 for $3000~\textrm{fb}^{-1}$, corresponding to 33 events in the most optimistic case. For $M_{\Delta}=190~{\rm GeV}$, the prospects look hopeless even with maximal cascade enhancement in our benchmark model: to achieve 10 signal events, an integrated luminosity of at least $6670~\textrm{fb}^{-1}$ is required, which is beyond the reach of the future LHC. \subsection{$b\bar{b}\tau^+\tau^-$ signal channel} For this signal channel, an important part of the analysis depends on the ability to reconstruct the $b$ pair and the $\tau$ pair. Here we consider the hadronic decays of the $\tau$ lepton and assume a $\tau$-tagging efficiency of $70\%$ with a negligible fake rate. \begin{figure}[!htbp] \begin{center} \includegraphics[width=0.45\linewidth]{bbtata_PTta.pdf} \includegraphics[width=0.45\linewidth]{bbtata_DRtata.pdf} \includegraphics[width=0.45\linewidth]{bbtata_Mtata.pdf} \includegraphics[width=0.45\linewidth]{bbtata_Mh0a0.pdf} \includegraphics[width=0.45\linewidth]{bbtata_Et.pdf} \includegraphics[width=0.45\linewidth]{bbtata_MET.pdf} \end{center} \caption{Distributions of $p_T^\tau,~\Delta R_{\tau\tau},~M_{\tau\tau,H^0A^0},~E_T$, and $\cancel{E}_T$ for the signal $b\bar{b}\tau^+\tau^-$ and its backgrounds before applying any cuts at LHC14. \label{fig:bbtata}} \end{figure} The main SM backgrounds are as follows: \begin{eqnarray} b\bar{b}\tau^+\tau^- : p p &\to& b\bar{b}Z/\gamma^*/h \to b\bar{b}\tau^+\tau^-, \\ b\bar{b}W^+W^- : p p &\to &b\bar{b}W^+W^- \to b\bar{b}\tau^+ \nu_{\tau} \tau^- \bar{\nu}_{\tau}, \\ Zh : p p &\to& Zh \to b\bar{b}\tau^+\tau^-. \end{eqnarray} The irreducible QCD-electroweak background comes from $b\bar{b}\tau^+\tau^-$, where the $\tau$ pair originates from the decays of $Z/\gamma^*/h$.
Since the hadronic decays of $\tau$ always contain neutrinos, we also include the SM background $b\bar{b}W^+W^-$, which contributes to the $b\bar{b}\tau^+ \nu_{\tau} \tau^- \bar{\nu}_{\tau}$ final state. The $b\bar{b}W^+W^-$ background mainly originates from $t\bar{t}$ production with the subsequent decays $t\to b W$ and $W\to \tau \nu_{\tau}$. Moreover, the associated $Zh$ production contributes through the subsequent decays $h\to b\bar{b}$ and $Z\to\tau^+\tau^-$ or vice versa. The QCD corrections to the backgrounds are included via multiplicative $K$-factors of 1.21, 1.35, and 1.33 for the leading-order cross sections of $b\bar{b}\tau^+\tau^-$~\cite{Campbell:2000bg}, $t\bar{t}$ \cite{tt}, and $Zh$ \cite{Dittmaier:2011ti} at LHC14, respectively. \begin{table} [!htbp] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline $M_{\Delta}=130~{\rm GeV}$ & $H^0A^0(S_0)$ & $b\bar{b}\tau^+\tau^-$ & $b\bar{b}W^+W^-$ & $Zh$ & $S/B$ & $\mathcal{S}(S,B)$ \\ \hline Cross section at NLO & $4.31\times10^{1}$~~ & $3.10\times10^{4}$~~ & $7.92\times10^{3}$~~ & $2.21\times10^{1}$~~~& $1.11\times10^{-3}$ & $12.0$ \\ Basic cuts & $7.75\times10^{-1}$ & $4.49\times10^{1}$~~ & $8.97\times10^{1}$~~ & $2.91\times10^{-1}$ & $5.74\times10^{-3}$ & 3.65 \\ Reconstruct scalars from $\tau$s & $5.14\times10^{-1}$ & $1.19\times10^{1}$~~ & $3.57\times10^{1}$~~ & $1.06\times10^{-1}$ & $1.08\times10^{-2}$ & 4.06 \\ Reconstruct scalars from $b$s & $2.14\times10^{-1}$ & 4.34 & $9.44\times10^{-1}$ & $2.28\times10^{-2}$ & $4.04\times10^{-2}$ & 5.06 \\ Cut on $M_{H^0A^0}$ & $1.29\times10^{-1}$ & $1.96\times10^{-1}$ & $1.51\times10^{-1}$ &$8.10\times10^{-3}$ & $3.64\times10^{-1}$ & $11.3$ \\ Cut on $E_T$ & $1.03\times10^{-1}$ & $9.87\times10^{-2}$ & $7.35\times10^{-2}$ & $5.89\times10^{-3}$ & $5.81\times10^{-1}$ & $12.4$ \\ \hline Cascade enhanced & $5.27\times10^{-1}$ & $-$ & $-$ & $-$ & $2.96$ & $51.6$ \\ \hline \hline $M_{\Delta}=160~{\rm GeV}$ & $H^0A^0(S_0)$ & $b\bar{b}\tau^+\tau^-$ & $b\bar{b}W^+W^-$ & $Zh$ & $S/B$ &
$\mathcal{S}(S,B)$ \\ \hline Cross section at NLO & $3.08$~~ & $3.10\times10^{4}$~~ & $7.92\times10^{3}$~~ & $2.21\times10^{1}$~~~& $7.92\times10^{-5}$ & $8.55\times10^{-1}$ \\ Basic cuts & $6.81\times10^{-2}$ & $4.49\times10^{1}$~~ & $8.97\times10^{1}$~~ & $2.91\times10^{-1}$ & $5.05\times10^{-4}$ & $3.21\times10^{-1}$ \\ Reconstruct scalars from $\tau$s & $3.14\times10^{-2}$ & $1.52\times10^{1}$~~ & $2.46\times10^{2}$~~ & $3.17\times10^{-2}$ & $1.20\times10^{-3}$ & $3.36\times10^{-1}$ \\ Reconstruct scalars from $b$s & $1.2\times10^{-2}$ & 2.47 & $1.06\times10^{-1}$ & 0.00 & $4.80\times10^{-3}$ & $4.10\times10^{-1}$ \\ Cut on $M_{H^0A^0}$ & $6.99\times10^{-3}$ & $1.22\times10^{-1}$ & $2.06\times10^{-2}$ & 0.00 & $4.89\times10^{-2}$ & 1.00 \\ Cut on $E_T$ & $5.04\times10^{-3}$ & $4.72\times10^{-2}$ & $5.88\times10^{-3}$ & 0.00 & $9.48\times10^{-2}$ & 1.18 \\ \hline Cascade enhanced & $5.11\times10^{-2}$ & $-$ & $-$ & $-$ & $9.63\times10^{-1}$ & $10.7$ \\ \hline \hline $M_{\Delta}=190~{\rm GeV}$ & $H^0A^0(S_0)$ & $b\bar{b}\tau^+\tau^-$ & $b\bar{b}W^+W^-$ & $Zh$ & $S/B$ & $\mathcal{S}(S,B)$ \\ \hline Cross section at NLO & $2.47\times10^{-1}$ & $3.10\times10^{4}$~~ & $7.92\times10^{3}$~~ & $2.21\times10^{1}$~~~ & $6.34\times10^{-6}$ & $6.86\times10^{-2}$ \\ Basic cuts & $6.54\times10^{-3}$ & $4.49\times10^{1}$~~ & $8.97\times10^{1}$~~ & $2.91\times10^{-1}$ & $4.86\times10^{-5}$ & $3.09\times10^{-2}$ \\ Reconstruct scalars from $\tau$s & $2.32\times10^{-3}$ & $4.60\times10^{-1}$ & $1.47\times10^{1}$~~ & $5.89\times10^{-3}$ & $1.53\times10^{-4}$ & $3.26\times10^{-2}$ \\ Reconstruct scalars from $b$s & $7.66\times10^{-4}$ & $2.06\times10^{-2}$ & 1.21 & 0.00 & $6.25\times10^{-4}$ & $3.78\times10^{-2}$ \\ Cut on $M_{H^0A^0}$ & $6.34\times10^{-4}$ & $1.47\times10^{-3}$ & $9.03\times10^{-2}$ & 0.00 & $6.91\times10^{-3}$ & $1.14\times10^{-1}$ \\ Cut on $E_T$ & $3.87\times10^{-4}$ & 0.00 & $2.64\times10^{-2}$ & 0.00 & $1.47\times10^{-2}$ & $1.30\times10^{-1}$ \\ \hline 
Cascade enhanced & $6.85\times10^{-3}$ & $-$ & $-$ & $-$ & $2.60\times10^{-1}$ & 2.22 \\ \hline \end{tabular} \end{center} \caption{Similar to Table~\ref{tab:bbaacut}, but for the $b\bar{b}\tau^+\tau^-$ signal channel.} \label{tab:bbtatacut} \end{table} The kinematical distributions similar to the $b\bar b\gamma\gamma$ channel are shown in Fig.~\ref{fig:bbtata}. As one can see from the figure, the $\tau$ jets are less energetic than the $b$ jets (similar to those in the $b\bar{b}\gamma\gamma$ signal channel) due to the missing neutrinos in the final state. We first employ the following selection cuts to select signals with exactly one $b$ pair and one $\tau$ pair: \begin{eqnarray} p_T^{b,\tau}>30~{\rm GeV},~|\eta_{b,\tau}|<2.4,~\Delta R_{bb,b\tau,\tau\tau}>0.4, \end{eqnarray} and no cut on $\cancel{E}_T$ is adopted. After the selection, the $\tau$ and $b$ pairs are required to fulfill the cuts on the invariant masses and separations: \begin{eqnarray} \Delta R_{\tau\tau}<2.5,~M_{\Delta}-40~{\rm GeV}&<&M_{\tau\tau}<M_{\Delta},\\ \nonumber \Delta R_{bb}<2.5,~|M_{bb}-M_{\Delta}|&<&15~{\rm GeV}. \end{eqnarray} The different mass windows for $M_{\tau\tau}$ and $M_{bb}$ are due to the missing neutrinos in $\tau$ decays, which result in a wider distribution of $M_{\tau\tau}$. For the reconstructed neutral scalars, we further adopt cuts on $M_{H^0A^0}$ and $E_T$ similar to those in the $b\bar{b}\gamma\gamma$ channel: \begin{equation} M_{H^0A^0}>2M_{\Delta}+70~{\rm GeV},~E_T>2M_{\Delta}-80~{\rm GeV}. \end{equation} Both the $M_{H^0A^0}$ and $E_T$ cuts are reduced by $20~{\rm GeV}$ compared with the $b\bar{b}\gamma\gamma$ channel, which again results from the neutrinos in the final state. The corresponding results are summarized in Table \ref{tab:bbtatacut}. The $b\bar{b}\tau^+\tau^-$ channel is also promising for $M_{\Delta}=130~{\rm GeV}$ even without enhancement from cascade decays. The final significance is 12.4 and the corresponding number of signal events is 309 for LHC14@3000.
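The cut-flow tables quote a significance $\mathcal{S}(S,B)$ for the expected event counts; the tabulated values are consistent with the standard Poisson (Asimov) counting formula $\mathcal{S}=\sqrt{2[(S+B)\ln(1+S/B)-S]}$. A minimal sketch assuming that definition (the exact formula is not restated in this section), with the cross sections after all cuts read from the table above:

```python
import math

def poisson_significance(s, b):
    """Median expected significance for s signal events on top of b
    background events, using the Poisson (Asimov) counting formula."""
    return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

# bb tau tau channel, M_Delta = 130 GeV, after all cuts
# (cross sections in fb, read from the cut-flow table)
lumi = 3000.0                               # fb^-1, LHC14@3000
sig = 1.03e-1 * lumi                        # 309 signal events
bkg = (9.87e-2 + 7.35e-2 + 5.89e-3) * lumi  # sum of the three backgrounds
print(poisson_significance(sig, bkg))       # ~12.3 (the table quotes 12.4)
```

The same formula reproduces the other table entries to the quoted precision, e.g. $\mathcal{S}=7.78$ for the $b\bar{b}W^+W^-$ channel at $M_{\Delta}=160~{\rm GeV}$.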
Including the cascade enhancement, the significance is improved to 51.6, which is even better than the $b\bar{b}\gamma\gamma$ signal. For $M_{\Delta}=160~{\rm GeV}$, the biggest challenge is again the small production cross section of the signal. However, in the most optimistic case, the cascade decays can increase the signal by a factor of 10.1, making this channel feasible. Finally, neutral scalars as heavy as $190~{\rm GeV}$ are difficult to detect at LHC14 in this channel. \subsection{$b\bar{b}W^+W^-$ signal channel} \label{sec:bbWW} It is difficult to search for SM Higgs pair production in this channel due to the missing energy carried away by neutrinos in leptonic decays of the $W$ boson, which makes one of the two Higgs bosons not fully reconstructible~\cite{Gouzevitch:2013qca,Baglio:2012np}. The situation is ameliorated in our scenario because the production rate of $H^0A^0$ can be an order of magnitude larger than that of $hh$, and the di-$W$ decay branching ratio of $H^0$ can also be larger than that of $h$ in a vast region of parameter space. This considerably increases the number of signal events and partially compensates for the reduced detection capability. \begin{figure}[!htbp] \begin{center} \includegraphics[width=0.45\linewidth]{bbll2v_PTl.pdf} \includegraphics[width=0.45\linewidth]{bbll2v_DRll.pdf} \includegraphics[width=0.45\linewidth]{bbll2v_MCll.pdf} \includegraphics[width=0.45\linewidth]{bbll2v_MET.pdf} \includegraphics[width=0.45\linewidth]{bbll2v_MC.pdf} \includegraphics[width=0.45\linewidth]{bbll2v_Et.pdf} \end{center} \caption{Distributions of $p_T^\ell,~\Delta R_{\ell\ell},~M^{\ell\ell}_C,~\cancel{E}_T,~M_C$, and $E_T$ for the signal $b\bar{b}\ell^+\ell^-\cancel{E}_T$ and its backgrounds before applying any cuts at LHC14.
\label{fig:bbll2v}} \end{figure} \begin{table} [!htbp] \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline $M_{\Delta}=130~{\rm GeV}$ & $H^0A^0(S_0)$ & $t\bar{t}$ & $S/B$ & $\mathcal{S}(S,B)$ \\ \hline Cross section at NLO & 3.91 & $2.38\times10^4$ ~~& $1.69\times10^{-4}$ & 1.41 \\ Basic cuts & $1.51$ & $4.04\times10^3$~~ & $3.74\times10^{-4}$ & $1.30$ \\ Reconstruct scalars from $b$s & $3.29\times10^{-1}$ & $3.35\times10^2$~~ & $9.82\times10^{-4}$ & $0.984$ \\ Cut on $M_C^{\ell\ell}$ & $3.21\times10^{-1}$ & $2.14\times10^2$~~ & $1.50\times10^{-3}$ & 1.20 \\ Cut on $\Delta R_{\ell\ell}$ & $2.64\times10^{-1}$ & $9.26\times10^1$~~ & $2.85\times10^{-3}$ & 1.50 \\ Cut on $\cancel{E}_T$ & $8.45\times10^{-2}$ & $1.48\times10^1$~~ & $5.71\times10^{-3}$ & 1.20 \\ Cut on $M_C$ & $3.30\times10^{-2}$ & $1.69\times10^{-1}$ & $1.95\times10^{-1}$ & 4.26 \\ Cut on $E_T$ & $3.19\times10^{-2}$ & $1.47\times10^{-1}$ & $2.17\times10^{-1}$ & 4.41 \\ \hline Cascade enhanced& $1.40\times10^{-1}$ & $-$ & $9.53\times10^{-1}$ & $17.7$ \\ \hline \hline $M_{\Delta}=160~{\rm GeV}$ & $H^0A^0(S_0)$ & $t\bar{t}$ & $S/B$ & $\mathcal{S}(S,B)$ \\ \hline Cross section at NLO & 4.95 & $2.38\times10^4$~~ & $2.13\times10^{-4}$ & 1.78 \\ Basic cuts & $2.13$ & $4.04\times10^3$~~ & $5.27\times10^{-4}$ & $1.84$ \\ Reconstruct scalars from $b$s & $4.25\times10^{-1}$ & $2.68\times10^2$~~ & $1.59\times10^{-3}$ & 1.42 \\ Cut on $M_C^{\ell\ell}$ & $3.97\times10^{-1}$ & $1.89\times10^2$~~ & $2.10\times10^{-3}$ & 1.58 \\ Cut on $\Delta R_{\ell\ell}$ & $3.21\times10^{-1}$ & $7.04\times10^1$~~ & $4.56\times10^{-3}$ & 2.09 \\ Cut on $\cancel{E}_T$ & $9.47\times10^{-2}$ & 4.29 & $2.21\times10^{-2}$ & 2.50 \\ Cut on $M_C$ & $3.28\times10^{-2}$ & $4.74\times10^{-2}$ & $6.92\times10^{-1}$ & 7.50 \\ Cut on $E_T$ & $3.02\times10^{-2}$ & $3.62\times10^{-2}$ & $8.34\times10^{-1}$ & 7.78 \\ \hline Cascade enhanced & $1.01\times10^{-1}$ & $-$ & 3.24 & $23.2$ \\ \hline \hline $M_{\Delta}=190~{\rm GeV}$ & $H^0A^0(S_0)$ & 
$t\bar{t}$ & $S/B$ & $\mathcal{S}(S,B)$ \\ \hline Cross section at NLO & 1.19 & $2.38\times10^4$~~ & $5.00\times10^{-5}$ & $0.424$ \\ Basic cuts & $6.44\times10^{-1}$ & $4.04\times10^3$~~ & $1.59\times10^{-4}$ & $0.554$ \\ Reconstruct scalars from $b$s & $1.36\times10^{-1}$ & $2.26\times10^{2}$~~ & $6.02\times10^{-4}$ & $0.495$ \\ Cut on $M_C^{\ell\ell}$ & $1.27\times10^{-1}$ & $1.79\times10^{2}$~~ & $7.09\times10^{-4}$ & $0.520$ \\ Cut on $\Delta R_{\ell\ell}$ & $9.70\times10^{-2}$ & $6.05\times10^{1}$~~ & $1.60\times10^{-3}$ & $0.683$ \\ Cut on $\cancel{E}_T$ & $2.57\times10^{-2}$ & 1.62 & $1.59\times10^{-2}$ & 1.10 \\ Cut on $M_C$ & $8.85\times10^{-3}$ & $1.89\times10^{-2}$ & $4.68\times10^{-1}$ & 3.29 \\ Cut on $E_T$ & $8.37\times10^{-3}$ & $1.40\times10^{-2}$ & $5.98\times10^{-1}$ & 3.56 \\ \hline Cascade enhanced & $2.69\times10^{-2}$ & $-$ & 1.92 & $10.1$~~ \\ \hline \end{tabular} \end{center} \caption{Similar to Table~\ref{tab:bbaacut}, but for the $b\bar{b}\ell^+\ell^-\cancel{E}_T$ signal channel.} \label{tab:bbllcut} \end{table} With both $W$'s decaying leptonically, the final state appears as $b\bar{b}\ell^+\ell^-\cancel{E}_T$. The dominant SM background is $t\bar{t}$ production: \begin{equation} t\bar{t}:pp\to t\bar{t}\to bW^+\bar{b}W^-\to b\bar{b}\ell^+\ell^-\cancel{E}_T. \end{equation} As before, the QCD correction is included by a multiplicative $K$-factor of 1.35 for $t\bar{t}$ production~\cite{tt}. We select events that include exactly one $b$-jet pair and one opposite-sign lepton pair and filter them with the basic cuts: \begin{eqnarray} p_T^{b}>30~{\rm GeV},~p_T^{\ell}>20~{\rm GeV},~|\eta_{b,\ell}|<2.4,\\ \nonumber \Delta R_{bb,b\ell,\ell\ell}>0.4,~\cancel{E}_T>20~{\rm GeV}. \end{eqnarray} The separation and invariant mass of the $b$-jet pair are required to fulfill \begin{equation} \Delta R_{bb}<2.5,~|M_{bb}-M_{\Delta}|<15~{\rm GeV}.
\end{equation} For the lepton pair, we reconstruct the transverse cluster mass $M_C^{\ell\ell}$: \begin{equation} M_C^{\ell\ell}=\sqrt{\left(\sqrt{p_{T,\ell\ell}^2+M^2_{\ell\ell}}+\cancel{E}_T\right)^2 -\left(\vec{p}_{T,\ell\ell}+\vec{\cancel{E}}_T\right)^2}. \end{equation} The distributions of $M_C^{\ell\ell}$, $\Delta R_{\ell\ell}$, and $\cancel{E}_T$ are shown in Fig.~\ref{fig:bbll2v}. The peak of $M_C^{\ell\ell}$ is always lower than $M_{\Delta}$ by about $30$-$40~{\rm GeV}$, and the lepton separation $\Delta R_{\ell\ell}$ in the signal is much smaller than in the $t\bar{t}$ background. Accordingly, we set a wide window on $M_{C}^{\ell\ell}$ while tightening the cuts on $\Delta R_{\ell\ell}$ and $\cancel{E}_T$: \begin{equation} M_{\Delta}-80~{\rm GeV}<M_C^{\ell\ell}<M_{\Delta},~\Delta R_{\ell\ell}<1.2,~\cancel{E}_T>0.9M_{\Delta}. \end{equation} We find that the $M_{C}^{\ell\ell}$ cut is least efficient around $M_{\Delta}\sim190~{\rm GeV}$, where the peak of $M_{C}^{\ell\ell}$ for the $t\bar{t}$ background is around $150~{\rm GeV}$. The very tight cuts on $\Delta R_{\ell\ell}$ and $\cancel{E}_T$ are sufficient to suppress the background by 1 or 2 orders of magnitude, while keeping the number of signal events as large as possible. We further combine the $b$-jet pair and the lepton pair into a cluster and construct the transverse cluster mass: \begin{equation} M_C=\sqrt{\left(\sqrt{p_{T,bb\ell\ell}^2+M_{bb\ell\ell}^2}+\cancel{E}_T\right)^2-\left( \vec{p}_{T,bb\ell\ell}+\vec{\cancel{E}}_T\right)^2}, \end{equation} \begin{figure}[!htbp] \begin{center} \includegraphics[width=0.44\linewidth]{bbaa_sen.pdf} \includegraphics[width=0.45\linewidth]{bbaa_lum_new.pdf} \end{center} \caption{Left: Significance $\mathcal{S}(S,B)$ of the $b\bar{b}\gamma\gamma$ channel versus $M_{\Delta}$ reachable at LHC14@300 (red region) and LHC14@3000 (green).
Right: Required luminosity to reach a $3\sigma$ (red region) and $5\sigma$ (green) significance in the $b\bar{b}\gamma\gamma$ channel versus $M_{\Delta}$ at LHC14. The solid line corresponds to the signal from $X_0$ alone, and the dashed line corresponds to the total signal including cascade enhancement. \label{bbaa_sen}} \end{figure} which is an analog of $M_{H^0A^0}$ in the previous subsection. The distribution of $M_C$ is displayed in Fig.~\ref{fig:bbll2v}, and is very similar to that of $M_{H^0A^0}$ in the $b\bar{b}\gamma\gamma$ channel. Although the $M_C$ distributions (before any cuts are made) suggest that the $t\bar{t}$ background has a large overlap with the signal, the cuts on $M_C^{\ell\ell}$, $\Delta R_{\ell\ell}$, and $\cancel{E}_T$ in fact reshape them substantially, so that a further cut on $M_C$ efficiently improves the significance. We apply a cut on $M_C$ as we did with $M_{H^0A^0}$, as well as one on $E_T$: \begin{equation} M_C>2M_{\Delta}+90~{\rm GeV},~E_T>2M_{\Delta}-60~{\rm GeV}. \end{equation} The results following the cutflow are summarized in Table \ref{tab:bbllcut}. For $M_{\Delta}=130~{\rm GeV}$, the final significance is 4.41 (17.7) without (with) cascade enhancement. With cascade enhancement this should be enough to discover the neutral scalars. The signal channel is more promising for $M_{\Delta}=160~{\rm GeV}$ due to a slightly larger cross section and higher cut efficiencies. The final significance is 7.78 (23.2), which is also better than the $b\bar{b}\gamma\gamma$ and $b\bar{b}\tau^+\tau^-$ channels at the same mass. Finally, for $M_{\Delta}=190~{\rm GeV}$, the significance becomes 3.56 (10.1). Therefore, for our benchmark model, the only promising signal for such heavy neutral scalars ($\sim190~{\rm GeV}$) comes from the $b\bar{b}W^+W^-$ channel.
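The transverse cluster masses $M_C^{\ell\ell}$ and $M_C$ defined above combine a visible system (its transverse momentum and invariant mass) with the missing transverse momentum. A minimal numerical sketch of the definition (function and variable names are illustrative):

```python
import math

def cluster_transverse_mass(pt_vec, m_vis, met_vec):
    """Transverse cluster mass of a visible system with transverse momentum
    pt_vec and invariant mass m_vis, combined with the missing transverse
    momentum vector met_vec (all in GeV)."""
    ptx, pty = pt_vec
    metx, mety = met_vec
    met = math.hypot(metx, mety)
    et_vis = math.sqrt(ptx**2 + pty**2 + m_vis**2)  # transverse energy of the cluster
    sumx, sumy = ptx + metx, pty + mety             # total transverse momentum
    mc2 = (et_vis + met)**2 - (sumx**2 + sumy**2)
    return math.sqrt(max(mc2, 0.0))

# Sanity check: with no missing energy the cluster mass reduces to the
# invariant mass of the visible system.
print(cluster_transverse_mass((50.0, 0.0), 91.2, (0.0, 0.0)))  # -> 91.2
```

By construction $M_C$ is bounded from below by the invariant mass of the visible system, which is why the signal peaks near the kinematic thresholds exploited by the cuts.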
\subsection{Observability} \begin{figure}[!htbp] \begin{center} \includegraphics[width=0.44\linewidth]{bbtata_sen.pdf} \includegraphics[width=0.45\linewidth]{bbtata_lum_new.pdf} \end{center} \caption{Same as Fig.~\ref{bbaa_sen}, but for the $b\bar{b}\tau^+\tau^-$ channel. \label{bbtata_sen}} \end{figure} Based on our elaborate analysis of the signal channels in Secs.~\ref{sec:bbgg}--\ref{sec:bbWW}, we examine the observability of the neutral scalars $H^0,~A^0$ in the mass region $130$-$200~{\rm GeV}$ by adopting essentially the same cuts as before. In the left panels of Figs.~\ref{bbaa_sen}, \ref{bbtata_sen}, and \ref{bbll2v_sen} we present the significance $\mathcal{S}(S,B)$ reachable at LHC14@300 and LHC14@3000 as a function of $M_{\Delta}$ in the three signal channels $b\bar{b}\gamma\gamma$, $b\bar{b}\tau^+\tau^-$, and $b\bar{b}\ell^+\ell^-\cancel{E}_T$. The required luminosity to achieve a $3\sigma$ and $5\sigma$ significance is displayed in the right panels of the figures. As in our previous analysis, the effect of cascade enhancement is included via the factor $S/S_0$ in the final results. As shown in Figs.~\ref{bbaa_sen} and \ref{bbtata_sen}, both the $b\bar{b}\gamma\gamma$ and $b\bar{b}\tau^+\tau^-$ channels are mainly sensitive to the low-mass region ($M_{\Delta}\lesssim160~{\rm GeV}$). In the absence of cascade enhancement, the $3\sigma$ significance would never be reached for $M_{\Delta}\gtrsim138~(142)~{\rm GeV}$ in the $b\bar{b}\gamma\gamma$ ($b\bar{b}\tau^+\tau^-$) channel at LHC14@300. However, a cascade enhancement of $S/S_0\sim 4-6$ (as can be seen from Fig.~\ref{sgn}) in this mass region can greatly improve the observability, pushing the $3\sigma$ mass limit up to $157~(162)~{\rm GeV}$ in the $b\bar{b}\gamma\gamma$ ($b\bar{b}\tau^+\tau^-$) channel. Moreover, with cascade enhancement, one has a good chance to reach a $5\sigma$ significance if $M_{\Delta}\lesssim 153~(155)~{\rm GeV}$.
In other words, the cascade enhancement significantly reduces the required luminosity. For instance, to achieve a $3\sigma$ and $5\sigma$ significance in the $b\bar{b}\gamma\gamma$ ($b\bar{b}\tau^+\tau^-$) channel with $M_{\Delta}=130~{\rm GeV}$, the required luminosity is as low as $16~(10)~\textrm{fb}^{-1}$ and $42~(27)~\textrm{fb}^{-1}$ at LHC14, respectively. The $b\bar{b}\tau^+\tau^-$ channel is more promising, thanks to a relatively larger production rate. At the future LHC14 with $3000~\textrm{fb}^{-1}$ data, the heavier mass region can also be probed. With a maximal cascade enhancement, the $3\sigma$ and $5\sigma$ mass reach is pushed to $177$ and $164~{\rm GeV}$, respectively, in the $b\bar{b}\gamma\gamma$ channel, which should be compared to $156$ and $151~{\rm GeV}$ in the absence of enhancement. For the $b\bar{b}\tau^+\tau^-$ channel, the enhancement factor $S/S_0$ can reach about $18$ above the $W$-pair threshold, upshifting the $3\sigma$ and $5\sigma$ mass reach to $189$ and $177~{\rm GeV}$, respectively, from $154$ and $150~{\rm GeV}$ without the enhancement. \begin{figure}[!htbp] \begin{center} \includegraphics[width=0.44\linewidth]{bbll2v_sen.pdf} \includegraphics[width=0.45\linewidth]{bbll2v_lum_new.pdf} \end{center} \caption{Same as Fig.~\ref{bbaa_sen}, but for the $b\bar{b}\ell^+\ell^-\cancel{E}_T$ channel. \label{bbll2v_sen}} \end{figure} The $b\bar{b}\ell^+\ell^-\cancel{E}_T$ channel shown in Fig. \ref{bbll2v_sen} is more special, compared with $b\bar{b}\gamma\gamma$ and $b\bar{b}\tau^+\tau^-$. It is relatively more sensitive to a higher mass between $150$-$180~{\rm GeV}$, where the decay mode $H^0\to W^+W^-$ dominates, while its observability deteriorates for $M_{\Delta}<150~{\rm GeV}$ due to phase-space suppression in the decay. The cascade enhancement $S/S_0$ at our benchmark point (\ref{BP}) is typically $3$-$4$ in the mass region $130$-$200~{\rm GeV}$, and decreases as $M_{\Delta}$ increases. 
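Since the signal and background yields both scale linearly with the integrated luminosity, the significance scales as $\sqrt{L}$, so the required luminosities quoted above follow directly from the significances at a reference luminosity. A minimal sketch assuming pure $\sqrt{L}$ scaling:

```python
def required_luminosity(z_target, z_ref, lumi_ref):
    """Luminosity needed to reach significance z_target, given a
    significance z_ref at reference luminosity lumi_ref, assuming
    the significance scales as sqrt(L)."""
    return lumi_ref * (z_target / z_ref) ** 2

# bb tau tau, M_Delta = 130 GeV with cascade enhancement:
# significance 51.6 at 3000 fb^-1 (from the cut-flow table)
print(required_luminosity(3.0, 51.6, 3000.0))  # ~10 fb^-1 for 3 sigma
print(required_luminosity(5.0, 51.6, 3000.0))  # ~28 fb^-1 for 5 sigma
```

These estimates agree with the $10$ and $27~\textrm{fb}^{-1}$ quoted in the text up to rounding.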
For LHC14@300, the $3\sigma$ and $5\sigma$ mass reach is, respectively, $190$ and $181~{\rm GeV}$ with maximal cascade enhancement. These limits would increase by just $2$-$3~{\rm GeV}$ for LHC14@3000 if there were no cascade enhancement, while with cascade enhancement the $5\sigma$ limit, for instance, is pushed up to $200~{\rm GeV}$. Finally, a $3\sigma$ or $5\sigma$ reach in the mass region $150$-$180~{\rm GeV}$ requires an integrated luminosity of $50~\textrm{fb}^{-1}$ ($450~\textrm{fb}^{-1}$) or $150~\textrm{fb}^{-1}$ ($1300~\textrm{fb}^{-1}$) with (without) cascade enhancement. \section{Discussions and Conclusions} \label{Dis} In this paper, we have systematically investigated the LHC phenomenology of neutral scalar pair production in the negative scenario of the type II seesaw model. To achieve this goal, we first examined the decay properties of the neutral scalars $H^0/A^0$ and found that the scalar self-couplings $\lambda_i$ have a great impact on the branching ratios of $H^0/A^0$. The coupling $\lambda_4$ is important for tree-level decays of $H^0$ and $A^0$, while one-loop-induced decays of $H^0$ further depend on $\lambda_2$ and $\lambda_3$. We found that the decay $H^0\to W^+W^-$ can dominate for $2M_W<M_{H^0}<2M_h$ with $\lambda_4<0$, while it can be neglected once $M_{H^0}$ is above the light scalar pair threshold $2M_h$. Moreover, the branching ratios of the decays $H^0\to \gamma\gamma,~Z\gamma$ can vary over 3 orders of magnitude as the couplings $\lambda_i$ are varied, and there exist zero points for the $H^0ZZ$, $H^0hh$, and $A^0Zh$ couplings. The cross section of the Drell-Yan process $pp\to Z^*\to H^0A^0$ for $M_{\Delta}<200~{\rm GeV}$ is much larger than that of SM Higgs pair production driven by gluon fusion. We then studied the contributions to $H^0/A^0$ production from cascade decays of the charged scalars $H^{\pm}$ and $H^{\pm\pm}$.
There are actually three different states for the neutral scalar pair: $H^0A^0$, $H^0H^0$, and $A^0A^0$. Here, $H^0H^0$ and $A^0A^0$ can only arise from cascade decays of charged scalars, and their production rates always stay the same to a good approximation. Further, for a fixed value of $M_{\Delta}$, cascade enhancement is determined by the variables $v_{\Delta}$ and $\Delta M$. By tuning these two variables, the associated production rate of $H^0A^0$ can be maximally enhanced by about a factor of 3, while those of the $H^0H^0$ and $A^0A^0$ pair production can reach the value of $H^0A^0$ production through the pure Drell-Yan process. We implemented detailed collider simulations of the associated $H^0A^0$ production for three typical signal channels ($b\bar{b}\gamma\gamma$, $b\bar{b}\tau^+\tau^-$, and $b\bar{b}W^+W^-$ with both $W$'s decaying leptonically). The enhancement from cascade decays of charged scalars is quantified by a multiplicative factor $S/S_0$. Due mainly to a larger production rate, all three channels are more promising than the SM Higgs pair case. If there were no cascade enhancement, the $5\sigma$ mass reach of the $b\bar{b}\gamma\gamma$, $b\bar{b}\tau^+\tau^-$, and $b\bar{b}\ell^+\ell^-\cancel{E}_T$ channels would be, respectively, $151$, $150$, and $180~{\rm GeV}$ for LHC14@3000. The cascade enhancement pushes these limits up to $164$, $177$, and $200~{\rm GeV}$. The $b\bar{b}\gamma\gamma$ and $b\bar{b}\tau^+\tau^-$ channels are more promising in the mass region below about $150~{\rm GeV}$, and the required luminosities for $5\sigma$ significance are $42~\textrm{fb}^{-1}$ and $27~\textrm{fb}^{-1}$, respectively, at our benchmark point. Compared with these two channels, the $b\bar{b}\ell^+\ell^-\cancel{E}_T$ channel is more advantageous in the relatively higher mass region $150$-$200~{\rm GeV}$, and the required luminosity for $5\sigma$ significance is about $150~\textrm{fb}^{-1}$ with maximal cascade enhancement. 
Needless to say, for the purpose of a full investigation on the impact of heavy neutral scalars on the SM Higgs pair production, more sophisticated simulations are necessary. We hope that this work may shed some light on further studies in both the phenomenological and experimental communities. \section*{Acknowledgments} This work was supported in part by the Grants No. NSFC-11025525, No. NSFC-11575089 and by the CAS Center for Excellence in Particle Physics (CCEPP).
\section{Introduction} The Cepheus E (Cep E) outflow is an excellent object to study the relationship between the physical properties of optical stellar jets and those of embedded outflows. The existence of a molecular CO flow originating in that region was first indicated by \markcite{fuk89}Fukui (1989) in his catalog of molecular outflows, but it was the K$'$ image of \markcite{hod94}Hodapp (1994), from his imaging survey of molecular outflows, that brought Cep E to the forefront. The south lobe of Cep E is observed optically in H$\alpha$\ and [SII] $\lambda\lambda$6717/31 (\markcite{anc97}Noriega-Crespo 1997), and it has been named Herbig-Haro (HH) object 337 (\markcite{dev97}Devine, Reipurth \& Bally 1997). The spectrum of this lobe displays one of the lowest ionizations measured in HH objects, with a ratio [SII]/H$\alpha$ = $8.8\pm 0.2$ (\markcite{aya98}Ayala et al. 1998). Cep E is bright in H$_2$ emission lines at 2$\mu$m~and is likely to be driven by the IRAS 23011+6126 source, presumably a Class 0 object (\markcite{eis96}Eisl\"offel et al. 1996). Because of its complex morphology and strong H$_2$ emission, Cep E has been used to gauge the reliability of 3-D molecular jet models (\markcite{sut97}Sutter et al. 1997). There is also clear evidence for a second outflow in H$_2$ almost perpendicular to the main flow emission, as indicated by some faint H$_2$ knots (\markcite{eis96}Eisl\"offel et al. 1996). Perhaps more surprising is the presence of a second molecular CO outflow, detected in both $^{12}$CO $J=2-1$ and $^{12}$CO $J=1-0$ molecular lines (\markcite{lad97}Ladd \& Hodapp 1997; \markcite{lad97b}Ladd \& Howe 1997), at an angle of $\sim 52$\hbox{$^\circ$}\ from the main CO flow, i.e. apparently unrelated to the second H$_2$ outflow.
The $^{12}$CO emission indicates terminal velocities of 80 km~$\rm{s}^{-1}$~and $-$125 km~$\rm{s}^{-1}$~in its north and south lobes respectively, which implies a dynamical age of $\sim 3\times 10^3$ years, at a distance of 730 pc (\markcite{eis96}Eisl\"offel et al. 1996). The absence of a second source in interferometric observations at 2.65 mm suggests that both outflows arise very close to the position of the IRAS 23011+6126 source (\markcite{lad97b}Ladd \& Howe 1997). In this study we present new ISOCAM images of the Cep E outflow taken in the ground vibrational level (v=0-0) H$_2$ lines S(5) at 6.91 $\mu$m~and S(3) at 9.66 $\mu$m. The motivation of this work was to use the ground vibrational H$_2$ lines, which are predicted to be much brighter than the v=1-0 S(1) line (\markcite{wol91}Wolfire \& K\"onigl 1991), to study the excitation across the outflow (using the S(3)/S(5) ratio), to try to detect the second H$_2$ outflow and the central source at mid-infrared wavelengths, and to search for traces of H$_2$ emission along the second CO outflow. These observations are complemented by HiRes IRAS images. \section{Observations} The Cep E outflow has a projected size onto the plane of the sky of $\sim 1\hbox{$^\prime$}$, so the ISOCAM images were obtained using a 2$\times$3 CVF raster map with a 6\hbox{$^{\prime\prime}$}~FOV pixel scale and 30\hbox{$^{\prime\prime}$}~steps. Four CVF filters were used: two centered very close to the v=0-0 S(3) 9.665 $\mu$m~and S(5) 6.909 $\mu$m~H$_2$ lines, respectively (see below), plus two nearby continuum CVF steps at 9.535 $\mu$m~and 6.855 $\mu$m. We selected the S(3) 9.665 $\mu$m~line, despite the fact that its wavelength is right in the middle of the strong silicate absorption feature at $\sim 9.7$ $\mu$m, because the line is expected to be strong and should help to constrain the depth of the silicate absorption in the models of the spectral energy distribution (SED).
The ISOCAM data were deglitched using the Multi-resolution Median Transform Method (CIA). The detectors' transient effects were treated using the IPAC Model (Ganga 1997). The target dedicated time (TDT) was 2728 seconds, with twelve stabilization time steps on line and ten in the continuum prior to the on-target observations. The TDT was spent with 2/3 of the time on line and 1/3 in the continuum, i.~e. approximately 900 secs in each of the H$_2$ lines. The ADU fluxes were transformed into calibrated fluxes using the upgraded values of the system response (e.~g. ISOCAM Observer's Manual, Tables 12-17). These values are given in ADU/sec/mJy/pixel and correspond to 125.52 (step 227 \@ 9.660 $\mu$m), 65.80 (step 331 \@ 9.535 $\mu$m), 122.12 (step 21 \@ 6.911 $\mu$m) and 121.686 (step 22 \@ 6.855 $\mu$m). The FWHMs of the four filters are (9.660, 9.535, 6.911, 6.855) $\mu$m~ = (0.27, 0.22, 0.17, 0.17) $\mu$m; and we used an integration time step of 2 secs and an ADC gain of 2. We also present for comparison a near-infrared image in the v=1-0 S(1) 2.12~$\mu$m~line obtained at the 3.5m Apache Point Observatory with a 256$\times$256 array at f/5 with a 0.482\hbox{$^{\prime\prime}$} per pixel scale. A complete analysis of the imaging and spectroscopic near-infrared data is presented elsewhere (Ayala et al. 1998). HiRes IRAS images are also presented to support these observations. These images have a one-degree field of view with 15\hbox{$^{\prime\prime}$}~pixels and can reach a spatial resolution of $\sim 1\hbox{$^\prime$}$. The HiRes images have been processed using Yu Cao's algorithm (see e.~g. Noriega-Crespo et al. 1997). \section{Results} \subsection{Morphology} The grayscale images of H$_2$ at 6.91 $\mu$m~and 9.66 $\mu$m~are presented in Figures 1 and 2 respectively. From these images it is evident that the morphology of these different molecular hydrogen lines is very similar. It is also clear that the main difference between them is the central intensity peak at 6.91 $\mu$m.
This intensity peak coincides with the position of the IRAS 23011+6126 source, as determined using interferometric observations by \markcite{eis96}Eisl\"offel et al. (1996), i.~e. $\alpha$(2000) = 23h 03m 13.0s, $\delta$(2000) = 61\hbox{$^\circ$}~42\hbox{$^\prime$}~ 26.5\hbox{$^{\prime\prime}$}. The IRAS 23011+6126 source is also detected in the continuum frame at 6.855 $\mu$m, indicating that the emission is not dominated by the excited H$_2$ emission. Figure 3 shows the 6.91$\mu$m~image with an overlay of the H$_2$ 2.12 $\mu$m~emission, and once again the morphology of both molecular lines is very similar. One way to understand these observations is in terms of the interstellar extinction law (see e.~g. \markcite{mat90}Mathis 1990), which has a minimum at $\sim 7$ $\mu$m~and then rises to a peak at $\sim 10$ $\mu$m, this maximum being due to the silicate absorption feature. If the IRAS 23011+6126 source is embedded in a dusty envelope, as expected for a Class 0 source, then the absorption by silicates will be even larger and enough to swamp the H$_2$ emission at 9.66 $\mu$m. The IRAS 23011+6126 source appears in all the IRAS bands, as is shown in Figure 4, which displays IRAS HiRes maps at 12, 25, 60 and 100 $\mu$m~centered on the Cep E source. We recall that another two outflows have been detected around the Cep E source. One outflow is observed in the 2.12 $\mu$m~H$_2$ emission and is almost perpendicular to the main H$_2$ flow. The second one is detected in the $^{12}$CO $J = 2-1$ transition and is centered on the IRAS 23011+6126 source ($\pm$2.3\hbox{$^{\prime\prime}$}), with an orientation of $\sim 52$\hbox{$^\circ$}~with respect to the main H$_2$ flow and a scale of $\sim 4$\hbox{$^\prime$}~(\markcite{lad97}Ladd \& Hodapp 1997). We do not detect in the H$_2$ lines at 9.66 $\mu$m~or 6.91 $\mu$m~any signature of the second H$_2$ outflow or of the second CO outflow observed at millimeter wavelengths.
For the faint 2.12 $\mu$m~H$_2$ outflow this perhaps is not surprising, since the S(5) 6.91 $\mu$m~line would have needed to be $\sim 70$ times stronger than the S(1) 2.12 $\mu$m~line (based on simple C-type shock models, e.g. \markcite{smth95}Smith 1995) to overcome its very low surface brightness in comparison with the brightest regions. We estimate that the RMS noise levels of our ISOCAM images, based on measurements of the background, are approximately 46 $\mu$Jy/arcsec$^2$ at 9.66 $\mu$m, and 12 $\mu$Jy/arcsec$^2$ at 6.91 $\mu$m. The values of the minimum contours in Figures 1b and 2b (continuum subtracted) are 0.4 and 0.2 mJy/arcsec$^2$ for the 9.66 $\mu$m~and 6.91 $\mu$m~lines respectively, so a brightness lower by a factor of 70 for the faint H$_2$ outflow is at the noise level. There are no traces in the IRAS HiRes images of the faint H$_2$ flow or of the CO outflow. Neither do the HiRes images resolve the second source near IRAS 23011+6126, as shown in Figure 4. The 12 and 25 $\mu$m~images in Figure 4 display only point sources, and although there is some faint emission at 60 $\mu$m~and 100 $\mu$m~ at a PA$\sim -45$\hbox{$^\circ$}, this is probably due to the diffuse emission from the nearby sources. The existence of a second source at 2$\hbox{$^\prime$}$ SW from the IRAS 23011+6126 source has recently been confirmed by \markcite{tst98}L. Testi (1998) with OVRO observations at 1.3 mm. \subsection{Fluxes} As mentioned before, the $v = 0 - 0$ S(3) and S(5) H$_2$ lines were selected for our observations because plane-parallel molecular shock models predict, for the conditions encountered in HH objects, that these lines could be $\sim 10 - 100$ times stronger than the $v = 1 - 0$ S(1) 2.12 $\mu$m~line if they are collisionally excited. Molecular shock models specifically calculated for HH objects (\markcite{wol91}Wolfire \& K\"onigl 1991) considered as typical parameters e.~g.
a shock velocity of $v_s = $25 km~$\rm{s}^{-1}$, an initial preshock gas density of $n_0 = 10^3$ $\rm{cm}^{-3}$~and a preshock magnetic field of $B_0 = 30$ $\mu$G, a molecular hydrogen abundance ${n_{H_2}\over n}\sim 0.5$, an atomic hydrogen abundance ${n_{H}\over n}\sim 3\times 10^{-3}$, a neutral gas temperature of $T\sim 2000$ K and an electron temperature $T_e \sim 3000$ K. The point of enumerating some of these parameters (there are a few more) is to illustrate the difficulty in comparing shock models with observations and to stress that the models should be taken as a guide, since the physical conditions across a shock front change and are more complex. For the input parameters of the shock models mentioned above, the expectation is that the ratios of the H$_2$ lines should be ${0-0~S(3)\over 1-0~S(1)} \sim 157$ and ${0-0~S(5)\over 1-0~S(1)} \sim 28$, or essentially ${0-0~S(3)\over 0-0~S(5)} = 5.6$. Figure 5 shows the ratio of our S(5) to S(3) H$_2$ images, which indicates that the ratio across the outflow is nearly unity. The ratio is constant except for the region around the IRAS 23011+6126 source, which is not detected in the S(3) 9.66 $\mu$m~image, and at the edge of the outflow, which is probably due to an artifact produced by the mismatch between the elliptical Gaussian function used to smear out the S(5) 6.91 $\mu$m~ image and the true shape of the first Airy ring produced by the ISOCAM optics. Nearly constant distributions across the outflow have also been measured for the ${2-1~S(1)\over 1-0~S(1)}$ and ${3-2~S(3)\over 1-0~S(1)}$ ratios \markcite{eis96}(Eisl\"offel et al. 1996). This behavior of the ratios is very difficult to explain with simple shock models or shock geometries, and in the case of the v=2-1 S(1) and v=3-2 S(3) line ratios, the best results were obtained with C-type bow shocks.
Such a C-type bow shock model required shock velocities of $\sim 200$ km~$\rm{s}^{-1}$~and preshock densities of $\sim 10^6$ $\rm{cm}^{-3}$~(\markcite{eis96}Eisl\"offel et al. 1996), which are larger than the values measured in most optical outflows. A constant v=0-0 S(5) to S(3) ratio also indicates the lack of a steep extinction gradient between the north and south lobes, and that the extinction is significant mostly around the IRAS 23011+6126 source. This is interesting because we know that the south lobe is visible at optical wavelengths (\markcite{anc97}Noriega-Crespo 1997), while the north lobe is not. Finally, we have measured the flux of the IRAS 23011+6126 source at 6.855 $\mu$m, set an upper limit for the 9.535 $\mu$m~flux (which may still be affected by the silicate feature) and measured the IRAS fluxes from the HiRes images. In Figure 6 we show the SED of the source, including the values obtained at 1.25 mm and 2.22 $\mu$m~by \markcite{lef96}Lefloch et al. (1996) and at 2.65 mm by \markcite{lad97b}Ladd \& Howe (1997). We have overplotted (following \markcite{lad97b}Ladd \& Howe 1997) four simple gray body models, at T$_{dust}$ = 10, 20, 30 and 40 K, of the form $F_\nu = B_\nu(T)~(1 - e^{-\tau})~\Omega_s$ with $\tau \propto \nu^{2}$ and the optical depth normalized to the 2.65 mm flux. The simple models do a reasonable job at the IRAS and millimeter wavelengths for T$_{dust}$ = 20 - 30 K, but are unable to fit the shorter wavelengths. It is possible to build more sophisticated models for the SED which include a density and temperature structure for the cloud core or envelope which surrounds the Class 0 object (\markcite{and94}Andr\'e \& Montmerle 1994), and which also take into account the dust opacity as a function of chemical composition and wavelength (see Appendix).
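The gray body form quoted above is simple enough to sketch numerically. In the snippet below the normalization $\tau$(2.65 mm) and the solid angle $\Omega_s$ are placeholder values (the fitted ones are not given in the text), and the optical depth is assumed here to rise as $\nu^2$ toward shorter wavelengths (dust emissivity index 2):

```python
import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

def planck(nu, T):
    """Planck function B_nu(T) in W m^-2 Hz^-1 sr^-1."""
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

def graybody(lam, T, tau0=0.01, omega_s=1e-8):
    """F_nu = B_nu(T) (1 - e^-tau) Omega_s, with tau proportional to nu^2
    and normalized at 2.65 mm.  tau0 and omega_s are placeholders,
    not the fitted values."""
    nu, nu0 = c / lam, c / 2.65e-3
    tau = tau0 * (nu / nu0) ** 2
    return planck(nu, T) * -np.expm1(-tau) * omega_s

for T in (10, 20, 30, 40):
    ratio = graybody(100e-6, T) / graybody(2.65e-3, T)
    print(f"T = {T:2d} K: F(100 um)/F(2.65 mm) = {ratio:.2f}")
```

Warmer dust shifts progressively more flux toward the IRAS bands relative to the millimeter point, which is why a single-temperature gray body pinned at 2.65 mm cannot simultaneously fit the short-wavelength data.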
In Figure 7 we display three models with different inner gas cloud core densities ($6\times 10^4$, 8$\times 10^4$ and $10^5$ $\rm{cm}^{-3}$), but otherwise identical in their input parameters. The model with an initial density of 8$\times 10^4$ $\rm{cm}^{-3}$~(solid line) fits the observations remarkably well and is consistent with the upper limits at 2.22 $\mu$m~and 9.535 $\mu$m, which are based on non-detections. The models also illustrate how well they fit the IRAS and sub/millimeter observations, and how sensitive they are to the presence of the $\sim 20$ $\mu$m~silicate feature. This feature essentially disappears in the $6\times 10^4$ $\rm{cm}^{-3}$~model (dotted line) and, not surprisingly, becomes deeper at the higher density of $10^5$ $\rm{cm}^{-3}$~(dashed line). The model SEDs in Figure 7 assume a dust opacity dominated by bare silicates, hence the strong absorption features at 10 and 20 $\mu$m, and a dust temperature of 18 K. The models assume power-law density and temperature distributions (see Appendix), with a core inner radius of 0.065 AU, and an outer radius of 0.1 pc. The total masses of the cloud envelope surrounding the IRAS source for the three models (in order of increasing density) are 13, 17 and 22 M$_{\odot}$. Our best model (with T$_{dust}$ = 18 K and M$_{env}$ = 17 M$_{\odot}$) yields different values from those obtained by using a constant density distribution and the simple gray body models, i.~e. T$_{dust}$ = 20 K and M$_{env}$ = 10 M$_{\odot}$~(\markcite{ladb}Ladd \& Howe 1997). \section{Conclusions} We have analyzed some of the physical characteristics of the Cep E embedded outflow using ISOCAM images in the v=0-0 S(5) 6.91 $\mu$m~and S(3) 9.66 $\mu$m~molecular hydrogen lines. We find that the morphology of the Cep E outflow in the ground-vibrational H$_2$ lines is similar to that of the near-infrared emission in the v=1-0 2.12 $\mu$m~line.
At these wavelengths, and at surface brightness levels of 12 - 46 $\mu$Jy/arcsec$^2$, we do not detect the second H$_2$ outflow almost perpendicular to the main 2.12 $\mu$m~flow, nor did we find traces of H$_2$ along the second $^{12}$CO $J = 2-1$ outflow at a $\sim 52$\hbox{$^\circ$}~angle. We detect at 6.91 $\mu$m~the likely source of the main H$_2$ and CO outflows, IRAS 23011+6126, and show that the source is well detected in all IRAS bands using HiRes images. The source is not detected at 9.66 $\mu$m (nor at 9.54 $\mu$m), but we think this agrees with the interstellar extinction curve, which has a minimum at $\sim 7$ $\mu$m~but rises at $\sim 9.7$ $\mu$m~due to the strong silicate absorption feature, further enhanced in this case by a cocoon surrounding the Class 0 object, as the model of the SED seems to indicate. The ${0-0~S(5)}\over {0-0~S(3)}$ ratio is uniform and near unity across the outflow, a fact which is difficult to explain with simple plane-parallel shock models (\markcite{eis96}Eisl\"offel et al. 1996). A constant S(5) to S(3) ratio also indicates that the extinction, with the exception of the region around the IRAS 23011+6126 source, does not affect the H$_2$ emission of the outflow lobes at these wavelengths. Assuming that the main source of opacity around IRAS 23011+6126 is due to bare silicates, our best model of the SED for the envelope surrounding the Class 0 source yields a total mass of 17 M$_{\odot}$~and a dust temperature of 18 K. \acknowledgements We thank Ken Ganga for his help and insight on the ISOCAM calibration and the data reduction procedures. Our gratitude goes also to Jochen Eisl\"offel for sharing with us the analysis of Cep E with ISO. We thank N. King for obtaining the imaging data, and the referee for helpful comments and a careful reading of the manuscript.
\clearpage \begin{center} {\bf Appendix} \end{center} The envelope around the Cep E source is modeled as a series of spherically symmetric dust shells, where temperature and density are assumed to vary according to radial power laws whose exponents are free parameters. Once the external radius of the envelope is fixed, the inner envelope boundary is also determined and is equal to the radius where dust attains its sublimation temperature ($\sim$1500 K). As our aim was not only to provide a reasonable model for the global spectral energy distribution of the Cep E source, but also to verify whether the non-detection at 9.67 $\mu$m~could be due to silicate absorption, we decided to adopt the dust opacities tabulated by \markcite{oh94}Ossenkopf \& Henning (1994) for an MRN (\markcite{mrn77}Mathis, Rumpl \& Nordsieck 1977) silicate dust with variable volumes of ice mantles, instead of the usual assumption (e.g. \markcite{h83}Hildebrand 1983) of a dust opacity simply expressed as a power law of frequency. Dust emission is computed from each shell as: \begin{equation} F_{\nu, i}= \kappa_{\nu} B(\nu, T_i) \rho_i V_i \label{flux} \end{equation} where $\kappa_{\nu}$ is the dust mass opacity, $B(\nu, T)$ is the Planck function, $\rho_i$ is the density and V$_i$ is the volume of the $i^{th}$ shell. First, for each frequency the optical depth is computed inward as $\tau=\sum _i (\kappa_{\nu} \rho_i dr_i)$ until either $\tau=1$ is reached at some shell, or the inner envelope radius is reached; having identified the innermost shell which contributes to the observed radiation, the emitted flux is computed outward according to Eq.~\ref{flux} until the external radius is reached. The flux from each shell is attenuated by the intervening shells toward the observer, and finally the contributions from all shells are summed at each frequency. \clearpage
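The shell-by-shell recipe above can be sketched numerically as follows; this is an illustrative simplification only (the shell grid, the opacity value, and the omitted $1/4\pi d^2$ dilution toward the observer are placeholder assumptions, not the actual model inputs):

```python
import numpy as np

def planck(nu, T):
    """Planck function B(nu, T) in SI units (W m^-2 Hz^-1 sr^-1)."""
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

def envelope_flux(nu, r, rho, T, kappa_nu):
    """Dust emission from spherical shells ordered inner -> outer.
    r in m, rho in kg m^-3, T in K, kappa_nu in m^2 kg^-1.  The distance
    dilution toward the observer is omitted for simplicity."""
    dr = np.gradient(r)
    dtau = kappa_nu * rho * dr
    # Optical depth from just outside each shell to the external radius,
    # i.e. the inward-integrated tau of the intervening shells:
    tau_out = np.cumsum(dtau[::-1])[::-1] - dtau
    # Shells below the tau = 1 surface do not contribute:
    visible = tau_out < 1.0
    # Emission per shell, Eq. (A1), attenuated by the shells in front:
    V = 4.0 * np.pi * r**2 * dr
    F = kappa_nu * planck(nu, T) * rho * V * np.exp(-tau_out)
    return F[visible].sum()
```

In the optically thin regime the summed emission scales linearly with the density normalization, which is the behavior the three Figure 7 models (masses of 13, 17 and 22 M$_{\odot}$ for inner densities of $6\times 10^4$, $8\times 10^4$ and $10^5$ cm$^{-3}$) also display.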
\section{Introduction} The `nature' of the neutrino mass (if any) has been for years one of the most intriguing puzzles of elementary-particle physics. The point still at issue is whether real neutrinos may even be {\it self-conjugate} (or self charge-conjugate) like Majorana particles $[{\ref{Majorana1937}}]$, and further endowed with so-called `Majorana masses' $[{\ref{Jehle1949}, \ref{Serpe1949}}]$, or whether there actually exist mere `Dirac mass' neutrinos looking like standard fermions ({\it different} from their own antiparticles). In this regard, the most general neutrino model available for each lepton family is believed to include an overall Lagrangian mass term with both `Dirac' and `Majorana' contributions, with the latter contribution being the sum of two distinct mass terms for wholly `active' and `sterile' neutrino types $[{\ref{Esposito1998}}]$. Such a model, if suitably conceived with a `Majorana' sector having just one nonzero term, relevant to a super-heavy `sterile' neutrino type, can in particular be seen to account $-$ via the well-known See--Saw Mechanism $[{\ref{Gell-Mann1979}}]$ $-$ for the very small size to which the actual neutrino masses seem to be confined. Renewed interest in Majorana's conjecture has recently been aroused also by the experimental discovery of Majorana bound states (or quasiparticles) in superconductors $[{\ref{Mourik2012}}-{\ref{Esposito2013}}]$. This paper deals with some subtle, and not yet thoroughly investigated, basic theoretical aspects concerning the very idea of a `Majorana' Lagrangian mass term as a way to get either an `active' or a `sterile' up-to-date version of the original (manifestly self-conjugate) Majorana neutrino field. It should, first of all, be recalled that the `Majorana mass' construct cannot really be traced back to Majorana himself, nor can it be said to be essential for a self-conjugate fermion. Despite this, that a `Majorana mass' fermion should just be a Majorana particle $-$ i.e.
a really neutral fermion $-$ is normally regarded as quite an obvious conclusion, which automatically follows from an extended use of the well-known formula $-$ Eq.~(\ref{2.9bis}) $-$ defining the `charge conjugate' of a standard Dirac field. According to common views, the full legitimacy of such a procedure is in particular believed to be unquestionable. In doing so, however, one is {\it not} directly applying charge conjugation (or particle--antiparticle conjugation) as it is {\it primarily} defined within Quantum Field Theory (QFT): namely, an operation, $C$, truly acting on annihilation and creation operators and merely consisting in turning them into their own `charge conjugates' (with no changes in either four-momenta or helicities) $[{\ref{Merzbacher1970}}]$. As already pointed out by Dvornikov in his canonical quantization of a massive Weyl field $[{\ref{Dvornikov2012}}]$, this should not be taken as a negligible detail. This can be seen even better on going over to the zero-mass case. It is sufficient, for example, to consider the straightforward hints given here in Sec.~2 on how to interpret the two couples of fermionic and antifermionic Weyl solutions in order to make sure that $C$ may really have {\it no} effect on helicities. These hints show that the usual approach to defining the `charge conjugate' of a Weyl field does not seem at all to lead to the appropriate choice. They also suggest the need for a more general check on the real consistency of a procedure that does nothing but {\it borrow} the standard definition of a `charge conjugate' Dirac field. The simplest way to do so is just to make {\it direct} use of the above-mentioned fundamental representation of $C$. Following this route, a basic {\it new} outcome is obtained here which overturns the current reading.
It is found, indeed, that an {\it active} `Majorana mass' fermion field and its {\it sterile} counterpart prove to be {\it mutually charge-conjugate} rather than individually self-conjugate, and so, at most, they may give rise to {\it one and the same} (really neutral) Majorana field if they are further required to coincide. The formal aspects relevant to the whole question (including an explicit representation of the effective `new' action of $C$ on single chiral fields) are discussed at length in Sec.~3. In the subsequent section, moreover, it is shown that, regardless of whether the `Majorana mass' or `Dirac mass' case is considered, there generally exists a fully {\it symmetrical} link connecting both an active spin-$\frac{1}{2}$ field and its {\it charge conjugate} sterile counterpart with the corresponding pair of {\it charge conjugate} Dirac fields whence they have been constructed. This link is given by a unitary transformation which is also its own {\it inverse}, and it suitably allows an {\it extended} ($8\times8$) matrix representation for $C$. In the light of the new formalism, on the other hand, the conclusion may also be drawn that a {\it true} (really self-conjugate) Majorana field can no longer turn out to be of two {\it different} $-$ `active' and `sterile' $-$ types, and furthermore (in strict accordance with its self-conjugate nature) it can be assigned only a {\it unified} mass kind which may at once be viewed as either a `Majorana-like' or a `Dirac-like' mass kind. In Secs.~5, 6 and 7, a full insight is gained into the general variety of `charges' that should now characterize a genuine `Majorana mass' fermion and tell it from a genuine `Dirac mass' fermion. The former particle, unlike the latter, should actually be endowed with {\it pseudoscalar-type} (or {\it axial-type}) charges and be `neutral' as regards {\it scalar-type} charges only [{\ref{Dvoeglazov2012}}].
Its `neutrality', in other words, is now to be meant no longer under $C$, but rather under a {\it more restrictive} `charge conjugation' operation which leaves pseudoscalar-type charges unvaried and merely corresponds to a `scalar-charge conjugation' operation. One such charged spin-$\frac{1}{2}$ particle would {\it in turn} amount to a `fermion' or an `antifermion' depending on the chirality involved. Thus, for instance, an active `Majorana mass' neutrino is now to be referred to as a `lepton' (having {\it positive} lepton number) or an `antilepton' (having {\it negative} lepton number) according to whether it is a {\it left-handed} or a {\it right-handed} particle, whereas the exact converse (with `lepton' and `antilepton' {\it interchanged}) should hold for its `charge conjugate' sterile counterpart. In close connection with this, active and sterile `Majorana mass' neutrinos may now be regarded as truly {\it obeying} ordinary mirror symmetry as just the analogue of `$CP$ symmetry' for Dirac neutrinos. A manifest (maximum) $C$ violation is instead to be recognized in their (maximally) asymmetrical dynamical behaviors, and this should actually imply, in the light of the $CPT$ theorem, a (maximum) {\it time reversal} violation as well (just counterbalancing the `recovered' $P$ symmetry). The new reading can also be seen, in particular, neither to influence the usual expectation for a neutrinoless double $\beta$-decay, nor to rule out the possibility $-$ still compatible with $CPT$ symmetry $-$ of {\it different} mass values for the two (active and sterile) `Majorana mass' neutrino versions. In Sec.~8, finally, it is pointed out that a pair of charge conjugate spin-$\frac{1}{2}$ fields with identical masses, whether it may be a `Dirac mass' or a `Majorana mass' field pair, can always be expressed as a linear combination of a couple of {\it true} Majorana fields with opposite $CP$ intrinsic parities (and identical masses).
\section{A `Majorana mass' neutrino as not exactly a genuine (really neutral) Majorana particle} According to Majorana's early approach $[{\ref{Majorana1937}}]$, a {\it self-conjugate} neutrino is a really neutral spin-$\frac{1}{2}$ particle which may be formally assigned, say, a Dirac field solution of the special type \begin{equation} \psi_{\rm M}(x) = \frac{1}{\sqrt{2}}\left[\psi(x) + \psi^c(x)\right] \label{2.1} \end{equation} \noindent ($x \equiv x^\mu; \mu=0,1,2,3$), where $\psi^c(x)$, defined as \begin{equation} \psi^c(x) \equiv C \psi(x) C^{-1} = U_C\psi^{\dagger {\rm T}}(x), \label{2.9bis} \end{equation} \noindent is the charge conjugate of a standard Dirac field solution $\psi(x)$, such that $\psi(x)\not=\psi^c(x)$. Here $U_C$ denotes the usual charge-conjugation (or $C$) matrix, and $\psi^{\dagger {\rm T}}$ is the transpose of the adjoint solution $\psi^\dagger$. The field given by Eq.~(\ref{2.1}) has a {\it manifest} self-conjugate form: \begin{equation} \psi_{\rm M}^c(x) = \frac{1}{\sqrt{2}}\left[\psi^c(x) + \psi(x)\right] = \psi_{\rm M}(x), \label{2.1bis} \end{equation} \noindent and it can thus be automatically expanded in terms of {\it net} annihilation and creation operators coinciding with their own charge conjugates. This field, in other words, is self-conjugate {\it by definition}; and the associated fermion, usually known as a {\it Majorana particle}, is such that it cannot possibly be distinguished from its antiparticle. 
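The algebraic properties of $U_C$ used later in the text $-$ namely $U_C \gamma^{5\dagger {\rm T}}=-\gamma^5 U_C$ and $U_C^{\dagger {\rm T}} = U_C^{-1}$ $-$ can be verified numerically in the Dirac representation. The choice $U_C = i\gamma^2$ made below is one common convention, assumed here for illustration since the text does not fix a representation:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)

g0 = np.block([[I2, Z2], [Z2, -I2]])                  # Dirac representation
g1, g2, g3 = (np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz))
g5 = 1j * g0 @ g1 @ g2 @ g3                           # chirality matrix

UC = 1j * g2    # one common choice of U_C in psi^c = U_C psi^{dagger T}

PL, PR = (np.eye(4) - g5) / 2, (np.eye(4) + g5) / 2   # chiral projectors

# gamma^{5 dagger T} is just the complex conjugate of gamma^5:
assert np.allclose(UC @ g5.conj(), -g5 @ UC)      # U_C g5^{dag T} = -g5 U_C
assert np.allclose(UC.conj(), np.linalg.inv(UC))  # U_C^{dag T} = U_C^{-1}
# Chirality flip under the spinor-space mapping (cf. Eq. (2.7bis)):
assert np.allclose(UC @ PL.T, PR @ UC)
print("identities verified")
```

The last assertion is the matrix-level content of the statement that the mapping $U_C(\cdot)^{\dagger {\rm T}}$ turns a left-handed chiral component into a right-handed one.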
If we in particular split $\psi_{\rm M}$ into a {\it left-handed} chiral component, $\frac{1}{2}(1 - \gamma^5) \psi_{\rm M}$, plus a {\it right-handed} one, $\frac{1}{2}(1 + \gamma^5)\psi_{\rm M}$, where $\gamma^5$ ($\equiv i\gamma^0\gamma^1\gamma^2\gamma^3$) is just denoting the chirality matrix, we then have that the former (latter) component taken alone would indifferently be able to describe a left-handed {\it neutrino} (right-handed {\it antineutrino}) as well as a left-handed {\it antineutrino} (right-handed {\it neutrino}): \begin{equation} \frac{1}{2}(1 \mp \gamma^5)\psi_{\rm M} = \frac{1}{2}(1 \mp \gamma^5)\psi_{\rm M}^c. \label{2.2bis} \end{equation} \noindent In this regard, it is worth pointing out that $\psi_{\rm M}$, also expressible {\it in the form} \begin{equation} \psi_{\rm M}=\frac{1}{\sqrt{2}}\left[\frac{1}{2}(1 - \gamma^5)\psi \!+\! \frac{1}{2}(1 + \gamma^5)\psi^c\right] + \frac{1}{\sqrt{2}}\left[\frac{1}{2}(1 + \gamma^5)\psi \!+\! \frac{1}{2}(1 - \gamma^5)\psi^c\right], \label{2.1ter} \end{equation} \noindent can by no means be assigned any special sort of `handedness' marking it as {\it either} an `active' {\it or} a `sterile' field. As shown by (\ref{2.1ter}), one strictly has that `active' and `sterile' contributions are always {\it equally present} in $\psi_{\rm M}$\,! This, indeed, implies that a neutrino just described by a field like $\psi_{\rm M}$ would not be compatible with the Standard Model (SM) $[{\ref{Glashow1961} -\ref{Salam1968}}]$, as it could give only {\it half} of the required contribution to the square modulus of the matrix element. The equation obeyed by $\psi_{\rm M}$ (still the Dirac one) may be derived, as usual, from a free spin-$\frac{1}{2}$ quantum field Lagrangian with a mass term proportional to \begin{equation} \bar {\psi}_{\rm M}\psi_{\rm M}, \label{2.1quater} \end{equation} \noindent where $\bar {\psi}_{\rm M} = \psi_{\rm M}^\dagger\gamma^0$. 
This term looks just like the one for a standard Dirac particle; so, it tells us nothing about the actual (self-conjugate) character of $\psi_{\rm M}$, which can only be inferred from definition (\ref{2.1}). The current formal way of introducing a `self-conjugate' neutrino is different from Majorana's way $-$ for a general review, see e.g. Ref.~$[{\ref{Bilenky1987}}]$ or $[{\ref{Giunti2007}}]$ $-$ and is essentially based on a `reformulation' of the Majorana neutrino theory in the light of parity-violating phenomenology $[{\ref{McLennan1957}},{\ref{Case1957}}]$. It leads to a neutrino type really compatible with the SM, being inspired by the idea of regarding a {\it massive} neutrino as merely an extension of a {\it massless} one. The new approach, in terms of basic {\it chiral} neutrino fields, relies just upon one peculiar requirement naturally fulfilled by the original Majorana field $\psi_{\rm M}$: its being made up of two chiral components, $\frac{1}{2}(1 \mp \gamma^5)\psi_{\rm M}$, that are subject to the {\it mutual link} \begin{equation} \frac{1}{2}(1 \pm \gamma^5)\psi_{\rm M} = U_C \left[\frac{1}{2}(1 \mp \gamma^5)\psi_{\rm M}\right]^{\dagger {\rm T}} \label{2.2ter} \end{equation} \noindent as a result of the condition $\psi_{\rm M}=\psi_{\rm M}^c$ (recall that $U_C \gamma^{5\dagger {\rm T}}=-\gamma^5 U_C$ and $U_C^{\dagger {\rm T}} = U_C^{-1}$). For a better insight into this point, let us start off by considering the purely left-handed (i.e. negative-chirality) neutrinos and purely right-handed (i.e. positive-chirality) antineutrinos as known from experiment. As long as they are assumed to be massless, the question of the existence of opposite-chirality `complements' of their own fields may somehow be ignored. This, strictly speaking, can no longer be the case in the presence of (not exactly zero) neutrino masses.
If one is in particular thinking of real neutrinos and antineutrinos as {\it standard} massive elementary fermions and antifermions, one should be able to supply their (originally massless) Lagrangians $-$ by adding e.g. suitable Higgs couplings $-$ with mass terms proportional to \begin{equation} {\bar {\psi}_{\rm L}}\psi_{\rm R} + {\bar {\psi}_{\rm R}}\psi_{\rm L} = {\bar \psi}\psi \label{2.2} \end{equation} \noindent and \begin{equation} {\bar {\psi}^c_{\rm L}}\psi^c_{\rm R} + {\bar {\psi}^c_{\rm R}}\psi^c_{\rm L} = {\bar {\psi}^c}\psi^c, \label{2.3} \end{equation} \noindent respectively (${\bar {\psi}_{\rm L}}= \psi^\dagger_{\rm L}\gamma^0$, and so on), where \begin{equation} \psi_{\rm L} \equiv \frac{1}{2}(1 - \gamma^5)\psi, \,\,\, \psi_{\rm R} \equiv \frac{1}{2}(1 + \gamma^5)\psi \label{2.4} \end{equation} \noindent and \begin{equation} \psi^c_{\rm L} \equiv \frac{1}{2}(1 - \gamma^5)\psi^c, \,\,\, \psi^c_{\rm R} \equiv \frac{1}{2}(1 + \gamma^5)\psi^c. \label{2.5} \end{equation} \noindent For clarity's sake, it is worth noting that here symbols $\psi_{\rm L,R}$ and $\psi^c_{\rm L,R}$ are used {\it quite symmetrically} to denote the left- and right-handed chiral components of $\psi$ and those of $\psi^c$; so, one in particular has $\psi^c_{\rm L,R} = (\psi^c)_{\rm L,R}$\,, and {\it not} $\psi^c_{\rm L,R} = (\psi^c)_{\rm R,L}$ (as often encountered in the literature). There appears to be, on the other hand, an alternative formal way of constructing a congruous mass term for a neutrino; it consists in {\it directly mixing} the two (left-handed) neutrino and (right-handed) antineutrino fields themselves $[{\ref{Jehle1949},\ref{Serpe1949}}]$. 
In this way, one can get, for instance, a scalar of the type \begin{equation} {\bar {\psi}_{\rm L}}\psi^c_{\rm R} + {\bar {\psi}^c_{\rm R}}\psi_{\rm L} \equiv {\bar {\psi}'}\psi', \label{2.6} \end{equation} \noindent with the new (wholly `active') field \begin{equation} \psi'(x) = \psi_{\rm L}(x) + \psi^c_{\rm R}(x) \label{2.25} \end{equation} \noindent replacing the original field $\psi(x) = \psi_{\rm L}(x) + \psi_{\rm R}(x)$. Note that (\ref{2.6}) is different from (\ref{2.2}) or (\ref{2.3}) {\it provided} $\psi^c_{\rm R} \not= \psi_{\rm R}\,,\, \psi_{\rm L} \not=\psi^c_{\rm L}$. Such an approach to a massive neutrino does not necessarily need a complementary right-handed neutrino field, which could only enter into another (independent) mass term proportional to \begin{equation} {\bar {\psi}_{\rm R}}\psi^c_{\rm L} + {\bar {\psi}^c}_{\rm L}\psi_{\rm R}. \label{2.7} \end{equation} \noindent Of course, it is worth similarly stressing that this {\it extra} (wholly `sterile') massive neutrino type could by no means be conjectured if it were $\psi^c_{\rm L} =\psi_{\rm L}\,,\,\psi_{\rm R}=\psi^c_{\rm R}$. From (\ref{2.1ter}), recalling (\ref{2.4}) and (\ref{2.5}), it is therefore evident, in particular, that a nontrivial definition of the new field variable given by (\ref{2.25}) (such that $\psi^c_{\rm R} \not= \psi_{\rm R},\, \psi_{\rm L}\not=\psi^c_{\rm L}$) does indeed {\it prevent} it from being formally mistaken for a $\psi_{\rm M}$ field variable.
Yet, since \begin{equation} U_C \psi^{\dagger {\rm T}}_{\rm L}(x) = \psi^c_{\rm R}(x), \;\;\; U_C \psi^{c\dagger {\rm T}}_{\rm R}(x) = \psi_{\rm L}(x), \label{2.7bis} \end{equation} \noindent one actually gets, in full analogy with (\ref{2.2ter}), \begin{equation} \psi'_{\rm R}(x) = U_C \psi'^{\dagger {\rm T}}_{\rm L}(x), \;\;\; \psi'_{\rm L}(x) = U_C \psi'^{\dagger {\rm T}}_{\rm R}(x), \label{2.8} \end{equation} \noindent with $\psi'_{\rm R}(x) = \frac{1}{2}(1 + \gamma^5)\psi'(x) = \psi^c_{\rm R}(x)$ and $\psi'_{\rm L}(x)=\frac{1}{2} (1 - \gamma^5)\psi'(x) = \psi_{\rm L}(x)$. Hence it is commonly argued that Eq.~(\ref{2.25}) is just defining a {\it self-conjugate} neutrino field variable, associated with a new kind of mass term $-$ proportional to (\ref{2.6}) $-$ which is conventionally known as a {\it `Majorana mass' term}. Such a conclusion relies on the fact that, by use of either (\ref{2.7bis}) or (\ref{2.8}), one globally obtains \begin{equation} \psi'(x) = U_C \psi'^{\dagger {\rm T}}(x). \label{2.8ter} \end{equation} Of course, in interpreting (\ref{2.8ter}) as really a {\it sufficient} (besides necessary) condition to state that $\psi'(x)$ is self-conjugate, one is implicitly taking for granted that $U_C \psi'^{\dagger {\rm T}}(x)$ {\it does correspond} to the `charge conjugate' of $\psi'(x)$, or that $U_C \psi^{\dagger {\rm T}}_{\rm L}(x)=U_C \psi'^{\dagger {\rm T}}_{\rm L}(x)$ {\it does correspond} to the `charge conjugate' of $\psi_{\rm L}(x)=\psi'_{\rm L}(x)$. If so, one should then have \begin{equation} \psi'^c(x) \equiv C \psi'(x) C^{-1}= U_C \psi'^{\dagger {\rm T}}(x), \label{2.9'bis} \end{equation} \noindent with the Majorana condition $\psi'(x)=\psi'^c(x)$ being automatically fulfilled. At first sight, since Eq.~(\ref{2.9'bis}) is the exact analogue of Eq.~(\ref{2.9bis}), there seem to be no reasons for questioning it. Despite this, consider e.g.
the two Weyl equations, into which the Dirac equation can be split up on going over to the zero-mass limit. As is well-known $[{\ref{Esposito2013}}]$, the solutions of one Weyl equation amount to the couple of (massless) Dirac solutions \begin{equation} \psi_{\rm L}(x)\,,\,\psi_{\rm R}^c(x) = U_C \psi^{\dagger {\rm T}}_{\rm L}(x), \label{2.10} \end{equation} \noindent whereas those of the other Weyl equation amount to the remaining couple of (massless) Dirac solutions \begin{equation} \psi_{\rm R}(x)\,,\,\psi_{\rm L}^c(x) = U_C \psi^{\dagger {\rm T}}_{\rm R}(x). \label{2.11} \end{equation} \noindent Referring merely to the positive-energy contributions in the field expansions, we may in particular associate with the left-handed solutions, $\psi_{\rm L}$ and $\psi_{\rm L}^c$, a {\it negative helicity} and with the right-handed ones, $\psi_{\rm R}$ and $\psi_{\rm R}^c$, a {\it positive helicity}. Parity violation clearly occurs whenever (\ref{2.10}) and (\ref{2.11}) enter {\it asymmetrically}, and it becomes maximal just when only {\it one} couple of solutions is really involved. In the same way, since $C$ does {\it not} change helicities, $C$ violation is also expected to occur, maximally so when, for example, only the couple (\ref{2.10}) appears to be available. Nevertheless, if we rewrite (\ref{2.10}) as \begin{equation} \psi_{\rm L}(x)\,,\,\psi_{\rm R}^c(x) \equiv C\psi_{\rm L}(x)C^{-1}, \label{2.10bis} \end{equation} \noindent and (\ref{2.11}), accordingly, as \begin{equation} \psi_{\rm R}(x)\,,\,\psi_{\rm L}^c(x) \equiv C\psi_{\rm R}(x)C^{-1}, \label{2.11bis} \end{equation} \noindent then, even on admitting that solutions (\ref{2.11bis}) are suppressed, we {\it cannot} truly say that $C$ is violated, and we are in fact faced with a {\it helicity inverting} charge-conjugation operation!
On the contrary, if we choose to interpret (\ref{2.10}) as \begin{equation} \psi_{\rm L}(x)\,,\,\psi_{\rm R}^c(x) \equiv C\psi_{\rm R}(x)C^{-1}, \label{2.10ter} \end{equation} \noindent and (\ref{2.11}), accordingly, as \begin{equation} \psi_{\rm R}(x)\,,\,\psi_{\rm L}^c(x) \equiv C\psi_{\rm L}(x)C^{-1}, \label{2.11ter} \end{equation} \noindent we immediately see that any asymmetry occurring between (\ref{2.10ter}) and (\ref{2.11ter}) does {\it really} imply $C$ violation, with $C$ now being so defined as to {\it really} leave helicity unchanged. In the light of these remarks on how to get an appropriate $C$ definition (not affecting helicity) for zero-mass spin-$\frac{1}{2}$ particles, it appears quite reasonable to try to check more carefully whether Eq.~(\ref{2.9'bis}) is really consistent or not in the general framework of standard QFT. For this purpose, in order to form a clear idea on how to proceed, it may be useful to look first at Eq.~(\ref{2.9bis}). As is well-known, primarily putting $\psi^c(x) \equiv C \psi(x) C^{-1}$ does indeed mean making reference to the {\it basic} definition of charge conjugation $C$, just coinciding with the {\it fundamental representation} of $C$ (in the fermion--antifermion Fock space). Take e.g. the standard normal mode expansion of $\psi(x)$ in terms of single `particle' annihilation operators $a^{(h)}(\bf p)$ and `antiparticle' creation operators $b^{\dagger(h)}(\bf p)$ obeying the usual anticommutation rules and being relevant to simultaneous eigenstates of momentum $\bf p$ and helicity $h$. It looks like \begin{equation} \psi(x) = \int \frac{d^3{\bf p}}{(2\pi)^3 2p^0} \sum_{h} \left[ a^{(h)}({\bf p}) u^{(h)}(p) e^{-ip\cdot x} + b^{\dagger(h)}({\bf p}) v^{(h)}(p) e^{ip\cdot x} \right] \label{2.27} \end{equation} \noindent ($\hbar=c=1$), where $u^{(h)}(p)$ and $v^{(h)}(p)$ are four-spinor coefficients depending on four-momentum $p \equiv (p^0,\bf p)$ $(p^0>0)$. 
We thus have, according to standard QFT, that $C \psi(x) C^{-1}$ is the net field obtained from (\ref{2.27}) {\it as a result of the transformation} $[{\ref{Merzbacher1970}}]$ \begin{equation} C a^{(h)}({\bf p}) C^{-1} = b^{(h)}({\bf p}), \;\;\; C b^{\dagger(h)}({\bf p}) C^{-1} = a^{\dagger(h)}({\bf p}). \label{2.28} \end{equation} \noindent We also know that the subsequent equality, $C\psi(x)C^{-1} =U_C\psi^{\dagger {\rm T}}(x)$, does instead tell us how the field $\psi^c(x) \equiv C \psi(x) C^{-1}$ as defined by means of (\ref{2.28}) can {\it equivalently} be obtained via a suitable mapping, $\psi(x) \longrightarrow \psi^c(x) = U_C\psi^{\dagger {\rm T}}(x)$, in the four-spinor space (such a mapping is indeed allowed by the fact that $\psi^{\dagger {\rm T}}$ and $\psi^c$ share all four degrees of freedom corresponding to the actual `particle' and `antiparticle' helicity eigenstates). Of course, Eq.~(\ref{2.9bis}) applies as well to a field, $\psi_{\rm M}(x)$, having the manifestly self-conjugate form (\ref{2.1}), whose expansion can be obtained from (\ref{2.27}) by making the substitutions $a^{(h)}({\bf p}) \rightarrow a^{(h)}_{\rm M}({\bf p}) \;,\;b^{\dagger(h)}({\bf p}) \rightarrow a^{\dagger(h)}_{\rm M} ({\bf p})$, where $a^{(h)}_{\rm M}({\bf p}) = \frac{1}{\sqrt2}\left[a^{(h)}({\bf p}) + b^{(h)}({\bf p})\right]$ and $a^{\dagger(h)}_{\rm M}({\bf p}) = \frac{1}{\sqrt2} \left[b^{\dagger(h)}({\bf p}) + a^{\dagger(h)}({\bf p})\right]$. Let us pass now to analyse Eq.~(\ref{2.9'bis}), addressed to a field, $\psi'(x)$, being such that \begin{equation} \psi'(x) = \frac{1}{2}(1 - \gamma^5)\psi(x) + \frac{1}{2}(1 + \gamma^5)\psi^c(x), \label{2.25bis} \end{equation} \noindent with $\psi(x)\not=\psi^c(x)$.
We shall have, quite similarly, that writing $\psi'^c(x) \equiv C \psi'(x) C^{-1}$ does mean {\it defining} $\psi'^c(x)$ {\it as just that field which is obtained from} $\psi'(x)$ {\it by merely turning every annihilation and creation operator into their respective charge-conjugates}. As such operators can only be found within the $\psi(x)$ and $\psi^c(x)$ expansions, we may also write, due to the {\it linearity} property of $C$ in the Fock space, \begin{equation} \psi'^c(x) \equiv C \psi'(x) C^{-1} = \frac{1}{2}(1 - \gamma^5)C \psi(x) C^{-1} + \frac{1}{2}(1 + \gamma^5) C \psi^c(x) C^{-1}. \label{2.25ter} \end{equation} \noindent The actual check to be given to Eq.~(\ref{2.9'bis}) should therefore concern $C \psi'(x) C^{-1} = U_C \psi'^{\dagger {\rm T}}(x)$. In other words: Is the mapping $\psi'(x) \longrightarrow \psi'^c(x) = U_C \psi'^{\dagger {\rm T}}(x)$ consistently providing, as claimed, an {\it equivalent} way to get $\psi'^c(x) \equiv C \psi'(x) C^{-1}$ from $\psi'(x)$? To answer this question, one need do nothing else than {\it directly apply prescription} (\ref{2.25ter}). Doing so, one obtains \begin{equation} \psi'^c(x) = \frac{1}{2} (1 - \gamma^5) \psi^c(x) + \frac{1}{2} (1 + \gamma^5) \psi(x), \label{2.12bis} \end{equation} \noindent and hence, by use again of the compact notations (\ref{2.4}) and (\ref{2.5}), \begin{equation} \psi'^c(x) = \psi^c_{\rm L}(x) + \psi_{\rm R}(x) \not= \psi'(x) = \psi_{\rm L}(x) + \psi^c_{\rm R}(x). \label{2.12} \end{equation} \noindent The key to (\ref{2.12}) is just the {\it linear} behavior of the fundamental representation of $C$, as particularly regards its action inside the {\it single} Dirac-field chiral components $\psi_{\rm L}$ and $\psi^c_{\rm R}$. Such an outcome, if carefully examined, should not seem so surprising, especially in view of what we already know from the $V-A$ theory $[{\ref{Feynman1958}-\ref{Sakurai1958}}]$.
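The operator bookkeeping behind Eqs.~(\ref{2.25ter})--(\ref{2.12}) can be mimicked with a toy symbolic sketch (purely illustrative; the naming is ours): a field is represented by its (chirality, operator-content) pairs, the fundamental $C$ touches only the operator content, while the mapping $U_C(\cdot)^{\dagger {\rm T}}$ flips chirality as well.

```python
# Toy model of Eqs. (2.25ter)-(2.12): a field is a set of
# (chirality, operator-content) pairs.
psi_prime = {("L", "psi"), ("R", "psi_c")}   # psi' = psi_L + (psi^c)_R

SWAP = {"psi": "psi_c", "psi_c": "psi"}
FLIP = {"L": "R", "R": "L"}

def C(field):
    """Fundamental C: conjugate the operator content, leave chirality alone."""
    return {(chir, SWAP[op]) for chir, op in field}

def UC_map(field):
    """Spinor-space mapping U_C (.)^{dagger T}: flips chirality as well,
    cf. Eq. (2.7bis)."""
    return {(FLIP[chir], SWAP[op]) for chir, op in field}

print(sorted(C(psi_prime)))       # psi^c_L + psi_R, i.e. Eq. (2.12)
print(sorted(UC_map(psi_prime)))  # psi' itself, i.e. Eq. (2.8ter)
assert C(psi_prime) != UC_map(psi_prime)   # hence the inequality of Eq. (2.29)
assert UC_map(psi_prime) == psi_prime
```

The two assertions restate the central claim: the spinor-space mapping leaves $\psi'$ fixed, while the fundamental $C$ carries it into its sterile counterpart.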
Looking at Eq.~(\ref{2.25bis}), we can easily realize that purely replacing every annihilation and creation operator with their own charge-conjugates does amount to globally making the {\it net} interchange $\psi \rightleftharpoons \psi^c$ relative to $\frac{1}{2}(1 \mp \gamma^5)$; and this, indeed, fully agrees with the general fact that, if one takes ${\bar \psi}_2\gamma^\mu\psi_1$ and ${\bar \psi}_2\gamma^\mu(-\gamma^5)\psi_1$ as Dirac-field bilinear {\it covariants} (as in the $V-A$ theory), then one gets ${\bar \psi}_2\gamma^\mu(1 - \gamma^5) \psi_1 \stackrel {C}{\rightarrow}{\bar {\psi}^c}_2 \gamma^\mu(1 - \gamma^5)\psi^c_1$ $[\ref{Sakurai1964}]$. A comparison of Eqs.~(\ref{2.8ter}) and (\ref{2.12}) gives, despite appearances, \begin{equation} C \psi'(x) C^{-1} \not= U_C \psi'^{\dagger {\rm T}}(x), \label{2.29} \end{equation} \noindent and then \begin{equation} C \psi_{\rm L}(x) C^{-1} \not= U_C \psi^{\dagger {\rm T}}_{\rm L}(x); \label{2.12ter} \end{equation} \noindent which ultimately means that standard QFT does {\it not} really allow Eq.~(\ref{2.9bis}) to be extended to $\psi'(x)$. By the way, the {\it explicit} `new' formal representations for $C \psi_{\rm L}(x) C^{-1}$ and $C \psi'(x) C^{-1}$ will be discussed in Secs.~3 and 4: see Eqs.~(\ref{2.16}) and (\ref{3.10}), respectively. It is obvious that, to avoid the inequalities (\ref{2.12}), (\ref{2.29}), and (\ref{2.12ter}), one could just assume $\psi=\psi^c=\psi'=\psi'^c$, but this would also {\it cancel} the distinction between an `active' ($\psi'$) and a `sterile' ($\psi'^c$) fermion field, as well as the distinction itself between a `Majorana' and a `Dirac' mass term! 
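Explicitly, such an assumption collapses the inequality (\ref{2.12}) at once:
\[
\psi = \psi^c \;\;\Longrightarrow\;\; \psi'^c = \psi^c_{\rm L} + \psi_{\rm R} = \psi_{\rm L} + \psi^c_{\rm R} = \psi',
\]
so that (\ref{2.29}) and (\ref{2.12ter}) are removed as well $-$ but only at the price of identifying all four fields.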
So, if the primary $C$-definition (\ref{2.25ter}) is truly relied upon, one is indeed led to conclude that {\it an `active' fermion field of the type} $\psi_{\rm L}(x) + (\psi^c)_{\rm R}(x)$ {\it and its `sterile' counterpart} $(\psi^c)_{\rm L}(x) + \psi_{\rm R}(x)$ {\it are themselves a pair of mutually `charge conjugate' (rather than individually self-conjugate) fields}. It will be shown in Sec.~6 that the apparent (conventional) {\it full} neutrality of such fields is to be actually interpreted as a mere `neutrality' restricted to {\it scalar-type charges}. With the help of (\ref{2.12}), it can be checked that \begin{equation} \psi'(x) + \psi'^c(x) = \psi(x) + \psi^c(x). \label{2.33} \end{equation} \noindent This formally enables one to define the genuine Majorana field (\ref{2.1}) {\it also as} \begin{equation} \psi_{\rm M}(x) = \frac{1}{\sqrt{2}}[\psi'(x) + \psi'^c(x)], \label{2.34} \end{equation} \noindent where $\psi'$ and $\psi'^c$ are exactly identical to the field components within square brackets in (\ref{2.1ter}). The point at issue can also be approached in reverse. We may begin, instead, by {\it assuming} field $\psi'$ to be truly self-conjugate, so that we are allowed to put $\psi'=\psi'^c$ ($\propto \psi_{\rm M}$). Since either $\psi'$ or $\psi'^c$ is to be still meant as in Eq.~(\ref{2.12}), we shall then have as well $\psi=\psi^c$ and, after all, $\psi'=\psi$. This automatically eliminates the inequalities in Eqs.~(\ref{2.12}) and (\ref{2.29}), but the chargeless fermion model obtained is {\it not} the same as the `Majorana mass' conventional one. First, it is evident that setting $\psi'=\psi'^c$ (in place of $\psi'\not=\psi'^c$) does cause fields $\psi'$ and $\psi'^c$ to {\it lose} their original `active' and `sterile' distinctive characters.
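(Incidentally, the check of Eq.~(\ref{2.33}) invoked above is immediate: by (\ref{2.25bis}) and (\ref{2.12}),
\[
\psi' + \psi'^c = \left(\psi_{\rm L} + \psi^c_{\rm R}\right) + \left(\psi^c_{\rm L} + \psi_{\rm R}\right) = \left(\psi_{\rm L} + \psi_{\rm R}\right) + \left(\psi^c_{\rm L} + \psi^c_{\rm R}\right) = \psi + \psi^c,
\]
on simply regrouping the chiral components.)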
Hence, in line with Eq.~(\ref{2.1ter}) or Eq.~(\ref{2.34}), but {\it not} in line with what is usually claimed, the general conclusion is to be drawn that {\it there cannot exist two different $-$ `active' and `sterile' $-$ kinds of genuine (truly self-conjugate) Majorana fermions}. Moreover, we have to consider that the net equality $\psi'=\psi$ does unavoidably make a `Majorana mass' term (proportional to ${\bar {\psi}'}\psi'$) {\it indistinguishable} from a `Dirac mass' term (proportional to ${\bar {\psi}}\psi$). So, it must be concluded as well that {\it a strictly neutral spin-$\frac{1}{2}$ fermion which is supposed to bear a `Majorana mass' can likewise be said to bear a `Dirac mass' (or vice versa)}. In other words, there would be simply a {\it unified} mass kind for such a fermion, which may equally be taken for a `Majorana' as for a `Dirac' mass kind\,! This fully corresponds to the fact that we cannot manage to build more than {\it one} mass term from chiral fields like those in (\ref{2.2bis}). \section{On the truly orthodox way of applying particle--antiparticle conjugation to single chiral fields within standard QFT} Owing to its subtlety, the whole question raised above deserves an even more detailed analysis. As shown by (\ref{2.12ter}), and as already implied in the opening discussion on Weyl solutions, the core of the problem is just how to define the `charge conjugates' of the {\it unpaired} chiral projections, $\psi_{\rm L}$ and $\psi^c_{\rm R}$, entering into (\ref{2.25}). To start with, it is worth emphasizing that the standard Dirac-field `prescription' (\ref{2.9bis}) does {\it not} automatically extend to the individual chiral components of $\psi$.
In principle, we may write \begin{equation} U_ C\psi^{\dagger {\rm T}} = U_C \left[\frac{1}{2}(1 - \gamma^5)\psi\right]^{\dagger {\rm T}} + U_C \left[\frac{1}{2}(1 + \gamma^5)\psi\right]^{\dagger {\rm T}} \label{2.18} \end{equation} \noindent {\it as well as} \begin{equation} U_C\psi^{\dagger {\rm T}} = \frac{1}{2}(1 - \gamma^5)U_C\psi^{\dagger {\rm T}} + \frac{1}{2}(1 + \gamma^5)U_C\psi^{\dagger {\rm T}}, \label{2.18bis} \end{equation} \noindent where $\frac{1}{2}(1 \mp \gamma^5)U_C\psi^{\dagger {\rm T}} = U_C\left[\frac{1}{2}(1 \pm \gamma^{5})\psi\right]^{\dagger {\rm T}}$. From a strict formal viewpoint, we are thus faced with two possible alternative ways of defining the `charge conjugates' of $\psi_{\rm L}$ and $\psi_{\rm R}$: we may put {\it either} \begin{equation} \left\{ \! \begin{array}{lcr} C \psi_{\rm L}(x) C^{-1} \equiv U_C\psi_{\rm L}^{\dagger {\rm T}}(x) = (\psi^c)_{\rm R}(x) \\[.1in] C \psi_{\rm R}(x) C^{-1} \equiv U_C\psi_{\rm R}^{\dagger {\rm T}}(x) = (\psi^c)_{\rm L}(x) \end{array} \right. \label{2.21} \end{equation} \noindent {\it or} \begin{equation} \left\{ \! \begin{array}{lcr} C \psi_{\rm L}(x) C^{-1} \equiv U_C\psi_{\rm R}^{\dagger {\rm T}}(x) = (\psi^c)_{\rm L}(x) \\[.1in] C \psi_{\rm R}(x) C^{-1} \equiv U_C\psi_{\rm L}^{\dagger {\rm T}}(x) = (\psi^c)_{\rm R}(x) \end{array} \right. \label{2.22} \end{equation} \noindent and both these assumptions lead to {\it the same overall result} \begin{equation} \psi^c = (\psi_{\rm L} + \psi_{\rm R})^c = U_C (\psi^{\dagger {\rm T}}_{\rm L} + \psi^{\dagger {\rm T}}_{\rm R}) = U_C (\psi^{\dagger {\rm T}}_{\rm R} + \psi^{\dagger {\rm T}}_{\rm L}) = U_C \psi^{\dagger {\rm T}}. \label{2.23} \end{equation} \noindent Actually, that there may be some `freedom' in defining a $C$ operation is not a novelty in the literature: see e.g. 
the quite similar reasoning made in Ref.~$[{\ref{Ziino2006}}]$, or that made in Ref.~$[{\ref{Dvoeglazov1997}}]$ by use of the `chiral helicity' special construct, and see as well Ref.~$[{\ref{Dvornikov2012}}]$, where a new interpretation of the Majorana condition is proposed. In comparison with Refs.~$[{\ref{Dvoeglazov1997}}]$ and $[{\ref{Dvornikov2012}}]$, the major distinctive feature can here be drawn from Eq.~(\ref{2.23}), which shows that the helicity of a massive spin-$\frac{1}{2}$ fermion is always preserved by $C$ no matter whether (\ref{2.21}) or (\ref{2.22}) is being adopted. Also, note that both in (\ref{2.21}) and in (\ref{2.22}) the matrix $U_C$ is regularly connecting field components with {\it opposite} chiralities. It is worth remarking, however, that in the zero-mass limit a `$C$ definition' like (\ref{2.21}) leads to the couples of solutions (\ref{2.10bis}) and (\ref{2.11bis}), whereas a `$C$ definition' like (\ref{2.22}) leads to the {\it alternative} couples of solutions (\ref{2.10ter}) and (\ref{2.11ter}). To tell which of the two options (\ref{2.21}) and (\ref{2.22}) is to be taken as the {\it truly orthodox} one according to standard QFT, it is enough to consider that only (\ref{2.22}) is strictly consistent with the {\it genuine} (primary) $C$ definition (\ref{2.28}). The remaining option, even though it is just as well allowed by (\ref{2.23}) and may all the same reproduce Eq.~(\ref{2.9bis}), would rather correspond to a $C$ operation whose fundamental representation no longer appears to be {\it fully} defined inside the (conventional) Fock space, as it would now consist of (\ref{2.28}) supplemented by one `spurious' basic prescription, $\frac{1}{2} (1 \mp \gamma^5) \rightarrow \frac{1}{2}(1 \pm \gamma^5)$, {\it outside} the Fock space.
To this it should be added that the choice of either (\ref{2.21}) or (\ref{2.22}) $-$ apparently irrelevant if we neglect (\ref{2.28}) and we restrict ourselves to Eq.~(\ref{2.9bis}) $-$ becomes by no means irrelevant if the validity of Eq.~(\ref{2.9'bis}) is also invoked, with $\psi'(x)$ being defined as in (\ref{2.25}). The point is that Eq.~(\ref{2.9'bis}) may really be claimed to hold {\it only if} the `wrong' option (\ref{2.21}) $-$ just defining $\psi_{\rm L}$ and $(\psi^c)_{\rm R}$ as the `charge conjugates' of each other $-$ is adopted. So, after all, we may even choose (\ref{2.21}) (as usually done) to extend Eq.~(\ref{2.9bis}) to a field like $\psi'(x)$, but in this way we are not keeping to the {\it strict} QFT prescription (\ref{2.28}) and we are actually introducing a {\it new} `charge conjugation' operation, say, $C'$, which is such that $C' \psi'(x) C'^{-1}= U_C \psi'^{\dagger {\rm T}}$ and $C'\psi_{\rm L,R}(x)C'^{-1} = U_C\psi_{\rm L,R}^{\dagger {\rm T}}(x)$, and which should not be confused with the one, $C$ itself, rigorously acting as `particle--antiparticle conjugation' according to (\ref{2.28}). It will be shown in Sec.~6 that $C'$ (normally mistaken for $C$) does amount, more precisely, to a mere `scalar-charge conjugation' operation. The difference between two such ways of defining a `charge conjugation' operation is clearly brought to its extreme consequences (which affect helicities themselves) on passing to the zero-mass case. This has already been mentioned in the previous section; yet, for completeness' sake, it may be worth dealing with that case in more detail, too. The zero-mass limit leads to a special situation in which the matrix $U_C$ can `manifestly' be seen to connect the two chiralities $[{\ref{Itzykson1985}}]$. This is due to the well-known fact that positive- and negative-energy massless eigenspinors with discordant (concordant) helicities are themselves {\it chiral} spinors with concordant (discordant) chiralities.
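In explicit terms (with the conventions implicit in the expansions quoted below), massless positive-energy eigenspinors obey $\gamma^5 u^{(\pm)}(p) = \pm\, u^{(\pm)}(p)$, while negative-energy ones obey $\gamma^5 v^{(\pm)}(p) = \mp\, v^{(\pm)}(p)$; so, for instance, $u^{(-)}(p)$ and $v^{(+)}(p)$ $-$ carrying discordant helicities $-$ share one and the same (negative) chirality:
\[
\frac{1}{2}(1-\gamma^5)\, u^{(-)}(p) = u^{(-)}(p), \qquad \frac{1}{2}(1-\gamma^5)\, v^{(+)}(p) = v^{(+)}(p).
\]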
So, if $\psi_{\rm L}(x)$ and $\psi_{\rm R}(x)$ are now taken as zero-mass fields, their respective normal mode expansions will become \begin{equation} \psi_{\rm L}(x) = \int \frac{d^3{\bf p}}{(2\pi)^3 2p^0} \left[ a^{(-)}({\bf p}) u^{(-)}(p) e^{-ip\cdot x} + b^{\dagger(+)}({\bf p}) v^{(+)}(p) e^{ip\cdot x} \right] \label{2.30} \end{equation} \noindent and \begin{equation} \psi_{\rm R}(x) = \int \frac{d^3{\bf p}}{(2\pi)^3 2p^0} \left[ a^{(+)}({\bf p}) u^{(+)}(p) e^{-ip\cdot x} + b^{\dagger(-)}({\bf p}) v^{(-)}(p) e^{ip\cdot x} \right], \label{2.31} \end{equation} \noindent where the superscripts $(\mp)$ still denote the eigenvalues of the helicity variable $h$. Hence, recalling that particle--antiparticle conjugation $C$ leaves (by definition) helicity {\it unvaried} $[{\ref{Merzbacher1970}}]$, we find e.g. that the `charge conjugate' of the (massless) field $\psi_{\rm L}(x)$ is to be strictly defined as \begin{equation} C \psi_{\rm L}(x) C^{-1} \equiv U_C \psi^{\dagger {\rm T}}_{\rm R}(x) = (\psi^c)_{\rm L}(x), \label{2.13} \end{equation} \noindent with $U_C\psi^{\dagger {\rm T}}_{\rm R}(x)$ including every required positive-energy spinor $U_Cv^{(-)}(p)$, of {\it negative} helicity, as well as every required negative-energy spinor $U_Cu^{(+)}(p)$, of {\it positive} helicity, and {\it not} as \begin{equation} C \psi_{\rm L}(x) C^{-1} \equiv U_C \psi^{\dagger {\rm T}}_{\rm L}(x) = (\psi^c)_{\rm R}(x), \label{2.14} \end{equation} \noindent with $U_C\psi^{\dagger {\rm T}}_{\rm L}(x)$ containing the corresponding {\it unwanted} spinors having interchanged helicities. 
On the other hand, this fully agrees with what has been inferred in Sec.~2 from comparing (\ref{2.10ter}),(\ref{2.11ter}) with (\ref{2.10bis}),(\ref{2.11bis}): it is definition (\ref{2.13}), and {\it not} definition (\ref{2.14}), that strictly requires also the presence of a (right-handed) field solution $\psi_{\rm R}$ (quite missing in neutrino phenomenology) thus leading us to conclude that $C$ itself is (maximally) {\it violated} by neutrino physics! Of course, we may come to (\ref{2.13}) even without allowing for the specific normal mode expansions (\ref{2.30}) and (\ref{2.31}): it is sufficient simply to take account of the basic $C$-definition (\ref{2.28}) to get \begin{equation} C \psi_{\rm L}(x) C^{-1} = \frac{1}{2} (1 - \gamma^5) C \psi(x) C^{-1} = \frac{1}{2} (1 - \gamma^5)U_C \psi^{\dagger {\rm T}}(x), \label{2.15} \end{equation} \noindent where the former equality (leaving chirality unaffected) is just due to the {\it linear} behavior of $C$ in the Fock space. Hence it also follows, as already implied in (\ref{2.13}), \begin{equation} C \psi_{\rm L}(x) C^{-1} = U_C \left[\frac{1}{2} (1 + \gamma^5) \psi(x)\right]^{\dagger {\rm T}} = U_C \psi_{\rm R}^{\dagger {\rm T}}(x), \label{2.16} \end{equation} \noindent and it may be concluded that {\it a $C$-matrix identical with the conventional one is still available, provided that complex conjugation is supplemented by} `${\rm L} \rightarrow {\rm R}$' {\it exchange}. This, of course, does not affect the `whole' Dirac result $\psi^c=U_C\psi^{\dagger {\rm T}}$ (with $\psi=\psi_{\rm L}+\psi_{\rm R}$) $-$ see Eq.(\ref{2.23}) $-$ and it is just the way to maintain helicity invariance under $C$. 
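The projector interchange underlying the passage from (\ref{2.15}) to (\ref{2.16}) can be verified at the matrix level. Since $\gamma^{5\dagger} = \gamma^5$, one has $\left[\frac{1}{2}(1+\gamma^5)\psi\right]^{\dagger {\rm T}} = \frac{1}{2}(1+\gamma^{5{\rm T}})\,\psi^{\dagger {\rm T}}$, and the property $U_C\,\gamma^{5{\rm T}}\,U_C^{-1} = -\gamma^5$ (holding e.g. in the Dirac representation with the common choice $U_C = i\gamma^2$, where $\gamma^{5{\rm T}}=\gamma^5$ and $\gamma^5\gamma^2 = -\gamma^2\gamma^5$) then gives
\[
U_C\left[\textstyle\frac{1}{2}(1+\gamma^5)\psi\right]^{\dagger {\rm T}} = \frac{1}{2}\left(1 + U_C\,\gamma^{5{\rm T}}\,U_C^{-1}\right)U_C\,\psi^{\dagger {\rm T}} = \frac{1}{2}(1-\gamma^5)\,U_C\,\psi^{\dagger {\rm T}},
\]
which is just the identity $\frac{1}{2}(1 \mp \gamma^5)U_C\psi^{\dagger {\rm T}} = U_C\left[\frac{1}{2}(1 \pm \gamma^{5})\psi\right]^{\dagger {\rm T}}$ already quoted after Eq.~(\ref{2.18bis}).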
Note, on the other hand, that {\it no} $4\times4$ matrix $U'_C$ can further be found being such that $C \psi_{\rm L}(x) C^{-1}=U_C \psi_{\rm R}^{\dagger {\rm T}}(x) =U'_C\psi_{\rm L}^{\dagger {\rm T}}(x)$: to realize it, one may simply consider that positive-energy antifermions (fermions) annihilated (created) by a massless field like $U'_C\psi_{\rm L}^{\dagger {\rm T}}(x)$, with \begin{equation} \psi_{\rm L}^{\dagger {\rm T}}(x) = \int \! \frac{d^3{\bf p}}{(2\pi)^3 2p^0} \left[ b^{(+)}({\bf p}) v^{(+) \dagger {\rm T}}(p) e^{-ip\cdot x} + a^{\dagger(-)}({\bf p}) u^{(-) \dagger {\rm T}}(p) e^{ip\cdot x} \right], \label{2.24} \end{equation} \noindent would rather have {\it inverted} helicities with respect to positive-energy fermions (antifermions) annihilated (created) by $\psi_{\rm L}(x)$. On the grounds of either Eq.~(\ref{2.13}) or Eq.~(\ref{2.15}), it can thus be argued, after all, that even for zero mass there is no {\it net} chirality flip actually induced by particle--antiparticle conjugation (\ref{2.28}) (despite the unquestionable fact that $U_C$ itself is manifestly connecting fields with {\it opposite} chiralities!). \section{A double variety of mutually `charge conjugate' spin-1/2 fermion fields} Take Eq.~(\ref{2.33}), and consider the two (both admissible) formal ways, either (\ref{2.1}) or (\ref{2.34}), of defining a (manifestly self-conjugate) Majorana field $\psi_{\rm M}(x)$ in terms of {\it mutually `charge conjugate'} spin-$\frac{1}{2}$ fermion fields, either $\psi(x), \psi^c(x)$ or $\psi'(x),\psi'^c(x)$. These field pairs (no matter which of them is taken as a mass-eigenfield pair) can be formally put on an {\it equal} footing via the {\it unitary transformation} \begin{equation} \left\{ \begin{array}{lcl} \psi'(x)\!\!&=&\!\!X_{\rm L} \psi(x) + X_{\rm R} \psi^c(x) \\ [0.05in] \psi'^c(x)\!\!&=&\!\!X_{\rm R} \psi(x) + X_{\rm L} \psi^c(x) \end{array} \right. 
\label{3.1} \end{equation} \noindent or the {\it inverse} one \begin{equation} \left\{ \begin{array}{lcl} \psi(x)\!\!&=&\!\!X_{\rm L} \psi'(x) + X_{\rm R} \psi'^c(x) \\ [0.05in] \psi^c(x)\!\!&=&\!\!X_{\rm R} \psi'(x) + X_{\rm L} \psi'^c(x) \end{array} \right. \label{3.2bis} \end{equation} \noindent where \begin{equation} X_{\rm L} \equiv \frac{1}{2}(1 - \gamma^5), \;\;\;\; X_{\rm R} \equiv \frac{1}{2}(1 + \gamma^5) \label{3.2} \end{equation} \noindent (and where, of course, $X_{\rm L}X_{\rm R}=X_{\rm R}X_{\rm L} =0,X_{\rm L,R}^\dagger=X_{\rm L,R},X_{\rm L,R}^2 =X_{\rm L,R},X_{\rm R} + X_{\rm L}=1,X_{\rm R} - X_{\rm L}=\gamma^5$). Transformations (\ref{3.1}) and (\ref{3.2bis}), which can be suitably rewritten in the matrix forms \begin{equation} \pmatrix{\psi'(x)\cr \psi'^c(x)\cr} \!\!=\!\! \pmatrix{X_{\rm L}&X_{\rm R}\cr X_{\rm R}&X_{\rm L}\cr} \pmatrix{\psi(x)\cr \psi^c(x)\cr}, \pmatrix{\psi(x)\cr \psi^c(x)\cr} \!\!=\!\! \pmatrix{X_{\rm L}&X_{\rm R}\cr X_{\rm R}&X_{\rm L}\cr} \pmatrix{\psi'(x)\cr \psi'^c(x)\cr}, \label{3.2ter} \end{equation} \noindent make it possible to introduce a generalized matrix representation for $C$. Such an extension, clearly superfluous on dealing merely with the field pair $\psi(x),\psi^c(x)$, turns out to be strictly needed if the field pair $\psi'(x),\psi'^c(x)$ is also allowed for. This is because the use of Eq.~(\ref{2.16}) $-$ along with $C \psi^c_{\rm R}(x) C^{-1} = U_C \psi_{\rm L}^{c \dagger {\rm T}}(x)$ $-$ can only lead to the {\it trivial} overall result \begin{equation} C \psi'(x) C^{-1} \equiv \psi'^c(x) = U_C \psi'^{c \dagger {\rm T}}(x), \label{3.10} \end{equation} \noindent where $U_C \psi'^{c \dagger {\rm T}}(x)$ is merely an identical way to write $\psi'^c(x)$, and {\it not} an actual prescription to obtain $\psi'^c(x)$ {\it from} $\psi'(x)$\,! 
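In detail, the result (\ref{3.10}) follows at once: using (\ref{2.16}) for the left-chiral piece and $C \psi^c_{\rm R}(x) C^{-1} = U_C \psi_{\rm L}^{c \dagger {\rm T}}(x)$ for the right-chiral one,
\[
C\psi'(x)C^{-1} = C\psi_{\rm L}(x)C^{-1} + C\psi^c_{\rm R}(x)C^{-1} = U_C\left[\psi_{\rm R}(x) + \psi^c_{\rm L}(x)\right]^{\dagger {\rm T}} = U_C\,\psi'^{c\,\dagger {\rm T}}(x),
\]
since $\psi_{\rm R} + \psi^c_{\rm L}$ is precisely $\psi'^c$ as given by (\ref{2.12}).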
So, unlike what happens for $\psi(x) \stackrel{C}{\rightarrow} \psi^c(x)$, with $\psi^c(x)\equiv C\psi(x)C^{-1}=U_C\psi^{\dagger {\rm T}}(x)$, there seems to be {\it no} $4\times4$ matrix $U'_C$ effectively representing $\psi'(x) \stackrel{C}{\rightarrow}\psi'^c(x)$ and being such that $\psi'^c(x)\equiv C\psi'(x)C^{-1}=U'_C\psi'^{\dagger {\rm T}}(x)$ (the problem does not apply, of course, to a genuine Majorana field $\psi_{\rm M}$, since $\psi^c_{\rm M}=\psi_{\rm M}=U_C \psi^{c \dagger {\rm T}}_{\rm M} =U_C \psi^{\dagger {\rm T}}_{\rm M}$). A deep motivation for this lack can be found when the normal mode expansions of $\psi'(x)$ and $\psi'^c(x)$ (taken as mass eigenfields) are derived $-$ see Eqs.~(\ref{2.35}) and (\ref{2.36}) below $-$ and it is realized that (quite differently from the `Dirac mass' case) there are indeed {\it no `particle' or `antiparticle' helicity degrees of freedom at all shared by these expansions}. More precisely, one still has that the degrees of freedom in question are {\it four} in all (as in the `Dirac mass' case) but one also has that only {\it two} of them are included in the $\psi'(x)$ expansion and the other {\it two} in the $\psi'^c(x)$ expansion. It thus follows that $\psi'(x)$, as given by Eq.~(\ref{2.35}), and $\psi'^c(x)$, as given by Eq.~(\ref{2.36}), actually belong to two {\it orthogonal} field spaces; and this clearly makes quite inadmissible any mutual link of the type $\psi'^c(x)=U'_C\psi'^{\dagger {\rm T}}(x)$ (which would instead require {\it one and the same} four-spinor field space for them both!). That said, let us begin by defining the `charge conjugate' of the column matrix \begin{equation} \Psi(x) \equiv \pmatrix{\psi(x)\cr \psi^c(x)\cr}.
\label{3.3ter} \end{equation} \noindent We clearly have \begin{equation} C\Psi(x)C^{-1} \equiv \pmatrix{\psi^c(x)\cr \psi(x)\cr} = \pmatrix{U_C&0\cr 0&U_C\cr} \pmatrix{\psi^{\dagger {\rm T}}(x)\cr \psi^{c \dagger {\rm T}}(x)\cr}, \label{3.3} \end{equation} \noindent where $\pmatrix{U_C&0\cr 0&U_C\cr}$ is an $8\times8$ matrix (still made up, as expected, of $4\times4$ diagonal blocks). To obtain, likewise, the `charge conjugate' of \begin{equation} \Psi'(x) \equiv \pmatrix{\psi'(x)\cr \psi'^c(x)\cr}, \label{3.3quater} \end{equation} \noindent it must be borne in mind that $C$ fundamentally acts {\it just} on the annihilation and creation operators included in the fields, without affecting the transformation matrix in either (\ref{3.1}) or (\ref{3.2bis}). The result is \begin{equation} C\Psi'(x)C^{-1} \equiv \pmatrix{\psi'^c(x)\cr \psi'(x)\cr} =\pmatrix{X_{\rm L}&X_{\rm R}\cr X_{\rm R}&X_{\rm L}\cr} \pmatrix{\psi^c(x)\cr \psi(x)\cr} \label{3.4} \end{equation} \noindent or ultimately, in full agreement with (\ref{3.10}), \begin{equation} C\Psi'(x)C^{-1} \equiv \pmatrix{\psi'^c(x)\cr \psi'(x)\cr} = \pmatrix{0&U_C\cr U_C&0\cr} \pmatrix{\psi'^{\dagger {\rm T}}(x)\cr \psi'^{c \dagger {\rm T}}(x)\cr}, \label{3.10bis} \end{equation} \noindent where we have substituted both (\ref{3.3}) and \begin{equation} \pmatrix{\psi^{\dagger {\rm T}}(x)\cr \psi^{c \dagger {\rm T}}(x)\cr} = \pmatrix{X_{\rm L}^{\dagger {\rm T}}&X_{\rm R}^{\dagger {\rm T}}\cr X_{\rm R}^{\dagger {\rm T}}&X_{\rm L}^{\dagger {\rm T}}\cr} \pmatrix{\psi'^{\dagger {\rm T}}(x)\cr \psi'^{c \dagger {\rm T}}(x)\cr} \label{3.1bis*} \end{equation} \noindent (and we have taken into account that $U_C X_{\rm L,R}^{\dagger {\rm T}} = X_{\rm R,L} U_C$). 
It thus turns out that the {\it transformed} $8\times8$ matrix \begin{equation} \pmatrix{X_{\rm L}&X_{\rm R}\cr X_{\rm R}&X_{\rm L}\cr} \pmatrix{U_C&0\cr 0&U_C\cr} \pmatrix{X_{\rm L}^{\dagger {\rm T}}&X_{\rm R}^{\dagger {\rm T}}\cr X_{\rm R}^{\dagger {\rm T}}&X_{\rm L}^{\dagger {\rm T}}\cr} =\pmatrix{0&U_C\cr U_C&0\cr} \label{3.10ter} \end{equation} \noindent is {\it no more} trivially made up of diagonal blocks. This generalized matrix representation of $C$, given by Eqs.~(\ref{3.3}) and (\ref{3.10bis}), has been built {\it without necessarily specifying} which of the two field pairs $\psi(x),\psi^c(x)$ and $\psi'(x),\psi'^c(x)$ should be also a mass-eigenfield pair: what has been only assumed is the validity of the formal link (\ref{2.9bis}) connecting $\psi(x)$ and $\psi^c(x)$. So, after all, Eqs.~(\ref{3.3}) and (\ref{3.10bis}) may be referred to as just the {\it basic} peculiar features generally distinguishing $\psi(x),\psi^c(x)$ and $\psi'(x),\psi'^c(x)$. By use of either (\ref{3.1}) or (\ref{3.2bis}) (and with the help of the $X_{\rm L,R}$ properties) it can further be checked that \begin{equation} \bar{\psi}\gamma^\mu\psi - \bar{\psi}^c \gamma^\mu\psi^c = \bar{\psi}'\gamma^\mu (-\gamma^5)\psi' - \bar{\psi}'^c\gamma^\mu (-\gamma^5)\psi'^c, \label{4.1} \end{equation} \noindent where the individual currents \begin{equation} \bar{\psi}'\gamma^\mu(-\gamma^5)\psi' = \bar{\psi}'\gamma^\mu X_{\rm L}\psi' - \bar{\psi}'\gamma^\mu X_{\rm R}\psi' = \bar{\psi}\gamma^\mu X_{\rm L}\psi - \bar{\psi}^c\gamma^\mu X_{\rm R}\psi^c \label{4.2} \end{equation} \noindent and \begin{equation} \bar{\psi}'^c\gamma^\mu(-\gamma^5)\psi'^c = \bar{\psi}'^c\gamma^\mu X_{\rm L}\psi'^c - \bar{\psi}'^c\gamma^\mu X_{\rm R}\psi'^c = \bar{\psi^c}\gamma^\mu X_{\rm L}\psi^c - \bar{\psi}\gamma^\mu X_{\rm R}\psi \label{4.2bis} \end{equation} \noindent (just like the ordinary ones $\bar{\psi}\gamma^\mu\psi$ and $\bar{\psi}^c\gamma^\mu\psi^c$) are generally {\it non}vanishing 
[{\ref{Dvoeglazov2012}}], in spite of the fact that $\psi'(x) = U_C \psi'^{\dagger {\rm T}}(x)$ and $\psi'^c(x) = U_C \psi'^{c \dagger {\rm T}}(x)$. Let us look first at the `Dirac mass' special case, i.e. when, as usual, the field pair $\psi(x),\psi^c(x)$ is also a pair of mass eigenfields, the one defined by the expansion (\ref{2.27}) and the other by the charge conjugate of (\ref{2.27}). In this case, while $\psi(x)$ and $\psi^c(x)$ $-$ taken as free fields $-$ are {\it single} solutions of the Dirac equation (with a given mass parameter $m$), the same cannot be said for $\psi'(x)$ and $\psi'^c(x)$, as one clearly finds \begin{equation} i\gamma^\mu \partial_\mu \psi'(x) = m \psi'^c(x), \;\;\;\; i\gamma^\mu \partial_\mu \psi'^c(x) = m \psi'(x) \label{4.3} \end{equation} \noindent ($\hbar=c=1$). Yet, on the basis of (\ref{4.1}), (\ref{4.2}), and (\ref{4.2bis}), it may be argued that fields $\psi'(x)$ and $\psi'^c(x)$ themselves seem in particular to enter into maximally-$P$-violating weak couplings as real `dynamical eigenfields' $-$ the former {\it wholly `active'} and the latter {\it wholly `sterile'} $-$ thus giving, furthermore, {\it direct} evidence of the maximum $C$-violation also implied in those couplings. Let us pass now to the `Majorana mass' special case, i.e. when, on the contrary, it is just the field pair $\psi'(x),\psi'^c(x)$ that stands for an actual pair of mass eigenfields. To make an explicit derivation of the normal mode expansions defining $\psi'(x)$ and $\psi'^c(x)$ in such a case, we may exploit the fact that these expansions should clearly have forms which are also available for the zero-mass limit.
Splitting up both $\psi'$ and $\psi'^c$ into chiral components, such that \begin{equation} \left\{ \begin{array}{lcr} \frac{1}{2}(1 - \gamma^5)\psi' = \frac{1}{2}(1 - \gamma^5)\psi, \;\; \frac{1}{2}(1 + \gamma^5)\psi' = \frac{1}{2}(1 + \gamma^5)\psi^c \\[.1in] \frac{1}{2}(1 - \gamma^5)\psi'^c = \frac{1}{2}(1 - \gamma^5)\psi^c, \;\; \frac{1}{2}(1 + \gamma^5)\psi'^c = \frac{1}{2}(1 + \gamma^5)\psi, \end{array} \right. \label{2.37} \end{equation} \noindent we may, thus, simply substitute the zero-mass normal mode expansions of the single chiral-field couples $\frac{1}{2}(1-\gamma^5)\psi,\frac{1}{2}(1+\gamma^5)\psi^c$ and $\frac{1}{2}(1-\gamma^5)\psi^c,\frac{1}{2}(1+\gamma^5)\psi$ to obtain (for the {\it non}zero-mass case at issue) \begin{eqnarray} \psi'(x) & \!\!=\!\! & \int \frac{d^3{\bf p}}{(2\pi)^3 2p^0} \left\{\big[a^{(-)}({\bf p}) u^{(-)}(p) + b^{(+)}({\bf p}) u^{(+)}(p)\big] e^{-ip\cdot x}\right. \nonumber \\ & \! \! & \left. + \big[b^{\dagger(+)}({\bf p}) v^{(+)}(p) + a^{\dagger(-)}({\bf p}) v^{(-)}(p)\big] e^{ip\cdot x} \right\} \; \mbox{(`Major. mass')} \label{2.35} \end{eqnarray} \noindent and \begin{eqnarray} \psi'^c(x) & \!\!\!=\!\!\! & \int \frac{d^3{\bf p}}{(2\pi)^3 2p^0} \left\{\big[b^{(-)}({\bf p}) u^{(-)}(p) + a^{(+)}({\bf p}) u^{(+)}(p)\big] e^{-ip\cdot x}\right. \nonumber \\ & \! \! & \left. + \big[a^{\dagger(+)}({\bf p}) v^{(+)}(p) + b^{\dagger(-)}({\bf p}) v^{(-)}(p)\big] e^{ip\cdot x} \right\} \; \mbox{(`Major. mass')}, \label{2.36} \end{eqnarray} \noindent with the superscripts $(\mp)$ still denoting (negative and positive) helicities. Of course, as now the other two fields, $\psi(x)$ and $\psi^c(x)$, are no longer mass eigenfields, it is not surprising that their corresponding expansions obtained from (\ref{2.35}) and (\ref{2.36}) by use of (\ref{3.2bis}) may not be the same as the (usual) ones for the `Dirac mass' case. 
A glance at (\ref{2.35}) and (\ref{2.36}) shows that only operators such as $a^{(-)}({\bf p})$ and $b^{\dagger(+)}({\bf p})$ (plus their adjoints) enter into (\ref{2.35}), and similarly, only operators such as $b^{(-)}({\bf p})$ and $a^{\dagger(+)}({\bf p})$ (plus their adjoints) enter into (\ref{2.36}). This, at first sight, might lead one to mistake either $\psi'(x)$ or $\psi'^c(x)$ {\it alone} for a truly neutral spin-$\frac{1}{2}$ field, merely endowed with {\it two} degrees of freedom such as $a'^{(-)}({\bf p})\equiv a^{(-)}({\bf p})\,, \,a'^{(+)}({\bf p})\equiv b^{(+)}({\bf p})$ (as regards $\psi'$) or $b'^{(-)}({\bf p})\equiv b^{(-)}({\bf p}) \,, \,b'^{(+)}({\bf p})\equiv a^{(+)}({\bf p})$ (as regards $\psi'^c$). From comparing (\ref{2.35}) and (\ref{2.36}), it appears evident, however, that such pairs of annihilation operators, as well as the whole fields $\psi'(x)$ and $\psi'^c(x)$ themselves, are actually {\it interchanged} by particle--antiparticle conjugation (\ref{2.28}), despite the fact that (\ref{2.35}) and (\ref{2.36}) also provide clear evidence for the {\it individual} formal constraints $\psi'(x)=U_C\psi'^{\dagger {\rm T}}(x)$ and $\psi'^c(x)=U_C\psi'^{c \dagger {\rm T}}(x)$. To this it is worth adding that (as is to be expected) the two expansions (\ref{2.35}) and (\ref{2.36}) are fully consistent with Eq.~(\ref{3.10bis}). Therefore, to come to a {\it genuine} self-conjugate field, it would be strictly necessary to impose the {\it extra} constraint $a^{(\mp)}({\bf p})=b^{(\mp)}({\bf p}), b^{\dagger(\pm)}({\bf p})=a^{\dagger(\pm)}({\bf p})$; which, indeed, would mean nothing else than {\it trivially reducing $\psi'$ and $\psi'^c$ to one and the same field by means of the Majorana condition $\psi'=\psi'^c$}\,! On the other hand, if we truly admit that (\ref{2.35}) and (\ref{2.36}) are just defining `charged' fields, we have also to admit, of course, that standard (i.e. scalar-type) charges are to be ruled out for them.
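As a direct check of the trivial reduction just mentioned: imposing the extra constraint $a^{(\mp)}({\bf p})=b^{(\mp)}({\bf p})$ in (\ref{2.35}) and (\ref{2.36}) makes the two positive-frequency parts coincide term by term,
\[
a^{(-)}({\bf p})\, u^{(-)}(p) + b^{(+)}({\bf p})\, u^{(+)}(p) \;=\; b^{(-)}({\bf p})\, u^{(-)}(p) + a^{(+)}({\bf p})\, u^{(+)}(p),
\]
and similarly for the negative-frequency parts, so that (\ref{2.35}) and (\ref{2.36}) do collapse into one and the same self-conjugate field.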
It is thus left to see, after all, what {\it new} kind of `charges' may ever characterize a pair of mutually `charge conjugate' fermion fields each having only {\it two} (rather than four) degrees of freedom, and what should be the {\it new} meaning to be assigned, accordingly, to each single relationship $\psi'(x) = U_C \psi'^{\dagger {\rm T}}(x)$ and $\psi'^c(x) = U_C \psi'^{c \dagger {\rm T}}(x)$. \section{A spin-1/2 fermion with mass of the `Majorana' (rather than `Dirac') kind as a particle correspondingly endowed with pseudoscalar-type (rather than scalar-type) charges} Let us look again at either (\ref{3.1}) or (\ref{3.2bis}). If we still assume the two fermion fields $\psi(x),\psi^c(x)$ $-$ characterized by Eq.~(\ref{2.9bis}) $-$ to be also mass eigenfields (as in the `Dirac mass' case), we are clearly left with the usual `charged' spin-$\frac{1}{2}$ particles. If we instead suppose the field pair $\psi'(x),\psi'^c(x)$ to be just an alternative pair of mass-eigenfields (as in the `Majorana mass' case), we can no longer expect the associated fermions to be `really neutral' (as commonly believed), the reason being that $\psi'(x)$ and $\psi'^c(x)$ are themselves two `mutually charge-conjugate' (rather than individually self-conjugate) fields. In this case, as we have already pointed out, we should be actually faced with a {\it new} type of `charged' spin-$\frac{1}{2}$ particles. To see what basic peculiar features may truly distinguish the latter `charged' fermion type from the former one, it appears crucial to compare how, in the two cases, the single `particle' and `antiparticle' annihilation (creation) operators do transform under {\it space reflection}. In either case, each field that happens to be a mass eigenfield turns out to be also a solution of a Dirac-type equation. Thus, if Dirac invariance with respect to space reflection is always invoked, one gets e.g.
\begin{equation} P\psi(x^R)P^{-1} = i \gamma^0 \psi(x), \;\; P\psi^c(x^R)P^{-1} = i \gamma^0 \psi^c(x) \label{3.12ter} \end{equation} \noindent for the `Dirac mass' case, and \begin{equation} P'\psi'(x^R)P'^{-1} \!=\! i \gamma^0 \psi'(x), \;\; P'\psi'^c(x^R)P'^{-1} \!=\! i \gamma^0 \psi'^c(x) \label{3.12quater} \end{equation} \noindent for the `Majorana mass' case, where $x^R \equiv(t,-\bf r)$ (and where the phase choice is such that it allows either $P$ or $P'$ to commute with $C$ and to imply, as required, a fermion--antifermion relative intrinsic parity $i^2=-1$). It can be argued that the two parity operators $P$ and $P'$ $-$ the former defined by (\ref{3.12ter}) and the latter by (\ref{3.12quater}) $-$ provide two {\it non}coinciding representations of $x \rightarrow x^R$ (just relevant to the two cases under consideration). That $P$ and $P'$ do {\it not} overlap can be easily shown by direct use of (\ref{3.1}) or (\ref{3.2bis}). Bearing in mind that either $P$ or $P'$ is (in itself) nothing but a linear operator acting on annihilation and creation operators, one should expect, for example, that $P'$ applied to (\ref{3.2bis}) still gives \begin{equation} \left\{ \begin{array}{lcl} P'\psi(x)P'^{-1}\!\!&=&\!\!X_{\rm L} P'\psi'(x)P'^{-1} + X_{\rm R} P'\psi'^c(x)P'^{-1} \\ [0.05in] P'\psi^c(x)P'^{-1}\!\!&=&\!\!X_{\rm R} P'\psi'(x)P'^{-1} + X_{\rm L} P'\psi'^c(x)P'^{-1}. \end{array} \right. \label{3.1ter} \end{equation} \noindent Hence, substituting (\ref{3.12quater}) (and recalling that $X_{\rm L,R}\gamma^0=\gamma^0X_{\rm R,L}$), one is actually led to a result which is {\it not} the same as (\ref{3.12ter}): \begin{equation} P'\psi(x^R)P'^{-1} = i \gamma^0 \psi^c(x), \;\; P'\psi^c(x^R)P'^{-1} = i \gamma^0 \psi(x). 
\label{3.23bis} \end{equation} \noindent If $\psi(x)$ and $\psi^c(x)$ are still mass eigenfields (with usual normal mode expansions), then \begin{equation} Pa^{(\mp)}({\bf p})P^{-1} = i a^{(\pm)}(-{\bf p}) \mbox{, and so on} \;\;\;\; \mbox{(`Dirac mass')}; \label{3.13bis} \end{equation} \noindent which means, strictly speaking, that standard Dirac particles $-$ or `Dirac mass' fermions $-$ are bound to carry charges behaving just like ordinary {\it scalars}. This cannot be true for `Majorana mass' fermions, i.e. for the new kind of `charged' spin-$\frac{1}{2}$ particles associated with the alternative pair $\psi'(x),\psi'^c(x)$ of mass eigenfields. The fact is that, even though Eq.~(\ref{3.12quater}) is quite analogous to Eq.~(\ref{3.12ter}), the expansions strictly defining $\psi'(x)$ and $\psi'^c(x)$ as mass eigenfields are the {\it non}standard ones (\ref{2.35}) and (\ref{2.36}). So, space reflection now implies \begin{equation} P'a^{(\mp)}({\bf p})P'^{-1} = i b^{(\pm)}(-{\bf p}) \mbox{, and so on}\;\;\;\; \mbox{(`Majorana mass')}, \label{3.13ter} \end{equation} \noindent thus acting on `Majorana mass' fermions {\it in the same way as the whole CP operation acts on `Dirac mass' fermions}. This makes sense, of course, if and only if `Majorana mass' fermions are assumed to carry nonzero charges behaving just like {\it pseudoscalars}. Due to such charges, one and the same particle of this kind is indeed predicted to behave like {\it either} a `fermion' {\it or} an `antifermion' according to the given chirality involved, so that, in the ultrarelativistic limit, it would naturally approach an exact {\it two-component} particle model. Associated with it, there should also be a {\it non}vanishing (though not generally conserved) current, proportional e.g. 
to \begin{equation} \bar{\psi'}\gamma^\mu(-\gamma^5)\psi' = \bar{\psi'}_{\rm L}\gamma^\mu\psi'_{\rm L} - \bar{\psi'}_{\rm R}\gamma^\mu\psi'_{\rm R} \label{3.23} \end{equation} \noindent ($\mu=0,1,2,3$) or to the current operator `charge conjugate' to (\ref{3.23}). For speed zero (or at least negligible compared with $c$) such a particle should be {\it equally} able to look like a `fermion' (with negative chirality) {\it as} like an `antifermion' (with positive chirality), while for ultrarelativistic speeds it should tend, depending on its helicity, to behave either in the former or in the latter manner {\it only}. Similarly, its individual helicity eigenstates could never turn out to be {\it sharp} `charge eigenstates' but could only tend to become so in the ultrarelativistic limit (or in the limit of zero mass $[{\ref{Barut1993}}]$). A particle like this cannot be strictly said to be `chargeless' (as it would be for a {\it true} Majorana particle). The general fact is left, anyhow, that {\it whatever spin-$\frac{1}{2}$ particle endowed with `Majorana mass' would still be `neutral' with respect to scalar-type charges}. A comparison of Eqs.~(\ref{3.23bis}),(\ref{3.13ter}) with Eqs.~(\ref{3.12ter}),(\ref{3.13bis}) shows that actually, \begin{equation} P' = CP \;(= PC). \label{3.25} \end{equation} \noindent Such a relationship, just equating parity $P'$ for `Majorana mass' fermions with $CP$ for `Dirac mass' fermions, may in particular lead one to speculate that, if all particles of the latter type were replaced by particles of the former type, the usual $CP$ mirror symmetry of weak processes would then become nothing but a {\it genuine} (ordinary) mirror symmetry! 
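That the identification (\ref{3.25}) indeed holds at the level of the single annihilation and creation operators can also be checked directly (a sketch, assuming the phase conventions adopted above, under which $C$ purely interchanges the `particle' and `antiparticle' operators as in (\ref{2.28})). Acting with $CP$ on a `particle' annihilation operator and using (\ref{3.13bis}), one finds
\begin{eqnarray*}
(CP)\,a^{(\mp)}({\bf p})\,(CP)^{-1} &=& C \left[ i\,a^{(\pm)}(-{\bf p}) \right] C^{-1} \\
&=& i\,b^{(\pm)}(-{\bf p}) \;=\; P'a^{(\mp)}({\bf p})P'^{-1},
\end{eqnarray*}
\noindent the last step being just (\ref{3.13ter}); the same clearly applies to all the remaining `particle' and `antiparticle' operators.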
In view of Eq.~(\ref{3.25}), one may also put, for convenience, \begin{equation} P = P'_{\rm ex} \Longrightarrow P' = CP'_{\rm ex} \; (= P'_{\rm ex}C), \label{3.26} \end{equation} \noindent where $P'_{\rm ex}$ stands just for an `external' parity operator (identical with $P'$ except for not involving pseudoscalar-charge reversal). This shows that, on passing from the `Dirac mass' case (when $C$ may specifically be said to invert scalar-type charges) to the `Majorana mass' case (when $C$ may specifically be said to invert pseudoscalar-type charges), the standard effect of (maximum) $P$ violation would indeed be reduced to a mere effect of (maximum) $P'_{\rm ex}$ violation. \section{`Scalar-charge conjugation' and `pseudoscalar-charge conjugation' operations} It follows from the foregoing that $C$ does in principle reverse {\it scalar-type } as well as {\it pseudoscalar-type} charges, giving rise always to a {\it full} particle $\rightleftharpoons$ antiparticle interchange. If so, how can one {\it separately} think of a `scalar-charge conjugation' and a `pseudoscalar-charge conjugation' operation? Let two such individual operations be denoted by $C_{\rm s}$ and $C_{\rm ps}$, respectively (with $C_{\rm s}^2=C_{\rm ps}^2=1$). Actually, it is only the {\it product} of them both, i.e. $C$ itself, that is strictly demanded to result in a {\it pure} operation acting on annihilation and creation operators as in (\ref{2.28}). 
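In terms of annihilation and creation operators, the requirement just stated amounts to demanding (phase factors apart, and with the same conventions as in (\ref{2.28})) that
\[
(C_{\rm ps}C_{\rm s})\,a^{(h)}({\bf p})\,(C_{\rm ps}C_{\rm s})^{-1} = b^{(h)}({\bf p}), \;\;\;\; (C_{\rm ps}C_{\rm s})\,b^{(h)}({\bf p})\,(C_{\rm ps}C_{\rm s})^{-1} = a^{(h)}({\bf p}),
\]
\noindent no analogous {\it pure} Fock-space representation being demanded of $C_{\rm s}$ or $C_{\rm ps}$ taken singly.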
It appears legitimate, therefore, to attempt to define these single charge-conjugation operations so that, regardless of the mass kind involved, they may simply fulfil the general requirements \begin{equation} C_{\rm s}\Psi(x)C_{\rm s}^{-1} = C\Psi(x)C^{-1}, \;\; C_{\rm s}\Psi'(x)C_{\rm s}^{-1} = \Psi'(x) \label{4.3} \end{equation} \noindent and \begin{equation} C_{\rm ps}\Psi(x)C_{\rm ps}^{-1} = \Psi(x), \;\;\; C_{\rm ps}\Psi'(x)C_{\rm ps}^{-1} = C\Psi'(x)C^{-1}, \label{4.3bis} \end{equation} \noindent where $\Psi(x)$ and $\Psi'(x)$ stand for the column matrices in Eqs.~(\ref{3.3ter}) and (\ref{3.3quater}), and where $C= C_{\rm ps}C_{\rm s}$ ($=C_{\rm s}C_{\rm ps}$). Definitions (\ref{4.3}) and (\ref{4.3bis}) naturally embody, in particular, the new achievement that `Majorana mass' eigenfields are themselves non-neutral fields associated with pseudoscalar-type (rather than scalar-type) charges. Making use of transformation (\ref{3.1}) or its inverse (\ref{3.2bis}), we can see that neither $C_{\rm s}$ nor $C_{\rm ps}$ as given above may be fundamentally represented as {\it purely} acting (like $C$) on annihilation and creation operators: for instance, it can be easily checked that, for the two requirements in (\ref{4.3}) to hold simultaneously, the $C_{\rm s}$ operation must be understood to be further {\it such that} \begin{equation} \left\{ \begin{array}{lcr} C_{\rm s}\left[ \frac{1}{2}(1 \mp \gamma^5)\psi\right]C_{\rm s}^{-1} =\frac{1}{2}(1 \pm \gamma^5)C \psi C^{-1} = U_C \left[ \frac{1}{2}(1 \mp \gamma^5)\psi \right]^{\dagger {\rm T}} \\[.1in] C_{\rm s}\left[ \frac{1}{2}(1 \pm \gamma^5)\psi^c \right] C_{\rm s}^{-1} =\frac{1}{2}(1 \mp \gamma^5)C \psi^c C^{-1} = U_C \left[ \frac{1}{2}(1 \pm \gamma^5)\psi^c \right]^{\dagger {\rm T}}. \label{3.22ter} \end{array} \right. \end{equation} Still focusing on (\ref{4.3}), let us compare $C_{\rm s}$ with $C$ as represented in Eqs.~(\ref{3.3}) and (\ref{3.10bis}). 
Due to (\ref{3.22ter}), we shall now have {\it not only} \begin{equation} C_{\rm s}\Psi(x)C_{\rm s}^{-1} = \pmatrix{U_C&0\cr 0&U_C\cr} \pmatrix{\psi^{\dagger {\rm T}}(x)\cr \psi^{c \dagger {\rm T}}(x)\cr} \label{4.4} \end{equation} \noindent {\it but also} \begin{equation} C_{\rm s}\Psi'(x)C_{\rm s}^{-1} = \pmatrix{U_C&0\cr 0&U_C\cr} \pmatrix{\psi'^{\dagger {\rm T}}(x)\cr \psi'^{c \dagger {\rm T}}(x)\cr}. \label{4.5} \end{equation} \noindent This leads us, in particular, to the conclusion that the conventional identity $C \psi'(x) C^{-1} = U_C \psi'^{\dagger {\rm T}}(x)$ is to be properly recast {\it into} \begin{equation} C_{\rm s} \psi'(x) C_{\rm s}^{-1}= U_C \psi'^{\dagger {\rm T}}(x). \label{2.9'ter} \end{equation} In the light of (\ref{3.22ter}) and (\ref{2.9'ter}), it is immediate to see that $C_{\rm s}$ is exactly coinciding with the special `charge conjugation' operation $C'$ (distinct from $C$) mentioned in Sec.~3. As shown by (\ref{2.9'ter}), it is thus $C_{\rm s}$ that is normally {\it mistaken} for $C$ on dealing with `Majorana mass' fermions! This is a fundamental outcome which enables us to shed light, at last, on the {\it new} meaning to be assigned to the formal relationship $\psi' = U_C \psi'^{\dagger {\rm T}}$ (or its `charge conjugate' $\psi'^c = U_C \psi'^{c \dagger {\rm T}}$): \bigskip {\it Strictly speaking, the constraint} $\psi' = U_C \psi'^{\dagger {\rm T}}$ ($\psi'^c = U_C \psi'^{c \dagger {\rm T}}$) {\it does only express neutrality of} $\psi'$ ($\psi'^c$) {\it with respect to scalar-type charges, and not real neutrality of} $\psi'$ ($\psi'^c$). \bigskip Bearing in mind transformation (\ref{3.1}), let us then consider an {\it active} `Majorana mass' neutrino, associated with a field of the $\psi'$-type, and a {\it sterile} one, associated with the partner field of the $\psi'^c$-type. Herein, two such `Majorana mass' neutrino versions are charge conjugate to each other (and no longer individually self-conjugate). 
This, however, does not exactly mean that they can now be said to represent {\it just} a `lepton' and {\it just} an `antilepton' (as it normally happens for a `Dirac mass' neutrino and the corresponding antineutrino). The point is that, unlike any standard neutrino--antineutrino pair, they would share a {\it pseudoscalar} (rather than scalar) `lepton number' variety (proportional to chirality). We thus have that one and the same {\it active} `Majorana mass' neutrino $-$ associated with a current of the type (\ref{3.23}) $-$ could in turn be {\it either} a left-handed `lepton' (with {\it positive} lepton number) {\it or} a right-handed `antilepton' (with {\it negative} lepton number), and the exact converse (with `lepton' and `antilepton' interchanged) would in principle apply to the (`charge conjugate') {\it sterile} version of it. Note, on the other hand, that due to the presence of mass (which breaks chirality conservation) the lepton number at issue may be conserved only in magnitude, and {\it not} in sign. Hence, with the help of (\ref{2.35}), it is in particular easy to realize that such a neutrino (taken in its active version) could invariably give rise to a net neutrinoless double $\beta$-decay {\it even without being a really neutral particle}. It is worth pointing out, moreover, that a given {\it left-handed} ({\it right-handed}) neutrino would remain coupled to a given {\it left-handed} `charged lepton' ({\it right-handed} `charged antilepton'), so that we should always be able, after all, to recognize single lepton families marked by their own `lepton-number conserving' weak currents. 
This should be related to the general fact $-$ already emphasized in Sec.~4 $-$ that `Dirac mass' fermions themselves, when involved in weak processes, are apparently described by `dynamical eigenfields' structured just like $\psi'(x)$ and $\psi'^c(x)$ (as if they were as well carrying a {\it pseudoscalar} charge variety which is normally kept `hidden' in their strict Dirac behaviors and may indeed be revealed once weak interaction is turned on $[\ref{Ziino2006}-\ref{Ziino2007}]$). \section{Pseudoscalar-type charges and the $CPT$ theorem} As is well-known, one familiar example of a pseudoscalar-type charge is just provided by {\it magnetic} charge $[{\ref{Barut1972}-\ref{Ziino2000}}]$. The fact that the field ${\bf H}$ generated by a magnetic dipole is an axial-vector (invariant under parity) clearly means that space reflection does also interchange the signs of the two poles (besides interchanging the spatial locations of them). The fact, on the other hand, that ${\bf H}$ is inverted by time reversal (instead of being left unvaried like an electric field) shows that each pole does undergo once again a change in sign if space reflection is followed by time reversal. We therefore have that the overall effect of both space and time inversions on magnetic charge would be still the same as on electric charge. This holds as well for the whole $CPT$ operation, which would regularly turn a magnetic monopole with a given four-momentum into an {\it opposite} monopole with identical four-momentum. Such a conclusion (obviously extensible to whatever pseudoscalar-type charges) is also supported by the fact that the `proper' relativistic transformation of {\it strong reflection} $-$ essentially equivalent to $CPT$ [${\ref{Sakurai1964bis}} -{\ref{Recami1976}}$] $-$ acts identically on {\it vector} as on {\it pseudovector} currents. That said, let us go back to the revised `Majorana mass' fermion model herein proposed. 
The basic peculiar feature of it is obviously given by (\ref{3.13ter}). In short, we have that the parity operator $P'$ (relevant to the `Majorana mass' case) does naturally exert a {\it `charge conjugating' internal extra action}, and this may clearly happen only for a particle just endowed with pseudoscalar-type charges. As already stressed in Sec.~4, such a result tells us that $P'$ is exactly the same as $CP$ for standard fermions. We also have, therefore, that symmetry under $CP$ ($=P'$) alone does not enable us to distinguish between scalar-type and pseudoscalar-type charges, and so it may {\it equally} allow, in principle, either the conjecture of a `Dirac mass' neutrino (endowed with a {\it scalar} lepton number) or the conjecture of a `Majorana mass' neutrino (endowed with a {\it pseudoscalar} lepton number). To enlarge the discussion to $CPT$ symmetry, let us just denote by $T$ and $T'$ the two (antiunitary) operators representing time reversal in the pure `Dirac mass' and `Majorana mass' cases, respectively. In the light of the above remarks on the behaviors of pseudoscalar-type charges, such operators cannot be expected to coincide, the reason being that $T'$ is also demanded (just like $P'$) to include a {\it `charge conjugating' internal effect}. This can be properly obtained in terms of `particle' and `antiparticle' annihilation (creation) operators such as $a^{(h)}({\bf p})$ ($a^{(h)\dagger}({\bf p})$) and $b^{(h)}({\bf p})$ ($b^{(h)\dagger}({\bf p})$). In other words, considering that (except for phase factors) \begin{equation} Ta^{(h)}({\bf p})T^{-1} = a^{(h)}(-{\bf p}) \mbox{, and so on} \;\;\;\;\;\; \mbox{(`Dirac mass')}, \label{4.9bis} \end{equation} \noindent one should correspondingly have \begin{equation} T'a^{(h)}({\bf p})T'^{-1} = b^{(h)}(-{\bf p}) \mbox{, and so on} \;\;\;\;\;\; \mbox{(`Majorana mass')}.
\label{4.9ter} \end{equation} \noindent We thus have that $T'$ can be formally related to $T$ as follows: \begin{equation} T' = CT, \label{4.9quater} \end{equation} \noindent where, in analogy with (\ref{3.26}), one may set \begin{equation} T = T'_{\rm ex} \Longrightarrow T' = CT'_{\rm ex}, \label{4.14} \end{equation} \noindent with $T'_{\rm ex}$ standing for a mere `external' time-reversal operator (identical with $T'$ except for not involving pseudoscalar-charge reversal). From (\ref{3.25}) and (\ref{4.9quater}), on the other hand, it can be seen that the overall $PT$ and $P'T'$ operators do indeed {\it coincide}: with the phase choices allowing $C$ to commute with both $P$ and $T$, one simply has $P'T' = (CP)(CT) = C^2PT = PT$. Hence, \begin{equation} CPT = CP'T' = CP'_{\rm ex}T'_{\rm ex}, \label{4.2quater} \end{equation} \noindent and this is in full accordance with the above-mentioned fact that the whole (equivalent) symmetry operation of {\it strong reflection} does not make any distinction between scalar-type and pseudoscalar-type charges. It may therefore be concluded that the $CPT$ theorem is just as well available for the special new kind of {\it charged} particles to which `Majorana mass' fermions should strictly correspond. This, however, does not really mean that an active field $\psi'$ ($=X_{\rm L}\psi + X_{\rm R}\psi^c$) and its sterile counterpart $\psi'^c$ ($=X_{\rm L}\psi^c + X_{\rm R}\psi$) are bound to have {\it identical} masses as a result of their being also a pair of `mutually charge-conjugate' fields. If $\psi'$ is coupled, say, to a mass $m_{\rm L}$, and $\psi'^c$, say, to a mass $m_{\rm R}$, we may, in other words, generally assume $m_{\rm L}\not=m_{\rm R}$ as in the conventional approach. The reason is just that, for fields having such expansions as (\ref{2.35}) and (\ref{2.36}), the whole symmetry operation (\ref{4.2quater}) is only able (like $P'$ and unlike $C$) to connect annihilation or creation operators always included in {\it one and the same} expansion. As an immediate consequence, one has e.g.
that the See--Saw Mechanism may still apply to neutrino masses without spoiling $CPT$ symmetry. Yet, there seems to be another intriguing feature to be pointed out. According to the new approach, fermions with `Majorana masses' would indeed obey {\it ordinary} mirror symmetry (as just the {\it analogue} of $CP$ symmetry for standard fermions) and they would also experience a {\it manifest} (maximum) $C$ violation (due to the fact that an {\it active} `Majorana mass' fermion and its {\it sterile} counterpart are now `charge conjugate to each other'). Hence, neglecting the extreme conjecture of $CPT$ breakdown $[{\ref{Colladay1998}-\ref{Esposito2010}}]$, we see that such fermions would as well obey symmetry under $CT'$ ($=T'_{\rm ex}$), albeit at the price of also experiencing a (maximum) {\it time reversal} (i.e. $T'$) violation, which would just counterbalance the `recovered' ordinary mirror symmetry. In close connection with this, it is worth noting that merely releasing the constraint $m_{\rm L}=m_{\rm R}$ would already break $C$ and $T'$ individual symmetries, with no need to consider weak dynamics. We thus have, for example, that active and sterile `Majorana mass' neutrino versions which are supposed to have masses $m_{\rm L}\not=m_{\rm R}$ should anyhow be taken as particles {\it intrinsically violating} either $C$ or time reversal. \section{Single `Dirac mass' or `Majorana mass' charged fermion fields as superpositions of pure Majorana neutral fields} We know from Ref.~$[{\ref{Majorana1937}}]$ $-$ see also Refs.~$[{\ref{Bilenky1987}},{\ref{Giunti2007}}]$ $-$ not only that a single Majorana neutral field $\psi_{\rm M}(x)$ can be obtained, via Eq.~(\ref{2.1}), as a superposition of two distinct (and mutually charge-conjugate) `Dirac mass' charged fermion fields, but even that a single `Dirac mass' charged fermion field $\psi(x)$ may be seen, conversely, as a superposition of two distinct Majorana neutral fields (with opposite $CP$ intrinsic parities).
One has, more precisely, \begin{equation} \psi(x) = \frac{1}{\sqrt{2}}\left[\psi^{(+)}_{\rm M}(x) + i\psi^{(-)}_{\rm M}(x)\right] \;\;\;\;\;\; \mbox{(`Dirac mass')}, \label{8.1} \end{equation} \noindent where $\psi^{(+)}_{\rm M}(x)=\psi_{\rm M}(x)$, and where $\psi^{(-)}_{\rm M}(x)$ is a new (still manifestly self-conjugate) Majorana field defined as \begin{equation} \psi^{(-)}_{\rm M}(x) = \frac{-i}{\sqrt{2}}\left[\psi(x) - \psi^c(x)\right]. \label{8.2} \end{equation} \noindent The superscripts $(\pm)$ have here been used to denote the $CP$ intrinsic parities distinguishing two such neutral fields. A similar result holds for the `Dirac mass' field being the charge conjugate of $\psi(x)$, which may likewise be expressed in the form \begin{equation} \psi^c(x) = \frac{1}{\sqrt{2}}\left[\psi^{(+)}_{\rm M}(x) - i\psi^{(-)}_{\rm M}(x)\right] \;\;\;\;\;\; \mbox{(`Dirac mass')}. \label{8.3} \end{equation} \noindent Whether we are considering (\ref{8.1}) or (\ref{8.3}), fields $\psi_{\rm M}^{(\pm)}(x)$ are understood to be mass eigenfields with identical eigenvalues, and their masses (the same as those carried by $\psi$ and $\psi^c$) may be said to display `Dirac-like' characters. In the light of transformation (\ref{3.2bis}) $-$ which turns Eq.~(\ref{2.1}) into Eq.~(\ref{2.34}) $-$ we may now, on the other hand, also think of a Majorana neutral field being a superposition of two mutually charge-conjugate `Majorana mass' charged fermion fields, $\psi'(x)$ and $\psi'^c(x)$, with identical masses. This, indeed, is in line with the general fact that, due to condition (\ref{2.2bis}), the mass term relevant to a true Majorana field may be claimed to be {\it equally reminiscent} of a `Dirac' as of a `Majorana' mass term. Starting from $\psi'(x)$ and $\psi'^c(x)$ (with masses $m_{\rm L}$ and $m_{\rm R}$ being in particular such that $m_{\rm L} =m_{\rm R}$), we can, thus, again construct two independent (manifestly self-conjugate) Majorana neutral fields as above. 
They read \begin{equation} \psi'^{(+)}_{\rm M}(x) = \frac{1}{\sqrt{2}}\left[\psi'(x) + \psi'^c(x)\right], \;\;\; \psi'^{(-)}_{\rm M}(x) = \frac{-i}{\sqrt{2}}\left[\psi'(x) - \psi'^c(x)\right] \label{8.4} \end{equation} \noindent and still have opposite $CP$ intrinsic parities. Hence we can see that, in full analogy with the `Dirac mass' case, fields $\psi'(x)$ and $\psi'^c(x)$ themselves (taken with identical masses) may be split, conversely, as follows: \begin{equation} \left\{ \begin{array}{lcl} \!\psi'(x) \!\!\!&=&\!\!\! \frac{1}{\sqrt{2}}\left[\psi'^{(+)}_{\rm M}(x) + i\psi'^{(-)}_{\rm M}(x)\right] \\ [0.09in] \!\psi'^c(x) \!\!\!&=&\!\!\! \frac{1}{\sqrt{2}}\left[\psi'^{(+)}_{\rm M}(x) - i\psi'^{(-)}_{\rm M}(x)\right] \end{array} \; \mbox{(`Majorana mass'\,; $m_{\rm L}=m_{\rm R}$)}. \right. \label{8.5} \end{equation} \noindent Of course, the two neutral fields $\psi'^{(\pm)}_{\rm M}(x)$ in (\ref{8.5}) may correspondingly be said to have masses displaying `Majorana-like' characters. \section{Concluding remarks} There are mainly two motivations underlying this paper. The former one is the purpose of throwing light upon some basic theoretical inconsistencies which appear to be present in the usual approach to Majorana fermions and `Majorana mass' fermions. The latter one is the need of working out accordingly $-$ with no departure from standard QFT $-$ a formalism really free of such inconsistencies and further able to lead, after all, to a new insight into the whole subject. As a starting point, a brief discussion has been devoted to how to interpret the two pairs of massless Dirac field solutions (\ref{2.10}) and (\ref{2.11}) in order to avoid charge conjugation $C$ turning out to invert helicities.
It has been argued that the appropriate reading is provided by (\ref{2.10ter}) and (\ref{2.11ter}), and {\it not} by (\ref{2.10bis}) and (\ref{2.11bis}), even though the latter choice seems just to come from a natural extension of the well-known Dirac `prescription' (\ref{2.9bis}). Opting for (\ref{2.10ter}) and (\ref{2.11ter}) has also been shown to be the only choice that correctly implies $C$ {\it violation} as a result of any {\it asymmetry} occurring between (\ref{2.10}) and (\ref{2.11}). The fact is left, however, that the alternative reading is also the one which normally allows a `Majorana mass' neutrino to be recognized as a {\it self-conjugate} particle! What, then, about the real nature of such a neutrino? To shed full light on the matter, a truly {\it direct} (and thus unambiguous) check has been tried, based on the {\it primary} QFT definition itself of charge conjugation, i.e. its {\it fundamental representation} (\ref{2.28}) (in the Fock space). This procedure has led us to conclude that {\it a `Majorana mass' neutrino, unlike the original neutrino by Majorana himself, cannot be really claimed to be genuinely self-conjugate}. The point may be generally set as follows. Take the wholly {\it active} ({\it sterile}) fermion field which can be obtained from suitably mixing the chiral components of two mutually charge-conjugate Dirac fields. If $C$ is {\it exactly} applied as in (\ref{2.28}), the net outcome is that such a field is correspondingly turned into its {\it sterile} ({\it active}) counterpart, and {\it not} into itself! This also shows that the standard formula (\ref{2.9bis}) $-$ just suitable for defining the charge conjugate of a `Dirac mass' fermion field $-$ cannot be extended to `Majorana mass' fermion fields (and thus be used as `proof' of their real neutrality) without coming into conflict with the true $C$ definition (\ref{2.28}). 
If so, how can we explain the individual constraints, $\psi'(x) = U_C \psi'^{\dagger {\rm T}}(x)$ and $\psi'^c(x) = U_C \psi'^{c\dagger {\rm T}}(x)$, naturally applying to an active `Majorana mass' fermion field $\psi'(x)$ and its (`charge conjugate') sterile version $\psi'^c(x)$? The answer to this crucial question can actually be found in the {\it new} formalism which has been herein developed to remodel a fermion with mass of the `Majorana' (rather than `Dirac') type. The basic novelty is that replacing a `Dirac mass' with a `Majorana mass' does now mean passing from a standard fermion, endowed with {\it scalar-type} charges, to a fermion which is {\it not} really neutral but is endowed with {\it pseudoscalar-type} charges. A `charged' spin-$\frac{1}{2}$ particle like this should essentially be thought of as a {\it chiral} object in turn behaving like a `fermion' or an `antifermion' according to {\it either} sign of the associated chirality (where `fermion' and `antifermion' should clearly appear {\it interchanged} for the particle `charge conjugate' to it). In such a framework, charge conjugation $C$ proves to act as a {\it true} `particle--antiparticle conjugation' operation which may generally be split into the product of a mere `scalar-charge conjugation' and a mere `pseudoscalar-charge conjugation' operation. Hence it can indeed be seen that the above constraints peculiar to $\psi'(x)$ and $\psi'^c(x)$ are just expressing {\it neutrality of `Majorana mass' fermions with respect to scalar-type charges}. This, however, does {\it not} mean that a genuinely neutral Majorana fermion cannot exist in nature. Such a particle $-$ strictly described by a {\it manifestly self-conjugate} field like the one given in Eq.~(\ref{2.1}) or (\ref{2.34}) $-$ would regularly possess {\it no charges at all} (i.e. neither {\it scalar-type} nor {\it pseudoscalar-type} charges).
Herein a `new' model of it has been obtained, where some inconsistent features unavoidably affecting the conventional theory appear to be automatically removed. Firstly, in full compliance with Eq.~(\ref{2.1ter}), one now has that a genuine Majorana fermion can no longer be assigned any special sort of `handedness' marking it as {\it just} an `active' or `sterile' fermion: there may always be only a fifty-fifty probability for it to look like the former or the latter particle. This implies, for example, that a true Majorana neutrino cannot really be claimed to be quite compatible with the SM (contrary to what may clearly be said for a {\it pure} active `Majorana mass' neutrino). Secondly, as {\it rigorously} demanded by the natural constraint (\ref{2.2bis}), the mass of a genuine Majorana fermion is now bound to be of a {\it single} kind, equally reminiscent of a `Majorana' as of a `Dirac' mass kind. The (no longer chargeless) `Majorana mass' fermion model here proposed does actually introduce a {\it new} sort of spin-$\frac{1}{2}$ particle which is somehow {\it half way} between the Dirac one and the genuine Majorana one. The point is that a `Majorana mass' fermion is now a particle endowed with {\it pseudoscalar-type} charges and still devoid of {\it scalar-type} charges: thus, it is `charged' (like a Dirac particle) but it also retains only {\it two} degrees of freedom (like a true Majorana particle). This model applies, in principle, to any {\it wholly} active or {\it wholly} sterile massive fermions (including SUSY ones) which are usually (and improperly) referred to as `Majorana fermions' {\it tout court}. It in particular deals with the active and sterile versions of a `Majorana mass' neutrino as a pair of `mutually charge-conjugate' (rather than individually self-conjugate) particles which may always have two distinct mass values {\it all the same} (though now at the price of {\it intrinsically violating} $C$).
The latter feature (still permitting a mass-generating mechanism like the See--Saw one) is just due to the following reason: unlike what happens for standard fermions (endowed with scalar-type charges), such (active and sterile) charge-conjugate neutrinos would {\it not} be really interchanged under $CPT$ (were it not so, on the other hand, $CPT$ itself would then be maximally violated!). Similarly, although a `Majorana mass' neutrino is now predicted to bear a {\it non}zero lepton number, we have that the conventional expectation for a neutrinoless double $\beta$-decay is left unaffected. This is simply because a {\it pseudoscalar} lepton number is as well a quantity that changes sign {\it along with chirality}. Yet, supposing that real neutrinos should truly turn out to be `Majorana mass' (and not `Dirac mass') fermions, we have also that their well-known phenomenology should then be reread in a way {\it opposite} to the usual one: the actual behaviors of them under space reflection and time inversion would indeed appear to have {\it reversed} meanings! The fact is that real neutrinos themselves should be admitted, accordingly, to be particles carrying {\it pseudoscalar-type} (rather than scalar-type) charges; so they would paradoxically seem to obey {\it pure} mirror symmetry (as just the {\it analogue} of `$CP$ symmetry' for standard fermions) and to violate instead (to a maximal degree) either {\it time reversal} or particle--antiparticle conjugation $C$ (with possible far-reaching effects on the yet unsolved `time arrow' fundamental question). The last comment to be made goes beyond neutrino physics and is more generally addressed to the correspondence that has been herein set up between `Majorana masses' and pseudoscalar-type charges. 
Since one has that an active `Majorana mass' neutrino, whether viewed with {\it no} lepton number or with a {\it pseudoscalar} nonzero lepton number, is identically able to induce a net neutrinoless double $\beta$-decay, one could get the general idea that remodelling (active and sterile) `Majorana mass' fermions as particles endowed with pseudoscalar-type charges (rather than genuinely neutral) should truly have no direct repercussions in experimental reality. Such an idea, however, would leave out of account the fact that pseudoscalar-type charges could themselves be at the origin of {\it new} interactions. A particularly significant example may come from considering magnetic charge, whose pseudoscalar nature (opposed to the scalar nature of electric charge) is already well-known. This indeed suggests that `Majorana mass' fermions (just opposed to `Dirac mass' ones) might now be seriously expected to be even the natural candidates for magnetic {\it monopoles}. \section*{Acknowledgments} The author is particularly grateful to Dr. Salvatore Esposito for his incisive and detailed comments and his valuable advice. Thanks are also due to Prof. V. V. Dvoeglazov, Dr. M. Dvornikov, Prof. E. Fiordilino and Dr. R. Plaga for useful suggestions. \section*{References} \begin{enumerate} \item \label{Majorana1937} E. Majorana, {\it Nuovo Cimento} {\bf 14}, 171 (1937). \item \label{Jehle1949} H. Jehle, {\it Phys. Rev.} {\bf 75}, 1609 (1949). \item \label{Serpe1949} J. Serpe, {\it Phys. Rev.} {\bf 76}, 1538 (1949). \item \label{Esposito1998} See e.g. S. Esposito, {\it Int. J. Mod. Phys. A} {\bf 13}, 5023 (1998); S. Esposito and N. Tancredi, {\it Eur. Phys. J. C} {\bf 4}, 221 (1998). \item \label{Gell-Mann1979} M. Gell-Mann, P. Ramond, and R. Slansky, in {\it Supergravity}, eds. D. Z. Freedman and P. van Nieuwenhuizen (North-Holland, Amsterdam, 1979). \item \label{Mourik2012} V. Mourik, K. Zuo, S. M. Frolov, S. R. Plissard, E. P. A. M. Bakkers and L. P.
Kouwenhoven, {\it Science} {\bf 336}, 1003 (2012). \item \label{Brouwer2012} P. W. Brouwer, {\it Science} {\bf 336}, 989 (2012). \item \label{Williams2012} J. R. Williams, A. J. Bestwick, P. Gallagher, S. S. Hong, Y. Cui, A. S. Bleich, J. G. Analytis, I. R. Fisher, and D. Goldhaber-Gordon, {\it Phys. Rev. Lett.} {\bf 109}, 056803 (2012). \item \label{Rokhinson2012} L. P. Rokhinson, X. Liu and J. K. Furdyna, {\it Nature Physics} {\bf 8}, 795 (2012). \item \label{Das2012} A. Das, Y. Ronen, Y. Most, Y. Oreg, M. Heiblum and H. Shtrikman, {\it Nature Physics} {\bf 8}, 887 (2012). \item \label{Deng2012} M. T. Deng, C. L. Yu, G. Y. Huang, M. Larsson, P. Caroff and H. Q. Xu, {\it Nano Lett.} {\bf 12}, 6414 (2012). \item \label{Esposito2013} S. Esposito, {\it Europhys. Lett.} {\bf 102}, 17006 (2013). \item \label{Merzbacher1970} See e.g. (apart from some different notations therein used): E. Merzbacher, {\it Quantum Mechanics} (second edition) (John Wiley and Sons, New York, 1970) pp.~584,585. \item \label{Dvornikov2012} M. Dvornikov, {\it Found. Phys.} {\bf 42}, 1469 (2012) (arXiv:1106.3303 [hep-th]). \item \label{Dvoeglazov2012} V. V. Dvoeglazov, {\it J. Phys.: Conf. Series} {\bf 343}, 012033 (2012). \item \label{Glashow1961} S. L. Glashow, {\it Nucl. Phys.} {\bf 22}, 579 (1961). \item \label{Weinberg1967} S. Weinberg, {\it Phys. Rev. Lett.} {\bf 19}, 1264 (1967). \item \label{Salam1968} A. Salam, in {\it Proceedings of the Eighth Nobel Symposium on Elementary Particle Theory}, ed. N. Svartholm (Almquist and Wiksell, Stockholm, 1968) p.~367. \item \label{Bilenky1987} S. M. Bilenky and S. T. Petcov, {\it Rev. Mod. Phys.} {\bf 59}, 671 (1987). \item \label{Giunti2007} C. Giunti and C. W. Kim, {\it Fundamentals of Neutrino Physics and Astrophysics} (Oxford University Press, 2007). \item \label{McLennan1957} J. A. McLennan, Jr., {\it Phys. Rev.} {\bf 106}, 821 (1957). \item \label{Case1957} K. M. Case, {\it Phys. Rev.} {\bf 107}, 307 (1957).
\item \label{Feynman1958} R. P. Feynman and M. Gell-Mann, {\it Phys. Rev.} {\bf 109}, 193 (1958). \item \label{Sudarshan1958} R. E. Marshak and E. C. G. Sudarshan, {\it Phys. Rev.} {\bf 109}, 1860 (1958). \item \label{Sakurai1958} J. J. Sakurai, {\it Nuovo Cimento} {\bf 7}, 649 (1958). \item \label{Sakurai1964} See e.g. J. J. Sakurai, {\it Invariance Principles and Elementary Particles} (Princeton University Press, Princeton, 1964) pp.~122,129--132. \item \label{Dvoeglazov1997} V. V. Dvoeglazov, {\it Mod. Phys. Lett. A} {\bf 12}, 2741 (1997). \item \label{Itzykson1985} C. Itzykson and J. Zuber, {\it Quantum Field Theor} (McGraw Hill, New York, 1985) pp.~87--89. \item \label{Barut1993} A. O. Barut and G. Ziino, {\it Mod. Phys. Lett. A} {\bf 8}, 1011 (1993). \item \label{Ziino2006} G. Ziino, {\it Int. J. Theor. Phys.} {\bf 45}, 1993 (2006). \item \label{Ziino2006bis} G. Ziino, {\it Ann. Fond. Louis de Broglie} {\bf 31}, 169 (2006). \item \label{Ziino2007} G. Ziino, {\it Mod. Phys. Lett. A} {\bf 22}, 853 (2007). \item \label{Barut1972} A. O. Barut, {\it Phys. Lett. B} {\bf 38}, 97 (1972); {\bf 46}, 81 (1973). \item \label{Defaria-Rosa1986} M. A. Defaria-Rosa, E. Recami, and W. A. Rodriguez Jr., {\it Phys. Lett. B} {\bf 173}, 233 (1986). \item \label{Ziino1996} G. Ziino, {\it Int. J. Mod. Phys. A} {\bf 11}, 2081 (1996). \item \label{Ziino2000} G. Ziino, {\it Int. J. Theor. Phys.} {\bf 39}, 2605 (2000). \item \label{Sakurai1964bis} J. J. Sakurai, {\it Invariance Principles and Elementary Particles} (Princeton University Press, Princeton, 1964) pp.~136--143. \item \label{Mignani1974} R. Mignani and E. Recami, {\it Nuovo Cimento A} {\bf 24}, 438 (1974). \item \label{Mignani1975} R. Mignani and E. Recami, {\it Int. J. Theor. Phys.} {\bf 12}, 299 (1975). \item \label{Recami1976} E. Recami and G. Ziino, {\it Nuovo Cimento A} {\bf 33}, 205 (1976). \item \label{Colladay1998} D. Colladay and V. A. Kostelecky, {\it Phy. Rev. D} {\bf 55}, 6760 (1997); ibid. 
{\it D} {\bf 58}, 116002 (1998). \item \label{Kostelecky2004} V. A. Kostelecky and M: Mewes, {\it Phy. Rev. D} {\bf 69}, 016005 (2004). \item \label{Esposito2010} S. Esposito and G. Salesi, {\it Mod. Phys. Lett. A} {\bf 25}, 597 (2010). \end{enumerate} \end{document}
\section{Introduction} The study of patterns in permutations has grown in the last two decades into one of the most active trends of research in contemporary combinatorics. The study of permutations which are constrained by not having one or more subsequences ordered in various prescribed ways was historically motivated by the problem of sorting permutations by means of certain devices. Nowadays, however, research on this topic is further fueled by its intrinsic combinatorial difficulty and by the plentiful appearances of patterns in several very different disciplines, such as algebra, geometry, analysis, computer science, mathematical physics and computational biology. For this reason, it is reasonable to believe that this field of research will continue growing for a long time to come. More recently, the notion of vincular (or generalized) pattern in permutations has been considered. Whereas an occurrence of a classical pattern $\pi$ in a permutation $\sigma$ is simply a (not necessarily consecutive) subsequence of $\sigma$ whose items are in the same relative order as those in $\pi$, in an occurrence of a vincular pattern some items of that subsequence may be required to be adjacent in the permutation. For instance, the classical pattern $1234$ simply corresponds to an increasing subsequence of length 4, whereas an occurrence of the generalized pattern $1-23-4$ would require the middle two letters of that subsequence to be adjacent in $\sigma$. Thus, the permutation $23145$ contains $1234$ but not $1-23-4$. The study of vincular patterns provides significant additions to the extensive literature on classical patterns. In fact, the non-classical vincular patterns are likely to provide richer connections to other combinatorial structures than the classical ones do. 
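To make the adjacency requirement concrete, containment of a dashed pattern can be tested mechanically. The following Python sketch is ours, not part of the paper; a pattern is written as a string such as \texttt{1-23-4} (single-digit letters only), letters inside the same dash-free block must occupy adjacent positions, and a classical pattern is written with dashes everywhere, e.g. \texttt{1-2-3-4}.

```python
from itertools import combinations

def order_isomorphic(a, b):
    """True if the words a and b have the same relative order."""
    return all((a[i] < a[j]) == (b[i] < b[j])
               for i in range(len(a)) for j in range(len(a)))

def contains(perm, dashed):
    """Test whether perm contains the dashed (vincular) pattern,
    given as a string like '1-23-4': letters between two dashes
    must occupy adjacent positions in perm."""
    blocks = dashed.split('-')
    pattern = [int(c) for c in ''.join(blocks)]
    k = len(pattern)
    # adjacency[j] is True when chosen positions j and j+1 must be adjacent
    adjacency = []
    for block in blocks:
        adjacency += [True] * (len(block) - 1) + [False]
    adjacency.pop()  # no constraint after the last letter
    for pos in combinations(range(len(perm)), k):
        if all(pos[j + 1] == pos[j] + 1
               for j in range(k - 1) if adjacency[j]):
            if order_isomorphic([perm[i] for i in pos], pattern):
                return True
    return False

# The example from the text: 23145 contains 1234 but not 1-23-4.
print(contains((2, 3, 1, 4, 5), '1-2-3-4'))  # True
print(contains((2, 3, 1, 4, 5), '1-23-4'))   # False
```

The only increasing subsequence of length 4 in $23145$ is $2345$, and its middle two letters $3$ and $4$ sit at non-adjacent positions, so the adjacency constraint of $1-23-4$ fails.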
Other than combinatorics, vincular patterns find applications in other scientific topics such as, for instance, the genome rearrangement problem, which is one of the major trends in bioinformatics and biomathematics. The main issue we address in this paper concerns permutations avoiding a well-known vincular pattern of length $4$, namely the pattern $1-32-4$. The paper is organized as follows. In Section $\ref{VP}$ we fix the main notations and terminology which we will use throughout the paper. In Section $\ref{1-32-4avoiders}$ we describe our contribution to the study of permutations avoiding $1-32-4$. Permutations avoiding this pattern were actually enumerated by Callan in $\cite{C}$, who provided a recursive formula to count them only as a collateral consequence of a very intricate bijection with a certain class of ordered rooted trees. The problem of finding a more explicit justification for the recursive formula obtained by Callan has received some attention in the past decade (see $\cite{DGR}$). In Section $\ref{1-32-4avoiders}$ we construct a generating tree with a single label for permutations avoiding the vincular pattern $1-32-4$, finally providing such a justification. In order to introduce this topic, in Sections $\ref{PW}$ and $\ref{ECO}$ we collect the latest results about vincular pattern avoidance and we quickly review the ECO method, which is the framework needed to describe the construction in Section $\ref{1-32-4avoiders}$. \section{The notion of vincular pattern}\label{VP} Let $n\in \mathbb{N}^{*}$ and denote $[n]=\{1,2,...,n\}$. For our purposes, a \emph{permutation of length} $n$ will be just a word with $n$ distinct letters from the alphabet $[n]$. The length of a permutation $\sigma$ will be denoted by $|\sigma|$. 
We will denote by $\mathcal{S}_{n}$ the set of all permutations of length $n$ and we will set $\mathcal{S}=\bigcup_{n\in \mathbb{N}}\mathcal{S}_{n}$ (where by convention we set $\mathcal{S}_{0}=\{\varepsilon\}$ and $|\varepsilon|=0$). We begin by quickly recalling the classical definition of pattern in a permutation. In general, given any poset $\mathcal{P}$ and two non-empty words $\alpha$ and $\beta$ of length $k$ in the alphabet $\mathcal{P}$, we say that $\alpha$ and $\beta$ are \emph{order isomorphic}, and we write $\alpha\sim \beta$, when, for every $i,j\in [k]$, $\alpha_{i}\leq \alpha_{j}$ if and only if $\beta_{i}\leq \beta_{j}$. Let now $\sigma,\tau\in \mathcal{S}$ and suppose $i\in [|\tau|]^{|\sigma|}$. We say that $i$ is an $\emph{occurrence}$ of $\sigma$ in $\tau$ when $i_{1}<i_{2}<...<i_{|\sigma|}$ and $\tau_{i_{1}}\tau_{i_{2}}...\tau_{i_{|\sigma|}}\sim\sigma$. We say that $\sigma$ \emph{occurs as a pattern} in $\tau$, and we write $\sigma\leq \tau$, when either $\sigma=\varepsilon$ or one can find an occurrence of $\sigma$ in $\tau$; we say that $\tau$ $\emph{avoids}$ $\sigma$ \emph{as a} $\emph{pattern}$ otherwise. It is routine to check that $\leq$ is a partial order relation, turning $\mathcal{S}$ into a poset, which is called the \emph{permutation pattern poset}. It is worth noting that the items in an occurrence of a classical pattern in a permutation are not required to be consecutive; by forcing this further condition we obtain the consecutive pattern poset on $\mathcal{S}$. With the notations as above, we say that $i$ is a $\emph{consecutive}$ $\emph{occurrence}$ of $\sigma$ in $\tau$ when $i$ is an occurrence of $\sigma$ in $\tau$ (in the classical sense) and either $|\sigma|=1$ or $i_{j+1}=i_{j}+1$ for every $j\in [|\sigma|-1]$. In the same fashion, we say that $\sigma$ \emph{occurs as a consecutive pattern} in $\tau$ when one can find a consecutive occurrence of $\sigma$ in $\tau$. 
The resulting poset on $\mathcal{S}$ will be called the $\emph{consecutive}$ $\emph{pattern}$ $\emph{poset}$. Usually, the consecutive pattern poset reveals a much simpler structure than the classical pattern poset. For instance, the M\"obius function of the consecutive permutation pattern poset is completely understood, whereas it is largely unknown in the classical case. Consecutive and classical patterns are special (actually, extremal) cases of the more general notion of vincular patterns. Vincular patterns were introduced by Babson and Steingrimsson (under the name of generalized patterns), and constitute a vast intermediate continent between the two lands of consecutive patterns and classical patterns. Investigation of this intermediate notion could hopefully shed some light on differences and analogies between the extremal cases of consecutive and classical patterns. An occurrence of a vincular pattern is basically an occurrence of that pattern in which entries are subject to given adjacency conditions. For a more formal definition of vincular pattern we follow $\cite{BF}$. A $\emph{dashed}$ $\emph{permutation}$ is a permutation in which some dashes are possibly inserted between any two consecutive letters. For instance, $5-13-42$ is a dashed permutation (of length 5). The type of a dashed permutation $\sigma$ such that $|\sigma|\geq 2$ is the $\{0, 1\}-$vector $r = (r_{1},...,r_{|\sigma|-1})\in \{0,1\}^{|\sigma|-1}$ such that $r_{i} = 0$ whenever there is no dash between $\sigma_{i}$ and $\sigma_{i+1}$ and $r_{i} = 1$ whenever there is a dash between $\sigma_{i}$ and $\sigma_{i+1}$, for every $i\in [|\sigma|-1]$. For example, the above dashed permutation $5-13-42$ has type $(1,0,1,0)$. Given a dashed permutation $\sigma$, the $\emph{underlying}$ $\emph{permutation}$ of $\sigma$ is the permutation obtained by removing the dashes from $\sigma$. For instance, the underlying permutation of $5-13-42$ is the permutation $51342$. 
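As an illustration (ours, not part of the paper), the underlying permutation and the type vector of a dashed permutation written as a string can be extracted in a few lines of Python:

```python
def parse_dashed(word):
    """Split a dashed permutation such as '5-13-42' into its underlying
    permutation and its type: the {0,1}-vector with a 1 exactly where a
    dash separates two consecutive letters (single-digit letters only)."""
    underlying, type_vector = [], []
    pending_dash = 0
    for ch in word:
        if ch == '-':
            pending_dash = 1
        else:
            if underlying:                     # record the gap just crossed
                type_vector.append(pending_dash)
            underlying.append(int(ch))
            pending_dash = 0
    return underlying, tuple(type_vector)

print(parse_dashed('5-13-42'))  # ([5, 1, 3, 4, 2], (1, 0, 1, 0))
```

This reproduces the example above: the underlying permutation of $5-13-42$ is $51342$ and its type is $(1,0,1,0)$.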
Let $\sigma$ be a dashed permutation. With the same notations as before, given some $i=(i_{1},...,i_{|\sigma|})\in [|\tau|]^{|\sigma|}$, we say that $i$ is an $\emph{occurrence}$ of $\sigma$ in $\tau$ when it is a (classical) occurrence of the underlying permutation of $\sigma$ and $i_{j+1}=i_{j}+1$ whenever $j\in [|\sigma|-1]$ and $\sigma_{j}$ and $\sigma_{j+1}$ are not separated by a dash. We say that $\sigma$ \emph{occurs as a vincular pattern} in $\tau$ when one can find an occurrence of $\sigma$ in $\tau$; we say that $\tau$ \emph{avoids} $\sigma$ \emph{as a vincular pattern} otherwise. Given a dashed permutation $\sigma$, the set of all permutations avoiding $\sigma$ will be denoted by $\mathcal{S}(\sigma)$ and the set of all $\tau\in \mathcal{S}(\sigma)$ such that $|\tau|=n$ will be denoted by $\mathcal{S}_{n}(\sigma)$. \begin{comment} Let now $A$ be an infinite lower triangular $\{0,1\}-$matrix. For every $k\in \mathbb{N}$ we denote by $r_{k}$ the $k^{th}$ row of $A$. Again, with the same notations as before, given $i=(i_{1},...,i_{|\sigma|})\in [|\tau|]^{|\sigma|}$, we say that $i$ is an $A-\emph{occurrence}$ of $\sigma$ in $\tau$ when either $|\sigma|=1$ and $i$ is an occurrence of $\sigma$ in $\tau$ or $i$ is an occurrence of the $r_{|\sigma|-1}-$dashed word of $\sigma$ in $\tau$. We say that $\sigma$ $\emph{occurs}$ $\emph{as}$ $\emph{an}$ $A-\emph{vincular}$ $\emph{pattern}$ in $\tau$, and we write $\sigma\in_{A}\tau$, when one can find an $A-$occurrence of $\sigma$ in $\tau$; we say that $\tau$ $\emph{avoids}$ $\sigma$ as an $A-\emph{vincular}$ $\emph{pattern}$ otherwise. 
Note that for the infinite lower triangular $\{0,1\}-$matrices $$A_{1}=\left(\begin{matrix}1& 0& 0& 0& 0& \dots\\ 1& 1& 0& 0& 0& \dots\\ 1& 1& 1& 0& 0& \dots\\ 1& 1& 1& 1& 0& \dots\\ 1& 1& 1& 1& 1& \dots\\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots\end{matrix}\right)\ \ \ A_{0}=\left(\begin{matrix}0& 0& 0& 0& 0& \dots\\ 0& 0& 0& 0& 0& \dots\\ 0& 0& 0& 0& 0& \dots\\ 0& 0& 0& 0& 0& \dots\\ 0& 0& 0& 0& 0& \dots\\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots\end{matrix}\right)$$ one recovers the usual notions of classical pattern and consecutive pattern containment respectively. However, it is easy to see that in general $\in_{A}$ is not a transitive relation. Write $\sigma\preceq_{A}\tau$ when $\sigma\in_{A}\tau$ and $|\sigma|\in \{|\tau|-1,|\tau|\}$. Then $\preceq_{A}$ is a covering relation on $\mathcal{S}$, the transitive closure of $\preceq_{A}$ is a partial order relation on $\mathcal{S}$ denoted by $\leq_{A}$ and the related poset is called the $A-\emph{vincular}$ $\emph{permutation}$ $\emph{pattern}$ $\emph{poset}$. \end{comment} \begin{comment} It was already noticed in $\cite{BF}$ that $\in_{A}$ and $\leq_{A}$ do not coincide in general. As a consequence, in $\cite{BF}$, the authors asked for a characterization of the lower triangular $\{0,1\}-$ matrices $A$ for which $\in_{A}$ and $\leq_{A}$ coincide. Section $\ref{VPP}$ of this paper will be devoted to describe a characterization of these matrices. \end{comment} \section{Previous work on vincular pattern avoidance}\label{PW} In this section we provide a quick overview of the latest results in enumeration of permutations avoiding a vincular pattern. We refer to $\cite{Ki1}$ and $\cite{Ste}$ for a more detailed survey on this topic. The first systematic study of vincular patterns of length 3 was done by Claesson in $\cite{Cl}$ and permutations avoiding $\pi$ have been enumerated for every vincular pattern $\pi$ of length 3. 
It is worth mentioning that the fact that $\mathcal{S}(1-23)$ turns out to be counted by the Bell numbers shows that the analogue of the Stanley-Wilf conjecture does not hold for some vincular patterns. Moreover, the same fact shows that the conjecture of Noonan and Zeilberger stated in $\cite{NZ}$ is also false for vincular patterns, namely, the number of permutations avoiding a vincular pattern is not necessarily polynomially recursive. As for vincular patterns of length 4, it is stated in \cite{Ste} that there are 48 symmetry classes of vincular patterns of length 4, and computer experiments show that there are at least 24 Wilf-equivalence classes (although their exact number seems to be unknown). For vincular non-classical patterns, enumerative results are known for seven Wilf classes (out of at least 24), which are as follows: \begin{itemize} \item Elizalde and Noy $\cite{EN}$ gave the exponential generating functions for the number of occurrences of a consecutive pattern of length $4$ for three out of the seven Wilf-equivalence classes for consecutive patterns, namely the classes with representatives 1234, 1243 and 1342. \item Kitaev $\cite{Ki2}$ and Elizalde $\cite{E}$ decomposed the class $\mathcal{S}(\sigma -k)$ into a suitable boxed product, where $\sigma$ is any consecutive pattern and $k=|\sigma|+1$. This decomposition allows one to express the exponential generating function of $\mathcal{S}(\sigma -k)$ in terms of the exponential generating function of $\mathcal{S}(\sigma)$. In particular, if $\sigma$ is any consecutive pattern of length 3, this, together with the results of Elizalde and Noy $\cite{EN}$, yields an explicit formula for the exponential generating function of $\mathcal{S}(\sigma-4)$, where $\sigma$ is any consecutive pattern of length 3. 
Since there are precisely two Wilf-equivalence classes of consecutive patterns of length 3, with representatives $123$ and $132$, the result of Kitaev yields explicit formulas for the exponential generating functions of $\mathcal{S}(\pi)$ where $\pi$ is any vincular pattern Wilf-equivalent to $123-4$ or $132-4$. Explicitly, these formulas are $$\exp\left( \frac{\sqrt{3}}{2}\int_{0}^{x}\frac{e^{\frac{t}{2}}}{\cos\left(\frac{\sqrt{3}}{2}t+\frac{\pi}{6}\right)}\,dt\right) \ \ \mathrm{and}\ \ \exp\left(\int_{0}^{x}\frac{dt}{1-\int_{0}^{t}e^{-\frac{u^{2}}{2}}du}\right)$$ for a vincular pattern Wilf-equivalent to $123-4$ or $132-4$, respectively. \item Callan gave a recursion for $a_{n}=|\mathcal{S}_{n}(31-4-2)|$, which goes as follows. Set $a_{0}=c_{1}=1$ and \begin{itemize} \item[1.] $a_{n}=\sum_{i=0}^{n-1}a_{i}c_{n-i}$ for $n\geq 1$. \item[2.] $c_{n}=\sum_{i=0}^{n-1}ia_{(n-1),i}$ for $n\geq 2$. \item[3.] $a_{n,k}=\begin{cases}\sum_{i=0}^{k}a_{i}\sum_{j=k-i}^{n-1-i}a_{(n-1-i),j} & 1\leq k\leq n-1\\ a_{n-1} & k=n\end{cases}$ \end{itemize} \item Finally, Callan also showed in $\cite{C}$ that $|\mathcal{S}_{n}(1-32-4)|=\sum_{k=1}^{n}u(n,k)$ where, for every $1\leq k\leq n$, the triangle $u(n,k)$ satisfies the recurrence relation \begin{equation}\label{Callan} u(n,k)=u(n-1,k-1)+k\sum_{j=k}^{n-1}u(n-1,j) \end{equation} with initial conditions $u(0,0)=1$ and $u(n,0)=0$ for every $n\geq 1$. \end{itemize} As far as the author knows, these seem to be the only explicit enumerative results concerning vincular patterns of length 4. An alternative, and we believe more explanatory, proof of the recursive formula in Equation $(\ref{Callan})$ will be the main issue of the next sections. \section{ECO method}\label{ECO} In this section we provide a quick digression on the ECO method, which proved to be the most suitable framework for describing the construction in Section $\ref{1-32-4avoiders}$. 
The ECO method (Enumerating Combinatorial Objects method) was introduced by Barcucci, Del Lungo, Pergola and Pinzani $\cite{BDLPP}$ and it is quite a natural approach to generation and enumeration of combinatorial classes of objects according to their size. The main idea of the ECO method consists of looking for a way to grow objects from smaller to larger ones by making some local expansions, where each object should be obtained from a unique father so that this construction gives rise to a tree that allows us to recursively generate all the objects in the class. If the shape of this tree can be described with a simple rule there is hope for exact enumeration results, translating this description into equations for the generating function of the class. In the following we borrow standard terminology and notation about combinatorial classes from $\cite{FS}$. Let $\mathcal{A}$ be a combinatorial class and denote by $\mathcal{A}_{n}$ the set of all $x\in \mathcal{A}$ such that $|x|_{\mathcal{A}}=n$ for every $n\in \mathbb{N}$. Let $\vartheta:\mathcal{A}\longrightarrow 2^{\mathcal{A}}$ be a map. We say that $\vartheta$ is an $\emph{ECO}$ $\emph{operator}$ on $\mathcal{A}$ when for every $n\in \mathbb{N}$ the following conditions hold: \begin{itemize} \item[(i)] $\vartheta(\mathcal{A}_{n})\subseteq 2^{\mathcal{A}_{n+1}}$. \item[(ii)] for every $y\in \mathcal{A}_{n+1}$ one can find some $x\in \mathcal{A}_{n}$ such that $y\in \vartheta(x)$. \item[(iii)] $\vartheta(x_{1})\cap \vartheta(x_{2})=\emptyset$ for every $x_{1},x_{2}\in \mathcal{A}_{n}$ such that $x_{1}\neq x_{2}$. \end{itemize} In particular $\{\vartheta(x):\ x\in \mathcal{A}_{n}\}$ is a partition of $\mathcal{A}_{n+1}$. In other words, the map $\vartheta$ generates all the objects of the class $\mathcal{A}$ in such a way that each object in $\mathcal{A}_{n+1}$ is obtained from a unique object in $\mathcal{A}_{n}$ for every $n\in \mathbb{N}$. Actually, we can also characterize ECO operators as follows. 
We say that a map $\rho:\mathcal{A}\smallsetminus\mathcal{A}_{0}\longrightarrow\mathcal{A}$ is a $\emph{reduction}$ $\emph{operator}$ on $\mathcal{A}$ when $\rho(\mathcal{A}_{n+1})\subseteq \mathcal{A}_{n}$ for every $n\in \mathbb{N}$. The next proposition states that defining an ECO operator is essentially equivalent to defining a reduction operator. In the following, for a function $f:A\longrightarrow B$, we will denote by $f^{\longleftarrow}$ the function $f^{\longleftarrow}:B\longrightarrow 2^{A}$ assigning to each $b\in B$ its preimage $f^{\longleftarrow}(b)$ under $f$, i.e. the set of all $a\in A$ such that $f(a)=b$. \begin{proposition} Let $\mathcal{A}$ be a combinatorial class and $\vartheta:\mathcal{A}\longrightarrow 2^{\mathcal{A}}$ be a map. Then $\vartheta$ is an ECO operator on $\mathcal{A}$ if and only if $\vartheta=\rho^{\longleftarrow}$ for some reduction operator $\rho$ on $\mathcal{A}$. \end{proposition} \proof Suppose $\vartheta$ is an ECO operator and $y\in \mathcal{A}\smallsetminus \mathcal{A}_{0}$; then $y\in \mathcal{A}_{n+1}$ for some $n\in \mathbb{N}$, hence $y\in \vartheta(x)$ for a unique $x\in \mathcal{A}_{n}$, because $\vartheta$ is an ECO operator, and we set $\rho(y)=x$. Let now $\rho:\mathcal{A}\smallsetminus\mathcal{A}_{0}\longrightarrow\mathcal{A}$ denote the map assigning the object $\rho(y)$ to each $y\in \mathcal{A}\smallsetminus\mathcal{A}_{0}$. Then by definition $\rho(\mathcal{A}_{n+1})\subseteq \mathcal{A}_{n}$ for every $n\in \mathbb{N}$; furthermore, for every $x\in \mathcal{A}$, again by definition $y\in\vartheta(x)$ if and only if $x=\rho(y)$, equivalently $y\in \rho^{\longleftarrow}(x)$, hence $\vartheta(x)=\rho^{\longleftarrow}(x)$ and $\vartheta=\rho^{\longleftarrow}$. Conversely, suppose $\vartheta=\rho^{\longleftarrow}$ for some reduction operator $\rho$ on $\mathcal{A}$. 
Take $n\in \mathbb{N}$, $x\in \mathcal{A}_{n}$ and $y\in \vartheta(x)=\rho^{\longleftarrow}(x)$; then $\rho(y)=x$, hence $y\in \mathcal{A}_{n+1}$ because $\rho(\mathcal{A}_{n+1})\subseteq \mathcal{A}_{n}$. Pick now $y\in \mathcal{A}_{n+1}$ and let $x=\rho(y)\in \mathcal{A}_{n}$; then by assumption $y\in \rho^{\longleftarrow}(x)=\vartheta(x)$. Finally, suppose $x_{1},x_{2}\in \mathcal{A}_{n}$ and $y\in \vartheta(x_{1})\cap\vartheta(x_{2})=\rho^{\longleftarrow}(x_{1})\cap \rho^{\longleftarrow}(x_{2})$; then $x_{1}=\rho(y)=x_{2}$. This proves that $\vartheta$ is an ECO operator on $\mathcal{A}$. \endproof Let $\vartheta$ be an ECO operator on $\mathcal{A}$. Say that $\mathcal{A}$ is $\emph{rooted}$ when $\mathcal{A}$ contains a unique object of minimum size. Suppose $\mathcal{A}$ is rooted. In this case, one can represent $\vartheta$ by means of a tree, called the $\emph{generating}$ $\emph{tree}$ of $\vartheta$ and denoted by $\mathcal{T}_{\vartheta}$, which is a rooted tree having the objects of $\mathcal{A}$ as nodes, the object in $\mathcal{A}$ of minimum size as root and such that $\vartheta(x)$ is the set of children of $x$ for every $x\in \mathcal{A}$. This representation of $\vartheta$ can be useful for enumeration purposes when $\mathcal{T}_{\vartheta}$ displays enough regularity to be described by a so-called succession rule. Let $S$ be a set, let $a$ be an element of $S$, let $e$ be a sequence of maps in $S^{S}$ and let $p:S\longrightarrow\mathbb{N}$ be a map. The triple $(a,e,p)$ will be called the $\emph{succession}$ $\emph{rule}$ with $\emph{axiom}$ $a$, $\emph{production}$ $\emph{rule}$ $e$ and $\emph{production}$ $\emph{parameter}$ $p$ and it will be denoted by the symbol $$\begin{cases}(a)\\ (k) \leadsto (e_{1}(k))(e_{2}(k))...(e_{p(k)}(k))\end{cases}$$ Let now $\mathcal{T}$ be a rooted tree with set of nodes $T$ and root $R$, let $\Omega=(a,e,p)$ be a succession rule and let $\ell:T\longrightarrow S$ be a map. 
We say that $\ell$ is an $\Omega-\emph{labelling}$ of $\mathcal{T}$ when $\ell(R)=a$ and, if $v\in T$, then $v$ has $p(\ell(v))$ children $v_{1},...,v_{p(\ell(v))}$ and $\ell(v_{i})=e_{i}(\ell(v))$ for every $i\in [p(\ell(v))]$. We say that $\vartheta$ and $\mathcal{A}$ are $\emph{described}$ by $\Omega$ when one can find some $\Omega-$labelling of $\mathcal{T}_{\vartheta}$. Suppose now $S\subseteq \mathbb{N}$ and $\ell$ is an $\Omega-$labelling of $\mathcal{T}_{\vartheta}$. Denote by $\mathcal{A}(z,u)$ the generating function $$\mathcal{A}(z,u)=\sum_{\sigma\in \mathcal{A}}z^{|\sigma|}u^{\ell(\sigma)}=\sum_{n,k\geq 0}|\mathcal{A}_{n,k}|z^{n}u^{k}$$ where $\mathcal{A}_{n,k}$ denotes the set of all $\sigma\in \mathcal{A}_{n}$ such that $\ell(\sigma)=k$ for every $(n,k)\in \mathbb{N}^{2}$. We can translate the $\Omega-$description of $\mathcal{A}$ into a functional equation for $\mathcal{A}(z,u)$ as follows. Denote by $L_{\Omega}$ the unique $\mathbb{Z}[[z]]-$linear map $L_{\Omega}:\mathbb{Z}[[z,u]]\longrightarrow \mathbb{Z}[[z,u]]$ such that $$L_{\Omega}(u^{k})=\begin{cases} u^{a} & k=0\\ u^{e_{1}(k)}+...+u^{e_{p(k)}(k)} & k\geq 1 \end{cases}$$ Then it is easily seen that $$\mathcal{A}(z,u)=\mathcal{A}(0,u)+zL_{\Omega}(\mathcal{A}(z,u)).$$ In the luckiest cases, these kinds of equations can be solved using kernel-type methods or other standard tools. \begin{comment} An $\Omega-$labelling of $\vartheta$ easily translates in terms of generating functions. 
Suppose $S\subseteq \mathbb{N}^{*}$ and let $L_{\Omega}:\mathbb{Z}[X]\longrightarrow \mathbb{Z}[X]$ be the $\mathbb{Z}-$linear operator defined setting \begin{align*} L_{\Omega}(1)=& X^{a} \\ L_{\Omega}(X^{k})=& X^{e_{1}(k)}+...+X^{e_{p(k)}(k)} \end{align*} \end{comment} \section{Permutations avoiding the pattern $1-32-4$}\label{1-32-4avoiders} The vincular pattern $1-32-4$ is a dashed version of the classical pattern $1324$, which attracted great attention among combinatorialists, as the enumeration of permutations avoiding this classical pattern has proven to be one of the hardest open problems in permutation pattern combinatorics. Hopefully, our insight into the class of permutations avoiding the classical pattern could benefit from a closer study of permutations avoiding one of its vincular counterparts. However, these two patterns also seem to display quite a different behaviour; for instance, it is not difficult to see that $1-32-4$ is actually Wilf-equivalent to $1-23-4$ (see $\cite{E}$), whereas this does not hold for their classical counterparts. Permutations avoiding $1-32-4$ are counted by sequence A113227 in $\cite{S}$, whose first ten terms are 1, 1, 2, 6, 23, 105, 549, 3207, 20577, 143239. As mentioned in Section $\ref{PW}$, an efficient bivariate recursive formula to count permutations avoiding $1-32-4$ was first discovered by Callan in $\cite{C}$. This formula actually relies on a very intricate bijection (involving several contrived discrete structures defined ad hoc in order to break this transformation into somewhat simpler steps) between permutations of length $n$ avoiding $1-32-4$ and increasing ordered rooted trees on $n+1$ nodes with increasing leaves for every $n\geq 1$. An ordered rooted tree on $n+1$ nodes $\{0,1,2,...,n\}$ is $\emph{increasing}$ when every node is smaller than each of its children. If in addition its leaves are increasing from left to right we say that it has $\emph{increasing}$ $\emph{leaves}$. 
For instance, the figure below shows two increasing ordered rooted trees, the first has increasing leaves while the second does not. \begin{center} \begin{tikzpicture} \node at (0,0) { \scalebox{0.8}{\begin{tikzpicture} \draw [fill] (1,2) circle [radius=0.1]; \draw [fill] (2,1) circle [radius=0.1]; \draw [fill] (3,2) circle [radius=0.1]; \draw [fill] (4,0) circle [radius=0.1]; \draw [fill] (4,1) circle [radius=0.1]; \draw [fill] (5,2) circle [radius=0.1]; \draw [fill] (6,1) circle [radius=0.1]; \draw [fill] (6,2) circle [radius=0.1]; \draw [fill] (6,3) circle [radius=0.1]; \draw [fill] (7,2) circle [radius=0.1]; \node at (1,2.5) {3}; \node at (1.5,1) {2}; \node at (3,2.5) {4}; \node at (4,-0.5) {0}; \node at (4,1.5) {6}; \node at (6.5,1) {1}; \node at (5,2.5) {7}; \node at (6,3.5) {8}; \node at (7,2.5) {9}; \node at (5.7,2) {5}; \draw[thick] (4,0)--(2,1)--(1,2); \draw[thick] (4,0)--(2,1)--(3,2); \draw[thick] (4,0)--(4,1); \draw[thick] (4,0)--(6,1)--(6,2)--(6,3); \draw[thick] (6,1)--(5,2); \draw[thick] (6,1)--(7,2); \end{tikzpicture}} }; \node at (6,0) { \scalebox{0.8}{\begin{tikzpicture} \draw [fill] (1,2) circle [radius=0.1]; \draw [fill] (2,1) circle [radius=0.1]; \draw [fill] (3,2) circle [radius=0.1]; \draw [fill] (4,0) circle [radius=0.1]; \draw [fill] (4,1) circle [radius=0.1]; \draw [fill] (5,2) circle [radius=0.1]; \draw [fill] (6,1) circle [radius=0.1]; \draw [fill] (6,2) circle [radius=0.1]; \draw [fill] (6,3) circle [radius=0.1]; \draw [fill] (7,2) circle [radius=0.1]; \node at (1,2.5) {4}; \node at (1.5,1) {2}; \node at (3,2.5) {6}; \node at (4,-0.5) {0}; \node at (4,1.5) {3}; \node at (6.5,1) {1}; \node at (5,2.5) {7}; \node at (6,3.5) {9}; \node at (7,2.5) {8}; \node at (5.7,2) {5}; \draw[thick] (4,0)--(2,1)--(1,2); \draw[thick] (4,0)--(2,1)--(3,2); \draw[thick] (4,0)--(4,1); \draw[thick] (4,0)--(6,1)--(6,2)--(6,3); \draw[thick] (6,1)--(5,2); \draw[thick] (6,1)--(7,2); \end{tikzpicture}} }; \end{tikzpicture} \end{center} Let $\mathcal{I}$ denote the 
combinatorial class of such trees. If we denote by $u(n)$ the number of trees in $\mathcal{I}_{n}$ and, for every $1\leq k\leq n$, by $u(n,k)$ the number of trees in $\mathcal{I}_{n}$ such that the root has $k$ children, so that $u(n)=\sum_{k=1}^{n}u(n,k)$, then it is easily proved in $\cite{C}$ that the triangle $u(n,k)$ satisfies the recurrence relation \begin{equation}\label{(1-32-4)-recursion} u(n,k)=u(n-1,k-1)+k\sum_{j=k}^{n-1}u(n-1,j) \end{equation} when $1\leq k\leq n$, with initial conditions $u(0,0)=1$ and $u(n,0)=0$ for every $n\geq 1$. Thanks to the bijection established by Callan, this recursive formula also allows one to count permutations of length $n$ avoiding $1-32-4$. Although the recursive formula given in Equation $(\ref{(1-32-4)-recursion})$ provides an efficient way to enumerate these permutations, we believe it provides only little insight into their structure, as it is not transparent at all from the bijection constructed by Callan how to read this recursive formula directly from a description of these permutations. Actually, it is not even clear which statistic on these permutations should correspond to the number of children of the root. Some unsuccessful attempts to read the recursive formula given in Equation $(\ref{(1-32-4)-recursion})$ directly from a description of permutations avoiding $1-32-4$ have been made. As far as we know, the best result in this direction has been achieved by Duchi, Guerrini and Rinaldi, who constructed a generating tree with two labels for this class of permutations as a consequence of a certain insertion algorithm called $\mathsf{INSERTPOINT}$ (see $\cite{DGR}$). However, in the same paper, they suggest that the recursive formula given in Equation $(\ref{(1-32-4)-recursion})$ appears to be difficult to understand directly on $\mathcal{S}(1-32-4)$. 
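The recurrence in Equation $(\ref{(1-32-4)-recursion})$ is straightforward to put to work. The following Python sketch (ours; the function name is arbitrary) memoizes the triangle $u(n,k)$ and recovers the first ten terms of sequence A113227 by summing its rows:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def u(n, k):
    """Callan's triangle: u(n,k) = u(n-1,k-1) + k * sum_{j=k}^{n-1} u(n-1,j),
    with u(0,0) = 1 and u(n,0) = 0 for n >= 1."""
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0 or k > n:
        return 0
    return u(n - 1, k - 1) + k * sum(u(n - 1, j) for j in range(k, n))

# Row sums give |S_n(1-32-4)|, i.e. sequence A113227.
counts = [sum(u(n, k) for k in range(n + 1)) for n in range(10)]
print(counts)  # [1, 1, 2, 6, 23, 105, 549, 3207, 20577, 143239]
```

The row sums match the ten terms quoted above, and individual rows such as $u(4,\cdot)=(6,10,6,1)$ can be read off directly.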
Additionally, as already pointed out by Callan, sequence A113227 also happens to count quite a wide variety of combinatorial objects, among which we find increasing ordered rooted trees with increasing leaves, valley-marked Dyck paths and inversion sequences avoiding the pattern $101$ (see $\cite{C}$ and $\cite{CMSW}$). In all these cases, it is instead relatively easy to read the recurrence relation given in Equation $(\ref{(1-32-4)-recursion})$ from the same structural description of these objects and it is actually not hard to construct quite a straightforward bijection between them. In this section we construct a generating tree with a single label for permutations avoiding the pattern $1-32-4$. We believe that this construction finally breaks the annoying asymmetry between the aforementioned combinatorial objects and permutations avoiding $1-32-4$, by providing a better insight into the structure of these permutations and, compared to Callan's bijection, a clearer explanation of why the recursive formula given in Equation $(\ref{(1-32-4)-recursion})$ actually counts them. As remarkable byproducts of this construction, we also obtain an explicit algorithm to generate all permutations avoiding $1-32-4$ and we refine the enumeration of these permutations according to a simple statistic, namely the number of right-to-left maxima to the right of 1. 
\begin{figure}[t] \begin{center}\scalebox{0.5}{\begin{tikzpicture} \draw [fill] (0,0) circle [radius=0.08]; \draw [fill] (1,1) circle [radius=0.08]; \draw [fill] (2,2) circle [radius=0.08]; \draw [fill] (3,3) circle [radius=0.08]; \draw [fill] (4,4) circle [radius=0.08]; \draw [fill] (5,3) circle [radius=0.08]; \draw [fill] (6,4) circle [radius=0.08]; \draw [fill] (7,5) circle [radius=0.08]; \draw [fill] (8,4) circle [radius=0.08]; \draw [fill] (9,3) circle [radius=0.08]; \draw [fill] (10,2) circle [radius=0.08]; \draw [fill] (11,1) circle [radius=0.08]; \draw [fill] (12,2) circle [radius=0.08]; \draw [fill] (13,1) circle [radius=0.08]; \draw [fill] (14,0) circle [radius=0.08]; \draw [fill] (15,1) circle [radius=0.08]; \draw [fill] (16,0) circle [radius=0.08]; \draw [fill,blue] (5,2) circle [radius=0.2]; \draw [fill] (5,1) circle [radius=0.08]; \draw [fill] (5,0) circle [radius=0.08]; \draw [fill,blue] (11,1) circle [radius=0.2]; \draw [fill] (11,0) circle [radius=0.08]; \draw [fill,blue] (14,0) circle [radius=0.2]; \draw[thick] (0,0)--(4,4)--(5,3)--(7,5)--(11,1)--(12,2)--(14,0)--(15,1)--(16,0); \draw[dashed] (0,0)--(16,0); \end{tikzpicture}}\end{center} \caption{A valley marked Dyck path is a Dyck path in which, for each valley $DU$, one of the lattice points between the valley vertex and the $x-$axis inclusive is marked.} \end{figure} Although not explicitly stated in $\cite{C}$, the class $\mathcal{I}$ can be described by a succession rule as follows. Consider the map $e:\mathcal{I}\longrightarrow \mathbb{N}$ attaching to every $T\in \mathcal{I}$ the number $e(T)$ of the children of its root. 
One can easily construct an ECO operator $\eta$ on the class $\mathcal{I}$ such that $e$ is a $\Lambda-$labelling of $\mathcal{T}_{\eta}$ where $\Lambda$ is the succession rule $$\begin{cases} (1)\\ (k) \leadsto (1)(2)^{2}(3)^{3}...(k)^{k}(k+1)\end{cases}$$ The ECO operator $\eta$ is implicitly described by Callan in $\cite{C}$ when he proves that the class $\mathcal{I}$ is enumerated by the recursive formula in Equation $(\ref{(1-32-4)-recursion})$, but we omit further details. We will show that the class $\mathcal{S}(1-32-4)$ can be described by essentially the same rule, which proves that the combinatorial classes $\mathcal{S}(1-32-4)$ and $\mathcal{I}$ are isomorphic. The outline of the proof is as follows. First, we attach to every permutation $\pi$ a label $\ell(\pi)$ defined as the number of right-to-left maxima of $\pi$ on the right of $1$ (e.g. $\ell(84617523)=3$ as the right-to-left maxima of $84617523$ on the right of $1$ are exactly $7,5$ and $3$). Next, we define an ECO operator $\vartheta$ on the combinatorial class $\mathcal{S}(1-32-4)$ and we show that the map $\ell:\mathcal{S}(1-32-4)\longrightarrow \mathbb{N}$ attaching to every $\pi\in \mathcal{S}(1-32-4)$ the label $\ell(\pi)$ is actually an $\Omega-$labelling of $\mathcal{T}_{\vartheta}$ where $\Omega$ is the succession rule \begin{equation}\label{rule} \begin{cases} (0)\\ (k) \leadsto (0)(1)^{2}(2)^{3}...(k)^{k+1}(k+1)\end{cases} \end{equation} This is the same as the succession rule $$\begin{cases} (1)\\ (h) \leadsto (1)(2)^{2}(3)^{3}...(h)^{h}(h+1)\end{cases}$$ up to the change of label $h=k+1$. The first 3 levels of the generating tree defined by the succession rule $\Omega$ are displayed in Figure $\ref{Gen_Tree_Rule}$. Observe also that it takes little effort to deduce Equation $(\ref{(1-32-4)-recursion})$ directly from the succession rule $\Omega$, without any reference to increasing ordered rooted trees with increasing leaves.
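The growth prescribed by the succession rule $\Omega$ is easy to simulate numerically. In the short Python sketch below (ours, for illustration) each label $k$ is replaced by the multiset of labels of its children $(0)(1)^{2}(2)^{3}\cdots(k)^{k+1}(k+1)$; the level sizes reproduce the counting sequence of the class, and the distribution of labels on a level anticipates the refined enumeration by the statistic $\ell$ obtained later in this section.

```python
from collections import Counter

def children(k):
    # (k) ~> (0)(1)^2 (2)^3 ... (k)^(k+1) (k+1): label i occurs i+1 times,
    # plus a single child labeled k+1
    return [i for i in range(k + 1) for _ in range(i + 1)] + [k + 1]

level = [0]             # axiom: one node labeled (0)
sizes = [len(level)]
for _ in range(4):
    level = [c for k in level for c in children(k)]
    sizes.append(len(level))

print(sizes)            # [1, 2, 6, 23, 105]
print(Counter(level))   # at level 5: 23, 40, 31, 10, 1 nodes with labels 0,...,4
```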
As a corollary, we find that the recursive relation in Equation $(\ref{(1-32-4)-recursion})$ for the triangle $u(n,k)$ also allows us to count permutations of length $n$ avoiding $1-32-4$ with $k-1$ right-to-left maxima to the right of $1$. \begin{figure} \begin{center} \scalebox{0.7}{\begin{tikzpicture} \tikzset{level distance=60pt,sibling distance=5pt} \Tree [.0 [.0 [.0 [.0 ] [.1 ] ] [.1 [.0 ] [.1 ] [.1 ] [.2 ] ] ] [.1 [.0 [.0 ] [.1 ] ] [.1 [.0 ] [.1 ] [.1 ] [.2 ] ] [.1 [.0 ] [.1 ] [.1 ] [.2 ] ] [.2 [.0 ] [.1 ] [.1 ] [.2 ] [.2 ] [.2 ] [.3 ] ] ] ] \end{tikzpicture}} \end{center} \caption{The first 3 levels of the labeled tree defined by the succession rule $\Omega$.}\label{Gen_Tree_Rule} \end{figure} Actually, it is easier to construct our ECO operator $\vartheta$ by moving backwards, i.e. by first defining a reduction operator $\rho$ on $\mathcal{S}(1-32-4)$ and then setting $\vartheta(\pi)=\rho^{\longleftarrow}(\pi)$ for every $\pi\in \mathcal{S}(1-32-4)$. Suppose $\pi$ is a permutation of length $n\geq 1$ avoiding $1-32-4$ such that $\ell(\pi)=k$. As already noted in $\cite{E}$, any permutation of this kind can be written in the form \begin{equation}\label{structure} \pi=m_{1}\ell_{11}...\ell_{1k_{1}}m_{2}\ell_{21}...\ell_{2k_{2}} ... m_{h}\ell_{h1}...\ell_{hk_{h}} \end{equation} where $m_{1},...,m_{h}$ are the left-to-right minima of $\pi$ (and of course $m_{h}=1$) for some $h\geq 1$, while, for every $1\leq i\leq h$, the letters $\ell_{i1},...,\ell_{ik_{i}}$ (where possibly $k_{i}=0$, with obvious meaning) denote non-empty increasing sequences such that $\max(\ell_{ij})>\max(\ell_{i(j+1)})$ for every $j\in [k_{i}-1]$ when $k_{i}\geq 2$. In particular $k_{h}=k$ by definition of $\ell(\pi)$. Actually, a permutation that can be written in the form displayed in $(\ref{structure})$ avoids $1-32-4$ if and only if $\max(\ell_{(i+1)1})<\max(\ell_{i(k_{i}-1)})$ whenever $h\geq 2$, $i\in [h-1]$ and $k_{i}\geq 2$.
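The decomposition $(\ref{structure})$ is easy to compute mechanically: the $m_{i}$ are the left-to-right minima of $\pi$, and inside each segment between consecutive minima the blocks $\ell_{i1},\dots,\ell_{ik_{i}}$ are exactly the maximal increasing runs, since the decreasing maxima force a new block to start precisely at each descent. The following Python sketch (ours, illustrative) performs the decomposition and reads off the label $\ell(\pi)=k_{h}$.

```python
def decompose(pi):
    """Decompose pi as in (structure): return a list of pairs
    (m_i, [l_i1, ..., l_ik_i]), where the m_i are the left-to-right
    minima and the blocks are the maximal increasing runs that follow."""
    groups, cur_min, blocks, run = [], None, [], []
    for x in pi:
        if cur_min is None or x < cur_min:     # a new left-to-right minimum
            if cur_min is not None:
                if run:
                    blocks.append(run)
                groups.append((cur_min, blocks))
            cur_min, blocks, run = x, [], []
        elif run and x < run[-1]:              # descent: the current block ends
            blocks.append(run)
            run = [x]
        else:
            run.append(x)
    if run:
        blocks.append(run)
    groups.append((cur_min, blocks))
    return groups

pi = (8, 9, 14, 12, 5, 2, 4, 10, 11, 1, 3, 13, 6, 7)
print(decompose(pi))
# [(8, [[9, 14], [12]]), (5, []), (2, [[4, 10, 11]]), (1, [[3, 13], [6, 7]])]

def label(pi):
    """l(pi): the number k_h of blocks after the minimum 1, which equals
    the number of right-to-left maxima of pi to the right of 1."""
    return len(decompose(pi)[-1][1])

print(label(pi))                          # 2
print(label((8, 4, 6, 1, 7, 5, 2, 3)))    # 3, matching l(84617523) = 3
```

The last call agrees with the example given above: the right-to-left maxima of $84617523$ on the right of $1$ are the block maxima $7,5,3$.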
For instance, the permutation $\pi=(8,9,14,12,5,2,4,10,11,1,3,13,6,7)$ avoids $1-32-4$ and decomposes as follows $$ \begin{matrix} \pi=& ( \bf{8}, & \fbox{9,14}, & \fbox{12}, & \bf{5}, & \bf{2}, & \fbox{4,10,11}, & \bf{1}, & \fbox{3,13}, & \fbox{6,7} )\\ & m_{1} & \ell_{11} & \ell_{12} & m_{2} & m_{3} & \ell_{31} & m_{4} & \ell_{41} & \ell_{42} \end{matrix} $$ \begin{figure} \begin{center} \scalebox{0.3}{\begin{tikzpicture} \draw[step=1,black,thin] (1,1) grid (14,14); \draw [fill,red] (1,8) circle [radius=0.3]; \draw [fill] (2,9) circle [radius=0.2]; \draw [fill] (3,14) circle [radius=0.2]; \draw [fill] (4,12) circle [radius=0.2]; \draw [fill,red] (5,5) circle [radius=0.3]; \draw [fill,red] (6,2) circle [radius=0.3]; \draw [fill] (7,4) circle [radius=0.2]; \draw [fill] (8,10) circle [radius=0.2]; \draw [fill] (9,11) circle [radius=0.2]; \draw [fill,red] (10,1) circle [radius=0.3]; \draw [fill] (11,3) circle [radius=0.2]; \draw [fill] (12,13) circle [radius=0.2]; \draw [fill] (13,6) circle [radius=0.2]; \draw [fill] (14,7) circle [radius=0.2]; \end{tikzpicture}} \end{center} \caption{A plot of the permutation $\pi=(8,9,14,12,5,2,4,10,11,1,3,13,6,7)$ which avoids $1-32-4$. The left-to-right minima of $\pi$ are highlighted in red.}\label{Pattern} \end{figure} where the left-to-right minima of $\pi$ are marked in bold. Note that in this case $h=4$, while $k_{1}=2$, $k_{2}=0$, $k_{3}=1$ and $k_{4}=k=2$. There is quite a natural way to reduce $\pi$ to another permutation $\rho(\pi)$ of length $n-1$ avoiding the pattern $1-32-4$, namely by deleting $1$, performing some further almost forced operations to restore the avoidance of the pattern $1-32-4$ and taking the standard reduction of the sequence obtained in this way (i.e. subtracting $1$ from each item of the sequence). Indeed, we can distinguish two cases: \begin{itemize} \item[(i)] Suppose $2$ occurs to the left of $1$ in $\pi$, i.e. $h\geq 2$ and $m_{h-1}=2$.
In this case we say that $\pi$ has \emph{type} $(2,1)$ and we can construct $\rho(\pi)$ as follows. We delete $1$ from $\pi$ and restore the structure displayed in $(\ref{structure})$ by sorting the list $\ell_{(h-1)1},...,\ell_{(h-1)k_{h-1}},$ $\ell_{h1},...,\ell_{hk_{h}}$ of increasing sequences to the right of $2$ in such a way that their maximum elements are in decreasing order from left to right. Finally, we define $\rho(\pi)$ as the standard reduction of the integer sequence obtained in this way. It is clear by construction that $\rho(\pi)$ avoids $1-32-4$. \item[(ii)] Suppose $2$ occurs to the right of $1$ in $\pi$, i.e. either $h=1$ or $h\geq 2$ and $m_{h-1}\geq 3$, so that $\ell_{hi}=2\ell'_{hi}$ for some $i\in [k]$ and a possibly empty increasing sequence $\ell'_{hi}$. In this case we say that $\pi$ has \emph{type} $(1,2)$ and we can construct $\rho(\pi)$ as follows. We delete $1$ and restore the structure displayed in $(\ref{structure})$ by moving $2$ to the position previously occupied by $1$ (in other words we swap $1$ and $2$ and then delete $1$), thus obtaining the integer sequence $m_{1}\ell_{11}...\ell_{1k_{1}}m_{2}\ell_{21}...\ell_{2k_{2}} ... 2\ell_{h1}...\ell'_{hi}...\ell_{hk_{h}}$. Finally, we define $\rho(\pi)$ as the standard reduction of this sequence. Again, it is clear by construction that $\rho(\pi)$ avoids $1-32-4$. \end{itemize} \begin{example} Let us illustrate the previous construction with two examples. \begin{itemize} \item[(i)] Take the permutation $\pi=(8,9,14,12,5,2,4,10,11,1,3,13,6,7)$ and let us compute its reduction $\rho(\pi)$. Note that $\pi$ has type $(2,1)$, therefore we first delete $1$ to obtain the sequence $(8,9,14,12,5,2,4,10,11,3,13,6,7)$, then we sort the increasing sequences $(4,10,11)$, $(3,13)$ and $(6,7)$ on the right of $2$ in such a way that their maximum elements $11,13$ and $7$ are in decreasing order.
Therefore the correct order is given by $(3,13)(4,10,11)(6,7)$, which yields the sequence $(8,9,14,12,5,2,3,13,4,10,11,6,7)$. Taking the standard reduction of this sequence returns the permutation $\rho(\pi)=(7,8,13,11,4,1,2,12,3,9,10,5,6)$. \item[(ii)] Take the permutation $\pi=(8,9,14,12,5,3,4,10,11,1,6,13,2,7)$ and let us compute its reduction $\rho(\pi)$. Note that $\pi$ has type $(1,2)$, hence we first delete $1$ to obtain the sequence $(8,9,14,12,5,3,4,10,11,6,13,2,7)$, then we move $2$ to the position previously occupied by $1$, so as to obtain the sequence $(8,9,14,12,5,3,4,10,11,2,6,13,7)$. Taking the standard reduction of this sequence returns the permutation $\rho(\pi)=(7,8,13,11,4,2,3,9,10,1,5,12,6)$. \end{itemize} \end{example} This construction induces a reduction operator $\rho$ on $\mathcal{S}(1-32-4)$ and thus, as mentioned before, an ECO operator $\vartheta$ on $\mathcal{S}(1-32-4)$. Now we want to show that $\vartheta$ can be described by the succession rule $\Omega$ given by ($\ref{rule}$). For this purpose, we will explicitly describe all the elements of $\vartheta(\pi)$ and compute their labels by reversing the previous construction. More specifically, we will expand $\pi$ by appending $0$ at the end of $\pi$, then moving some of the increasing sequences $\ell_{h1},...,\ell_{hk_{h}}$ to the right of $0$ and finally normalizing the sequence thus obtained (i.e. adding $1$ to each item of the sequence). In fact, the range of possibilities to perform this operation is quite constrained because we have to preserve the avoidance of the pattern $1-32-4$. First append a $0$ at the end of $\pi$. \begin{itemize} \item[(i)] Of course, no occurrence of $1-32-4$ will appear if we move the whole sequence $\ell_{h1}...\ell_{hk_{h}}$ to the right of $0$. In this way we get the sequence $m_{1}\ell_{11}...\ell_{1k_{1}}...m_{h}0\ell_{h1}...\ell_{hk_{h}}$, whose normalization is a permutation which we denote by $\pi^{(k)}$. Note that $\ell(\pi^{(k)})=k$.
\item[(ii)] Suppose instead that $k\geq 1$ and we want to move only $i\in \{0,...,k-1\}$ increasing sequences among $\ell_{h1},...,\ell_{hk_{h}}$ to the right of $0$. Then it is easy to see that there is a unique way to perform this operation so as to prevent an occurrence of $1-32-4$ from appearing in the resulting expansion of $\pi$, namely the following. Choose some $j\in [i+1]$ and move the suffix $\ell_{h(k_{h}-i)},...,\ell_{hk_{h}}$ of length $i+1$ of the list $\ell_{h1},...,\ell_{hk_{h}}$, except for its $j^{th}$ increasing sequence $\ell_{h(k_{h}-i+j-1)}$, to the right of $0$. In other words, move the sequence $\ell_{h(k_{h}-i)}...\hat{\ell}_{h(k_{h}-i+j-1)}...\ell_{hk_{h}}$ (where the hat over an item means that it must be omitted) to the right of $0$, thus obtaining the sequence $$m_{1}\ell_{11}...\ell_{1k_{1}}...m_{h}\ell_{h1}...\ell_{h(k_{h}-i-1)}\ell_{h(k_{h}-i+j-1)}0\ell_{h(k_{h}-i)}...\hat{\ell}_{h(k_{h}-i+j-1)}...\ell_{hk_{h}}.$$ Finally normalize this sequence to a permutation, which we denote by $\pi^{(i,j)}$. Note that $\ell(\pi^{(i,j)})=i$. Hence, this operation produces $i+1$ children with label $i$, for every $0\leq i\leq k-1$, from a node with label $k$. \end{itemize} Note that all permutations defined in $(i)$ and $(ii)$ will have type $(2,1)$, therefore these permutations cannot exhaust the whole class $\mathcal{S}(1-32-4)$ and we need to construct other expansions of $\pi$ to include also permutations of type $(1,2)$. To this purpose, we also move $m_{h}=1$ to the right of $0$ and perform some further transformations. \begin{itemize} \item[(iii)] Of course, no occurrence of $1-32-4$ will appear if we move the whole sequence $1\ell_{h1}...\ell_{hk_{h}}$ to the right of $0$. In this way, we get the sequence $m_{1}\ell_{11}...\ell_{1k_{1}}...01\ell_{h1}...\ell_{hk_{h}}$, whose normalization is a permutation which we denote by $\pi^{[1]}$. More generally, it is clear that no occurrence of $1-32-4$ will appear if we perform the following operation.
Choose some $i\in [k]$ and move $1$ immediately to the left of $\ell_{hi}$, then move the sequence $\ell_{h1}...1\ell_{hi}...\ell_{hk_{h}}$ to the right of $0$. In this way we obtain the sequence $m_{1}\ell_{11}...\ell_{1k_{1}}...0\ell_{h1}...1\ell_{hi}...\ell_{hk_{h}}$, whose normalization is a permutation, which we denote by $\pi^{[i]}$. Note that $\ell(\pi^{[i]})=k$. Hence, operations $(i)$ and $(iii)$ together produce $k+1$ children with label $k$ from a node with label $k$. \item[(iv)] Finally, we have a last possibility to transform $\pi$ while preventing an occurrence of $1-32-4$ from appearing. Move $1$ back to the right of $\ell_{hk_{h}}$, then move the sequence $\ell_{h1}...\ell_{hk_{h}}1$ to the right of $0$. In this way we obtain the sequence $m_{1}\ell_{11}...\ell_{1k_{1}}...0\ell_{h1}...\ell_{hk_{h}}1$ and normalize it to a permutation, which we denote by $\pi^{[k+1]}$. Note that in this case, unlike in the previous one, we have $\ell(\pi^{[k+1]})=k+1$. Hence, this operation produces a unique child with label $k+1$ from a node with label $k$. Note that this last possibility could actually be regarded as a special case of $(iii)$ if we let $\pi$ terminate with an additional empty increasing sequence $\ell_{h(k_{h}+1)}$. \end{itemize} Note that all permutations defined in $(iii)$ and $(iv)$ have type $(1,2)$. \begin{example} Let us take $\pi=({\bf{5}},9,14,10,12,{\bf{1}},2,7,13,6,11,3,8,4)$ as a working example to illustrate some of the previous constructions. Note that in this case $\pi$ has the form $m_{1}\ell_{11}\ell_{12}m_{2}\ell_{21}\ell_{22}\ell_{23}\ell_{24}$ where $m_{1}=5$, $\ell_{11}=(9,14)$ and $\ell_{12}=(10,12)$, while $m_{2}=1$, $\ell_{21}=(2,7,13)$, $\ell_{22}=(6,11)$, $\ell_{23}=(3,8)$ and $\ell_{24}=(4)$, in particular $\ell(\pi)=4$. First we insert a $0$ at the end of $\pi$ to obtain the sequence $(5,9,14,10,12,1,2,7,13,6,11,3,8,4,0)$. \begin{itemize} \item[(i)] We start by constructing $\pi^{(4)}$.
We move the sequence $(2,7,13,6,11,3,8,4)$ to the right of $0$, thus obtaining the sequence $(5,9,14,10,12,1,0,2,7,13,6,11,3,8,4)$, whose normalization is given by the permutation $\pi^{(4)}=(6,10,15,11,13,2,1,3,8,14,7,12,4,9,5)$. \item[(ii)] Now let us construct the permutation $\pi^{(2,2)}$. We move the suffix $(6,11)(3,8)(4)$ of the list $(2,7,13)(6,11)(3,8)(4)$, except for its $2^{nd}$ element $(3,8)$, to the right of $0$, thus obtaining the sequence $(5,9,14,10,12,1,2,7,13,3,8,0,6,11,4)$, whose normalization is given by the permutation $\pi^{(2,2)}=(6,10,15,11,13,2,3,8,14,4,9,1,7,12,5)$. \item[(iii)] Let us now construct the permutation $\pi^{[3]}$. We move the sequence $(1,2,7,13,6,11,3,8,4)$ to the right of $0$ and move $1$ immediately to the left of $(3,8)$, thus obtaining the sequence $(5,9,14,10,12,0,2,7,13,6,11,1,3,8,4)$, whose normalization is given by the permutation $\pi^{[3]}=(6,10,15,11,13,1,3,8,14,7,12,2,4,9,5)$. \item[(iv)] Finally we construct the permutation $\pi^{[5]}$. We move the sequence $(1,2,7,13,6,11,3,8,4)$ to the right of $0$ and move $1$ immediately to the right of $(4)$, thus obtaining the sequence $(5,9,14,10,12,0,2,7,13,6,11,3,8,4,1)$, whose normalization is given by the permutation $\pi^{[5]}=(6,10,15,11,13,1,3,8,14,7,12,4,9,5,2)$.
\end{itemize} \end{example} \begin{figure} \begin{center} \scalebox{0.5}{\begin{tikzpicture}[grow=right] \tikzset{level distance=150pt,sibling distance=0.1pt} \Tree [.1(0) [.21(0) [.321(0) [.4321(0) ] [.4312(1) ] ] [.312(1) [.4231(0) ] [.4213(1) ] [.4123(1) ] [.4132(2) ]] ] [.12(0) [.231(0) [.3421(0) ] [.3412(1) ] ] [.213(1) [.3241(0) ] [.3214(1) ] [.3124(1) ] [.3142(2) ] ] [.123(1) [.2341(0) ] [.2134(1) ] [.1234(1) ] [.1342(2) ] ] [.132(2) [.2431(0) ] [.2413(1) ] [.2314(1) ] [.2143(2) ] [.1243(2) ] [.1423(2) ] [.1432(3) ] ] ] ] \end{tikzpicture}} \end{center} \caption{The first 3 levels of the generating tree for permutations avoiding the pattern $1-32-4$, where each node $\pi\in \mathcal{S}(1-32-4)$ is labeled by $(\ell(\pi))$.}\label{Gen_Tree} \end{figure} Now we are in a position to state and prove the main result. \begin{theorem} Suppose $\pi\in \mathcal{S}(1-32-4)$ and $k=\ell(\pi)$. \begin{itemize} \item[(i)] If $k=0$, then $\vartheta(\pi)=\{\pi^{(0)},\pi^{[1]}\}$. \item[(ii)] If $k\geq 1$, then $\vartheta(\pi)=\{\pi^{(k)},\pi^{(i,j)},\pi^{[p]}:\ 0\leq i\leq k-1, 1\leq j\leq i+1,1\leq p\leq k+1\}$. \item[(iii)] The map $\ell$ is an $\Omega-$labelling of $\mathcal{T}_{\vartheta}$. \end{itemize} \end{theorem} \proof It is mere routine to check that $\rho(\pi^{(0)})=\rho(\pi^{[1]})=\pi$ when $k=0$ and that $\rho(\pi^{(k)})=\rho(\pi^{(i,j)})=\rho(\pi^{[p]})=\pi$ when $k\geq 1$, $0\leq i\leq k-1, 1\leq j\leq i+1 $ and $1\leq p\leq k+1$. Conversely, assume $\sigma\in \vartheta(\pi)$ and write $\sigma$ in the form $m_{1}\ell_{11}...\ell_{1k_{1}}...m_{h}\ell_{h1}...\ell_{hk_{h}}$ as in Equation $(\ref{structure})$. Suppose first that $\sigma$ has type $(2,1)$. If $2$ and $1$ occur consecutively in $\sigma$, then it is immediate to see that $\sigma=\pi^{(k)}$.
Otherwise, it is also fairly easy to check that $\sigma=\pi^{(i,j)}$ where $i=\ell(\sigma)$, while $j=1$ in case $\max(\ell_{h1})<\max(\ell_{(h-1)k_{h-1}})$ and $j=\max\{j\in \{2,3,...,i+1\}:\ \max(\ell_{h(j-1)})>\max(\ell_{(h-1)k_{h-1}})\}$ otherwise. Suppose now $\sigma$ has type $(1,2)$. Then it is also easy to check that $\sigma=\pi^{[p]}$ where $p$ is the unique $p\in [k_{h}]$ such that $2$ occurs in the increasing sequence $\ell_{hp}$. This proves $(i)$ and $(ii)$. Finally, $(iii)$ holds because we know that $\ell(\pi^{(0)})=0$ and $\ell(\pi^{[1]})=1$ when $k=0$, while $\ell(\pi^{(i,j)})=i$, $\ell(\pi^{(k)})=\ell(\pi^{[p]})=k$ and $\ell(\pi^{[k+1]})=k+1$ when $k\geq 1$, $0\leq i\leq k-1, 1\leq j\leq i+1 $ and $1\leq p\leq k$. \endproof We observe once again that the construction above actually provides an algorithm to generate all permutations avoiding $1-32-4$. The permutations generated this way up to length $4$ are displayed in Figure $\ref{Gen_Tree}$. We end this section with the following corollary summarizing the further byproduct of the previous construction, which is the refined enumeration of permutations avoiding $1-32-4$ according to the number of right-to-left maxima to the right of 1. \begin{corollary} For every $0\leq k\leq n-1$ denote by $v(n,k)$ the number of permutations avoiding $1-32-4$ with length $n$ and $k$ right-to-left maxima to the right of $1$. Then the triangle $v(n,k)$ satisfies the recurrence relation $$v(n,k)=v(n-1,k-1)+(k+1)\sum_{j=k}^{n-2}v(n-1,j)$$ where we agree that $v(0,-1)=1$ and $v(n,-1)=0$ for every $n\geq 1$. 
\end{corollary} \begin{table} \begin{center} \begin{tabular}{|c|cccccccc|} \hline $n\backslash k$ & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ \hline 1 & 1 & & & & & & & \\ 2 & 1 & 1 & & & & & & \\ 3 & 2 & 3 & 1 & & & & & \\ 4 & 6 & 10 & 6 & 1 & & & & \\ 5 & 23 & 40 & 31 & 10 & 1 & & & \\ 6 & 105 & 187 & 166 & 75 & 15 & 1 & & \\ 7 & 549 & 993 & 958 & 530 & 155 & 21 & 1 & \\ 8 & 3207 & 5865 & 5988 & 3786 & 1415 & 287 & 28 & 1\\ \hline \end{tabular} \end{center} \caption{The triangle $v(n,k)$ counting permutations avoiding $1-32-4$ with length $n$ and $k$ right-to-left maxima to the right of $1$, for $1\leq n\leq 8$ and $0\leq k\leq n-1$.} \label{m1} \end{table} \section{Conclusion and further work} In this paper we finally succeeded in describing a generating tree with a single label for permutations avoiding $1-32-4$. However, as pointed out in Section $\ref{1-32-4avoiders}$, the sequence counting these permutations also happens to count other combinatorial structures, such as increasing ordered rooted trees with increasing leaves or valley marked Dyck paths, and describing a clear bijection between these structures would be of great combinatorial significance. Although a bijection was already established by Callan in $\cite{C}$ through a sequence of non-trivial steps and despite the remarkable advances in $\cite{DGR}$ to make more explicit the connection between these objects, we believe it is worth looking for a bijection admitting a reasonably simpler description. Using the generating tree described in this paper, it is likely that one can recursively construct some other bijection, preserving the respective labels, and we hope this could eventually lead to a significant advance in this direction. As for a more general issue, we observe that permutations avoiding the classical pattern $1324$ form a subclass of permutations avoiding the vincular pattern $1-32-4$.
Hence, it might be interesting to investigate in which cases the ECO operator $\vartheta$ fails to expand a permutation avoiding the classical pattern $1324$ to another permutation avoiding the same pattern, causing an occurrence of $1324$ to appear. Furthermore, we believe it is worth investigating whether the construction provided in this paper to generate all permutations avoiding $1-32-4$ can be adapted to generate all permutations avoiding the vincular pattern $1-(\pi+1)-(|\pi|+2)$, for some particular consecutive patterns $\pi$ other than $12$ or $21$ (where $\pi+1$ denotes the permutation obtained from $\pi$ by adding $1$ to each item of $\pi$). In this case there would be hope to find a recurrence relation to count these permutations. In general, the fast recurrence for permutations avoiding $1-32-4$ provided in $\cite{C}$ suggests looking for nonobvious recurrences counting other similar patterns. For instance, note that a kind of structural description for permutations avoiding $12-34$ is given in $\cite{E}$, hence there may be hope to find a nice generating tree and deduce some reasonable recurrence relation, just like we did in the case of permutations avoiding $1-32-4$. Finally, still concerning enumeration of permutations avoiding $1-32-4$, although the recursive formula found by Callan provides quite an efficient and elegant way to count them, a closed formula would clearly be a more satisfactory answer. In this regard, the generating function $u(z)$ of permutations avoiding $1-32-4$ can be recursively described as a continued fraction $u(z)=1-z(U(0)-z)$ where $U(n)=1-z^{n}-z/U(n+1)$ for every $n\in\mathbb{N}$ (see $\cite{S}$). However, this description can hardly be considered a closed formula.
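In the absence of a closed formula, the recurrence of the corollary in the previous section remains an efficient way to compute these numbers. As a sanity check, the Python sketch below (ours, illustrative) iterates it and reproduces the bottom row of the table above, whose sum is the number of permutations of length $8$ avoiding $1-32-4$.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def v(n, k):
    """v(n, k): permutations of length n avoiding 1-32-4 with k
    right-to-left maxima to the right of 1, computed via
    v(n,k) = v(n-1,k-1) + (k+1) * sum_{j=k}^{n-2} v(n-1,j),
    with v(0,-1) = 1 and v(n,-1) = 0 for n >= 1."""
    if k == -1:
        return 1 if n == 0 else 0
    if n == 0 or k > n - 1:
        return 0
    return v(n - 1, k - 1) + (k + 1) * sum(v(n - 1, j) for j in range(k, n - 1))

row8 = [v(8, k) for k in range(8)]
print(row8)        # [3207, 5865, 5988, 3786, 1415, 287, 28, 1]
print(sum(row8))   # 20577 permutations of length 8 avoiding 1-32-4
```

The row sums of the triangle recover the counting sequence $1,2,6,23,105,549,3207,\dots$ of $\mathcal{S}(1-32-4)$.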
The succession rule describing permutations avoiding $1-32-4$ can also be translated into a functional equation for the generating function $u(z,t)$ of these permutations (where $z$ keeps track of the length and $t$ keeps track of the label), as explained at the very end of Section $\ref{ECO}$. This equation is actually a linear PDE of the form \begin{equation}\label{PDE} (1-t)zt^{2}\frac{\partial u}{\partial t}(z,t)+((1-t)^{2}(1-zt)+zt)u(z,t)=zt(1-t)^{2}+ztu(z,1). \end{equation} However, we do not know whether Equation $(\ref{PDE})$ provides enough information to find some closed form expression for $u(z,t)$. Further research in this direction could improve our understanding of sequence A113227. \paragraph*{Acknowledgement} The author wishes to thank Luca Ferrari for kindly reviewing some technical details related to the construction illustrated in this paper and for his valuable suggestions to improve the organization of its content.
\section{Introduction} This paper is a sequel to our recent study \cite{HIMN} on the $K$-theoretic degeneracy loci formulas for vector bundles. In \cite{HIMN}, we proved a determinant formula for Grassmann bundles, and a Pfaffian formula for isotropic Grassmann bundles associated to a symplectic vector bundle. Both formulas live in the $K$-ring of algebraic vector bundles. In this paper, we deal with a vector bundle of odd rank equipped with a non-degenerate symmetric bilinear form. Our study is modeled on Kazarian's work \cite{Kazarian} for the Lagrangian and orthogonal degeneracy loci in cohomology. In \cite{HIMN} we succeeded in extending Kazarian's approach to $K$-theory for both ordinary and symplectic vector bundles. The main technical issue in the orthogonal case is that one has to deal with schemes that are not reduced; in particular, it is necessary to find a way to recover the fundamental class of the associated reduced scheme. In cohomology, this is achieved by dividing by $2$, so the resulting cohomology classes of Lagrangian and maximal orthogonal degeneracy loci differ only by a multiple of a power of $2$. This reflects the difference between the root systems of type $B$ and $C$. We reduce this issue related to the multiplicity to the case of the quadric bundle and deal with it in the appendix. We can also see this subtlety in the fact that the structure constants for the natural basis formed by the structure sheaves of the Schubert varieties are quite different for the Lagrangian and the maximal orthogonal Grassmannians. Such structure constants are known explicitly for the maximal orthogonal Grassmannian by the work of Pechenik and Yong \cite{YongPechenik}, while for the Lagrangian case no conjecture has been proposed. In the Lagrangian case, the special classes given by the degeneracy loci with only one Schubert condition coincide with the Segre classes of tautological vector bundles.
The general degeneracy loci classes are given by a Pfaffian with entries being a quadratic expression in terms of those Segre classes. On the other hand, in the orthogonal case, the special class is given as a series in terms of the Segre classes of the tautological bundles (see Lemma \ref{special}). We introduce classes ${\mathscr P}_{m}^{(\ell)}$ that can be considered as a deformation of the special ones (see Definition \ref{defPclass}). Our main theorem (Theorem \ref{mainthm}) describes an arbitrary degeneracy loci class as a Pfaffian with entries being a quadratic expression in terms of those classes ${\mathscr P}_m^{(\ell)}$. If we specialize it to the $K$-theory of the orthogonal Grassmannian, it is a Pfaffian formula of Schubert classes given in terms of honest special classes. This is in the spirit of Giambelli \cite{Giambelli}. Our formula for the orthogonal degeneracy loci does not change its form when we consider an orthogonal Grassmann bundle for a higher rank orthogonal vector bundle. Thus we can think of the degeneracy loci class in an infinite rank setting. Such a universal class should be described by the ${\textit{GP}}$-functions defined by the second and the fourth authors in \cite{IkedaNaruse}. The result of this paper implies that those ${\textit{GP}}$-functions should be expressed as a Pfaffian. Further combinatorial implications of our result will be discussed elsewhere. We expect that our formula can be generalized to the degeneracy loci classes associated to the \emph{vexillary signed permutations} due to Anderson-Fulton. In fact, in \cite{AndersonFulton2} they obtained a Pfaffian formula in cohomology, which we plan to achieve at the level of $K$-theory, using our method. \section{Basics on connective $K$-theory} Connective $K$-theory, denoted by ${\textit{CK}}^*$, is an example of an oriented cohomology theory built out of the algebraic cobordism introduced by Levine and Morel \cite{LevineMorel}.
It is a contravariant functor, together with pushforwards for projective morphisms, satisfying some axioms. We refer to \cite{DaiLevine, Hudson, LevineMorel} for the detailed construction. In this section, we recall some preliminary facts on ${\textit{CK}}^*$, especially regarding Chern classes. Let $X$ be a smooth quasiprojective variety over the complex numbers ${\mathbb C}$. The connective $K$-theory of $X$ interpolates between the Grothendieck ring $K(X)$ of algebraic vector bundles on $X$ and the Chow ring ${\textit{CH}}^*(X)$ of $X$. Connective $K$-theory assigns to $X$ a commutative graded algebra ${\textit{CK}}^*(X)$ over the coefficient ring ${\textit{CK}}^*(\operatorname{pt})$, which is isomorphic to the polynomial ring ${\mathbb Z}[\beta]$ by setting $\beta$ to be the class of degree $-1$ obtained by pushing forward the fundamental class along the structural morphism ${\mathbb P}^1 \to \operatorname{pt}$. The ${\mathbb Z}[\beta]$-algebra ${\textit{CK}}^*(X)$ specializes to the Chow ring ${\textit{CH}}^*(X)$ and the Grothendieck ring $K(X)$ by setting $\beta$ equal to $0$ and $-1$, respectively. For any closed equidimensional subvariety $Y$ of $X$, there exists an associated fundamental class $[Y]_{{\textit{CK}}^*}$ in ${\textit{CK}}^*(X)$. In particular, $[Y]_{{\textit{CK}}^*}$ specializes to the class $[Y]$ in ${\textit{CH}}^*(X)$ and also to the class of the structure sheaf $\mathcal{O}_Y$ of $Y$ in $K(X)$. In the rest of the paper, we denote the fundamental class of $Y$ in ${\textit{CK}}^*(X)$ by $[Y]$ instead of $[Y]_{{\textit{CK}}^*}$. As an oriented cohomology theory, connective $K$-theory admits a theory of Chern classes. For line bundles $L_1$ and $L_2$ over $X$, one has first Chern classes $c_1(L_i)\in {\textit{CK}}^1(X)$ which satisfy \begin{equation}\label{L tensor L} c_1(L_1\otimes L_2)=c_1(L_1)+c_1(L_2)+\beta c_1(L_1)c_1(L_2). \end{equation} This fundamental law characterizes the theory.
Note that the operation \[ (u,v)\mapsto u\oplus v:= u+v+\beta uv \] is an example of commutative one-dimensional formal group law, which is an essential feature of oriented cohomology theories. We should stress here that the sign convention of $\beta$ is opposite from the one used in \cite{DaiLevine, Hudson, LevineMorel}. On the other hand, it follows from (\ref{L tensor L}) that $c_1(L^{\vee}) = -c_1(L)/(1+\beta c_1(L))$. Therefore it is convenient to introduce the notation for the \emph{formal inverse}: \[ \bar u := \frac{-u}{1+\beta u}. \] The main ingredient in our computation is the $K$-theoretic Segre class of vector bundles. Let $E$ be a vector bundle over $X$ of rank $e$. For convenience, we use the following notation to denote the Chern polynomial: \[ c(E;u):=\sum_{i=0}^{e} c_i(E) u^i. \] Let $F$ be another vector bundle over $X$. In \cite{HIMN}, we defined the relative Segre class ${\mathscr S}_m(E-F)$ for each $m\in {\mathbb Z}$ by using the following generating function: \begin{equation}\label{segre vir} {\mathscr S}(E-F;u):=\sum_{m\in {\mathbb Z}} {\mathscr S}_{m}(E-F) u^{m}= \frac{1}{1 + \beta u^{-1}} \frac{c(E - F;\beta)}{c(E-F;-u)}, \end{equation} where $c(E-F;u):=c(E;u)/c(F;u)$ defines the usual relative Chern classes. It was shown in \cite{HIMN} that ${\mathscr S}_m(E-F)$ can be also obtained as the pushforward of the product of certain Chern classes as follows. \begin{lem}\label{lemtensor} Let $\pi: {\mathbb P}^*(E)\to X$ be the dual projective bundle of $E$ and ${\mathcal Q}$ its tautological quotient line bundle. For each integer $s\geq 0$, we have \begin{equation}\label{push of tensor} \pi_*\left(c_1({\mathcal Q})^sc_{f}({\mathcal Q} \otimes F^{\vee})\right) = {\mathscr S}_{s+f-{e}+1}(E-F), \end{equation} where $f$ is the rank of $F$. \end{lem} \section{Maximal Orthogonal Grassmannians of type $B$} In this section, we first define the degeneracy loci in the odd orthogonal Grassmann bundle. 
In order to compute its associated class, we construct a resolution of singularities. With the help of Lemma \ref{cor1} on the quadric bundle (proved in the appendix), we express the degeneracy loci class as a pushforward of a product of top Chern classes. Using the calculus of formal Laurent series developed in \cite{HIMN}, we finally obtain the Pfaffian formula (Theorem \ref{mainthm}). \subsection{Degeneracy loci}\label{secKL} Let $X$ be a smooth quasiprojective variety. Consider the vector bundle $E$ of rank $2n+1$ over $X$ with a symmetric non-degenerate bilinear form ${\langle} \ ,\ {\rangle}: E \otimes E \to {\mathcal O}$ where ${\mathcal O}$ is the trivial line bundle. Let $\xi: {\operatorname{OG}}(E) \to X$ be the Grassmann bundle parametrizing rank $n$ isotropic subbundles of $E$, equipped with the tautological bundle $U$. A point of ${\operatorname{OG}}(E)$ is a pair $(x,U_x)$ of a point $x\in X$ and an isotropic $n$-dimensional subspace of the fiber $E_x$ of $E$ at $x$. Fix a flag of isotropic subbundles of $E$ \[ F^n \subset \cdots \subset F^2 \subset F^1, \] where ${\operatorname{rk}}\ F^i = n-i+1$. Let $F^{-i+1}:=(F^{i})^{\perp}$. Note that the bilinear form ${\langle}\ ,\ {\rangle}$ on $E$ induces an isomorphism $F^{\perp}/F \otimes F^{\perp}/F \cong {\mathcal O}$ for any maximal isotropic subbundle $F$ of $E$. This implies that $c_1(F^{\perp}/F)=0$ in ${\textit{CK}}^*(X)\otimes_{{\mathbb Z}}{\mathbb Z}[1/2]$. A strict partition with at most $n$ parts is a sequence $\lambda=(\lambda_1,\dots, \lambda_n)$ of non-negative integers such that $\lambda_i>0$ implies $\lambda_i>\lambda_{i+1}$ for all $i=1,\dots,n-1$. Let $\calS\calP(n)$ be the set of strict partitions $\lambda$ such that $\lambda_1\leq n$. The length of $\lambda$ is the number of nonzero parts.
For each partition $\lambda \in \calS\calP(n)$ of length $r$, the corresponding degeneracy locus $X_{\lambda}$ in ${\operatorname{OG}}(E)$ is defined by \[ X_{\lambda} = \{ (x,U_x) \in {\operatorname{OG}}(E)\ |\ \dim(U_x \cap F^{\lambda_i}_x)\geq i, i=1,\dots,r\}. \] \subsection{Resolution of singularities} Let $\pi: \operatorname{Fl}(F_{\bullet}^{\lambda}) \to {\operatorname{OG}}(E)$ be the flag bundle associated to $F_{\bullet}^{\lambda}: F^{\lambda_1} \subset \cdots \subset F^{\lambda_r}$ with the tautological flag $D_1\subset \cdots \subset D_r$ with ${\operatorname{rk}}\ D_i=i$. That is, the fiber of $\pi$ over each point $p:=(x,U_x) \in {\operatorname{OG}}(E)$ consists of the partial flags $(D_1)_p\subset \cdots \subset (D_r)_p$ in $E_x$ such that $\dim (D_i)_p=i$ and $(D_i)_p \subset F^{\lambda_i}_x$. We can construct the associated flag bundle $\pi: \operatorname{Fl}(F_{\bullet}^{\lambda}) \to {\operatorname{OG}}(E)$ as a tower of projective bundles \begin{eqnarray} &&\operatorname{Fl}(F_{\bullet}^{\lambda})={\mathbb P}(F^{\lambda_r}/D_{r-1}) \stackrel{\pi_r}{\longrightarrow} {\mathbb P}(F^{\lambda_{r-1}}/D_{r-2}) \stackrel{\pi_{r-1}}{\longrightarrow} \cdots \ \ \ \ \ \ \ \ \ \nonumber\\\label{P tower} &&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \cdots\stackrel{\pi_3}{\longrightarrow} {\mathbb P}(F^{\lambda_2}/D_1) \stackrel{\pi_2}{\longrightarrow} {\mathbb P}(F^{\lambda_1}) \stackrel{\pi_1}{\longrightarrow} {\operatorname{OG}}(E). \end{eqnarray} We regard $D_i/D_{i-1}$ as the tautological line bundle of ${\mathbb P}(F^{\lambda_i}/D_{i-1})$ and denote by $\tau_i$ the first Chern class $c_1((D_i/D_{i-1})^{\vee})$ in ${\textit{CK}}^*({\mathbb P}(F^{\lambda_i}/D_{i-1}))$. Define the sequence of subvarieties $Y_r\subset \cdots \subset Y_1 \subset \operatorname{Fl}(F_{\bullet}^{\lambda})$ by \[ Y_i = \{ (p, (D_{\bullet})_p) \in \operatorname{Fl}(F_{\bullet}^{\lambda}) \ |\ p=(x,U_x), \ (D_i)_p \subset U_x\}.
\] \begin{lem} $Y_i$ is smooth and $Y_r$ is birational to $X_{\lambda}$ through $\pi$. Furthermore, $\pi_*[Y_r] = [X_{\lambda}]$. \end{lem} \begin{proof} To prove that $Y_i$ is smooth, it suffices to prove it for $i=r$. Since the $F^i$'s are vector bundles over $X$, we have the flag bundle $\operatorname{Fl}(F_{\bullet}^{\lambda})' \to X$ associated to the partial flag $F_{\bullet}^{\lambda}$ defined as above. It is easy to see that $\operatorname{Fl}(F_{\bullet}^{\lambda})=\operatorname{Fl}(F_{\bullet}^{\lambda})'\times_X {\operatorname{OG}}(E)$. Let $\xi_1: \operatorname{Fl}(F_{\bullet}^{\lambda}) \to \operatorname{Fl}(F_{\bullet}^{\lambda})'$ be the projection to its first factor. Then $Y_r$ surjects onto $\operatorname{Fl}(F_{\bullet}^{\lambda})'$ and each fiber is ${\operatorname{OG}}((D_r)_p^{\perp}/(D_r)_p)$. Thus $Y_r$ is a fiber bundle over $\operatorname{Fl}(F_{\bullet}^{\lambda})'$ with the fiber identified with the maximal orthogonal Grassmannian ${\operatorname{OG}}({\mathbb C}^{2(n-r)+1})$ of ${\mathbb C}^{2(n-r)+1}$; hence it is smooth. The birationality is clear. The last claim follows from the fact that $X_{\lambda}$ has at worst rational singularities (\textit{cf.} \cite[Lemma 4]{HIMN}). \end{proof} \begin{lem}\label{lemY_r} Let $\kappa:=c_1(U^{\perp}/U)$. In ${\textit{CK}}^*(\operatorname{Fl}(F^{\lambda}_{\bullet}))$, we have \[ \left(\prod_{i=1}^r (2 + \beta(\tau_i\oplus \kappa) )\right)[Y_r] = \prod_{i=1}^r c_{n-i+1}((D_i/D_{i-1})^{\vee}\otimes D_{i-1}^{\perp}/U^{\perp}). \] \end{lem} \begin{proof} Consider the vector bundle $D_{i-1}^{\perp}/D_{i-1}$ over $Y_{i-1}$ with the induced bilinear form. Let $Q(D_{i-1}^{\perp}/D_{i-1})$ be the corresponding quadric bundle over $Y_{i-1}$ (see Section \ref{appendix}). Let $S_i$ be the tautological line bundle of $Q(D_{i-1}^{\perp}/D_{i-1})$ and consider the maximal isotropic subbundle $U/D_{i-1}$ of $D_{i-1}^{\perp}/D_{i-1}$.
We apply Lemma \ref{cor1} and obtain the following identity in ${\textit{CK}}^*(Q(D_{i-1}^{\perp}/D_{i-1}))$: \begin{equation}\label{eqi-1} (2+\beta c _1(S_i^{\vee}\otimes U^{\perp}/ U)) [{\mathbb P}(U/D_{i-1})] = c_{n-i+1}(S_i^{\vee}\otimes D_{i-1}^{\perp}/U^{\perp}). \end{equation} The line bundle $D_i/D_{i-1} \to Y_{i-1}$ defines a section $a_i: Y_{i-1} \to Q(D_{i-1}^{\perp}/D_{i-1})$ by sending a point $y\in Y_{i-1}$ to the fiber of $D_i/D_{i-1}$ at $y$, which is isotropic since $D_i \subset F^{\lambda_i}$. The pullback of $S_i$ along $a_i$ coincides with $D_i/D_{i-1}$. Furthermore, $a_i^*[{\mathbb P}(U/D_{i-1})]=[Y_i]$. Thus, by pulling back (\ref{eqi-1}) along $a_i$, we obtain the following identity in ${\textit{CK}}^*(Y_{i-1})$: \begin{eqnarray*} (2+\beta (\tau_i\oplus \kappa))[Y_i] &=& c_{n-i+1}((D_i/D_{i-1})^{\vee}\otimes D_{i-1}^{\perp}/U^{\perp}). \end{eqnarray*} The claim follows from the projection formula applied to the inclusions $Y_i \hookrightarrow Y_{i-1}$. \end{proof} The next corollary is an obvious consequence of Lemma \ref{lemY_r} and the fact that $\kappa=c_1(U^{\perp}/U)=0$ in ${\textit{CK}}^*({\operatorname{OG}}(E))\otimes_{{\mathbb Z}}{\mathbb Z}[1/2]$. \begin{cor}\label{corXlambda} In ${\textit{CK}}^*({\operatorname{OG}}(E))\otimes_{{\mathbb Z}}{\mathbb Z}[1/2]$, we have \[ [X_{\lambda}] = \pi_* \left(\prod_{i=1}^r\frac{ c_{n-i+1}((D_i/D_{i-1})^{\vee}\otimes D_{i-1}^{\perp}/U^{\perp})}{2 + \beta\tau_i}\right). \] \end{cor} \emph{In the rest of the paper, we work in ${\textit{CK}}^*({\operatorname{OG}}(E))\otimes_{{\mathbb Z}}{\mathbb Z}[1/2]$.} \subsection{Special Schubert classes} Let us now introduce the classes ${\mathscr P}_m^{(\ell)}$, which will be used in the main theorem to describe the fundamental classes of general degeneracy loci.
\begin{defn}\label{defPclass} For each $m\in {\mathbb Z}$ and $\ell=1,\dots, n$, we define the classes ${\mathscr P}_m^{(\ell)}$ by the following generating function: \begin{equation*} \sum_{m\in {\mathbb Z}}{\mathscr P}_m^{(\ell)} u^m = \frac{1}{2+ \beta u^{-1}} {\mathscr S}((U^{\perp}- E/F^{\ell})^{\vee};u). \end{equation*} Equivalently, we define \begin{equation}\label{eqP} {\mathscr P}_m^{(\ell)}=\frac{1}{2} \sum_{s\geq 0} \left(-\frac{\beta}{2}\right)^s{\mathscr S}_{s+m}((U^{\perp}- E/F^{\ell})^{\vee}). \end{equation} \end{defn} For each integer $k=1,\dots,n$, let $\lambda=(k) \in \calS\calP(n)$ be the strict partition with only one part. The corresponding degeneracy locus is denoted by $X_k$ and its associated class $[X_k]$ is called a \emph{special class}. The next lemma shows that we can view ${\mathscr P}_k^{(\ell)}$ as a deformation of the special class $[X_k]$. \begin{lem}\label{special} We have $[X_{k}]={\mathscr P}_{k}^{(k)}$. \end{lem} \begin{proof} As in Section \ref{secKL}, we consider $\pi: {\mathbb P}(F^{k}) \to {\operatorname{OG}}(E)$ and $Y_1 \subset {\mathbb P}(F^{k})$, the locus where $D_1$ is contained in $U$. By Lemma \ref{lemY_r}, we have \[ (2 + \beta(\tau_1\oplus \kappa)) [Y_1] = c_{n}(D_1^{\vee}\otimes E/U^{\perp}). \] As in Corollary \ref{corXlambda}, we get \[ [X_{k}] = \pi_* \left(\frac{ c_{n}(D_1^{\vee}\otimes E/U^{\perp})}{2 + \beta\tau_1}\right)= \frac{1}{2} \sum_{s\geq 0} \left(-\frac{\beta}{2}\right)^s \pi_*\left(\tau_1^sc_n(D_1^{\vee}\otimes E/U^{\perp})\right) \] in ${\textit{CK}}^*({\operatorname{OG}}(E))\otimes_{{\mathbb Z}}{\mathbb Z}[1/2]$. Now the claim follows from Lemma \ref{lemtensor}. \end{proof} \subsection{Computing $[X_{\lambda}]$}\label{secPFthm} First, we recall notation from \cite{HIMN} for the ring of certain formal Laurent series, necessary for the computation of the class $[X_{\lambda}]$. Let $R=\oplus_{m\in {\mathbb Z}}R_m$ be a commutative ${\mathbb Z}$-graded ring.
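The equivalence of the two expressions in Definition \ref{defPclass} rests on the geometric-series expansion $\frac{1}{2+\beta u^{-1}} = \frac{1}{2}\sum_{s\geq 0}(-\beta/2)^s u^{-s}$; multiplying this against $\sum_k {\mathscr S}_k u^k$ and reading off the coefficient of $u^m$ yields (\ref{eqP}). A quick symbolic check of the expansion (an illustrative Python/sympy sketch, not part of the original text):

```python
import sympy as sp

u, b = sp.symbols('u beta')

N = 8  # truncation order for the check
lhs = 1 / (2 + b / u)
# Claimed expansion: (1/2) * sum_{s>=0} (-beta/2)^s * u^{-s}, truncated at s < N.
rhs = sp.Rational(1, 2) * sum((-b / 2)**s * u**(-s) for s in range(N))

# The two sides agree modulo beta^N (geometric series in (beta/2)/u).
diff = sp.series(lhs - rhs, b, 0, N).removeO()
assert sp.simplify(diff) == 0
```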
Let $t_1,\ldots,t_r$ be indeterminates with $\deg(t_i)=1$. For ${\sfs}=(s_1,\ldots,s_r)\in {\mathbb Z}^r$, we denote $t^{\sfs}=t_1^{s_1}\cdots t_r^{s_r}$. A formal Laurent series of degree $m$ in the variables $t_1,\ldots,t_r$ with coefficients in $R$ is given by \[ f(t)=\sum_{\sfs\in {\mathbb Z}^r}a_{\sfs}t^{\sfs}, \] where $a_{\sfs}\in R_{m-|{\sfs}|}$ for all ${\sfs}\in {\mathbb Z}^r$ with $|{\sfs}|=\sum_{i=1}^rs_i$. Its support ${\operatorname{supp}} f$ is defined as \[ {\operatorname{supp}} f = \{ \sfs \in {\mathbb Z}^r \ |\ a_{\sfs} \not=0\}. \] For each $m\in {\mathbb Z}$, let ${\mathscr L}_m^R$ denote the set of all formal Laurent series $f(t)$ of degree $m$ for which there exists $\sfm \in {\mathbb Z}^r$ such that $\sfm + {\operatorname{supp}} f$ is contained in the cone $C \subset {\mathbb Z}^r$ defined by $s_1\geq0, \; s_1+s_2\geq 0, \;\cdots, \; s_1+\cdots + s_{r} \geq 0$. The direct sum ${\mathscr L}^R:=\oplus_{m\in {\mathbb Z}}{\mathscr L}_m^R$ is a graded ring in an obvious manner. For each $i=1,\ldots,r,$ let ${\mathscr L}^{R,i}$ denote the subring of ${\mathscr L}^R$ consisting of series that do not contain negative powers of $t_1,\ldots,t_{i-1}.$ In particular, we have ${\mathscr L}^{R,1}={\mathscr L}^R$. For each $m\in {\mathbb Z}$, let $R[[t_1,\ldots,t_r]]_m$ denote the set of formal power series in $t_1,\ldots,t_r$ of homogeneous degree $m$. We define the ring $R[[t_1,\ldots,t_r]]_{\operatorname{gr}}$ of graded formal power series to be $\oplus_{m\in {\mathbb Z}}R[[t_1,\ldots,t_r]]_m$. Note that ${\mathscr L}^{R,i}$ is a graded $R[[t_1,\ldots,t_r]]_{\operatorname{gr}}$-module. Let us apply the above notation to $R={\textit{CK}}^*({\operatorname{OG}}(E))$.
We define an $R[[t_1,\ldots,t_{i-1}]]_{\operatorname{gr}}$-module structure on ${\textit{CK}}^*(\mathbb{P}(F^{\lambda_{i-1}}/D_{i-2}))$ by \[ f(t_1,\ldots,t_{i-1})\alpha:=f(\tau_1,\ldots,\tau_{i-1})\alpha, \] for each $f(t_1,\ldots,t_{i-1})\in R[[t_1,\ldots,t_{i-1}]]_{\operatorname{gr}}$ and $\alpha\in {\textit{CK}}^*(\mathbb{P}(F^{\lambda_{i-1}}/D_{i-2}))$, where $\tau_i = c_1((D_i/D_{i-1})^{\vee})$ as before. We can uniquely define a homomorphism $\phi_i:{\mathscr L}^{R,i} \to {\textit{CK}}^*(\mathbb{P}(F^{\lambda_{i-1}}/D_{i-2}))$ of graded $R[[t_1,\ldots,t_{i-1}]]_{\operatorname{gr}}$-modules by setting \[ t_{i}^{s_i}\cdots t_r^{s_r}\mapsto {\mathscr S}_{s_i}((U^\perp -E/F^{\lambda_i})^\vee) \cdots {\mathscr S}_{s_r}((U^\perp -E/F^{\lambda_r})^\vee), \] for each $s_i,\dots, s_r \in {\mathbb Z}$. Note that for $m\in {\mathbb Z}$ and $j\geq i$, we have \begin{equation}\label{remP} \phi_i\left(\frac{t_j^{m}}{2+\beta t_j}\right) ={\mathscr P}_m^{(\lambda_j)}. \end{equation} \begin{thm}\label{mainPF} Let $\lambda \in \calS\calP(n)$ be of length $r$. In ${\textit{CK}}^*({\operatorname{OG}}(E))\otimes_{{\mathbb Z}}{\mathbb Z}[1/2]$, we have \begin{eqnarray*} [X_{\lambda}] &=& \phi_1\left(\prod_{i=1}^r\frac{t_i^{\lambda_i}}{2+\beta t_i} \prod_{1\leq i<j\leq r } \left(\frac{1-\bar t_i/\bar t_j}{1+\bar t_i/ t_j}\right)\right), \end{eqnarray*} where $\bar t = \frac{-t}{1+\beta t}$ is the notation for the formal inverse as before. \end{thm} \begin{proof} As in \cite[Proposition 1]{HIMN}, the definition (\ref{segre vir}) and the property (\ref{push of tensor}) of the relative Segre classes imply that, for $\pi_i : {\mathbb P}(F^{\lambda_i}/D_{i-1}) \to {\mathbb P}(F^{\lambda_{i-1}}/D_{i-2})$, we have \[ \pi_{i*}(\tau_i^s c_{n-i+1}((D_i/D_{i-1})^{\vee} \otimes D_{i-1}^{\perp}/U^{\perp})) = \sum_{p=0}^{\infty} c_p(D_{i-1} - D_{i-1}^{\vee}) \sum_{q=0}^p \binom{p}{q}\beta^{q} {\mathscr S}_{\lambda_i+s - p + q}((U^{\perp} - E/F^{\lambda_i})^{\vee}) \] for each integer $s\geq 0$.
Therefore, by the same computation as in the proof of \cite[Proposition 3]{HIMN}, we obtain \begin{equation}\label{lempiphi} \pi_{i*}\left(\frac{c_{n-i+1}((D_i/D_{i-1})^{\vee} \otimes D_{i-1}^{\perp}/U^{\perp})}{2+\beta \tau_i}\right) = \phi_i\left( \frac{t_i^{\lambda_i}}{2+\beta t_i}\prod_{j=1}^{i-1}\frac{ 1-\bar t_j/\bar t_i}{1 - t_j/\bar t_i}\right). \end{equation} Now the claim follows by repeatedly applying (\ref{lempiphi}) to Corollary \ref{corXlambda}. \end{proof} \subsection{Main theorem} Let $\lambda \in \calS\calP(n)$ be of length $r$. Let $2m$ be the smallest even integer such that $r \leq 2m$. If $2m>r$, then we set $\lambda_{r+1}=0$. For each $i,j$ such that $1\leq i<j \leq 2m$, we expand the following rational function in ${\mathcal L}^R$: \begin{equation} \left(1 + \beta \bar t_i\right)^{2m-i-1} \left(1 + \beta \bar t_j\right)^{2m-j} \frac{1 - \bar t_i/\bar t_j}{1 - t_i/\bar t_j } = \sum_{a,b \in {\mathbb Z} \atop{a \geq 0, a+b \geq 0}} \gamma_{ab}^{ij} t_i^{a} t_j^{b}. \end{equation} Note that $\gamma_{ab}^{ij}\in {\mathbb Z}[\beta]$, and that if $j=2m$, then $\gamma^{ij}_{ab}=0$ for all $b>0$. Let ${\mathscr P}_{m}^{(0)} := (-\beta)^{-m}$ for all $m \in {\mathbb Z}_{\leq 0}$. If $A$ is a skew-symmetric $2m \times 2m$ matrix, we denote the Pfaffian of $A$ by ${\operatorname{Pf}}(A)$. We are now ready to state and prove our main result. \begin{thm}\label{mainthm} The fundamental class of the degeneracy locus $X_{\lambda}$ is given by \begin{equation}\label{PfB} [X_{\lambda}] = {\operatorname{Pf}} \left( \sum_{a,b \in {\mathbb Z}\atop{a \geq 0, a+b \geq 0}}\gamma_{ab}^{ij}{\mathscr P}_{\lambda_i+a}^{(\lambda_i)} {\mathscr P}_{\lambda_j+b}^{(\lambda_j)} \right)_{1 \leq i < j \leq 2m}.
\end{equation} \end{thm} \begin{proof} If $r$ is even, the equality follows from the Schur--Pfaffian-type identity \begin{align} &\prod_{i=1}^{2m}\frac{t_i^{\lambda_i}}{2+\beta t_i} \prod_{1\leq i<j\leq {2m} } \frac{1-\bar t_i/\bar t_j}{1+\bar t_i/ t_j} \notag\\ &={\operatorname{Pf}} \left( \frac{t_i^{\lambda_i} t_j^{\lambda_j} \left(1 + \beta \bar t_i\right)^{2m-i-1} \left(1 + \beta \bar t_j\right)^{2m-j} }{(2+\beta t_i)(2+\beta t_j)} \frac{1 - \bar t_i/\bar t_j}{1 - t_i/\bar t_j }\right)_{1 \leq i < j \leq 2m}, \label{pfid} \end{align} which is similar to \cite[Lemma 14]{HIMN}. Indeed, we can write \[ [X_{\lambda}] = {\operatorname{Pf}} \left( \phi_1\left(\frac{t_i^{\lambda_i} t_j^{\lambda_j} \left(1 + \beta \bar t_i\right)^{2m-i-1} \left(1 + \beta \bar t_j\right)^{2m-j} }{(2+\beta t_i)(2+\beta t_j)} \frac{1 - \bar t_i/\bar t_j}{1 - t_i/\bar t_j }\right)_{1 \leq i < j \leq 2m}\right). \] Now (\ref{remP}) implies the claim. To see the case when $r$ is odd, we add one more stage to the projective tower, namely $\pi_{r+1}: {\mathbb P}(F^0/D_r) \to {\mathbb P}(F^{\lambda_r}/D_{r-1})$. Applying (\ref{push of tensor}), we obtain by a direct computation (\textit{cf.} \cite[Section 5.3]{HIMN}) \[ (\pi_{r+1})_*(c_{n-r}((D_{r+1}/D_{r})^{\vee}\otimes D_{r}^{\perp}/U^{\perp})) = 1, \] where $D_{r+1}/D_r$ is the tautological line bundle of ${\mathbb P}(F^0/D_r)$. Therefore \[ [X_{\lambda}] = \pi_*\circ(\pi_{r+1})_* \left(c_{n-r}((D_{r+1}/D_{r})^{\vee}\otimes D_{r}^{\perp}/U^{\perp})\prod_{i=1}^r\frac{ c_{n-i+1}((D_i/D_{i-1})^{\vee}\otimes D_{i-1}^{\perp}/U^{\perp})}{2 + \beta\tau_i}\right).
\] Thus, by noting that \begin{equation} (\pi_{r+1})_*(c_{n-r}((D_{r+1}/D_{r})^{\vee}\otimes D_{r}^{\perp}/U^{\perp})) = \phi_{r+1} \left(\prod_{i=1}^{r} \left(\frac{1-\bar t_i/\bar t_{r+1}}{1+\bar t_i/ t_{r+1}}\right)\right), \end{equation} we have \begin{eqnarray*} [X_{\lambda}] = \phi_1\left(\prod_{i=1}^{r}\frac{t_i^{\lambda_i}}{2+\beta t_i} \prod_{1\leq i<j\leq r+1 } \left(\frac{1-\bar t_i/\bar t_j}{1+\bar t_i/ t_j}\right)\right). \end{eqnarray*} Therefore the claim follows from the following identity, similar to (\ref{pfid}): \begin{align} &\prod_{i=1}^{r}\frac{t_i^{\lambda_i}}{2+\beta t_i} \prod_{1\leq i<j\leq {r+1} } \frac{1-\bar t_i/\bar t_j}{1+\bar t_i/ t_j} \notag\\ &={\operatorname{Pf}} \left( \frac{t_i^{\lambda_i} t_j^{\lambda_j} \left(1 + \beta \bar t_i\right)^{2m-i-1} \left(1 + \beta \bar t_j\right)^{2m-j} }{[2]_i[2]_j} \frac{1 - \bar t_i/\bar t_j}{1 - t_i/\bar t_j }\right)_{1 \leq i < j \leq 2m}, \label{pfid2} \end{align} where $[2]_i=\begin{cases} 2+\beta t_i & i=1,\dots, r\\ 1 & i=r+1 \end{cases}$. \end{proof} \section{Appendix: $K$-theory of odd quadric bundles $Q(E)$}\label{appendix}\label{app} In this section, we show Identity (\ref{rel1}) in the $K$-theory of the odd quadric bundle, which we used in the main body of this paper. As an application, we also exhibit a presentation of the $K$-theory of the quadric bundle. Let $E$ be a vector bundle of rank $2n+1$ over a smooth quasiprojective variety $X$ with a symmetric non-degenerate bilinear form with values in a line bundle $L$. Let $S$ be the tautological line bundle of ${\mathbb P}(E)$. The quadric bundle $Q(E) \subset {\mathbb P}(E)$ is given by \[ Q(E) = \{(\ell,x) \in {\mathbb P}(E)|\ \ell \in {\mathbb P}(E_x), \ell \mbox{ is isotropic }\}. \] Let $U$ be a maximal isotropic subbundle of $E$. We have $U^{\perp}/U \otimes U^{\perp}/U =L$. Consider the following diagram of obvious inclusions.
\[ \xymatrix{ Q(E) \ar[rrr] &&& {\mathbb P}(E) \\ Q(E) \cap {\mathbb P}(U^{\perp}) \ar[rrr]\ar[u]_{\iota'}&&&{\mathbb P}(U^{\perp}) \ar[u]\\ {\mathbb P}(U)\ar@/_1pc/[urrr]\ar[u]_{\iota}&&& } \] \begin{lem}\label{cor1} In ${\textit{CK}}^*(Q(E))$, we have \begin{equation}\label{rel1} (2+\beta c _1(S^{\vee}\otimes U^{\perp}/ U)) [{\mathbb P}(U)] = c_n(S^{\vee}\otimes E/U^{\perp}). \end{equation} \end{lem} \begin{proof} We show the identity by computing the class $[Q(E) \cap {\mathbb P}(U^{\perp})]$ in ${\textit{CK}}^*(Q(E))$ in two different ways. First, ${\mathbb P}(U)$ is a divisor in ${\mathbb P}(U^{\perp})$ and the corresponding line bundle over ${\mathbb P}(U^{\perp})$ is $S^{\vee}\otimes U^{\perp}/ U$; thus the class $[{\mathbb P}(U)]$ in ${\textit{CK}}^*({\mathbb P}(U^{\perp}))$ is equal to $c _1(S^{\vee}\otimes U^{\perp}/ U)$. The scheme-theoretic intersection $Q(E)\cap {\mathbb P}(U^\perp)$ is not reduced and defines the Weil divisor $2 {\mathbb P}(U)$ on ${\mathbb P}(U^{\perp})$, which is obviously a strict normal crossing divisor. Thus, following \cite[Section 7.2.1]{LevineMorel}, we can compute the fundamental class $1_{Q(E) \cap {\mathbb P}(U^{\perp})}$ in ${\textit{CK}}^*({Q(E) \cap {\mathbb P}(U^{\perp})})$ as \begin{equation} 1_{Q(E) \cap {\mathbb P}(U^{\perp})} = \iota_*(2 + \beta c _1(S^{\vee}\otimes U^{\perp}/U)). \end{equation} In ${\textit{CK}}^*(Q(E))$, the fundamental class $[Q(E) \cap {\mathbb P}(U^{\perp})]$ is defined as the pushforward $\iota'_*1_{Q(E) \cap {\mathbb P}(U^{\perp})}$. Since the class $2 + \beta c _1(S^{\vee}\otimes U^{\perp}/U)$ pulls back from $Q(E)$, the projection formula applied to $\iota'\circ \iota$ implies \begin{eqnarray*} [Q(E) \cap {\mathbb P}(U^{\perp})] &=& (2 + \beta c _1(S^{\vee}\otimes U^{\perp}/U))\cdot [{\mathbb P}(U)]. \end{eqnarray*} On the other hand, the scheme $Q(E)\cap {\mathbb P}(U^{\perp})$ is the locus where the obvious bundle map $S \to E/U^{\perp}$ has rank zero, and its codimension in $Q(E)$ is $n$.
Thus, by \cite[Lemma 6.6.7]{LevineMorel}, we have \[ [Q(E)\cap {\mathbb P}(U^{\perp})]=c_n(S^{\vee}\otimes E/U^{\perp}) \] in ${\textit{CK}}^*(Q(E))$. \end{proof} \begin{rem} It is easy to generalize this to the algebraic cobordism of $Q(E)$. Indeed, $2+\beta c _1(S^{\vee}\otimes U^{\perp}/ U)$ in the above formula is nothing but $F_1^{(2)}(c _1(S^{\vee}\otimes U^{\perp}/ U))$ in the notation of \cite[Section 3.1.2]{LevineMorel}. \end{rem} \begin{thm}[\textit{cf.} \cite{BuchSamuel}] We have \[ {\textit{CK}}^*(Q(E)) \cong {\textit{CK}}^*(X)[h,f]/I, \] where the ideal $I$ is generated by the relations (\ref{rel1}) and \begin{equation}\label{rel2} f^2 = c_n(S^{\vee} \otimes E/U - S^{\vee}\otimes S^{\vee}\otimes L)f. \end{equation} \end{thm} \begin{proof} It is known that ${\textit{CK}}^*(Q(E))$ is a free module over ${\textit{CK}}^*(X)$ with basis \[ 1,h,\dots,h^{n-1}, \ \ \ f,fh,\dots,fh^{n-1}, \] where $f=[{\mathbb P}(U)]$ and $h=c_1(S^{\vee})$. This follows, for example, from the standard fact that $Q(E)$ admits a cell decomposition, combined with the cellular decomposition property \cite[Section 5.1.2]{LevineMorel}. The relation (\ref{rel2}) is identical to the one in \cite[Appendix (A.4)]{AndersonFulton2} for the cohomology case, and it also holds in ${\textit{CK}}^*(Q(E))$. Indeed, the self-intersection formula $f^2=\iota'_*\iota_* c_n(N_{{\mathbb P}(U)}Q(E))$ holds also in connective $K$-theory (\cite{LevineMorel}) and the normal bundle $N_{{\mathbb P}(U)}Q(E)$ of ${\mathbb P}(U)$ in $Q(E)$ sits in the short exact sequence \[ 0 \to N_{{\mathbb P}(U)}Q(E) \to S^{\vee} \otimes E/U\to S^{\vee}\otimes S^{\vee}\otimes L \to 0. \] We find that (\ref{rel1}) is a polynomial equation in $h, f$ with coefficients in ${\textit{CK}}^*(X)$. The top degree of $h$ in (\ref{rel1}) is $n$ and its coefficient is $c(E/U^{\perp};\beta)$, which is invertible in ${\textit{CK}}^*(X)$. Therefore, as we know an additive basis, the relations determine the ring structure.
\end{proof} \textbf{Acknowledgements.} Part of this work was developed while the first author was affiliated with POSTECH, which he would like to thank for the excellent working conditions. He would also like to gratefully acknowledge the support of the National Research Foundation of Korea (NRF) through the grants funded by the Korea government (MSIP) (2014-001824 and 2011-0030044). The second author is supported by Grant-in-Aid for Scientific Research (C) 24540032, 15K04832. The fourth author is supported by Grant-in-Aid for Scientific Research (C) 25400041.
\section{Introduction}\label{seci} Semi-supervised (SS) learning has received increasing attention as one of the most promising areas in statistics and machine learning in recent years. We refer interested readers to \citet{zhu2005semi} and \citet{chapelle2010semi} for a detailed overview on this topic, including its definition, goals, applications and the fast-growing literature. Unlike traditional supervised or unsupervised learning settings, an {\it SS setting}, as the name suggests, represents a confluence of these two kinds of settings, in the sense that it involves two data sets: (i) a {\it labeled data set} $ \mathcal{L}$ containing observations for an outcome $\mathbb{Y}$ and a set of covariates ${\mathbf X}$ (that are possibly high dimensional), and (ii) a \emph{much larger} {\it unlabeled data set} $ \mathcal{U}$ where only ${\mathbf X}$ is observed. Such situations arise naturally when ${\mathbf X}$ is easily available for a large number of individuals while the corresponding observations for $\mathbb{Y}$ are much harder to collect owing to cost or time constraints. The SS setting is common to a broad class of practical problems in the modern era of ``big data'', including machine learning applications like text mining, web page classification, speech recognition, natural language processing, etc. Among biomedical applications, SS settings have turned out to be increasingly relevant in modern integrative genomics, especially in expression quantitative trait loci (eQTL) studies \citep{michaelson2009detection} combining genetic association studies with gene expression profiling.
These have become instrumental in understanding various important questions in genomics, including gene regulatory networks \citep{gilad2008revealing, hormozdiari2016colocalization}. However, one issue with such studies is that they are often under-powered due to the limited size of the gene expression data, which are expensive \citep{flutre2013statistical}. On the other hand, records on the genetic variants are cheaper and often available for a massive cohort, thus naturally leading to SS settings while necessitating robust and efficient strategies that can leverage this extra information to produce more powerful association mapping tools as well as methods for detecting the causal effects of the genetic variants. Moreover, SS settings also have great relevance in the analysis of electronic health records data, which are popular resources for discovery research but also suffer from a major bottleneck in obtaining validated outcomes due to logistical constraints; see, e.g., \citet{chakrabortty2018efficient} and \citet{cheng2020robust} for more details. \subsection{Problem setup}\label{sec:psetup} In this paper, we consider causal inference problems in SS settings. To characterize the basic setup, suppose our sample consists of two independent data sets: the labeled (or supervised) data $ \mathcal{L}:=\{(\mathbb{Y}_i,T_i,{\mathbf X}_i^{\rm T})^{\rm T}:i=1,\ldots,n\}$, and the unlabeled (or unsupervised) data $ \mathcal{U}:=\{(T_i,{\mathbf X}^{\rm T}_i)^{\rm T}:i=n+1,\ldots,n+N\}$ (with $N \gg n$ possibly), containing $n$ and $N$ independent copies of ${\mathbf Z}:=(\mathbb{Y},T,{\mathbf X}^{\rm T})^{\rm T}$ and $(T,{\mathbf X}^{\rm T})^{\rm T}$, respectively, where $T\in\{0,1\}$ serves as a {\it treatment indicator}, i.e., $T=1$ or $0$ represents whether an individual is treated or not.
(\emph{Note}: Though not the main focus of this paper, we also consider the setting where the treatment $T$ is \emph{unobserved} in $ \mathcal{U}$, in Section \ref{sec_ate_u_dagger}.) The covariates (often also called confounders) ${\mathbf X}\in {\cal X}\subset\mathbb{R}^p$ are (possibly) \emph{high dimensional}, with dimension $p\equiv p_n$ allowed to diverge and possibly exceed $n$ (including $p \gg n$), while the {\it observed outcome} is given by: \begin{eqnarray*} \mathbb{Y}~:=~TY(1) + (1-T)Y(0), \end{eqnarray*} where $Y(t)$ is the {\it potential outcome} of an individual with $T=t$ $(t=0,1)$ \citep{rubin1974estimating, imbens2015causal}. Thus, $(\mathbb{Y} \mid T = t) ~\equiv~ Y(t)$ (also called the consistency assumption). A major challenge (and a key feature) in the above framework arises from the (possibly) {\it disproportionate sizes} of $ \mathcal{L}$ and $ \mathcal{U}$, namely $| \mathcal{U}| \gg | \mathcal{L}|$, an issue widely encountered in modern (often digitally recorded) observational datasets of massive sizes, such as electronic health records \citep{cheng2020robust}. We therefore assume (rather, {\it allow} for): \begin{eqnarray} \nu~:=~\hbox{$\lim_{n,N\to\infty}$}n/(n+N)~=~0, \label{disproportion} \end{eqnarray} as in \citet{chakrabortty2018efficient} and \citet{gronsbell2018semia}. An example of \eqref{disproportion} is the {\it ideal SS setting} where $n<\infty$ and $N=\infty$ (i.e., the distribution of $(T,{\mathbf X}^{\rm T})^{\rm T}$ is known).
Essentially, the condition \eqref{disproportion} distinguishes our framework from that of traditional missing data theory, which typically requires the proportion of complete cases in the sample to be bounded away from zero -- often known as the ``positivity condition'' \citep{imbens2004nonparametric, tsiatis2007semiparametric}. The \emph{natural violation} of this condition in SS settings is what makes them \emph{unique} and more \emph{challenging} than traditional missing data problems. On the other hand, we do assume throughout this paper that $ \mathcal{L}$ and $ \mathcal{U}$ have the same underlying distribution (i.e., $\mathbb{Y}$ in $ \mathcal{U}$ are missing completely at random), which is the typical (and often implicit) setup in the traditional SS literature \citep{zhu2005semi, chapelle2010semi}. We formalize this below. \begin{assumption}\label{ass_equally_distributed} The observations in $ \mathcal{L}$ and $ \mathcal{U}$ have the same underlying distribution, so that $\{(\mathbb{Y}_i,T_i,{\mathbf X}_i^{\rm T})^{\rm T}: i=1,\ldots,n\}$ and $\{(T_i,{\mathbf X}_i^{\rm T})^{\rm T}: i=n+1,\ldots,n+N\}$ respectively are $n$ and $N$ independent realizations from the distributions of $(\mathbb{Y},T,{\mathbf X}^{\rm T})^{\rm T}$ and $(T,{\mathbf X}^{\rm T})^{\rm T}$. \end{assumption} \paragraph*{Causal parameters of interest} Based on the available data $ \mathcal{L}\cup \mathcal{U}$, we aim to estimate: \vskip0.05in (i) the {\it average treatment effect} (ATE): \begin{eqnarray} \mu_0(1) - \mu_0(0)~:=~{\cal E}\{Y(1)\}-{\cal E}\{Y(0)\}, ~~\mbox{and} \label{ate} \end{eqnarray} \hspace{0.15in} (ii) the {\it quantile treatment effect} (QTE): \begin{eqnarray}
{\boldsymbol\theta}(1,\tau)-{\boldsymbol\theta}(0,\tau)~\equiv~{\boldsymbol\theta}(1)-{\boldsymbol\theta}(0), \label{qte} \end{eqnarray} where ${\boldsymbol\theta}(t,\tau)\equiv{\boldsymbol\theta}(t)$ represents the $\tau$\textcolor{black}{-}quantile of $Y(t)$ for some fixed and known $\tau\in(0,1)$, defined as the solution to the equation\textcolor{black}{:} \begin{eqnarray} {\cal E}[ \psi\{Y(t),{\boldsymbol\theta}(t,\tau)\}] ~:=~ {\cal E}[I\{ Y(t) < {\boldsymbol\theta}(t,\tau)\} - \tau] ~=~ 0 \quad (t=0,1)\textcolor{black}{,} \label{defqte} \end{eqnarray} with $I(\cdot)$ \textcolor{black}{being} the indicator function. It is \textcolor{black}{worth noting that} by setting $T\equiv 1$ and $\mu_0(0)={\boldsymbol\theta}(0)\equiv 0$, the above problems \textcolor{black}{also} cover SS estimation of \textcolor{black}{the} response mean and quantile as \textcolor{black}{\emph{special cases}}. \textcolor{black}{The ATE and the QTE are both well-studied choices of causal estimands in supervised settings; see Section \ref{sec_literature} for an overview of these literature(s). While the ATE is perhaps \textcolor{black}{the more common choice,} the QTE is \textcolor{black}{often} more useful and informative\textcolor{black}{, especially} in settings where the causal effect of the treatment is heterogeneous and/or the outcome distribution\textcolor{black}{(s)} is highly skewed so that the average causal effect may be of limited value.} \vskip0.05in \textcolor{black}{Our \emph{goal} here, in general, is to investigate how, when, and to what extent, one can exploit the full data $ \mathcal{L} \cup \mathcal{U}$ to develop SS estimators of these parameters that can ``improve'' standard supervised approaches using $ \mathcal{L}$ only, where the term ``improve'' could be in terms of efficiency or robustness or \emph{both}. 
The rest of this paper is dedicated to a thorough understanding of such questions via a \emph{complete characterization} of the possible SS estimators and their properties.} \vskip0.05in \textcolor{black}{We also clarify here that we choose the ATE and QTE as two \emph{representative} causal estimands -- presenting diverse methodological and technical challenges -- to exemplify the key features of our SS approach and its benefits, without compromising much on the clarity of the main messages. Extensions to other more general functionals (\textcolor{black}{such as} those based on general estimating equations) are indeed possible -- as we discuss later in Section \ref{sec_conclusion_discussion} \textcolor{black}{and Appendix \ref{sm_Z_estimation}} -- though we skip \textcolor{black}{a detailed technical analysis} for the sake of brevity and minimal obfuscation.} \paragraph*{\textcolor{black}{Basic assumptions}} To \textcolor{black}{ensure} that the parameters \textcolor{black}{$\{\mu_0(t),{\boldsymbol\theta}(t)\}_{t = 0}^1$} are identifiable and estimable \textcolor{black}{from the observed data}, we make the \textcolor{black}{following standard assumptions \citep{imbens2004nonparametric}:} \begin{eqnarray} T \ind \{Y(0), Y(1)\} \mid {\mathbf X},\quad \mbox{and} \quad \pi({\mathbf x})~:=~{\cal E}(T\mid{\mathbf X}={\mathbf x})~\in(c,1-c)\textcolor{black}{,} \label{mar_positivity} \end{eqnarray} for any ${\mathbf x}\in {\cal X}$ and some constant $c\in(0,1)$. \textcolor{black}{The quantity $\pi({\mathbf x})$ is also known as the \emph{propensity score} for the treatment. 
\eqref{mar_positivity} encodes some well-known conditions \citep{imbens2015causal}.} The first part of \eqref{mar_positivity} is \textcolor{black}{often} known as the \emph{no unmeasured confounding} assumption, equivalent to the {\it missing at random} assumption in the context of missing data \citep{tsiatis2007semiparametric, little2019statistical}, while the second part is the {\it positivity} (or {\it overlap}) assumption \textcolor{black}{\it on the treatment}. \paragraph*{Clarification} Considering that the \textcolor{black}{corresponding case} of $Y(0)$ is analogous, we will henceforth focus on the mean and quantile estimation of $Y(1)$ without loss of generality, \textcolor{black}{and} \begin{eqnarray} \textcolor{black}{\mbox{let~$\{Y,\mu_0,{\boldsymbol\theta}\}$~~generically denote~~$\{Y(1),\mu_0(1),{\boldsymbol\theta}(1)\}$.}} \label{generic_notation} \end{eqnarray} \subsection{\textcolor{black}{Related literature} }\label{sec_literature} \textcolor{black}{The setup and contributions of our work naturally relate to \emph{three} different facets of existing literature, namely: (a) ``traditional'' (non-causal) SS inference, (b) supervised causal inference, and finally, (c) SS causal inference. Below we briefly summarize the relevant works in each of these areas, followed by a detailed account of our contributions.} \paragraph*{SS learning and inference} For estimation in an SS setup, the primary and most critical \textcolor{black}{goal} is \textcolor{black}{to investigate} when and how the robustness and efficiency can be improved, compared to {\it supervised} methods using the labeled data $ \mathcal{L}$ only, by exploiting the unlabeled data $ \mathcal{U}$.
Chapter 2 of \citet{Chakrabortty_Thesis_2016} provided an elaborate discussion on this question, claiming that the answer is generally determined by the \textcolor{black}{nature of the relationship} between the parameter of interest and the marginal distribution, ${\mathbb P}_{\mathbf X}$, of ${\mathbf X}$\textcolor{black}{,} as $ \mathcal{U}$ provides information regarding ${\mathbb P}_{\mathbf X}$ only. Therefore\textcolor{black}{,} many existing algorithms \textcolor{black}{for} SS learning \textcolor{black}{that target} ${\cal E}(\mathbb{Y}\mid{\mathbf X})$, including, for instance, generative modeling \citep{nigam2000text, nigam2001using}, graph-based methods \citep{zhu2005semi} and manifold regularization \citep{belkin2006manifold}, rely to some extent on assumptions relating ${\mathbb P}_{\mathbf X}$ to the conditional distribution of $\mathbb{Y}$ given ${\mathbf X}$. When these assumptions are violated, \textcolor{black}{however,} they may perform even worse than the corresponding supervised methods \citep{cozman2001unlabeled, cozman2003semi}. Such undesirable degradation highlights the need for safe usage of the unlabeled data $ \mathcal{U}$. To achieve this goal, \citet{chakrabortty2018efficient} advocated the {\it robust} and {\it adaptive} property for SS approaches, i.e., being consistent for the target parameters while \textcolor{black}{being} at least as efficient as their supervised counterparts and more efficient whenever possible. 
Adopting such a perspective explicitly or implicitly, robust and adaptive \textcolor{black}{procedures for} SS estimation and inference have been developed under the semi-parametric framework recently for various problems\textcolor{black}{,} including mean estimation \citep{zhang2019semi,zhang2019high}, linear regression \citep{azriel2016semi, chakrabortty2018efficient}, general $Z$-estimation \citep{kawakita2013semi, Chakrabortty_Thesis_2016}, prediction accuracy evaluation \citep{gronsbell2018semia} and covariance functionals \citep{tony2020semisupervised, chan2020semi}. \textcolor{black}{However,} different from our work considering causal inference and treatment effect estimation, most of this recent progress focused on relatively ``standard'' \textcolor{black}{(non-causal)} problems defined {\it without} the potential outcome framework \textcolor{black}{(and its ensuing challenges, e.g., confounding\textcolor{black}{,} and \textcolor{black}{the} missingness of one of the potential outcomes induced by the treatment assignment \textcolor{black}{$T$})}. \paragraph*{Average treatment effect} Both \textcolor{black}{the} ATE and \textcolor{black}{the} QTE are fundamental and popular causal estimands which have been extensively studied in the context of supervised causal inference based on a wide range of approaches; see \citet{imbens2004nonparametric} and \citet{tsiatis2007semiparametric} for an overview of the ATE literature. 
In particular, these include inverse probability weighted (IPW) approaches \citep{rosenbaum1983central, rosenbaum1984reducing, robins1994estimation, hahn1998role, hirano2003efficient, ertefaie2020nonparametric} involving approximation of the propensity score $\pi({\mathbf X})$, as well as \textcolor{black}{\emph{doubly robust}} (DR) methods \citep{robins1994estimation, robins1995semiparametric, rotnitzky1998semiparametric, scharfstein1999adjusting, kang2007demystifying, vermeulen2015bias} which require estimating both ${\cal E}(Y\mid{\mathbf X})$ and $\pi({\mathbf X})$. As the name implies, the DR estimators are consistent whenever one of the two nuisance models is correctly specified, while attaining the semi-parametric efficiency bound for the unrestricted model, as long as both are correctly specified. When the number of covariates is fixed, semi-parametric inference via such DR methods has a rich literature; see \citet{bang2005doubly}, \citet{tsiatis2007semiparametric}, \citet{kang2007demystifying} and \citet{graham2011efficiency} for a review. In recent times, there has also been substantial interest in the extension of these approaches to high dimensional scenarios, leading to a flurry of work\textcolor{black}{, e.g., \citet{farrell2015robust, chernozhukov2018double, athey2018approximate, smucler2019unifying}, among many others}. 
\textcolor{black}{\textcolor{black}{Most of t}hese papers generally impose one of the following two conditions on the nuisance function\textcolor{black}{s'} estimation to attain $n^{1/2}$\textcolor{black}{-}consistency and asymptotic normality for valid (supervised) inference \textcolor{black}{based on their ATE estimators}:} \begin{enumerate}[(a)] \item \textcolor{black}{Both ${\cal E}(Y\mid{\mathbf X})$ and $\pi({\mathbf X})$ are correctly specified, and the product of their estimators' convergence rates vanishes fast enough \textcolor{black}{(typically, faster than $n^{-1/2}$)} \citep{belloni2014inference, farrell2015robust, belloni2017program, chernozhukov2018double}.} \item \hspace{-0.056in} \textcolor{black}{Either ${\cal E}(Y\mid{\mathbf X})$ or $\pi({\mathbf X})$ is correctly specified by a linear/logistic regression model\textcolor{black}{, while} some \textcolor{black}{carefully tailored} bias corrections are applied\textcolor{black}{,} and some rate conditions are satisfied \textcolor{black}{as well} \citep{smucler2019unifying, tan2020model, dukes2021inference}.} \end{enumerate} \textcolor{black}{However, we will show that, \textcolor{black}{under our SS setup,} by exploiting the massive unlabeled data, \textcolor{black}{there are some striking \emph{robustification benefits} that ensure} these requirements \textcolor{black}{\emph{can}} be substantially relaxed, \textcolor{black}{and that \emph{$n^{1/2}$-rate inference} on the ATE (or QTE) \emph{can} be achieved in a \emph{seamless} way, \emph{without} requiring any specific forms of the nuisance model(s) or any sophisticated bias correction techniques under misspecification}; see Point (I) in Section \ref{sec_contributions} for details.} \paragraph*{Quantile treatment effect} \textcolor{black}{The} marginal QTE, though technically a more challenging parameter due to the \textcolor{black}{inherently} {\it non-smooth} nature of the quantile estimating equation \eqref{defqte}, provides a
\textcolor{black}{more complete} picture of the causal effect on \textcolor{black}{the} outcome distribution, beyond just its mean\textcolor{black}{.} \textcolor{black}{There is a fairly rich literature on \textcolor{black}{(supervised)} QTE estimation as well.} For example, \citet{firpo2007efficient} developed an IPW estimator \textcolor{black}{that attains} semi-parametric efficiency under some smoothness assumptions. \citet{hsu2020qte} viewed the quantile ${\boldsymbol\theta}$ \textcolor{black}{from the perspective of the conditional distribution,} as the solution to the equation $\tau={\cal E}\{F({\boldsymbol\theta}\mid{\mathbf X})\}$\textcolor{black}{,} where $F(\cdot\mid{\mathbf x}):={\mathbb P}(Y<\cdot\mid{\mathbf X}={\mathbf x})$. Their method thus requires estimating the whole conditional distribution of $Y$ given ${\mathbf X}$. To avoid such a burdensome task, \citet{kallus2019localized} recently proposed the localized debiased machine learning approach, which only involves estimation of $F(\cdot\mid{\mathbf X})$ at a preliminary estimate of the quantile and can leverage a broad range of machine learning methods besides kernel smoothing used by \citet{hsu2020qte}. Moreover, \citet{zhang2012causal} compared methods based on the propensity score $\pi({\mathbf X})$ and the conditional distribution $F(\cdot\mid{\mathbf X})$. They also devised a DR estimator for the QTE under parametric specification of $\pi({\mathbf X})$ and $F(\cdot\mid{\mathbf X})$. Nevertheless, all \textcolor{black}{these} aforementioned work\textcolor{black}{s are still} restricted to the supervised domain involving only \textcolor{black}{the} labeled data $ \mathcal{L}$. 
\paragraph*{SS inference for treatment effect\textcolor{black}{s}} Although there \textcolor{black}{has} been \textcolor{black}{work} on a variety of problems in SS settings, as listed in the first paragraph of Section \ref{sec_literature}, less attention, however, has been paid to causal inference and treatment effect estimation \textcolor{black}{problems}, except \textcolor{black}{for some (very recent)} progress \citep{zhang2019high, kallus2020role, cheng2020robust}. When there exist post-treatment surrogate variables that are potentially predictive of the outcome, \citet{cheng2020robust} combined imputation and inverse probability weighting, building on the\textcolor{black}{ir} technique of ``double-index'' propensity scores \citep{cheng2020estimating}, to devise \textcolor{black}{an IPW-type} SS estimator for \textcolor{black}{the} ATE, which is doubly robust. \textcolor{black}{Though not explicitly stated, their approach, however, only applies to low dimensional $(p \ll n)$ settings, and more importantly, their estimator, being of an IPW type, does not have a naturally ``orthogonal'' structure (in the sense of \citet{chernozhukov2018double}), and therefore, is not first-order insensitive to estimation errors of the nuisance functions, unlike our proposed approach. This feature is particularly crucial in situations involving high dimensional and/or non-parametric nuisance estimators.} \citet{kallus2020role} also considered the role of surrogates in SS estimation of the ATE, but \textcolor{black}{mostly} in cases where the labeling fractions are bounded away from zero.
Further, with a largely theoretical focus, their main aims were characterizations of efficiency and optimality\textcolor{black}{,} rather than \textcolor{black}{implementation.} \textcolor{black}{In a setting similar to \citet{kallus2020role}, with surrogates available, \citet{hou2021efficient}, a very recent work we noticed at the final stage\textcolor{black}{s} of our preparation \textcolor{black}{of} this paper, \textcolor{black}{also} developed SS estimators \textcolor{black}{for} the ATE. Unlike our data structure\textcolor{black}{,} where $ \mathcal{U}$ provides observations for both ${\mathbf X}$ and $T$, \citet{hou2021efficient} assumed the treatment indicator is missing in the unlabeled data, \textcolor{black}{and} so their estimators have fairly different robustness guarantees from ours. This case, with $T$ unobserved in $ \mathcal{U}$, is not of our \textcolor{black}{primary} interest\textcolor{black}{.} But we will briefly address \textcolor{black}{it as well} in Section \ref{sec_ate_u_dagger}.} Lastly, \citet{zhang2019high} extended their SS mean estimation method using a linear working model for ${\cal E}(Y \mid {\mathbf X})$ to the case of the ATE. While all these articles mostly investigated the efficiency of their approaches, none of them clarified the potential gain of \textcolor{black}{\it robustness} from leveraging \textcolor{black}{the} unlabeled data $ \mathcal{U}$. In addition, \textcolor{black}{\citet{zhang2019high} and \citet{cheng2020robust} mainly focused on some specific working models for ${\cal E}(Y\mid{\mathbf X})$ and/or $\pi({\mathbf X})$}, and \citet{zhang2019high} \textcolor{black}{only} briefly discussed the ATE estimation \textcolor{black}{problem --} as an illustration of their SS mean estimation approach; see Remark \ref{remark_comparison_zhang2019} for a more detailed comparison of our work \textcolor{black}{with} \citet{zhang2019high}. 
\vskip0.05in \textcolor{black}{As for} \textcolor{black}{the} QTE, its SS estimation has, to the best of our knowledge, not been studied in \textcolor{black}{any of} the existing works. \textcolor{black}{Our work here appears to be the \emph{first} contribution in this regard.} \subsection{Our contributions}\label{sec_contributions} \textcolor{black}{This paper aims to bridge some of these major gaps in the existing literature, towards a better and unified understanding -- both methodological and theoretical -- of SS causal inference and its benefits. We summarize our main contributions below.} \begin{enumerate}[(I)] \item We develop under the SS setting \eqref{disproportion} a \emph{family} of DR estimators for\textcolor{black}{:} (a) the ATE \textcolor{black}{(Section \ref{secos})} and (b) the QTE \textcolor{black}{(Section \ref{secqte})}, which take the \textcolor{black}{\emph{whole}} data $ \mathcal{L}\cup \mathcal{U}$ into consideration and enable us to employ arbitrary methods for estimating the nuisance functions as long as some high level conditions are satisfied. 
\textcolor{black}{These estimators, apart from affording a \emph{flexible} and \emph{general} construction (\textcolor{black}{involving} imputation and IPW strategies, along with \textcolor{black}{the} use of cross fitting, applied to $ \mathcal{L}\cup \mathcal{U}$), also enjoy several desirable properties and advantages.} In addition to \textcolor{black}{being} DR in \textcolor{black}{terms of} consistency, we further prove that, whenever the propensity score $\pi({\mathbf X})$ is correctly \textcolor{black}{specified and estimated at a suitably fast rate} \textcolor{black}{-- something that is indeed {\it achievable} under our SS setting,} our estimators are \emph{$n^{1/2}$-consistent and asymptotically normal} \emph{even if the outcome model is misspecified \textcolor{black}{and none of the nuisance functions has a specific (\textcolor{black}{e.g.,} linear$/$logistic) form}}; see Theorems \ref{thate} and \ref{thqte} as well as Corollaries \ref{corate} and \ref{corqte}, along with the discussions in the \textcolor{black}{subsequent Remarks \ref{remark_ate_robustness} and \ref{remark_qte_property}.} {\it Agnostic to the construction of nuisance function estimators}, this \textcolor{black}{robustness} property \textcolor{black}{-- a \emph{$n^{1/2}$-rate robustness property} of sorts --} is particularly desirable for inference, \textcolor{black}{while \emph{generally not achievable in purely supervised settings} \textcolor{black}{without extra targeted (and nuanced) bias corrections which \textcolor{black}{do} require specific (linear$/$logistic) forms of the nuisance function estimators along with \textcolor{black}{other} conditions, as discussed in our review of (supervised) ATE estimation in Section \ref{sec_literature}.}} \textcolor{black}{In contrast, our \textcolor{black}{SS approach is} much more flexible \textcolor{black}{and seamless}, allowing for \emph{any} reasonable strategies (parametric, semi-parametric or non-parametric) \textcolor{black}{for 
estimating} the nuisance functions.} Moreover, \textcolor{black}{even if this improvement in robustness is set aside}, our \textcolor{black}{SS} estimators are ensured to be \emph{more efficient} than their supervised counterparts, and are \textcolor{black}{also} semi-parametrically \emph{optimal} when correctly specifying both the propensity score $\pi({\mathbf X})$ and the outcome model, i.e., ${\cal E}(Y\mid{\mathbf X})$ or $F(\cdot\mid{\mathbf X})$ for the ATE or the QTE, respectively\textcolor{black}{; see Remarks \ref{remark_ate_efficiency} and \ref{remark_qte_efficiency}, in particular, regarding these efficiency claims, and Table \ref{table_ate_summary} for a full characterization of the robustness and efficiency benefits of our SS estimators.} \vskip0.1in \item Compared to the case of the ATE, QTE estimation is substantially more \emph{challenging} in both theory and implementation due to the non-separability of $Y$ and $\theta$ in the quantile estimating equation \eqref{defqte}. To overcome these difficulties, we establish novel empirical process results for deriving the properties of our QTE estimators; see Lemma \ref{1v2} in \textcolor{black}{Appendix} \ref{sm_lemmas}. In addition, we adopt the strategy of \emph{one-step update} \citep{van2000asymptotic, tsiatis2007semiparametric} in the construction of our QTE estimators to facilitate computation. This strategy also avoids the laborious task of recovering the conditional distribution function $F(\cdot\mid{\mathbf X})$ for the whole parameter space of $\theta_0$. Instead, we \emph{only} need to estimate $F(\cdot\mid{\mathbf X})$ at one \emph{single} point. Such an advantage was advocated by \citet{kallus2019localized} as well.
\textcolor{black}{Our QTE (as well as ATE) estimators thus have \emph{simple implementations}, in general.} \vskip0.1in \item \textcolor{black}{Finally, a}nother major contribution \textcolor{black}{of this work, though of a somewhat different flavor,} \textcolor{black}{concerns our results on} the \emph{nuisance function\textcolor{black}{s'} estimation} \textcolor{black}{(Section \ref{secnf})} \textcolor{black}{-- an important component in all our SS estimators' implementation --} for which we consider a \emph{variety} of reasonable and flexible approaches\textcolor{black}{,} including kernel smoothing \textcolor{black}{(with possible use of dimension reduction)}, parametric regression and random forests. \textcolor{black}{In particular,} as a \textcolor{black}{detailed} illustration, we verify the high-level conditions required by our methods for \emph{IPW type kernel smoothing estimators with \textcolor{black}{so-called} ``generated'' covariates} \citep{mammen2012nonparametric, escanciano2014uniform, mammen_rothe_schienle_2016} \textcolor{black}{involving (unknown) transformations of possibly \emph{high dimensional} covariates}. Specifically, we investigate in detail their \emph{uniform \textcolor{black}{($L_{\infty}$)} convergence rates}, extending the existing theory to cases involving high dimensionality and \textcolor{black}{IPW schemes that need to be estimated}; see Theorems \ref{theorem_ks_ate} and \ref{thhd}. \textcolor{black}{These results are novel to the best of our knowledge, and can be applicable more generally in other problems. Thus they should be of independent interest.} \end{enumerate} \subsection{Organization \textcolor{black}{of the rest} of the article} We introduce our \textcolor{black}{family of} SS estimators for (a) the ATE and (b) the QTE, as well as establish their asymptotic properties, in Sections \ref{secos} and \ref{secqte}, respectively. Then the choice and estimation of \textcolor{black}{the} nuisance functions \textcolor{black}{involved} in our approaches, \textcolor{black}{along with their theoretical properties}, are discussed in Section \ref{secnf}. Section \ref{sec_simulations} presents \textcolor{black}{detailed} simulation results \textcolor{black}{under various data generating settings to validate the claimed properties and improvements} of our proposed methods, followed by an empirical data example in Section \ref{sec_data_analysis}. Concluding remark\textcolor{black}{s} along with discussion\textcolor{black}{s} on possible extension\textcolor{black}{s} of our work \textcolor{black}{are} provided \textcolor{black}{in} Section \ref{sec_conclusion_discussion}. \textcolor{black}{Further details on extending our SS approaches to more general causal estimands, as well as} \textcolor{black}{all} technical materials, \textcolor{black}{including proofs of all results}, and further numerical results, can be found in the Supplementary Material \textcolor{black}{(Appendices \ref{sm_Z_estimation}--\ref{sm_data_analysis})}.
\section{SS estimation for the ATE}\label{secos} \textcolor{black}{Following our clarification at the end of Section \ref{sec:psetup}, it suffices to focus only on the SS estimation of $\mu_0$, as in \eqref{generic_notation}, which will be our primary goal in Sections \ref{sec_ate_sup}--\ref{sec_ate_u_dagger}, after which we formally address SS inference for the ATE in Section \ref{sec_ate_difference}.} \vspace{-0.02in} \subsection*{Notation\textcolor{black}{s}} \textcolor{black}{We first introduce some notations that will be used throughout the paper.} \textcolor{black}{We} use the lowercase letter $c$ to represent a generic positive constant, including $c_1$, $c_2$, etc., which may vary from line to line. For a $d_1\times d_2$ matrix $\mathbf{P}$ whose $(i,j)$th component is $\mathbf{P}_{[ij]}$, \textcolor{black}{we} let \begin{eqnarray*} &&\hbox{$\|\mathbf{P}\|_0~:=~\max_{1\leq j\leq d_2}\{\sum_{i=1}^{d_1}I(\mathbf{P}_{[ij]}\neq 0)\},~~ \|\mathbf{P}\|_1~:=~\max_{1\leq j\leq d_2}(\sum_{i=1}^{d_1}|\mathbf{P}_{[ij]}|)$}, \\ &&\|\mathbf{P}\|~:=~\hbox{$\max_{1\leq j\leq d_2}\{(\sum_{i=1}^{d_1}\mathbf{P}_{[ij]}^2)^{1/2}\},~~ \textcolor{black}{\mbox{and}}~~ \|\mathbf{P}\|_\infty~:=~\max_{1\leq i\leq d_1, 1\leq j\leq d_2}|\mathbf{P}_{[ij]}|$}. \end{eqnarray*} The bold numbers $\mathbf{1}_d$ and $\mathbf{0}_d$ refer to $d$-dimensional vectors of ones and zeros, respectively. \textcolor{black}{We d}enote $ \mathcal{B}(\mbox{\boldmath $\alpha$},{\varepsilon}):=\{{\bf a}:\|{\bf a}-\mbox{\boldmath $\alpha$}\|\leq{\varepsilon}\}$ as a generic neighborhood of a vector $\mbox{\boldmath $\alpha$}$ with some \textcolor{black}{radius} ${\varepsilon}>0$. We use $\mbox{\boldmath $\alpha$}_{[j]}$ to \textcolor{black}{denote} the $j$th component of a vector $\mbox{\boldmath $\alpha$}$.
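A minimal \texttt{numpy} sketch of these four (columnwise-maximum) matrix norms, for readers who prefer code to notation; the function names are our own illustrative choices:

```python
import numpy as np

def norm_0(P):
    # max over columns of the number of nonzero entries
    return int(np.count_nonzero(P, axis=0).max())

def norm_1(P):
    # max over columns of the sum of absolute entries
    return np.abs(P).sum(axis=0).max()

def norm_2(P):
    # the unsubscripted norm ||P||: max over columns of the Euclidean column norm
    return np.sqrt((P ** 2).sum(axis=0)).max()

def norm_inf(P):
    # largest absolute entry of the matrix
    return np.abs(P).max()

P = np.array([[1.0, 0.0],
              [-2.0, 3.0]])
```

Note that $\|\mathbf{P}\|_1$ here coincides with the induced (maximum absolute column sum) matrix $1$-norm, while $\|\mathbf{P}\|_\infty$ as defined above is the entrywise maximum, not the induced $\infty$-norm.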
For two data sets $\mathcal{S}_1$ and $\mathcal{S}_2$, \textcolor{black}{we} define ${\mathbb P}_{\mathcal{S}_1}(\cdot\mid\mathcal{S}_2)$ as the conditional probability with respect to $\mathcal{S}_1$ given $\mathcal{S}_2$. For any random function $\widehat{g}(\cdot,\theta)$ and a random vector ${\mathbf W}$ with copies ${\mathbf W}_1,\ldots,{\mathbf W}_{n+N}$, \textcolor{black}{we} denote \begin{eqnarray*} {\cal E}_{{\mathbf W}}\{\widehat{g}({\mathbf W},\theta)\}~:=~ \hbox{$\int$} \widehat{g}({\mathbf w}, \theta) d\,{\mathbb P}_{{\mathbf W}}({\mathbf w}) \end{eqnarray*} as the expectation of $\widehat{g}({\mathbf W},\theta)$ with respect to ${\mathbf W}$\textcolor{black}{,} treating $\widehat{g}(\cdot,\theta)$ as a non\textcolor{black}{-}random function, where ${\mathbb P}_{{\mathbf W}}(\cdot)$ is the distribution function of ${\mathbf W} $. For $M\in\{n,n+N\}$, \textcolor{black}{we} write \begin{eqnarray*} &&{\cal E}_M\{\widehat{g}({\mathbf W},\theta)\}~:=~ M^{-1}\hbox{$\sum_{i=1}^M$} \widehat{g}({\mathbf W}_i,\theta),\\ &&\mathbb{G}_M\{\widehat{g}({\mathbf W},\theta)\}~:=~ M^{1/2}[{\cal E}_M\{\widehat{g}({\mathbf W},\theta)\}-{\cal E}_{\mathbf W}\{\widehat{g}({\mathbf W},\theta)\}], ~~\textcolor{black}{\mbox{and}}\\ &&\hbox{var}_M\{\widehat{g}({\mathbf W},\theta)\}~:=~{\cal E}_M[\{\widehat{g}({\mathbf W},\theta)\}^2]-[{\cal E}_M\{\widehat{g}({\mathbf W},\theta)\}]^2. \end{eqnarray*} Also, \textcolor{black}{we} define \begin{eqnarray*} &&{\cal E}_N\{\widehat{g}({\mathbf W},\theta)\}~:=~ N^{-1}\hbox{$\sum_{i=n+1}^{n+N}$} \widehat{g}({\mathbf W}_i,\theta), ~~\textcolor{black}{\mbox{and}}\\ &&\mathbb{G}_N\{\widehat{g}({\mathbf W},\theta)\}~:=~ N^{1/2}[{\cal E}_N\{\widehat{g}({\mathbf W},\theta)\}-{\cal E}_{\mathbf W}\{\widehat{g}({\mathbf W},\theta)\}]. 
\end{eqnarray*} Lastly, we \textcolor{black}{let} $f(\cdot)$ and $F(\cdot)$ \textcolor{black}{denote} the density and distribution functions of $Y$, while $f(\cdot\mid{\mathbf w})$ and $F(\cdot\mid{\mathbf w})$ represent the conditional density and distribution functions of $Y$ given ${\mathbf W}={\mathbf w}$. \subsection{Supervised \textcolor{black}{estimator}} \label{sec_ate_sup} As noted earlier, for estimating \textcolor{black}{the} ATE, we \textcolor{black}{can simply} focus on $\mu_0\equiv{\cal E}(Y)$ with $Y\equiv Y(1)$. To this end, we first observe the following representation \textcolor{black}{(and identification)} of $\mu_0$. Let $m({\mathbf X}):={\cal E}(Y\mid{\mathbf X})$ \textcolor{black}{and recall $\pi({\mathbf X}) \equiv {\cal E}(T\mid {\mathbf X})$. We then have:} \begin{eqnarray*} \mu_0 &~=~& {\cal E}\{m({\mathbf X}) \}+ {\cal E}[\{\pi^*({\mathbf X})\}^{-1} T\{Y-m({\mathbf X})\}] \\ & ~=~& {\cal E}\{ m^*({\mathbf X}) \} + {\cal E}[\{\pi({\mathbf X})\}^{-1} T\{Y-m^*({\mathbf X})\}]\textcolor{black}{,} \end{eqnarray*} for some \emph{arbitrary} functions $\pi^*(\cdot)$ and $m^*(\cdot)$, implying that the equivalence\textcolor{black}{:} \begin{eqnarray} \mu_0 &~=~& {\cal E}\{m^*({\mathbf X}) \}+ {\cal E}[\{\pi^*({\mathbf X})\}^{-1} T\{Y - m^*({\mathbf X}) \}] \label{ate_dr_representation} \end{eqnarray} holds given either $\pi^*({\mathbf X})=\pi({\mathbf X})$ or $ m^*({\mathbf X}) = m({\mathbf X}) $ but {\it not} necessarily both. The equation \eqref{ate_dr_representation} is thus a DR representation of $\mu_0$\textcolor{black}{, involving the nuisance functions $\pi(\cdot)$ and $m(\cdot)$}. 
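The double robustness of the representation \eqref{ate_dr_representation} is easy to verify by simulation. The sketch below uses a hypothetical data generating process (bounded logistic-type propensity, linear outcome regression, chosen only for illustration) and evaluates the empirical analogue of \eqref{ate_dr_representation} under different nuisance configurations:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500_000

# Hypothetical DGP: pi(x) in (0.2, 0.8) satisfies positivity; m(x) = E(Y | X = x).
X = rng.normal(size=n)
pi = 0.2 + 0.6 / (1.0 + np.exp(-X))     # true propensity score pi(X)
T = rng.binomial(1, pi)                 # treatment indicator
m = 1.0 + 2.0 * X                       # true outcome regression m(X)
Y = m + rng.normal(size=n)              # outcome Y = Y(1); here mu_0 = E(Y) = 1

def dr_value(m_star, pi_star):
    # Empirical analogue of E{m*(X)} + E[{pi*(X)}^{-1} T {Y - m*(X)}]
    return np.mean(m_star) + np.mean(T * (Y - m_star) / pi_star)

both_correct     = dr_value(m, pi)                   # both nuisances correct
wrong_outcome    = dr_value(np.zeros(n), pi)         # m* = 0 wrong, pi correct
wrong_propensity = dr_value(m, np.full(n, 0.5))      # pi* = 0.5 wrong, m correct
```

The value stays near $\mu_0$ whenever at least one nuisance function is correct, and drifts away from $\mu_0$ when both are misspecified, which is exactly the DR trade-off.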
Using the empirical version of \eqref{ate_dr_representation} based on $ \mathcal{L}$ precisely leads to the traditional DR estimator of the mean $\mu_0$ \citep{bang2005doubly, chernozhukov2018double}, i.e., the \emph{supervised estimator} \begin{eqnarray} \hat{\mu}_{\mbox{\tiny SUP}}~:=~{\cal E}_n\{\hat{m}_n({\mathbf X})\}+{\cal E}_n[\{\hat{\pi}_n({\mathbf X})\}^{-1}T\{Y-\hat{m}_n({\mathbf X})\}], ~~\textcolor{black}{\mbox{where}} \label{sup_ate} \end{eqnarray} $\{\hat{\pi}_n(\cdot),\hat{m}_n(\cdot)\}$ are \textcolor{black}{\it some} estimators of $\{\pi(\cdot),m(\cdot)\}$ from $ \mathcal{L}$ with possibly misspecified limits $\{\pi^*(\cdot),m^*(\cdot)\}$. Apart from \textcolor{black}{being} DR, the estimator $\hat{\mu}_{\mbox{\tiny SUP}}$ also possesses the two nice properties below as long as \textcolor{black}{the} models for $\{\pi(\cdot),m(\cdot)\}$ are \textcolor{black}{\it both} correctly specified and \textcolor{black}{certain rate conditions \citep{chernozhukov2018double} on the convergence of $\{\hat{\pi}_n(\cdot),\hat{m}_n(\cdot)\}$} are satisfied. \begin{enumerate}[(i)] \item First-order insensitivity \textcolor{black}{-- When both nuisance models are correctly specified, t}he influence function of $\hat{\mu}_{\mbox{\tiny SUP}}$ is not affected by the estimation errors of $\{\hat{\pi}_n(\cdot),\hat{m}_n(\cdot)\}$ \citep{robins1995semiparametric, chernozhukov2018double, chakrabortty2019high}. This feature is directly relevant to the {\it debiasing} term ${\cal E}_n[\{\hat{\pi}_n({\mathbf X})\}^{-1}T\{Y-\hat{m}_n({\mathbf X})\}]$ in \eqref{sup_ate} and is desirable for inference, particularly when the construction of $\{\hat{\pi}_n(\cdot),\hat{m}_n(\cdot)\}$ involves non-parametric calibrations or \textcolor{black}{if} ${\mathbf X}$ is high dimensional \textcolor{black}{(leading to rates slower than $n^{-1/2}$)}.
\par\smallskip \item Semi-parametric optimality among all regular and asymptotically linear estimators for $\mu_0$ \textcolor{black}{--} $\hat{\mu}_{\mbox{\tiny SUP}}$ attains the semi-parametric efficiency bound for estimating $\mu_0$ under a fully non-parametric (i.e., unrestricted up to the condition \eqref{mar_positivity}) family \textcolor{black}{of} distributions of $(Y,T,{\mathbf X}^{\rm T})^{\rm T}$ \citep{robins1994estimation, robins1995semiparametric, graham2011efficiency}\textcolor{black}{.} \end{enumerate} In view of the aforementioned advantages, $\hat{\mu}_{\mbox{\tiny SUP}}$ is the ``best'' achievable estimator for $\mu_0$ under a purely supervised setting \textcolor{black}{\citep{robins1995semiparametric, chernozhukov2018double}}. \subsection[A family of SS estimators \textcolor{black}{for mu0}]{A family of SS estimators \textcolor{black}{for $\mu_0$}}\label{sec_ate_ss} Despite the above desirable properties, the supervised DR estimator $\hat{\mu}_{\mbox{\tiny SUP}}$ may, however, be suboptimal when the unlabeled data $ \mathcal{U}$ is available, as it ignores the extra observations for $(T,{\mathbf X}^{\rm T})^{\rm T}$ \textcolor{black}{therein}. An intuitive interpretation is that, since ${\cal E}(Y-\mu_0\mid{\mathbf X})\neq 0$ with a positive probability if we exclude the trivial case where ${\cal E}(Y\mid{\mathbf X})=\mu_0$ almost surely, the marginal distribution ${\mathbb P}_{\mathbf X}$ of ${\mathbf X}$ actually plays a role in the definition of $\mu_0$ and the information of ${\mathbb P}_{\mathbf X}$ provided by $ \mathcal{U}$ can therefore help estimate $\mu_0$; see Chapter 2 of \citet{Chakrabortty_Thesis_2016} for further insights in a more general context. \textcolor{black}{To} utilize $ \mathcal{U}$, we notice that the term ${\cal E}_n\{\hat{m}_n({\mathbf X})\}$ in \eqref{sup_ate} can be replaced by ${\cal E}_{n+N}\{\hat{m}_n({\mathbf X})\}$, which integrates $ \mathcal{L}$ and $ \mathcal{U}$.
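The variance gain from this replacement is easy to see in a small Monte Carlo sketch (with a hypothetical linear $m(\cdot)$ and $N\gg n$, chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n, N, reps = 500, 20_000, 500
m = lambda x: 1.0 + 2.0 * x           # hypothetical outcome regression m(x)

est_n  = np.empty(reps)               # E_n{m(X)}: labeled data only
est_nN = np.empty(reps)               # E_{n+N}{m(X)}: labeled + unlabeled
for r in range(reps):
    X = rng.normal(size=n + N)
    est_n[r]  = m(X[:n]).mean()
    est_nN[r] = m(X).mean()
```

The standard deviation of the plug-in term shrinks by roughly the factor $\{n/(n+N)\}^{1/2}$ once the unlabeled covariates are averaged in.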
Moreover, estimation of the propensity score can certainly be improved by using $ \mathcal{U}$ as well, since $\pi({\mathbf X})$ is entirely determined by the distribution of $(T,{\mathbf X}^{\rm T})^{\rm T}$. \textcolor{black}{This provides a much better chance to estimate $\pi(\cdot)$ more \emph{robustly} (possibly at a faster rate!).} \vskip0.05in \textcolor{black}{Thus,} with \textcolor{black}{\emph{any} estimators (with possibly misspecified limits)} $\hat{\pi}_N(\cdot)$ for $\pi(\cdot)$, based on $ \mathcal{U}$, and $\hat{m}_n(\cdot)$ \textcolor{black}{for $m(\cdot)$} from $ \mathcal{L}$\textcolor{black}{,} as before, we \textcolor{black}{now} propose a family of \textcolor{black}{\emph{SS estimators} of $\mu_0$:} \begin{eqnarray} \hat{\mu}_{\mbox{\tiny SS}}~:=~{\cal E}_{n+N}\{\hat{m}_n({\mathbf X})\}+{\cal E}_n[\{\hat{\pi}_N({\mathbf X})\}^{-1}T\{Y-\hat{m}_n({\mathbf X})\}]\textcolor{black}{,} \label{ss_ate} \end{eqnarray} indexed by $\{\hat{\pi}_N(\cdot),\hat{m}_n(\cdot)\}$. Here\textcolor{black}{,} we apply the strategy of \textcolor{black}{\emph{cross fitting}} \citep{chernozhukov2018double, newey2018cross} when estimating $\hat{m}_n(\cdot)$. Specifically, for some fixed integer $\mathbb{K}\geq 2$, we divide the index set ${\cal I}=\{1,\ldots,n\}$ into $\mathbb{K}$ disjoint subsets ${\cal I}_1,\ldots,{\cal I}_\mathbb{K}$, each of size $n_\mathbb{K}:=n/\mathbb{K}$, assumed integral without loss of generality. Let $\hat{m}_{n,k}(\cdot)$ be an estimator for $m^*(\cdot)$ using the set $ \mathcal{L}_k^-:=\{{\bf Z}_i:i\in{\cal I}_k^-\}$ of size $n_{\mathbb{K}^-}:=n-n_\mathbb{K}$, where ${\cal I}_k^-:={\cal I}\setminus{\cal I}_k$. Then\textcolor{black}{,} we define\textcolor{black}{:} \begin{eqnarray} \hat{m}_n({\mathbf X}_i)&~:=~&\mathbb{K}^{-1}\hbox{$\sum_{k=1}^{\mathbb{K}}$}\hat{m}_{n,k}({\mathbf X}_i)\quad (i=n+1,\ldots,n+N), \quad \textcolor{black}{\mbox{and}} \label{ds1}\\ \hat{m}_n({\mathbf X}_i)&~:=~&\hat{m}_{n,k}({\mathbf X}_i)\quad (i\in{\cal I}_k;\ k=1,\ldots,\mathbb{K}).
\label{ds2} \end{eqnarray} The motivation for cross fitting is to bypass technical challenges arising from the dependence between $\hat{m}_n(\cdot)$ and ${\mathbf X}_i$ in the term $\hat{m}_n({\mathbf X}_i)$ $(i=1,\ldots,n)$. Without cross fitting, the same theoretical conclusions require more \emph{stringent} assumptions, in the same spirit as the stochastic equicontinuity conditions in the classical theory of empirical processes. These assumptions are generally hard to verify and less likely to hold in high dimensional scenarios. Essentially, using cross fitting makes the second-order errors in the stochastic expansion of $\hat{\mu}_{\mbox{\tiny SS}}$ easier to control while \emph{not} changing the first-order properties, i.e., the influence function of $\hat{\mu}_{\mbox{\tiny SS}}$. See Theorem 4.2 and the following discussion in \citet{chakrabortty2018efficient}, as well as \citet{chernozhukov2018double} and \citet{newey2018cross}, for more discussion concerning cross fitting. Analogously, when estimating $\pi(\cdot)$, we use $ \mathcal{U}$ only, so that $\hat{\pi}_N(\cdot)$ and ${\mathbf X}_i$ are independent in $\hat{\pi}_N({\mathbf X}_i)$ $(i=1,\ldots,n)$. Discarding $ \mathcal{L}$ here is asymptotically negligible owing to the assumption \eqref{disproportion}. \vskip0.1in The definition \eqref{ss_ate} equips us with a \emph{family} of SS estimators for $\mu_0$, \emph{indexed} by $\hat{\pi}_N(\cdot)$ and $\hat{m}_n(\cdot)$. To derive their limiting properties, we need the following \textcolor{black}{(high-level)} conditions.
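Before turning to these conditions, we note that the construction of $\hat{\mu}_{\mbox{\tiny SS}}$ in \eqref{ss_ate} with the cross fitting scheme \eqref{ds1}--\eqref{ds2} is straightforward to implement. The following is a minimal numerical sketch, not part of the formal development: the helper \texttt{fit\_outcome} is a hypothetical placeholder outcome learner (ordinary least squares among treated units), and, purely for illustration, a constant working propensity model is fitted on the unlabeled data.

```python
import numpy as np

def fit_outcome(X, T, Y):
    # Placeholder outcome learner: OLS of Y on (1, X) among treated units.
    # Stands in for any estimator \hat{m}_{n,k} with (possibly misspecified) limit m^*.
    Z = np.column_stack([np.ones(int(T.sum())), X[T == 1]])
    beta, *_ = np.linalg.lstsq(Z, Y[T == 1], rcond=None)
    return lambda Xnew: np.column_stack([np.ones(len(Xnew)), Xnew]) @ beta

def ss_dr_estimate(X_lab, T_lab, Y_lab, X_unl, T_unl, K=2):
    # Cross-fitted SS DR estimate of mu_0 = E{Y(1)}, following (ss_ate)
    # with the splitting scheme (ds1)-(ds2).
    n = len(Y_lab)
    pi_hat = T_unl.mean()                       # constant working propensity from U
    folds = np.array_split(np.arange(n), K)     # I_1, ..., I_K
    m_lab = np.empty(n)
    m_unl = np.zeros(len(X_unl))
    for idx in folds:
        train = np.ones(n, dtype=bool)
        train[idx] = False                      # training indices I_k^-
        m_k = fit_outcome(X_lab[train], T_lab[train], Y_lab[train])
        m_lab[idx] = m_k(X_lab[idx])            # (ds2): out-of-fold fits on L
        m_unl += m_k(X_unl) / K                 # (ds1): fold-averaged fits on U
    debias = np.mean(T_lab * (Y_lab - m_lab) / pi_hat)
    return np.r_[m_lab, m_unl].mean() + debias  # E_{n+N}{m_hat} + debiasing term
```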
\begin{assumption}\label{api4} The function $\hat{D}_N({\mathbf x}):=\{\hat{\pi}_N({\mathbf x})\}^{-1}-\{\pi^*({\mathbf x})\}^{-1}$ satisfies\textcolor{black}{:} \begin{eqnarray} &&({\cal E}_{\mathbf X}[\{\hat{D}_N({\mathbf X})\}^2])^{1/2}~=~O_p(s_N), ~~ \textcolor{black}{\mbox{and}} \label{sn2} \\ &&\{{\cal E}_{\mathbf Z}([\hat{D}_N({\mathbf X})\{Y- m^*({\mathbf X}) \}]^2)\}^{1/2}~=~O_p(b_N)\textcolor{black}{,} \label{sn4} \end{eqnarray} for some positive sequences $s_N$ and $b_N$ that \textcolor{black}{can possibly diverge,} where $\pi^*(\cdot)$ is \textcolor{black}{\it some} function \textcolor{black}{(target of $\hat{\pi}_N(\cdot)$)} such that $\pi^*({\mathbf x})\in(c,1-c)$ for any ${\mathbf x}\in {\cal X}$ and some constant $c\in(0,1)$. \end{assumption} \begin{assumption}\label{ahmu} The estimator $\hat{m}_{n,k}(\cdot)$ satisfies\textcolor{black}{: for {\it some} function $m^*(\cdot)$,} \begin{eqnarray} &&{\cal E}_{\mathbf X}\{|\hat{m}_{n,k}({\mathbf X})- m^*({\mathbf X}) |\}~=~O_p(w_{n,1}), ~~ \textcolor{black}{\mbox{and}} \label{wn1}\\ &&({\cal E}_{\mathbf X}[\{\hat{m}_{n,k}({\mathbf X})- m^*({\mathbf X}) \}^2])^{1/2}~=~O_p(w_{n,2})\quad (k=1,\ldots,\mathbb{K})\textcolor{black}{,} \label{wn2} \end{eqnarray} for some positive sequences $w_{n,1}$ and $w_{n,2}$ that are possibly divergent. \end{assumption} \begin{remark}\label{remark_ate_assumptions} Assumptions \ref{api4}--\ref{ahmu} impose some rather mild \textcolor{black}{(and \emph{high-level})} restrictions on the behavior of the estimators $\{\hat{\pi}_N(\cdot),\hat{m}_n(\cdot)\}$ and their possibly \textcolor{black}{\it misspecified} limit\textcolor{black}{s} $\{\pi^*(\cdot),m^*(\cdot)\}$. The condition \eqref{sn4} is satisfied when, for example, the function $\hat{D}_N({\mathbf X})$ is such that $({\cal E}_{\mathbf X}[\{\hat{D}_N({\mathbf X})\}^4])^{1/4}=O_p(b_N)$\textcolor{black}{,} while $Y$ and $m^*({\mathbf X})$ have finite fourth moments.
The restriction on $\pi^*(\cdot)$ in Assumption \ref{api4} is the counterpart of the second condition in \eqref{mar_positivity} under model misspecification, ensuring that our estimators $\hat{\mu}_{\mbox{\tiny SS}}$ have influence functions with finite variances; see Theorem \ref{thate}. Moreover, it is noteworthy that all the sequences in Assumptions \ref{api4}--\ref{ahmu} are allowed to \textcolor{black}{\emph{diverge},} while specifying \textcolor{black}{\emph{only}} the rates of finite norms \textcolor{black}{(i.e., $L_r$ moments for some finite $r$)} \textcolor{black}{of} $\hat{D}_N({\mathbf X})$ and $\hat{m}_{n,k}({\mathbf X})-m^*({\mathbf X})$, \textcolor{black}{which is} weaker than requiring their uniform convergence \textcolor{black}{over} ${\mathbf x}\in {\cal X}$ \textcolor{black}{(i.e., $L_{\infty}$ convergence)}. These assumptions will be verified for some choices of $\{\hat{\pi}_N(\cdot),\hat{m}_n(\cdot),\pi^*(\cdot),m^*(\cdot)\}$ in Section \ref{secnf}. \par\smallskip In the theorem below, we present the stochastic expansion \textcolor{black}{(and a complete characterization of the asymptotic properties)} of our SS estimators $\hat{\mu}_{\mbox{\tiny SS}}$ defined in \eqref{ss_ate}.
\end{remark} \begin{theorem}\label{thate} Under Assumptions \ref{ass_equally_distributed} and \ref{api4}--\ref{ahmu}, the stochastic expansion of $\hat{\mu}_{\mbox{\tiny SS}}$ is\textcolor{black}{:} \begin{eqnarray*} &&\hat{\mu}_{\mbox{\tiny SS}}-\mu_0~=~n^{-1}\hbox{$\sum_{i=1}^n$}\zeta_{n,N}({\mathbf Z}_i)~+~O_p\{n^{-1/2}(w_{n,2}+b_N)+s_N\,w_{n,2}\}~+ \\ && \phantom{\hat{\mu}_{\mbox{\tiny SS}}-\mu_0~=~}~I\{\pi^*({\mathbf X})\neq\pi({\mathbf X})\}O_p(w_{n,1})~+~I\{ m^*({\mathbf X}) \neq m({\mathbf X}) \}O_p(s_N)\textcolor{black}{,} \end{eqnarray*} when $\nu\geq 0$, where \textcolor{black}{$I(\cdot)$ is the indicator function as defined earlier, and} \begin{eqnarray*} \zeta_{n,N}({\mathbf Z})~:=~\{\pi^*({\mathbf X})\}^{-1}T\{Y- m^*({\mathbf X}) \}~+~{\cal E}_{n+N}\{ m^*({\mathbf X}) \}~-~\mu_0\textcolor{black}{,} \end{eqnarray*} \textcolor{black}{with} ${\cal E}\{\zeta_{n,N}({\mathbf Z})\}=0$ \textcolor{black}{if} either $\pi^*({\mathbf X})=\pi({\mathbf X})$ or $m^*({\mathbf X}) = m({\mathbf X})$ but not necessarily both. \end{theorem} Theorem \ref{thate} establishes the \textcolor{black}{\emph{asymptotic linearity}} of $\hat{\mu}_{\mbox{\tiny SS}}$ for the \textcolor{black}{\emph{general}} case where $\nu\geq 0$, i.e., the labeled and unlabeled data sizes are either comparable or not. \textcolor{black}{However, in the typical SS setting \eqref{disproportion}, i.e., $\nu = 0$,} the number of extra observations for $(T,{\mathbf X}^{\rm T})^{\rm T}$, whose distribution completely determines the propensity score $\pi({\mathbf X})$, available from the unlabeled data $ \mathcal{U}$ is much larger than the labeled data size $n$; it is therefore fairly reasonable to assume that $\pi({\mathbf X})$ can be correctly specified \textcolor{black}{(i.e., $\pi^*(\cdot) = \pi(\cdot)$) \emph{and}} estimated \textcolor{black}{from $ \mathcal{U}$} at a rate \textcolor{black}{\emph{faster}} than $n^{-1/2}$.
We therefore study the asymptotic behavior of our proposed estimators $\hat{\mu}_{\mbox{\tiny SS}}$ under such an assumption in the next corollary, which directly follows from Theorem \ref{thate}. \begin{corollary}\label{corate} Suppose that the conditions in Theorem \ref{thate} hold true, that $\nu=0$, \textcolor{black}{as in \eqref{disproportion}}, and that $\pi^*({\mathbf X})=\pi({\mathbf X})$. Then the stochastic expansion of $\hat{\mu}_{\mbox{\tiny SS}}$ is\textcolor{black}{:} \begin{eqnarray*} &&\hat{\mu}_{\mbox{\tiny SS}}-\mu_0~=~n^{-1}\hbox{$\sum_{i=1}^n$}\zeta_{\mbox{\tiny SS}}({\mathbf Z}_i)~+~O_p\{n^{-1/2}(w_{n,2}+b_N)+s_N\,w_{n,2}\}~+ \\ &&\phantom{\hat{\mu}_{\mbox{\tiny SS}}-\mu_0~=~}~I\{ m^*({\mathbf X}) \neq m({\mathbf X}) \}O_p(s_N), \end{eqnarray*} where \begin{eqnarray*} \zeta_{\mbox{\tiny SS}}({\mathbf Z})~:=~\{\pi({\mathbf X})\}^{-1}T\{Y- m^*({\mathbf X}) \} ~+~ {\cal E}\{ m^*({\mathbf X}) \} ~-~ \mu_0\textcolor{black}{,} \end{eqnarray*} satisfying ${\cal E}\{\zeta_{\mbox{\tiny SS}}({\mathbf Z})\}=0$\textcolor{black}{, and with $m^*(\cdot)$ being arbitrary (i.e., not necessarily equal to $m(\cdot)$)}. Further, if either $s_N=o(n^{-1/2})$ or $ m^*({\mathbf X}) = m({\mathbf X}) $ but not necessarily both, and \begin{eqnarray*} n^{-1/2}(w_{n,2}+b_N)+s_N\,w_{n,2}~=~o(n^{-1/2}), \end{eqnarray*} the limiting distribution of $\hat{\mu}_{\mbox{\tiny SS}}$ is\textcolor{black}{:} \begin{eqnarray} n^{1/2}\lambda_{\mbox{\tiny SS}}^{-1}(\hat{\mu}_{\mbox{\tiny SS}}-\mu_0)~\xrightarrow{d}~\mathcal{N}(0,1)\quad (n,\textcolor{black}{N} \to\infty), \label{ate_normality} \end{eqnarray} where the asymptotic variance $\lambda_{\mbox{\tiny SS}}^2:={\cal E}[\{\zeta_{\mbox{\tiny SS}}({\mathbf Z})\}^2]=\hbox{var}[\{\pi({\mathbf X})\}^{-1}T\{Y- m^*({\mathbf X}) \}]$ can be estimated by $\hbox{var}_n[\{\hat{\pi}_N({\mathbf X})\}^{-1}T\{Y-\hat{m}_n({\mathbf X})\}]$. 
\end{corollary} \begin{remark}[Robustness \textcolor{black}{benefits} and first-order insensitivity of $\hat{\mu}_{\mbox{\tiny SS}}$]\label{remark_ate_robustness} According to the conclusions in Theorem \ref{thate}, as long as the residual terms in the expansion vanish asymptotically, our proposed estimators $\hat{\mu}_{\mbox{\tiny SS}}$ converge to $\mu_0$ in probability provided that either $\hat{\pi}_N(\cdot)$ targets the true $\pi(\cdot)$ or $\hat{m}_{n,k}(\cdot)$ targets the true $m(\cdot)$\textcolor{black}{,} but \textcolor{black}{\it not} necessarily both. Apart from such \textcolor{black}{a} DR property\textcolor{black}{,} which can be attained using only the labeled data $ \mathcal{L}$ as well \citep{bang2005doubly, kang2007demystifying}, Corollary \ref{corate} further establishes the $n^{1/2}$-consistency and asymptotic normality of $\hat{\mu}_{\mbox{\tiny SS}}$, two critical properties for inference, \textcolor{black}{\it whenever} $\hat{\pi}_N({\mathbf X})$ converges to $\pi({\mathbf X})$ at a rate faster than $n^{-1/2}$, by exploiting the information regarding the distribution of $(T,{\mathbf X}^{\rm T})^{\rm T}$ from the unlabeled data $ \mathcal{U}$. \textcolor{black}{Notably, this holds {\it regardless} of whether $m(\cdot)$ is correctly specified or not}. To attain the same \textcolor{black}{kind of} result without $ \mathcal{U}$, it is generally necessary to require that $\{\pi(\cdot),m(\cdot)\}$ are both correctly specified, unless additional bias corrections are applied \textcolor{black}{(and in a nuanced targeted manner)} and \textcolor{black}{specific (linear$/$logistic) forms of $\{\pi(\cdot),m(\cdot)\}$ are assumed} \citep{vermeulen2015bias, smucler2019unifying, tan2020model, dukes2021inference}.
\textcolor{black}{Such a significant relaxation of the requirements demonstrates that our SS ATE estimators actually enjoy much better robustness relative to the ``best'' achievable estimators in purely supervised setups.} \textcolor{black}{These benefits of SS causal inference ensure that {\it $n^{1/2}$-rate inference on the ATE (or QTE) can be achieved in a seamless way}, regardless of the misspecification of the outcome model, and moreover, without requiring any specific forms for either of the nuisance model(s).} \textcolor{black}{It should also be noted that these benefits are quite different in flavor from those in many ``standard'' (non-causal) SS problems, such as mean estimation \citep{zhang2019semi, zhang2019high} and linear regression \citep{azriel2016semi, chakrabortty2018efficient}, where the supervised methods possess full robustness (as the parameter needs no nuisance function for its identification) and the main goal of SS inference is efficiency improvement. For causal inference, however, we have a more challenging setup, where the supervised methods have to deal with nuisance functions -- inherently required for the parameter's identification and consistent estimation -- and are no longer fully robust. The SS setup enables one to attain extra robustness, compared to purely supervised methods, by leveraging the unlabeled data.} \textcolor{black}{Thus, for causal inference, the SS setting in fact provides a {\it broader scope of improvement -- in both robustness and efficiency} -- we discuss the latter aspect in Section \ref{sec_ate_efficiency_comparison} below.} \textcolor{black}{Lastly, another notable feature of $\hat{\mu}_{\mbox{\tiny SS}}$ is its {\it first-order insensitivity}, i.e., the influence function $\zeta_{n,N}({\mathbf Z})$ in Theorem \ref{thate} is not affected by estimation errors or any knowledge of the mode of construction of the nuisance estimators.} This is \textcolor{black}{particularly} desirable for \textcolor{black}{($n^{1/2}$-rate)} inference when $\{\hat{\pi}_N(\cdot),\hat{m}_n(\cdot)\}$ involves non-parametric calibrations\textcolor{black}{, or machine learning methods with slow or unclear first-order rates,} or \textcolor{black}{if} ${\mathbf X}$ is high dimensional. \end{remark} \subsection{Efficiency comparison}\label{sec_ate_efficiency_comparison} In this section, we analyze the efficiency gain of $\hat{\mu}_{\mbox{\tiny SS}}$ relative to its supervised counterparts. We have \textcolor{black}{already} clarified in Remark \ref{remark_ate_robustness} the robustness \textcolor{black}{benefits} of $\hat{\mu}_{\mbox{\tiny SS}}$ \textcolor{black}{that are} generally not attainable by purely supervised methods.
\textcolor{black}{Therefore, setting aside this already existing improvement (which is partly due to the fact that the SS setup allows $\pi(\cdot)$ to be estimated better, via $\hat{\pi}_N(\cdot)$ from $ \mathcal{U}$), and to ensure} a ``fair'' comparison \textcolor{black}{(with minimum distraction)}, focusing \textcolor{black}{\it solely} on efficiency, we consider \textcolor{black}{the} {\it pseudo-supervised} estimator\textcolor{black}{(s):} \begin{eqnarray} \hat{\mu}_{\mbox{\tiny SUP}}^*~:=~ {\cal E}_{n}\{\hat{m}_n({\mathbf X})\}+{\cal E}_n[\{\hat{\pi}_N({\mathbf X})\}^{-1}T\{Y-\hat{m}_n({\mathbf X})\}], \label{pseudo_sup_ate} \end{eqnarray} which estimates $\pi(\cdot)$ by $\hat{\pi}_N(\cdot)$\textcolor{black}{,} but does not employ $ \mathcal{U}$ to approximate ${\cal E}_{\mathbf X}\{\hat{m}_n({\mathbf X})\}$. \textcolor{black}{(So it is essentially a version of the purely supervised estimator $\hat{\mu}_{\mbox{\tiny SUP}}$ in \eqref{sup_ate} with $\hat{\pi}_n(\cdot)$ therein replaced by $\hat{\pi}_N(\cdot)$, due to the reasons stated above.)} \textcolor{black}{Here we emphasize that, as the name ``pseudo-supervised'' suggests, \emph{they \textcolor{black}{\it cannot} actually be constructed in purely supervised settings and are proposed just for efficiency comparison}}. 
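In schematic terms, \eqref{pseudo_sup_ate} and \eqref{ss_ate} share the same debiasing term and differ only in the sample over which the outcome fits are averaged; a minimal sketch, with the nuisance fits passed in as hypothetical precomputed arrays, is:

```python
import numpy as np

def dr_estimates(Y, T, pi_hat, m_lab, m_unl):
    # Schematic comparison of (pseudo_sup_ate) vs. (ss_ate), taking the
    # (hypothetical) nuisance fits as precomputed arrays:
    #   m_lab: cross-fitted outcome fits at the n labeled points,
    #   m_unl: outcome fits at the N unlabeled points,
    #   pi_hat: propensity fits (from the unlabeled data) at labeled points.
    debias = np.mean(T * (Y - m_lab) / pi_hat)   # shared debiasing term
    mu_pseudo_sup = m_lab.mean() + debias        # averages m over L only
    mu_ss = np.r_[m_lab, m_unl].mean() + debias  # averages m over L and U
    return mu_pseudo_sup, mu_ss
```

The gap between the two averages is precisely the contribution of the unlabeled data.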
\textcolor{black}{In a sense, this gives the supervised estimator its best chance to succeed -- in terms of efficiency (setting aside any of its robustness drawbacks) -- and yet, as we will discuss in Remark \ref{remark_ate_efficiency}, they are {\it still} outperformed by our SS estimator(s).} \vskip0.05in We state \textcolor{black}{the properties of these pseudo-supervised estimator(s)} in the following corollary, which can be proved analogously to Theorem \ref{thate} and Corollary \ref{corate}, \textcolor{black}{and then compare their efficiency (i.e., the ideal supervised efficiency) to that of our SS estimator(s) in Remark \ref{remark_ate_efficiency}.} \begin{corollary}\label{coratesup} Under the \textcolor{black}{same} conditions \textcolor{black}{as} in Corollary \ref{corate}, the pseudo-supervised estimator $\hat{\mu}_{\mbox{\tiny SUP}}^*$ in \eqref{pseudo_sup_ate} \textcolor{black}{satisfies the following expansion:} \begin{eqnarray} &&\hat{\mu}_{\mbox{\tiny SUP}}^*-\mu_0~=~n^{-1}\hbox{$\sum_{i=1}^n$}\zeta_{\mbox{\tiny SUP}}({\mathbf Z}_i)~+~O_p\{n^{-1/2}(w_{n,2}+b_N)+s_N\,w_{n,2}\}~+ \nonumber\\ &&\phantom{\hat{\mu}_{\mbox{\tiny SUP}}^*-\mu_0~=~}~I\{ m^*({\mathbf X}) \neq m({\mathbf X}) \}O_p(s_N), ~~\textcolor{black}{\mbox{and}}\nonumber\\ &&n^{1/2}\lambda_{\mbox{\tiny SUP}}^{-1}(\hat{\mu}_{\mbox{\tiny SUP}}^*-\mu_0)~\xrightarrow{d}~\mathcal{N}(0,1)\quad (n, \textcolor{black}{N} \to\infty), ~~\textcolor{black}{\mbox{where}} \label{ate_sup_normality} \end{eqnarray} $\zeta_{\mbox{\tiny SUP}}({\mathbf Z}):=\{\pi({\mathbf X})\}^{-1}T\{Y- m^*({\mathbf X}) \}+ m^*({\mathbf X}) -\mu_0$\textcolor{black}{,} satisfying ${\cal E}\{\zeta_{\mbox{\tiny SUP}}({\mathbf Z})\}=0$\textcolor{black}{,} and \begin{eqnarray*} &&\lambda_{\mbox{\tiny SUP}}^2~:=~{\cal E}[\{\zeta_{\mbox{\tiny SUP}}({\mathbf Z})\}^2]~=~\hbox{var}[\{\pi({\mathbf X})\}^{-1}T\{Y- m^*({\mathbf X}) \}]- \hbox{var}\{ m^*({\mathbf X}) \}~+\\ &&\phantom{\lambda_{\mbox{\tiny
SUP}}^2~:=~{\cal E}[\{\zeta_{\mbox{\tiny SUP}}({\mathbf Z})\}^2]~=~}~2\,{\cal E}\{ m^*({\mathbf X}) (Y-\mu_0)\}. \end{eqnarray*} \end{corollary} \begin{remark}[Efficiency improvement \textcolor{black}{of $\hat{\mu}_{\mbox{\tiny SS}}$ and semi-parametric optimality}]\label{remark_ate_efficiency} If the conditions in Corollary \ref{corate} hold and the imputation function takes the form\textcolor{black}{:} \begin{eqnarray} m^*({\mathbf X})~\equiv~{\cal E}\{Y\mid {\bf g}({\mathbf X})\}\textcolor{black}{,} \label{mstarX} \end{eqnarray} with some \textcolor{black}{(possibly)} unknown function ${\bf g}(\cdot)$, the SS variance $\lambda_{\mbox{\tiny SS}}^2$ in \eqref{ate_normality} is less than or equal to the supervised variance $\lambda_{\mbox{\tiny SUP}}^2$ in \eqref{ate_sup_normality}, i.e., \begin{eqnarray} \qquad \lambda_{\mbox{\tiny SS}}^2~=~\lambda_{\mbox{\tiny SUP}}^2-2\,{\cal E}\{ m^*({\mathbf X}) (Y-\mu_0)\}+\hbox{var}\{ m^*({\mathbf X}) \}~=~\lambda_{\mbox{\tiny SUP}}^2-\hbox{var}\{ m^*({\mathbf X}) \}~\leq~ \lambda_{\mbox{\tiny SUP}}^2, \label{variance_comparison_ATE} \end{eqnarray} which implies that $\hat{\mu}_{\mbox{\tiny SS}}$ is equally or more efficient compared to the pseudo-supervised estimator $\hat{\mu}_{\mbox{\tiny SUP}}^*$. 
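To verify the second equality in \eqref{variance_comparison_ATE} explicitly, recall that $Y\equiv Y(1)$, so that, under \eqref{mstarX}, ${\cal E}\{ m^*({\mathbf X}) \}={\cal E}(Y)=\mu_0$; the tower property then gives\textcolor{black}{:} \begin{eqnarray*} {\cal E}\{ m^*({\mathbf X}) (Y-\mu_0)\}&~=~&{\cal E}[ m^*({\mathbf X}) \,{\cal E}\{Y-\mu_0\mid {\bf g}({\mathbf X})\}]~=~{\cal E}[ m^*({\mathbf X}) \{ m^*({\mathbf X}) -\mu_0\}]\\ &~=~&{\cal E}[\{ m^*({\mathbf X}) \}^2]-\mu_0^2~=~\hbox{var}\{ m^*({\mathbf X}) \}. \end{eqnarray*}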
An example of the function ${\bf g}({\mathbf x})$ is the linear transformation ${\bf g}({\mathbf x})\equiv\mathbf{P}_0^{\rm T}{\mathbf x}$, where $\mathbf{P}_0$ is some unknown $r\times p$ matrix, with a fixed $r\leq p$, that can be estimated\textcolor{black}{, e.g.,} by dimension reduction techniques such as \textcolor{black}{sliced} inverse regression \citep{li1991sliced, lin2019sparse}\textcolor{black}{, as well as by standard parametric (e.g., linear/logistic) regression (for the special case $r=1$).} Further, when the outcome model is correctly specified, i.e., $m^*({\mathbf X})={\cal E}(Y\mid {\mathbf X})$, we have\textcolor{black}{:} \begin{eqnarray} \lambda_{\mbox{\tiny SS}}^2&~\equiv~&\hbox{var}[\{\pi({\mathbf X})\}^{-1}T\{Y- m^*({\mathbf X}) \}]\nonumber\\ &~=~&{\cal E}[\{\pi({\mathbf X})\}^{-2}T\{Y- {\cal E}(Y\mid{\mathbf X}) \}^2]\label{ate_eff}\\ &~\leq~&{\cal E}[\{\pi({\mathbf X})\}^{-2}T\{Y- g({\mathbf X}) \}^2]\textcolor{black}{,} \nonumber \end{eqnarray} for any function $g(\cdot)$, where the equality holds only if $g({\mathbf X})={\cal E}(Y\mid{\mathbf X})$ almost surely. This fact demonstrates the asymptotic \emph{optimality} of $\hat{\mu}_{\mbox{\tiny SS}}$ among all regular and asymptotically linear estimators of $\mu_0$, whose influence functions take the form $\{\pi({\mathbf X})\}^{-1}T\{Y-g({\mathbf X})\}$ for some function $g(\cdot)$.
Under the semi-parametric model of $(Y,{\mathbf X}^{\rm T},T)^{\rm T}$, \textcolor{black}{given by the following class of allowable distributions \textcolor{black}{(the most unrestricted class naturally allowed under our SS setup)}:} \begin{eqnarray} \textcolor{black}{\{{\mathbb P}_{(Y,T,{\mathbf X}^{\rm T})^{\rm T}}: \hbox{ \eqref{mar_positivity} is satisfied, }{\mathbb P}_{(T,{\mathbf X}^{\rm T})^{\rm T}} \hbox{ is known and } {\mathbb P}_{Y\mid(T,{\mathbf X}^{\rm T})^{\rm T}} \hbox{ is unrestricted}\},} \label{semiparametric_model} \end{eqnarray} one can show that (\ref{ate_eff}) equals the efficient asymptotic variance for estimating $\mu_0$, that is, the estimator $\hat{\mu}_{\mbox{\tiny SS}}$ \emph{achieves the semi-parametric efficiency bound}; \textcolor{black}{see Remark 3.1 of \citet{chakrabortty2018efficient}\textcolor{black}{, and also the results of \citet{kallus2020role},} for similar bounds}. In Section \ref{sec_nf_ate}, we will detail the above choices of $m^*(\cdot)$ and some corresponding estimators $\hat{m}_{n,k}(\cdot)$. \textcolor{black}{Lastly, it is worth noting that the efficiency bound here is, not surprisingly, lower than in the supervised case, showing the scope of efficiency gain (apart from robustness) in SS setups.} \end{remark} \subsection[Case where T is not observed in U]{Case where $T$ is not observed in $ \mathcal{U}$}\label{sec_ate_u_dagger} So far, we have focused on \textcolor{black}{the case} where the unlabeled data contains observations for both the treatment indicator $T$ and the covariates ${\mathbf X}$. We now briefly discuss settings where $T$ is \emph{not} observed in the unlabeled data.
Based on the sample $ \mathcal{L}\cup \mathcal{U}^\dag$\textcolor{black}{,} with $ \mathcal{U}^\dag:=\{{\mathbf X}_i:i=n+1,\ldots,n+N\}$, we introduce \textcolor{black}{the \emph{SS estimators $\hat{\mu}_{\mbox{\tiny SS}}^\dag$}: \begin{eqnarray} \hat{\mu}_{\mbox{\tiny SS}}^\dag~:=~ {\cal E}_{n+N}\{\hat{m}_n({\mathbf X})\}+{\cal E}_n[\{\hat{\pi}_n({\mathbf X})\}^{-1}T\{Y-\hat{m}_n({\mathbf X})\}] \label{hatmuss_dag} \end{eqnarray} for $\mu_0$. Here $\hat{\pi}_n(\cdot)$ is constructed \textcolor{black}{-- this time solely from $ \mathcal{L}$ --} through a cross fitting procedure similar to \eqref{ds2}\textcolor{black}{,} so that $\hat{\pi}_n(\cdot)$ and ${\mathbf X}_i$ are independent in $\hat{\pi}_n({\mathbf X}_i)$ $(i=1,\ldots,n)$. Specifically, we let $\hat{\pi}_n({\mathbf X}_i):=\hat{\pi}_{n,k}({\mathbf X}_i)$ $(i\in \mathcal{L}_k)$ with $\hat{\pi}_{n,k}(\cdot)$ some estimator for $\pi(\cdot)$ based on \textcolor{black}{$ \mathcal{L}_k^-$} $(k=1,\ldots,\mathbb{K})$. See the discussion below \eqref{ds2} for the motivation and benefit of cross fitting. Compared to $\hat{\mu}_{\mbox{\tiny SS}}$, the estimators $\hat{\mu}_{\mbox{\tiny SS}}^\dag$ substitute $\hat{\pi}_n(\cdot)$ for $\hat{\pi}_N(\cdot)$, approximating the working propensity score model $\pi^*(\cdot)$ using $ \mathcal{L}$ only. We thus impose the following condition on the behavior of $\hat{\pi}_n(\cdot)$, \textcolor{black}{as} a counterpart of \textcolor{black}{our earlier} Assumption \ref{api4}. \begin{assumption}\label{apin4} The function $\hat{D}_{n,k}({\mathbf x}):=\{\hat{\pi}_{n,k}({\mathbf x})\}^{-1}-\{\pi^*({\mathbf x})\}^{-1}$ satisfies\textcolor{black}{:} \begin{eqnarray*} ({\cal E}_{\mathbf X}[\{\hat{D}_{n,k}({\mathbf X})\}^2])^{1/2}~=~O_p(s_n), ~~\textcolor{black}{\mbox{and}}~~ \{{\cal E}_{\mathbf Z}([\hat{D}_{n,k}({\mathbf X})\{Y- m^*({\mathbf X}) \}]^2)\}^{1/2}~=~O_p(b_n)\textcolor{black}{,} \end{eqnarray*} for some positive sequences $s_n$ and $b_n$ $(k=1,\ldots,\mathbb{K})$. 
\end{assumption} Replacing $\hat{\pi}_N(\cdot)$ by $\hat{\pi}_n(\cdot)$ in Corollary \ref{corate}, we immediately obtain the next corollary regarding the properties of $\hat{\mu}_{\mbox{\tiny SS}}^\dag$. \textcolor{black}{(This serves as the counterpart of our Corollary \ref{corate} on $\hat{\mu}_{\mbox{\tiny SS}}$.)} \begin{corollary}\label{corate_dagger} Under Assumptions \ref{ass_equally_distributed}, \ref{ahmu} and \ref{apin4} as well as the condition that $\nu=0$ \textcolor{black}{as in \eqref{disproportion}}, the SS estimator $\hat{\mu}_{\mbox{\tiny SS}}^\dag$ defined by \eqref{hatmuss_dag} has the stochastic expansion\textcolor{black}{:} \begin{eqnarray*} &&\hat{\mu}_{\mbox{\tiny SS}}^\dag-\mu_0~=~n^{-1}\hbox{$\sum_{i=1}^n$}\zeta_{\mbox{\tiny SS}}({\mathbf Z}_i)~+~O_p\{n^{-1/2}(w_{n,2}+b_n)+s_n\,w_{n,2}\}~+ \\ &&\phantom{\hat{\mu}_{\mbox{\tiny SS}}-\mu_0~=~}~I\{\pi^*({\mathbf X})\neq\pi({\mathbf X})\}O_p(w_{n,1})~+~I\{ m^*({\mathbf X}) \neq m({\mathbf X}) \}O_p(s_n), ~~\textcolor{black}{\mbox{where}} \end{eqnarray*} {\color{black} $ \zeta_{\mbox{\tiny SS}}({\mathbf Z})~\equiv~\{\pi^*({\mathbf X})\}^{-1}T\{Y- m^*({\mathbf X}) \}+{\cal E}\{ m^*({\mathbf X}) \}~-~\mu_0\textcolor{black}{,} $} \textcolor{black}{as in Corollary \ref{corate},} satisfying ${\cal E}\{\zeta_{\mbox{\tiny SS}}({\mathbf Z})\}=0$ given either $\pi^*({\mathbf X})=\pi({\mathbf X})$ or $ m^*({\mathbf X}) = m({\mathbf X}) $ but not necessarily both. 
\vskip0.05in \textcolor{black}{Further,} if $\pi^*({\mathbf X})=\pi({\mathbf X})$, $ m^*({\mathbf X}) = m({\mathbf X}) $ and {\color{black} $ n^{-1/2}(w_{n,2}+b_n)+s_n\,w_{n,2}~=~o(n^{-1/2}), $} \begin{eqnarray} \textcolor{black}{\mbox{then}} ~~~ n^{1/2}\lambda_{\mbox{\tiny SS}}^{-1}(\hat{\mu}_{\mbox{\tiny SS}}^\dag-\mu_0)~\xrightarrow{d}~\mathcal{N}(0,1)\quad (n, \textcolor{black}{N} \to\infty)\textcolor{black}{,} \label{ate_ss_dagger_normality} \end{eqnarray} with $\lambda_{\mbox{\tiny SS}}^2\equiv{\cal E}[\{\zeta_{\mbox{\tiny SS}}({\mathbf Z})\}^2]=\hbox{var}[\{\pi({\mathbf X})\}^{-1}T\{Y- m({\mathbf X}) \}]$. \end{corollary} \begin{remark}[Comparison of estimators using different types of data]\label{remark_hatmuss_dag} We can see \textcolor{black}{from} Corollary \ref{corate_dagger} that $\hat{\mu}_{\mbox{\tiny SS}}^\dag$ possesses the same robustness as the supervised estimator $\hat{\mu}_{\mbox{\tiny SUP}}$ in \eqref{sup_ate}. Specifically, it is consistent whenever one \textcolor{black}{among} $\{\pi(\cdot), m(\cdot)\}$ is correctly specified, while its $n^{1/2}$-consistency and asymptotic normality in \eqref{ate_ss_dagger_normality} require both \textcolor{black}{to be correct}. As regards efficiency, as long as the limiting distribution \eqref{ate_ss_dagger_normality} holds, the asymptotic variance $\lambda_{\mbox{\tiny SS}}^2$ of $\hat{\mu}_{\mbox{\tiny SS}}^\dag$ equals that of $\hat{\mu}_{\mbox{\tiny SS}}$ in Theorem \ref{thate}, implying that $\hat{\mu}_{\mbox{\tiny SS}}^\dag$ outperforms $\hat{\mu}_{\mbox{\tiny SUP}}$ and enjoys semi-parametric optimality as discussed in Remark \ref{remark_ate_efficiency}. We summarize in Table \ref{table_ate_summary} \textcolor{black}{the} achievable properties of \textcolor{black}{all} the ATE estimators based on different types of available data. Estimation of the QTE using the data $ \mathcal{L}\cup \mathcal{U}^\dag$ is similar in spirit while technically more laborious. 
We will therefore omit the relevant discussion considering that such a setting is not \textcolor{black}{our} main interest. \end{remark} \vskip-0.2in \begin{table}[H] \def~{\hphantom{0}} \caption{ \textcolor{black}{SS ATE estimation and its benefits: a complete picture of the a}chievable \textcolor{black}{robustness and efficiency} properties of the ATE estimators based on different types of available data. Here\textcolor{black}{,} the efficiency (Eff.) gain is relative to the supervised estimator \eqref{sup_ate} when $\{m^*(\cdot),\pi^*(\cdot)\}=\{m(\cdot),\pi(\cdot)\}$, \textcolor{black}{while} the optimality (Opt.) \textcolor{black}{refers to} attaining the \textcolor{black}{corresponding} semi-parametric efficiency bound. The abbreviation $n^{1/2}$-CAN stands for $n^{1/2}$-consistency and asymptotic normality\textcolor{black}{, while DR stands for doubly robust (in terms of consistency only).}} { \begin{tabular}{c||c|c|c|c|c} \hline \multirow{3}{*}{Data} & \multirow{3}{*}{DR} & \multicolumn{2}{c|}{$n^{1/2}$-CAN} & \multirow{3}{*}{Eff. gain} & \multirow{3}{*}{Opt.} \\ \cline{3-4} & &$ \pi^*(\cdot)=\pi(\cdot)$ & $ \pi^*(\cdot)=\pi(\cdot)$& & \\ & & $m^*(\cdot)=m(\cdot)$ &$m^*(\cdot)\neq m(\cdot)$ & & \\ \hline $ \mathcal{L}$ & \cmark & \cmark & \xmark & \xmark & \xmark \\ $ \mathcal{L}\cup \mathcal{U}^\dag$ & \cmark & \cmark & \xmark & \cmark & \cmark \\ $ \mathcal{L}\cup \mathcal{U}$ & \cmark & \cmark & \cmark & \cmark &\cmark \\ \hline \end{tabular}} \label{table_ate_summary} \end{table} \subsection{Final \textcolor{black}{SS} estimator for the ATE}\label{sec_ate_difference} In \textcolor{black}{Sections \ref{sec_ate_ss}--\ref{sec_ate_efficiency_comparison},} we have established the asymptotic properties of our SS estimator $\hat{\mu}_{\mbox{\tiny SS}}\equiv\hat{\mu}_{\mbox{\tiny SS}}(1)$ for $\mu_0\equiv\mu_0(1)$. 
We now propose \textcolor{black}{our \emph{final SS estimator for the ATE,} i.e., the difference $\mu_0(1)-\mu_0(0)$ in \eqref{ate}, as: $\hat{\mu}_{\mbox{\tiny SS}}(1)-\hat{\mu}_{\mbox{\tiny SS}}(0)$, with} \begin{eqnarray*} \hat{\mu}_{\mbox{\tiny SS}}(0)~:=~{\cal E}_{n+N}\{\hat{m}_n({\mathbf X},0)\}+{\cal E}_n[\{1-\hat{\pi}_N({\mathbf X})\}^{-1}(1-T)\{Y-\hat{m}_n({\mathbf X},0)\}], \end{eqnarray*} where the estimator $\hat{m}_n({\mathbf X},0)$ is constructed by cross fitting procedures similar to \eqref{ds1}--\eqref{ds2} and has a probability limit $m^*({\mathbf X},0)$, a working outcome model for the conditional expectation ${\cal E}\{Y(0)\mid{\mathbf X}\}$. Adapting Theorem \ref{thate} and Corollary \ref{corate} with $\{Y,T\}$ therein replaced by $\{Y(0),1-T\}$, we can directly obtain theoretical results \textcolor{black}{for} $\hat{\mu}_{\mbox{\tiny SS}}(0)$ including its stochastic expansion and limiting distribution. By arguments analogous to those in Remarks \ref{remark_ate_robustness}--\ref{remark_ate_efficiency}, one can easily conclude the double robustness, asymptotic normality, efficiency gain compared to the supervised counterparts and semi-parametric optimality of $\hat{\mu}_{\mbox{\tiny SS}}(0)$. Also, it is straightforward to show that these properties are possessed by the difference estimator $\hat{\mu}_{\mbox{\tiny SS}}(1)-\hat{\mu}_{\mbox{\tiny SS}}(0)$ as well. 
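For concreteness, the difference estimator and a Wald-type confidence interval may be computed along the following lines; this is an illustrative sketch only, with hypothetical placeholder nuisance estimates (a per-arm OLS outcome model and a constant working propensity model fitted on the unlabeled data), and with the cross fitting of Section \ref{sec_ate_ss} omitted for brevity.

```python
import numpy as np

def arm_estimate(X_lab, A, Y_lab, X_unl, A_unl):
    # SS DR estimate of E{Y(a)}: pass A = T for a = 1 and A = 1 - T for
    # a = 0.  Placeholder nuisances: OLS outcome model fit on units with
    # A = 1, constant working propensity from the unlabeled data.
    Z = np.column_stack([np.ones(int(A.sum())), X_lab[A == 1]])
    beta, *_ = np.linalg.lstsq(Z, Y_lab[A == 1], rcond=None)
    m = lambda W: np.column_stack([np.ones(len(W)), W]) @ beta
    p = A_unl.mean()
    psi = A * (Y_lab - m(X_lab)) / p              # debiasing scores
    return np.r_[m(X_lab), m(X_unl)].mean() + psi.mean(), psi

def ate_ci(X_lab, T, Y_lab, X_unl, T_unl, z=1.96):
    # ATE estimate mu_hat(1) - mu_hat(0) with a Wald interval whose
    # standard error is the plug-in version of lambda_ATE.
    n = len(Y_lab)
    mu1, psi1 = arm_estimate(X_lab, T, Y_lab, X_unl, T_unl)
    mu0, psi0 = arm_estimate(X_lab, 1 - T, Y_lab, X_unl, 1 - T_unl)
    ate = mu1 - mu0
    se = np.std(psi1 - psi0, ddof=1) / np.sqrt(n)
    return ate, (ate - z * se, ate + z * se)
```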
Among all the above conclusions, a particularly important one is that\textcolor{black}{:} \begin{eqnarray} n^{1/2}\lambda_{\mbox{\tiny ATE}}^{-1}[\{\hat{\mu}_{\mbox{\tiny SS}}(1)-\hat{\mu}_{\mbox{\tiny SS}}(0)\}-\{\mu_0(1)-\mu_0(0)\}]~\xrightarrow{d}~\mathcal{N}(0,1)\quad (n, \textcolor{black}{N} \to\infty)\textcolor{black}{,} \label{ate_difference_distribution} \end{eqnarray} under the conditions in Corollary \ref{corate} for $\hat{\mu}_{\mbox{\tiny SS}}(1)$ as well as their counterparts for $\hat{\mu}_{\mbox{\tiny SS}}(0)$, where the asymptotic variance\textcolor{black}{:} \begin{eqnarray*} \lambda_{\mbox{\tiny ATE}}^2~:=~\hbox{var}[\{\pi({\mathbf X})\}^{-1}T\{Y- m^*({\mathbf X}) \}-\{1-\pi({\mathbf X})\}^{-1}(1-T)\{Y- m^*({\mathbf X},0) \}] \end{eqnarray*} can be estimated by\textcolor{black}{:} \begin{eqnarray*} \hbox{var}_n[\{\hat{\pi}_N({\mathbf X})\}^{-1}T\{Y- \hat{m}_n({\mathbf X}) \}-\{1-\hat{\pi}_N({\mathbf X})\}^{-1}(1-T)\{Y- \hat{m}_n({\mathbf X},0) \}]. \end{eqnarray*} In theory, the limiting distribution \eqref{ate_difference_distribution} provides the basis for \textcolor{black}{our SS} inference regarding the ATE\textcolor{black}{:} $\mu_0(1)-\mu_0(0)$; see the data analysis in Section \ref{sec_data_analysis} for an instance of its application. \begin{remark}[Comparison with \citet{zhang2019high}]\label{remark_comparison_zhang2019} It is worth mentioning \textcolor{black}{here} that our work on the ATE bears \textcolor{black}{some resemblance} to \textcolor{black}{the} recent article by \citet{zhang2019high}, who discussed SS inference for the ATE as an illustration of their SS mean estimation method and mainly focused on using a linear working model for ${\cal E}(Y\mid{\mathbf X})$. We, however, treat this problem in more generality \textcolor{black}{-- both in methodology and theory}.
Specifically, we allow for a wide range of methods to estimate \textcolor{black}{the} nuisance functions \textcolor{black}{in our estimators,} \textcolor{black}{with flexibility in terms of} model misspecification\textcolor{black}{, and also establish through this whole section a suite of generally applicable results -- with only high-level conditions on the nuisance estimators -- giving a complete understanding/characterization of our SS ATE estimators' properties, uncovering, in the process, various interesting aspects of their robustness and efficiency benefits.} \textcolor{black}{In Section \ref{secnf} later,} we \textcolor{black}{also} provide a careful study of a \textcolor{black}{family of} outcome model estimators based on kernel smoothing, inverse probability weighting and dimension reduction, establishing novel results \textcolor{black}{on} their uniform convergence rates, which verify the high-level conditions required in Corollary \ref{corate} and ensure the efficiency superiority of our method discussed in Remark \ref{remark_ate_efficiency}; see Section \ref{sec_nf_ate} for \textcolor{black}{more} detail\textcolor{black}{s}. \textcolor{black}{In general, we believe the SS ATE estimation problem warrants a more detailed and thorough analysis in its own right, as we attempt to do in this paper.} \textcolor{black}{Moreover,} we also consider, \textcolor{black}{in the next section, the QTE estimation problem, which to our knowledge is an entirely novel contribution in the area of SS (causal) inference}. \end{remark} \section{SS estimation for the QTE}\label{secqte} We now study SS estimation of the QTE \textcolor{black}{in \eqref{qte}}.
As before \textcolor{black}{in Section \ref{secos}}, we will simply focus \textcolor{black}{here} on \textcolor{black}{SS estimation of the} $\tau$\textcolor{black}{-}quantile ${\boldsymbol\theta}\equiv{\boldsymbol\theta}(1,\tau)\in\Theta\subset\mathbb{R}$ of $Y\equiv Y(1)$\textcolor{black}{, as in \eqref{generic_notation},} with some fixed and known $\tau\in(0,1)$. \textcolor{black}{This will be our goal in Sections \ref{sec_qte_general}--\ref{sec_qte_efficiency_comparison}, after which we finally address SS inference for the QTE in Section \ref{sec_qte_difference}.} \begin{remark}[Technical difficulties \textcolor{black}{with} QTE estimation]\label{qte_challenges} While the basic ideas \textcolor{black}{underlying the SS estimation of the QTE} are similar in spirit to those in Section \ref{secos} for the ATE, the inherent inseparability of $Y$ and $\theta$ in the quantile estimating equation \eqref{defqte} poses significantly more challenges in both implementation and theory. To overcome these difficulties, we use the strategy of one-step update in the construction of our QTE estimators, and \textcolor{black}{also} develop technical novelties of empirical process theory in the proof of their properties; see Section \ref{sec_qte_general} as well as Lemma \ref{1v2} \textcolor{black}{(}in \textcolor{black}{Appendix} \ref{sm_lemmas} of the Supplementary Material\textcolor{black}{)} for \textcolor{black}{more} details. \end{remark} \begin{remark}[Semantic clarification for Sections \ref{sec_qte_general}--\ref{sec_qte_efficiency_comparison}]\label{remark_semantics} \textcolor{black}{As mentioned above, our estimand in Sections \ref{sec_qte_general}--\ref{sec_qte_efficiency_comparison} is the quantile ${\boldsymbol\theta}$ of $Y(1)$, not QTE, per se. However, for semantic convenience, we will occasionally refer to it as ``QTE'' itself (and the estimators as ``QTE estimators'') while presenting our results and discussions in these sections.
We hope this slight abuse of terminology is not a distraction, as the true estimand should be clear from context.} \end{remark} \subsection[SS estimators for theta0: general construction and properties]{ \textcolor{black}{SS estimators for ${\boldsymbol\theta}$: g}eneral construction and properties }\label{sec_qte_general} \textcolor{black}{Let us define} $\phi({\mathbf X},\theta):={\cal E}\{\psi(Y,\theta)\mid{\mathbf X}\}$. Analogous to \textcolor{black}{the construction} \eqref{ate_dr_representation} for \textcolor{black}{the mean} $\mu_0$, we observe that, for arbitrary functions $\pi^*(\cdot)$ and $\phi^*(\cdot,\cdot)$, the equation \eqref{defqte} \textcolor{black}{for ${\boldsymbol\theta}$} satisfies the DR type representation\textcolor{black}{:} \begin{eqnarray} 0~=~{\cal E}\{\psi(Y,{\boldsymbol\theta})\} ~=~ {\cal E}\{ \phi^*({\mathbf X},{\boldsymbol\theta})\}+ {\cal E}[\{\pi^*({\mathbf X})\}^{-1} T\{\psi(Y,{\boldsymbol\theta}) - \phi^*({\mathbf X},{\boldsymbol\theta})\}]\textcolor{black}{,} \label{qte_dr_representation} \end{eqnarray} given either $\pi^*({\mathbf X})=\pi({\mathbf X})$ or $\phi^*({\mathbf X},\theta)=\phi({\mathbf X},\theta)$ but {\it not} necessarily both. \textcolor{black}{To} clarify the \textcolor{black}{basic} logic behind the construction of our \textcolor{black}{SS} estimators, suppose momentarily that $\{\pi^*(\cdot),\phi^*(\cdot,\cdot)\}$ are known and equal to $\{\pi(\cdot),\phi(\cdot,\cdot)\}$. One may then expect to obtain a supervised estimator of ${\boldsymbol\theta}$ by solving the empirical version of \eqref{qte_dr_representation} based on $ \mathcal{L}$, i.e., \textcolor{black}{solve} \begin{eqnarray} {\cal E}_n\{ \phi({\mathbf X},\theta)\}+ {\cal E}_n[\{\pi({\mathbf X})\}^{-1} T\{\psi(Y,\theta) - \phi({\mathbf X},\theta)\}] ~=~0, \label{sv} \end{eqnarray} with respect to $\theta$. 
However, solving \eqref{sv} directly is not a simple task due to its \textcolor{black}{inherent} non-smoothness and non-linearity \textcolor{black}{in $\theta$}. \textcolor{black}{A reasonable strategy to adopt instead is a} \emph{one-step update} \textcolor{black}{approach} \citep{van2000asymptotic, tsiatis2007semiparametric}\textcolor{black}{,} using the corresponding \emph{influence function} \textcolor{black}{(a term used a bit loosely here to denote the expected influence function in the supervised case):} \begin{eqnarray} \{f({\boldsymbol\theta})\}^{-1} ({\cal E}[\{\pi({\mathbf X})\}^{-1} T\{\phi({\mathbf X},{\boldsymbol\theta})-\psi(Y,{\boldsymbol\theta})\}]-{\cal E}\{ \phi({\mathbf X},{\boldsymbol\theta})\}). \label{qte_influence_function} \end{eqnarray} Specifically, by replacing the unknown functions $\{\pi(\cdot),~\phi(\cdot,\cdot)\}$ in \eqref{qte_influence_function} with \textcolor{black}{\it some} estimators $\{\hat{\pi}_n(\cdot),~ \hat{\phi}_n(\cdot,\cdot)\}$ based on $ \mathcal{L}$ that \textcolor{black}{may} target possibly misspecified limits $\{\pi^*(\cdot),~\phi^*(\cdot,\cdot)\}$, we immediately obtain a {\it supervised estimator} \textcolor{black}{of ${\boldsymbol\theta}$ \textcolor{black}{via a one-step update approach as follows:}} \begin{eqnarray} &&\hvt_{\mbox{\tiny SUP}}~:=~ \hvt_{\mbox{\tiny INIT}} +\{\hat{f}_n(\hvt_{\mbox{\tiny INIT}})\}^{-1}({\cal E}_n[\{\hat{\pi}_n({\mathbf X})\}^{-1}T\{\hat{\phi}_n({\mathbf X},\hvt_{\mbox{\tiny INIT}}) - \psi(Y,\hvt_{\mbox{\tiny INIT}})\}]- \label{sup_qte} \\ &&\phantom{\hvt_{\mbox{\tiny SUP}}~:=~ \hvt_{\mbox{\tiny INIT}} +\{\hat{f}_n(\hvt_{\mbox{\tiny INIT}})\}^{-1}(}{\cal E}_{n}\{\hat{\phi}_n({\mathbf X},\hvt_{\mbox{\tiny INIT}})\})\textcolor{black}{,} \nonumber \end{eqnarray} with $\hvt_{\mbox{\tiny INIT}}$ an initial estimator for ${\boldsymbol\theta}$ and $\hat{f}_n(\cdot)$ an estimator for the density function $f(\cdot)$ of $Y$. 
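As a concrete illustration of this one-step update, consider the toy sketch below. It is our simplification for exposition only: it assumes the standard choice $\psi(y,\theta)=I(y\le\theta)-\tau$ for the quantile estimating function, takes the nuisance estimates as given plain functions, and omits cross fitting.

```python
# Toy sketch of the supervised one-step quantile update: a single
# Newton-type correction to an initial estimate theta_init, assuming
# psi(y, theta) = I(y <= theta) - tau (a standard choice). The nuisance
# estimates pi_hat(x), phi_hat(x, theta) and f_hat(theta) are supplied
# as plain functions; cross fitting is again omitted.
def one_step_quantile(labeled, tau, theta_init, pi_hat, phi_hat, f_hat):
    n = len(labeled)                      # labeled: list of (x, t, y)
    psi = lambda y, th: (1.0 if y <= th else 0.0) - tau
    term = sum(t / pi_hat(x) * (phi_hat(x, theta_init) - psi(y, theta_init))
               for (x, t, y) in labeled) / n
    mean_phi = sum(phi_hat(x, theta_init) for (x, _, _) in labeled) / n
    return theta_init + (term - mean_phi) / f_hat(theta_init)
```

In the degenerate case with $\hat{\pi}\equiv 1$ and $\hat{\phi}\equiv 0$ (everyone treated, no outcome model), the update reduces to the classical Newton step $\theta_{\rm init}+\{\tau-F_n(\theta_{\rm init})\}/\hat{f}_n(\theta_{\rm init})$ for solving $F(\theta)=\tau$.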
\paragraph*{SS estimators \textcolor{black}{of ${\boldsymbol\theta}$}} \textcolor{black}{With the above motivation for a one-step update approach, and recalling the basic principles of our SS approach in Section \ref{sec_ate_ss}, we now formalize the details of our SS estimators of ${\boldsymbol\theta}$.} Similar to the \textcolor{black}{rationale used in the} construction of \textcolor{black}{\eqref{ss_ate}} for \textcolor{black}{estimating $\mu_0$ in the context of} the ATE, replacing ${\cal E}_{n}\{\hat{\phi}_n({\mathbf X},\hvt_{\mbox{\tiny INIT}})\}$ and $\hat{\pi}_n({\mathbf X})$ in \eqref{sup_qte} by ${\cal E}_{n+N}\{\hat{\phi}_n({\mathbf X},\hvt_{\mbox{\tiny INIT}})\}$ and $\hat{\pi}_N({\mathbf X})$, respectively\textcolor{black}{, now} \textcolor{black}{produces a family of \emph{SS estimators} $\hvt_{\mbox{\tiny SS}}$ for ${\boldsymbol\theta}$, given by:} \begin{eqnarray} &&\hvt_{\mbox{\tiny SS}}~:=~ \hvt_{\mbox{\tiny INIT}} +\{\hat{f}_n(\hvt_{\mbox{\tiny INIT}})\}^{-1}({\cal E}_n[\{\hat{\pi}_N({\mathbf X})\}^{-1}T\{\hat{\phi}_n({\mathbf X},\hvt_{\mbox{\tiny INIT}}) - \psi(Y,\hvt_{\mbox{\tiny INIT}})\}]- \label{ss_qte}\\ &&\phantom{\hvt_{\mbox{\tiny SS}}~:=~ \hvt_{\mbox{\tiny INIT}} +\{\hat{f}_n(\hvt_{\mbox{\tiny INIT}})\}^{-1}(}{\cal E}_{n+N}\{\hat{\phi}_n({\mathbf X},\hvt_{\mbox{\tiny INIT}})\}).
\nonumber \end{eqnarray} Here\textcolor{black}{, a} cross fitting technique \textcolor{black}{similar to \eqref{ds1}--\eqref{ds2}} is applied to \textcolor{black}{obtain the estimates $\hat{\phi}_n({\mathbf X}_i,\cdot)$:} \begin{eqnarray} \hat{\phi}_n({\mathbf X}_i,\theta)&~:=~&\mathbb{K}^{-1}\hbox{$\sum_{k=1}^\kK$}\hat{\phi}_{n,k}({\mathbf X}_i,\theta)\quad (i=n+1,\ldots,n+N), \quad \textcolor{black}{\mbox{and}} \label{ds3}\\ \hat{\phi}_n({\mathbf X}_i,\theta)&~:=~&\hat{\phi}_{n,k}({\mathbf X}_i,\theta)\quad (i\in{\cal I}_k\textcolor{black}{;\ k=1,\ldots,\mathbb{K}}), \label{ds4} \end{eqnarray} where $\hat{\phi}_{n,k}(\cdot,\cdot)$ is an estimator for $\phi^*(\cdot,\cdot)$ based \textcolor{black}{only} on the data set $ \mathcal{L}_k^-$ $(k=1,\ldots,\mathbb{K})$. \vskip0.05in We now have a family of SS estimators for ${\boldsymbol\theta}$ indexed by $\{\hat{\pi}_N(\cdot),\hat{\phi}_n(\cdot,\cdot)\}$ from \eqref{ss_qte}. \textcolor{black}{To establish their theoretical properties, we will require the following (high-level) assumptions.} \begin{assumption} \label{adensity} The quantile ${\boldsymbol\theta}$ is in the interior of its parameter space $\Theta$. The density function $f(\cdot)$ of $Y$ is positive and has a bounded derivative in $\mb(\vt,{\varepsilon})$ \textcolor{black}{for some $\varepsilon > 0$}. \end{assumption} \begin{assumption} \label{ainit} \textcolor{black}{The initial estimator $\hvt_{\mbox{\tiny INIT}}$ and the density estimator $\hat{f}_n(\cdot)$} satisfy that, for some positive sequences $u_n=o(1)$ and $v_n=o(1)$, \begin{eqnarray} &&\hvt_{\mbox{\tiny INIT}}-{\boldsymbol\theta}~=~O_p(u_{n}), ~~ \textcolor{black}{\mbox{and}} \label{hvti}\\ &&\hat{f}_n(\hvt_{\mbox{\tiny INIT}})-f({\boldsymbol\theta})~=~O_p(v_n). \label{hf} \end{eqnarray} \end{assumption} \begin{assumption}\label{api} Recall \textcolor{black}{that} $\pi^*(\cdot)$ is some function such that $\pi^*({\mathbf x})\in(c,1-c)$ for any ${\mathbf x}\in {\cal X}$ and some $c\in(0,1)$. 
Then\textcolor{black}{,} the function $\hat{D}_N({\mathbf x})\equiv\{\hat{\pi}_N({\mathbf x})\}^{-1}-\{\pi^*({\mathbf x})\}^{-1}$ satisfies\textcolor{black}{:} \begin{eqnarray} &&({\cal E}_{\mathbf X}[\{\hat{D}_N({\mathbf X})\}^2])^{1/2}~=~O_p(s_N), ~~ \textcolor{black}{\mbox{and}} \label{d2} \\ &&\hbox{$\sup_{\x\in\mx}$}|\hat{D}_N({\mathbf x})|~=~O_p(1)\textcolor{black}{,} \label{dsup} \end{eqnarray} for some positive sequence $s_N$ that is possibly divergent. \end{assumption} \begin{assumption}\label{abound} The function $\phi^*(\cdot,\cdot)$ \textcolor{black}{-- the (possibly misspecified) target of $\hat{\phi}_n(\cdot,\cdot)$ --} is bounded. \textcolor{black}{Further, t}he set $\mathcal{M}:=\{\phi^*({\mathbf X},\theta):\theta\in\mb(\vt,{\varepsilon})\}$ \textcolor{black}{for some $\varepsilon > 0$,} satisfies\textcolor{black}{:} \begin{eqnarray} N_{[\,]}\{\eta,\mathcal{M},L_2({\mathbb P}_{\mathbf X})\}~\leq~ c_1\,\eta^{-c_2}, \label{bmm} \end{eqnarray} where the symbol $N_{[\,]}(\cdot,\cdot,\cdot)$ refers to the \textcolor{black}{\it bracketing number}\textcolor{black}{, as} defined in \citet{van1996weak} and \citet{van2000asymptotic}. 
In addition, for any sequence $\tilde{\theta}\to{\boldsymbol\theta}$ in probability, \begin{eqnarray} &&\mathbb{G}_n[\{\pi^*({\mathbf X})\}^{-1}T\{\phi^*({\mathbf X},\tilde{\theta})-\phi^*({\mathbf X},{\boldsymbol\theta})\}]~=~o_p(1), ~~\textcolor{black}{\mbox{and}} \label{unipi1}\\ &&\mathbb{G}_{n+N}\{\phi^*({\mathbf X},\tilde{\theta})-\phi^*({\mathbf X},{\boldsymbol\theta})\}~=~o_p(1).\label{unipi2} \end{eqnarray} \end{assumption} \begin{assumption}\label{aest} Denote \begin{eqnarray} &&\hat{\psi}_{n,k}({\mathbf X},\theta)~:=~\hat{\phi}_{n,k}({\mathbf X},\theta)-\phi^*({\mathbf X},\theta), ~~\textcolor{black}{\mbox{and}} \label{error}\\ &&\Delta_k( \mathcal{L})~:=~(\hbox{$\sup_{\theta\in\mbtv}$}{\cal E}_{\mathbf X}[\{\hat{\psi}_{n,k}({\mathbf X},\theta)\}^2])^{1/2} \quad (k=1,\ldots,\mathbb{K}).\nonumber \end{eqnarray} Then\textcolor{black}{,} \textcolor{black}{for some $\varepsilon > 0$,} the set\textcolor{black}{:} \begin{eqnarray} \mathcal{P}_{n,k}~:=~\{\hat{\psi}_{n,k}({\mathbf X},\theta):\theta\in\mb(\vt,{\varepsilon})\} \label{pnk} \end{eqnarray} satisfies that, for any $\eta\in(0,\Delta_k( \mathcal{L})+c\,]$ \textcolor{black}{for some $c > 0$}, \begin{eqnarray} N_{[\,]}\{\eta,\mathcal{P}_{n,k}\mid \mathcal{L},L_2({\mathbb P}_{\mathbf X})\}~\leq~ H( \mathcal{L}) \eta^{-c} \quad (k=1,\ldots,\mathbb{K}) \label{vc} \end{eqnarray} with some function $H( \mathcal{L})>0$ such that $H( \mathcal{L})=O_p(a_n)$ for some positive sequence $a_n$ that is possibly divergent. Here\textcolor{black}{,} $\mathcal{P}_{n,k}$ is indexed by $\theta$ \textcolor{black}{\it only} and treats $\hat{\psi}_{n,k}(\cdot,\theta)$ as a non\textcolor{black}{-}random function $(k=1,\ldots,\mathbb{K})$. 
Moreover, we assume \textcolor{black}{that:} \begin{eqnarray} &&\hbox{$\sup_{\theta\in\mbtv}$}{\cal E}_{\mathbf X}\{|\hat{\psi}_{n,k}({\mathbf X},\theta)|\}~=~O_p(d_{n,1}),~~~ \Delta_k( \mathcal{L})~=~O_p(d_{n,2}), ~~~\textcolor{black}{\mbox{and}} \nonumber\\ && \hbox{$\sup_{\x\in\mx,\,\theta\in\mbtv}$}|\hat{\psi}_{n,k}(\textcolor{black}{{\mathbf x}},\theta)|~=~O_p(d_{n,\infty}) \quad (k=1,\ldots,\mathbb{K}), \nonumber \end{eqnarray} where $d_{n,1}$, $d_{n,2}$ and $d_{n,\infty}$ are some positive sequences that are possibly divergent. \end{assumption} \begin{remark}\label{remark_qte_assumptions} The basic conditions in Assumption \ref{adensity} ensure the identifiability and estimability of ${\boldsymbol\theta}$. Assumption \ref{ainit} is standard for one-step estimators, regulating the behavior of $\hvt_{\mbox{\tiny INIT}}$ and $\hat{f}_n(\cdot)$. Assumption \ref{api} is an analogue of Assumption \ref{api4}, adapted \textcolor{black}{suitably} for the technical proofs of the QTE estimators. Assumption \ref{abound} outlines the features of a suitable working outcome model $\phi^*(\cdot,\cdot)$. According to Example 19.7 and Lemma 19.24 of \citet{van2000asymptotic}, the conditions \eqref{bmm}--\eqref{unipi2} hold as long as $\phi^{\textcolor{black}{*}}({\mathbf X},\theta)$ is Lipschitz continuous in $\theta$. Lastly, Assumption \ref{aest} imposes restrictions on the bracketing number and norms of the error term \eqref{error}. The requirements in Assumptions \ref{abound} and \ref{aest} should be expected to hold for most reasonable choices of $\{\phi^{\textcolor{black}{*}}(\cdot,\cdot),\hat{\phi}_{n,k}(\cdot,\cdot)\}$ using standard results from empirical process theory \citep{van1996weak, van2000asymptotic}. Again, all the positive sequences in Assumptions \ref{api} and \ref{aest} are possibly divergent, so the relevant restrictions are actually fairly mild and weaker than \textcolor{black}{requiring} $L_\infty$ convergence. 
The validity of these assumptions for some choices of the nuisance functions and their estimators will be di\textcolor{black}{s}cussed in Section \ref{secnf}. \end{remark} \textcolor{black}{We now} \textcolor{black}{present the asymptotic properties of $\hvt_{\mbox{\tiny SS}}$ in Theorem \ref{thqte} and Corollary \ref{corqte} below.} \begin{theorem}\label{thqte} Suppose that Assumptions \ref{ass_equally_distributed} and \ref{adensity}--\ref{aest} hold, and that either $\pi^*({\mathbf X})=\pi({\mathbf X})$ or $\phi^*({\mathbf X},\theta)=\phi({\mathbf X},\theta)$ but not necessarily both. Then\textcolor{black}{, it holds that:} $\hvt_{\mbox{\tiny SS}}-{\boldsymbol\theta}=$ \begin{eqnarray*} &&\{nf({\boldsymbol\theta})\}^{-1}\hbox{$\sum_{i=1}^n$}\omega_{n,N}({\mathbf Z}_i,{\boldsymbol\theta})~+~O_p\{u_n^2+u_nv_n+n^{-1/2}(r_n+z_{n,N})+s_N d_{n,2}\}~+ \\ &&~I\{\pi^*({\mathbf X})\neq\pi({\mathbf X})\}O_p(d_{n,1})+I\{\phi^*({\mathbf X},\theta)\neq\phi({\mathbf X},\theta)\}O_p(s_N)+o_p(n^{-1/2})\textcolor{black}{,} \end{eqnarray*} when $\nu\geq 0$, where \begin{eqnarray*} &&r_n~:=~d_{n,2}\{\hbox{log}\,a_n+\hbox{log}(d_{n,2}^{-1})\}~+~n_{\mathbb{K}}^{-1/2}d_{n,\infty}\{(\hbox{log}\,a_n)^2+(\hbox{log}\,d_{n,2})^2\},\\ &&z_{n,N}~:=~s_N\hbox{log}\, (s_N^{-1})~+~n^{-1/2}(\hbox{log}\,s_N)^2, ~~\textcolor{black}{\mbox{and}}\\ &&\omega_{n,N}({\mathbf Z},\theta)~:=~\{\pi^*({\mathbf X})\}^{-1}T\{\phi^*({\mathbf X},\theta)-\psi(Y,\theta)\}-{\cal E}_{n+N}\{\phi^*({\mathbf X},\theta)\}\textcolor{black}{,} \end{eqnarray*} satisfying ${\cal E}\{\omega_{n,N}({\mathbf Z},{\boldsymbol\theta})\}=0$ \textcolor{black}{if either $\phi^*(\cdot) = \phi(\cdot)$ or $\pi^*(\cdot) = \pi(\cdot)$ but not necessarily both.} \end{theorem} \begin{corollary}\label{corqte} Suppose that the conditions in Theorem \ref{thqte} hold true, that $\nu=0$ \textcolor{black}{as in \eqref{disproportion},} and that $\pi^*({\mathbf X})=\pi({\mathbf X})$. 
Then\textcolor{black}{,} the stochastic expansion of $\hvt_{\mbox{\tiny SS}}$ is \textcolor{black}{given by:} $\hvt_{\mbox{\tiny SS}}-{\boldsymbol\theta}=$ \begin{eqnarray*} &&\{nf({\boldsymbol\theta})\}^{-1}\hbox{$\sum_{i=1}^n$}\omega_{\mbox{\tiny SS}}({\mathbf Z}_i,{\boldsymbol\theta})~+~O_p\{u_n^2+u_nv_n+n^{-1/2}(r_n+z_{n,N})+s_N d_{n,2}\}~+ \\ &&~I\{\phi^*({\mathbf X},\theta)\neq\phi({\mathbf X},\theta)\}O_p(s_N)~+~o_p(n^{-1/2}), \end{eqnarray*} where \begin{eqnarray*} \omega_{\mbox{\tiny SS}}({\mathbf Z},\theta)~:=~\{\pi({\mathbf X})\}^{-1}T\{\phi^*({\mathbf X},\theta)-\psi(Y,\theta)\}-{\cal E}\{\phi^*({\mathbf X},\theta)\}\textcolor{black}{,} \end{eqnarray*} satisfying ${\cal E}\{\omega_{\mbox{\tiny SS}}({\mathbf Z},{\boldsymbol\theta})\}=0$\textcolor{black}{, and $\phi^*({\mathbf X},\theta)$ is arbitrary, i.e., not necessarily equal to $\phi({\mathbf x},\theta)$.} \vskip0.1in Further, if either $s_N=o(n^{-1/2})$ or $\phi^*({\mathbf X},\theta)=\phi({\mathbf X},\theta)$ but not necessarily both, and \begin{eqnarray} u_n^2+u_nv_n+n^{-1/2}(r_n+z_{n,N})+s_N d_{n,2}~=~o(n^{-1/2}), \label{srn} \end{eqnarray} \textcolor{black}{then} the limiting distribution of $\hvt_{\mbox{\tiny SS}}$ is\textcolor{black}{:} \begin{eqnarray} n^{1/2}f({\boldsymbol\theta})\sigma_{\mbox{\tiny SS}}^{-1}(\hvt_{\mbox{\tiny SS}}-{\boldsymbol\theta})~\xrightarrow{d}~\mathcal{N}(0,1)\quad (n, \textcolor{black}{N}\to\infty)\textcolor{black}{,} \label{qte_normality} \end{eqnarray} with $\sigma_{\mbox{\tiny SS}}^2:={\cal E}[\{\omega_{\mbox{\tiny SS}}({\mathbf Z},{\boldsymbol\theta})\}^2]=\hbox{var}[\{\pi({\mathbf X})\}^{-1}T\{\psi(Y,{\boldsymbol\theta})-\phi^*({\mathbf X},{\boldsymbol\theta})\}]$\textcolor{black}{, and t}he asymptotic variance $\{f({\boldsymbol\theta})\}^{-2}\sigma_{\mbox{\tiny SS}}^2$ can be estimated \textcolor{black}{as:} \begin{eqnarray*} \{\hat{f}_n(\hvt_{\mbox{\tiny SS}})\}^{-2}\hbox{var}_n[\{\hat{\pi}_N({\mathbf X})\}^{-1}T\{\psi(Y,\hvt_{\mbox{\tiny 
SS}})-\hat{\phi}_n({\mathbf X},\hvt_{\mbox{\tiny SS}})\}]. \end{eqnarray*} \end{corollary} \begin{remark}[Robustness and first-order insensitivity of $\hvt_{\mbox{\tiny SS}}$]\label{remark_qte_property} \textcolor{black}{Theorem \ref{thqte} and Corollary \ref{corqte} establish the general properties of $\hvt_{\mbox{\tiny SS}}$, in the same spirit as those \textcolor{black}{of} $\hat{\mu}_{\mbox{\tiny SS}}$ in Section \ref{sec_ate_ss}. The results show, in particular, that $\hvt_{\mbox{\tiny SS}}$} are always DR, while enjoying first-order insensitivity, \textcolor{black}{and} $n^{1/2}$-consistency and asymptotic normality\textcolor{black}{, {\it regardless} of whether $\phi(\cdot,\cdot)$ is misspecified,} as long as we can correctly estimate $\pi({\mathbf X})$ at a\textcolor{black}{n} $L_2$\textcolor{black}{-}rate faster than $n^{-1/2}$ \textcolor{black}{by exploiting the plentiful observations in $ \mathcal{U}$}. In contrast, \textcolor{black}{such} $n^{1/2}$-consistency and asymptotic normality are unachievable \textcolor{black}{(in general)} for purely supervised QTE estimators \textcolor{black}{if} $\phi(\cdot,\cdot)$ is misspecified. This is analogous to the case of the ATE\textcolor{black}{; see} Remark \ref{remark_ate_robustness} for more discussions on these properties. 
\end{remark} \begin{remark}[Choice\textcolor{black}{s} of $\{\hvt_{\mbox{\tiny INIT}},\hat{f}_n(\cdot)\}$]\label{remark_qte_initial_estimator} While the general conclusions in Theorem \ref{thqte} and Corollary \ref{corqte} hold true for \textcolor{black}{\it any} estimators $\{\hvt_{\mbox{\tiny INIT}},\hat{f}_n(\cdot)\}$ satisfying Assumption \ref{ainit}, a reasonable choice in practice \textcolor{black}{for both would be {\it IPW type estimators}.} Specifically, the initial estimator $\hvt_{\mbox{\tiny INIT}}$ can be obtained by solving\textcolor{black}{:} ${\cal E}_n[\{\hat{\pi}_N({\mathbf X})\}^{-1}T\psi(Y,\hvt_{\mbox{\tiny INIT}})]=0$, while $\hat{f}_n(\cdot)$ may be defined as a kernel density estimator based on the weighted sample\textcolor{black}{:} $\{\{\hat{\pi}_N({\mathbf X}_i)\}^{-1}T_iY_i:i=1,\ldots,n\}$. Under the conditions in Corollary \ref{corqte}, it is not hard to show that Assumption \ref{ainit} as well as the part of \eqref{srn} related to $\{u_n,v_n\}$ \textcolor{black}{are} indeed satisfied by such $\{\hvt_{\mbox{\tiny INIT}},\hat{f}_n(\cdot)\}$, using the basic proof techniques of quantile \textcolor{black}{method}s \citep{k2005} and kernel-based approaches \citep{hansen2008uniform}, \textcolor{black}{along} with suitable modifications \textcolor{black}{used to incorporate the IPW weights.} \end{remark} \subsection{Efficiency comparison}\label{sec_qte_efficiency_comparison} For efficiency comparison among QTE estimators, similar to $\hat{\mu}_{\mbox{\tiny SUP}}^*$ in Section \ref{secos} \textcolor{black}{for the ATE}, we now consider the {\it pseudo-supervised estimator\textcolor{black}{(s)}} \textcolor{black}{of ${\boldsymbol\theta}$:} \begin{eqnarray} &&\hvt_{\mbox{\tiny SUP}}^*~:=~ \hvt_{\mbox{\tiny INIT}} +\{\hat{f}_n(\hvt_{\mbox{\tiny INIT}})\}^{-1}({\cal E}_n[\{\hat{\pi}_N({\mathbf X})\}^{-1}T\{\hat{\phi}_n({\mathbf X},\hvt_{\mbox{\tiny INIT}}) - \psi(Y,\hvt_{\mbox{\tiny INIT}})\}]- \label{pseudo_sup_qte}\\ &&\phantom{\hvt_{\mbox{\tiny 
SUP}}^*~:=~ \hvt_{\mbox{\tiny INIT}} +\{\hat{f}_n(\hvt_{\mbox{\tiny INIT}})\}^{-1}(}{\cal E}_{n}\{\hat{\phi}_n({\mathbf X},\hvt_{\mbox{\tiny INIT}})\}), \nonumber \end{eqnarray} \textcolor{black}{i.e., the version of the purely supervised estimator $\hvt_{\mbox{\tiny SUP}}$ in \eqref{sup_qte} with $\hat{\pi}_n(\cdot)$ therein replaced by $\hat{\pi}_N(\cdot)$ from $ \mathcal{U}$. $\hvt_{\mbox{\tiny SUP}}^*$ thus has the same robustness as $\hvt_{\mbox{\tiny SS}}$ and is considered solely for efficiency comparison -- among SS and supervised estimators of ${\boldsymbol\theta}$ (setting aside any robustness benefits the former already enjoys). This is based on the same motivation and rationale as those discussed in detail in Section \ref{sec_ate_efficiency_comparison} in the context of ATE estimation; so we do not repeat those here for brevity. We now present the properties of $\hvt_{\mbox{\tiny SUP}}^*$ followed by the efficiency comparison.} \begin{corollary}\label{corsup} Under the conditions in Corollary \ref{corqte}, the pseudo-supervised estimator $\hvt_{\mbox{\tiny SUP}}^*$ given by \eqref{pseudo_sup_qte} \textcolor{black}{satisfies the following expansion:} $\hvt_{\mbox{\tiny SUP}}^*-{\boldsymbol\theta}=$ \begin{eqnarray} &&\quad\{nf({\boldsymbol\theta})\}^{-1}\hbox{$\sum_{i=1}^n$}\omega_{\mbox{\tiny SUP}}({\mathbf Z}_i,{\boldsymbol\theta})~+~O_p\{u_n^2+u_nv_n+n^{-1/2}(r_n+z_{n,N})+s_N d_{n,2}\}~+ \nonumber\\ &&\quad ~I\{\phi^*({\mathbf X},\theta)\neq\phi({\mathbf X},\theta)\}O_p(s_N)~+~o_p(n^{-1/2}), ~~\textcolor{black}{\mbox{and}} \nonumber\\ &&\quad n^{1/2}f({\boldsymbol\theta})\sigma_{\mbox{\tiny SUP}}^{-1}(\hvt_{\mbox{\tiny SUP}}^*-{\boldsymbol\theta})~\xrightarrow{d}~\mathcal{N}(0,1)\quad (n, \textcolor{black}{N}\to\infty), \label{qte_sup_normality} \end{eqnarray} where \begin{eqnarray*} \omega_{\mbox{\tiny SUP}}({\mathbf Z},\theta)~:=~\{\pi({\mathbf X})\}^{-1}T\{\phi^*({\mathbf
X},\theta)-\psi(Y,\theta)\}-\phi^*({\mathbf X},\theta)\textcolor{black}{,} \end{eqnarray*} satisfying ${\cal E}\{\omega_{\mbox{\tiny SUP}}({\mathbf Z},{\boldsymbol\theta})\}=0$, and $\sigma_{\mbox{\tiny SUP}}^2:={\cal E}[\{\omega_{\mbox{\tiny SUP}}({\mathbf Z},{\boldsymbol\theta})\}^2]=$ \begin{eqnarray*} \hbox{var}[\{\pi({\mathbf X})\}^{-1}T\{\psi(Y,{\boldsymbol\theta})-\phi^*({\mathbf X},{\boldsymbol\theta})\}]-\hbox{var}\{\phi^*({\mathbf X},{\boldsymbol\theta})\}+ 2\,{\cal E}\{\phi^*({\mathbf X},{\boldsymbol\theta})\psi(Y,{\boldsymbol\theta})\}. \end{eqnarray*} \end{corollary} \begin{remark}[Efficiency improvement \textcolor{black}{of $\hvt_{\mbox{\tiny SS}}$ and optimality}]\label{remark_qte_efficiency} Inspecting the asymptotic variances in Corollaries \ref{corqte} and \ref{corsup}, we see \textcolor{black}{that} $\sigma_{\mbox{\tiny SS}}^2\leq\sigma_{\mbox{\tiny SUP}}^2$ \textcolor{black}{with {\it any} choice of $\phi^*({\mathbf X},\theta)$ such that} $\phi^*({\mathbf X},\theta)={\cal E}\{\psi(Y,\theta)\mid {\bf g}({\mathbf X})\}$ for some \textcolor{black}{(possibly)} unknown function ${\bf g}(\cdot)$, since \begin{eqnarray*} \sigma_{\mbox{\tiny SUP}}^2-\sigma_{\mbox{\tiny SS}}^2~=~2\,{\cal E}\{\phi^*({\mathbf X},{\boldsymbol\theta})\psi(Y,{\boldsymbol\theta})\}-\hbox{var}\{\phi^*({\mathbf X},{\boldsymbol\theta})\}~=~{\cal E}[\{\phi^*({\mathbf X},{\boldsymbol\theta})\}^2]~\geq~ 0. \end{eqnarray*} Such a comparison reveals the \textcolor{black}{\it superiority} \textcolor{black}{in} efficiency of our SS estimators $\hvt_{\mbox{\tiny SS}}$ over the \textcolor{black}{corresponding} ``best'' achievable ones in supervised settings \textcolor{black}{\it even if} the difference \textcolor{black}{(i.e., improvement)} in robustness is ignored.
Further, when $\phi^*({\mathbf X},\theta)={\cal E}\{\psi(Y,\theta)\mid{\mathbf X}\}$, the SS variance\textcolor{black}{:} \begin{eqnarray} \sigma_{\mbox{\tiny SS}}^2&~=~&\hbox{var}(\{\pi({\mathbf X})\}^{-1}T[\psi(Y,{\boldsymbol\theta})-{\cal E}\{\psi(Y,{\boldsymbol\theta})\mid{\mathbf X}\}]) \nonumber\\ &~=~&{\cal E}(\{\pi({\mathbf X})\}^{-2}T[\psi(Y,{\boldsymbol\theta})-{\cal E}\{\psi(Y,{\boldsymbol\theta})\mid{\mathbf X}\}]^2) \label{qte_eff}\\ &~\leq~&{\cal E}[\{\pi({\mathbf X})\}^{-2}T\{\psi(Y,{\boldsymbol\theta})-g({\mathbf X})\}^2]\textcolor{black}{,} \nonumber \end{eqnarray} for any function $g(\cdot)$ while the equality holds only if $g({\mathbf X})={\cal E}\{\psi(Y,{\boldsymbol\theta})\mid{\mathbf X}\}$ almost surely. In this sense $\hvt_{\mbox{\tiny SS}}$ is asymptotically \textcolor{black}{\it optimal} among all regular and asymptotically linear estimators of ${\boldsymbol\theta}$, whose influence functions have the form $\{f({\boldsymbol\theta})\pi({\mathbf X})\}^{-1}T\{g({\mathbf X})-\psi(Y,{\boldsymbol\theta})\}$ for some function $g(\cdot)$. \textcolor{black}{Under the semi-parametric model \eqref{semiparametric_model}}, one can show that, \textcolor{black}{if Assumption \ref{adensity} holds true}, the representation \eqref{qte_eff} equals the efficient asymptotic variance for estimating ${\boldsymbol\theta}$, that is, the \textcolor{black}{SS} estimator $\hvt_{\mbox{\tiny SS}}$ achieves the \textcolor{black}{\it semi-parametric efficiency bound}. In Section \ref{sec_nf_qte}, we will \textcolor{black}{also} detail the above choices of $\phi^*(\cdot,\cdot)$ and some corresponding estimators $\hat{\phi}_{n,k}(\cdot,\cdot)$. 
\end{remark} \subsection{Final \textcolor{black}{SS} estimator for the QTE}\label{sec_qte_difference} Similar to the arguments \textcolor{black}{used in \textcolor{black}{Section \ref{sec_ate_difference} for} the case} of $\{\hat{\mu}_{\mbox{\tiny SS}}(1),\hat{\mu}_{\mbox{\tiny SS}}(0)\}$ \textcolor{black}{to obtain the ATE estimator,} substituting $\{Y(0),1-T\}$ for $\{Y,T\}$ in the aforementioned discussions concerning $\hvt_{\mbox{\tiny SS}}\equiv\hvt_{\mbox{\tiny SS}}(1)$ and ${\boldsymbol\theta}\equiv{\boldsymbol\theta}(1)$ immediately gives \textcolor{black}{\textcolor{black}{us} a family of \textcolor{black}{SS} estimators} $\hvt_{\mbox{\tiny SS}}(0)$ for ${\boldsymbol\theta}(0)$ as well as their \textcolor{black}{corresponding} properties \textcolor{black}{(as the counterparts of the properties established for $\hvt_{\mbox{\tiny SS}}(1)$ so far)}. \textcolor{black}{Subsequently, we may obtain our final SS estimator(s) for the QTE, i.e., the difference ${\boldsymbol\theta}(1)-{\boldsymbol\theta}(0)$ in \eqref{defqte}, simply as: $\hvt_{\mbox{\tiny SS}}(1)-\hvt_{\mbox{\tiny SS}}(0)$.} Then we know that, if the conditions in Corollary \ref{corqte} for $\hvt_{\mbox{\tiny SS}}(1)$ and their counterparts for $\hvt_{\mbox{\tiny SS}}(0)$ hold, the asymptotic distribution of \textcolor{black}{our \emph{final SS \textcolor{black}{QTE} estimators} $\hvt_{\mbox{\tiny SS}}(1)-\hvt_{\mbox{\tiny SS}}(0)$ is\textcolor{black}{:}} \begin{eqnarray} n^{1/2}\sigma_{\mbox{\tiny QTE}}^{-1}[\{\hvt_{\mbox{\tiny SS}}(1)-\hvt_{\mbox{\tiny SS}}(0)\}-\{{\boldsymbol\theta}(1)-{\boldsymbol\theta}(0)\}]~\xrightarrow{d}~\mathcal{N}(0,1)\quad (n, \textcolor{black}{N}\to\infty), \label{qte_difference_distribution} \end{eqnarray} where the asymptotic variance\textcolor{black}{:} \begin{eqnarray*} &&\sigma_{\mbox{\tiny QTE}}^2~:=~\hbox{var}(\{f({\boldsymbol\theta})\pi({\mathbf X})\}^{-1}T\{\psi(Y,{\boldsymbol\theta})- \phi^*({\mathbf X},{\boldsymbol\theta}) \}- \\ 
&&\phantom{\sigma_{\mbox{\tiny QTE}}^2~:=~\hbox{var}(}[f\{{\boldsymbol\theta}(0),0\}\{1-\pi({\mathbf X})\}]^{-1}(1-T)[\psi\{Y(0),{\boldsymbol\theta}(0)\}- \phi^*\{{\mathbf X},{\boldsymbol\theta}(0),0\} ]) \end{eqnarray*} can be estimated by\textcolor{black}{:} \begin{eqnarray*} &&\hbox{var}_n(\{\hat{f}_n(\hvt_{\mbox{\tiny SS}})\hat{\pi}_N({\mathbf X})\}^{-1}T\{\psi(Y,\hvt_{\mbox{\tiny SS}})- \hat{\phi}_n({\mathbf X},\hvt_{\mbox{\tiny SS}}) \}- \\ &&\phantom{\hbox{var}_n(}[\hat{f}_n\{\hvt_{\mbox{\tiny SS}}(0),0\}\{1-\hat{\pi}_N({\mathbf X})\}]^{-1}(1-T)[\psi\{Y(0),\hvt_{\mbox{\tiny SS}}(0)\}- \hat{\phi}_n\{{\mathbf X},\hvt_{\mbox{\tiny SS}}(0),0\} ]). \end{eqnarray*} In the above\textcolor{black}{,} $\hat{f}_n(\cdot,0)$ and $\hat{\phi}_n({\mathbf X},\theta,0)$ are \textcolor{black}{\it some} estimators for the density function $f(\cdot,0)$ of $Y(0)$ and the working model $\phi^*({\mathbf X},\theta,0)$ of ${\cal E}[\psi\{Y(0),\theta\}\mid{\mathbf X}]$, respectively. We will use \eqref{qte_difference_distribution} to construct confidence intervals for the QTE in the data analysis of Section \ref{sec_data_analysis}. \section{Choice and estimation of \textcolor{black}{the} nuisance functions}\label{secnf} In this section, we study some reasonable choices and estimators of the nuisance functions \textcolor{black}{involved} in the SS estimators $\hat{\mu}_{\mbox{\tiny SS}}$ and $\hvt_{\mbox{\tiny SS}}$ from Sections \ref{secos} and \ref{secqte}, which form a critical component in the implementation of \textcolor{black}{all} our approaches. The results claimed in the last two sections, however, are completely general and allow for any choices as long as they satisfy the high-level conditions therein. 
\textcolor{black}{In Sections \ref{sec_PS}--\ref{sec_nf_qte} below, we discuss some choices of $\pi(\cdot)$ and the outcome models for ATE and QTE.} \subsection{Propensity score}\label{sec_PS} Under the assumption \eqref{disproportion}, the specification and estimation of $\pi(\cdot)$ is a relatively easy task and can be done by applying any reasonable and sufficiently flexible regression method (parametric, semi-parametric or non-parametric) to the plentiful observations for $(T,{\mathbf X}^{\rm T})^{\rm T}$ in $ \mathcal{U}$. For instance, one can use the {\it ``extended'' parametric families} $\pi^*({\mathbf x})\equiv h\{\boldsymbol{\beta}_0^{\rm T}\mbox{\boldmath $\Psi$}({\mathbf x})\}$ as the working model for the propensity score $\pi(\cdot)$, where $h(\cdot)\in(0,1)$ is a {\it known} link function, the components of $\mbox{\boldmath $\Psi$}(\cdot):\mathbb{R}^p\mapsto\mathbb{R}^{p^*}$ are (known) basis functions of ${\mathbf x}$ with $p^*\equiv p^*_n$ allowed to diverge and exceed $n$, and $\boldsymbol{\beta}_0\in\mathbb{R}^{p^*}$ is an {\it unknown} parameter vector. Such a $\pi^*({\mathbf x})$ can be estimated by $\hat{\pi}_N({\mathbf x})\equiv h\{\hat{\bbeta}^{\rm T}\mbox{\boldmath $\Psi$}({\mathbf x})\}$ with $\hat{\bbeta}$ obtained from the corresponding parametric regression of $T$ on $\mbox{\boldmath $\Psi$}({\mathbf X})$ using $ \mathcal{U}$. Regularization may be applied \textcolor{black}{here} via, for example, the $L_1$ penalty if necessary \textcolor{black}{(e.g., in high dimensional settings)}.
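As a toy numerical sketch of this recipe (illustrative only: the scalar polynomial basis, the logistic link and the plain gradient descent routine below are simplifications we adopt for exposition, not a prescription), one can fit the working model $h\{\boldsymbol{\beta}^{\rm T}\mbox{\boldmath $\Psi$}(x)\}$ on a large sample of $(T,X)$ pairs as follows.

```python
# Toy fit of the working propensity model pi*(x) = h{beta' Psi(x)} with
# the logistic link h and a scalar polynomial basis Psi(x) = (1, x, ..., x^M),
# by plain (full-batch) gradient ascent on the likelihood over pairs (t, x).
# Illustrative only; any reasonable regression routine could be substituted.
import math

def fit_propensity(TX, M=2, lr=0.5, iters=200):
    h = lambda u: 1.0 / (1.0 + math.exp(-max(min(u, 35.0), -35.0)))
    Psi = lambda x: [x ** m for m in range(M + 1)]      # (1, x, ..., x^M)
    beta = [0.0] * (M + 1)
    n = len(TX)
    for _ in range(iters):
        grad = [0.0] * (M + 1)
        for t, x in TX:                                 # likelihood gradient
            resid = t - h(sum(b * p for b, p in zip(beta, Psi(x))))
            for j, p in enumerate(Psi(x)):
                grad[j] += resid * p / n
        beta = [b + lr * g for b, g in zip(beta, grad)]
    return lambda x: h(sum(b * p for b, p in zip(beta, Psi(x))))
```

The fitted function plays the role of $\hat{\pi}_N(\cdot)$ above; with $L_1$ regularization one would simply add the corresponding penalty term to the objective being maximized.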
The families above include, as a special case, the logistic regression models with \begin{eqnarray*} h(x)~\equiv~\{1+\exp(-x)\}^{-1}\hbox{ and } \mbox{\boldmath $\Psi$}({\mathbf x})~\equiv~\{1,\mbox{\boldmath $\Psi$}_1^{\rm T}({\mathbf x}),\mbox{\boldmath $\Psi$}_2^{\rm T}({\mathbf x}),\ldots,\mbox{\boldmath $\Psi$}_M^{\rm T}({\mathbf x})\}^{\rm T}\textcolor{black}{,} \end{eqnarray*} for $\mbox{\boldmath $\Psi$}_m({\mathbf x}):=({\mathbf x}_{[1]}^m,{\mathbf x}_{[2]}^m,\ldots,{\mathbf x}_{[p]}^m)^{\rm T}$ $(m=1,\ldots,M)$ and some positive integer $M$. Section 5.1 of \citet{chakrabortty2019high} along with Section B.1 in the supplementary material of that article provided a detailed discussion of these ``extended'' parametric families and established their (non-asymptotic) properties, sufficient for the high-level conditions on $\{\pi^*(\cdot),\hat{\pi}_N(\cdot)\}$ in Sections \ref{secos} and \ref{secqte}. In addition, it is noteworthy that, in high dimensional scenarios \textcolor{black}{in our setup,} where $n\ll p^*\ll N$, {\it the parameter vector $\boldsymbol{\beta}_0$ is totally free of sparsity} and can be estimated by unregularized methods based on $ \mathcal{U}$. Such a relaxation of assumptions is afforded by the use of massive unlabeled data and \textcolor{black}{is} generally unachievable in purely supervised settings. \subsection{Outcome model for the ATE}\label{sec_nf_ate} We now consider the working outcome model $m^*(\cdot)$ \textcolor{black}{involved} in our ATE estimators. As discussed in Remark \ref{remark_ate_efficiency}, one may expect to achieve semi-parametric optimality by letting $m^*({\mathbf X})\equiv{\cal E}(Y\mid{\mathbf X})$.
However, specifying \textcolor{black}{the conditional mean} ${\cal E}(Y\mid{\mathbf X})$ correctly in high dimensional scenarios is usually unrealistic while approximating it fully non-parametrically would typically bring \textcolor{black}{in} undesirable issues such as under-smoothing \citep{newey1998undersmoothing} even if there are only a moderate number of covariates. We therefore adopt a principled and flexible semi-parametric strategy, \textcolor{black}{via} conducting dimension reduction followed by non-parametric calibrations and targeting ${\cal E}(Y\mid{\mathbf S})$ instead of ${\cal E}(Y\mid{\mathbf X})$, where ${\mathbf S}:=\mathbf{P}_0^{\rm T}{\mathbf X}\in\mathcal{S}\subset\mathbb{R}^r$ and $\mathbf{P}_0$ is a $p\times r$ {\it transformation matrix} with some fixed and known $r\leq p$. \textcolor{black}{(The choice $r = p$ of course leads to a trivial case with $\mathbf{P}_0 = I_p$.)} It is noteworthy that we \textcolor{black}{\it always} allow the dimension reduction to be \emph{insufficient} and do {\it not} assume anywhere \textcolor{black}{that} \begin{eqnarray} {\cal E}(Y\mid{\mathbf S})~=~{\cal E}(Y\mid{\mathbf X}). \label{sufficient_dimension_reduction} \end{eqnarray} The efficiency comparison in Remark \ref{remark_ate_efficiency} shows that, when\textcolor{black}{ever} $\hat{\pi}_N(\cdot)$ converges to $\pi(\cdot)$ fast enough, setting $m^*({\mathbf X})\equiv{\cal E}(Y\mid\mathbf{P}_0^{\rm T}{\mathbf X})$ \textcolor{black}{\it always} guarantees our SS estimators $\hat{\mu}_{\mbox{\tiny SS}}$ \textcolor{black}{to} dominate any supervised competitors using the same working model $m^*(\cdot)$ \textcolor{black}{--} \emph{no matter} whether \eqref{sufficient_dimension_reduction} holds or not. Hence\textcolor{black}{,} one is free to let $\mathbf{P}_0$ equal \emph{any} user-defined and data-dependent matrix.
As long as $\mathbf{P}_0$ is completely determined by the distribution of ${\mathbf X}$, its estimation error is very likely to be negligible owing to the large number of observations for ${\mathbf X}$ provided by $ \mathcal{U}$. An instance \textcolor{black}{of such a choice is} the $r$ leading principal component directions of ${\mathbf X}$. Nevertheless, to make the dimension reduction as ``sufficient'' as possible, one may prefer to use a transformation matrix $\mathbf{P}_0$ which depends on the joint distribution of $(Y,{\mathbf X}^{\rm T})^{\rm T}$\textcolor{black}{, and thus} needs to be estimated, possibly with non-negligible errors. We will give some examples of such $\mathbf{P}_0$ in Remark \ref{remark_choice_of_P0}. To \textcolor{black}{estimate} the conditional mean $m^*({\mathbf x})\equiv{\cal E}(Y\mid \mathbf{P}_0^{\rm T}{\mathbf X}=\mathbf{P}_0^{\rm T}{\mathbf x})$, we may employ any \textcolor{black}{suitable} smoothing technique, such as kernel smoothing, kernel machine regression or smoothing splines.
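For the first kind of choice just mentioned (a $\mathbf{P}_0$ depending on the distribution of ${\mathbf X}$ only), a minimal numpy sketch of extracting the $r$ leading principal component directions from the unlabeled covariates is as follows; this is purely illustrative and the names are ours:

```python
import numpy as np

def leading_pc_directions(X, r):
    """Return a p x r matrix whose columns are the r leading principal
    component directions of X, estimable from the unlabeled sample alone."""
    Xc = X - X.mean(axis=0)                      # center the covariates
    cov = (Xc.T @ Xc) / (X.shape[0] - 1)         # sample covariance matrix
    eigval, eigvec = np.linalg.eigh(cov)         # eigh: ascending eigenvalues
    order = np.argsort(eigval)[::-1][:r]         # indices of the r largest
    return eigvec[:, order]
```

Since only ${\mathbf X}$ enters, this estimator can be computed on the full unlabeled sample, making its error negligible relative to the labeled-data rates.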
For illustration, we focus on the \textcolor{black}{following} \textcolor{black}{\it IPW type kernel smoothing estimator(s):} \begin{eqnarray} \hat{m}_{n,k}({\mathbf x})~\equiv~\hat{m}_{n,k}({\mathbf x},\hat{\mathbf{P}}_k)~:=~\{\hat{\ell}^{(0)}_{n,k}({\mathbf x},\hat{\mathbf{P}}_k)\}^{-1}\hat{\ell}^{(1)}_{n,k}({\mathbf x},\hat{\mathbf{P}}_k)\quad (k=1,\ldots,\mathbb{K}), \label{ks_ate} \end{eqnarray} where \begin{eqnarray*} \hat{\ell}^{(t)}_{n,k}({\mathbf x},\mathbf{P})~:=~ h_n^{-r} \E_{n,k}[\{\hat{\pi}_N({\mathbf X})\}^{-1}T Y^tK_h\{\mathbf{P}^{\rm T}({\mathbf x}-{\mathbf X})\}]\quad (t=0,1), \end{eqnarray*} with the notation ${\cal E}_{n,k}\{\widehat{g}({\bf Z})\}:=n_{\mathbb{K}^-}^{-1}\hbox{$\sum_{i\in\I_{k}^-}$} \widehat{g}({\bf Z}_i)$ for any possibly random function $\widehat{g}(\cdot)$, \textcolor{black}{and with} $\hat{\mathbf{P}}_k$ \textcolor{black}{being \emph{any}} estimator of $\mathbf{P}_0$ using $ \mathcal{L}_k^-$, $K_h({\mathbf s}):=K(h_n^{-1}{\mathbf s}) $, $K(\cdot)$ a kernel function \textcolor{black}{(e.g., the standard Gaussian kernel)} and $ h_n\to 0$ \textcolor{black}{denoting} a bandwidth sequence. \begin{remark}[Subtlety and benefit\textcolor{black}{s} of the inverse probability weighting scheme]\label{remark_ks_ate_weight} The \textcolor{black}{IPW based} weight\textcolor{black}{s} $\{\hat{\pi}_N({\mathbf X})\}^{-1}$ \textcolor{black}{involved} in $\hat{m}_{n,k}({\mathbf x})$ \textcolor{black}{in \eqref{ks_ate}} \textcolor{black}{play} a key role in \textcolor{black}{its} achieving \textcolor{black}{an \emph{important}} {\it DR \textcolor{black}{property}}, which means $\hat{m}_{n,k}({\mathbf x})$ has the limit ${\cal E}(Y\mid{\mathbf S}={\mathbf s})$ whenever either \eqref{sufficient_dimension_reduction} is true or $\pi^*(\cdot)=\pi(\cdot)$\textcolor{black}{,} but \emph{not} necessarily both. 
This property will be proved in Theorem \ref{theorem_ks_ate}\textcolor{black}{,} and formally stated \textcolor{black}{and discussed} in Remark \ref{remark_ks_ate_DR}. In contrast, the (standard) complete-case version without the \textcolor{black}{IPW} weights $\{\hat{\pi}_N({\mathbf X})\}^{-1}$ actually targets ${\cal E}(Y\mid{\mathbf S}={\mathbf s},T=1)$ that equals ${\cal E}(Y\mid{\mathbf S}={\mathbf s})$ \emph{only if} \eqref{sufficient_dimension_reduction} holds. Recalling the clarification in Remark \ref{remark_ate_efficiency}, we can see that such a subtlety \textcolor{black}{(enabled by the involvement of the weights) in the construction} of $\hat{m}_{n,k}(\cdot)$ ensures the efficiency advantage of our SS estimators $\hat{\mu}_{\mbox{\tiny SS}}$ over any supervised competitors constructed with the same $\hat{m}_{n,k}(\cdot)$, when $\pi(\cdot)$ is correctly specified but $m(\cdot)$ is not. \textcolor{black}{Lastly, a}lthough $\hat{m}_{n,k}(\cdot)$ contains $\hat{\pi}_N(\cdot)$ and thereby involves the unlabeled data $ \mathcal{U}$, we suppress the subscript $N$ \textcolor{black}{in $\hat{m}_{n,k}(\cdot)$} for brevity considering that its convergence rate mainly relies on $n$; see Theorem \ref{theorem_ks_ate}. In principle, cross fitting procedures analogous to (\ref{ds1}) and (\ref{ds2}) should be conducted for $ \mathcal{U}$ as well to guarantee the independence of $\hat{m}_{n,k}(\cdot)$ and ${\mathbf X}_i$ in $\hat{m}_{n,k}({\mathbf X}_i)$ $(i=n+1,\ldots,n+N)$. However, from our experience, such extra cross fitting \textcolor{black}{procedures} bring only marginal benefits in practice while making the implementation considerably more laborious. We hence stick to estimating $\pi^*(\cdot)$ using the whole $ \mathcal{U}$ in \textcolor{black}{our} numerical studies.
\end{remark} \vskip-0.02in \textcolor{black}{There is substantial literature on kernel smoothing estimators with unknown estimated covariate transformations, but mostly in low (fixed) dimensional settings \citep{mammen2012nonparametric,mammen_rothe_schienle_2016, escanciano2014uniform}.} Considering\textcolor{black}{, however,} that \textcolor{black}{in our setting,} the dimension $p$ of ${\mathbf X}$ can be \textcolor{black}{\it divergent} \textcolor{black}{(possibly exceeding $n$),} and that the transformation matrix $\mathbf{P}_0$ as well as the weights $\{\pi^*({\mathbf X})\}^{-1}$ nee\textcolor{black}{d} to be \textcolor{black}{\it estimated} \textcolor{black}{as well}, establishing the uniform convergence property of $\hat{m}_{n,k}({\mathbf x},\hat{\mathbf{P}}_k)$ \textcolor{black}{in \eqref{ks_ate},} in fact\textcolor{black}{,} poses substantial technical challenges and has not been studied in the literature yet. \textcolor{black}{Our results here are thus {\it novel} to the best of our knowledge.} To derive \textcolor{black}{the results} we impose the following \textcolor{black}{conditions.} \begin{assumption}\label{al1} The estimator $\hat{\mathbf{P}}_k$ satisfies $\|\hat{\mathbf{P}}_k-\mathbf{P}_0\|_1=O_p(\alpha_n)$ for some \textcolor{black}{$\alpha_n \geq 0$}. \end{assumption} \begin{assumption}[\textcolor{black}{Smoothness conditions}]\label{akernel} (i) The function $K(\cdot):\mathbb{R}^r\mapsto\mathbb{R}$ is a symmetric kernel of order $d\geq 2$ with a finite $d$th moment. Moreover, it is bounded, square integrable and continuously differentiable with a derivative $\nabla K({\mathbf s}):=\partial K({\mathbf s})/\partial {\mathbf s}$ such that $\|\nabla K({\mathbf s})\|\leq c_1\,\|{\mathbf s}\|^{-v_1}$ for some constant $v_1>1$ and any $\|{\mathbf s}\|>c_2$. (ii) The support $\mathcal{S}$ of ${\mathbf S}\equiv\mathbf{P}_0^{\rm T}{\mathbf X}$ is compact.
The density function $f_{{\mathbf S}}(\cdot)$ of ${\mathbf S}$ is bounded and bounded away from zero on $\mathcal{S}$. In addition, it is $d$ times continuously differentiable with a bounded $d$th derivative on some open set $\mathcal{S}_0\supset\mathcal{S}$. (iii) For some constant $u>2$, the response $Y$ satisfies $\hbox{$\sup_{\s\in\ms}$}{\cal E}(Y^{2u}\mid{\mathbf S}={\mathbf s})<\infty$. (iv) The function $\kappa_t({\mathbf s}):={\cal E}[\{\pi^*({\mathbf X})\}^{-1}TY^t\mid {\mathbf S}={\mathbf s}]$ $(t=0,1)$ is $d$ times continuously differentiable and has \textcolor{black}{b}ounded $d$th \textcolor{black}{order} derivatives on $\mathcal{S}_0$. \end{assumption} \begin{assumption}[\textcolor{black}{Required only when $\mathbf{P}_0$ needs to be estimated}] \label{ahbey} (i) The support $ {\cal X}$ of ${\mathbf X}$ is such that $\hbox{$\sup_{\x\in\mx}$}\|{\mathbf x}\|_\infty<\infty$. (ii) The function $\nabla K(\cdot)$ has a bounded derivative satisfying $\|\partial \{\nabla K({\mathbf s})\}/\partial {\mathbf s}\|\leq c_1\,\|{\mathbf s}\|^{-v_2}$ for some constant $v_2>1$ and any $\|{\mathbf s}\|>c_2$. Further, it is locally Lipschitz continuous, i.e., $\|\nabla K({\mathbf s}_1)-\nabla K({\mathbf s}_2)\|\leq \|{\mathbf s}_1-{\mathbf s}_2\|\rho({\mathbf s}_2)$ for any $\|{\mathbf s}_1-{\mathbf s}_2\|\leq c$, where $\rho(\cdot)$ is some bounded, square integrable and differentiable function with a bounded derivative $\nabla\rho(\cdot)$ such that $\|\nabla\rho({\mathbf s})\|\leq c_1\|{\mathbf s}\|^{-v_3}$ for some constant $v_3>1$ and any $\|{\mathbf s}\|>c_2$. (iii) Let $\mbox{\boldmath $\chi$}_{t[j]}({\mathbf s})$ be the $j$th component of $\mbox{\boldmath $\chi$}_{t}({\mathbf s}):={\cal E}[{\mathbf X} \{\pi^*({\mathbf X})\}^{-1}TY^t\mid {\mathbf S}={\mathbf s}]$.
Then\textcolor{black}{,} $\mbox{\boldmath $\chi$}_{t[j]}({\mathbf s})$ is continuously differentiable and has a bounded first derivative on $\mathcal{S}_0$\textcolor{black}{, for each $t=0,1$ and $j=1,\ldots,p$.} \end{assumption} \vskip-0.02in In the above, Assumption \ref{al1} regulates the behavior of $\hat{\mathbf{P}}_k$ as an estimator of the transformation matrix $\mathbf{P}_0$. Moreover, the smoothness and moment conditions in Assumption \ref{akernel} are largely adopted from \citet{hansen2008uniform} and are fairly standard in the literature of kernel-based approaches \citep{newey1994large, andrews1995nonparametric, masry1996multivariate}. Further, we require Assumption \ref{ahbey} to control \textcolor{black}{the} errors from approximating $\mathbf{P}_0$ by $\hat{\mathbf{P}}_k$\textcolor{black}{,} while Assumption \ref{ahbey} (ii) \textcolor{black}{in particular} is satisfied by the second-order Gaussian kernel, among others. Similar conditions were imposed by \citet{chakrabortty2018efficient} to study unweighted kernel smoothing estimators with dimension reduction \textcolor{black}{in} low \textcolor{black}{(fixed)} dimensional \textcolor{black}{settings.} Based on these conditions, we now \textcolor{black}{provide} the uniform convergence rate of $\hat{m}_{n,k}({\mathbf x},\hat{\mathbf{P}}_k)$ \textcolor{black}{in the following result.} \begin{theorem}[\textcolor{black}{Uniform consistency of $\hat{m}_{n,k}(\cdot)$}]\label{theorem_ks_ate} Set $\xi_n:=\{(nh_n^r)^{-1}\hbox{log}\,n\}^{1/2}$, $b_n^{(1)}:=\xi_n+h_n^d$ and $b_{n,N}^{(2)}:=h_n^{-2}\alpha_n^2+h_n^{-1}\xi_n\alpha_n+\alpha_n+h_n^{-r/2}s_N$. Suppose that Assumptions \ref{ass_equally_distributed}, \ref{api4} and \ref{al1}--\ref{ahbey} hold true and that $b_n^{(1)}+b_{n,N}^{(2)}=o(1)$.
Then\textcolor{black}{,} \begin{eqnarray*} \hbox{$\sup_{\x\in\mx}$}|\hat{m}_{n,k}({\mathbf x},\hat{\mathbf{P}}_k)-\widetilde{m}({\mathbf x},\mathbf{P}_0)| ~=~O_p\{b_n^{(1)}+b_{n,N}^{(2)}\} \quad (k=1,\ldots,\mathbb{K}), \end{eqnarray*} where $\widetilde{m}({\mathbf x},\mathbf{P}):=\{\kappa_0(\mathbf{P}^{\rm T}{\mathbf x})\}^{-1}\kappa_1(\mathbf{P}^{\rm T}{\mathbf x})$\textcolor{black}{, with $\kappa_0(\cdot)$ and $\kappa_1(\cdot)$ as given in Assumption \ref{akernel}.} \end{theorem} \begin{remark}[Double robustness \textcolor{black}{of $\hat{m}_{n,k}$}]\label{remark_ks_ate_DR} As long as either $\pi^*({\mathbf x})=\pi({\mathbf x})$ or $m^*({\mathbf x})\equiv {\cal E}(Y\mid {\mathbf S}={\mathbf s})={\cal E}(Y\mid{\mathbf X}={\mathbf x})\equiv m({\mathbf x})$ but {\it not} necessarily both, we have\textcolor{black}{:} \begin{eqnarray*} \widetilde{m}({\mathbf x},\mathbf{P}_0)&~=~&({\cal E}[\{\pi^*({\mathbf X})\}^{-1}\pi({\mathbf X})\mid {\mathbf S}={\mathbf s}])^{-1}{\cal E}[\{\pi^*({\mathbf X})\}^{-1}\pi({\mathbf X})m({\mathbf X})\mid {\mathbf S}={\mathbf s}] \\ &~=~&{\cal E}(Y\mid{\mathbf S}={\mathbf s})~\equiv~ m^*({\mathbf x}). \end{eqnarray*} Theorem \ref{theorem_ks_ate} therefore shows that $\hat{m}_{n,k}({\mathbf x},\hat{\mathbf{P}}_k)$ is a \textcolor{black}{\it DR estimator} of $m^*({\mathbf x})$. \textcolor{black}{This is an important consequence of the IPW scheme used in the construction of $\hat{m}_{n,k}(\cdot)$, and its benefits (in the bigger context of our final SS estimator) were discussed in detail in Remark \ref{remark_ks_ate_weight}. 
} \end{remark} \begin{remark}[Uniform convergence \textcolor{black}{-- some examples}]\label{remark_choice_of_P0} According to the result in Theorem \ref{theorem_ks_ate}, the uniform consistency of $\hat{m}_{n,k}({\mathbf x},\hat{\mathbf{P}}_k)$ as an estimator of $\widetilde{m}({\mathbf x},\mathbf{P}_0)$ holds at the \textcolor{black}{\it optimal bandwidth} \textcolor{black}{order} $h_{\hbox{\tiny opt}}=O\{n^{-1/(2d+r)}\}$ for any kernel order $d\geq 2$ and \textcolor{black}{a} fixed $r$, given \begin{eqnarray} s_N~=~o \{n^{-r/(4d+2r)}\}\quad \hbox{and} \quad \alpha_n~=~o\{n^{-1/(2d+r)}\}. \label{s_N_alpha_n} \end{eqnarray} The first part of \eqref{s_N_alpha_n} is actually weaker than the assumption \textcolor{black}{$s_N=o(n^{-1/2})$ used} in Corollary \ref{corate} and thus \textcolor{black}{should be} easy to ensure in the SS setting \eqref{disproportion}. As regards the validity of the second part, we consider it for \textcolor{black}{some} frequently used {\it choices of $\mathbf{P}_0$} including, for instance, the least squares regression parameter $(r=1)$ satisfying ${\cal E}\{{\mathbf X}(Y-\mathbf{P}_0^{\rm T}{\mathbf X})\}=\mathbf{0}_p$, and the $r$ leading eigenvectors of the matrix $\hbox{var}\{{\cal E}({\mathbf X}\mid Y)\}$, which can be estimated by \textcolor{black}{sliced} inverse regression \citep{li1991sliced}. When $p$ is fixed, there typically exist $n^{1/2}$-consistent estimators $\hat{\mathbf{P}}_k$ for $\mathbf{P}_0$\textcolor{black}{,} so the second part of \eqref{s_N_alpha_n} is satisfied by the fact that $\alpha_n=O(n^{-1/2})$. In high dimensional scenarios where $p$ is divergent and greater than $n$, one can obtain $\hat{\mathbf{P}}_k$ from the \textcolor{black}{$L_1$-}regularized version\textcolor{black}{(s)} of linear regression or \textcolor{black}{sliced} inverse regression \citep{lin2019sparse}.
The sequence $\alpha_n$ satisfies $\alpha_n=O\{q(\hbox{log}\,p/n)^{1/2}\}$ when the $L_1$ penalty is applied under some suitable conditions \citep{buhlmann2011statistics, negahban2012unified, wainwright2019high}, where $q:=\|\mathbf{P}_0\|_0$ represents the sparsity level of $\mathbf{P}_0$. Thus\textcolor{black}{,} the second part of \eqref{s_N_alpha_n} holds as long as \begin{eqnarray*} q(\hbox{log}\,p)^{1/2}~=~o\{n^{(2d+r-2)/(4d+2r)}\}. \end{eqnarray*} \end{remark} \subsection{Outcome model for the QTE}\label{sec_nf_qte} As regards the outcome model $\phi^*(\cdot,\cdot)$ for the QTE, we adopt the same strategy as in Section \ref{sec_nf_ate}. Specifically, \textcolor{black}{with $\mathbf{P}_0$ similar to before,} we set \begin{eqnarray} \phi^*({\mathbf x},\theta)~\equiv~{\cal E}\{\psi(Y,\theta)\mid \mathbf{P}_0^{\rm T}{\mathbf X}=\mathbf{P}_0^{\rm T}{\mathbf x}\}~\equiv~{\cal E}\{\psi(Y,\theta)\mid {\mathbf S}={\mathbf s}\}\textcolor{black}{,} \label{phis} \end{eqnarray} and estimate it by the IPW type kernel smoothing estimator\textcolor{black}{:} \begin{eqnarray} \hat{\phi}_{n,k}({\mathbf x},\theta)\equiv\hat{\phi}_{n,k}({\mathbf x},\theta,\hat{\mathbf{P}}_k):=\{\hat{e}^{(0)}_{n,k}({\mathbf x},\theta,\hat{\mathbf{P}}_k)\}^{-1}\hat{e}^{(1)}_{n,k}({\mathbf x},\theta,\hat{\mathbf{P}}_k)\quad (k=1,\ldots,\mathbb{K}), \label{ks_qte} \end{eqnarray} where\textcolor{black}{, with $K(\cdot)$, $h_n$ and $K_h(\cdot)$ similarly defined as in Section \ref{sec_nf_ate},} \begin{eqnarray*} \hat{e}^{(t)}_{n,k}({\mathbf x},\theta,\mathbf{P})~:=~h_n^{-r} \E_{n,k}[\{\hat{\pi}_N({\mathbf X})\}^{-1}T \{\psi(Y,\theta)\}^tK_h\{\mathbf{P}^{\rm T}({\mathbf x}-{\mathbf X})\}]\quad (t=0,1).
\end{eqnarray*} We first verify Assumption \ref{abound} for \textcolor{black}{a choice of} $\phi^*({\mathbf x},\theta)$ \textcolor{black}{as} in \eqref{phis}\textcolor{black}{, via the following result.} \begin{proposition}\label{thphi} If the conditional density $f(\cdot\mid{\mathbf s})$ of $Y$ given ${\mathbf S}={\mathbf s}$ is such that \begin{eqnarray} {\cal E}[\{\hbox{$\sup_{\theta\in\mbtv}$} f(\theta\mid{\mathbf S})\}^2]~<~\infty, \label{conditional_density} \end{eqnarray} then Assumption \ref{abound} is satisfied by setting $\phi^*({\mathbf X},\theta)\equiv{\cal E}\{\psi(Y,\theta)\mid{\mathbf S}\}$. \end{proposition} We now study the uniform convergence of the estimator $\hat{\phi}_{n,k}({\mathbf x},\theta)$. It is noteworthy that establishing properties of $\hat{\phi}_{n,k}({\mathbf x},\theta)$ is \textcolor{black}{\it even more} technically involved compared to the case of $\hat{m}_{n,k}({\mathbf x})$ in Section \ref{sec_nf_ate}, since handling the function class $\{\psi(Y,\theta):\theta\in \mathcal{B}({\boldsymbol\theta},{\varepsilon})\}$ inevitably needs tools from empirical process theory. We itemize the relevant assumptions as follows. \begin{assumption}[\textcolor{black}{Smoothness conditions}]\label{akernel_qte} (i) Assumption \ref{akernel} (i) \textcolor{black}{holds}. (ii) Assumption \ref{akernel} (ii) \textcolor{black}{holds}. (iii) \textcolor{black}{T}he function $\varphi_t({\mathbf s},\theta):={\cal E}[\{\pi^*({\mathbf X})\}^{-1}T\{\psi(Y,\theta)\}^t\mid {\mathbf S}={\mathbf s}]$ $(t=0,1)$ is $d$ times continuously differentiable \textcolor{black}{with respect to ${\mathbf s}$,} and has \textcolor{black}{b}ounded $d$th \textcolor{black}{order} derivatives on $\mathcal{S}_0\times\mb(\vt,{\varepsilon})$ \textcolor{black}{for some $\varepsilon > 0$}. \end{assumption} \begin{assumption}[\textcolor{black}{Required only \textcolor{black}{if} $\mathbf{P}_0$ needs to be estimated}] \label{ahbe} (i) Assumption \ref{ahbey} (i) \textcolor{black}{holds}.
(ii) The function $\nabla K(\cdot)$ is continuously differentiable and satisfies $\|\partial \{\nabla K({\mathbf s})\}/\partial {\mathbf s}\|$ \textcolor{black}{$\leq c_1\,\|{\mathbf s}\|^{-v_2}$} for some constant $v_2>1$ and any $\|{\mathbf s}\|>c_2$. Further, it is locally Lipschitz continuous, i.e., $\|\nabla K({\mathbf s}_1)-\nabla K({\mathbf s}_2)\|\leq \|{\mathbf s}_1-{\mathbf s}_2\|\rho({\mathbf s}_2)$ for any $\|{\mathbf s}_1-{\mathbf s}_2\|\leq c$, where $\rho(\cdot)$ is some bounded and square integrable function with a bounded derivative $\nabla\rho(\cdot)$. (iii) Let ${\boldsymbol\eta}_{t[j]}({\mathbf s},\theta)$ be the $j$th component of ${\boldsymbol\eta}_{t}({\mathbf s},\theta):={\cal E}[{\mathbf X} \{\pi^*({\mathbf X})\}^{-1}T\{\psi(Y,\theta)\}^t\mid {\mathbf S}={\mathbf s}]$. Then, with respect to ${\mathbf s}$, the function ${\boldsymbol\eta}_{t[j]}({\mathbf s},\theta)$ is continuously differentiable and has a bounded first derivative on $\mathcal{S}_0\times\mb(\vt,{\varepsilon})$ \textcolor{black}{ for some $\varepsilon > 0$,} \textcolor{black}{for each $t=0,1$ and $j=1,\ldots p$.} \end{assumption} The above two assumptions can be viewed as \textcolor{black}{the natural} variant\textcolor{black}{s} of Assumptions \ref{akernel}--\ref{ahbey} adapted \textcolor{black}{suitably} for the case of the QTE. We now propose the following result \textcolor{black}{for $\hat{\phi}_{n,k}(\cdot,\cdot)$}. \begin{theorem}[\textcolor{black}{Uniform convergence rate of $\hat{\phi}_{n,k}(\cdot,\cdot)$}]\label{thhd} Set $\gamma_n:=[(nh_n^r)^{-1}\{\hbox{log}(h_n^{-r})+\hbox{log}(\hbox{log}\,n)\}]^{1/2}$, $a_{n}^{(1)}:=\gamma_n+h_n^d$ and $a_{n,N}^{(2)}:=h_n^{-2}\alpha_n^2+h_n^{-1}\gamma_n\alpha_n+\alpha_n+h_n^{-r/2}s_N$. Suppose that Assumptions \ref{ass_equally_distributed}, \ref{api}, \ref{al1}, \ref{akernel_qte} and \ref{ahbe} hold true and that $a_{n}^{(1)}+a_{n,N}^{(2)}=o(1)$. 
Then \begin{eqnarray*} \hbox{$\sup_{\x\in\mx,\,\theta\in\mbtv}$}|\hat{\phi}_{n,k}({\mathbf x},\theta,\hat{\mathbf{P}}_k)-\tilde{\phi}({\mathbf x},\theta,\mathbf{P}_0)| ~=~O_p\{a_{n}^{(1)}+a_{n,N}^{(2)}\} \quad (k=1,\ldots,\mathbb{K}), \end{eqnarray*} where $\tilde{\phi}({\mathbf x},\theta,\mathbf{P}):=\{\varphi_0(\mathbf{P}^{\rm T}{\mathbf x},\theta)\}^{-1}\varphi_1(\mathbf{P}^{\rm T}{\mathbf x},\theta)$ \textcolor{black}{with $\varphi_0(\cdot)$ and $\varphi_1(\cdot)$ as in Assumption \ref{akernel_qte}.} \end{theorem} \begin{remark}[Double robustness and uniform convergence \textcolor{black}{of $\hat{\phi}_{n,k}(\cdot,\cdot)$}] Whenever either $\pi^*({\mathbf x})=\pi({\mathbf x})$ or $\phi^*({\mathbf x},\theta)\equiv {\cal E}\{\psi(Y,\theta)\mid {\mathbf S}={\mathbf s}\}={\cal E}\{\psi(Y,\theta)\mid {\mathbf X}={\mathbf x}\}\equiv\phi({\mathbf x},\theta)$\textcolor{black}{,} but {\it not} necessarily both, we can see \textcolor{black}{that:} \begin{eqnarray*} \tilde{\phi}({\mathbf x},\theta,\mathbf{P}_0)&~=~&({\cal E}[\{\pi^*({\mathbf X})\}^{-1}\pi({\mathbf X})\mid {\mathbf S}={\mathbf s}])^{-1}{\cal E}[\{\pi^*({\mathbf X})\}^{-1}\pi({\mathbf X})\phi({\mathbf X},\theta)\mid {\mathbf S}={\mathbf s}] \\ &~=~&{\cal E}\{\psi(Y,\theta)\mid{\mathbf S}={\mathbf s}\}~\equiv~\phi^*({\mathbf x},\theta). \end{eqnarray*} In this sense\textcolor{black}{,} $\hat{\phi}_{n,k}({\mathbf x},\theta,\hat{\mathbf{P}}_k)$ is a \textcolor{black}{\it DR estimator} of $\phi^*({\mathbf x},\theta)$. 
Moreover, it is straightforward to show $\hat{\phi}_{n,k}({\mathbf x},\theta,\hat{\mathbf{P}}_k)$ is uniformly consistent for $\tilde{\phi}({\mathbf x},\theta,\mathbf{P}_0)$ at the optimal bandwidth rate under the same conditions on $\{s_N,\alpha_n\}$ as those in Remark \ref{remark_choice_of_P0}, while the choices of $\{\mathbf{P}_0,\hat{\mathbf{P}}_k\}$ therein also apply to the case of $\hat{\phi}_{n,k}({\mathbf x},\theta,\hat{\mathbf{P}}_k)$\textcolor{black}{; s}ee the discussion in Remark \ref{remark_choice_of_P0} for details. \end{remark} Theorem \ref{thhd} \textcolor{black}{therefore} has shown \textcolor{black}{(among other things) that} the sequences $\{d_{n,1}, d_{n,2}, d_{n,\infty}\}$ in \textcolor{black}{our high-level} Assumption \ref{aest} \textcolor{black}{on $\hat{\phi}_{n,k}(\cdot,\cdot)$} are all of \textcolor{black}{order} $o(1)$ when one sets\textcolor{black}{:} \begin{eqnarray} \hat{\psi}_{n,k}({\mathbf X},\theta)~\equiv~\hat{\phi}_{n,k}({\mathbf X},\theta,\hat{\mathbf{P}}_k)-\phi^*({\mathbf X},\theta), \label{psihat} \end{eqnarray} where $\phi^*({\mathbf x},\theta)$ and $\hat{\phi}_{n,k}({\mathbf x},\theta,\mathbf{P})$ are as defined in (\ref{phis}) and (\ref{ks_qte}), respectively. \textcolor{black}{Furthermore, as a final verification of our high-level conditions in Assumption \ref{aest},} we validate the condition \eqref{vc} \textcolor{black}{therein} on the bracketing number \textcolor{black}{via the following proposition}. \begin{proposition}\label{thbn} Under the condition \eqref{conditional_density}, the function $\hat{\psi}_{n,k}({\mathbf X},\theta)$ in \eqref{psihat} satisfies\textcolor{black}{:} \begin{eqnarray*} N_{[\,]}\{\eta,\mathcal{P}_{n,k}\mid \mathcal{L},L_2({\mathbb P}_{\mathbf X})\}~\leq~ c\,(n+1)\eta^{-1}, \end{eqnarray*} where the set $\mathcal{P}_{n,k}$ is as defined in \eqref{pnk}. 
Therefore\textcolor{black}{,} the sequence $a_n$ \textcolor{black}{characterizing the growth of} the function $H( \mathcal{L})$ in the condition \eqref{vc} \textcolor{black}{of Assumption \ref{aest}} is of \textcolor{black}{order} $O(n)$. \end{proposition} \begin{remark}[Other outcome model estimators]\label{remark_other_nuisance_functions} \textcolor{black}{Finally, as we conclude our discussion on the nuisance functions' estimation, it is worth pointing out that i}n addition to the IPW type kernel smoothing estimators with necessary dimension reduction, which have been investigated thoroughly in Sections \ref{sec_nf_ate}--\ref{sec_nf_qte}, one may also employ \textcolor{black}{\it any} other reasonable choices of $\hat{m}_{n,k}(\cdot)$ and $\hat{\phi}_{n,k}(\cdot,\cdot)$ to construct $\hat{\mu}_{\mbox{\tiny SS}}$ and $\hvt_{\mbox{\tiny SS}}$, as long as they satisfy the high-level conditions in Sections \ref{secos}--\ref{secqte}. Examples include estimators generated by parametric (\textcolor{black}{e.g.,} linear$/$logistic) regression \textcolor{black}{methods, possibly with penalization in high dimensional settings \citep{farrell2015robust},} and random forest \citep{breiman2001random} without the use of dimension reduction, as well as many \textcolor{black}{other} popular non-parametric machine learning approaches that have been advocated by some recent works for other related problems in analogous settings \citep{chernozhukov2018double, farrell2021deep}. We will consider some of these methods in \textcolor{black}{our} simulations and data analysis\textcolor{black}{,} while omitting their theoretical study, which is not of our primary interest in this article\textcolor{black}{; see} Sections \ref{sec_simulations} and \ref{sec_data_analysis} for their implementation details and numerical performance.
\end{remark} \section{Simulations}\label{sec_simulations} We now investigate the numerical performance of our \textcolor{black}{SS ATE and QTE estimators} $\hat{\mu}_{\mbox{\tiny SS}}$ and $\hvt_{\mbox{\tiny SS}}$ on simulated data \textcolor{black}{under a variety of data generating mechanisms}. \textcolor{black}{(We clarify here that without loss of generality we focus on $\mu_0$ and ${\boldsymbol\theta}$ in \eqref{generic_notation} as our targets, though with some abuse of terminology, we occasionally refer to them as ATE and QTE respectively.)} We set the sample sizes $n\in\{200,500\}$ and $N=10,000$ throughout. The covariates ${\mathbf X}$ are drawn from a $p$-dimensional normal distribution with a zero mean and an identity covariance matrix, where $p\in\{10,200\}$ \textcolor{black}{denotes low and high dimensional choices, respectively}. For any kernel smoothing steps involved, we always use the second order Gaussian kernel and select the bandwidths \textcolor{black}{using} cross validation. Regularization is applied to all regression procedures via the $L_1$ penalty when $p=200$, while the tuning parameters are chosen \textcolor{black}{using} ten-fold cross validation. The number of folds in the cross fitting steps \eqref{ds1}--\eqref{ds2} and \eqref{ds3}--\eqref{ds4} is $\mathbb{K}=10$. By the term ``complete-case'', we refer to conducting a process on $\{(Y_i,T_i=1,{\mathbf X}_i^{\rm T})^{\rm T}:i\in{\cal I}^*\}$ without weighting, where ${\cal I}^*\equiv{\cal I}_k^-$ if cross fitting is involved while ${\cal I}^*\equiv{\cal I}$ otherwise. 
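Before describing the designs, we note that the IPW type kernel smoothing estimator \eqref{ks_ate}, with the second order Gaussian kernel used throughout, admits a compact implementation. The sketch below (ours; a single evaluation point, a single fold, and a known constant propensity for simplicity) is only meant to make the ratio form $\{\hat{\ell}^{(0)}_{n,k}\}^{-1}\hat{\ell}^{(1)}_{n,k}$ concrete:

```python
import numpy as np

def ipw_kernel_smoother(x_eval, X, Y, T, pi_hat, P_hat, h):
    """IPW Nadaraya-Watson estimate of m*(x) = E{Y | P0^T X = P0^T x}: the
    ratio l1/l0 with l_t proportional to
        sum_i T_i Y_i^t K{P^T(x - X_i)/h} / pi_hat_i,
    so the h^{-r} normalization cancels. Product Gaussian kernel."""
    U = ((x_eval - X) @ P_hat) / h               # projected, scaled differences
    K = np.exp(-0.5 * np.sum(U ** 2, axis=1))    # Gaussian kernel (unnormalized)
    w = T * K / pi_hat                           # IPW weights (source of the DR property)
    return np.sum(w * Y) / np.sum(w)
```

With a linear outcome and a constant propensity, evaluating at a point recovers the conditional mean up to the usual smoothing bias; in the actual simulations the bandwidth is chosen by cross validation and $\hat{\pi}_N(\cdot)$ replaces the known propensity.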
\subsection{\textcolor{black}{Data generating mechanisms and nuisance estimator choices}} \textcolor{black}{We use the following choices as the \emph{true} data generating models for $T \mid {\mathbf X}$ and $Y \mid {\mathbf X} $.} Let ${\mathbf X}_q:=({\mathbf X}_{[1]},\ldots,{\mathbf X}_{[q]})^{\rm T}$ where $q=p$ when $p=10$, and $q\in\{5,\ceil{p^{1/2}}\}$ when $p=200$, \textcolor{black}{representing the (effective) \emph{sparsity} (fully dense for $p = 10$, and sparse or moderately dense for $p = 200$, respectively) of the true data generating models for the nuisance functions, as described below}. \vskip0.05in \textcolor{black}{For the \emph{propensity score} $\pi({\mathbf X})$}, \textcolor{black}{and with $T \mid {\mathbf X} \sim \mbox{Bernoulli} \{\pi({\mathbf X})\}$,} we set \textcolor{black}{the choices:} \begin{enumerate}[(i)] \item $\pi({\mathbf X})\equiv h(\mathbf{1}_q^{\rm T}{\mathbf X}_q/q^{1/2})$, a {\it linear }model; \item $\pi({\mathbf X})\equiv h\{\mathbf{1}_q^{\rm T}{\mathbf X}_q/q^{1/2}+(\mathbf{1}_q^{\rm T}{\mathbf X}_q)^2/(2q)\}$, a {\it single index} model; \item $\pi({\mathbf X})\equiv h\{\mathbf{1}_q^{\rm T}{\mathbf X}_q/q^{1/2}+\|{\mathbf X}_q\|^2/(2q)\}$, a {\it quadratic} model. \end{enumerate} In the above $h(x)\equiv\{1+\exp(-x)\}^{-1}$ \textcolor{black}{denotes the usual ``expit'' link function for a logistic model}. To approximate $\pi({\mathbf X})$ using the data $ \mathcal{U}$, we obtain the \emph{estimator} $\hat{\pi}_N({\mathbf x})$ from\textcolor{black}{:} \begin{enumerate}[I.] \item unregularized or regularized \textcolor{black}{(linear)} logistic regression of $T$ vs. ${\mathbf X}$ (Lin), \textcolor{black}{which correctly specifies the propensity score (i) but misspecifies (ii) and (iii)}; ~~\textcolor{black}{or} \item unregularized or regularized \textcolor{black}{(quadratic)} logistic regression of $T$ vs. 
$({\mathbf X}^{\rm T},{\mathbf X}_{[1]}^2,\ldots,{\mathbf X}_{[p]}^2)^{\rm T}$ (Quad), \textcolor{black}{which correctly specifies the propensity scores (i) and (iii) but misspecifies (ii)}. \end{enumerate} \textcolor{black}{T}he \emph{conditional outcome model} is $Y\mid{\mathbf X}\sim\mathcal{N}\{m({\mathbf X}),1\}$ with \textcolor{black}{choices of $m(\cdot)$ as follows:} \begin{enumerate}[(a)] \item $m({\mathbf X})\equiv \mathbf{1}_q^{\rm T}{\mathbf X}_q$, a {\it linear} model; \item $m({\mathbf X})\equiv \mathbf{1}_q^{\rm T}{\mathbf X}_q+(\mathbf{1}_q^{\rm T}{\mathbf X}_q)^2/q$, a {\it single index} model; \item $m({\mathbf X})\equiv \mathbf{1}_q^{\rm T}{\mathbf X}_q+\|{\mathbf X}_q\|^2/3$, a {\it quadratic} model; \item $m({\mathbf X})\equiv 0$, a {\it null} model; \item $m({\mathbf X})\equiv \mathbf{1}_p^{\rm T}{\mathbf X}\{1+2(\mathbf{0}_{p/2}^{\rm T},\mathbf{1}_{p/2}^{\rm T}){\mathbf X}/p\}$, a {\it double index} model. \end{enumerate} The outcome models (d) and (e) are considered for cases with $p=10$ only and their results are summarized in \textcolor{black}{Appendix \ref{sm_simulations}} of the Supplementary Material. The following discussions mainly focus on the outcome models (a)--(c). \textcolor{black}{T}he \textcolor{black}{\emph{estimators}} $\hat{m}_{n,k}({\mathbf x})$ and $\hat{\phi}_{n,k}({\mathbf x},\hvt_{\mbox{\tiny INIT}})$ are constructed based on the data $ \mathcal{L}_k^-$ through\textcolor{black}{:} \begin{enumerate}[I.] \item kernel smoothing (KS), \textcolor{black}{in} \eqref{ks_ate} and \eqref{ks_qte}, where \textcolor{black}{the $p \times r$ transformation} $\hat{\mathbf{P}}_k$ is \textcolor{black}{chosen as:} \vskip0.04in \begin{enumerate}[1.] \item the slope vector ($r=1$) from the complete-case version of unregularized or regularized linear regression of $Y$ vs.
${\mathbf X}$ (KS$_1$), \textcolor{black}{which correctly specifies the outcome models (a), (b) and (d) but misspecifies (c) and (e)}; ~~\textcolor{black}{or} \item the first two directions ($r=2$) selected by the complete-case version of the unregularized (with $\ceil{n/5}$ slices of equal width) or regularized (with $4$ slices of equal size) sliced inverse regression \citep{li1991sliced, lin2019sparse} of $Y$ vs. ${\mathbf X}$ (KS$_2$), \textcolor{black}{which correctly specifies the outcome models (a), (b), (d) and (e) but misspecifies (c)}; ~~\textcolor{black}{or} \end{enumerate} \vskip0.04in \item parametric regression (PR), giving \begin{eqnarray*} \hat{m}_{n,k}({\mathbf x})~\equiv~(1,{\mathbf x}^{\rm T})\widehat{\bxi}_k \textcolor{black}{\quad \hbox{and} \quad} \hat{\phi}_{n,k}({\mathbf x},\hvt_{\mbox{\tiny INIT}})~\equiv~ h\{(1,{\mathbf x}^{\rm T})\widehat{\bgamma}_k\}-\tau\textcolor{black}{,} \end{eqnarray*} with $\widehat{\bxi}_k/\widehat{\bgamma}_k$ \textcolor{black}{respectively being} the slope vector from the complete-case version of unregularized or regularized linear$/$logistic regression of $Y/I(Y<\hvt_{\mbox{\tiny INIT}})$ vs. ${\mathbf X}$ using $\mathcal{L}_k^-$, \textcolor{black}{which correctly specifies the outcome models \{(a), (d)\} and (d) for the ATE and QTE estimation, respectively, while misspecifying the others.} \end{enumerate} \textcolor{black}{In general, our choices of $\{\pi({\mathbf x}),m({\mathbf x})\}$ incorporate \textcolor{black}{both} linear \textcolor{black}{and non-linear effects, including} quadratic and interaction effects, that are commonly encountered in practice.
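For concreteness, the data generating mechanisms above can be sketched in a few lines of code. This is an illustrative sketch only: it assumes standard normal covariates, and the function and argument names (e.g. \texttt{simulate}, \texttt{prop}, \texttt{outcome}) are ours, not the paper's.

```python
import numpy as np

def expit(x):
    # h(x) = 1 / (1 + exp(-x)), the logistic "expit" link used above
    return 1.0 / (1.0 + np.exp(-x))

def simulate(n, p, q, prop="linear", outcome="linear", rng=None):
    """Draw (X, T, Y) under designs (i)-(iii) for pi(X) and (a)-(c) for m(X).

    Covariates are taken as i.i.d. standard normal here purely for
    illustration; the paper specifies its covariate distribution elsewhere.
    """
    rng = np.random.default_rng(rng)
    X = rng.standard_normal((n, p))
    s = X[:, :q].sum(axis=1)              # 1_q^T X_q
    lin = s / np.sqrt(q)                  # 1_q^T X_q / q^(1/2)
    # propensity score pi(X): (i) linear, (ii) single index, (iii) quadratic
    if prop == "linear":
        pi = expit(lin)
    elif prop == "single_index":
        pi = expit(lin + s**2 / (2 * q))
    else:                                  # "quadratic": ||X_q||^2 / (2q) term
        pi = expit(lin + (X[:, :q]**2).sum(axis=1) / (2 * q))
    T = rng.binomial(1, pi)                # T | X ~ Bernoulli(pi(X))
    # outcome mean m(X): (a) linear, (b) single index, (c) quadratic
    if outcome == "linear":
        m = s
    elif outcome == "single_index":
        m = s + s**2 / q
    else:                                  # "quadratic": ||X_q||^2 / 3 term
        m = s + (X[:, :q]**2).sum(axis=1) / 3.0
    Y = m + rng.standard_normal(n)         # Y | X ~ N(m(X), 1)
    return X, T, Y, pi
```

For instance, `simulate(500, 200, 5, prop="quadratic", outcome="single_index")` reproduces the moderately sparse design with propensity model (iii) and outcome model (b).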
Also, our approaches to constructing $\{\hat{\pi}_N({\mathbf x}), \hat{m}_{n,k}({\mathbf x}), \hat{\phi}_{n,k}({\mathbf x},\theta)\}$ represent a broad class of flexible and user-friendly (parametric or semi-parametric) strategies \textcolor{black}{often adopted} for modeling the relation between a continuous or binary response and a set of (possibly high dimensional) covariates.} \textcolor{black}{They also allow for a variety of scenarios in terms of correct/incorrect specifications of the (working) nuisance models.} \textcolor{black}{B}ased on the various $\hat{m}_{n,k}(\cdot)$ and $\hat{\phi}_{n,k}(\cdot,\cdot)$ described above, we obtain $\hat{m}_n(\cdot)$ and $\hat{\phi}_n(\cdot,\cdot)$ via the cross fitting procedures \eqref{ds1}--\eqref{ds2} and \eqref{ds3}--\eqref{ds4}. In addition, for the QTE estimation, we plug $\hvt_{\mbox{\tiny INIT}}$ and $\hat{f}_n(\cdot)$ from Remark \ref{remark_qte_initial_estimator} into $\hvt_{\mbox{\tiny SS}}$ defined by \eqref{ss_qte}, while obtaining the initial estimator and estimated density for $\hvt_{\mbox{\tiny SUP}}$ in \eqref{sup_qte} through the same IPW approach but with $\hat{\pi}_n(\cdot)$ instead of $\hat{\pi}_N(\cdot)$ \textcolor{black}{(i.e., the version based on $ \mathcal{L}$ instead of $ \mathcal{U}$). 
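The cross-fitting construction just described (train each nuisance estimator on $\mathcal{L}_k^-$, i.e., the labeled data outside fold $k$, and predict on fold $k$) can be sketched generically as follows. The \texttt{fit} callback, the fold assignment, and all names are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np

def cross_fit_predict(X_lab, Y_lab, X_new, fit, K=2, rng=None):
    """Generic K-fold cross-fitting of a nuisance regression.

    For each fold k, the estimator is trained on the labeled data outside
    fold k and used to predict within fold k; predictions at new points
    average the K fold-specific fits.  `fit(X, Y)` must return a callable
    that maps a covariate matrix to predictions.
    """
    n = len(Y_lab)
    rng = np.random.default_rng(rng)
    folds = rng.permutation(n) % K          # random, near-balanced K-fold split
    m_hat_lab = np.empty(n)
    m_hat_new = np.zeros(len(X_new))
    for k in range(K):
        in_k = folds == k
        model_k = fit(X_lab[~in_k], Y_lab[~in_k])   # trained on fold-k complement
        m_hat_lab[in_k] = model_k(X_lab[in_k])      # out-of-fold predictions
        m_hat_new += model_k(X_new) / K             # average over the K fits
    return m_hat_lab, m_hat_new
```

Any of the working outcome models above (KS$_1$, KS$_2$, PR) could be plugged in through \texttt{fit}; e.g. `fit = lambda Xt, Yt: (lambda b: (lambda Xq: Xq @ b))(np.linalg.lstsq(Xt, Yt, rcond=None)[0])` gives an unregularized linear fit.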
The same $\hat{\pi}_n(\cdot)$ is also used for constructing the supervised ATE estimator $\hat{\mu}_{\mbox{\tiny SUP}}$ in \eqref{sup_ate}.} \textcolor{black}{For all combinations of the true data generating models, and for \emph{any} of the choices of the nuisance function estimators as listed above, we implement our SS ATE and QTE estimators, evaluate their performances for both estimation \textcolor{black}{(see Section \ref{sec_sim_estimation})} and inference \textcolor{black}{(see Section \ref{sec_sim_inference})}, and also compare their estimation efficiency with respect to a variety of corresponding supervised estimators, \eqref{sup_ate} and \eqref{sup_qte}, as well as their oracle versions} \textcolor{black}{(see their formal descriptions in Section \ref{sec_sim_estimation} below)}. All the results below are summarized from 500 replications. \begin{table} \def~{\hphantom{0}} \caption{Efficiencies of the ATE estimators relative to the corresponding oracle supervised estimators; \textcolor{black}{see Remark \ref{remark_interpretation_RE} for interpretations of these relative efficiencies.} Here\textcolor{black}{,} $n$ denotes the labeled data size, $p$ the number of covariates, $q$ the model sparsity, $m({\mathbf X})\equiv{\cal E}(Y\mid{\mathbf X})$, $\pi({\mathbf X})\equiv{\cal E}(T\mid{\mathbf X})$, $\hat{\pi}({\mathbf X})$ \textcolor{black}{--} the estimated propensity score, Lin \textcolor{black}{--} logistic regression of $T$ vs. ${\mathbf X}$\textcolor{black}{,} and Quad \textcolor{black}{--} logistic regression of $T$ vs. $({\mathbf X}^{\rm T},{\mathbf X}_{[1]}^2,\ldots,{\mathbf X}_{[p]}^2)^{\rm T}$; KS$_1/$KS$_2$ represent kernel smoothing on the one$/$two direction(s) selected by linear regression$/$\textcolor{black}{sliced} inverse regression; PR \textcolor{black}{denotes} parametric regression\textcolor{black}{,} and ORE oracle relative efficiency.
The \textbf{\textcolor{navyblue}{blue}} color implies the best efficiency in each case.}{ \resizebox{\textwidth}{!}{ \begin{tabular}{ccc||ccc|ccc||ccc|ccc||c} \hline \multicolumn{3}{c||}{\multirow{2}{*}{$p=10$}} & \multicolumn{6}{c||}{$n=200$} & \multicolumn{6}{c||}{$n=500$} & \multirow{3}{*}{ORE} \\ \cline{4-15} & & & \multicolumn{3}{c|}{Supervised} & \multicolumn{3}{c||}{\textbf{SS}}& \multicolumn{3}{c|}{Supervised} & \multicolumn{3}{c||}{\textbf{SS}}& \\ $m({\mathbf X})$ & $\pi({\mathbf X})$ & $\hat{\pi}({\mathbf X})$ & KS$_1$ & KS$_2$ & PR & KS$_1$ & KS$_2$ & PR & KS$_1$ & KS$_2$ & PR & KS$_1$ & KS$_2$ & PR & \\ \hline \multirow{6}{*}{(a)} & (i) & Lin & 0.87 & 0.86 & 0.96 & 2.99 & 2.74 & \textcolor{navyblue}{\bf 3.72} & 0.99 & 0.98 & 0.99 & 3.35 & 3.19 & \textcolor{navyblue}{\bf 3.70} & 4.37 \\ & & Quad & 0.79 & 0.63 & 0.91 & 3.00 & 2.74 & \textcolor{navyblue}{\bf 3.74} & 0.97 & 0.96 & 0.98 & 3.34 & 3.20 & \textcolor{navyblue}{\bf 3.69} & 4.37 \\ & (ii) & Lin & 0.93 & 0.91 & 0.99 & 3.37 & 3.10 & \textcolor{navyblue}{\bf 4.05} & 1.00 & 1.00 & 0.99 & 3.64 & 3.55 & \textcolor{navyblue}{\bf 3.93} & 4.78 \\ & & Quad & 0.88 & 0.85 & 0.91 & 3.43 & 3.19 & \textcolor{navyblue}{\bf 4.07} & 0.99 & 1.00 & 0.98 & 3.68 & 3.59 & \textcolor{navyblue}{\bf 3.96} & 4.78 \\ & (iii) & Lin & 0.87 & 0.84 & 0.95 & 2.89 & 2.53 & \textcolor{navyblue}{\bf 4.05} & 0.96 & 0.95 & 0.99 & 3.21 & 3.08 & \textcolor{navyblue}{\bf 3.88} & 4.99 \\ & & Quad & 0.86 & 0.81 & 0.91 & 3.08 & 2.70 & \textcolor{navyblue}{\bf 4.13} & 0.98 & 0.98 & 1.00 & 3.44 & 3.31 & \textcolor{navyblue}{\bf 3.92} & 4.99 \\ \hline \multirow{6}{*}{(b)} & (i) & Lin & 0.93 & 0.92 & 0.51 & \textcolor{navyblue}{\bf 3.62} & 3.42 & 1.03 & 0.99 & 0.98 & 0.67 & \textcolor{navyblue}{\bf 3.73} & 3.61 & 1.17 & 5.07 \\ & & Quad & 0.92 & 0.77 & 0.40 & \textcolor{navyblue}{\bf 3.64} & 3.49 & 1.02 & 0.98 & 0.98 & 0.61 & \textcolor{navyblue}{\bf 3.74} & 3.59 & 1.16 & 5.07 \\ & (ii) & Lin & 0.94 & 0.86 & 0.26 & \textcolor{navyblue}{\bf 
2.29} & 1.69 & 0.36 & 0.92 & 0.91 & 0.15 & \textcolor{navyblue}{\bf 2.29} & 2.16 & 0.18 & 3.55 \\ & & Quad & 0.85 & 0.81 & 0.28 & \textcolor{navyblue}{\bf 2.35} & 1.76 & 0.41 & 0.91 & 0.90 & 0.17 & \textcolor{navyblue}{\bf 2.34} & 2.20 & 0.21 & 3.55 \\ & (iii) & Lin & 0.90 & 0.89 & 0.51 & \textcolor{navyblue}{\bf 3.10} & 2.83 & 0.88 & 0.97 & 0.97 & 0.60 & \textcolor{navyblue}{\bf 3.05} & 3.00 & 0.84 & 4.39 \\ & & Quad & 0.87 & 0.84 & 0.56 & \textcolor{navyblue}{\bf 3.20} & 2.90 & 1.08 & 0.98 & 0.96 & 0.63 & \textcolor{navyblue}{\bf 3.11} & 3.04 & 1.07 & 4.39 \\ \hline \multirow{6}{*}{(c)} & (i) & Lin & 0.62 & 0.61 & 0.67 & \textcolor{navyblue}{\bf 1.23} & 1.21 & 1.17 & 0.78 & 0.79 & 0.74 & 1.52 & \textcolor{navyblue}{\bf 1.58} & 1.45 & 9.52 \\ & & Quad & 0.61 & 0.54 & 0.60 & \textcolor{navyblue}{\bf 1.21} & 1.21 & 1.15 & 0.84 & 0.85 & 0.80 & 1.50 & \textcolor{navyblue}{\bf 1.56} & 1.41 & 9.52 \\ & (ii) & Lin & 0.70 & 0.66 & 0.56 & \textcolor{navyblue}{\bf 1.32} & 1.17 & 1.01 & 0.85 & 0.84 & 0.55 & \textcolor{navyblue}{\bf 1.58} & 1.52 & 0.96 & 8.71 \\ & & Quad & 0.79 & 0.75 & 0.83 & \textcolor{navyblue}{\bf 1.35} & 1.19 & 1.32 & 0.90 & 0.89 & 0.83 & 1.47 & 1.46 & \textcolor{navyblue}{\bf 1.49} & 8.71 \\ & (iii) & Lin & 0.57 & 0.58 & 0.53 & 0.92 & \textcolor{navyblue}{\bf 0.95} & 0.87 & 0.48 & 0.49 & 0.43 & 0.70 &\textcolor{navyblue}{\bf 0.72} & 0.61 & 9.42 \\ & & Quad & 0.78 & 0.74 & 0.83 & \textcolor{navyblue}{\bf 1.42} & 1.40 & 1.51 & 0.94 & 0.92 & 0.92 & 1.59 & \textcolor{navyblue}{\bf 1.60} & 1.55 & 9.42\\ \hline \multicolumn{16}{c}{ } \\\hline \multicolumn{3}{c||}{\multirow{2}{*}{$p=200,q=5$}} & \multicolumn{6}{c||}{$n=200$} & \multicolumn{6}{c||}{$n=500$} & \multirow{3}{*}{ORE} \\ \cline{4-15} \multicolumn{3}{c||}{} & \multicolumn{3}{c|}{Supervised} & \multicolumn{3}{c||}{\textbf{SS}}& \multicolumn{3}{c|}{Supervised} & \multicolumn{3}{c||}{\textbf{SS}}& \\ $m({\mathbf X})$ & $\pi({\mathbf X})$ & $\hat{\pi}({\mathbf X})$ & KS$_1$ & KS$_2$ & PR & KS$_1$ & 
KS$_2$ & PR & KS$_1$ & KS$_2$ & PR & KS$_1$ & KS$_2$ & PR & \\ \hline \multirow{6}{*}{(a)} & (i) & Lin & 0.72 & 0.22 & 0.46 & \textcolor{navyblue}{\bf 1.60} & 0.67 & 1.43 & 0.94 & 0.85 & 0.73 & \textcolor{navyblue}{\bf 1.88} & 1.62 & 1.73 & 2.68 \\ & & Quad & 0.70 & 0.20 & 0.43 & \textcolor{navyblue}{\bf 1.61} & 0.67 & 1.42 & 0.94 & 0.83 & 0.68 & \textcolor{navyblue}{\bf 1.89} & 1.62 & 1.72 & 2.68 \\ & (ii) & Lin & 0.87 & 0.45 & 0.70 & \textcolor{navyblue}{\bf 1.89} & 0.91 & 1.73 & 0.97 & 0.88 & 0.80 & \textcolor{navyblue}{\bf 2.15} & 2.00 & 2.05 & 2.89 \\ & & Quad & 0.86 & 0.44 & 0.69 & \textcolor{navyblue}{\bf 1.91} & 0.92 & 1.75 & 0.97 & 0.88 & 0.78 & \textcolor{navyblue}{\bf 2.15} & 1.99 & 2.07 & 2.89 \\ & (iii) & Lin & 0.82 & 0.34 & 0.57 & \textcolor{navyblue}{\bf 1.74} & 0.79 & 1.64 & 0.95 & 0.89 & 0.76 & \textcolor{navyblue}{\bf 2.35} & 2.06 & 2.17 & 3.00 \\ & & Quad & 0.80 & 0.32 & 0.55 & \textcolor{navyblue}{\bf 1.79} & 0.84 & 1.68 & 0.95 & 0.86 & 0.72 & \textcolor{navyblue}{\bf 2.45} & 2.13 & 2.19 & 3.00 \\ \hline \multirow{6}{*}{(b)} & (i) & Lin & 0.86 & 0.35 & 0.76 & \textcolor{navyblue}{\bf 1.60} & 0.94 & 1.06 & 0.95 & 0.95 & 0.65 & \textcolor{navyblue}{\bf 2.04} & 1.97 & 1.04 & 3.37 \\ & & Quad & 0.83 & 0.31 & 0.74 & \textcolor{navyblue}{\bf 1.61} & 0.93 & 1.08 & 0.95 & 0.95 & 0.65 & \textcolor{navyblue}{\bf 2.04} & 1.97 & 1.03 & 3.37 \\ & (ii) & Lin & 0.35 & 0.23 & 0.22 & \textcolor{navyblue}{\bf 0.44} & 0.40 & 0.35 & 0.55 & 0.35 & 0.14 & \textcolor{navyblue}{\bf 0.73} & 0.49 & 0.15 & 2.29 \\ & & Quad & 0.35 & 0.22 & 0.22 & \textcolor{navyblue}{\bf 0.45} & 0.42 & 0.37 & 0.54 & 0.34 & 0.14 & \textcolor{navyblue}{\bf 0.75} & 0.51 & 0.16 & 2.29 \\ & (iii) & Lin & 0.82 & 0.49 & 0.66 & \textcolor{navyblue}{\bf 0.99} & 0.72 & 0.68 & 0.88 & 0.85 & 0.68 & \textcolor{navyblue}{\bf 1.48} & 1.35 & 0.60 & 2.74 \\ & & Quad & 0.80 & 0.45 & 0.64 & \textcolor{navyblue}{\bf 1.13} & 0.78 & 0.80 & 0.90 & 0.86 & 0.71 & \textcolor{navyblue}{\bf 1.66} & 1.55 & 0.84 & 2.74 
\\ \hline \multirow{6}{*}{(c)} & (i) & Lin & 0.59 & 0.23 & 0.39 & \textcolor{navyblue}{\bf 1.00} & 0.65 & 0.93 & 0.75 & 0.71 & 0.72 & 1.16 & 1.10 & \textcolor{navyblue}{\bf 1.20} & 4.13 \\ & & Quad & 0.57 & 0.20 & 0.36 & \textcolor{navyblue}{\bf 1.00} & 0.64 & 0.92 & 0.76 & 0.70 & 0.71 & 1.17 & 1.10 & \textcolor{navyblue}{\bf 1.20} & 4.13 \\ & (ii) & Lin & 0.64 & 0.35 & 0.43 & \textcolor{navyblue}{\bf 0.99} & 0.63 & 0.90 & 0.74 & 0.64 & 0.38 & \textcolor{navyblue}{\bf 1.14} & 1.05 & 0.79 & 3.63 \\ & & Quad & 0.64 & 0.34 & 0.42 & \textcolor{navyblue}{\bf 1.02} & 0.64 & 0.94 & 0.74 & 0.64 & 0.37 & \textcolor{navyblue}{\bf 1.21} & 1.12 & 0.91 & 3.63 \\ & (iii) & Lin & 0.39 & 0.19 & 0.25 & \textcolor{navyblue}{\bf 0.68} & 0.47 & 0.60 & 0.38 & 0.32 & 0.26 & \textcolor{navyblue}{\bf 0.50} & 0.47 & 0.43 & 3.78 \\ & & Quad & 0.39 & 0.18 & 0.24 & \textcolor{navyblue}{\bf 0.95} & 0.59 & 0.82 & 0.40 & 0.33 & 0.26 & \textcolor{navyblue}{\bf 1.33} & 1.15 & 1.04 & 3.78 \\ \hline \multicolumn{16}{c}{ } \\ \hline \multicolumn{3}{c||}{\multirow{2}{*}{$p=200,q=\ceil{p^{1/2}}$}} & \multicolumn{6}{c||}{$n=200$} & \multicolumn{6}{c||}{$n=500$} & \multirow{3}{*}{ORE} \\ \cline{4-15} \multicolumn{3}{c||}{} & \multicolumn{3}{c|}{Supervised} & \multicolumn{3}{c||}{\textbf{SS}}& \multicolumn{3}{c|}{Supervised} & \multicolumn{3}{c||}{\textbf{SS}}& \\ $m({\mathbf X})$ & $\pi({\mathbf X})$ & $\hat{\pi}({\mathbf X})$ & KS$_1$ & KS$_2$ & PR & KS$_1$ & KS$_2$ & PR & KS$_1$ & KS$_2$ & PR & KS$_1$ & KS$_2$ & PR & \\ \hline \multirow{6}{*}{(a)} & (i) & Lin & 0.35 & 0.09 & 0.29 & \textcolor{navyblue}{\bf 1.38} & 0.46 & 1.20 & 0.83 & 0.60 & 0.60 & \textcolor{navyblue}{\bf 3.59} & 2.04 & 2.96 & 6.05 \\ & & Quad & 0.34 & 0.09 & 0.28 & \textcolor{navyblue}{\bf 1.36} & 0.43 & 1.17 & 0.81 & 0.55 & 0.55 & \textcolor{navyblue}{\bf 3.57} & 2.01 & 2.87 & 6.05 \\ & (ii) & Lin & 0.68 & 0.23 & 0.61 & \textcolor{navyblue}{\bf 1.74} & 0.51 & 1.64 & 0.97 & 0.73 & 0.80 & \textcolor{navyblue}{\bf 3.90} & 2.55 & 3.71 & 
6.65 \\ & & Quad & 0.67 & 0.23 & 0.60 & \textcolor{navyblue}{\bf 1.78} & 0.52 & 1.66 & 0.97 & 0.72 & 0.79 & \textcolor{navyblue}{\bf 3.91} & 2.51 & 3.72 & 6.65 \\ & (iii) & Lin & 0.62 & 0.14 & 0.49 & \textcolor{navyblue}{\bf 2.07} & 0.60 & 1.91 & 0.91 & 0.74 & 0.70 & \textcolor{navyblue}{\bf 3.77} & 2.65 & 3.54 & 6.99 \\ & & Quad & 0.60 & 0.13 & 0.48 & \textcolor{navyblue}{\bf 2.13} & 0.60 & 1.94 & 0.90 & 0.69 & 0.66 & \textcolor{navyblue}{\bf 3.80} & 2.67 & 3.50 & 6.99 \\ \hline \multirow{6}{*}{(b)} & (i) & Lin & 0.40 & 0.11 & 0.34 & \textcolor{navyblue}{\bf 1.29} & 0.55 & 1.16 & 0.91 & 0.77 & 0.89 & \textcolor{navyblue}{\bf 3.89} & 2.96 & 2.27 & 6.78 \\ & & Quad & 0.38 & 0.11 & 0.33 & \textcolor{navyblue}{\bf 1.29} & 0.52 & 1.16 & 0.88 & 0.70 & 0.89 & \textcolor{navyblue}{\bf 3.91} & 2.92 & 2.29 & 6.78 \\ & (ii) & Lin & 0.31 & 0.18 & 0.24 & \textcolor{navyblue}{\bf 0.68} & 0.44 & 0.56 & 0.60 & 0.53 & 0.21 & \textcolor{navyblue}{\bf 1.55} & 1.43 & 0.34 & 4.97 \\ & & Quad & 0.31 & 0.17 & 0.23 & \textcolor{navyblue}{\bf 0.65} & 0.42 & 0.54 & 0.59 & 0.52 & 0.21 & \textcolor{navyblue}{\bf 1.52} & 1.39 & 0.34 & 4.97 \\ & (iii) & Lin & 0.63 & 0.18 & 0.54 & \textcolor{navyblue}{\bf 1.64} & 0.75 & 1.33 & 0.96 & 0.82 & 0.93 & \textcolor{navyblue}{\bf 3.43} & 2.71 & 2.09 & 6.14 \\ & & Quad & 0.61 & 0.17 & 0.53 & \textcolor{navyblue}{\bf 1.68} & 0.77 & 1.36 & 0.94 & 0.78 & 0.93 & \textcolor{navyblue}{\bf 3.45} & 2.72 & 2.15 & 6.14 \\ \hline \multirow{6}{*}{(c)} & (i) & Lin & 0.16 & 0.10 & 0.13 & \textcolor{navyblue}{\bf 0.56} & 0.41 & 0.52 & 0.61 & 0.36 & 0.38 & \textcolor{navyblue}{\bf 1.27} & 0.93 & 1.15 & 17.23 \\ & & Quad & 0.16 & 0.09 & 0.12 & \textcolor{navyblue}{\bf 0.56} & 0.39 & 0.51 & 0.59 & 0.32 & 0.34 & \textcolor{navyblue}{\bf 1.26} & 0.91 & 1.13 & 17.23 \\ & (ii) & Lin & 0.31 & 0.22 & 0.26 & 0.65 & 0.49 & \textcolor{navyblue}{\bf 0.67} & 0.63 & 0.48 & 0.36 & \textcolor{navyblue}{\bf 1.23} & 1.07 & 1.06 & 16.30 \\ & & Quad & 0.30 & 0.22 & 0.25 & 0.65 & 0.48 & 
\textcolor{navyblue}{\bf 0.65} & 0.63 & 0.49 & 0.35 & \textcolor{navyblue}{\bf 1.24} & 1.07 & 1.05 & 16.30 \\ & (iii) & Lin & 0.16 & 0.10 & 0.13 & \textcolor{navyblue}{\bf 0.54} & 0.40 & 0.48 & 0.39 & 0.26 & 0.22 & \textcolor{navyblue}{\bf 0.72} & 0.59 & 0.59 & 17.82 \\ & & Quad & 0.16 & 0.10 & 0.12 & \textcolor{navyblue}{\bf 0.68} & 0.52 & 0.53 & 0.38 & 0.24 & 0.21 & \textcolor{navyblue}{\bf 1.27} & 0.94 & 0.96 & 17.82 \\ \hline \end{tabular} }} \label{table_ate_efficiency} \end{table} \begin{table} \def~{\hphantom{0}} \caption{\textcolor{black}{Efficiencies of QTE estimators.} We consider the same scenario\textcolor{black}{(s)} as \textcolor{black}{in} Table \ref{table_ate_efficiency}, but now the estimand is the QTE.} { \resizebox{\textwidth}{!}{ \begin{tabular}{ccc||ccc|ccc||ccc|ccc||c} \hline \multicolumn{3}{c||}{\multirow{2}{*}{$p=10$}} & \multicolumn{6}{c||}{$n=200$} & \multicolumn{6}{c||}{$n=500$} & \multirow{3}{*}{ORE} \\ \cline{4-15} & & & \multicolumn{3}{c|}{Supervised} & \multicolumn{3}{c||}{\textbf{SS}}& \multicolumn{3}{c|}{Supervised} & \multicolumn{3}{c||}{\textbf{SS}}& \\ $m({\mathbf X})$ & $\pi({\mathbf X})$ & $\hat{\pi}({\mathbf X})$ & KS$_1$ & KS$_2$ & PR & KS$_1$ & KS$_2$ & PR & KS$_1$ & KS$_2$ & PR & KS$_1$ & KS$_2$ & PR & \\ \hline \multirow{6}{*}{(a)} & (i) & Lin & 0.96 & 0.90 & 0.79 & \textcolor{navyblue}{\bf 1.98} & 1.88 & 1.34 & 0.99 & 0.98 & 0.93 & 1.85 & 1.80 & \textcolor{navyblue}{\bf 1.90} & 2.24 \\ & & Quad & 0.74 & 0.69 & 0.65 & \textcolor{navyblue}{\bf 2.05} & 1.93 & 1.36 & 0.99 & 0.98 & 0.91 & 1.86 & 1.82 & \textcolor{navyblue}{\bf 1.89} & 2.24 \\ & (ii) & Lin & 0.86 & 0.85 & 0.82 & \textcolor{navyblue}{\bf 1.56} & 1.44 & 0.98 & 0.99 & 0.97 & 0.97 & 1.55 & 1.51 & \textcolor{navyblue}{\bf 1.59} & 2.12 \\ & & Quad & 0.79 & 0.77 & 0.73 & \textcolor{navyblue}{\bf 1.56} & 1.48 & 1.00 & 0.99 & 0.97 & 0.95 & 1.57 & 1.50 & \textcolor{navyblue}{\bf 1.61} & 2.12 \\ & (iii) & Lin & 0.94 & 0.90 & 0.93 & 1.77 & 1.61 & \textcolor{navyblue}{\bf 
1.96} & 1.01 & 1.01 & 1.02 & \textcolor{navyblue}{\bf 2.26} & 2.24 & 2.18 & 2.42 \\ & & Quad & 0.88 & 0.80 & 0.93 & 1.85 & 1.69 & \textcolor{navyblue}{\bf 1.89} & 0.96 & 0.97 & 0.99 & \textcolor{navyblue}{\bf 2.29} & 2.27 & 2.15 & 2.42 \\ \hline \multirow{6}{*}{(b)} & (i) & Lin & 0.93 & 0.90 & 0.85 & \textcolor{navyblue}{\bf 1.82} & 1.70 & 1.42 & 0.95 & 0.93 & 0.92 & 1.78 & 1.73 & \textcolor{navyblue}{\bf 1.84} & 2.13 \\ & & Quad & 0.77 & 0.74 & 0.72 & \textcolor{navyblue}{\bf 1.86} & 1.73 & 1.45 & 0.96 & 0.95 & 0.91 & 1.78 & 1.72 & \textcolor{navyblue}{\bf 1.81} & 2.13 \\ & (ii) & Lin & 0.78 & 0.73 & 0.80 & \textcolor{navyblue}{\bf 1.22} & 1.10 & 1.08 & 0.82 & 0.75 & 0.78 & \textcolor{navyblue}{\bf 1.38} & 1.19 & 1.19 & 1.92 \\ & & Quad & 0.66 & 0.65 & 0.74 & \textcolor{navyblue}{\bf 1.28} & 1.15 & 1.11 & 0.84 & 0.78 & 0.80 & \textcolor{navyblue}{\bf 1.44} & 1.26 & 1.24 & 1.92 \\ & (iii) & Lin & 0.90 & 0.88 & 0.89 & 1.57 & 1.45 & \textcolor{navyblue}{\bf 1.79} & 0.93 & 0.93 & 0.95 & 1.82 & 1.84 & \textcolor{navyblue}{\bf 1.92} & 2.16 \\ & & Quad & 0.85 & 0.83 & 0.90 & 1.74 & 1.60 & \textcolor{navyblue}{\bf 1.89} & 0.92 & 0.91 & 0.96 & 1.89 & 1.93 & \textcolor{navyblue}{\bf 1.97} & 2.16 \\ \hline \multirow{6}{*}{(c)} & (i) & Lin & 0.71 & 0.70 & 0.69 & \textcolor{navyblue}{\bf 1.12} & 1.06 & 1.02 & 0.77 & 0.77 & 0.83 & 1.22 & 1.19 & \textcolor{navyblue}{\bf 1.33} & 2.35 \\ & & Quad & 0.69 & 0.69 & 0.60 & \textcolor{navyblue}{\bf 1.11} & 1.05 & 1.01 & 0.83 & 0.83 & 0.87 & 1.18 & 1.15 & \textcolor{navyblue}{\bf 1.26} & 2.35 \\ & (ii) & Lin & 0.70 & 0.70 & 0.66 & \textcolor{navyblue}{\bf 0.99} & 0.93 & 0.87 & 0.74 & 0.74 & 0.78 & 1.00 & 1.02 & \textcolor{navyblue}{\bf 1.02} & 2.25 \\ & & Quad & 0.82 & 0.79 & 0.74 & \textcolor{navyblue}{\bf 1.08} & 1.02 & 0.94 & 0.84 & 0.84 & 0.87 & 1.16 & \textcolor{navyblue}{\bf 1.19} & 1.09 & 2.25 \\ & (iii) & Lin & 0.61 & 0.63 & 0.65 & 0.82 & 0.80 & \textcolor{navyblue}{\bf 0.96} & 0.58 & 0.58 & 0.63 & 0.77 & 0.77 & 
\textcolor{navyblue}{\bf 0.88} & 2.55 \\ & & Quad & 0.86 & 0.85 & 0.86 & 1.16 & 1.12 & \textcolor{navyblue}{\bf 1.25} & 0.95 & 0.93 & 0.92 & \textcolor{navyblue}{\bf 1.28} & 1.25 & 1.26 & 2.55 \\\hline \multicolumn{16}{c}{ } \\ \hline \multicolumn{3}{c||}{\multirow{2}{*}{$p=200,q=5$}} & \multicolumn{6}{c||}{$n=200$} & \multicolumn{6}{c||}{$n=500$} & \multirow{3}{*}{ORE} \\ \cline{4-15} \multicolumn{3}{c||}{} & \multicolumn{3}{c|}{Supervised} & \multicolumn{3}{c||}{\textbf{SS}}& \multicolumn{3}{c|}{Supervised} & \multicolumn{3}{c||}{\textbf{SS}}& \\ $m({\mathbf X})$ & $\pi({\mathbf X})$ & $\hat{\pi}({\mathbf X})$ & KS$_1$ & KS$_2$ & PR & KS$_1$ & KS$_2$ & PR & KS$_1$ & KS$_2$ & PR & KS$_1$ & KS$_2$ & PR & \\ \hline \multirow{6}{*}{(a)} & (i) & Lin & 0.73 & 0.39 & 0.35 & \textcolor{navyblue}{\bf 1.29} & 0.72 & 0.81 & 0.92 & 0.93 & 0.71 & \textcolor{navyblue}{\bf 1.45} & 1.40 & 1.22 & 1.78 \\ & & Quad & 0.71 & 0.36 & 0.32 & \textcolor{navyblue}{\bf 1.28} & 0.70 & 0.80 & 0.90 & 0.91 & 0.69 & \textcolor{navyblue}{\bf 1.45} & 1.40 & 1.21 & 1.78 \\ & (ii) & Lin & 0.88 & 0.44 & 0.35 & \textcolor{navyblue}{\bf 1.03} & 0.67 & 0.70 & 0.96 & 0.92 & 0.60 & \textcolor{navyblue}{\bf 1.45} & 1.35 & 1.05 & 1.69 \\ & & Quad & 0.87 & 0.44 & 0.35 & \textcolor{navyblue}{\bf 1.04} & 0.69 & 0.69 & 0.95 & 0.91 & 0.57 & \textcolor{navyblue}{\bf 1.46} & 1.37 & 1.07 & 1.69 \\ & (iii) & Lin & 0.91 & 0.47 & 0.43 & \textcolor{navyblue}{\bf 1.31} & 0.81 & 0.96 & 0.94 & 0.94 & 0.72 & \textcolor{navyblue}{\bf 1.57} & 1.55 & 1.33 & 1.86 \\ & & Quad & 0.88 & 0.43 & 0.39 & \textcolor{navyblue}{\bf 1.41} & 0.83 & 1.00 & 0.96 & 0.95 & 0.71 & \textcolor{navyblue}{\bf 1.61} & 1.59 & 1.36 & 1.86 \\ \hline \multirow{6}{*}{(b)} & (i) & Lin & 0.59 & 0.38 & 0.42 & \textcolor{navyblue}{\bf 1.05} & 0.73 & 0.79 & 0.89 & 0.90 & 0.96 & \textcolor{navyblue}{\bf 1.29} & 1.24 & 1.17 & 1.50 \\ & & Quad & 0.55 & 0.36 & 0.39 & \textcolor{navyblue}{\bf 1.06} & 0.73 & 0.78 & 0.81 & 0.80 & 0.91 & \textcolor{navyblue}{\bf 
1.30} & 1.26 & 1.19 & 1.50 \\ & (ii) & Lin & 0.38 & 0.21 & 0.20 & \textcolor{navyblue}{\bf 0.41} & 0.33 & 0.35 & 0.77 & 0.70 & 0.22 & \textcolor{navyblue}{\bf 0.81} & 0.67 & 0.25 & 1.45 \\ & & Quad & 0.38 & 0.21 & 0.20 & \textcolor{navyblue}{\bf 0.43} & 0.34 & 0.35 & 0.75 & 0.68 & 0.21 & \textcolor{navyblue}{\bf 0.81} & 0.69 & 0.26 & 1.45 \\ & (iii) & Lin & 0.69 & 0.45 & 0.41 & \textcolor{navyblue}{\bf 0.76} & 0.64 & 0.67 & 0.95 & 0.93 & 0.88 & \textcolor{navyblue}{\bf 1.08} & 1.04 & 0.82 & 1.50 \\ & & Quad & 0.67 & 0.40 & 0.38 & \textcolor{navyblue}{\bf 0.83} & 0.69 & 0.74 & 0.90 & 0.89 & 0.87 & \textcolor{navyblue}{\bf 1.14} & 1.11 & 0.95 & 1.50 \\ \hline \multirow{6}{*}{(c)} & (i) & Lin & 0.67 & 0.35 & 0.30 & \textcolor{navyblue}{\bf 0.91} & 0.66 & 0.72 & 0.81 & 0.77 & 0.56 & \textcolor{navyblue}{\bf 1.09} & 1.05 & 0.91 & 1.81 \\ & & Quad & 0.63 & 0.33 & 0.28 & \textcolor{navyblue}{\bf 0.91} & 0.67 & 0.71 & 0.81 & 0.77 & 0.55 & \textcolor{navyblue}{\bf 1.08} & 1.03 & 0.87 & 1.81 \\ & (ii) & Lin & 0.66 & 0.34 & 0.30 & \textcolor{navyblue}{\bf 0.77} & 0.51 & 0.61 & 0.77 & 0.75 & 0.44 & 1.03 & \textcolor{navyblue}{\bf 1.03} & 0.75 & 1.74 \\ & & Quad & 0.67 & 0.34 & 0.30 & \textcolor{navyblue}{\bf 0.79} & 0.52 & 0.62 & 0.75 & 0.73 & 0.42 & 1.08 & \textcolor{navyblue}{\bf 1.09} & 0.82 & 1.74 \\ & (iii) & Lin & 0.55 & 0.24 & 0.22 & \textcolor{navyblue}{\bf 0.62} & 0.46 & 0.52 & 0.51 & 0.50 & 0.29 & \textcolor{navyblue}{\bf 0.59} & 0.57 & 0.49 & 1.91 \\ & & Quad & 0.54 & 0.23 & 0.21 & \textcolor{navyblue}{\bf 0.86} & 0.55 & 0.68 & 0.55 & 0.53 & 0.29 & \textcolor{navyblue}{\bf 0.97} & 0.93 & 0.80 & 1.91 \\ \hline \multicolumn{16}{c}{ } \\ \hline \multicolumn{3}{c||}{\multirow{2}{*}{$p=200,q=\ceil{p^{1/2}}$}} & \multicolumn{6}{c||}{$n=200$} & \multicolumn{6}{c||}{$n=500$} & \multirow{3}{*}{ORE} \\ \cline{4-15} \multicolumn{3}{c||}{} & \multicolumn{3}{c|}{Supervised} & \multicolumn{3}{c||}{\textbf{SS}}& \multicolumn{3}{c|}{Supervised} & \multicolumn{3}{c||}{\textbf{SS}}& 
\\ $m({\mathbf X})$ & $\pi({\mathbf X})$ & $\hat{\pi}({\mathbf X})$ & KS$_1$ & KS$_2$ & PR & KS$_1$ & KS$_2$ & PR & KS$_1$ & KS$_2$ & PR & KS$_1$ & KS$_2$ & PR & \\ \hline \multirow{6}{*}{(a)} & (i) & Lin & 0.53 & 0.14 & 0.09 & \textcolor{navyblue}{\bf 0.89} & 0.44 & 0.43 & 0.85 & 0.80 & 0.45 & \textcolor{navyblue}{\bf 2.06} & 1.74 & 1.16 & 2.62 \\ & & Quad & 0.53 & 0.14 & 0.09 & \textcolor{navyblue}{\bf 0.92} & 0.42 & 0.42 & 0.80 & 0.73 & 0.37 & \textcolor{navyblue}{\bf 2.05} & 1.73 & 1.12 & 2.62 \\ & (ii) & Lin & 0.68 & 0.21 & 0.15 & \textcolor{navyblue}{\bf 0.99} & 0.40 & 0.41 & 0.79 & 0.71 & 0.33 & \textcolor{navyblue}{\bf 1.63} & 1.40 & 0.79 & 2.45 \\ & & Quad & 0.67 & 0.21 & 0.15 & \textcolor{navyblue}{\bf 1.01} & 0.39 & 0.39 & 0.80 & 0.71 & 0.32 & \textcolor{navyblue}{\bf 1.66} & 1.43 & 0.75 & 2.45 \\ & (iii) & Lin & 0.77 & 0.21 & 0.14 & \textcolor{navyblue}{\bf 1.42} & 0.58 & 0.62 & 0.85 & 0.80 & 0.50 & \textcolor{navyblue}{\bf 2.21} & 1.69 & 1.31 & 2.87 \\ & & Quad & 0.76 & 0.20 & 0.14 & \textcolor{navyblue}{\bf 1.40} & 0.58 & 0.61 & 0.81 & 0.74 & 0.43 & \textcolor{navyblue}{\bf 2.14} & 1.68 & 1.32 & 2.87 \\ \hline \multirow{6}{*}{(b)} & (i) & Lin & 0.46 & 0.12 & 0.08 & \textcolor{navyblue}{\bf 0.73} & 0.43 & 0.42 & 0.76 & 0.77 & 0.48 & \textcolor{navyblue}{\bf 1.85} & 1.62 & 1.10 & 2.59 \\ & & Quad & 0.45 & 0.12 & 0.08 & \textcolor{navyblue}{\bf 0.73} & 0.41 & 0.39 & 0.70 & 0.70 & 0.40 & \textcolor{navyblue}{\bf 1.82} & 1.61 & 1.07 & 2.59 \\ & (ii) & Lin & 0.38 & 0.18 & 0.13 & \textcolor{navyblue}{\bf 0.56} & 0.38 & 0.40 & 0.67 & 0.63 & 0.33 & \textcolor{navyblue}{\bf 1.21} & 1.16 & 0.72 & 2.29 \\ & & Quad & 0.37 & 0.17 & 0.13 & \textcolor{navyblue}{\bf 0.56} & 0.35 & 0.37 & 0.69 & 0.64 & 0.32 & \textcolor{navyblue}{\bf 1.15} & 1.14 & 0.70 & 2.29 \\ & (iii) & Lin & 0.68 & 0.19 & 0.13 & \textcolor{navyblue}{\bf 0.97} & 0.62 & 0.61 & 0.82 & 0.74 & 0.50 & \textcolor{navyblue}{\bf 2.06} & 1.66 & 1.37 & 2.73 \\ & & Quad & 0.66 & 0.18 & 0.12 & 
\textcolor{navyblue}{\bf 0.98} & 0.63 & 0.61 & 0.80 & 0.72 & 0.46 & \textcolor{navyblue}{\bf 1.99} & 1.60 & 1.35 & 2.73 \\ \hline \multirow{6}{*}{(c)} & (i) & Lin & 0.27 & 0.13 & 0.10 & \textcolor{navyblue}{\bf 0.55} & 0.42 & 0.45 & 0.72 & 0.67 & 0.27 & \textcolor{navyblue}{\bf 1.11} & 0.97 & 0.73 & 2.72 \\ & & Quad & 0.27 & 0.13 & 0.09 & \textcolor{navyblue}{\bf 0.53} & 0.41 & 0.43 & 0.67 & 0.61 & 0.23 & \textcolor{navyblue}{\bf 1.09} & 0.95 & 0.69 & 2.72 \\ & (ii) & Lin & 0.37 & 0.22 & 0.17 & \textcolor{navyblue}{\bf 0.54} & 0.42 & 0.47 & 0.67 & 0.57 & 0.21 & \textcolor{navyblue}{\bf 0.94} & 0.80 & 0.51 & 2.58 \\ & & Quad & 0.37 & 0.22 & 0.17 & \textcolor{navyblue}{\bf 0.54} & 0.41 & 0.46 & 0.67 & 0.56 & 0.21 & \textcolor{navyblue}{\bf 0.94} & 0.81 & 0.49 & 2.58 \\ & (iii) & Lin & 0.26 & 0.14 & 0.12 & \textcolor{navyblue}{\bf 0.56} & 0.42 & 0.45 & 0.62 & 0.49 & 0.23 & \textcolor{navyblue}{\bf 0.87} & 0.75 & 0.60 & 3.04 \\ & & Quad & 0.26 & 0.14 & 0.11 & \textcolor{navyblue}{\bf 0.59} & 0.46 & 0.47 & 0.59 & 0.46 & 0.21 & \textcolor{navyblue}{\bf 1.06} & 0.89 & 0.71 & 3.04 \\ \hline \end{tabular} }} \label{table_qte_efficiency} \end{table} \begin{table} \def~{\hphantom{0}} \caption{Inference based on the SS estimators \underline{\textcolor{black}{using} kernel smoothing on the direction selected by linear regression \textcolor{black}{(KS$_1$)}} \textcolor{black}{as the choice of the working outcome model, for the ATE and the QTE,} when $n=500$. Here\textcolor{black}{,} ESE is the empirical standard error, \textcolor{black}{Bias is the empirical bias,} ASE \textcolor{black}{is} the average of the estimated standard errors\textcolor{black}{,} and CR \textcolor{black}{is} the \textcolor{black}{empirical} coverage rate of the 95\% confidence intervals. \textcolor{black}{All o}ther notations are the same as in Table \ref{table_ate_efficiency}. 
The \textbf{{\color{navyblue} blue}} color \textcolor{black}{highlights settings where} the propensity scores and the outcome models are \textcolor{black}{both} correctly specified, while the \textbf{boldfaces} \textcolor{black}{indicate ones where} the propensity scores are correctly specified but the outcome models are not.}{ \resizebox{\textwidth}{!}{ \begin{tabular}{ccc|cccc|cccc|cccc} \hline \multicolumn{3}{c|}{ATE} & \multicolumn{4}{c|}{$p=10$} & \multicolumn{4}{c|}{$p=200,q=5$} & \multicolumn{4}{c}{$p=200,q=\ceil{p^{1/2}}$} \\ $m({\mathbf X})$ & $\pi({\mathbf X})$ & $\hat{\pi}({\mathbf X})$ & ESE & Bias & ASE & CR & ESE & Bias & ASE & CR & ESE & Bias & ASE & CR \\ \hline & (i) & {\color{navyblue} \textbf{Lin}} & {\color{navyblue} \textbf{0.08}} & {\color{navyblue} \textbf{0.00}} & {\color{navyblue} \textbf{0.08}} & {\color{navyblue} \textbf{0.93}} & {\color{navyblue} \textbf{0.08}} & {\color{navyblue} \textbf{0.01}} & {\color{navyblue} \textbf{0.08}} & {\color{navyblue} \textbf{0.93}} & {\color{navyblue} \textbf{0.09}} & {\color{navyblue} \textbf{0.01}} & {\color{navyblue} \textbf{0.09}} & {\color{navyblue} \textbf{0.93}} \\ & & {\color{navyblue} \textbf{Quad}} & {\color{navyblue} \textbf{0.08}} & {\color{navyblue} \textbf{0.00}} & {\color{navyblue} \textbf{0.08}} & {\color{navyblue} \textbf{0.93}} & {\color{navyblue} \textbf{0.08}} & {\color{navyblue} \textbf{0.01}} & {\color{navyblue} \textbf{0.07}} & {\color{navyblue} \textbf{0.95}} & {\color{navyblue} \textbf{0.09}} & {\color{navyblue} \textbf{0.02}} & {\color{navyblue} \textbf{0.09}} & {\color{navyblue} \textbf{0.93}} \\ & (ii) & Lin & 0.07 & 0.00 & 0.08 & 0.95 & 0.07 & 0.00 & 0.07 & 0.97 & 0.08 & 0.00 & 0.08 & 0.95 \\ & & Quad & 0.07 & 0.00 & 0.07 & 0.96 & 0.07 & 0.00 & 0.07 & 0.96 & 0.08 & 0.00 & 0.08 & 0.95 \\ & (iii) & Lin & 0.08 & 0.00 & 0.08 & 0.93 & 0.07 & 0.01 & 0.07 & 0.94 & 0.08 & 0.01 & 0.08 & 0.94 \\ \multirow{-6}{*}{(a)} & & {\color{navyblue} \textbf{Quad}} & {\color{navyblue} 
\textbf{0.08}} & {\color{navyblue} \textbf{0.00}} & {\color{navyblue} \textbf{0.07}} & {\color{navyblue} \textbf{0.93}} & {\color{navyblue} \textbf{0.07}} & {\color{navyblue} \textbf{0.01}} & {\color{navyblue} \textbf{0.07}} & {\color{navyblue} \textbf{0.94}} & {\color{navyblue} \textbf{0.08}} & {\color{navyblue} \textbf{0.01}} & {\color{navyblue} \textbf{0.08}} & {\color{navyblue} \textbf{0.94}} \\ \hline & (i) & {\color{navyblue} \textbf{Lin}} & {\color{navyblue} \textbf{0.08}} & {\color{navyblue} \textbf{0.00}} & {\color{navyblue} \textbf{0.08}} & {\color{navyblue} \textbf{0.93}} & {\color{navyblue} \textbf{0.08}} & {\color{navyblue} \textbf{0.00}} & {\color{navyblue} \textbf{0.08}} & {\color{navyblue} \textbf{0.95}} & {\color{navyblue} \textbf{0.09}} & {\color{navyblue} \textbf{0.00}} & {\color{navyblue} \textbf{0.09}} & {\color{navyblue} \textbf{0.94}} \\ & & {\color{navyblue} \textbf{Quad}} & {\color{navyblue} \textbf{0.08}} & {\color{navyblue} \textbf{0.00}} & {\color{navyblue} \textbf{0.08}} & {\color{navyblue} \textbf{0.94}} & {\color{navyblue} \textbf{0.08}} & {\color{navyblue} \textbf{0.00}} & {\color{navyblue} \textbf{0.08}} & {\color{navyblue} \textbf{0.94}} & {\color{navyblue} \textbf{0.09}} & {\color{navyblue} \textbf{0.01}} & {\color{navyblue} \textbf{0.09}} & {\color{navyblue} \textbf{0.94}} \\ & (ii) & Lin & 0.07 & 0.02 & 0.08 & 0.94 & 0.08 & 0.06 & 0.08 & 0.87 & 0.09 & 0.07 & 0.09 & 0.90 \\ & & Quad & 0.07 & 0.02 & 0.07 & 0.95 & 0.08 & 0.06 & 0.08 & 0.87 & 0.09 & 0.07 & 0.09 & 0.89 \\ & (iii) & Lin & 0.08 & 0.00 & 0.07 & 0.93 & 0.08 & 0.01 & 0.08 & 0.96 & 0.08 & 0.01 & 0.08 & 0.95 \\ \multirow{-6}{*}{(b)} & & {\color{navyblue} \textbf{Quad}} & {\color{navyblue} \textbf{0.08}} & {\color{navyblue} \textbf{0.00}} & {\color{navyblue} \textbf{0.07}} & {\color{navyblue} \textbf{0.93}} & {\color{navyblue} \textbf{0.08}} & {\color{navyblue} \textbf{0.00}} & {\color{navyblue} \textbf{0.07}} & {\color{navyblue} \textbf{0.96}} & {\color{navyblue} 
\textbf{0.08}} & {\color{navyblue} \textbf{0.00}} & {\color{navyblue} \textbf{0.08}} & {\color{navyblue} \textbf{0.95}} \\ \hline & (i) & \textbf{Lin} & \textbf{0.13} & \textbf{0.00} & \textbf{0.13} & \textbf{0.96} & \textbf{0.11} & \textbf{0.01} & \textbf{0.10} & \textbf{0.92} & \textbf{0.17} & \textbf{0.02} & \textbf{0.16} & \textbf{0.93} \\ & & \textbf{Quad} & \textbf{0.13} & \textbf{0.00} & \textbf{0.13} & \textbf{0.95} & \textbf{0.11} & \textbf{0.01} & \textbf{0.10} & \textbf{0.92} & \textbf{0.17} & \textbf{0.03} & \textbf{0.16} & \textbf{0.92} \\ & (ii) & Lin & 0.11 & 0.01 & 0.12 & 0.97 & 0.09 & 0.02 & 0.09 & 0.95 & 0.15 & 0.04 & 0.15 & 0.94 \\ & & Quad & 0.11 & -0.04 & 0.12 & 0.96 & 0.09 & 0.01 & 0.09 & 0.96 & 0.15 & 0.04 & 0.15 & 0.94 \\ & (iii) & Lin & 0.12 & 0.13 & 0.12 & 0.83 & 0.09 & 0.11 & 0.09 & 0.78 & 0.15 & 0.15 & 0.15 & 0.83 \\ \multirow{-6}{*}{(c)} & & \textbf{Quad} & \textbf{0.12} & \textbf{0.01} & \textbf{0.12} & \textbf{0.95} & \textbf{0.09} & \textbf{-0.01} & \textbf{0.10} & \textbf{0.97} & \textbf{0.16} & \textbf{-0.02} & \textbf{0.17} & \textbf{0.96} \\ \hline \multicolumn{15}{c}{ } \\ \hline \multicolumn{3}{c|}{QTE}& \multicolumn{4}{c|}{$p=10$} & \multicolumn{4}{c|}{$p=200,q=5$} & \multicolumn{4}{c}{$p=200,q=\ceil{p^{1/2}}$} \\ $m({\mathbf X})$ & $\pi({\mathbf X})$ & $\hat{\pi}({\mathbf X})$ & ESE & Bias & ASE & CR & ESE & Bias & ASE & CR & ESE & Bias & ASE & CR \\ \hline & (i) & {\color{navyblue} \textbf{Lin}} & {\color{navyblue} \textbf{0.15}} & {\color{navyblue} \textbf{0.04}} & {\color{navyblue} \textbf{0.15}} & {\color{navyblue} \textbf{0.92}} & {\color{navyblue} \textbf{0.13}} & {\color{navyblue} \textbf{0.01}} & {\color{navyblue} \textbf{0.13}} & {\color{navyblue} \textbf{0.95}} & {\color{navyblue} \textbf{0.17}} & {\color{navyblue} \textbf{-0.01}} & {\color{navyblue} \textbf{0.17}} & {\color{navyblue} \textbf{0.94}} \\ & & {\color{navyblue} \textbf{Quad}} & {\color{navyblue} \textbf{0.15}} & {\color{navyblue} \textbf{0.04}} & 
{\color{navyblue} \textbf{0.15}} & {\color{navyblue} \textbf{0.93}} & {\color{navyblue} \textbf{0.13}} & {\color{navyblue} \textbf{0.01}} & {\color{navyblue} \textbf{0.13}} & {\color{navyblue} \textbf{0.95}} & {\color{navyblue} \textbf{0.17}} & {\color{navyblue} \textbf{-0.01}} & {\color{navyblue} \textbf{0.17}} & {\color{navyblue} \textbf{0.94}} \\ & (ii) & Lin & 0.15 & 0.04 & 0.14 & 0.91 & 0.13 & 0.01 & 0.12 & 0.94 & 0.18 & -0.01 & 0.16 & 0.92 \\ & & Quad & 0.15 & 0.04 & 0.14 & 0.91 & 0.13 & 0.01 & 0.12 & 0.94 & 0.18 & -0.01 & 0.16 & 0.93 \\ & (iii) & Lin & 0.13 & 0.02 & 0.13 & 0.94 & 0.11 & 0.01 & 0.12 & 0.96 & 0.15 & 0.01 & 0.15 & 0.95 \\ \multirow{-6}{*}{(a)} & & {\color{navyblue} \textbf{Quad}} & {\color{navyblue} \textbf{0.13}} & {\color{navyblue} \textbf{0.02}} & {\color{navyblue} \textbf{0.13}} & {\color{navyblue} \textbf{0.94}} & {\color{navyblue} \textbf{0.11}} & {\color{navyblue} \textbf{0.01}} & {\color{navyblue} \textbf{0.12}} & {\color{navyblue} \textbf{0.96}} & {\color{navyblue} \textbf{0.15}} & {\color{navyblue} \textbf{0.01}} & {\color{navyblue} \textbf{0.15}} & {\color{navyblue} \textbf{0.95}} \\ \hline & (i) & {\color{navyblue} \textbf{Lin}} & {\color{navyblue} \textbf{0.15}} & {\color{navyblue} \textbf{0.02}} & {\color{navyblue} \textbf{0.14}} & {\color{navyblue} \textbf{0.92}} & {\color{navyblue} \textbf{0.13}} & {\color{navyblue} \textbf{0.01}} & {\color{navyblue} \textbf{0.13}} & {\color{navyblue} \textbf{0.95}} & {\color{navyblue} \textbf{0.18}} & {\color{navyblue} \textbf{0.00}} & {\color{navyblue} \textbf{0.17}} & {\color{navyblue} \textbf{0.93}} \\ & & {\color{navyblue} \textbf{Quad}} & {\color{navyblue} \textbf{0.15}} & {\color{navyblue} \textbf{0.02}} & {\color{navyblue} \textbf{0.14}} & {\color{navyblue} \textbf{0.93}} & {\color{navyblue} \textbf{0.13}} & {\color{navyblue} \textbf{0.01}} & {\color{navyblue} \textbf{0.13}} & {\color{navyblue} \textbf{0.95}} & {\color{navyblue} \textbf{0.18}} & {\color{navyblue} \textbf{0.00}} & 
{\color{navyblue} \textbf{0.17}} & {\color{navyblue} \textbf{0.94}} \\ & (ii) & Lin & 0.14 & 0.05 & 0.14 & 0.94 & 0.12 & 0.07 & 0.12 & 0.94 & 0.19 & 0.05 & 0.17 & 0.92 \\ & & Quad & 0.14 & 0.05 & 0.14 & 0.95 & 0.12 & 0.07 & 0.12 & 0.93 & 0.19 & 0.04 & 0.17 & 0.92 \\ & (iii) & Lin & 0.13 & 0.02 & 0.13 & 0.95 & 0.12 & 0.02 & 0.12 & 0.94 & 0.15 & 0.00 & 0.15 & 0.95 \\ \multirow{-6}{*}{(b)} & & {\color{navyblue} \textbf{Quad}} & {\color{navyblue} \textbf{0.13}} & {\color{navyblue} \textbf{0.02}} & {\color{navyblue} \textbf{0.13}} & {\color{navyblue} \textbf{0.95}} & {\color{navyblue} \textbf{0.12}} & {\color{navyblue} \textbf{0.01}} & {\color{navyblue} \textbf{0.12}} & {\color{navyblue} \textbf{0.95}} & {\color{navyblue} \textbf{0.15}} & {\color{navyblue} \textbf{0.00}} & {\color{navyblue} \textbf{0.15}} & {\color{navyblue} \textbf{0.95}} \\ \hline & (i) & \textbf{Lin} & \textbf{0.19} & \textbf{0.01} & \textbf{0.21} & \textbf{0.96} & \textbf{0.16} & \textbf{0.02} & \textbf{0.16} & \textbf{0.97} & \textbf{0.26} & \textbf{0.00} & \textbf{0.27} & \textbf{0.95} \\ & & \textbf{Quad} & \textbf{0.20} & \textbf{0.01} & \textbf{0.21} & \textbf{0.95} & \textbf{0.16} & \textbf{0.03} & \textbf{0.16} & \textbf{0.97} & \textbf{0.26} & \textbf{0.00} & \textbf{0.27} & \textbf{0.95} \\ & (ii) & Lin & 0.20 & 0.07 & 0.19 & 0.92 & 0.14 & 0.04 & 0.15 & 0.94 & 0.24 & 0.05 & 0.24 & 0.95 \\ & & Quad & 0.19 & 0.01 & 0.19 & 0.95 & 0.14 & 0.02 & 0.15 & 0.95 & 0.24 & 0.04 & 0.24 & 0.96 \\ & (iii) & Lin & 0.18 & 0.15 & 0.18 & 0.88 & 0.15 & 0.13 & 0.15 & 0.86 & 0.22 & 0.15 & 0.23 & 0.91 \\ \multirow{-6}{*}{(c)} & & \textbf{Quad} & \textbf{0.18} & \textbf{0.01} & \textbf{0.18} & \textbf{0.95} & \textbf{0.14} & \textbf{0.05} & \textbf{0.14} & \textbf{0.93} & \textbf{0.22} & \textbf{0.11} & \textbf{0.23} & \textbf{0.93} \\ \hline \end{tabular} }} \label{table_inferece} \end{table} \subsection{\textcolor{black}{Results on estimation efficiency} }\label{sec_sim_estimation} In Tables 
\ref{table_ate_efficiency}--\ref{table_qte_efficiency}, we report the efficiencies, measured by mean squared errors, of various supervised and SS estimators relative to the corresponding ``oracle'' supervised estimators $\hat{\mu}_{\mbox{\tiny ORA}}$ and $\hvt_{\mbox{\tiny ORA}}$, constructed by substituting $\{\pi(\cdot),m(\cdot),\phi(\cdot,\cdot)\}$ for $\{\hat{\pi}_n(\cdot),\hat{m}_n(\cdot),\hat{\phi}_n(\cdot,\cdot)\}$ in \eqref{sup_ate} and \eqref{sup_qte}. The supervised ``oracle'' estimators of the QTE use the initial estimators and estimated densities from the IPW approach described in Remark \ref{remark_qte_initial_estimator} with $\hat{\pi}_N(\cdot)$ replaced by $\pi(\cdot)$. We clarify that such ``oracle'' estimators (for both the ATE and the QTE) are obviously \emph{unrealistic}; they serve only as suitable benchmarks that are always consistent. Specifically, the relative efficiencies in Table \ref{table_ate_efficiency} are calculated by: \begin{eqnarray*} {\cal E}\{(\hat{\mu}_{\mbox{\tiny ORA}}-\mu_0)^2\}/{\cal E}\{(\hat{\mu}_{\mbox{\tiny SUP}}-\mu_0)^2\}\hbox{ and } {\cal E}\{(\hat{\mu}_{\mbox{\tiny ORA}}-\mu_0)^2\}/{\cal E}\{(\hat{\mu}_{\mbox{\tiny SS}}-\mu_0)^2\}, \end{eqnarray*} while those in Table \ref{table_qte_efficiency} are given by: \begin{eqnarray*} {\cal E}\{(\hvt_{\mbox{\tiny ORA}}-\theta_0)^2\}/{\cal E}\{(\hvt_{\mbox{\tiny SUP}}-\theta_0)^2\}\hbox{ and } {\cal E}\{(\hvt_{\mbox{\tiny ORA}}-\theta_0)^2\}/{\cal E}\{(\hvt_{\mbox{\tiny SS}}-\theta_0)^2\}.
\end{eqnarray*} For reference, we provide the ``oracle'' relative efficiencies (denoted as ``ORE'' in the tables) given by: $\lambda_{\mbox{\tiny SUP}}^2/\lambda_{\mbox{\tiny SS}}^2$ and $\sigma_{\mbox{\tiny SUP}}^2/\sigma_{\mbox{\tiny SS}}^2$ with $\{m^*(\cdot),\phi^*(\cdot,\cdot)\}=\{m(\cdot),\phi(\cdot,\cdot)\}$ as well, where $\lambda_{\mbox{\tiny SUP}}^2$, $\lambda_{\mbox{\tiny SS}}^2$, $\sigma_{\mbox{\tiny SUP}}^2$ and $\sigma_{\mbox{\tiny SS}}^2$ are the asymptotic variances in \eqref{ate_normality}, \eqref{ate_sup_normality}, \eqref{qte_normality} and \eqref{qte_sup_normality}, respectively. The unknown quantities therein, as well as the true values of $\mu_0$ and ${\boldsymbol\theta}$, are approximated by Monte Carlo based on $100,000$ realizations of $(Y,T,{\mathbf X}^{\rm T})^{\rm T}$ independent of $ \mathcal{L}\cup \mathcal{U}$. It is noteworthy that these ``oracle'' relative efficiencies can be achieved only asymptotically, and only when $\{\pi(\cdot),m(\cdot),\phi(\cdot,\cdot)\}$ are all correctly specified and estimated at fast enough rates. \vskip0.08in Generally speaking, the results in Tables \ref{table_ate_efficiency}--\ref{table_qte_efficiency} clearly show that our SS estimators uniformly outperform their supervised competitors, and even yield better efficiency than the supervised ``oracle'' estimators in most cases, as indicated by entries greater than one in the tables. Specifically, inspecting the two tables reveals that, among all the settings, our SS estimators deliver the most significant efficiency improvement when all the nuisance models are correctly specified.
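Concretely, each entry of Tables \ref{table_ate_efficiency}--\ref{table_qte_efficiency} is a ratio of Monte Carlo mean squared errors. The computation can be sketched as follows (the replicate arrays below are hypothetical placeholders, not the paper's actual simulation output):

```python
import numpy as np

def relative_efficiency(est_oracle, est_other, truth):
    """MSE of the oracle benchmark divided by MSE of a competing estimator,
    both over Monte Carlo replicates; a value above one means the competing
    estimator is more efficient than the oracle benchmark."""
    mse_oracle = np.mean((np.asarray(est_oracle) - truth) ** 2)
    mse_other = np.mean((np.asarray(est_other) - truth) ** 2)
    return mse_oracle / mse_other

# Illustration with synthetic replicates: an estimator with twice the
# oracle's standard error has relative efficiency of about 1/4.
rng = np.random.default_rng(0)
oracle = 1.0 + 0.1 * rng.normal(size=100_000)
other = 1.0 + 0.2 * rng.normal(size=100_000)
re = relative_efficiency(oracle, other, truth=1.0)
```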
For instance, when $\{m({\mathbf X}),\pi({\mathbf X})\}=\{(a),(i)\}$, the combination of Lin and PR correctly estimates the nuisance functions and gives fairly impressive results for the ATE case. Moreover, when Lin and Quad both correctly approximate $\pi({\mathbf X})$, they yield similar results. However, under the setups with $\{m({\mathbf X}), \pi({\mathbf X})\}=\{(c),(iii)\}$, for example, where Quad produces estimators converging to the true $\pi({\mathbf X})$ but Lin does not, and all the working outcome models misspecify the underlying relation between $Y/I(Y<{\boldsymbol\theta})$ and ${\mathbf X}$, Quad shows notable advantages over Lin. This substantiates the importance of the propensity score estimators $\hat{\pi}_N({\mathbf X})$ in our methods, as stated in Corollaries \ref{corate} and \ref{corqte}. As regards the choices of $\hat{m}_{n,k}({\mathbf X})$ and $\hat{\phi}_{n,k}({\mathbf X}, \theta)$, KS$_1$ gives the best efficiency in most cases, justifying the approach of combining kernel smoothing and dimension reduction to estimate the outcome models, as demonstrated in Sections \ref{sec_nf_ate}--\ref{sec_nf_qte}. Further, we observe that, as the labeled data size increases, the relative efficiencies of our SS estimators rise substantially, except for a few cases, such as the ATE estimator with the PR outcome model estimators when $p=10$. This improvement verifies the asymptotic properties claimed in Sections \ref{sec_ate_ss} and \ref{sec_qte_general}, while the exceptions can be explained by the fact that the benchmarks used for calculating the relative efficiencies, i.e., the ``oracle'' supervised estimators, also improve with more labeled data.
Considering that the ``oracle'' supervised estimators are always constructed with the true nuisance functions, without \emph{any} estimation errors, the positive effect of increasing $n$ on them is likely to be more pronounced than that on our SS estimators. Another interesting finding is that, in the scenario $(n,p,q)=(200,200,\ceil{p^{1/2}})$ where $q=O(n^{1/2})$, our SS estimators still beat their supervised counterparts under all the settings, and possess efficiencies close to, or even \emph{better} than, those of the supervised ``oracle'' estimators, which use the knowledge of the true data generating mechanisms, when all the nuisance models are correctly specified. This (pleasantly) surprising fact implies that the performance of our methods is somewhat \emph{insensitive} to the sparsity condition $q=o(n^{1/2})$, which is often required in the high dimensional inference literature \citep{buhlmann2011statistics, negahban2012unified, wainwright2019high} to ensure the $L_1$-consistency assumed in Assumption \ref{al1} for the nuisance estimators; see also the relevant discussion in Remark \ref{remark_choice_of_P0}. \begin{remark}[Interpretations of the relative efficiencies in Tables \ref{table_ate_efficiency}--\ref{table_qte_efficiency}]\label{remark_interpretation_RE} One may notice that the relative efficiencies of our SS estimators are sometimes quite different from the corresponding oracle quantities (ORE) in the tables. We attribute the differences to two reasons: (a) possible misspecification of the nuisance models, which obviously makes the oracle efficiencies unachievable, and (b) finite sample errors, from which \emph{any} practical method has to suffer, especially in high dimensional scenarios.
In contrast, the oracle relative efficiencies are calculated presuming all the nuisance models are known and the sample sizes are infinite. Lastly, it is also worth pointing out that the quantities in Tables \ref{table_ate_efficiency}--\ref{table_qte_efficiency} somewhat ``understate'' the efficiency gain of our methods, in the sense that the benchmarks, i.e., the ``oracle'' supervised estimators, are \emph{unrealistic} due to requiring knowledge of the underlying data generating mechanisms. When compared with the \emph{feasible} supervised estimators, the advantage of our methods is even \emph{more significant}. For example, when $(n,p,q)=(200,200,\ceil{p^{1/2}})$, $\{m({\mathbf X}),\pi({\mathbf X})\}=\{(c),(i)\}$ and the nuisance functions are estimated by the combination of Lin and KS$_1$, the efficiencies of our SS estimators relative to the supervised competitors are $0.56/0.16=3.50$ and $0.55/0.27=2.04$ for the cases of the ATE and the QTE, respectively.
Relative to the original numbers $0.56$ and $0.55$ in the tables, the ratios $3.50$ and $2.04$ indeed provide more direct and compelling evidence of the efficiency superiority of our methods. We nevertheless choose the ``oracle'' supervised estimators as suitable (common) benchmarks for comparing all estimators, supervised and semi-supervised alike, because they are always consistent and, more importantly, are the \emph{best} achievable supervised estimators (albeit idealized and infeasible, with both nuisance functions $\pi(\cdot)$ and $m(\cdot)/\phi(\cdot,\cdot)$ presumed known). \end{remark} \subsection{Results on inference}\label{sec_sim_inference} Next, Table \ref{table_inferece} presents the results of inference based on our SS estimators using KS$_1$ (as a representative case) to calculate $\hat{m}_n(\cdot)$ and $\hat{\phi}_n(\cdot,\cdot)$ when $n=500$. We report the bias, the empirical standard error (ESE), the average of the estimated standard errors (ASE), and the coverage rate (CR) of the 95\% confidence intervals. As expected, the biases are negligible as long as either the propensity score or the outcome model is correctly specified, which \emph{verifies} the DR property of our methods. Moreover, we can see that whenever $\pi^*(\cdot)=\pi(\cdot)$, the ASEs are fairly close to the corresponding ESEs and the CRs are all around the nominal level of 0.95, \emph{even if} $m^*(\cdot)\neq m(\cdot)$ and $\phi^*(\cdot,\cdot)\neq\phi(\cdot,\cdot)$. See, for example, the results of the configurations marked in bold, where $\pi^*(\cdot)=\pi(\cdot)$ but the outcome model estimators based on KS$_1$ do {\it not} converge to $m(\cdot)$ (for the ATE) or $\phi(\cdot,\cdot)$ (for the QTE).
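The four reported summary measures (bias, ESE, ASE, CR) can be computed from simulation replicates as follows (a minimal sketch; the replicate arrays are hypothetical placeholders):

```python
import numpy as np

def inference_summary(estimates, std_errors, truth):
    """Bias, ESE, ASE and CR of 95% Wald intervals over Monte Carlo replicates."""
    estimates = np.asarray(estimates)
    std_errors = np.asarray(std_errors)
    z = 1.96                                  # standard normal 97.5% quantile
    bias = float(estimates.mean() - truth)
    ese = float(estimates.std(ddof=1))        # empirical standard error
    ase = float(std_errors.mean())            # average estimated standard error
    cr = float(np.mean(np.abs(estimates - truth) <= z * std_errors))
    return bias, ese, ase, cr
```

When the estimator is approximately normal and the standard errors are well calibrated, CR should land near the nominal 0.95.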
Such an observation confirms that, owing to the use of the massive unlabeled data, the \emph{$n^{1/2}$-consistency and asymptotic normality} of our SS ATE and QTE estimators \emph{only} require correct specification of $\pi(\cdot)$, as claimed in Corollaries \ref{corate} and \ref{corqte}. It also justifies the limiting distributions and variance estimators proposed in the two corollaries. Lastly, as mentioned before, we only present results of inference for one case as an illustration. When we set $n=200$ or take other choices of $\{\hat{m}_n(\cdot),\hat{\phi}_n(\cdot,\cdot)\}$, our estimators still give satisfactory inference results similar in flavor to those in Table \ref{table_inferece}. We therefore skip them here for the sake of brevity. \section{Real data analysis}\label{sec_data_analysis} In this section, we apply our proposed methods to a data set from \citet{baxter2006genotypic} that is available at the Stanford University HIV Drug Resistance Database \citep{rhee2003human} (https://hivdb.stanford.edu/pages/genopheno.dataset.html). This data set was also considered in \citet{zhang2019high} for illustration of their SS mean estimator\footnote{We are grateful to Yuqian Zhang for sharing details on the data pre-processing used in \citet{zhang2019high}.}. In the data set, there is an observed outcome, $\mathbb{Y}$, representing the drug resistance to lamivudine (3TC), a nucleoside reverse transcriptase inhibitor, along with indicators of mutations at $240$ positions of the HIV reverse transcriptase.
Our goal is to investigate the causal effects (ATE$/$QTE) of these mutations on drug resistance. We set the treatment indicator $T$ to be the existence of a mutation at the $m$th position, while regarding the other $p=239$ indicators as the covariates ${\mathbf X}$. In the interest of space, we only take $m\in\{39,69,75,98,123,162,184,203\}$, a randomly selected subset of $\{1,\ldots,240\}$, for illustration; analysis with other choices of $m$ can be conducted analogously. As regards the sample sizes, the labeled and unlabeled data contain $n=423$ and $N=2458$ observations, respectively. To test whether the labeled and unlabeled data are equally distributed and satisfy Assumption \ref{ass_equally_distributed}, we calculate the Pearson test statistic and obtain a corresponding $p$-value of $0.18$ using a permutation distribution \citep{agresti2005multivariate}, suggesting that the labeling is indeed independent of $(T,{\mathbf X}^{\rm T})^{\rm T}$. In the following, we estimate the ATE \eqref{ate} and the QTE \eqref{qte} (with $\tau=0.5$) with this data, based on the limiting distributions \eqref{ate_difference_distribution} and \eqref{qte_difference_distribution}, rather than focusing on $\mu_0(1)$ and ${\boldsymbol\theta}(1)$ only. For implementing our estimators, in addition to the nuisance estimation approaches leveraged in Section \ref{sec_simulations}, we also estimate the propensity score and outcome models using random forests here, treating $T$, $Y$ or $I(Y<\hvt_{\mbox{\tiny INIT}})$ as the response, growing $500$ trees and randomly sampling $\ceil{p^{1/2}}$ covariates as candidates at each split.
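The random forest configuration described above corresponds, e.g., to the following scikit-learn settings for the propensity score model (a sketch with hypothetical placeholder data; the paper's actual implementation may differ):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n, p = 423, 239                        # labeled sample size and covariate dimension
X = rng.integers(0, 2, size=(n, p))    # hypothetical mutation indicators
T = rng.integers(0, 2, size=n)         # hypothetical treatment indicator

# Propensity score model: 500 trees, ceil(p^{1/2}) candidate covariates per split.
m_try = int(np.ceil(np.sqrt(p)))
rf_ps = RandomForestClassifier(n_estimators=500, max_features=m_try, random_state=0)
rf_ps.fit(X, T)
pi_hat = rf_ps.predict_proba(X)[:, 1]  # estimated propensity scores
```

The outcome models are fit analogously, with $Y$ (for the ATE) or $I(Y<\hvt_{\mbox{\tiny INIT}})$ (for the QTE) as the response.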
In Figures \ref{figure_ate} and \ref{figure_qte}, we display the 95\% confidence intervals of the ATE and the QTE, respectively, averaged over 10 replications to remove potential randomness from cross fitting. (The confidence intervals are also presented numerically in Appendix \ref{sm_data_analysis} of the Supplementary Material.) From the plots, we observe that our SS approaches generally yield \emph{shorter} confidence intervals than their supervised counterparts, confirming again the efficiency gain from the use of unlabeled data. Moreover, we notice that, when $m=203$, all the SS confidence intervals of the QTE lie strictly above zero, indicating a significantly positive median treatment effect. This finding is, however, very likely to be missed in the supervised setting, since zero is contained in the confidence intervals constructed from the labeled data only. Such a contrast reinforces the fact that our SS methods are notably more powerful in detecting significant treatment effects. \begin{figure} \centering \caption{Data analysis: $95\%$ confidence intervals for the ATE of the mutations on the drug resistance to 3TC based on the supervised estimator \eqref{sup_ate} (\underline{undashed bars}) and the SS estimator \eqref{ss_ate} (\underline{dashed bars}). Here, $m$ is the position of the mutation regarded as the treatment indicator.
We consider three different combinations to estimate the ``propensity score \& outcome model'' (denoted by the three bar colors): $(\mathrm{i})$ regularized logistic regression \& kernel smoothing on the first two directions selected by the regularized sliced inverse regression ({\color{red} \textbf{red}} fill); $(\mathrm{ii})$ regularized logistic regression \& regularized parametric regression ({\color{darkpastelgreen} \textbf{green}} fill); $(\mathrm{iii})$ random forest \& random forest ({\color{bleudefrance} \textbf{blue}} fill).} \includegraphics[scale=0.6]{ate} \label{figure_ate} \end{figure} \begin{figure} \centering \caption{Data analysis: We consider the same scenario as in Figure \ref{figure_ate}, but now the estimand is the QTE $(\tau=0.5)$.} \includegraphics[scale=0.6]{qte} \label{figure_qte} \end{figure} \section{Concluding discussion} \label{sec_conclusion_discussion} We have developed here a family of SS estimators for (a) the ATE and (b) the QTE, in possibly high dimensional settings, and, more importantly, we have developed a unified understanding of SS causal inference and its benefits, \emph{both} in robustness and efficiency, something we feel has been missing in the literature. In addition to the DR property in consistency, which can be attained by purely supervised methods as well, we have proved that our estimators also possess $n^{1/2}$-consistency and asymptotic normality whenever the propensity score $\pi(\cdot)$ is correctly specified. This property is useful for inference, while generally unachievable in supervised settings. Even if this difference in robustness is ignored, our estimators are still guaranteed to be more efficient than their supervised counterparts.
Further, as long as all the nuisance functions are correctly specified, our approaches have been shown to attain semi-parametric optimality as well. All our theoretical claims have also been validated numerically, via extensive simulation studies as well as an empirical data analysis. Moreover, as a principled and flexible choice for estimating the outcome models in our methods, we have thoroughly studied IPW type kernel smoothing estimators in high dimensional settings with possible use of dimension reduction techniques. We have shown that they uniformly converge in probability to ${\cal E}(Y\mid\mathbf{P}_0^{\rm T}{\mathbf X})$ (for the case of the ATE) or ${\cal E}\{\psi(Y,\theta)\mid\mathbf{P}_0^{\rm T}{\mathbf X}\}$ (for the case of the QTE) with some transformation matrix $\mathbf{P}_0$, given that either the propensity score or the outcome model is correctly specified, but {\it not} necessarily both. The precise convergence rates have been derived as well. This DR property guarantees the efficiency advantage of our SS methods over their supervised competitors. We view these results as one of our major contributions. To the best of our knowledge, results of this flavor (especially in high dimensions, with $p$ diverging) have not been established in the relevant existing literature. They are applicable to many other problems as well and should therefore be of independent interest.
\paragraph*{Extensions} As mentioned in Section \ref{sec:psetup}, while we focus on the ATE and the QTE for simplicity and clarity of the main messages, our SS methods \emph{can} be easily extended to other causal estimands, including the \emph{general $Z$-estimation problem} \citep{van2000asymptotic,van1996weak}, targeting a parameter defined as the solution to an estimating equation. As long as the estimand has a closed form like $\mu_0\equiv{\cal E}(Y)$, one can construct a family of SS estimators in the same spirit as our ATE estimators \eqref{ss_ate}. An example is the \emph{linear regression parameter} $\boldsymbol{\beta}_0^{\mbox{\tiny LIN}}:=\{{\cal E}(\overrightarrow{\X}\Xarrow^{\rm T})\}^{-1}{\cal E}(\overrightarrow{\X} Y)$ that solves the equation: ${\cal E}\{\overrightarrow{\X}(Y-\overrightarrow{\X}^{\rm T}\boldsymbol{\beta}_0^{\mbox{\tiny LIN}})\}={\mathbf 0}_d$, where $\overrightarrow{\X}:=(1,{\mathbf X}^{\rm T})^{\rm T}$. On the other hand, for estimating equations that cannot be solved straightforwardly, the one-step update strategy, used for our QTE estimators \eqref{ss_qte}, allows for simple and flexible implementation of SS estimation and inference with various choices of nuisance estimators. For instance, our approach to constructing the SS QTE estimators can be adapted for the \emph{quantile regression parameter} $\boldsymbol{\beta}_0^{\mbox{\tiny QUAN}}$, defined by the equation ${\cal E}[\overrightarrow{\X}\{I(Y<\overrightarrow{\X}^{\rm T}\boldsymbol{\beta}_0^{\mbox{\tiny QUAN}})-\tau\}]={\mathbf 0}_d$, with extra technical effort. These SS estimators for the general estimating equation problems are expected to possess desirable properties, such as improved robustness and efficiency relative to their supervised counterparts, similar in spirit to those stated in Sections \ref{secos} and \ref{secqte} for our SS ATE and QTE estimators. We briefly discuss in Appendix \ref{sm_Z_estimation} the methodological details of these possible extensions of our SS inference methods to the general $Z$-estimation problem under the potential outcome framework. However, a detailed theoretical analysis is beyond the scope (and primary goals) of the current work, and therefore, we choose not to delve any further into these aspects here. \vskip0.05in Lastly, in this article we have only considered cases where the labeled and unlabeled data are equally distributed and thereby satisfy Assumption \ref{ass_equally_distributed}. However, the labeling mechanisms in some practical problems are in fact not determined by design, and hence \emph{labeling bias} can exist between $ \mathcal{L}$ and $ \mathcal{U}$. It is important to note that, due to the disproportion assumption \eqref{disproportion}, one \emph{cannot} simply analyze such settings using classical missing data theory \citep{tsiatis2007semiparametric, little2019statistical}, which requires that the proportion of complete observations be bounded away from zero in the sample. Some recent attention has been paid to SS inference with labeling bias in the context of linear regression \citep[Section II]{chakrabortty2018efficient} and mean estimation \citep{zhang2021double_robust}.
For treatment effect estimation, which is more technically complicated owing to the potential outcome framework, a primary challenge is that there exists no consistent supervised method when the labeled and unlabeled data follow different distributions, so the goal of using unlabeled data to `improve' estimation accuracy over supervised approaches becomes somewhat ambiguous. With biased labeling mechanisms, we believe SS inference for treatment effects needs to be studied under a novel framework, and it thus poses an interesting problem for future research. \begin{appendix} {\color{black} \par\smallskip \section{Extension to general $Z$-estimation problems}\label{sm_Z_estimation} In this section, we briefly discuss the SS inference strategy for the \emph{general $Z$-estimation problem} \citep{van1996weak, van2000asymptotic} under the potential outcome framework, based on a natural extension of our proposed methods for the ATE and the QTE in Sections \ref{secos} and \ref{secqte}. Specifically, for some \emph{fixed} $d\geq 1$, we are interested in a $d$-dimensional parameter $\boldsymbol \theta_0\in\Lambda\subset\mathbb{R}^d$, for some parameter space $\Lambda$, defined as the solution to the \emph{estimating equation}: \begin{eqnarray} {\cal E}\{\boldsymbol{\psi}(Y,{\mathbf X},\boldsymbol \theta_0)\}~=~{\mathbf 0}_d, \label{Z_estimation} \end{eqnarray} where $\boldsymbol{\psi}(\cdot,\cdot,\cdot)\in\mathbb{R}^d$ is some known function satisfying ${\cal E}\{\|\boldsymbol{\psi}(Y,{\mathbf X},\boldsymbol \theta)\|^2\}<\infty$ for any $\boldsymbol \theta\in\Lambda$, and such that $\mathbf{H}(\boldsymbol \theta):=\partial{\cal E}\{\boldsymbol{\psi}(Y,{\mathbf X},\boldsymbol \theta)\}/\partial\boldsymbol \theta$ exists and is non-singular in a neighborhood $ \mathcal{B}(\boldsymbol \theta_0,\varepsilon)$ of $\boldsymbol \theta_0$ for some $\varepsilon>0$.
In particular, the special cases with $\boldsymbol{\psi}(Y,{\mathbf X},\boldsymbol \theta)\equiv Y-\boldsymbol \theta$ and $\boldsymbol{\psi}(Y,{\mathbf X},\boldsymbol \theta)\equiv I(Y<\boldsymbol \theta)-\tau$, with $d=1$, correspond to the earlier cases of the ATE and the QTE, respectively. This type of SS $Z$-estimation problem \eqref{Z_estimation}, but \emph{without} the missingness of the potential outcome $Y$ in the labeled data (which can be viewed as a special case of the following discussion with $T\equiv 1$), has been studied in Chapter 2 of \citet{Chakrabortty_Thesis_2016}. \paragraph*{SS estimators} Similar in spirit to \eqref{qte_dr_representation}, the following \emph{DR type representation}: \begin{eqnarray} {\mathbf 0}_d&~=~&{\cal E}\{\boldsymbol{\psi}(Y,{\mathbf X},\boldsymbol \theta_0)\} \label{EE_DR_representation}\\ &~=~&{\cal E}\{ \boldsymbol{\phi}^*({\mathbf X},\boldsymbol \theta_0)\}+ {\cal E}[\{\pi^*({\mathbf X})\}^{-1} T\{\boldsymbol{\psi}(Y,{\mathbf X},\boldsymbol \theta_0) - \boldsymbol{\phi}^*({\mathbf X},\boldsymbol \theta_0)\}], \nonumber \end{eqnarray} with arbitrary functions $\{\pi^*(\cdot),\boldsymbol{\phi}^*(\cdot,\cdot)\}$, holds true for the estimating equation \eqref{Z_estimation} as long as either $\pi^*({\mathbf X})=\pi({\mathbf X})$ or $\boldsymbol{\phi}^*({\mathbf X},\boldsymbol \theta)=\boldsymbol{\phi}({\mathbf X},\boldsymbol \theta):={\cal E}\{\boldsymbol{\psi}(Y,{\mathbf X},\boldsymbol \theta)\mid{\mathbf X}\}$, but \emph{not} necessarily both.
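The DR property of this representation can be illustrated with a quick Monte Carlo check in the scalar case $\boldsymbol{\psi}(Y,{\mathbf X},\boldsymbol \theta)\equiv Y-\boldsymbol \theta$: with a correctly specified propensity score, the right-hand side stays (approximately) zero even under a deliberately misspecified $\boldsymbol{\phi}^*$. All functions below are illustrative choices, not the paper's models:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 400_000

# Simulate (X, T, Y) with T | X ~ Bernoulli(pi(X)).
X = rng.normal(size=n)
pi = 0.25 + 0.5 / (1.0 + np.exp(-X))   # true propensity, bounded away from 0
T = rng.binomial(1, pi)
Y = X + rng.normal(size=n)             # so theta0 = E(Y) = 0

theta0 = 0.0
psi = Y - theta0                       # psi(Y, X, theta0) = Y - theta0
phi_star = np.zeros(n)                 # deliberately misspecified outcome model

# DR representation: E{phi*} + E[T/pi * (psi - phi*)] should equal E{psi} = 0,
# despite phi* being wrong, because pi* = pi is correct.
lhs = phi_star.mean() + np.mean(T / pi * (psi - phi_star))
```

Swapping in the correct $\phi^*({\mathbf X},\theta_0)={\cal E}(Y\mid{\mathbf X})-\theta_0=X$ and a wrong propensity would yield the same conclusion, which is the other half of double robustness.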
The \emph{empirical version} of \eqref{EE_DR_representation}, constructed based on $ \mathcal{L}\cup \mathcal{U}$, is then given by: \begin{eqnarray} {\cal E}_{n+N}\{ \hat{\bphi}_n({\mathbf X},\boldsymbol \theta)\}+ {\cal E}_n[\{\hat{\pi}_N({\mathbf X})\}^{-1} T\{\boldsymbol{\psi}(Y,{\mathbf X},\boldsymbol \theta) - \hat{\bphi}_n({\mathbf X},\boldsymbol \theta)\}]~=~{\mathbf 0}_d, \label{sample_DR_representation} \end{eqnarray} where $\hat{\bphi}_n(\cdot,\cdot)$ is some estimator of $\boldsymbol{\phi}^*(\cdot,\cdot)$ from $ \mathcal{L}$, constructed via cross-fitting procedures similar to \eqref{ds3}--\eqref{ds4} so that ${\mathbf X}_i$ and $\hat{\bphi}_n(\cdot,\cdot)$ are independent in $\hat{\bphi}_n({\mathbf X}_i,\boldsymbol \theta)$ $(i=1,\ldots,n)$, and $\hat{\pi}_N(\cdot)$ is some estimator of $\pi(\cdot)$ based on $ \mathcal{U}$, the same as in Sections \ref{secos}--\ref{secqte}. Then, following derivations analogous to those at the beginning of Section \ref{sec_qte_general}, which yielded our SS QTE estimators \eqref{ss_qte}, we can implement the one-step update approach based on the influence function corresponding to \eqref{sample_DR_representation}, and obtain \emph{a family of semi-supervised $Z$-estimators} for $\boldsymbol \theta_0$: \begin{eqnarray} &&\quad\hat{\btheta}_{\mbox{\tiny SS}}~:=~ \hat{\btheta}_{\mbox{\tiny INIT}} +\{\hat{\bH}_n(\hat{\btheta}_{\mbox{\tiny INIT}})\}^{-1}({\cal E}_n[\{\hat{\pi}_N({\mathbf X})\}^{-1}T\{\hat{\bphi}_n({\mathbf X},\hat{\btheta}_{\mbox{\tiny INIT}}) - \boldsymbol{\psi}(Y,{\mathbf X},\hat{\btheta}_{\mbox{\tiny INIT}})\}]- \label{EE_one_step}\\ &&\phantom{\quad\hat{\btheta}_{\mbox{\tiny SS}}~:=~ \hat{\btheta}_{\mbox{\tiny INIT}} +\{\hat{\bH}_n(\hat{\btheta}_{\mbox{\tiny INIT}})\}^{-1}(}{\cal E}_{n+N}\{\hat{\bphi}_n({\mathbf X},\hat{\btheta}_{\mbox{\tiny INIT}})\}), \nonumber \end{eqnarray} indexed by
$\{\hat{\pi}_N(\cdot),\hat{\bphi}_n(\cdot,\cdot),\hat{\btheta}_{\mbox{\tiny INIT}},\hat{\bH}_n(\cdot)\}$, where $\hat{\btheta}_{\mbox{\tiny INIT}}$ is an initial estimator of $\boldsymbol \theta_0$ and $\hat{\bH}_n(\cdot)$ is an estimator of $\mathbf{H}(\cdot)$, both based on $ \mathcal{L}$. Of course, if the analytical solution, with respect to $\boldsymbol \theta$, of \eqref{sample_DR_representation} exists, one can directly take it as the SS estimator \textcolor{black}{$\hat{\btheta}_{\mbox{\tiny SS}}$ itself.} Our SS ATE estimators $\hat{\mu}_{\mbox{\tiny SS}}$, given in \eqref{ss_ate}, are examples of this type. However, the one-step update \eqref{EE_one_step} is obviously a more general strategy that is implementation-friendly and is broadly applicable to estimating equations of various forms, regardless of whether their analytical solutions exist or not. \paragraph*{\textcolor{black}{Properties of $\hat{\btheta}_{\mbox{\tiny SS}}$ (brief sketch)}} To derive the properties of our SS estimators $\hat{\btheta}_{\mbox{\tiny SS}}$, we need to impose the following restrictions on the complexity of the class of the estimating functions: \begin{eqnarray} &&\hbox{For some $\varepsilon>0$, the (random) function class $\{\boldsymbol{\psi}(Y,{\mathbf X},\boldsymbol \theta):\boldsymbol \theta\in \mathcal{B}(\boldsymbol \theta_0,\varepsilon)\}$} \label{EE_P_Donsker}\\ &&\hbox{lies in a ${\mathbb P}$-Donsker class with square integrable envelope functions, \textcolor{black}{~and}}\nonumber \\ &&\hbox{${\cal E}_{\mathbf Z}\{\|\boldsymbol{\psi}(Y,{\mathbf X},\tilde{\boldsymbol \theta})-\boldsymbol{\psi}(Y,{\mathbf X},\boldsymbol \theta_0)\|^2\}\textcolor{black}{~\xrightarrow{p}~}0$ \textcolor{black}{~for} any (random) \textcolor{black}{sequence~} $\tilde{\boldsymbol \theta}\xrightarrow{p}\boldsymbol \theta_0$.} \nonumber \end{eqnarray} \textcolor{black}{Further,} we require the function $\boldsymbol{\psi}_0(\boldsymbol \theta):={\cal E}\{\boldsymbol{\psi}(Y,{\mathbf 
X},\boldsymbol \theta)\}$ to be smooth enough so that, in $ \mathcal{B}(\boldsymbol \theta_0,\varepsilon)$ for some $\varepsilon>0$, it satisfies the Taylor expansion: \begin{eqnarray} &&\boldsymbol{\psi}_0(\boldsymbol \theta)~=~\boldsymbol{\psi}_0(\boldsymbol \theta_0)+\mathbf{H}(\boldsymbol \theta_0)(\boldsymbol \theta-\boldsymbol \theta_0)+{\bf r}(\boldsymbol \theta,\boldsymbol \theta_0) \hbox{ \textcolor{black}{~for} some ${\bf r}(\boldsymbol \theta,\boldsymbol \theta_0)$\textcolor{black}{,}} \label{EE_Taylor}\\ &&\hbox{such that $\|{\bf r}(\boldsymbol \theta,\boldsymbol \theta_0)\|\textcolor{black}{~=~} O(\|\boldsymbol \theta-\boldsymbol \theta_0\|^2)$ \textcolor{black}{~as~} $\boldsymbol \theta\to\boldsymbol \theta_0$.} \nonumber \end{eqnarray} These conditions \eqref{EE_P_Donsker}--\eqref{EE_Taylor} are fairly mild and standard for estimating equation problems, and their analogues can be found in the \textcolor{black}{(supervised)} $Z$-estimation literature such as \citet{van2000asymptotic}. It is also noteworthy that, under the basic Assumption \ref{adensity}, \eqref{EE_P_Donsker}--\eqref{EE_Taylor} are in fact satisfied by the special case $\boldsymbol{\psi}(Y,{\mathbf X},\boldsymbol \theta)\equiv I(Y<\boldsymbol \theta)-\tau$ with $d=1$, which is the estimating function \textcolor{black}{corresponding to the QTE;} see the proof of Theorem \ref{thqte} in Section \ref{proof_theorem_qte} for details. \textcolor{black}{Further}, we need to regulate the behavior of the components $\{\hat{\pi}_N(\cdot),\hat{\bphi}_n(\cdot,\cdot),\hat{\btheta}_{\mbox{\tiny INIT}},\hat{\bH}_n(\cdot)\}$ in \eqref{EE_one_step} and the possibly misspecified limits $\{\pi^*(\cdot),\boldsymbol{\phi}^*(\cdot,\cdot)\}$ of $\{\hat{\pi}_N(\cdot),\hat{\bphi}_n(\cdot,\cdot)\}$. 
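As an illustration of \eqref{EE_Taylor}, in the QTE special case mentioned above, $\boldsymbol{\psi}_0(\theta)=F(\theta)-\tau$ and $\mathbf{H}(\theta)=f(\theta)$, with $F(\cdot)$ and $f(\cdot)$ denoting the distribution and density functions of $Y$, so that \eqref{EE_Taylor} reads:
\begin{eqnarray*}
F(\theta)-\tau~=~F(\theta_0)-\tau+f(\theta_0)(\theta-\theta_0)+r(\theta,\theta_0),
\end{eqnarray*}
where $|r(\theta,\theta_0)|=O(|\theta-\theta_0|^2)$ as $\theta\to\theta_0$, since $f(\cdot)$ has a bounded derivative in a neighborhood of $\theta_0$ under Assumption \ref{adensity}.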
Noticing that the \emph{high-level} conditions on $\{\hat{\pi}_N(\cdot),\hat{\phi}_n(\cdot,\cdot),\hvt_{\mbox{\tiny INIT}},\hat{f}_n(\cdot),\pi^*(\cdot),\phi^*(\cdot,\cdot)\}$ \textcolor{black}{that were} listed in Assumptions \ref{ainit}--\ref{aest} do \emph{not} require \emph{any} specific forms of these components, we can easily adapt them for the case of the general estimating equation \eqref{Z_estimation}, with appropriate modifications for the (fixed-dimensional) vector/matrix-valued (random) functions involved, e.g., taking the \emph{column-wise $L_2$-norms} $\|\cdot\|$ of these functions and their moments; see the definition of $\|\cdot\|$ in the Notation paragraph at the beginning of Section \ref{secos}. Under the above assumptions on the estimating functions and the nuisance components, as well as some necessary (and \textcolor{black}{fairly} reasonable) convergence rate conditions, we can show the following results for our SS estimators $\hat{\btheta}_{\mbox{\tiny SS}}$, which are similar in flavor to those established for our SS ATE and QTE estimators in Sections \ref{secos}--\ref{secqte}. \begin{enumerate}[(i)] \item \emph{Double robustness:} Whenever either $\pi^*(\cdot)=\pi(\cdot)$ or $\boldsymbol{\phi}^*(\cdot,\cdot)=\boldsymbol{\phi}(\cdot,\cdot)$ holds, but not necessarily both, our SS estimators $\hat{\btheta}_{\mbox{\tiny SS}}$ are consistent for $\boldsymbol \theta_0$. \item \emph{$n^{1/2}$ consistency and asymptotic normality}: Suppose that $\pi^*(\cdot)=\pi(\cdot)$. 
Then, if either $\boldsymbol{\phi}^*(\cdot,\cdot)=\boldsymbol{\phi}(\cdot,\cdot)$ or we can use the massive unlabeled data to estimate $\pi(\cdot)$ at a rate faster than $n^{-1/2}$, but \textcolor{black}{\it not} necessarily both, our estimators $\hat{\btheta}_{\mbox{\tiny SS}}$ have the following expansion: \begin{eqnarray} &&\hat{\btheta}_{\mbox{\tiny SS}}-\boldsymbol \theta_0~=~n^{-1}\hbox{$\sum_{i=1}^n$}\boldsymbol{\omega}_{\mbox{\tiny SS}}({\mathbf Z}_i,\boldsymbol \theta_0)+o_p(n^{-1/2}), \hbox{ with } \boldsymbol{\omega}_{\mbox{\tiny SS}}({\mathbf Z},\boldsymbol \theta_0):= \label{EE_expansion}\\ &&\{\mathbf{H}(\boldsymbol \theta_0)\}^{-1}[\{\pi({\mathbf X})\}^{-1}T\{\boldsymbol{\phi}^*({\mathbf X},\boldsymbol \theta_0)-\boldsymbol{\psi}(Y,{\mathbf X},\boldsymbol \theta_0)\}-{\cal E}\{\boldsymbol{\phi}^*({\mathbf X},\boldsymbol \theta_0)\}], \nonumber \end{eqnarray} for an \emph{arbitrary} $\boldsymbol{\phi}^*(\cdot,\cdot)$, \emph{not} necessarily equal to $\boldsymbol{\phi}(\cdot,\cdot)$. This property is generally \emph{unachievable} in purely supervised settings \textcolor{black}{(similar in spirit to our discussions in Remarks \ref{remark_ate_robustness} and \ref{remark_qte_property})}. Further, the expansion \eqref{EE_expansion} implies the limiting distribution of $\hat{\btheta}_{\mbox{\tiny SS}}$, given by: \begin{eqnarray*} n^{1/2}(\hat{\btheta}_{\mbox{\tiny SS}}-\boldsymbol \theta_0)~\xrightarrow{d}~\mathcal{N}_d[\,{\mathbf 0}_d,\hbox{cov}\{\boldsymbol{\omega}_{\mbox{\tiny SS}}({\mathbf Z},\boldsymbol \theta_0)\}\,]\quad (n,N\to\infty). 
\end{eqnarray*} \item \emph{Efficiency improvement and optimality}: Setting aside the robustness properties of our SS estimators stated in (ii), the \emph{best achievable influence function} of a supervised estimator for $\boldsymbol \theta_0$, with the same outcome model estimator $\hat{\bphi}_n(\cdot,\cdot)$, is given by: \begin{eqnarray*} \boldsymbol{\omega}_{\mbox{\tiny SUP}}({\mathbf Z},\boldsymbol \theta_0)~:=~\{\mathbf{H}(\boldsymbol \theta_0)\}^{-1}[\{\pi({\mathbf X})\}^{-1}T\{\boldsymbol{\phi}^*({\mathbf X},\boldsymbol \theta_0)-\boldsymbol{\psi}(Y,{\mathbf X},\boldsymbol \theta_0)\}-\boldsymbol{\phi}^*({\mathbf X},\boldsymbol \theta_0)]. \end{eqnarray*} Comparing the supervised and semi-supervised asymptotic covariance matrices, when $\boldsymbol{\phi}^*({\mathbf X},\boldsymbol \theta)\equiv{\cal E}\{\boldsymbol{\psi}(Y,{\mathbf X},\boldsymbol \theta)\mid{\bf g}({\mathbf X})\}$ for some function ${\bf g}(\cdot)$, we notice that \begin{eqnarray*} \hbox{cov}\{\boldsymbol{\omega}_{\mbox{\tiny SUP}}({\mathbf Z},\boldsymbol \theta_0)\}-\hbox{cov}\{\boldsymbol{\omega}_{\mbox{\tiny SS}}({\mathbf Z},\boldsymbol \theta_0)\} ~=~\{\mathbf{H}(\boldsymbol \theta_0)\}^{-1}\hbox{cov}\{\boldsymbol{\phi}^*({\mathbf X},\boldsymbol \theta_0)\}\{\mathbf{H}(\boldsymbol \theta_0)\}^{-1}, \end{eqnarray*} which is positive semi-definite. This indicates the efficiency superiority of our SS estimators over their supervised counterparts. 
Moreover, as long as both the propensity score $\pi(\cdot)$ and the outcome model $\boldsymbol{\phi}(\cdot,\cdot)$ are correctly specified, the SS \textcolor{black}{estimator's} influence function $\boldsymbol{\omega}_{\mbox{\tiny SS}}({\mathbf Z},\boldsymbol \theta_0)$, given in \eqref{EE_expansion}, actually equals the \emph{efficient influence function} for estimating $\boldsymbol \theta_0$ under the semi-parametric model \eqref{semiparametric_model}, \textcolor{black}{thus implying that} $\hat{\btheta}_{\mbox{\tiny SS}}$ attains the \textcolor{black}{corresponding} \emph{semi-parametric efficiency bound} and is \emph{(locally) semi-parametric efficient}. \end{enumerate} } \section{Technical details}\label{sm_technical} \subsection{Preliminary lemmas}\label{sm_lemmas} The following Lemma \ref{1v2} would be useful in the proofs of the main theorems\textcolor{black}{, in particular, the results in Section \ref{secqte} regarding QTE estimation}. \begin{lemma}\label{1v2} Suppose there are two independent samples, $\mathcal{S}_1$ and $\mathcal{S}_2$, consisting of $n$ and $m$ independent copies of $({\mathbf X}^{\rm T},Y)^{\rm T}$, respectively. For $\mbox{\boldmath $\gamma$}\in\mathbb{R}^d$ with some fixed $d$, let $\widehat{g}_{n}({\mathbf x},\mbox{\boldmath $\gamma$})$ be an estimator of a measurable function $g({\mathbf x},\mbox{\boldmath $\gamma$})\in\mathbb{R}$ based on $\mathcal{S}_1$ and \textcolor{black}{define:} \begin{eqnarray*} \mathbb{G}_{m}\{\widehat{g}_{n}({\mathbf X},\mbox{\boldmath $\gamma$})\}~:=~ m^{1/2}[m^{-1}\hbox{$\sum_{({\mathbf X}_i^{\rm T},Y_i)^{\rm T}\in\mathcal{S}_2}$}\widehat{g}_{n}({\mathbf X}_i,\mbox{\boldmath $\gamma$})-{\cal E}_{\mathbf X}\{\widehat{g}_{n}({\mathbf X},\mbox{\boldmath $\gamma$})\}]. 
\end{eqnarray*} For some set $ \mathcal{T}\subset\mathbb{R}^d$, denote \begin{eqnarray*} \Delta(\mathcal{S}_1)~:=~(\hbox{$\sup_{\bgamma\in\ct}$}{\cal E}_{\mathbf X}[\{\widehat{g}_n({\mathbf X},\mbox{\boldmath $\gamma$})\}^2])^{1/2},\ M(\mathcal{S}_1):=\hbox{$\sup_{\x\in\mx,\,\bgamma\in\ct}$}|\widehat{g}_n({\mathbf x},\mbox{\boldmath $\gamma$})|. \end{eqnarray*} For any $\eta\in(0,\Delta(\mathcal{S}_1)+c\,]$, suppose ${\cal G}_{n}:=\{\widehat{g}_{n}({\mathbf X},\mbox{\boldmath $\gamma$}):\mbox{\boldmath $\gamma$}\in \mathcal{T}\}$ satisfies that \begin{eqnarray} N_{[\,]}\{\eta,{\cal G}_{n}\mid\mathcal{S}_1,L_2({\mathbb P}_{\mathbf X})\}~\leq~ H(\mathcal{S}_1)\eta^{-c}\textcolor{black}{,} \label{bracket2} \end{eqnarray} with some function $H(\mathcal{S}_1)>0$. Here ${\cal G}_n$ is indexed by $\mbox{\boldmath $\gamma$}$ only and treats $\widehat{g}_n(\cdot,\mbox{\boldmath $\gamma$})$ as a nonrandom function. Assume $H(\mathcal{S}_1)=O_p(a_n)$, $\Delta(\mathcal{S}_1)=O_p(d_{n,2})$ and $M(\mathcal{S}_1)=O_p(d_{n,\infty})$ with some positive sequences $a_n$, $d_{n,2}$ and $d_{n,\infty}$ allowed to diverge, then we have\textcolor{black}{:} \begin{eqnarray*} \hbox{$\sup_{\bgamma\in\ct}$}|\mathbb{G}_m\{\widehat{g}_n({\mathbf X},\mbox{\boldmath $\gamma$})\}|~=~O_p(r_{n,m}), \end{eqnarray*} where $r_{n,m}=d_{n,2}\{\hbox{log}\,a_n+\hbox{log}\,(d_{n,2}^{-1})\}+m^{-1/2}d_{n,\infty}\{(\hbox{log}\,a_n)^2+(\hbox{log}\,d_{n,2})^2\}$. 
\end{lemma} \subsection{Proof of Lemma \ref{1v2}} For any $\delta\in(0,\Delta(\mathcal{S}_1)+c\,]$, we have that the bracketing integral \begin{eqnarray*} J_{[\,]}\{\delta,{\cal G}_n\mid\mathcal{S}_1,L_2({\mathbb P}_{\mathbf X})\}&~\equiv~&\hbox{$\int_0^\delta$}[1+\hbox{log}\,N_{[\,]}\{\eta,{\cal G}_n\mid\mathcal{S}_1,L_2({\mathbb P}_{\mathbf X})\}]^{1/2}d\eta \\ &~\leq~&\hbox{$\int_0^\delta$}[1+\hbox{log} \,N_{[\,]}\{\eta,{\cal G}_n\mid\mathcal{S}_1,L_2({\mathbb P}_{\mathbf X})\}]\,d\eta \\ &~\leq~&\hbox{$\int_0^\delta$}\{1+\hbox{log}\,H(\mathcal{S}_1)-c\,\hbox{log}\,\eta\}\, d\eta \\ &~=~&\delta\{1+\hbox{log}\,H(\mathcal{S}_1)\}+c\,(\delta-\delta\,\hbox{log}\,\delta), \end{eqnarray*} where the second step uses the fact that $x^{1/2}\leq x$ for $x\geq 1$ and the third step is due to (\ref{bracket2}). This, combined with Lemma 19.36 of \citet{van2000asymptotic}, implies\textcolor{black}{:} \begin{eqnarray*} &&\phantom{~=~}{\cal E}_{\mathbf X}[\hbox{$\sup_{\bgamma\in\ct}$}|\mathbb{G}_m\{\widehat{g}_n({\mathbf X},\mbox{\boldmath $\gamma$})\}|] \\ &&~\leq~ J_{[\,]}\{\delta,{\cal G}_n\mid\mathcal{S}_1,L_2({\mathbb P}_{\mathbf X})\}+[J_{[\,]}\{\delta,{\cal G}_n\mid\mathcal{S}_1,L_2({\mathbb P}_{\mathbf X})\}]^2M(\mathcal{S}_1)\delta^{-2}m^{-1/2} \\ &&~\leq~ \delta\{1+\hbox{log}\,H(\mathcal{S}_1)\}+c\,(\delta-\delta\,\hbox{log}\,\delta)+\{1+\hbox{log}\,H(\mathcal{S}_1)+c\,(1-\hbox{log}\,\delta)\}^2M(\mathcal{S}_1)m^{-1/2} \end{eqnarray*} for any $\delta\in(\Delta(\mathcal{S}_1),\Delta(\mathcal{S}_1)+c\,]$. Therefore\textcolor{black}{,} \begin{eqnarray*} {\cal E}_{\mathbf X}[\hbox{$\sup_{\bgamma\in\ct}$}|\mathbb{G}_m\{\widehat{g}_n({\mathbf X},\mbox{\boldmath $\gamma$})\}|] &~\leq~& \Delta(\mathcal{S}_1)\{1+\hbox{log}\,H(\mathcal{S}_1)\}+c\,\{\Delta(\mathcal{S}_1)-\Delta(\mathcal{S}_1)\,\hbox{log}\,\Delta(\mathcal{S}_1)\}+\\ &&~~[1+\hbox{log}\,H(\mathcal{S}_1)+c\,\{1-\hbox{log}\,\Delta(\mathcal{S}_1)\}]^2M(\mathcal{S}_1)m^{-1/2}. 
\end{eqnarray*} Since the right hand side in the above is $O_p(r_{n,m})$, it gives that \begin{eqnarray} {\cal E}_{\mathbf X}[\hbox{$\sup_{\bgamma\in\ct}$}|\mathbb{G}_m\{\widehat{g}_n({\mathbf X},\mbox{\boldmath $\gamma$})\}|] ~=~O_p(r_{n,m}). \label{ex} \end{eqnarray} Then, for any positive sequence $t_n\to\infty$, we have \begin{eqnarray*} &&\phantom{=}{\mathbb P}_{\mathcal{S}_2}[\hbox{$\sup_{\bgamma\in\ct}$}|\mathbb{G}_m\{\widehat{g}_n({\mathbf X},\mbox{\boldmath $\gamma$})\}|>t_n r_{n,m}\mid\mathcal{S}_1] \\ &&~\leq~ (t_n r_{n,m})^{-1}{\cal E}_{\mathbf X}[\hbox{$\sup_{\bgamma\in\ct}$}|\mathbb{G}_m\{\widehat{g}_n({\mathbf X},\mbox{\boldmath $\gamma$})\}|] ~=~o_p(1), \end{eqnarray*} where the first step holds by Markov's inequality and the last step is due to (\ref{ex}). This, combined with Lemma 6.1 of \citet{chernozhukov2018double}, gives that \begin{eqnarray*} {\mathbb P}[\hbox{$\sup_{\bgamma\in\ct}$}|\mathbb{G}_m\{\widehat{g}_n({\mathbf X},\mbox{\boldmath $\gamma$})\}|>t_n r_{n,m}]~\to~ 0, \end{eqnarray*} which completes the proof. \subsection{Proof of Theorem \ref{thate}} Denote $\E_{n,k}^*\{\widehat{g}({\mathbf Z})\}:=n_{\mathbb{K}}^{-1}\sum_{i\in{\cal I}_k}\widehat{g}({\mathbf Z}_i)$ for any random function $\widehat{g}(\cdot)$ $(k=1,\ldots,\mathbb{K})$. 
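Note that, assuming for simplicity that the folds $\{{\cal I}_k\}_{k=1}^{\mathbb{K}}$ are of equal size $n_{\mathbb{K}}=n/\mathbb{K}$, the full-sample average decomposes as:
\begin{eqnarray*}
{\cal E}_n\{\widehat{g}({\mathbf Z})\}~=~\mathbb{K}^{-1}\hbox{$\sum_{k=1}^\kK$}\E_{n,k}^*\{\widehat{g}({\mathbf Z})\},
\end{eqnarray*}
an identity used repeatedly below to split the terms of the decomposition of $\hat{\mu}_{\mbox{\tiny SS}}-\mu_0$ into fold-wise averages.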
Write \begin{eqnarray} \hat{\mu}_{\mbox{\tiny SS}}-\mu_0~=~S_1+S_2+S_3+S_4+S_5, \label{date} \end{eqnarray} where \begin{eqnarray} S_1&~:=~&{\cal E}_n[\{\pi^*({\mathbf X})\}^{-1}T\{Y- m^*({\mathbf X}) \}]+{\cal E}_{n+N}\{ m^*({\mathbf X}) \}-\mu_0, \label{s1}\\ S_2&~:=~&{\cal E}_n([\nu_{n,N}-\{\pi^*({\mathbf X})\}^{-1}T]\{\hat{m}_n({\mathbf X})- m^*({\mathbf X}) \})=\mathbb{K}^{-1}\hbox{$\sum_{k=1}^\kK$} S_{2,k} \nonumber\\ &~:=~&\mathbb{K}^{-1}\hbox{$\sum_{k=1}^\kK$}\E_{n,k}^*([\nu_{n,N}-\{\pi^*({\mathbf X})\}^{-1}T]\{\hat{m}_{n,k}({\mathbf X})- m^*({\mathbf X}) \}), \nonumber\\ S_3&~:=~&(1-\nu_{n,N}){\cal E}_{N}\{\hat{m}_n({\mathbf X})- m^*({\mathbf X}) \}=\mathbb{K}^{-1}\hbox{$\sum_{k=1}^\kK$} S_{3,k} \nonumber\\ &~:=~&\mathbb{K}^{-1}\hbox{$\sum_{k=1}^\kK$}[(1-\nu_{n,N}){\cal E}_N\{\hat{m}_{n,k}({\mathbf X})- m^*({\mathbf X}) \}], \nonumber\\ S_4&~:=~&{\cal E}_n[\hat{D}_N({\mathbf X}) T\{Y- m^*({\mathbf X}) \}],\ S_5:={\cal E}_n[\hat{D}_N({\mathbf X}) T\{ m^*({\mathbf X}) -\hat{m}_n({\mathbf X})\}]. \nonumber \end{eqnarray} We first handle $S_2$ and $S_3$. \textcolor{black}{To this end, w}e have\textcolor{black}{:} \begin{eqnarray*} &&\phantom{~=~}{\cal E}_{\mathbf Z}\{([\nu_{n,N}-\{\pi^*({\mathbf X})\}^{-1}T]\{\hat{m}_{n,k}({\mathbf X})- m^*({\mathbf X}) \})^2\} \\ &&~\leq~ c\,{\cal E}_{\mathbf X}[\{\hat{m}_{n,k}({\mathbf X})- m^*({\mathbf X}) \}^2] ~=~O_p(w_{n,2}^2), \end{eqnarray*} where the first step uses the boundedness of $\{\pi^*({\mathbf X})\}^{-1}$ from Assumption \ref{api4} and the last step is due to (\ref{wn2}) of Assumption \ref{ahmu}. It now follows that \begin{eqnarray*} \hbox{var}(S_{2,k}\mid \mathcal{L}_k^-)~=~O_p(n^{-1}w_{n,2}^2),\ \hbox{var}(S_{3,k}\mid \mathcal{L}_k^-)~=~O_p(N^{-1}w_{n,2}^2). 
\end{eqnarray*} Thus\textcolor{black}{,} Chebyshev's inequality gives that, for any positive sequence $t_n\to\infty$, \begin{eqnarray*} &&{\mathbb P}_{ \mathcal{L}_k}(|S_{2,k}-{\cal E}_{\mathbf Z}(S_{2,k})|\geq t_n n^{-1/2}w_{n,2}\mid \mathcal{L}_k^-)~\leq~ n(t_nw_{n,2})^{-2}\hbox{var}(S_{2,k}\mid \mathcal{L}_k^-) ~=~o_p(1), \\ &&{\mathbb P}_{ \mathcal{U}}(|S_{3,k}-{\cal E}_{\mathbf Z}(S_{3,k})|\geq t_n N^{-1/2}w_{n,2}\mid \mathcal{L}_k^-)~\leq~ N(t_nw_{n,2})^{-2}\hbox{var}(S_{3,k}\mid \mathcal{L}_k^-) ~=~o_p(1). \end{eqnarray*} Then\textcolor{black}{,} Lemma 6.1 of \citet{chernozhukov2018double} implies \begin{eqnarray*} |S_{2,k}-{\cal E}_{\mathbf Z}(S_{2,k})|~=~O_p(n^{-1/2}w_{n,2}),\ |S_{3,k}-{\cal E}_{\mathbf Z}(S_{3,k})|~=~O_p(N^{-1/2}w_{n,2}), \end{eqnarray*} which gives that \begin{eqnarray} |S_{2,k}+S_{3,k}-{\cal E}_{\mathbf Z}(S_{2,k}+S_{3,k})|~=~O_p(n^{-1/2}w_{n,2}). \label{s23e} \end{eqnarray} In addition, we know that \begin{eqnarray*} |{\cal E}_{\mathbf Z}(S_{2,k}+S_{3,k})|&~=~&|{\cal E}_{\mathbf Z}([1-\{\pi^*({\mathbf X})\}^{-1}T]\{\hat{m}_{n,k}({\mathbf X})- m^*({\mathbf X}) \})| \\ &~\leq~&c\,I\{\pi^*({\mathbf X})\neq\pi({\mathbf X})\}{\cal E}\{|\hat{m}_{n,k}({\mathbf X})- m^*({\mathbf X}) |\} \\ &~=~&I\{\pi^*({\mathbf X})\neq\pi({\mathbf X})\}O_p(w_{n,1}), \end{eqnarray*} where the second step uses the boundedness of $\{\pi^*({\mathbf X})\}^{-1}$ from Assumption \ref{api4} as well as the fact that \begin{eqnarray*} {\cal E}_{\mathbf Z}([1-\{\pi({\mathbf X})\}^{-1}T]\{\hat{m}_{n,k}({\mathbf X})- m^*({\mathbf X}) \})~=~0, \end{eqnarray*} and the last step holds by (\ref{wn1}) of Assumption \ref{ahmu}. 
This, combined with (\ref{s23e}), gives \begin{eqnarray*} |S_{2,k}+S_{3,k}|~=~O_p(n^{-1/2}w_{n,2})+I\{\pi^*({\mathbf X})\neq\pi({\mathbf X})\}O_p(w_{n,1}), \end{eqnarray*} which implies\textcolor{black}{:} \begin{eqnarray} |S_2+S_3|&~\leq~&\mathbb{K}^{-1}\hbox{$\sum_{k=1}^\kK$}|S_{2,k}+S_{3,k}| \nonumber\\ &~=~&O_p(n^{-1/2}w_{n,2})+I\{\pi^*({\mathbf X})\neq\pi({\mathbf X})\}O_p(w_{n,1}). \label{s23} \end{eqnarray} Next, we control $S_4$. We know that \begin{eqnarray*} {\cal E}_{\mathbf Z}([\hat{D}_N({\mathbf X})T\{Y- m^*({\mathbf X}) \}]^2)~\leq~{\cal E}_{\mathbf Z}([\hat{D}_N({\mathbf X})\{Y- m^*({\mathbf X}) \}]^2)~=~O_p(b_N^2), \end{eqnarray*} where the last step holds by (\ref{sn4}) of Assumption \ref{api4}. This implies\textcolor{black}{:} \begin{eqnarray*} \hbox{var}(S_{4}\mid \mathcal{U})~=~O_p(n^{-1}b_N^2). \end{eqnarray*} Thus Chebyshev's inequality gives that, for any positive sequence $t_n\to\infty$, \begin{eqnarray*} {\mathbb P}_ \mathcal{L}(|S_{4}-{\cal E}_{\mathbf Z}(S_4)|\geq t_n n^{-1/2}b_N\mid \mathcal{U})~\leq~ n(t_nb_N)^{-2}\hbox{var}(S_{4}\mid \mathcal{U}) ~=~o_p(1). \end{eqnarray*} Then, by Lemma 6.1 of \citet{chernozhukov2018double}, we have \begin{eqnarray} |S_{4}-{\cal E}_{\mathbf Z}(S_4)|~=~O_p(n^{-1/2}b_N). \label{s41} \end{eqnarray} In addition, if $ m^*({\mathbf X}) = m({\mathbf X}) $, then \begin{eqnarray*} {\cal E}_{\mathbf Z}(S_4)~=~{\cal E}({\cal E}[\hat{D}_N({\mathbf X}) T\{Y- m({\mathbf X}) \}\mid \mathcal{U},{\mathbf X}]\mid \mathcal{U})~=~0. \end{eqnarray*} Otherwise, we have \begin{eqnarray*} |{\cal E}_{\mathbf Z}(S_{4})|~\leq~({\cal E}_{\mathbf X}[\{\hat{D}_N({\mathbf X})\}^2]{\cal E}[\{Y- m^*({\mathbf X}) \}^2])^{1/2} ~=~O_p(s_N), \end{eqnarray*} where the first step uses H\"older's inequality and the last step is due to (\ref{sn2}) of Assumption \ref{api4}. Therefore $|{\cal E}_{\mathbf Z}(S_4)|=I\{ m^*({\mathbf X}) \neq m({\mathbf X}) \}O_p(s_N)$. 
This, combined with (\ref{s41}), implies\textcolor{black}{:} \begin{eqnarray} |S_4|~=~O_p(n^{-1/2}b_N)+I\{ m({\mathbf X}) \neq m^*({\mathbf X}) \}O_p(s_N). \label{s4} \end{eqnarray} Now, we consider $S_5$. Markov's inequality gives that, for any positive sequence $t_n\to\infty$, \begin{eqnarray} &&\phantom{~=~}{\mathbb P}_ \mathcal{L}(\E_{n,k}^* [\{\hat{D}_N({\mathbf X})\}^2]\geq t_ns_N^2\mid \mathcal{U})~\leq~ t_n^{-1}s_N^{-2}{\cal E}_{\mathbf X} [\{\hat{D}_N({\mathbf X})\}^2]~=~o_p(1), \label{pdn}\\ &&\phantom{~=~}{\mathbb P}_{ \mathcal{L}_k}(\E_{n,k}^*[\{ m^*({\mathbf X}) -\hat{m}_{n,k}({\mathbf X})\}^2]\geq t_nw_{n,2}^2\mid \mathcal{L}_k^-) \nonumber\\ &&~\leq~ t_n^{-1}w_{n,2}^{-2}{\cal E}_{\mathbf X}[\{ m^*({\mathbf X}) -\hat{m}_{n,k}({\mathbf X})\}^2]=o_p(1)\quad (k=1,\ldots,\mathbb{K}), \label{pmun} \end{eqnarray} where (\ref{pdn}) uses (\ref{sn2}) of Assumption \ref{api4} and (\ref{pmun}) holds by (\ref{wn2}) of Assumption \ref{ahmu}. Then, by Lemma 6.1 of \citet{chernozhukov2018double}, we have \begin{eqnarray} &&\E_{n,k}^* [\{\hat{D}_N({\mathbf X})\}^2]~=~O_p(s_N^2), \label{sn}\\ &&\E_{n,k}^* [\{ m^*({\mathbf X}) -\hat{m}_{n,k}({\mathbf X})\}^2]~=~O_p(w_{n,2}^2)\quad (k=1,\ldots,\mathbb{K}). \label{en} \end{eqnarray} Hence\textcolor{black}{,} H\"older's inequality implies\textcolor{black}{:} \begin{eqnarray} |S_5|&~\leq~&\mathbb{K}^{-1}\hbox{$\sum_{k=1}^\kK$}\E_{n,k}^*[|\hat{D}_N({\mathbf X})\{ m^*({\mathbf X}) -\hat{m}_{n,k}({\mathbf X})\}|] \nonumber\\ &~\leq~&\mathbb{K}^{-1}\hbox{$\sum_{k=1}^\kK$}(\E_{n,k}^*[\{\hat{D}_N({\mathbf X})\}^2]\E_{n,k}^*[\{ m^*({\mathbf X}) -\hat{m}_{n,k}({\mathbf X})\}^2])^{1/2} ~=~O_p(s_N\,w_{n,2}), \label{s5} \end{eqnarray} where the last step holds by (\ref{sn}) and (\ref{en}). Summing up, the equations (\ref{date}), (\ref{s1}), (\ref{s23}), (\ref{s4}) and (\ref{s5}) together yield the desired result. 
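In explicit form, combining (\ref{date}) and (\ref{s1}) with the bounds (\ref{s23}), (\ref{s4}) and (\ref{s5}) gives:
\begin{eqnarray*}
\hat{\mu}_{\mbox{\tiny SS}}-\mu_0&~=~&S_1+O_p(n^{-1/2}w_{n,2}+n^{-1/2}b_N+s_N\,w_{n,2})+\\
&&I\{\pi^*({\mathbf X})\neq\pi({\mathbf X})\}O_p(w_{n,1})+I\{ m^*({\mathbf X}) \neq m({\mathbf X}) \}O_p(s_N),
\end{eqnarray*}
with $S_1$ as defined in (\ref{s1}).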
\subsection{Proof of Corollary \ref{corate}} Since $\nu=0$, we have \begin{eqnarray*} {\cal E}_{n+N}\{ m^*({\mathbf X}) \}~=~{\cal E}\{ m^*({\mathbf X}) \}+O_p\{(n+N)^{-1/2}\}~=~{\cal E}\{ m^*({\mathbf X}) \}+ o_p(n^{-1/2}) \end{eqnarray*} by the central limit theorem. Then the stochastic expansion directly follows from Theorem \ref{thate}, and the asymptotic normality is obvious. \subsection{Proof of Corollary \ref{coratesup}} With ${\cal E}_{n+N}\{\hat{m}_n({\mathbf X})\}$ substituted by ${\cal E}_n\{\hat{m}_n({\mathbf X})\}$, the proof of Theorem \ref{thate} directly gives the stochastic expansion, and the asymptotic normality follows. Then\textcolor{black}{,} we have \begin{eqnarray*} &&\phantom{~=~}\hbox{cov}[\{\pi({\mathbf X})\}^{-1}T\{Y- m^*({\mathbf X}) \}, m^*({\mathbf X}) ] \\ &&~=~{\cal E}\{ m^*({\mathbf X}) Y\}-{\cal E}[\{ m^*({\mathbf X}) \}^2]-{\cal E}\{Y- m^*({\mathbf X}) \}{\cal E}\{ m^*({\mathbf X}) \} \\ &&~=~{\cal E}\{ m^*({\mathbf X}) Y\}-\hbox{var}\{ m^*({\mathbf X}) \}-\mu_0{\cal E}\{ m^*({\mathbf X}) \}, \end{eqnarray*} using $\mu_0={\cal E}(Y)$. Therefore\textcolor{black}{,} \begin{eqnarray*} &&\lambda_{\mbox{\tiny SUP}}^2~=~\hbox{var}[\{\pi({\mathbf X})\}^{-1}T\{Y- m^*({\mathbf X}) \}]+\hbox{var}\{ m^*({\mathbf X}) \}+ \\ &&\phantom{\lambda_{\mbox{\tiny SUP}}^2=~~}2\,\hbox{cov}[\{\pi({\mathbf X})\}^{-1}T\{Y- m^*({\mathbf X}) \}, m^*({\mathbf X}) ] \\ &&\phantom{\lambda_{\mbox{\tiny SUP}}^2}~=~\hbox{var}[\{\pi({\mathbf X})\}^{-1}T\{Y- m^*({\mathbf X}) \}]-\hbox{var}\{ m^*({\mathbf X}) \}+2\,{\cal E}\{ m^*({\mathbf X}) (Y-\mu_0)\}. \end{eqnarray*} \subsection{Proof of Corollary \ref{corate_dagger}} The stochastic expansion can be obtained from the proof of Theorem \ref{thate} with $\hat{\pi}_N(\cdot)$ replaced by $\hat{\pi}_n(\cdot)$. The asymptotic normality directly follows. 
\subsection{Proof of Theorem \ref{thqte}}\label{proof_theorem_qte} Write \begin{eqnarray} \hvt_{\mbox{\tiny SS}}-{\boldsymbol\theta}~=~\{T_1(\hvt_{\mbox{\tiny INIT}})-{\boldsymbol\theta}\}+\{\hat{f}_n(\hvt_{\mbox{\tiny INIT}})\}^{-1}\{T_2(\hvt_{\mbox{\tiny INIT}})+T_3(\hvt_{\mbox{\tiny INIT}})+T_4(\hvt_{\mbox{\tiny INIT}})\}, \label{dde} \end{eqnarray} where \begin{eqnarray*} T_1(\theta)&~:=~&\theta+\{\hat{f}_n(\theta)\}^{-1}({\cal E}_n[\{\pi^*({\mathbf X})\}^{-1}T\{\phi^*({\mathbf X},\theta)-\psi(Y,\theta)\}]-{\cal E}_{n+N}\{\phi^*({\mathbf X},\theta)\}), \\ T_2(\theta)&~:=~&{\cal E}_n([\{\pi^*({\mathbf X})\}^{-1}T-\nu_{n,N}]\{\hat{\phi}_n({\mathbf X},\theta)-\phi^*({\mathbf X},\theta)\})-\\ &&~~(1-\nu_{n,N}){\cal E}_{N}\{\hat{\phi}_n({\mathbf X},\theta)-\phi^*({\mathbf X},\theta)\}, \\ T_3(\theta)&~:=~&{\cal E}_n[\hat{D}_N({\mathbf X}) T\{\phi^*({\mathbf X},\theta)-\psi(Y,\theta)\}],\\ T_4(\theta)&~:=~&{\cal E}_n[\hat{D}_N({\mathbf X}) T\{\hat{\phi}_n({\mathbf X},\theta)-\phi^*({\mathbf X},\theta)\}]. \end{eqnarray*} First, the conditions (\ref{hvti}) and (\ref{hf}) of Assumption \ref{ainit} give \begin{eqnarray} &&{\mathbb P}\{\hvt_{\mbox{\tiny INIT}}\in\mb(\vt,{\varepsilon})\}~\to~ 1, \label{belong}\\ &&\hat{L}_n~:=~\{\hat{f}_n(\hvt_{\mbox{\tiny INIT}})\}^{-1}-\{f({\boldsymbol\theta})\}^{-1}~=~O_p(v_n)~=~o_p(1). \label{hl} \end{eqnarray} Also, we have \begin{eqnarray} \hat{f}_n(\hvt_{\mbox{\tiny INIT}})~=~O_p(1)\textcolor{black}{,} \label{hfo} \end{eqnarray} due to (\ref{hf}) of Assumption \ref{ainit} and the fact that $f({\boldsymbol\theta})>0$ from Assumption \ref{adensity}. Now\textcolor{black}{,} we consider $T_1(\hvt_{\mbox{\tiny INIT}})$. 
According to (\ref{hvti}) of Assumption \ref{ainit} and (\ref{unipi1}) of Assumption \ref{abound}, we have \begin{eqnarray*} n^{-1/2}\mathbb{G}_n[\{\pi^*({\mathbf X})\}^{-1}T\phi^*({\mathbf X},\hvt_{\mbox{\tiny INIT}})]~=~n^{-1/2}\mathbb{G}_n[\{\pi^*({\mathbf X})\}^{-1}T\phi^*({\mathbf X},{\boldsymbol\theta})]+o_p(n^{-1/2}), \end{eqnarray*} which implies that \begin{eqnarray} &&\phantom{~=~}{\cal E}_n[\{\pi^*({\mathbf X})\}^{-1}T\phi^*({\mathbf X},\hvt_{\mbox{\tiny INIT}})]\nonumber\\ &&~=~{\cal E}_{\mathbf Z}[\{\pi^*({\mathbf X})\}^{-1}T\phi^*({\mathbf X},\hvt_{\mbox{\tiny INIT}})] +{\cal E}_n[\{\pi^*({\mathbf X})\}^{-1}T\phi^*({\mathbf X},{\boldsymbol\theta})]- \nonumber\\ &&\phantom{~=~}{\cal E}_{\mathbf Z}[\{\pi^*({\mathbf X})\}^{-1}T\phi^*({\mathbf X},{\boldsymbol\theta})]+o_p(n^{-1/2}). \label{t11} \end{eqnarray} Since $\{\psi(Y,\theta):\theta\in\mb(\vt,{\varepsilon})\}$ is a ${\mathbb P}$-Donsker class by Theorem 19.3 of \citet{van2000asymptotic}, and since $\{\pi^*({\mathbf X})\}^{-1}T$ and $\psi(Y,\theta)$ are bounded, the permanence properties of ${\mathbb P}$-Donsker classes \citep[Theorem 2.10.6]{van1996weak} give that $ {\cal D}^*=\{\{\pi^*({\mathbf X})\}^{-1}T\psi(Y,\theta):\theta\in\mb(\vt,{\varepsilon})\}$ is also ${\mathbb P}$-Donsker. Moreover, the convergence (\ref{belong}) implies that $\{\pi^*({\mathbf X})\}^{-1}T\psi(Y,\hvt_{\mbox{\tiny INIT}})$ is in $ {\cal D}^*$ with probability tending to one. 
In addition, we have \begin{eqnarray*} &&\phantom{~=~}{\cal E}_{\mathbf Z}[\{\pi^*({\mathbf X})\}^{-2}T\{\psi(Y,\hvt_{\mbox{\tiny INIT}})-\psi(Y,{\boldsymbol\theta})\}^2] \\ &&~\leq~ c\, {\cal E}_{\bf Z}[\{I(Y<\hvt_{\mbox{\tiny INIT}})-I(Y<{\boldsymbol\theta})\}^2] ~=~c\,[F(\hvt_{\mbox{\tiny INIT}})+F({\boldsymbol\theta})-2F\{\min(\hvt_{\mbox{\tiny INIT}},{\boldsymbol\theta})\}]~\to~ 0 \end{eqnarray*} in probability\textcolor{black}{,} because of the boundedness of $\{\pi^*({\mathbf X})\}^{-2}T$, the continuity of $F(\cdot)$ from Assumption \ref{adensity} and the consistency of $\hvt_{\mbox{\tiny INIT}}$ from Assumption \ref{ainit}. Hence\textcolor{black}{,} Lemma 19.24 of \citet{van2000asymptotic} gives that \begin{eqnarray*} \mathbb{G}_n[\{\pi^*({\mathbf X})\}^{-1}T\{\psi(Y,\hvt_{\mbox{\tiny INIT}})-\psi(Y,{\boldsymbol\theta})\}]~=~o_p(1), \end{eqnarray*} which implies\textcolor{black}{:} \begin{eqnarray} {\cal E}_n[\{\pi^*({\mathbf X})\}^{-1}T\psi(Y,\hvt_{\mbox{\tiny INIT}})]&~=~&{\cal E}_{\mathbf Z}[\{\pi^*({\mathbf X})\}^{-1}T\psi(Y,\hvt_{\mbox{\tiny INIT}})] +{\cal E}_n[\{\pi^*({\mathbf X})\}^{-1}T\psi(Y,{\boldsymbol\theta})]- \nonumber\\ &&{\cal E}_{\mathbf Z}[\{\pi^*({\mathbf X})\}^{-1}T\psi(Y,{\boldsymbol\theta})]+o_p(n^{-1/2}).\label{t12} \end{eqnarray} Further, the condition (\ref{unipi2}) gives \begin{eqnarray} {\cal E}_{n+N}\{\phi^*({\mathbf X},\hvt_{\mbox{\tiny INIT}})\}&~=~&{\cal E}_{\mathbf X}\{\phi^*({\mathbf X},\hvt_{\mbox{\tiny INIT}})\}+{\cal E}_{n+N}\{\phi^*({\mathbf X},{\boldsymbol\theta})\}- \nonumber\\ &&{\cal E}_{\mathbf X}\{\phi^*({\mathbf X},{\boldsymbol\theta})\}+o_p(n^{-1/2}). 
\label{t13} \end{eqnarray} Since either $\phi^*(\cdot,\cdot)=\phi(\cdot,\cdot)$ or $\pi^*(\cdot)=\pi(\cdot)$, we know that \begin{eqnarray} {\cal E}_{\mathbf Z}[\{\pi^*({\mathbf X})\}^{-1}T\{\phi^*({\mathbf X},{\boldsymbol\theta})-\psi(Y,{\boldsymbol\theta})\}]-{\cal E}_{\mathbf X}\{\phi^*({\mathbf X},{\boldsymbol\theta})\}~=~0, \label{t14} \end{eqnarray} and that \begin{eqnarray} &&\phantom{~=~}{\cal E}_{\mathbf Z}[\{\pi^*({\mathbf X})\}^{-1}T\{\phi^*({\mathbf X},\hvt_{\mbox{\tiny INIT}})-\psi(Y,\hvt_{\mbox{\tiny INIT}})\}]-{\cal E}_{\mathbf X}\{\phi^*({\mathbf X},\hvt_{\mbox{\tiny INIT}})\} \nonumber\\ &&~=~ -{\cal E}_{\mathbf Z}\{\psi(Y,\hvt_{\mbox{\tiny INIT}})\}. \label{t15} \end{eqnarray} In addition, Taylor's expansion gives that \begin{eqnarray} {\cal E}_{\bf Z}\{\psi(Y,\hvt_{\mbox{\tiny INIT}})\}&~=~&f({\boldsymbol\theta})(\hvt_{\mbox{\tiny INIT}}-{\boldsymbol\theta})+O_p(|\hvt_{\mbox{\tiny INIT}}-{\boldsymbol\theta}|^2)\nonumber\\ &~=~&f({\boldsymbol\theta})(\hvt_{\mbox{\tiny INIT}}-{\boldsymbol\theta})+O_p(u_n^2) \label{df12} \\ &~=~&O_p(u_n),\label{df122} \end{eqnarray} where the residual term in the first step is due to (\ref{belong}) and the fact that $f(\cdot)$ has a bounded derivative in $\mb(\vt,{\varepsilon})$ from Assumption \ref{adensity}, the second step uses (\ref{hvti}) in Assumption \ref{ainit} and the last step holds by the fact that $u_n=o(1)$ from Assumption \ref{ainit}. 
Therefore\textcolor{black}{,} \begin{eqnarray} {\cal E}_n\{\omega_{n,N}({\mathbf Z},\hvt_{\mbox{\tiny INIT}})\}&~=~&{\cal E}_n\{\omega_{n,N}({\mathbf Z},{\boldsymbol\theta})\}- {\cal E}_{\mathbf Z}\{\psi(Y,\hvt_{\mbox{\tiny INIT}})\}+o_p(n^{-1/2}) \nonumber\\ &~=~&{\cal E}_n\{\omega_{n,N}({\mathbf Z},{\boldsymbol\theta})\}-f({\boldsymbol\theta})(\hvt_{\mbox{\tiny INIT}}-{\boldsymbol\theta})+O_p(u_n^2)+o_p(n^{-1/2}) \label{taylor}\\ &~=~&{\cal E}_n\{\omega_{n,N}({\mathbf Z},{\boldsymbol\theta})\}+O_p(u_n)+o_p(n^{-1/2}), \nonumber \end{eqnarray} where the first step uses (\ref{t11})--(\ref{t15}), the second step is due to (\ref{df12}) and the last step holds by (\ref{df122}). It now follows that \begin{eqnarray} \hat{L}_n{\cal E}_n\{\omega_{n,N}({\mathbf Z},\hvt_{\mbox{\tiny INIT}})\}~=~O_p(u_nv_n)+o_p(n^{-1/2})\textcolor{black}{,}\label{diffl} \end{eqnarray} from (\ref{hl}) and the fact that ${\cal E}_n\{\omega_{n,N}({\mathbf Z},{\boldsymbol\theta})\}=O_p(n^{-1/2})$ from the central limit theorem. 
Hence\textcolor{black}{,} we have \begin{eqnarray} &&T_1(\hvt_{\mbox{\tiny INIT}})-{\boldsymbol\theta}~=~\hvt_{\mbox{\tiny INIT}}-{\boldsymbol\theta}+\{\hat{f}_n(\hvt_{\mbox{\tiny INIT}})\}^{-1}{\cal E}_n\{\omega_{n,N}({\mathbf Z},\hvt_{\mbox{\tiny INIT}})\}\nonumber\\ &&\phantom{T_1(\hvt_{\mbox{\tiny INIT}})-{\boldsymbol\theta}}~=~\hvt_{\mbox{\tiny INIT}}-{\boldsymbol\theta}+\{f({\boldsymbol\theta})\}^{-1}{\cal E}_n\{\omega_{n,N}({\mathbf Z},\hvt_{\mbox{\tiny INIT}})\}+O_p(u_nv_n)+o_p(n^{-1/2}) \nonumber\\ &&\phantom{T_1(\hvt_{\mbox{\tiny INIT}})-{\boldsymbol\theta}}~=~\hvt_{\mbox{\tiny INIT}}-{\boldsymbol\theta}+\{f({\boldsymbol\theta})\}^{-1}[{\cal E}_n\{\omega_{n,N}({\mathbf Z},{\boldsymbol\theta})\}-f({\boldsymbol\theta})(\hvt_{\mbox{\tiny INIT}}-{\boldsymbol\theta})]+ \nonumber\\ &&\phantom{T_1(\hvt_{\mbox{\tiny INIT}})-{\boldsymbol\theta}=}O_p(u_n^2+u_nv_n)+o_p(n^{-1/2}) \nonumber\\ &&\phantom{T_1(\hvt_{\mbox{\tiny INIT}})-{\boldsymbol\theta}}~=~\{f({\boldsymbol\theta})\}^{-1}{\cal E}_n\{\omega_{n,N}({\mathbf Z},{\boldsymbol\theta})\}+O_p(u_nv_n+u_n^2)+o_p(n^{-1/2}), \label{t1} \end{eqnarray} where the second step uses (\ref{diffl}) and the third step is due to (\ref{taylor}). Next, we control $T_2(\hvt_{\mbox{\tiny INIT}})$. Denote \begin{eqnarray*} \mathcal{P}_{n,k}^*~:=~\{[\{\pi^*({\mathbf X})\}^{-1}T-\nu_{n,N}]\hat{\psi}_{n,k}({\mathbf X},\theta):\theta\in\mb(\vt,{\varepsilon})\}. 
\end{eqnarray*} Due to the boundedness of $[\{\pi^*({\mathbf X})\}^{-1}T-\nu_{n,N}]$ from Assumption \ref{api}, we have \begin{eqnarray} &&\phantom{=}N_{[\,]} \{c_1\,\eta,\mathcal{P}_{n,k}^*\mid \mathcal{L},L_2({\mathbb P}_{\mathbf X})\}~\leq~ N_{[\,]} \{\eta,\mathcal{P}_{n,k}\mid \mathcal{L},L_2({\mathbb P}_{\mathbf X})\}~\leq~ H( \mathcal{L})\eta^{-c}, \label{bracket}\\ &&\phantom{~=~}\hbox{$\sup_{\x\in\mx,\,\theta\in\mbtv}$}|[\{\pi^*({\mathbf X})\}^{-1}T-\nu_{n,N}]\hat{\psi}_{n,k}({\mathbf X},\theta)| \nonumber\\ &&~\leq~ c\,\hbox{$\sup_{\x\in\mx,\,\theta\in\mbtv}$}|\hat{\psi}_{n,k}({\mathbf X},\theta)| =O_p(d_{n,\infty}), \label{bn}\\ &&\phantom{~~}(\hbox{$\sup_{\theta\in\mbtv}$}{\cal E}_{\mathbf Z}[\{[\{\pi^*({\mathbf X})\}^{-1}T-\nu_{n,N}]\hat{\psi}_{n,k}({\mathbf X},\theta)\}^2])^{1/2} \nonumber\\ &&~\leq~ c\,\Delta_{k}( \mathcal{L})=O_p(d_{n,2}) \quad (k=1,\ldots,\mathbb{K})\textcolor{black}{,} \label{dn} \end{eqnarray} from Assumption \ref{aest}. Then\textcolor{black}{,} (\ref{bracket}) implies\textcolor{black}{:} \begin{eqnarray} N_{[\,]} \{\eta,\mathcal{P}_{n,k}^*\mid \mathcal{L},L_2({\mathbb P}_{\mathbf X})\}~\leq~ c_1^{c_2} H( \mathcal{L})\eta^{-c_2}. \label{an} \end{eqnarray} Since $c_1^{c_2} H( \mathcal{L})=O_p(a_n)$ from Assumption \ref{aest}, combining (\ref{bn})--(\ref{an}) and applying Lemma \ref{1v2} yield that \begin{eqnarray} \hbox{$\sup_{\theta\in\mbtv}$}|\mathbb{G}_{n_\mathbb{K},k}([\{\pi^*({\mathbf X})\}^{-1}T-\nu_{n,N}]\hat{\psi}_{n,k}({\mathbf X},\theta))|~=~ O_p(r_n)\textcolor{black}{,} \label{mbgpi} \end{eqnarray} with the notation \begin{eqnarray*} \mathbb{G}_{n_\mathbb{K},k}\{\widehat{g}({\mathbf Z})\}~:=~n_\mathbb{K}^{1/2}[n_\mathbb{K}^{-1}\hbox{$\sum_{i\in{\cal I}_k}$}\widehat{g}({\mathbf Z}_i)-{\cal E}_{\mathbf X}\{\widehat{g}({\mathbf Z})\}]\quad (k=1,\ldots,\mathbb{K})\textcolor{black}{,} \end{eqnarray*} for any random function $\widehat{g}(\cdot)$.
In addition, we have \begin{eqnarray} &&\phantom{=}\hbox{$\sup_{\theta\in\mbtv}$}|{\cal E}_{\mathbf Z}([\{\pi^*({\mathbf X})\}^{-1}T-1]\hat{\psi}_{n,k}({\mathbf X},\theta))| \nonumber\\ &&~\leq~ c\, I\{\pi^*({\mathbf X})\neq\pi({\mathbf X})\} \hbox{$\sup_{\theta\in\mbtv}$}{\cal E}_{\mathbf Z}\{|\hat{\psi}_{n,k}({\mathbf X},\theta)|\}\nonumber\\ &&~=~I\{\pi^*({\mathbf X})\neq\pi({\mathbf X})\}O_p(d_{n,1}), \label{idn} \end{eqnarray} where the first step holds by the boundedness of $\{\pi^*({\mathbf X})\}^{-1}$ from Assumption \ref{api} and the fact that \begin{eqnarray*} {\cal E}_{\mathbf Z}([\{\pi({\mathbf X})\}^{-1}T-1]\hat{\psi}_{n,k}({\mathbf X},\theta))~=~0, \end{eqnarray*} and the last step is due to Assumption \ref{aest}. Moreover, under Assumption \ref{aest}, Lemma \ref{1v2} implies that \begin{eqnarray} &&\phantom{~=~}\hbox{$\sup_{\theta\in\mbtv}$}|\mathbb{G}_N\{\hat{\psi}_{n,k}({\mathbf X},\theta)\}| \nonumber\\ &&~=~O_p[d_{n,2}\{\hbox{log}\,a_n+\hbox{log}\,(d_{n,2}^{-1})\}+N^{-1/2}d_{n,\infty}\{(\hbox{log}\,a_n)^2+(\hbox{log}\,d_{n,2})^2\}] \nonumber\\ &&~=~O_p(r_n)\quad (k=1,\ldots,\mathbb{K}). 
\label{rr2} \end{eqnarray} Considering (\ref{mbgpi})--(\ref{rr2}), we know that \begin{eqnarray*} &&T_2(\hvt_{\mbox{\tiny INIT}})~=~\mathbb{K}^{-1}\hbox{$\sum_{k=1}^\kK$} \{n_\mathbb{K}^{-1/2}\mathbb{G}_{n_\mathbb{K},k}([\{\pi^*({\mathbf X})\}^{-1}T-\nu_{n,N}]\hat{\psi}_{n,k}({\mathbf X},\hvt_{\mbox{\tiny INIT}}))- \\ &&\phantom{T_2(\hvt_{\mbox{\tiny INIT}})=\mathbb{K}^{-1}\hbox{$\sum_{k=1}^\kK$}\{}N^{-1/2}(1-\nu_{n,N})\mathbb{G}_N\{\hat{\psi}_{n,k}({\mathbf X},\hvt_{\mbox{\tiny INIT}})\}+ \\ &&\phantom{T_2(\hvt_{\mbox{\tiny INIT}})=\mathbb{K}^{-1}\hbox{$\sum_{k=1}^\kK$}\{}{\cal E}_{\mathbf Z}([\{\pi^*({\mathbf X})\}^{-1}T-1]\hat{\psi}_{n,k}({\mathbf X},\hvt_{\mbox{\tiny INIT}}))\} \\ &&\phantom{T_2(\hvt_{\mbox{\tiny INIT}})}~=~O_p(n^{-1/2}r_n)+I\{\pi^*({\mathbf X})\neq\pi({\mathbf X})\}O_p(d_{n,1}), \end{eqnarray*} which, combined with (\ref{hfo}), implies that \begin{eqnarray} \{\hat{f}_n(\hvt_{\mbox{\tiny INIT}})\}^{-1}T_2(\hvt_{\mbox{\tiny INIT}})~=~O_p(n^{-1/2}r_n)+I\{\pi^*({\mathbf X})\neq\pi({\mathbf X})\}O_p(d_{n,2}). \label{t2} \end{eqnarray} Further, we \textcolor{black}{now} handle $T_3(\hvt_{\mbox{\tiny INIT}})$. Let $\mathcal{H}_{N}:= \{\hat{D}_N({\mathbf X})T\phi^*({\mathbf X},\theta):\theta\in\mb(\vt,{\varepsilon})\}$ and recall $\mathcal{M}= \{\phi^*({\mathbf X},\theta):\theta\in\mb(\vt,{\varepsilon})\}$. 
We have \begin{eqnarray} &&\phantom{=}N_{[\,]} \{\hbox{$\sup_{\x\in\mx}$}|\hat{D}_N({\mathbf x})|\eta,\mathcal{H}_{N}\mid \mathcal{U},L_2({\mathbb P}_{\mathbf X})\} ~\leq~ N_{[\,]} \{\eta,\mathcal{M},L_2({\mathbb P}_{\mathbf X})\}\leq c_1\,\eta^{-c_2}, \label{bracketh}\\ &&\phantom{=}\hbox{$\sup_{\x\in\mx,\,\theta\in\mbtv}$}|\hat{D}_N({\mathbf X})T\phi^*({\mathbf X},\theta)|~=~O_p(1), \label{bnh}\\ &&\phantom{=}(\hbox{$\sup_{\theta\in\mbtv}$}{\cal E}_{\mathbf Z}[\{\hat{D}_N({\mathbf X})T\phi^*({\mathbf X},\theta)\}^2])^{1/2}~=~O_p(s_N), \label{dnh} \end{eqnarray} where (\ref{bracketh}) uses (\ref{bmm}) of Assumption \ref{abound}, (\ref{bnh}) holds by (\ref{dsup}) of Assumption \ref{api} and the boundedness of $\phi^*({\mathbf X},\theta)$ from Assumption \ref{abound}, and (\ref{dnh}) is due to (\ref{d2}) of Assumption \ref{api} and the boundedness of $\phi^*({\mathbf X},\theta)$ from Assumption \ref{abound}. Then\textcolor{black}{,} (\ref{bracketh}) gives \begin{eqnarray} N_{[\,]} \{\eta,\mathcal{H}_{N}\mid \mathcal{U},L_2({\mathbb P}_{\mathbf X})\} ~\leq~ c_1\,\{\hbox{$\sup_{\x\in\mx}$}|\hat{D}_N({\mathbf x})|\}^{c_2}\eta^{-c_2}. \label{anh} \end{eqnarray} Since $c_1\,\{\hbox{$\sup_{\x\in\mx}$}|\hat{D}_N({\mathbf x})|\}^{c_2}=O_p(1)$ from Assumption \ref{api}, combining (\ref{bnh})--(\ref{anh}) and applying Lemma \ref{1v2} yield that \begin{eqnarray*} \hbox{$\sup_{\theta\in\mbtv}$}|\mathbb{G}_{n}\{\hat{D}_N({\mathbf X})T\phi^*({\mathbf X},\theta)\}|~=~O_p(z_{n,N}), \end{eqnarray*} which gives that \begin{eqnarray} |{\cal E}_n\{\hat{D}_N({\mathbf X})T\phi^*({\mathbf X},\hvt_{\mbox{\tiny INIT}})\}-{\cal E}_{\mathbf Z}\{\hat{D}_N({\mathbf X})T\phi^*({\mathbf X},\hvt_{\mbox{\tiny INIT}})\}|~=~O_p(n^{-1/2}z_{n,N}).
\label{t311} \end{eqnarray} Analogously, by Example 19.6 of \citet{van2000asymptotic} and the boundedness of $\psi(Y,\theta)$, we know that \begin{eqnarray} |{\cal E}_n\{\hat{D}_N({\mathbf X})T\psi(Y,\hvt_{\mbox{\tiny INIT}})\}-{\cal E}_{\mathbf Z}\{\hat{D}_N({\mathbf X})T\psi(Y,\hvt_{\mbox{\tiny INIT}})\}|~=~O_p(n^{-1/2}z_{n,N}). \label{t312} \end{eqnarray} Combining (\ref{t311}) and (\ref{t312}) yields\textcolor{black}{:} \begin{eqnarray} |T_3(\hvt_{\mbox{\tiny INIT}})-{\cal E}_{\mathbf Z}\{T_3(\hvt_{\mbox{\tiny INIT}})\}|~=~O_p(n^{-1/2}z_{n,N}). \label{et3} \end{eqnarray} In addition, if $\phi^*({\mathbf X},\theta)=\phi({\mathbf X},\theta)$, then \begin{eqnarray*} {\cal E}_{\mathbf Z}\{T_3(\hvt_{\mbox{\tiny INIT}})\}~=~{\cal E}_{\mathbf Z}({\cal E}_{\mathbf Z}[\hat{D}_N({\mathbf X})T\{\phi^*({\mathbf X},\hvt_{\mbox{\tiny INIT}})-\psi(Y,\hvt_{\mbox{\tiny INIT}})\}\mid{\mathbf X}])~=~0. \end{eqnarray*} Otherwise, we have \begin{eqnarray*} |{\cal E}_{\mathbf Z}\{T_{3}(\hvt_{\mbox{\tiny INIT}})\}|~\leq~({\cal E}_{\mathbf X}[\{\hat{D}_N({\mathbf X})\}^2]{\cal E}[\{\phi^*({\mathbf X},\hvt_{\mbox{\tiny INIT}})-\psi(Y,\hvt_{\mbox{\tiny INIT}})\}^2])^{1/2} ~=~O_p(s_N), \end{eqnarray*} where the last step uses the boundedness of $\phi^*({\mathbf X},\theta)$ from Assumption \ref{abound}. Hence\textcolor{black}{,} \begin{eqnarray*} |{\cal E}_{\mathbf Z}\{T_{3}(\hvt_{\mbox{\tiny INIT}})\}|~=~I\{\phi^*({\mathbf X},\theta)\neq\phi({\mathbf X},\theta)\}O_p(s_N). \end{eqnarray*} This, combined with (\ref{hfo}) and (\ref{et3}), implies\textcolor{black}{:} \begin{eqnarray} \{\hat{f}_n(\hvt_{\mbox{\tiny INIT}})\}^{-1}T_3(\hvt_{\mbox{\tiny INIT}})~=~O_p(n^{-1/2}z_{n,N})+I\{\phi^*({\mathbf X},\theta)\neq\phi({\mathbf X},\theta)\}O_p(s_N). \label{t3} \end{eqnarray} Finally, we handle $T_4(\hvt_{\mbox{\tiny INIT}})$. Denote \begin{eqnarray*} \mathcal{Q}_{n,N,k}~:=~\{\hat{D}_N ({\mathbf X})T\hat{\psi}_{n,k}({\mathbf X},\theta):\theta\in\mb(\vt,{\varepsilon})\}.
\end{eqnarray*} Due to (\ref{dsup}) of Assumption \ref{api}, we have \begin{eqnarray} &&\phantom{~=~}N_{[\,]} \{\hbox{$\sup_{\x\in\mx}$}|\hat{D}_N({\mathbf x})|\eta,\mathcal{Q}_{n,N,k}\mid \mathcal{L}\cup \mathcal{U},L_2({\mathbb P}_{\mathbf X})\} \nonumber\\ &&~\leq~ N_{[\,]} \{\eta,\mathcal{P}_{n,k}\mid \mathcal{L},L_2({\mathbb P}_{\mathbf X})\}\leq H( \mathcal{L})\eta^{-c}, \label{bracket1}\\ &&\phantom{~=~}\hbox{$\sup_{\x\in\mx,\,\theta\in\mbtv}$}|\hat{D}_N({\mathbf X})\hat{\psi}_{n,k}({\mathbf X},\theta)| \nonumber\\ &&~\leq~ \hbox{$\sup_{\x\in\mx}$}|\hat{D}_N({\mathbf x})|\hbox{$\sup_{\x\in\mx,\,\theta\in\mbtv}$}|\hat{\psi}_{n,k}({\mathbf X},\theta)| =O_p(d_{n,\infty}), \label{bn1}\\ &&\phantom{~=~}(\hbox{$\sup_{\theta\in\mbtv}$}{\cal E}_{\mathbf X}[\{\hat{D}_N({\mathbf X})\hat{\psi}_{n,k}({\mathbf X},\theta)\}^2])^{1/2} \nonumber\\ &&~\leq~ \hbox{$\sup_{\x\in\mx}$}|\hat{D}_N({\mathbf x})|\Delta_{k}( \mathcal{L})=O_p(d_{n,2}) \quad (k=1,\ldots,\mathbb{K})\textcolor{black}{,} \label{dn1} \end{eqnarray} from Assumption \ref{aest}. Then\textcolor{black}{,} (\ref{bracket1}) implies\textcolor{black}{:} \begin{eqnarray} N_{[\,]} \{\eta,\mathcal{Q}_{n,N,k}\mid \mathcal{L}\cup \mathcal{U},L_2({\mathbb P}_{\mathbf X})\}~\leq~ \{\hbox{$\sup_{\x\in\mx}$}|\hat{D}_N({\mathbf x})|\}^c H( \mathcal{L})\eta^{-c}. \label{an1} \end{eqnarray} Since $\{\hbox{$\sup_{\x\in\mx}$}|\hat{D}_N({\mathbf x})|\}^c H( \mathcal{L})=O_p(a_n)$ from Assumptions \ref{aest} and \ref{api}, combining (\ref{bn1})--(\ref{an1}) and applying Lemma \ref{1v2} yield that \begin{eqnarray} \hbox{$\sup_{\theta\in\mbtv}$}|\mathbb{G}_{n_\mathbb{K},k}\{\hat{D}_N({\mathbf X})\hat{\psi}_{n,k}({\mathbf X},\theta)\}|~=~O_p(r_n). 
\label{mbgpi1} \end{eqnarray} In addition, we have \begin{eqnarray} &&\phantom{~=~}\hbox{$\sup_{\theta\in\mbtv}$}|{\cal E}_{\mathbf X}\{\hat{D}_N({\mathbf X})\hat{\psi}_{n,k}({\mathbf X},\theta)\}|\nonumber\\ &&~\leq~ ({\cal E}_{\mathbf X}[\{\hat{D}_N({\mathbf X})\}^2]\hbox{$\sup_{\theta\in\mbtv}$}{\cal E}_{\mathbf X}[\{\hat{\psi}_{n,k}({\mathbf X},\theta)\}^2])^{1/2}=O_p(s_N d_{n,2}), \label{idn1} \end{eqnarray} where the first step holds by H\"older's inequality and the last step is due to Assumptions \ref{api} and \ref{aest}. Considering (\ref{mbgpi1}) and (\ref{idn1}), we know that \begin{eqnarray*} T_4(\hvt_{\mbox{\tiny INIT}})&~=~&\mathbb{K}^{-1}\hbox{$\sum_{k=1}^\kK$} [n_\mathbb{K}^{-1/2}\mathbb{G}_{n_\mathbb{K},k}\{\hat{D}_N({\mathbf X})\hat{\psi}_{n,k}({\mathbf X},\hvt_{\mbox{\tiny INIT}})\}+{\cal E}_{\mathbf X}\{\hat{D}_N({\mathbf X})\hat{\psi}_{n,k}({\mathbf X},\hvt_{\mbox{\tiny INIT}})\}] \\ &~=~&O_p(n^{-1/2}r_n+s_N d_{n,2}), \end{eqnarray*} which, combined with (\ref{hfo}), implies that \begin{eqnarray} \{\hat{f}_n(\hvt_{\mbox{\tiny INIT}})\}^{-1}T_4(\hvt_{\mbox{\tiny INIT}})~=~O_p(n^{-1/2}r_n+s_N d_{n,2}). \label{t4} \end{eqnarray} Summing up, the equations (\ref{t1}), (\ref{t2}), (\ref{t3}) and (\ref{t4}) conclude the result. \subsection{Proof of Corollary \ref{corqte}} Since $\nu=0$, we have \begin{eqnarray*} {\cal E}_{n+N}\{\phi^*({\mathbf X},{\boldsymbol\theta})\}={\cal E}\{\phi^*({\mathbf X},{\boldsymbol\theta})\}+O_p\{(n+N)^{-1/2}\}~=~{\cal E}\{\phi^*({\mathbf X},{\boldsymbol\theta})\}+ o_p(n^{-1/2})\textcolor{black}{,} \end{eqnarray*} by the central limit theorem. Then\textcolor{black}{,} the stochastic expansion directly follows from Theorem \ref{thqte} and the asymptotic normality is obvious. 
\subsection{Proof of Corollary \ref{corsup}} With ${\cal E}_{n+N}\{\hat{\phi}_n({\mathbf X},\hvt_{\mbox{\tiny INIT}})\}$ substituted by ${\cal E}_n\{\hat{\phi}_n({\mathbf X},\hvt_{\mbox{\tiny INIT}})\}$, the proof of Theorem \ref{thqte} directly gives the stochastic expansion followed by the asymptotic normality. Then\textcolor{black}{,} we have \begin{eqnarray*} &&\phantom{~=~}\hbox{cov}[\{\pi({\mathbf X})\}^{-1}T\{\phi^*({\mathbf X},{\boldsymbol\theta})-\psi(Y,{\boldsymbol\theta})\},\phi^*({\mathbf X},{\boldsymbol\theta})] \\ &&~=~{\cal E}[\{\phi^*({\mathbf X},{\boldsymbol\theta})\}^2]-{\cal E}\{\phi^*({\mathbf X},{\boldsymbol\theta})\psi(Y,{\boldsymbol\theta})\}-{\cal E}\{\phi^*({\mathbf X},{\boldsymbol\theta})-\psi(Y,{\boldsymbol\theta})\}{\cal E}\{\phi^*({\mathbf X},{\boldsymbol\theta})\} \\ &&~=~\hbox{var}\{\phi^*({\mathbf X},{\boldsymbol\theta})\}-{\cal E}\{\phi^*({\mathbf X},{\boldsymbol\theta})\psi(Y,{\boldsymbol\theta})\}. \end{eqnarray*} Therefore\textcolor{black}{,} \begin{eqnarray*} &&\sigma_{\mbox{\tiny SUP}}^2~=~\hbox{var}[\{\pi({\mathbf X})\}^{-1}T\{\psi(Y,{\boldsymbol\theta})-\phi^*({\mathbf X},{\boldsymbol\theta})\}]+\hbox{var}\{\phi^*({\mathbf X},{\boldsymbol\theta})\}- \\ &&\phantom{\sigma_{\mbox{\tiny SUP}}^2~=~}2\,\hbox{cov}[\{\pi({\mathbf X})\}^{-1}T\{\phi^*({\mathbf X},{\boldsymbol\theta})-\psi(Y,{\boldsymbol\theta})\},\phi^*({\mathbf X},{\boldsymbol\theta})] \\ &&\phantom{\sigma_{\mbox{\tiny SUP}}^2}~=~\hbox{var}[\{\pi({\mathbf X})\}^{-1}T\{\psi(Y,{\boldsymbol\theta})-\phi^*({\mathbf X},{\boldsymbol\theta})\}]-\hbox{var}\{\phi^*({\mathbf X},{\boldsymbol\theta})\}+2\,{\cal E}\{\phi^*({\mathbf X},{\boldsymbol\theta})\psi(Y,{\boldsymbol\theta})\}. \end{eqnarray*} \subsection{Proof of Theorem \ref{theorem_ks_ate}} Denote $\ell^{(t)}({\mathbf x},\mathbf{P})=\kappa_t(\mathbf{P}^{\rm T}{\mathbf x})f_{\mathbf S}(\mathbf{P}^{\rm T}{\mathbf x})$ $(t=0,1)$. 
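Recall that, in this notation, the kernel regression estimator and its limit appearing at the end of this proof are the ratios of the quantities just defined, \begin{eqnarray*} \hat{m}_{n,k}({\mathbf x},\hat{\mathbf{P}}_k)~=~\{\hat{\ell}^{(0)}_{n,k}({\mathbf x},\hat{\mathbf{P}}_k)\}^{-1}\hat{\ell}^{(1)}_{n,k}({\mathbf x},\hat{\mathbf{P}}_k),\qquad \widetilde{m}({\mathbf x},\mathbf{P}_0)~=~\{\ell^{(0)}({\mathbf x},\mathbf{P}_0)\}^{-1}\ell^{(1)}({\mathbf x},\mathbf{P}_0), \end{eqnarray*} so that uniform convergence rates for $\hat{\ell}^{(t)}_{n,k}-\ell^{(t)}$ $(t=0,1)$ transfer to $\hat{m}_{n,k}-\widetilde{m}$.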
We now derive the convergence rate of $\hat{\ell}^{(1)}_{n,k}({\mathbf x},\hat{\mathbf{P}}_k)-\ell^{(1)}({\mathbf x},\mathbf{P}_0)$; the case of $\hat{\ell}^{(0)}_{n,k}({\mathbf x},\hat{\mathbf{P}}_k)-\ell^{(0)}({\mathbf x},\mathbf{P}_0)$ is similar. We first deal with the error from estimating $\mathbf{P}_0$ by $\hat{\mathbf{P}}_k$, i.e., $\hat{\ell}^{(1)}_{n,k}({\mathbf x},\hat{\mathbf{P}}_k)-\hat{\ell}^{(1)}_{n,k}({\mathbf x},\mathbf{P}_0)$. Taylor's expansion gives that, for \begin{eqnarray} \bar{{\mathbf s}}_n~:=~h_n^{-1}\{\mathbf{P}_0^{\rm T}+\mathbf{M}(\hat{\mathbf{P}}_k-\mathbf{P}_0)^{\rm T}\}({\mathbf x}-{\mathbf X})\textcolor{black}{,} \label{ysbar} \end{eqnarray} with some $\mathbf{M}:=\hbox{diag}(\mu_1,\ldots,\mu_r)$ and $\mu_j\in(0,1)$ $(j=1,\ldots,r)$, \begin{eqnarray} &&\phantom{~=~}\hat{\ell}^{(1)}_{n,k}({\mathbf x},\hat{\mathbf{P}}_k)-\hat{\ell}^{(1)}_{n,k}({\mathbf x},\mathbf{P}_0) \nonumber \\ &&~=~h_n^{-(r+1)}\E_{n,k}[\{\nabla K(\bar{{\mathbf s}}_n)\}^{\rm T}(\hat{\mathbf{P}}_k-\mathbf{P}_0)^{\rm T}({\mathbf x}-{\mathbf X})\{\hat{\pi}_N({\mathbf X})\}^{-1}TY] \nonumber\\ &&~=~U_n({\mathbf x})+V_{n,N}({\mathbf x}), \label{ydhbe} \end{eqnarray} where \begin{eqnarray*} &&U_n({\mathbf x})~:=~h_n^{-(r+1)}\E_{n,k}[\{\nabla K(\bar{{\mathbf s}}_n)\}^{\rm T}(\hat{\mathbf{P}}_k-\mathbf{P}_0)^{\rm T}({\mathbf x}-{\mathbf X})\{\pi^*({\mathbf X})\}^{-1}TY], \nonumber \\ &&V_{n,N}({\mathbf x})~:=~h_n^{-(r+1)}\E_{n,k}[\{\nabla K(\bar{{\mathbf s}}_n)\}^{\rm T}(\hat{\mathbf{P}}_k-\mathbf{P}_0)^{\rm T}({\mathbf x}-{\mathbf X})\hat{D}_N({\mathbf X})TY].
\end{eqnarray*} To control $U_n({\mathbf x})$, write \begin{eqnarray} U_n({\mathbf x})&~=~& h_n^{-(r+1)}\hbox{trace} ((\hat{\mathbf{P}}_k-\mathbf{P}_0)^{\rm T} \E_{n,k}[({\mathbf x}-{\mathbf X})\{\nabla K(\bar{{\mathbf s}}_n)\}^{\rm T}\{\pi^*({\mathbf X})\}^{-1}TY]) \nonumber\\ &~=~&h_n^{-(r+1)}\hbox{trace}[(\hat{\mathbf{P}}_k-\mathbf{P}_0)^{\rm T}\{{\bf U}_{n,1}({\mathbf x})+{\bf U}_{n,2}({\mathbf x})-{\bf U}_{n,3}({\mathbf x})\}], \label{yun} \end{eqnarray} where \begin{eqnarray*} &&{\bf U}_{n,1}({\mathbf x})~:=~\E_{n,k}(({\mathbf x}-{\mathbf X})[\nabla K(\bar{{\mathbf s}}_n)-\nabla K\{h_n^{-1}\mathbf{P}_0^{\rm T}({\mathbf x}-{\mathbf X})\}]^{\rm T}\{\pi^*({\mathbf X})\}^{-1}TY), \\ &&{\bf U}_{n,2}({\mathbf x})~:=~\E_{n,k}({\mathbf x} [\nabla K\{h_n^{-1}\mathbf{P}_0^{\rm T}({\mathbf x}-{\mathbf X})\}]^{\rm T}\{\pi^*({\mathbf X})\}^{-1}TY), \\ &&{\bf U}_{n,3}({\mathbf x})~:=~\E_{n,k}({\mathbf X} [\nabla K\{h_n^{-1}\mathbf{P}_0^{\rm T}({\mathbf x}-{\mathbf X})\}]^{\rm T}\{\pi^*({\mathbf X})\}^{-1}TY). \end{eqnarray*} We know \begin{eqnarray} \hbox{$\sup_{\s\in\ms}$}{\cal E}[h_n^{-r}\rho \{h_n^{-1}({\mathbf s}-{\mathbf S}) \}|Y|]&~=~&\hbox{$\sup_{\s\in\ms}$}\hbox{$\int$}h_n^{-r}\rho\{h_n^{-1}({\mathbf s}-{\bf v}) \}{\cal E}(|Y|\mid{\mathbf S}={\bf v}) f_{\mathbf S}({\bf v})d{\bf v} \nonumber\\ &~=~&\hbox{$\sup_{\s\in\ms}$}\hbox{$\int$}\rho({\bf t} ){\cal E}(|Y|\mid{\mathbf S}={\mathbf s}-h_n{\bf t}) f_{\mathbf S}({\mathbf s}-h_n{\bf t})d{\bf t} \nonumber\\ &~=~& O(1), \label{ygrho} \end{eqnarray} where the second step uses change of variables while the last step holds by the boundedness of ${\cal E}(|Y|\mid{\mathbf S}=\cdot)f_{\mathbf S}(\cdot)$ from Assumptions \ref{akernel} (ii)--(iii) and the integrability of $\rho(\cdot)$ from Assumption \ref{ahbey} (ii).
Moreover, under Assumptions \ref{akernel} (ii)--(iii) and \ref{ahbey} (ii), Theorem 2 of \citet{hansen2008uniform} gives\textcolor{black}{:} \begin{eqnarray*} \hbox{$\sup_{\s\in\ms}$}(\E_{n,k}[h_n^{-r}\rho \{h_n^{-1}({\mathbf s}-{\mathbf S}) \}Y]-{\cal E}[h_n^{-r}\rho \{h_n^{-1}({\mathbf s}-{\mathbf S}) \}Y])~=~O_p(\xi_n)~=~o_p(1)\textcolor{black}{.} \end{eqnarray*} This, combined with (\ref{ygrho}), implies\textcolor{black}{:} \begin{eqnarray} \hbox{$\sup_{\s\in\ms}$}\E_{n,k}[h_n^{-r}\rho \{h_n^{-1}({\mathbf s}-{\mathbf S}) \}Y]~=~O_p(1). \label{yexrho} \end{eqnarray} Next, we have \begin{eqnarray} &&\phantom{~=~}\hbox{$\sup_{\x\in\mx}$}\E_{n,k} (\|[\nabla K(\bar{{\mathbf s}}_n)-\nabla K\{h_n^{-1}\mathbf{P}_0^{\rm T}({\mathbf x}-{\mathbf X})\}]Y\|) \nonumber\\ &&~\leq~ \hbox{$\sup_{\x\in\mx}$}\E_{n,k} [\|\bar{{\mathbf s}}_n-h_n^{-1}\mathbf{P}_0^{\rm T}({\mathbf x}-{\mathbf X})\|\rho\{h_n^{-1}\mathbf{P}_0^{\rm T}({\mathbf x}-{\mathbf X})\}|Y|] \nonumber\\ &&~\leq~\hbox{$\sup_{\x\in\mx}$}\E_{n,k} [\|(\hat{\mathbf{P}}_k-\mathbf{P}_0)^{\rm T}({\mathbf x}-{\mathbf X})\|h_n^{-1}\rho\{h_n^{-1}\mathbf{P}_0^{\rm T}({\mathbf x}-{\mathbf X})\}|Y|] \nonumber\\ &&~\leq~ c\,\|\hat{\mathbf{P}}_k-\mathbf{P}_0\|_1\hbox{$\sup_{\x,\X\in\mx}$}\|{\mathbf x}-{\mathbf X}\|_{\infty}\hbox{$\sup_{\s\in\ms}$}\E_{n,k} [h_n^{-1}\rho\{h_n^{-1}({\mathbf s}-{\mathbf S})\}|Y|]\nonumber \\ &&~=~O_p(h_n^{r-1}\alpha_n), \label{yalphan} \end{eqnarray} where the first step uses the local Lipschitz continuity of $\nabla K(\cdot)$ from Assumption \ref{ahbey} (ii), the second step is due to the definition (\ref{ysbar}) of $\bar{{\mathbf s}}_n$, the third step holds by H\"older's inequality, and the last step is because of Assumptions \ref{al1}, \ref{ahbe} (i) and the equation (\ref{yexrho}). 
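For the reader's convenience, we record the elementary inequality behind the trace and H\"older steps in this part of the proof (stated with $\|\cdot\|_1$ and $\|\cdot\|_\infty$ interpreted entrywise, which matches the displayed bounds up to constant factors): for conformable matrices $\mathbf{A}$ and $\mathbf{B}$, \begin{eqnarray*} |\hbox{trace}(\mathbf{A}^{\rm T}\mathbf{B})|~\leq~\hbox{$\sum_{i,j}$}|\mathbf{A}_{[i,j]}||\mathbf{B}_{[i,j]}|~\leq~\|\mathbf{A}\|_1\|\mathbf{B}\|_\infty, \end{eqnarray*} applied below with $\mathbf{A}=\hat{\mathbf{P}}_k-\mathbf{P}_0$, whose $\|\cdot\|_1$-norm is $O_p(\alpha_n)$ by Assumption \ref{al1}.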
Hence\textcolor{black}{,} \begin{eqnarray*} &&\phantom{~=~}\hbox{$\sup_{\x\in\mx}$}\|{\bf U}_{n,1}({\mathbf x})\|_{\infty} \\ &&~\leq~ c\,\hbox{$\sup_{\x\in\mx}$}\E_{n,k} (\|{\mathbf x}-{\mathbf X}\|_{\infty}\|[\nabla K(\bar{{\mathbf s}}_n)-\nabla K\{h_n^{-1}\mathbf{P}_0^{\rm T}({\mathbf x}-{\mathbf X})\}]Y\|) \\ &&~\leq~ c\,\hbox{$\sup_{\x\in\mx}$}\E_{n,k} (\|[\nabla K(\bar{{\mathbf s}}_n)-\nabla K\{h_n^{-1}\mathbf{P}_0^{\rm T}({\mathbf x}-{\mathbf X})\}]Y\|) =O_p(h_n^{r-1}\alpha_n), \end{eqnarray*} where the first step holds by the boundedness of $\{\pi^*({\mathbf X})\}^{-1}T$, the second step is due to Assumption \ref{ahbe} (i), and the last step uses (\ref{yalphan}). This, combined with Assumption \ref{al1} and H\"older's inequality, implies\textcolor{black}{:} \begin{eqnarray} &&\phantom{~=~}\hbox{$\sup_{\x\in\mx}$}\|(\hat{\mathbf{P}}_k-\mathbf{P}_0)^{\rm T} {\bf U}_{n,1}({\mathbf x})\|_\infty \nonumber\\ &&~\leq~\|\hat{\mathbf{P}}_k-\mathbf{P}_0\|_1\hbox{$\sup_{\x\in\mx}$}\|{\bf U}_{n,1}({\mathbf x})\|_{\infty}=O_p(h_n^{r-1}\alpha_n^2). \label{ybdn1} \end{eqnarray} Then, under Assumptions \ref{akernel} (ii)--(iii) and \ref{ahbey} (ii), Theorem 2 of \citet{hansen2008uniform} gives \begin{eqnarray} &&\hbox{$\sup_{\x\in\mx}$}\|{\bf U}_{n,2}({\mathbf x})-{\cal E}\{{\bf U}_{n,2}({\mathbf x})\}\|_{\infty}~=~O_p(h_n^{r}\xi_n), \label{ydn2}\\ &&\hbox{$\sup_{\x\in\mx}$}\|{\bf U}_{n,3}({\mathbf x})-{\cal E}\{{\bf U}_{n,3}({\mathbf x})\}\|_{\infty}~=~O_p(h_n^{r}\xi_n). \label{ydn3} \end{eqnarray} Let $\delta({\mathbf s}):=f_{\mathbf S}({\mathbf s})\kappa_1({\mathbf s})$ and $\nabla\delta({\mathbf s}):=\partial \delta({\mathbf s})/\partial {\mathbf s}$.
We \textcolor{black}{then} have \begin{eqnarray} &&\phantom{~=~}\hbox{$\sup_{\x\in\mx}$}\|{\cal E}\{{\bf U}_{n,2}({\mathbf x})\}\|_\infty \nonumber\\ &&~\leq~ \hbox{$\sup_{\x\in\mx}$}\|{\mathbf x}\hbox{$\int$}\delta({\mathbf s})[\nabla K\{h_n^{-1}(\mathbf{P}_0^{\rm T}{\mathbf x}-{\mathbf s})\}]^{\rm T} d{\mathbf s}\|_\infty \nonumber\\ &&~=~h_n^{r+1}\hbox{$\sup_{\x\in\mx}$}\|{\mathbf x}\hbox{$\int$}\{\nabla\delta(\mathbf{P}_0^{\rm T}{\mathbf x}-h_n{\bf t})\}^{\rm T} K({\bf t})d{\bf t}\|_\infty =O(h_n^{r+1}). \label{yedn2} \end{eqnarray} In the above, the second step uses integration by parts and change of variables, and the last step holds by Assumption \ref{ahbey} (i), the boundedness of $\nabla\delta({\mathbf s})$ from Assumptions \ref{akernel} (ii) and (iv), and the integrability of $K(\cdot)$ from Assumption \ref{akernel} (i). Set $\mbox{\boldmath $\zeta$}({\mathbf s}):=f_{\mathbf S}({\mathbf s})\mbox{\boldmath $\chi$}_1({\mathbf s})$ and $\nabla\mbox{\boldmath $\zeta$}({\mathbf s}):=\partial \mbox{\boldmath $\zeta$}({\mathbf s})/\partial {\mathbf s}$. Analogous to (\ref{yedn2}), we know \begin{eqnarray} &&\phantom{~=~}\hbox{$\sup_{\x\in\mx}$}\|{\cal E}\{{\bf U}_{n,3}({\mathbf x})\}\|_\infty \nonumber\\ &&~\leq~ \hbox{$\sup_{\x\in\mx}$}\|\hbox{$\int$}\mbox{\boldmath $\zeta$}({\mathbf s}) [\nabla K\{h_n^{-1}(\mathbf{P}_0^{\rm T}{\mathbf x}-{\mathbf s})\}]^{\rm T} d{\mathbf s}\|_\infty \nonumber\\ &&~=~h_n^{r+1}\hbox{$\sup_{\x\in\mx}$}\|\hbox{$\int$}\{\nabla\mbox{\boldmath $\zeta$}(\mathbf{P}_0^{\rm T}{\mathbf x}-h_n{\bf t})\}^{\rm T} K({\bf t})d{\bf t}\|_\infty =O(h_n^{r+1}), \label{yedn3} \end{eqnarray} where the last step holds by the boundedness of $\|\nabla\mbox{\boldmath $\zeta$}({\mathbf s})\|_\infty$ from Assumptions \ref{akernel} (ii) and \ref{ahbey} (iii), and the integrability of $K(\cdot)$ from Assumption \ref{akernel} (i).
Combining (\ref{ydn2})--(\ref{yedn3}) yields \begin{eqnarray*} \hbox{$\sup_{\x\in\mx}$}\|{\bf U}_{n,2}({\mathbf x})-{\bf U}_{n,3}({\mathbf x})\|_\infty~=~O_p(h_n^{r}\xi_n+h_n^{r+1}), \end{eqnarray*} which implies that \begin{eqnarray*} &&\phantom{~=~}\hbox{$\sup_{\x\in\mx}$}\|(\mathbf{P}_0-\hat{\mathbf{P}}_k)^{\rm T}\{{\bf U}_{n,2}({\mathbf x})-{\bf U}_{n,3}({\mathbf x})\}\|_\infty \\ &&~\leq~\|\mathbf{P}_0-\hat{\mathbf{P}}_k\|_1\hbox{$\sup_{\x\in\mx}$}\|{\bf U}_{n,2}({\mathbf x})-{\bf U}_{n,3}({\mathbf x})\|_{\infty} \\ &&~=~O_p(h_n^{r}\xi_n\alpha_n+h_n^{r+1}\alpha_n)\textcolor{black}{,} \end{eqnarray*} using H\"older's inequality and Assumption \ref{al1}. This, combined with (\ref{yun}) and (\ref{ybdn1}), gives \begin{eqnarray} \hbox{$\sup_{\x\in\mx}$}|U_n({\mathbf x})|~=~O_p(h_n^{-2}\alpha_n^2+h_n^{-1}\xi_n\alpha_n+\alpha_n). \label{yunr} \end{eqnarray} Then\textcolor{black}{,} we consider $V_{n,N}$. Write \begin{eqnarray} V_{n,N}({\mathbf x})&~=~& h_n^{-(r+1)}\hbox{trace} ((\hat{\mathbf{P}}_k-\mathbf{P}_0)^{\rm T} \E_{n,k}[({\mathbf x}-{\mathbf X})\{\nabla K(\bar{{\mathbf s}}_n)\}^{\rm T}\hat{D}_N({\mathbf X})TY]) \nonumber\\ &~=~&h_n^{-(r+1)}\hbox{trace}[(\hat{\mathbf{P}}_k-\mathbf{P}_0)^{\rm T}\{{\bf V}^{(1)}_{n,N}({\mathbf x})+{\bf V}^{(2)}_{n,N}({\mathbf x})\}], \label{yvn} \end{eqnarray} where \begin{eqnarray*} &&{\bf V}^{(1)}_{n,N}({\mathbf x})~:=~\E_{n,k}(({\mathbf x}-{\mathbf X})[\nabla K(\bar{{\mathbf s}}_n)-\nabla K\{h_n^{-1}\mathbf{P}_0^{\rm T}({\mathbf x}-{\mathbf X})\}]^{\rm T}\hat{D}_N({\mathbf X})TY), \\ &&{\bf V}^{(2)}_{n,N}({\mathbf x})~:=~\E_{n,k}(({\mathbf x}-{\mathbf X}) [\nabla K\{h_n^{-1}\mathbf{P}_0^{\rm T}({\mathbf x}-{\mathbf X})\}]^{\rm T}\hat{D}_N({\mathbf X})TY).
\end{eqnarray*} We know \begin{eqnarray} &&\phantom{~=~}\hbox{$\sup_{\s\in\ms}$}{\cal E}(h_n^{-r}[\rho \{h_n^{-1}({\mathbf s}-{\mathbf S}) \}Y]^2) \nonumber\\ &&~=~\hbox{$\sup_{\s\in\ms}$}\hbox{$\int$}h_n^{-r}[\rho\{h_n^{-1}({\mathbf s}-{\bf v}) \}]^2{\cal E}(Y^2\mid{\mathbf S}={\bf v}) f_{\mathbf S}({\bf v})d{\bf v} \nonumber\\ &&~=~\hbox{$\sup_{\s\in\ms}$}\hbox{$\int$}\{\rho({\bf t} )\}^2{\cal E}(Y^2\mid{\mathbf S}={\mathbf s}-h_n{\bf t}) f_{\mathbf S}({\mathbf s}-h_n{\bf t})d{\bf t} = O(1), \label{ygrhosq} \end{eqnarray} where the second step uses change of variables while the last step holds by the boundedness of ${\cal E}(Y^2\mid{\mathbf S}=\cdot)f_{\mathbf S}(\cdot)$ from Assumptions \ref{akernel} (ii)--(iii) and the square integrability of $\rho(\cdot)$ from Assumption \ref{ahbey} (ii). Moreover, under Assumptions \ref{akernel} (ii)--(iii) and \ref{ahbey} (ii), Theorem 2 of \citet{hansen2008uniform} gives \begin{eqnarray*} \hbox{$\sup_{\s\in\ms}$}\{\E_{n,k}(h_n^{-r}[\rho \{h_n^{-1}({\mathbf s}-{\mathbf S}) \}Y]^2)-{\cal E}(h_n^{-r}[\rho \{h_n^{-1}({\mathbf s}-{\mathbf S}) \}Y]^2)\}=O_p(\xi_n)~=~o_p(1)\textcolor{black}{.} \end{eqnarray*} This, combined with (\ref{ygrhosq}), implies \begin{eqnarray} \hbox{$\sup_{\s\in\ms}$}\E_{n,k}(h_n^{-r}[\rho \{h_n^{-1}({\mathbf s}-{\mathbf S}) \}Y]^2)~=~O_p(1).
\label{yexrhosq} \end{eqnarray} Next, we have \begin{eqnarray} &&\phantom{~=~}\hbox{$\sup_{\x\in\mx}$}\E_{n,k} (\|[\nabla K(\bar{{\mathbf s}}_n)-\nabla K\{h_n^{-1}\mathbf{P}_0^{\rm T}({\mathbf x}-{\mathbf X})\}]Y\|^2) \nonumber\\ &&~\leq~\hbox{$\sup_{\x\in\mx}$}\E_{n,k} (\|\bar{{\mathbf s}}_n-h_n^{-1}\mathbf{P}_0^{\rm T}({\mathbf x}-{\mathbf X})\|^2[\rho\{h_n^{-1}\mathbf{P}_0^{\rm T}({\mathbf x}-{\mathbf X})\}Y]^2) \nonumber\\ &&~\leq~\hbox{$\sup_{\x\in\mx}$}\E_{n,k} (\|(\hat{\mathbf{P}}_k-\mathbf{P}_0)^{\rm T}({\mathbf x}-{\mathbf X})\|^2h_n^{-2}[\rho\{h_n^{-1}\mathbf{P}_0^{\rm T}({\mathbf x}-{\mathbf X})\}Y]^2) \nonumber\\ &&~\leq~ c\,\|\hat{\mathbf{P}}_k-\mathbf{P}_0\|_1^2\hbox{$\sup_{\x,\X\in\mx}$}\|{\mathbf x}-{\mathbf X}\|_{\infty}^2\hbox{$\sup_{\s\in\ms}$}\E_{n,k} (h_n^{-2}[\rho\{h_n^{-1}({\mathbf s}-{\mathbf S})\}Y]^2)\nonumber \\ &&~=~O_p(h_n^{r-2}\alpha_n^2), \label{yalphansq} \end{eqnarray} where the first step uses the local Lipschitz continuity of $\nabla K(\cdot)$ from Assumption \ref{ahbey} (ii), the second step is due to the definition (\ref{ysbar}) of $\bar{{\mathbf s}}_n$, the third step holds by H\"older's inequality, and the last step is because of Assumptions \ref{al1}, \ref{ahbe} (i) and the equation (\ref{yexrhosq}). Thus\textcolor{black}{,} we have \begin{eqnarray} &&\phantom{~=~}\hbox{$\sup_{\x\in\mx}$}\|{\bf V}^{(1)}_{n,N}({\mathbf x})\|_{\infty} \nonumber\\ &&~\leq~ c\, ({\cal E}_{n,k}[\{\hat{D}_N({\mathbf X})\}^2]\hbox{$\sup_{\x\in\mx}$}\E_{n,k} (\|[\nabla K(\bar{{\mathbf s}}_n)-\nabla K\{h_n^{-1}\mathbf{P}_0^{\rm T}({\mathbf x}-{\mathbf X})\}]Y\|^2))^{1/2} \nonumber \\ && ~=~O_p(h_n^{r/2-1}\alpha_n s_N), \label{yvn1} \end{eqnarray} where the first step uses H\"older's inequality and the boundedness of $\hbox{$\sup_{\x\in\mx}$}\|{\mathbf x}-{\mathbf X}\|_\infty$ from Assumption \ref{ahbey} (i), and the last step holds by (\ref{sn}) and (\ref{yalphansq}).
Next, we know that \begin{eqnarray} &&\phantom{~=~}|\hbox{$\sup_{\s\in\ms}$}{\cal E}_{\mathbf S}([\nabla K_{[j]}\{h_n^{-1}({\mathbf s}-{\mathbf S})\}Y]^2)|\nonumber\\ &&~=~|\hbox{$\sup_{\s\in\ms}$}\hbox{$\int$}[\nabla K_{[j]}\{h_n^{-1}({\mathbf s}-{\bf v})\}]^2 {\cal E}(Y^2\mid{\mathbf S}={\bf v})f_{{\mathbf S}}({\bf v})d{\bf v}| \nonumber\\ &&~=~h_n^{r}|\hbox{$\sup_{\s\in\ms}$}\hbox{$\int$}\{\nabla K_{[j]}({\bf t})\}^2{\cal E}(Y^2\mid{\mathbf S}={\mathbf s}-h_n{\bf t})f_{{\mathbf S}}({\mathbf s}-h_n{\bf t})d{\bf t}|=O(h_n^{r}), \label{yexp1} \end{eqnarray} where the second step uses change of variables while the last step is due to the boundedness of ${\cal E}(Y^2\mid{\mathbf S}=\cdot)f_{\mathbf S}(\cdot)$ from Assumptions \ref{akernel} (ii)--(iii) and the square integrability of $\nabla K_{[j]}(\cdot)$ from Assumption \ref{akernel} (i). Then, under Assumptions \ref{akernel} (ii)--(iii) and \ref{ahbey} (ii), Theorem 2 of \citet{hansen2008uniform} implies\textcolor{black}{:} \begin{eqnarray*} &&\phantom{~=~}\hbox{$\sup_{\s\in\ms}$}|\E_{n,k}([\nabla K_{[j]}\{h_n^{-1}({\mathbf s}-{\mathbf S})\}Y]^2)-{\cal E}_{\mathbf S}([\nabla K_{[j]}\{h_n^{-1}({\mathbf s}-{\mathbf S})\}Y]^2)| \\ &&~=~O_p(h_n^{r}\xi_{n})=o_p(h_n^r)\textcolor{black}{,} \end{eqnarray*} where the last step is because we assume $\xi_{n}=o(1)$. This, combined with (\ref{yexp1}), yields \begin{eqnarray} \hbox{$\sup_{\s\in\ms}$}\E_{n,k}([\nabla K_{[j]}\{h_n^{-1}({\mathbf s}-{\mathbf S})\}Y]^2)~=~O_p(h_n^{r}). \label{yone1} \end{eqnarray} Let $v_{ij}({\mathbf x})$ be the $(i,j)$th entry of ${\bf V}^{(2)}_{n,N}({\mathbf x})$ $(i=1,\ldots,p;\,j=1,\ldots,r)$.
We know \begin{eqnarray*} &&\phantom{~=~}\hbox{$\sup_{\x\in\mx}$}|v_{ij}({\mathbf x})| \\ &&~\equiv~\hbox{$\sup_{\x\in\mx}$}|\E_{n,k}[({\mathbf x}_{[i]}-{\mathbf X}_{[i]}) \nabla K_{[j]}\{h_n^{-1}\mathbf{P}_0^{\rm T}({\mathbf x}-{\mathbf X})\}\hat{D}_N({\mathbf X})TY]| \\ &&~\leq~ c\,\hbox{$\sup_{\s\in\ms}$}\E_{n,k}[|\nabla K_{[j]}\{h_n^{-1}({\mathbf s}-{\mathbf S})\}\hat{D}_N({\mathbf X})Y|] \\ &&~\leq~ c\,\{\hbox{$\sup_{\s\in\ms}$}\E_{n,k}([\nabla K_{[j]}\{h_n^{-1}({\mathbf s}-{\mathbf S})\}Y]^2)\E_{n,k}[\{\hat{D}_N({\mathbf X})\}^2]\}^{1/2}=O_p(h_n^{r/2}s_N), \end{eqnarray*} where the second step uses the boundedness of $\hbox{$\sup_{\x\in\mx}$}\|{\mathbf x}-{\mathbf X}\|_\infty$ from Assumption \ref{ahbe} (i), the third step is due to H\"older's inequality and the last step holds by (\ref{yone1}) and (\ref{sn}). It now follows that \begin{eqnarray} \hbox{$\sup_{\x\in\mx}$}\|{\bf V}^{(2)}_{n,N}({\mathbf x})\|_\infty~=~O_p(h_n^{r/2}s_N). \label{yvn2} \end{eqnarray} Therefore\textcolor{black}{,} we have \begin{eqnarray*} &&\phantom{~=~}\hbox{$\sup_{\x\in\mx}$}\|(\mathbf{P}_0-\hat{\mathbf{P}}_k)^{\rm T}\{{\bf V}^{(1)}_{n,N}({\mathbf x})+{\bf V}^{(2)}_{n,N}({\mathbf x})\}\|_\infty \\ &&~\leq~\|\mathbf{P}_0-\hat{\mathbf{P}}_k\|_1\hbox{$\sup_{\x\in\mx}$}\|{\bf V}^{(1)}_{n,N}({\mathbf x})+{\bf V}^{(2)}_{n,N}({\mathbf x})\|_{\infty} \\ &&~=~O_p(h_n^{r/2-1}\alpha_n^2 s_N+h_n^{r/2}\alpha_n s_N)~=~O_p(h_n^{r/2}\alpha_n s_N), \end{eqnarray*} where the first step is due to H\"older's inequality, the second step uses (\ref{yvn1}), (\ref{yvn2}) and Assumption \ref{al1}, and the last step is because we assume $h_n^{-1}\alpha_n=o(1)$. Combined with (\ref{yvn}), it gives \begin{eqnarray} \hbox{$\sup_{\x\in\mx}$} |V_{n,N}({\mathbf x})|~=~O_p\{h_n^{-(r/2+1)}\alpha_n s_N\}.
\label{yvnr} \end{eqnarray} Considering (\ref{ydhbe}), (\ref{yunr}) and (\ref{yvnr}), we know that \begin{eqnarray} &&\phantom{~=~}\hbox{$\sup_{\x\in\mx}$}|\hat{\ell}^{(1)}_{n,k}({\mathbf x},\hat{\mathbf{P}}_k)-\hat{\ell}^{(1)}_{n,k}({\mathbf x},\mathbf{P}_0)| \nonumber\\ &&~=~O_p\{h_n^{-2}\alpha_n^2+h_n^{-1}\xi_n\alpha_n+\alpha_n+h_n^{-(r/2+1)}\alpha_n s_N\}. \label{yhmbe} \end{eqnarray} Further, we control the error from estimating $\pi({\mathbf x})$ by $\hat{\pi}_N({\mathbf x})$, i.e., $\hat{\ell}^{(1)}_{n,k}({\mathbf x},\mathbf{P}_0)-\ell_{n,k}^{(1)}({\mathbf x},\mathbf{P}_0)$ with \begin{eqnarray*} \ell_{n,k}^{(1)}({\mathbf x},\mathbf{P})~:=~h_n^{-r}\E_{n,k} [\{\pi^*({\mathbf X})\}^{-1}TY K_h\{\mathbf{P}^{\rm T}({\mathbf x}-{\mathbf X})\}]. \end{eqnarray*} We have \begin{eqnarray} &&\phantom{~=~}|\hbox{$\sup_{\s\in\ms}$}{\cal E}_{\mathbf S}[h_n^{-r}\{K_h({\mathbf s}-{\mathbf S})Y\}^2]|\nonumber\\ &&~=~h_n^{-r}|\hbox{$\sup_{\s\in\ms}$}\hbox{$\int$}[K\{h_n^{-1}({\mathbf s}-{\bf v})\}]^2{\cal E}(Y^2\mid{\mathbf S}={\bf v})f_{{\mathbf S}}({\bf v})d{\bf v}| \nonumber\\ &&~=~|\hbox{$\sup_{\s\in\ms}$}\hbox{$\int$}\{K({\bf t})\}^2{\cal E}(Y^2\mid{\mathbf S}={\mathbf s}-h_n{\bf t})f_{{\mathbf S}}({\mathbf s}-h_n{\bf t})d{\bf t}|~=~O(1), \label{yexp} \end{eqnarray} where the second step uses change of variables while the last step is due to the boundedness of ${\cal E}(Y^2\mid{\mathbf S}=\cdot)f_{\mathbf S}(\cdot)$ from Assumptions \ref{akernel} (ii)--(iii) along with the square integrability of $K(\cdot)$ from Assumption \ref{akernel} (i). Then, under Assumptions \ref{akernel}, Theorem 2 of \citet{hansen2008uniform} gives \begin{eqnarray*} \hbox{$\sup_{\s\in\ms}$}|\E_{n,k}[h_n^{-r}\{K_h({\mathbf s}-{\mathbf S})Y\}^2]-{\cal E}_{\mathbf S}[h_n^{-r}\{K_h({\mathbf s}-{\mathbf S})Y\}^2]|~=~O_p(\xi_n)~=~o_p(1), \end{eqnarray*} where the last step is because we assume $\xi_n=o(1)$. 
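To organize the remaining steps, note the decomposition \begin{eqnarray*} \hat{\ell}^{(1)}_{n,k}({\mathbf x},\mathbf{P}_0)-\ell^{(1)}({\mathbf x},\mathbf{P}_0)&~=~&\{\hat{\ell}^{(1)}_{n,k}({\mathbf x},\mathbf{P}_0)-\ell_{n,k}^{(1)}({\mathbf x},\mathbf{P}_0)\}+[\ell_{n,k}^{(1)}({\mathbf x},\mathbf{P}_0)-{\cal E}\{\ell_{n,k}^{(1)}({\mathbf x},\mathbf{P}_0)\}]+\\ &&[{\cal E}\{\ell_{n,k}^{(1)}({\mathbf x},\mathbf{P}_0)\}-\ell^{(1)}({\mathbf x},\mathbf{P}_0)], \end{eqnarray*} whose three terms capture, respectively, the estimation of $\pi(\cdot)$, the stochastic fluctuation of the kernel average, and its smoothing bias; they are bounded in turn in what follows.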
This, combined with (\ref{yexp}), yields \begin{eqnarray} \hbox{$\sup_{\s\in\ms}$}\E_{n,k}[h_n^{-r}\{K_h({\mathbf s}-{\mathbf S})Y\}^2]~=~O_p(1). \label{yone} \end{eqnarray} Therefore, we know that \begin{eqnarray} &&\phantom{~=~}\hbox{$\sup_{\x\in\mx}$}|\hat{\ell}^{(1)}_{n,k}({\mathbf x},\mathbf{P}_0)-\ell_{n,k}^{(1)}({\mathbf x},\mathbf{P}_0)| \nonumber\\ &&~\leq~ c\,\hbox{$\sup_{\s\in\ms}$} \E_{n,k} \{|\hat{D}_N({\mathbf X})h_n^{-r}K_h({\mathbf s}-{\mathbf S})Y| \}\nonumber\\ &&~\leq~ c\,h_n^{-r/2}\{\E_{n,k} [\{\hat{D}_N({\mathbf X})\}^2] \hbox{$\sup_{\s\in\ms}$}\E_{n,k}[h_n^{-r}\{K_h({\mathbf s}-{\mathbf S})Y\}^2]\}^{1/2} \nonumber\\ &&~=~O_p(h_n^{-r/2}s_N), \label{ymnk} \end{eqnarray} where the second step is due to H\"older's inequality and the last step holds by (\ref{sn}) and (\ref{yone}). Combining (\ref{yhmbe}) and (\ref{ymnk}) yields that \begin{eqnarray} &&\phantom{~=~}\hbox{$\sup_{\x\in\mx}$}|\hat{\ell}^{(1)}_{n,k}({\mathbf x},\hat{\mathbf{P}}_k)-\ell_{n,k}^{(1)}({\mathbf x},\mathbf{P}_0)| \nonumber\\ &&~=~O_p\{h_n^{-2}\alpha_n^2+h_n^{-1}\xi_n\alpha_n+\alpha_n+h_n^{-(r/2+1)}\alpha_n s_N+h_n^{-r/2}s_N\} \nonumber\\ &&~=~O_p\{h_n^{-2}\alpha_n^2+h_n^{-1}\xi_n\alpha_n+\alpha_n+h_n^{-r/2}s_N\} ~=~O_p\{b_{n,N}^{(2)}\}, \label{yan2} \end{eqnarray} where the second step holds by the fact that $h_n^{-(r/2+1)}\alpha_n s_N=o(h_n^{-r/2}s_N)$ because we assume $h_n^{-1}\alpha_n=o(1)$. Now we handle the error $\ell_{n,k}^{(1)}({\mathbf x},\mathbf{P}_0)-\ell^{(1)}({\mathbf x},\mathbf{P}_0)$. Under Assumptions \ref{akernel}, Theorem 2 of \citet{hansen2008uniform} gives \begin{eqnarray} \hbox{$\sup_{\x\in\mx}$}|\ell_{n,k}^{(1)}({\mathbf x},\mathbf{P}_0)-{\cal E}\{\ell_{n,k}^{(1)}({\mathbf x},\mathbf{P}_0)\}|~=~O_p(\xi_n).
\label{ypt2} \end{eqnarray} Further, under Assumptions \ref{akernel} (i), (ii) and (iv), standard arguments based on $d$th order Taylor's expansion of $\ell^{(1)}({\mathbf x},\mathbf{P}_0)$ yield that \begin{eqnarray} \hbox{$\sup_{\x\in\mx}$}|{\cal E}\{\ell_{n,k}^{(1)}({\mathbf x},\mathbf{P}_0)\}-\ell^{(1)}({\mathbf x},\mathbf{P}_0)|~=~O(h_n^d). \label{ypt3} \end{eqnarray} Combining (\ref{yan2}), (\ref{ypt2}) and (\ref{ypt3}) yields \begin{eqnarray} \hbox{$\sup_{\x\in\mx}$}|\hat{\ell}^{(1)}_{n,k}({\mathbf x},\hat{\mathbf{P}}_k)-\ell^{(1)}({\mathbf x},\mathbf{P}_0)|~=~O_p\{b_n^{(1)}+b_{n,N}^{(2)}\}. \label{ynum} \end{eqnarray} Similar arguments imply that \begin{eqnarray} \hbox{$\sup_{\x\in\mx}$}|\hat{\ell}^{(0)}_{n,k}({\mathbf x},\hat{\mathbf{P}}_k)-\ell^{(0)}({\mathbf x},\mathbf{P}_0)|~=~O_p\{b_n^{(1)}+b_{n,N}^{(2)}\}. \label{ydeno} \end{eqnarray} Therefore, we have \begin{eqnarray*} &&\phantom{~=~}\hbox{$\sup_{\x\in\mx}$}|\hat{m}_{n,k}({\mathbf x},\hat{\mathbf{P}}_k)-\widetilde{m}({\mathbf x},\mathbf{P}_0)| \nonumber\\ &&~=~\hbox{$\sup_{\x\in\mx}$}|\{\hat{\ell}^{(0)}_{n,k}({\mathbf x},\hat{\mathbf{P}}_k)\}^{-1}\hat{\ell}_{n,k}^{(1)}({\mathbf x},\hat{\mathbf{P}}_k)-\{\ell^{(0)}({\mathbf x},\mathbf{P}_0)\}^{-1}\ell^{(1)}({\mathbf x},\mathbf{P}_0)| \\ &&~\leq~\hbox{$\sup_{\x\in\mx}$}|\{\hat{\ell}^{(0)}_{n,k}({\mathbf x},\hat{\mathbf{P}}_k)\}^{-1}\{\hat{\ell}^{(1)}_{n,k}({\mathbf x},\hat{\mathbf{P}}_k)-\ell^{(1)}({\mathbf x},\mathbf{P}_0)\}|+ \\ &&\phantom{~=~}\hbox{$\sup_{\x\in\mx}$}|[\{\hat{\ell}^{(0)}_{n,k}({\mathbf x},\hat{\mathbf{P}}_k)\}^{-1}-\{\ell^{(0)}({\mathbf x},\mathbf{P}_0)\}^{-1}]\ell^{(1)}({\mathbf x},\mathbf{P}_0)| \\ &&~=~O_p\{b_n^{(1)}+b_{n,N}^{(2)}\}, \end{eqnarray*} where the last step follows from the fact that $b_n^{(1)}+b_{n,N}^{(2)}=o(1)$, and repeated use of (\ref{ynum}) and (\ref{ydeno}) as well as Assumptions \ref{api4} and \ref{akernel} (ii).
\subsection{Proof of Proposition \ref{thphi}} The function $F(\cdot\mid{\mathbf S})$ is obviously bounded. For any $\theta_1,\theta_2\in\mb(\vt,{\varepsilon})$, Taylor's expansion gives \begin{eqnarray*} &&\phantom{~=~}|[\{\pi^*({\mathbf X})\}^{-1}T]^m\{\phi^*({\mathbf X},\theta_1)-\phi^*({\mathbf X},\theta_2)\}| \\ &&~\leq~ c\,|F(\theta_1\mid{\mathbf S})-F(\theta_2\mid{\mathbf S})| ~\leq~ c\,\hbox{$\sup_{\theta\in\mbtv}$} f(\theta\mid{\mathbf S})|\theta_1-\theta_2|\quad (m=0,1), \end{eqnarray*} where the first step uses the boundedness of $\{\pi^*({\mathbf X})\}^{-1}$ from Assumption \ref{api}. Therefore, the condition \eqref{conditional_density} and Example 19.7 of \citet{van2000asymptotic} give \begin{eqnarray} &&N_{[\,]}\{\eta,\mathcal{M},L_2({\mathbb P}_{\mathbf X})\}~\leq~ c\,\eta^{-1}, \label{mm} \\ &&N_{[\,]}\{\eta,\mathcal{F}^*,L_2({\mathbb P}_{\mathbf X})\}~\leq~ c\,\eta^{-1}\textcolor{black}{,} \nonumber \end{eqnarray} with $\mathcal{F}^*:=\{\{\pi^*({\mathbf X})\}^{-1}T\phi^*({\mathbf X},\theta):\theta\in\mb(\vt,{\varepsilon})\}$, which implies that $\mathcal{F}^*$ and $\mathcal{M}$ are ${\mathbb P}$-Donsker according to Theorem 19.5 of \citet{van2000asymptotic}. 
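To make the appeal to Example 19.7 concrete, here is a hedged one-parameter sketch (our paraphrase; the generic class $\mathcal{G}$ and envelope $F$ are illustrative stand-ins for $\mathcal{M}$ and $\mathcal{F}^*$). Writing the Lipschitz-in-$\theta$ bound above as $|g_{\theta_1}({\mathbf X})-g_{\theta_2}({\mathbf X})|\leq F({\mathbf X})|\theta_1-\theta_2|$ with the square-integrable envelope $F({\mathbf X}):=c\,\hbox{$\sup_{\theta\in\mbtv}$} f(\theta\mid{\mathbf S})$, one may cover $\mb(\vt,{\varepsilon})$ by $O(\eta^{-1})$ subintervals of length $\eta$ with midpoints $\theta_j$ and form the brackets $[g_{\theta_j}-\eta F,\,g_{\theta_j}+\eta F]$, so that \[ N_{[\,]}\{\eta\|F\|_{L_2({\mathbb P})},\,\mathcal{G},\,L_2({\mathbb P})\}~\leq~ c\,\eta^{-1}, \] which, after rescaling $\eta$, is exactly the bound (\ref{mm}) claimed for both $\mathcal{M}$ and $\mathcal{F}^*$.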
Further, we have that, for any sequence $\tilde{\theta}\to{\boldsymbol\theta}$ in probability, \begin{eqnarray*} &&\phantom{~=~}{\cal E}_{\mathbf X}([\{\pi^*({\mathbf X})\}^{-2}T]^m\{\phi^*({\mathbf X},\tilde{\theta})-\phi^*({\mathbf X},{\boldsymbol\theta})\}^2) \\ &&~\leq~ c\,{\cal E}_{\mathbf S}[\{F(\tilde{\theta}\mid{\mathbf S})-F({\boldsymbol\theta}\mid{\mathbf S})\}^2] ~\leq~ c\,(\tilde{\theta}-{\boldsymbol\theta})^2{\cal E}[\{\hbox{$\sup_{\theta\in\mbtv}$} f(\theta\mid{\mathbf S})\}^2]\to 0 \;\; (m=0,1) \end{eqnarray*} in probability, where the first step uses the boundedness of $\{\pi^*({\mathbf X})\}^{-2}$ from Assumption \ref{api}, the second step uses Taylor's expansion as well as the fact that $\tilde{\theta}\in\mb(\vt,{\varepsilon})$ with probability approaching one, and the last step holds by the condition \eqref{conditional_density}. Thus, applying Lemma 19.24 of \citet{van2000asymptotic} yields (\ref{unipi1}) and (\ref{unipi2}). \subsection{Proof of Theorem \ref{thhd}} Denote $e^{(t)}({\mathbf x},\theta,\mathbf{P})=\varphi_t(\mathbf{P}^{\rm T}{\mathbf x},\theta)f_{\mathbf S}(\mathbf{P}^{\rm T}{\mathbf x})$ $(t=0,1)$. We now derive the convergence rate of $\hat{e}^{(1)}_{n,k}({\mathbf x},\theta,\hat{\mathbf{P}}_k)-e^{(1)}({\mathbf x},\theta,\mathbf{P}_0)$. The case of $\hat{e}^{(0)}_{n,k}({\mathbf x},\theta,\hat{\mathbf{P}}_k) - e^{(0)}({\mathbf x},\theta,\mathbf{P}_0)$ is similar. We first deal with the error from estimating $\mathbf{P}_0$ by $\hat{\mathbf{P}}_k$, i.e., $\hat{e}^{(1)}_{n,k}({\mathbf x},\theta,\hat{\mathbf{P}}_k)-\hat{e}^{(1)}_{n,k}({\mathbf x},\theta,\mathbf{P}_0)$.
Taylor's expansion gives that, for \begin{eqnarray} \bar{{\mathbf s}}_n~:=~h_n^{-1}\{\mathbf{P}_0^{\rm T}+\mathbf{M}(\hat{\mathbf{P}}_k-\mathbf{P}_0)^{\rm T}\}({\mathbf x}-{\mathbf X}) \label{sbar} \end{eqnarray} with some $\mathbf{M}:=\hbox{diag}(\mu_1,\ldots,\mu_r)$ and $\mu_j\in(0,1)$ $(j=1,\ldots,r)$, \begin{eqnarray} &&\phantom{~=~}\hat{e}^{(1)}_{n,k}({\mathbf x},\theta,\hat{\mathbf{P}}_k)-\hat{e}^{(1)}_{n,k}({\mathbf x},\theta,\mathbf{P}_0) \nonumber \\ &&~=~h_n^{-(r+1)}\E_{n,k}[\{\nabla K(\bar{{\mathbf s}}_n)\}^{\rm T}(\hat{\mathbf{P}}_k-\mathbf{P}_0)^{\rm T}({\mathbf x}-{\mathbf X})\{\hat{\pi}_N({\mathbf X})\}^{-1}T\psi(Y,\theta)] \nonumber\\ &&~=~U_n({\mathbf x},\theta)+V_{n,N}({\mathbf x},\theta), \label{dhbe} \end{eqnarray} where \begin{eqnarray*} &&U_n({\mathbf x},\theta)~:=~h_n^{-(r+1)}\E_{n,k}[\{\nabla K(\bar{{\mathbf s}}_n)\}^{\rm T}(\hat{\mathbf{P}}_k-\mathbf{P}_0)^{\rm T}({\mathbf x}-{\mathbf X})\{\pi^*({\mathbf X})\}^{-1}T\psi(Y,\theta)], \nonumber \\ &&V_{n,N}({\mathbf x},\theta)~:=~h_n^{-(r+1)}\E_{n,k}[\{\nabla K(\bar{{\mathbf s}}_n)\}^{\rm T}(\hat{\mathbf{P}}_k-\mathbf{P}_0)^{\rm T}({\mathbf x}-{\mathbf X})\hat{D}_N({\mathbf X})T\psi(Y,\theta)].
\end{eqnarray*} To control $U_n({\mathbf x},\theta)$, write \begin{eqnarray} U_n({\mathbf x},\theta)&~=~& h_n^{-(r+1)}\hbox{trace} ((\hat{\mathbf{P}}_k-\mathbf{P}_0)^{\rm T} \E_{n,k}[({\mathbf x}-{\mathbf X})\{\nabla K(\bar{{\mathbf s}})\}^{\rm T}\{\pi^*({\mathbf X})\}^{-1}T\psi(Y,\theta)]) \nonumber\\ &~=~&h_n^{-(r+1)}\hbox{trace}[(\hat{\mathbf{P}}_k-\mathbf{P}_0)^{\rm T}\{{\bf U}_{n,1}({\mathbf x},\theta)+{\bf U}_{n,2}({\mathbf x},\theta)-{\bf U}_{n,3}({\mathbf x},\theta)\}], \label{un} \end{eqnarray} where \begin{eqnarray*} &&{\bf U}_{n,1}({\mathbf x},\theta)~:=~\E_{n,k}(({\mathbf x}-{\mathbf X})[\nabla K(\bar{{\mathbf s}}_n)-\nabla K\{h_n^{-1}\mathbf{P}_0^{\rm T}({\mathbf x}-{\mathbf X})\}]^{\rm T}\{\pi^*({\mathbf X})\}^{-1}T\psi(Y,\theta)), \\ &&{\bf U}_{n,2}({\mathbf x},\theta)~:=~\E_{n,k}({\mathbf x} [\nabla K\{h_n^{-1}\mathbf{P}_0^{\rm T}({\mathbf x}-{\mathbf X})\}]^{\rm T}\{\pi^*({\mathbf X})\}^{-1}T\psi(Y,\theta)), \\ &&{\bf U}_{n,3}({\mathbf x},\theta)~:=~\E_{n,k}({\mathbf X} [\nabla K\{h_n^{-1}\mathbf{P}_0^{\rm T}({\mathbf x}-{\mathbf X})\}]^{\rm T}\{\pi^*({\mathbf X})\}^{-1}T\psi(Y,\theta)). \end{eqnarray*} For the function $\rho(\cdot)$ in Assumption \ref{ahbe} (ii), denote $\mathcal{J}_n:=\{h^{-r}_n\rho\{h_n^{-1}({\mathbf s}-\mathbf{P}_0^{\rm T}{\mathbf X})\}:{\mathbf s}\in\mathcal{S}\}$. 
Taylor's expansion gives that, for any ${\mathbf s}_1,{\mathbf s}_2\in\mathcal{S}$ and some $\bar{{\mathbf s}}:={\mathbf s}_1+\mathbf{M}({\mathbf s}_2-{\mathbf s}_1)$ with $\mathbf{M}:=\hbox{diag}(\mu_1,\ldots,\mu_r)$ and $\mu_j\in(0,1)$ $(j=1,\ldots,r)$, \begin{eqnarray*} &&\phantom{~=~}h^{-r}_n|\rho\{h_n^{-1}({\mathbf s}_1-\mathbf{P}_0^{\rm T}{\mathbf X})\}-\rho\{h_n^{-1}({\mathbf s}_2-\mathbf{P}_0^{\rm T}{\mathbf X})\}| \\ &&~=~ h_n^{-(r+1)}|[\nabla\rho\{h_n^{-1}(\bar{{\mathbf s}}-\mathbf{P}_0^{\rm T}{\mathbf X})\}]^{\rm T}({\mathbf s}_1-{\mathbf s}_2)|\leq c\,h^{-(r+1)}_n\|{\mathbf s}_1-{\mathbf s}_2\|, \end{eqnarray*} where the second step uses the boundedness of $\nabla\rho(\cdot)$ from Assumption \ref{ahbe} (ii). Therefore Example 19.7 of \citet{van2000asymptotic} implies \begin{eqnarray} N_{[\,]}\{\eta,\mathcal{J}_n,L_2({\mathbb P}_{\mathbf X})\}~\leq~ c\,h_n^{-(r+1)}\eta^{-r}. \label{bracj} \end{eqnarray} Moreover, we have that \begin{eqnarray} \hbox{$\sup_{\s\in\ms,\,\x\in\mx}$} [h^{-r}_n\rho\{h_n^{-1}({\mathbf s}-\mathbf{P}_0^{\rm T}{\mathbf x})\}]~=~O(h_n^{-r}), \label{supj} \end{eqnarray} due to the boundedness of $\rho(\cdot)$ from Assumption \ref{ahbe} (ii). In addition, we know that \begin{eqnarray} \hbox{$\sup_{\s\in\ms}$}{\cal E}_{\mathbf S}([h_n^{-r}\rho \{h_n^{-1}({\mathbf s}-{\mathbf S}) \}]^2)&~=~&h_n^{-r}\hbox{$\sup_{\s\in\ms}$}\hbox{$\int$}h_n^{-r}[\rho\{h_n^{-1}({\mathbf s}-{\bf v}) \}]^2 f_{\mathbf S}({\bf v})d{\bf v} \nonumber\\ &~=~&h_n^{-r}\hbox{$\sup_{\s\in\ms}$}\hbox{$\int$}\{\rho({\bf t})\}^2 f_{\mathbf S}({\mathbf s}-h_n{\bf t})d{\bf t} ~=~ O(h_n^{-r}), \label{varj} \end{eqnarray} where the second step uses change of variables while the last step holds by the boundedness of $f_{\mathbf S}(\cdot)$ from Assumption \ref{akernel_qte} (ii) and the square integrability of $\rho(\cdot)$ from Assumption \ref{ahbe} (ii).
Based on (\ref{bracj})--(\ref{varj}), applying Lemma \ref{1v2} yields that \begin{eqnarray} &&\phantom{~=~}\hbox{$\sup_{\s\in\ms}$}|\E_{n,k}[h^{-r}_n\rho\{h_n^{-1}({\mathbf s}-\mathbf{P}_0^{\rm T}{\mathbf X})\}]-{\cal E}_{\mathbf X}[h^{-r}_n\rho\{h_n^{-1}({\mathbf s}-\mathbf{P}_0^{\rm T}{\mathbf X})\}]| \nonumber\\ &&~=~O_p\{n_{\mathbb{K}^-}^{-1/2}h_n^{-r/2}\hbox{log}(h_n^{-1})+n_{\mathbb{K}^-}^{-1}h_n^{-r}(\hbox{log}\,h_n)^2\}~=~o_p(1), \label{grho} \end{eqnarray} where the second step is because we assume $(nh_n^r)^{-1/2}\hbox{log}(h_n^{-r})=o(1)$. Then we know \begin{eqnarray*} \hbox{$\sup_{\s\in\ms}$}{\cal E}_{\mathbf S}[h_n^{-r}\rho \{h_n^{-1}({\mathbf s}-{\mathbf S}) \}]&~=~&\hbox{$\sup_{\s\in\ms}$}\hbox{$\int$}h_n^{-r}\rho\{h_n^{-1}({\mathbf s}-{\bf v}) \} f_{\mathbf S}({\bf v})d{\bf v} \\ &~=~&\hbox{$\sup_{\s\in\ms}$}\hbox{$\int$}\rho({\bf t}) f_{\mathbf S}({\mathbf s}-h_n{\bf t})d{\bf t} ~=~ O(1), \end{eqnarray*} where the second step uses change of variables while the last step holds by the boundedness of $f_{\mathbf S}(\cdot)$ from Assumption \ref{akernel_qte} (ii) and the integrability of $\rho(\cdot)$ from Assumption \ref{ahbe} (ii). This, combined with (\ref{grho}), implies \begin{eqnarray} \hbox{$\sup_{\s\in\ms}$}\E_{n,k}[h_n^{-r}\rho \{h_n^{-1}({\mathbf s}-{\mathbf S}) \}]~=~O_p(1).
\label{exrho} \end{eqnarray} Next, we have \begin{eqnarray} &&\phantom{~=~}\hbox{$\sup_{\x\in\mx}$}\E_{n,k} [\|\nabla K(\bar{{\mathbf s}}_n)-\nabla K\{h_n^{-1}\mathbf{P}_0^{\rm T}({\mathbf x}-{\mathbf X})\}\|] \nonumber\\ &&~\leq~\hbox{$\sup_{\x\in\mx}$}\E_{n,k} [\|\bar{{\mathbf s}}_n-h_n^{-1}\mathbf{P}_0^{\rm T}({\mathbf x}-{\mathbf X})\|\rho\{h_n^{-1}\mathbf{P}_0^{\rm T}({\mathbf x}-{\mathbf X})\}] \nonumber\\ &&~\leq~\hbox{$\sup_{\x\in\mx}$}\E_{n,k} [\|(\hat{\mathbf{P}}_k-\mathbf{P}_0)^{\rm T}({\mathbf x}-{\mathbf X})\|h_n^{-1}\rho\{h_n^{-1}\mathbf{P}_0^{\rm T}({\mathbf x}-{\mathbf X})\}] \nonumber\\ &&~\leq~ c\,\|\hat{\mathbf{P}}_k-\mathbf{P}_0\|_1\hbox{$\sup_{\x,\X\in\mx}$}\|{\mathbf x}-{\mathbf X}\|_{\infty}\hbox{$\sup_{\s\in\ms}$}\E_{n,k} [h_n^{-1}\rho\{h_n^{-1}({\mathbf s}-{\mathbf S})\}]\nonumber \\ &&~=~O_p(h_n^{r-1}\alpha_n), \label{alphan} \end{eqnarray} where the first step uses the local Lipschitz continuity of $\nabla K(\cdot)$ from Assumption \ref{ahbe} (ii), the second step is due to the definition (\ref{sbar}) of $\bar{{\mathbf s}}_n$, the third step holds by H\"older's inequality, and the last step is because of Assumptions \ref{al1}, \ref{ahbe} (i) and equation (\ref{exrho}). Hence \begin{eqnarray*} &&\phantom{~=~}\hbox{$\sup_{\x\in\mx,\,\theta\in\mbtv}$}\|{\bf U}_{n,1}({\mathbf x},\theta)\|_{\infty} \\ &&~\leq~ c\,\hbox{$\sup_{\x\in\mx}$}\E_{n,k} [\|{\mathbf x}-{\mathbf X}\|_{\infty}\|\nabla K(\bar{{\mathbf s}}_n)-\nabla K\{h_n^{-1}\mathbf{P}_0^{\rm T}({\mathbf x}-{\mathbf X})\}\|] \\ &&~\leq~ c\,\hbox{$\sup_{\x\in\mx}$}\E_{n,k} [\|\nabla K(\bar{{\mathbf s}}_n)-\nabla K\{h_n^{-1}\mathbf{P}_0^{\rm T}({\mathbf x}-{\mathbf X})\}\|] ~=~O_p(h_n^{r-1}\alpha_n), \end{eqnarray*} where the first step holds by the boundedness of $\{\pi^*({\mathbf X})\}^{-1}T\psi(Y,\theta)$, the second step is due to Assumption \ref{ahbe} (i), and the last step uses (\ref{alphan}).
This, combined with Assumption \ref{al1} and H\"older's inequality, implies \begin{eqnarray} &&\phantom{~=~}\hbox{$\sup_{\x\in\mx,\,\theta\in\mbtv}$}\|(\hat{\mathbf{P}}_k-\mathbf{P}_0)^{\rm T} {\bf U}_{n,1}({\mathbf x},\theta)\|_\infty \nonumber\\ &&~\leq~\|\hat{\mathbf{P}}_k-\mathbf{P}_0\|_1\hbox{$\sup_{\x\in\mx,\,\theta\in\mbtv}$}\|{\bf U}_{n,1}({\mathbf x},\theta)\|_{\infty}~=~O_p(h_n^{r-1}\alpha_n^2). \label{bdn1} \end{eqnarray} Then, under Assumptions \ref{akernel_qte} (ii) and \ref{ahbe} (ii), as well as the fact that $\{\{\pi^*({\mathbf X})\}^{-1}T\psi(Y,\theta):\theta\in\mb(\vt,{\varepsilon})\}$ is a VC class with a bounded envelope function $\hbox{$\sup_{\theta\in\mbtv}$}[\{\pi^*({\mathbf X})\}^{-1}T|\psi(Y,\theta)|]$ from Assumption \ref{api}, Lemma B.4 of \citet{escanciano2014uniform} gives that \begin{eqnarray} &&\hbox{$\sup_{\x\in\mx,\,\theta\in\mbtv}$}\|{\bf U}_{n,2}({\mathbf x},\theta)-{\cal E}\{{\bf U}_{n,2}({\mathbf x},\theta)\}\|_{\infty}~=~O_p(h_n^{r}\gamma_n), \label{dn2}\\ &&\hbox{$\sup_{\x\in\mx,\,\theta\in\mbtv}$}\|{\bf U}_{n,3}({\mathbf x},\theta)-{\cal E}\{{\bf U}_{n,3}({\mathbf x},\theta)\}\|_{\infty}~=~O_p(h_n^{r}\gamma_n). \label{dn3} \end{eqnarray} Let $\delta({\mathbf s},\theta):=f_{\mathbf S}({\mathbf s})\varphi_1({\mathbf s},\theta)$ and $\nabla\delta({\mathbf s},\theta):=\partial \delta({\mathbf s},\theta)/\partial {\mathbf s}$. We have \begin{eqnarray} &&\phantom{~=~}\hbox{$\sup_{\x\in\mx,\,\theta\in\mbtv}$}\|{\cal E}\{{\bf U}_{n,2}({\mathbf x},\theta)\}\|_\infty \nonumber\\ &&~\leq~ \hbox{$\sup_{\x\in\mx,\,\theta\in\mbtv}$}\|{\mathbf x}\hbox{$\int$}\delta({\mathbf s},\theta)[\nabla K\{h_n^{-1}(\mathbf{P}_0^{\rm T}{\mathbf x}-s)\}]^{\rm T} ds\|_\infty \nonumber\\ &&~=~h_n^{r+1}\hbox{$\sup_{\x\in\mx,\,\theta\in\mbtv}$}\|{\mathbf x}\hbox{$\int$}\{\nabla\delta(\mathbf{P}_0^{\rm T}{\mathbf x}-h_n{\bf t},\theta)\}^{\rm T} K({\bf t})d{\bf t}\|_\infty ~=~O(h_n^{r+1}). 
\label{edn2} \end{eqnarray} In the above, the second step uses integration by parts and change of variables, while the last step holds by Assumption \ref{ahbe} (i), the boundedness of $\nabla\delta({\mathbf s},\theta)$ from Assumptions \ref{akernel_qte} (ii)--(iii), as well as the integrability of $K(\cdot)$ from Assumption \ref{akernel_qte} (i). Set $\mbox{\boldmath $\zeta$}({\mathbf s},\theta):=f_{\mathbf S}({\mathbf s}){\boldsymbol\eta}_1({\mathbf s},\theta)$ and $\nabla\mbox{\boldmath $\zeta$}({\mathbf s},\theta):=\partial \mbox{\boldmath $\zeta$}({\mathbf s},\theta)/\partial {\mathbf s}$. Analogous to (\ref{edn2}), we know \begin{eqnarray} &&\phantom{~=~}\hbox{$\sup_{\x\in\mx,\,\theta\in\mbtv}$}\|{\cal E}\{{\bf U}_{n,3}({\mathbf x},\theta)\}\|_\infty \nonumber\\ &&~\leq~ \hbox{$\sup_{\x\in\mx,\,\theta\in\mbtv}$}\|\hbox{$\int$}\mbox{\boldmath $\zeta$}({\mathbf s},\theta) [\nabla K\{h_n^{-1}(\mathbf{P}_0^{\rm T}{\mathbf x}-s)\}]^{\rm T} ds\|_\infty \nonumber\\ &&~=~h_n^{r+1}\hbox{$\sup_{\x\in\mx,\,\theta\in\mbtv}$}\|\hbox{$\int$}\{\nabla\mbox{\boldmath $\zeta$}(\mathbf{P}_0^{\rm T}{\mathbf x}-h_n{\bf t},\theta)\}^{\rm T} K({\bf t})d{\bf t}\|_\infty ~=~O(h_n^{r+1}), \label{edn3} \end{eqnarray} where the last step holds by the boundedness of $\|\nabla\mbox{\boldmath $\zeta$}({\mathbf s},\theta)\|_\infty$ from Assumptions \ref{akernel_qte} (ii) and \ref{ahbe} (iii), as well as the integrability of $K(\cdot)$ from Assumption \ref{akernel_qte} (i). 
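The integration-by-parts step cited after (\ref{edn2}) is stated tersely; as a sanity check, here is a one-dimensional ($r=1$) sketch of the identity it relies on, with a generic continuously differentiable $\delta$ having bounded derivative, a compactly supported $K$, and $s_0:=\mathbf{P}_0^{\rm T}{\mathbf x}$ (the $\theta$ argument is suppressed): \[ \int \delta(s)\,\nabla K\{h_n^{-1}(s_0 - s)\}\,ds ~=~ h_n\int \nabla\delta(s)\,K\{h_n^{-1}(s_0 - s)\}\,ds ~=~ h_n^{2}\int \nabla\delta(s_0 - h_n t)\,K(t)\,dt, \] where the first equality integrates by parts (the boundary terms vanish since $K$ has compact support) and the second substitutes $t = h_n^{-1}(s_0 - s)$; boundedness of $\nabla\delta(\cdot)$ and integrability of $K(\cdot)$ then give the $O(h_n^{r+1})$ order used in (\ref{edn2}) and (\ref{edn3}).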
Combining (\ref{dn2})--(\ref{edn3}) yields \begin{eqnarray*} \hbox{$\sup_{\x\in\mx,\,\theta\in\mbtv}$}\|{\bf U}_{n,2}({\mathbf x},\theta)-{\bf U}_{n,3}({\mathbf x},\theta)\|_\infty~=~O_p(h_n^{r}\gamma_n+h_n^{r+1}), \end{eqnarray*} which implies that \begin{eqnarray*} &&\phantom{~=~}\hbox{$\sup_{\x\in\mx,\,\theta\in\mbtv}$}\|(\mathbf{P}_0-\hat{\mathbf{P}}_k)^{\rm T}\{{\bf U}_{n,2}({\mathbf x},\theta)-{\bf U}_{n,3}({\mathbf x},\theta)\}\|_\infty \\ &&~\leq~\|\mathbf{P}_0-\hat{\mathbf{P}}_k\|_1\hbox{$\sup_{\x\in\mx,\,\theta\in\mbtv}$}\|{\bf U}_{n,2}({\mathbf x},\theta)-{\bf U}_{n,3}({\mathbf x},\theta)\|_{\infty} \\ &&~=~O_p(h_n^{r}\gamma_n\alpha_n+h_n^{r+1}\alpha_n)\textcolor{black}{,} \end{eqnarray*} using H\"older's inequality and Assumption \ref{al1}. This, combined with (\ref{un}) and (\ref{bdn1}), gives \begin{eqnarray} \hbox{$\sup_{\x\in\mx,\,\theta\in\mbtv}$}|U_n({\mathbf x},\theta)|~=~O_p(h_n^{-2}\alpha_n^2+h_n^{-1}\gamma_n\alpha_n+\alpha_n). \label{unr} \end{eqnarray} Then\textcolor{black}{,} we consider $V_{n,N}$. Write \begin{eqnarray} V_{n,N}({\mathbf x},\theta)&~=~& h_n^{-(r+1)}\hbox{trace} ((\hat{\mathbf{P}}_k-\mathbf{P}_0)^{\rm T} \E_{n,k}[({\mathbf x}-{\mathbf X})\{\nabla K(\bar{{\mathbf s}})\}^{\rm T}\hat{D}_N({\mathbf X})T\psi(Y,\theta)]) \nonumber\\ &~=~&h_n^{-(r+1)}\hbox{trace}[(\hat{\mathbf{P}}_k-\mathbf{P}_0)^{\rm T}\{{\bf V}^{(1)}_{n,N}({\mathbf x},\theta)+{\bf V}^{(2)}_{n,N}({\mathbf x},\theta)\}], \label{vn} \end{eqnarray} where \begin{eqnarray*} &&{\bf V}^{(1)}_{n,N}({\mathbf x},\theta)~:=~\E_{n,k}(({\mathbf x}-{\mathbf X})[\nabla K(\bar{{\mathbf s}}_n)-\nabla K\{h_n^{-1}\mathbf{P}_0^{\rm T}({\mathbf x}-{\mathbf X})\}]^{\rm T}\hat{D}_N({\mathbf X})T\psi(Y,\theta)), \\ &&{\bf V}^{(2)}_{n,N}({\mathbf x},\theta)~:=~\E_{n,k}(({\mathbf x}-{\mathbf X}) [\nabla K\{h_n^{-1}\mathbf{P}_0^{\rm T}({\mathbf x}-{\mathbf X})\}]^{\rm T}\hat{D}_N({\mathbf X})T\psi(Y,\theta)). 
\end{eqnarray*} We have \begin{eqnarray} &&\phantom{~=~}\hbox{$\sup_{\x\in\mx,\,\theta\in\mbtv}$}\|{\bf V}^{(1)}_{n,N}({\mathbf x},\theta)\|_{\infty} \nonumber\\ &&~\leq~ c\,\hbox{$\sup_{\x\in\mx}$}|\hat{D}_N({\mathbf x})|\hbox{$\sup_{\x\in\mx}$}\E_{n,k} [\|\nabla K(\bar{{\mathbf s}}_n)-\nabla K\{h_n^{-1}\mathbf{P}_0^{\rm T}({\mathbf x}-{\mathbf X})\}\|] \nonumber \\ && ~=~O_p(h_n^{r-1}\alpha_n), \label{vn1} \end{eqnarray} where the first step uses the boundedness of $\hbox{$\sup_{\x\in\mx}$}\|{\mathbf x}-{\mathbf X}\|_\infty T\psi(Y,\theta)$ from Assumption \ref{ahbe} (i), and the last step holds by (\ref{alphan}) and (\ref{dsup}) in Assumption \ref{api}. Next, we know that \begin{eqnarray} &&\phantom{~=~}|\hbox{$\sup_{\s\in\ms}$}{\cal E}_{\mathbf S}([\nabla K_{[j]}\{h_n^{-1}({\mathbf s}-{\mathbf S})\}]^2)|\nonumber\\ &&~=~|\hbox{$\sup_{\s\in\ms}$}\hbox{$\int$}[\nabla K_{[j]}\{h_n^{-1}({\mathbf s}-{\bf v})\}]^2 f_{{\mathbf S}}({\bf v})d{\bf v}| \nonumber\\ &&~=~h_n^{r}|\hbox{$\sup_{\s\in\ms}$}\hbox{$\int$}\{\nabla K_{[j]}({\bf t})\}^2f_{{\mathbf S}}({\mathbf s}-h_n{\bf t})d{\bf t}|~=~O(h_n^{r}), \label{exp1} \end{eqnarray} where the second step uses change of variables while the last step is due to the boundedness of $f_{\mathbf S}(\cdot)$ from Assumption \ref{akernel_qte} (ii) and the square integrability of $\nabla K_{[j]}(\cdot)$ from Assumption \ref{akernel_qte} (i). Then, under Assumptions \ref{akernel_qte} (ii) and \ref{ahbe} (ii), Lemma B.4 of \citet{escanciano2014uniform} implies \begin{eqnarray*} \hbox{$\sup_{\s\in\ms}$}|\E_{n,k}([\nabla K_{[j]}\{h_n^{-1}({\mathbf s}-{\mathbf S})\}]^2)-{\cal E}_{\mathbf S}([\nabla K_{[j]}\{h_n^{-1}({\mathbf s}-{\mathbf S})\}]^2)|~=~O_p(h_n^{r}\gamma_{n})~=~o_p(h_n^r), \end{eqnarray*} where the last step is because we assume $\gamma_{n}=o(1)$. This, combined with (\ref{exp1}), yields \begin{eqnarray} \hbox{$\sup_{\s\in\ms}$}\E_{n,k}([\nabla K_{[j]}\{h_n^{-1}({\mathbf s}-{\mathbf S})\}]^2)~=~O_p(h_n^{r}).
\label{one1} \end{eqnarray} Let $v_{ij}({\mathbf x},\theta)$ be the $(i,j)$th entry of ${\bf V}^{(2)}_{n,N}({\mathbf x},\theta)$ $(i=1,\ldots,p;\,j=1,\ldots,r)$. We know \begin{eqnarray*} &&\phantom{~=~}\hbox{$\sup_{\x\in\mx,\,\theta\in\mbtv}$}|v_{ij}({\mathbf x},\theta)| \\ &&~\equiv~\hbox{$\sup_{\x\in\mx,\,\theta\in\mbtv}$}|\E_{n,k}[({\mathbf x}_{[i]}-{\mathbf X}_{[i]}) \nabla K_{[j]}\{h_n^{-1}\mathbf{P}_0^{\rm T}({\mathbf x}-{\mathbf X})\}\hat{D}_N({\mathbf X})T\psi(Y,\theta)]| \\ &&~\leq~\hbox{$\sup_{\s\in\ms}$}\E_{n,k}[|\nabla K_{[j]}\{h_n^{-1}({\mathbf s}-{\mathbf S})\}\hat{D}_N({\mathbf X})|] \\ &&~\leq~\{\hbox{$\sup_{\s\in\ms}$}\E_{n,k}([\nabla K_{[j]}\{h_n^{-1}({\mathbf s}-{\mathbf S})\}]^2)\E_{n,k}[\{\hat{D}_N({\mathbf X})\}^2]\}^{1/2}~=~O_p(h_n^{r/2}s_N), \end{eqnarray*} where the second step uses the boundedness of $\hbox{$\sup_{\x\in\mx}$}\|{\mathbf x}-{\mathbf X}\|_\infty T\psi(Y,\theta)$ from Assumption \ref{ahbe} (i), the third step is due to H\"older's inequality and the last step holds by (\ref{one1}) and (\ref{sn}). Therefore it follows that \begin{eqnarray} \hbox{$\sup_{\x\in\mx,\,\theta\in\mbtv}$}\|{\bf V}^{(2)}_{n,N}({\mathbf x},\theta)\|_\infty~=~O_p(h_n^{r/2}s_N). \label{vn2} \end{eqnarray} Therefore, we have \begin{eqnarray*} &&\phantom{~=~}\hbox{$\sup_{\x\in\mx,\,\theta\in\mbtv}$}\|(\mathbf{P}_0-\hat{\mathbf{P}}_k)^{\rm T}\{{\bf V}^{(1)}_{n,N}({\mathbf x},\theta)+{\bf V}^{(2)}_{n,N}({\mathbf x},\theta)\}\|_\infty \\ &&~\leq~\|\mathbf{P}_0-\hat{\mathbf{P}}_k\|_1\hbox{$\sup_{\x\in\mx,\,\theta\in\mbtv}$}\|{\bf V}^{(1)}_{n,N}({\mathbf x},\theta)+{\bf V}^{(2)}_{n,N}({\mathbf x},\theta)\|_{\infty} \\ &&~=~O_p(h_n^{r-1}\alpha_n^2+h_n^{r/2}\alpha_n s_N), \end{eqnarray*} where the first step is due to H\"older's inequality and the last step uses (\ref{vn1}), (\ref{vn2}) and Assumption \ref{al1}. 
Combined with (\ref{vn}), it gives \begin{eqnarray} \hbox{$\sup_{\x\in\mx,\,\theta\in\mbtv}$} |V_{n,N}({\mathbf x},\theta)|~=~O_p\{h_n^{-2}\alpha_n^2+h_n^{-(r/2+1)}\alpha_ns_N\}. \label{vnr} \end{eqnarray} Considering (\ref{dhbe}), (\ref{unr}) and (\ref{vnr}), we know that \begin{eqnarray} &&\phantom{~=~}\hbox{$\sup_{\x\in\mx,\,\theta\in\mbtv}$}|\hat{e}^{(1)}_{n,k}({\mathbf x},\theta,\hat{\mathbf{P}}_k)-\hat{e}^{(1)}_{n,k}({\mathbf x},\theta,\mathbf{P}_0)| \nonumber\\ &&~=~O_p\{h_n^{-2}\alpha_n^2+h_n^{-1}\gamma_n\alpha_n+\alpha_n+h_n^{-(r/2+1)}\alpha_n s_N\}. \label{hmbe} \end{eqnarray} Further, we control the error from estimating $\pi({\mathbf x})$ by $\hat{\pi}_N({\mathbf x})$, i.e., $\hat{e}^{(1)}_{n,k}({\mathbf x},\theta,\mathbf{P}_0)-e_{n,k}^{(1)}({\mathbf x},\theta,\mathbf{P}_0)$ with \begin{eqnarray*} e_{n,k}^{(1)}({\mathbf x},\theta,\mathbf{P})~:=~h_n^{-r}\E_{n,k} [\{\pi^*({\mathbf X})\}^{-1}T\psi(Y,\theta) K_h\{\mathbf{P}^{\rm T}({\mathbf x}-{\mathbf X})\}]. \end{eqnarray*} We have \begin{eqnarray} &&\phantom{~=~}|\hbox{$\sup_{\s\in\ms}$}{\cal E}_{\mathbf S}[h_n^{-r}\{K_h({\mathbf s}-{\mathbf S})\}^2]|\nonumber\\ &&~=~h_n^{-r}|\hbox{$\sup_{\s\in\ms}$}\hbox{$\int$}[K\{h_n^{-1}({\mathbf s}-{\bf v})\}]^2f_{{\mathbf S}}({\bf v})d{\bf v}| \nonumber\\ &&~=~|\hbox{$\sup_{\s\in\ms}$}\hbox{$\int$}\{K({\bf t})\}^2f_{{\mathbf S}}({\mathbf s}-h_n{\bf t})d{\bf t}|~=~O(1), \label{exp} \end{eqnarray} where the second step uses change of variables while the last step is due to the boundedness of $f_{\mathbf S}(\cdot)$ from Assumption \ref{akernel_qte} (ii) and the square integrability of $K(\cdot)$ from Assumption \ref{akernel_qte} (i). 
Then, under Assumptions \ref{akernel_qte} (i)--(ii), Lemma B.4 of \citet{escanciano2014uniform} implies \begin{eqnarray*} \hbox{$\sup_{\s\in\ms}$}|\E_{n,k}[h_n^{-r}\{K_h({\mathbf s}-{\mathbf S})\}^2]-{\cal E}_{\mathbf S}[h_n^{-r}\{K_h({\mathbf s}-{\mathbf S})\}^2]|~=~O_p(\gamma_{n})~=~o_p(1), \end{eqnarray*} where the last step is because we assume $\gamma_{n}=o(1)$. This, combined with (\ref{exp}), yields \begin{eqnarray} \hbox{$\sup_{\s\in\ms}$}\E_{n,k}[h_n^{-r}\{K_h({\mathbf s}-{\mathbf S})\}^2]~=~O_p(1). \label{one} \end{eqnarray} Therefore, we know that \begin{eqnarray} &&\phantom{~=~}\hbox{$\sup_{\x\in\mx,\,\theta\in\mbtv}$}|\hat{e}^{(1)}_{n,k}({\mathbf x},\theta,\mathbf{P}_0)-e_{n,k}^{(1)}({\mathbf x},\theta,\mathbf{P}_0)| \nonumber\\ &&~\leq~ c\,\hbox{$\sup_{\s\in\ms}$} \E_{n,k} \{|\hat{D}_N({\mathbf X})h_n^{-r}K_h({\mathbf s}-{\mathbf S})| \}\nonumber\\ &&~\leq~ c\,h_n^{-r/2}\{\E_{n,k} [\{\hat{D}_N({\mathbf X})\}^2] \hbox{$\sup_{\s\in\ms}$}\E_{n,k}[h_n^{-r}\{K_h({\mathbf s}-{\mathbf S})\}^2]\}^{1/2} \nonumber\\ &&~=~O_p(h_n^{-r/2}s_N), \label{mnk} \end{eqnarray} where the first step uses the boundedness of $T\psi(Y,\theta)$, the second step is due to H\"older's inequality and the last step holds by (\ref{sn}) and (\ref{one}). Combining (\ref{hmbe}) and (\ref{mnk}) yields that \begin{eqnarray} &&\phantom{~=~}\hbox{$\sup_{\x\in\mx,\,\theta\in\mbtv}$}|\hat{e}^{(1)}_{n,k}({\mathbf x},\theta,\hat{\mathbf{P}}_k)-e_{n,k}^{(1)}({\mathbf x},\theta,\mathbf{P}_0)| \nonumber\\ &&~=~O_p\{h_n^{-2}\alpha_n^2+h_n^{-1}\gamma_n\alpha_n+\alpha_n+h_n^{-(r/2+1)}\alpha_n s_N+h_n^{-r/2}s_N\} \nonumber\\ &&~=~O_p\{h_n^{-2}\alpha_n^2+h_n^{-1}\gamma_n\alpha_n+\alpha_n+h_n^{-r/2}s_N\} ~=~O_p\{a_{n,N}^{(2)}\}, \label{an2} \end{eqnarray} where the second step holds by the fact that $h_n^{-(r/2+1)}\alpha_n s_N=o(h_n^{-r/2}s_N)$ because we assume $h_n^{-1}\alpha_n=o(1)$.
Now, we handle the error $e_{n,k}^{(1)}({\mathbf x},\theta,\mathbf{P}_0)-e^{(1)}({\mathbf x},\theta,\mathbf{P}_0)$. Under Assumptions \ref{akernel_qte} (i)--(ii) and the fact that $\{\{\pi^*({\mathbf X})\}^{-1}T\psi(Y,\theta):\theta\in\mb(\vt,{\varepsilon})\}$ is a VC class with a bounded envelope function $\hbox{$\sup_{\theta\in\mbtv}$}[\{\pi^*({\mathbf X})\}^{-1}T|\psi(Y,\theta)|]$ from Assumption \ref{api}, Lemma B.4 of \citet{escanciano2014uniform} gives that \begin{eqnarray} \hbox{$\sup_{\x\in\mx,\,\theta\in\mbtv}$}|e_{n,k}^{(1)}({\mathbf x},\theta,\mathbf{P}_0)-{\cal E}\{e_{n,k}^{(1)}({\mathbf x},\theta,\mathbf{P}_0)\}|~=~O_p(\gamma_n). \label{pt2} \end{eqnarray} Further, under Assumptions \ref{akernel_qte}, standard arguments based on $d$th order Taylor's expansion of $e^{(1)}({\mathbf x},\theta,\mathbf{P}_0)$ yield that \begin{eqnarray} \hbox{$\sup_{\x\in\mx,\,\theta\in\mbtv}$}|{\cal E}\{e_{n,k}^{(1)}({\mathbf x},\theta,\mathbf{P}_0)\}-e^{(1)}({\mathbf x},\theta,\mathbf{P}_0)|~=~O(h_n^d). \label{pt3} \end{eqnarray} Combining (\ref{an2}), (\ref{pt2}) and (\ref{pt3}) yields \begin{eqnarray} \hbox{$\sup_{\x\in\mx,\,\theta\in\mbtv}$}|\hat{e}^{(1)}_{n,k}({\mathbf x},\theta,\hat{\mathbf{P}}_k)-e^{(1)}({\mathbf x},\theta,\mathbf{P}_0)|~=~O_p\{a_{n}^{(1)}+a_{n,N}^{(2)}\}. \label{num} \end{eqnarray} Similar arguments imply that \begin{eqnarray} \hbox{$\sup_{\x\in\mx}$}|\hat{e}^{(0)}_{n,k}({\mathbf x},\hat{\mathbf{P}}_k)-e^{(0)}({\mathbf x},\mathbf{P}_0)|~=~O_p\{a_{n}^{(1)}+a_{n,N}^{(2)}\}, \label{deno} \end{eqnarray} where $\hat{e}^{(0)}_{n,k}({\mathbf x},\mathbf{P})\equiv\hat{e}^{(0)}_{n,k}({\mathbf x},\theta,\mathbf{P})$ and $e^{(0)}({\mathbf x},\mathbf{P})\equiv e^{(0)}({\mathbf x},\theta,\mathbf{P})$.
Therefore, we have \begin{eqnarray*} &&\phantom{~=~}\hbox{$\sup_{\x\in\mx,\,\theta\in\mbtv}$}|\hat{\phi}_{n,k}({\mathbf x},\theta,\hat{\mathbf{P}}_k)-\tilde{\phi}({\mathbf x},\theta,\mathbf{P}_0)| \nonumber\\ &&~=~\hbox{$\sup_{\x\in\mx,\,\theta\in\mbtv}$}|\{\hat{e}^{(0)}_{n,k}({\mathbf x},\hat{\mathbf{P}}_k)\}^{-1}\hat{e}^{(1)}_{n,k}({\mathbf x},\theta,\hat{\mathbf{P}}_k)-\{e^{(0)}({\mathbf x},\mathbf{P}_0)\}^{-1}e^{(1)}({\mathbf x},\theta,\mathbf{P}_0)| \\ &&~\leq~\hbox{$\sup_{\x\in\mx,\,\theta\in\mbtv}$}|\{\hat{e}^{(0)}_{n,k}({\mathbf x},\hat{\mathbf{P}}_k)\}^{-1}\{\hat{e}^{(1)}_{n,k}({\mathbf x},\theta,\hat{\mathbf{P}}_k)-e^{(1)}({\mathbf x},\theta,\mathbf{P}_0)\}|+ \\ &&\phantom{~=~}\hbox{$\sup_{\x\in\mx,\,\theta\in\mbtv}$}|[\{\hat{e}^{(0)}_{n,k}({\mathbf x},\hat{\mathbf{P}}_k)\}^{-1}-\{e^{(0)}({\mathbf x},\mathbf{P}_0)\}^{-1}]e^{(1)}({\mathbf x},\theta,\mathbf{P}_0)| \\ &&~=~O_p\{a_{n}^{(1)}+a_{n,N}^{(2)}\}, \end{eqnarray*} where the last step follows from the fact that $a_{n}^{(1)}+a_{n,N}^{(2)}=o(1)$, and repeated use of (\ref{num}) and (\ref{deno}) as well as Assumptions \ref{api} and \ref{akernel_qte} (ii).
\subsection{Proof of Proposition \ref{thbn}} Considering \begin{eqnarray*} \hat{\phi}_{n,k}({\mathbf x},\theta,\hat{\mathbf{P}}_k)~\equiv~ \{\hat{e}^{(0)}_{n,k}({\mathbf x},\theta,\hat{\mathbf{P}}_k)\}^{-1}\hat{e}^{(1)}_{n,k}({\mathbf x},\theta,\hat{\mathbf{P}}_k)\equiv\{\hat{e}^{(0)}_{n,k}({\mathbf x},\hat{\mathbf{P}}_k)\}^{-1}\hat{e}^{(1)}_{n,k}({\mathbf x},\theta,\hat{\mathbf{P}}_k), \end{eqnarray*} with \begin{eqnarray*} \hat{e}^{(1)}_{n,k}({\mathbf x},\theta,\mathbf{P})~\equiv~ h_n^{-r}\E_{n,k}[\{\hat{\pi}_N({\mathbf X})\}^{-1}T \{I(Y<\theta)-\tau\}K_h\{\mathbf{P}^{\rm T}({\mathbf x}-{\mathbf X})\}], \end{eqnarray*} it is obvious that, given $\mathcal{L}$, \begin{eqnarray*} \{\hat{\phi}_{n,k}({\mathbf X},\theta,\hat{\mathbf{P}}_k):\theta\in\mb(\vt,{\varepsilon})\}\subset\{\hat{\phi}_{n,k}({\mathbf X},\theta_i,\hat{\mathbf{P}}_k):i=1,\ldots,n+1\}, \end{eqnarray*} for any $\theta_1<Y_{(1)}$, $\theta_i\in[Y_{(i-1)},Y_{(i)})$ $(i=2,\ldots,n)$ and $\theta_{n+1}\geq Y_{(n)}$, where $Y_{(i)}$ is the $i$th order statistic of $\{Y_i:i=1,\ldots,n\}$. Therefore, the set $\{\hat{\phi}_{n,k}({\mathbf X},\theta,\hat{\mathbf{P}}_k):\theta\in\mb(\vt,{\varepsilon})\}$ contains at most $(n+1)$ different functions given $\mathcal{L}$. This, combined with (\ref{mm}), implies the set \begin{eqnarray*} \mathcal{P}_{n,k}~\equiv~\{\hat{\phi}_{n,k}({\mathbf X},\theta,\hat{\mathbf{P}}_k)-\phi^*({\mathbf X},\theta):\theta\in\mb(\vt,{\varepsilon})\} \end{eqnarray*} satisfies $N_{[\,]}\{\eta,\mathcal{P}_{n,k}\mid \mathcal{L},L_2({\mathbb P}_{\mathbf X})\}\leq c\,(n+1)\eta^{-1}$.
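The counting argument above is easy to check numerically. Below is a minimal, self-contained sketch (all data, the propensity values, the single-index scores, the Gaussian kernel, and the bandwidth are synthetic, illustrative stand-ins, not the paper's choices): since $\theta$ enters $\hat{\phi}_{n,k}({\mathbf x},\theta,\hat{\mathbf{P}}_k)$ only through the indicators $I(Y_i<\theta)$, the estimator takes at most $n+1$ distinct values as $\theta$ varies.

```python
import numpy as np

# Hypothetical illustration: phi_hat(x, theta) = e1_hat / e0_hat depends on
# theta only through the indicators I(Y_i < theta), so over all theta it
# takes at most n + 1 distinct values -- the counting fact used in the proof.
rng = np.random.default_rng(0)
n, tau, h = 20, 0.5, 0.7
S = rng.normal(size=n)            # stand-in single-index scores P_0^T X_i
Y = S + rng.normal(size=n)        # outcomes
T = rng.integers(0, 2, size=n)    # labeling indicators
pi_hat = np.full(n, 0.5)          # crude propensity estimate (illustrative)

def K(u):
    """Gaussian kernel (illustrative choice)."""
    return np.exp(-u ** 2 / 2) / np.sqrt(2 * np.pi)

def phi_hat(s0, theta):
    """Kernel-smoothed ratio e1_hat / e0_hat at index value s0."""
    w = K((s0 - S) / h)
    num = np.sum(w * T / pi_hat * ((Y < theta) - tau))
    den = np.sum(w * T / pi_hat)
    return num / den

# Sweep a fine grid of theta values and count distinct estimator values.
thetas = np.linspace(Y.min() - 1.0, Y.max() + 1.0, 500)
vals = {round(phi_hat(0.3, th), 12) for th in thetas}
assert len(vals) <= n + 1  # at most n + 1 distinct functions of theta
```

The grid is much finer than $n$, yet the estimator never produces more than $n+1$ distinct values, mirroring the bracketing-number bound $c\,(n+1)\eta^{-1}$ derived above.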
\section{Additional simulation results}\label{sm_simulations} We present here in Tables \ref{table_supp_efficiency} (efficiency) and \ref{table_supp_infernce} (inference) the results of our simulations for the cases with the null and double index outcome models (d)--(e); see Section \ref{sec_simulations} for detailed descriptions of the simulation setups. In the null model (d), where $Y$ and ${\mathbf X}$ are independent, the unlabeled data cannot help the estimation in theory, so the supervised and SS methods unsurprisingly have close efficiencies. When the outcome model is (e), our SS estimators show significant superiority over the supervised competitors and even outperform the ``oracle'' supervised estimators most of the time. As regards inference in models (d) and (e), our methods still produce satisfactory results, analogous in pattern to those in Table \ref{table_inferece} of Section \ref{sec_simulations}. The quantities in Tables \ref{table_supp_efficiency} and \ref{table_supp_infernce} again confirm the advantage of our SS estimators over their supervised counterparts in terms of robustness and efficiency, which the simulation results in Section \ref{sec_simulations} have already demonstrated in detail.
\begin{table} \def~{\hphantom{0}} \caption{Efficiencies of the ATE and the QTE estimators relative to the corresponding oracle supervised estimators when $p=10$; \textcolor{black}{see Remark \ref{remark_interpretation_RE} for interpretations of these relative efficiencies.} Here\textcolor{black}{,} $n$ denotes the labeled data size, $p$ the number of covariates, $q$ the model sparsity, $m({\mathbf X})\equiv{\cal E}(Y\mid{\mathbf X})$, $\pi({\mathbf X})\equiv{\cal E}(T\mid{\mathbf X})$, $\hat{\pi}({\mathbf X})$ \textcolor{black}{--} the estimated propensity score, Lin \textcolor{black}{--} logistic regression of $T$ vs. ${\mathbf X}$\textcolor{black}{,} and Quad \textcolor{black}{--} logistic regression of $T$ vs. $({\mathbf X}^{\rm T},{\mathbf X}_{[1]}^2,\ldots,{\mathbf X}_{[p]}^2)^{\rm T}$; KS$_1/$KS$_2$ represents kernel smoothing on the one$/$two direction(s) selected by linear regression$/$\textcolor{black}{sliced} inverse regression; PR \textcolor{black}{denotes} parametric regression\textcolor{black}{,} and ORE \textcolor{black}{denotes the} oracle relative efficiency.
The \textbf{\textcolor{navyblue}{blue}} color \textcolor{black}{indicates} the best efficiency in each case.}{ \resizebox{\textwidth}{!}{ \begin{tabular}{ccc||ccc|ccc||ccc|ccc||c} \hline \multicolumn{3}{c||}{\multirow{2}{*}{ATE}} & \multicolumn{6}{c||}{$n=200$} & \multicolumn{6}{c||}{$n=500$} & \multirow{3}{*}{ORE} \\ \cline{4-15} & & & \multicolumn{3}{c|}{Supervised} & \multicolumn{3}{c||}{\textbf{SS}}& \multicolumn{3}{c|}{Supervised} & \multicolumn{3}{c||}{\textbf{SS}}& \\ $m({\mathbf X})$ & $\pi({\mathbf X})$ & $\hat{\pi}({\mathbf X})$ & KS$_1$ & KS$_2$ & PR & KS$_1$ & KS$_2$ & PR & KS$_1$ & KS$_2$ & PR & KS$_1$ & KS$_2$ & PR & \\ \hline \multirow{6}{*}{(d)} & (i) & Lin & 0.89 & 0.83 & 0.87 & \textcolor{navyblue}{\bf 0.95} & 0.94 & 0.91 & 0.93 & 0.95 & 0.94 & 0.93 & \textcolor{navyblue}{\bf 0.97} & 0.93 & 1.00 \\ & & Quad & 0.68 & 0.50 & 0.64 & 0.95 & \textcolor{navyblue}{\bf 0.96} & 0.92 & 0.87 & 0.87 & 0.87 & 0.93 & \textcolor{navyblue}{\bf 0.96} & 0.93 & 1.00 \\ & (ii) & Lin & 0.86 & 0.85 & 0.87 & 0.92 & \textcolor{navyblue}{\bf 0.93} & 0.92 & 0.96 & 0.94 & 0.97 & 0.99 & \textcolor{navyblue}{\bf 1.00} & 0.97 & 1.00 \\ & & Quad & 0.75 & 0.77 & 0.67 & 0.92 & \textcolor{navyblue}{\bf 0.94} & 0.92 & 0.93 & 0.91 & 0.92 & 1.00 & \textcolor{navyblue}{\bf 1.01} & 0.98 & 1.00 \\ & (iii) & Lin & 0.85 & 0.84 & 0.85 & 0.88 & \textcolor{navyblue}{\bf 0.91} & 0.86 & 0.93 & 0.95 & 0.94 & 0.94 & \textcolor{navyblue}{\bf 0.96} & 0.94 & 1.00 \\ & & Quad & 0.71 & 0.72 & 0.72 & 0.90 & \textcolor{navyblue}{\bf 0.92} & 0.87 & 0.92 & 0.93 & 0.93 & 0.94 & \textcolor{navyblue}{\bf 0.97} & 0.95 & 1.00 \\ \hline \multirow{6}{*}{(e)} & (i) & Lin & 0.76 & 0.75 & 0.41 & 1.73 & \textcolor{navyblue}{\bf 1.80} & 0.77 & 0.86 & 0.87 & 0.64 & 2.02 & \textcolor{navyblue}{\bf 2.04} & 0.88 & 5.41 \\ & & Quad & 0.68 & 0.70 & 0.29 & 1.74 & \textcolor{navyblue}{\bf 1.78} & 0.76 & 0.84 & 0.83 & 0.57 & 2.02 & \textcolor{navyblue}{\bf 2.03} & 0.88 & 5.41 \\ & (ii) & Lin & 0.73 & 0.63 & 0.24 & 
\textcolor{navyblue}{\bf 1.18} & 0.94 & 0.34 & 0.81 & 0.71 & 0.15 & \textcolor{navyblue}{\bf 1.35} & 1.18 & 0.19 & 3.93 \\ & & Quad & 0.69 & 0.59 & 0.27 & \textcolor{navyblue}{\bf 1.25} & 1.00 & 0.38 & 0.85 & 0.76 & 0.18 & \textcolor{navyblue}{\bf 1.41} & 1.23 & 0.21 & 3.93 \\ & (iii) & Lin & 0.75 & 0.71 & 0.41 & \textcolor{navyblue}{\bf 1.60} & 1.57 & 0.72 & 0.74 & 0.77 & 0.53 & 1.32 & \textcolor{navyblue}{\bf 1.43} & 0.65 & 4.78 \\ & & Quad & 0.74 & 0.75 & 0.52 & \textcolor{navyblue}{\bf 1.83} & 1.75 & 0.92 & 0.79 & 0.82 & 0.56 & 1.53 & \textcolor{navyblue}{\bf 1.67} & 0.85 & 4.78 \\ \hline \multicolumn{16}{c}{} \\ \hline \multicolumn{3}{c||}{\multirow{2}{*}{QTE}} & \multicolumn{6}{c||}{$n=200$} & \multicolumn{6}{c||}{$n=500$} & \multirow{3}{*}{ORE} \\ \cline{4-15} & & & \multicolumn{3}{c|}{Supervised} & \multicolumn{3}{c||}{\textbf{SS}}& \multicolumn{3}{c|}{Supervised} & \multicolumn{3}{c||}{\textbf{SS}}& \\ $m({\mathbf X})$ & $\pi({\mathbf X})$ & $\hat{\pi}({\mathbf X})$ & KS$_1$ & KS$_2$ & PR & KS$_1$ & KS$_2$ & PR & KS$_1$ & KS$_2$ & PR & KS$_1$ & KS$_2$ & PR & \\ \hline \multirow{6}{*}{(d)} & (i) & Lin & 0.87 & 0.86 & 0.78 & 0.92 & \textcolor{navyblue}{\bf 0.95} & 0.79 & 0.93 & 0.92 & 0.92 & 0.98 & \textcolor{navyblue}{\bf 0.98} & 0.92 & 1.00 \\ & & Quad & 0.72 & 0.73 & 0.55 & 0.92 & \textcolor{navyblue}{\bf 0.95} & 0.79 & 0.89 & 0.88 & 0.89 & \textcolor{navyblue}{\bf 0.99} & 0.99 & 0.92 & 1.00 \\ & (ii) & Lin & 0.87 & 0.86 & 0.89 & 0.93 & \textcolor{navyblue}{\bf 0.94} & 0.89 & 0.92 & 0.90 & \textcolor{navyblue}{\bf 0.99} & 0.95 & 0.93 & 0.97 & 1.00 \\ & & Quad & 0.71 & 0.71 & 0.71 & 0.94 & \textcolor{navyblue}{\bf 0.96} & 0.90 & 0.89 & 0.89 & 0.95 & 0.96 & 0.94 & \textcolor{navyblue}{\bf 0.98} & 1.00 \\ & (iii) & Lin & 0.83 & 0.82 & 0.85 & \textcolor{navyblue}{\bf 0.92} & 0.92 & 0.83 & 0.94 & 0.93 & 0.95 & 0.96 & \textcolor{navyblue}{\bf 0.97} & 0.96 & 1.00 \\ & & Quad & 0.81 & 0.78 & 0.71 & 0.95 & \textcolor{navyblue}{\bf 0.95} & 0.83 & 0.92 & 0.92 & 0.94 
& 0.97 & \textcolor{navyblue}{\bf 0.99} & 0.95 & 1.00 \\ \hline \multirow{6}{*}{(e)} & (i) & Lin & 0.82 & 0.79 & 0.78 & \textcolor{navyblue}{\bf 1.30} & 1.23 & 1.13 & 0.85 & 0.84 & 0.89 & 1.37 & 1.34 & \textcolor{navyblue}{\bf 1.42} & 1.85 \\ & & Quad & 0.65 & 0.68 & 0.61 & \textcolor{navyblue}{\bf 1.30} & 1.24 & 1.11 & 0.87 & 0.86 & 0.85 & 1.39 & 1.35 & \textcolor{navyblue}{\bf 1.42} & 1.85 \\ & (ii) & Lin & 0.61 & 0.55 & 0.49 & \textcolor{navyblue}{\bf 0.92} & 0.73 & 0.65 & 0.81 & 0.71 & 0.40 & \textcolor{navyblue}{\bf 1.16} & 0.97 & 0.48 & 1.78 \\ & & Quad & 0.62 & 0.56 & 0.48 & \textcolor{navyblue}{\bf 0.99} & 0.80 & 0.70 & 0.82 & 0.73 & 0.44 & \textcolor{navyblue}{\bf 1.23} & 1.04 & 0.53 & 1.78 \\ & (iii) & Lin & 0.75 & 0.70 & 0.73 & 1.13 & 1.08 & \textcolor{navyblue}{\bf 1.22} & 0.82 & 0.82 & 0.85 & \textcolor{navyblue}{\bf 1.34} & 1.33 & 1.18 & 1.93 \\ & & Quad & 0.78 & 0.74 & 0.84 & 1.28 & 1.23 & \textcolor{navyblue}{\bf 1.44} & 0.86 & 0.87 & 0.85 & \textcolor{navyblue}{\bf 1.45} & 1.44 & 1.31 & 1.93 \\ \hline \end{tabular} }} \label{table_supp_efficiency} \end{table} \begin{table} \def~{\hphantom{0}} \caption{Inference based on the SS estimators \underline{\textcolor{black}{using} kernel smoothing on the direction selected by linear regression \textcolor{black}{(KS$_1$)}} \textcolor{black}{as the choice of the working outcome model, for the ATE and the QTE,} when $n=500$ and $p=10$. Here\textcolor{black}{,} ESE is the empirical standard error, \textcolor{black}{Bias is the empirical bias,} ASE \textcolor{black}{is} the average of the estimated standard errors\textcolor{black}{,} and CR \textcolor{black}{is} the \textcolor{black}{empirical} coverage rate of the 95\% confidence intervals. \textcolor{black}{All o}ther notations are the same as in Table \ref{table_supp_efficiency}.
The \textbf{{\color{navyblue} blue}} color \textcolor{black}{highlights settings where} the propensity scor\textcolor{black}{e} and the outcome mode\textcolor{black}{l} are \textcolor{black}{both} correctly specified, while the \textbf{boldfaces} \textcolor{black}{denote ones where} the propensity scor\textcolor{black}{e is} correctly specified but the outcome mode\textcolor{black}{l is} not.}{ \begin{tabular}{ccc|cccc|cccc} \hline & & & \multicolumn{4}{c|}{ATE} & \multicolumn{4}{c}{QTE} \\ $m({\mathbf X})$ & $\pi({\mathbf X})$ & $\hat{\pi}({\mathbf X})$ & ESE & Bias & ASE & CR & ESE & Bias & ASE & CR \\ \hline & (i) & {\color{navyblue} \textbf{Lin}} & {\color{navyblue} \textbf{0.08}} & {\color{navyblue} \textbf{0.00}} & {\color{navyblue} \textbf{0.07}} & {\color{navyblue} \textbf{0.94}} & {\color{navyblue} \textbf{0.09}} & {\color{navyblue} \textbf{0.01}} & {\color{navyblue} \textbf{0.10}} & {\color{navyblue} \textbf{0.96}} \\ & & {\color{navyblue} \textbf{Quad}} & {\color{navyblue} \textbf{0.08}} & {\color{navyblue} \textbf{0.00}} & {\color{navyblue} \textbf{0.07}} & {\color{navyblue} \textbf{0.94}} & {\color{navyblue} \textbf{0.09}} & {\color{navyblue} \textbf{0.01}} & {\color{navyblue} \textbf{0.10}} & {\color{navyblue} \textbf{0.95}} \\ & (ii) & Lin & 0.07 & 0.00 & 0.07 & 0.95 & 0.08 & 0.01 & 0.09 & 0.94 \\ & & Quad & 0.06 & 0.00 & 0.07 & 0.95 & 0.08 & 0.01 & 0.09 & 0.95 \\ & (iii) & Lin & 0.07 & 0.00 & 0.07 & 0.94 & 0.08 & 0.01 & 0.09 & 0.97 \\ \multirow{-6}{*}{(d)} & & {\color{navyblue} \textbf{Quad}} & {\color{navyblue} \textbf{0.07}} & {\color{navyblue} \textbf{0.00}} & {\color{navyblue} \textbf{0.06}} & {\color{navyblue} \textbf{0.93}} & {\color{navyblue} \textbf{0.08}} & {\color{navyblue} \textbf{0.01}} & {\color{navyblue} \textbf{0.09}} & {\color{navyblue} \textbf{0.96}} \\ \hline & (i) & \textbf{Lin} & \textbf{0.12} & \textbf{0.00} & \textbf{0.11} & \textbf{0.93} & \textbf{0.16} & \textbf{0.03} & \textbf{0.17} & \textbf{0.94} \\ & & \textbf{Quad} & 
\textbf{0.12} & \textbf{0.00} & \textbf{0.11} & \textbf{0.94} & \textbf{0.16} & \textbf{0.03} & \textbf{0.17} & \textbf{0.94} \\ & (ii) & Lin & 0.10 & 0.04 & 0.11 & 0.95 & 0.15 & 0.06 & 0.16 & 0.96 \\ & & Quad & 0.10 & 0.04 & 0.11 & 0.95 & 0.14 & 0.05 & 0.16 & 0.95 \\ & (iii) & Lin & 0.12 & 0.00 & 0.11 & 0.91 & 0.15 & 0.03 & 0.16 & 0.96 \\ \multirow{-6}{*}{(e)} & & \textbf{Quad} & \textbf{0.11} & \textbf{0.00} & \textbf{0.10} & \textbf{0.91} & \textbf{0.14} & \textbf{0.02} & \textbf{0.15} & \textbf{0.95} \\ \hline \end{tabular} } \label{table_supp_infernce} \end{table} \section{Supplement to the data analysis in Section \ref{sec_data_analysis}} \label{sm_data_analysis} We present in Table \ref{table_data_analysis} the \textcolor{black}{detailed} numerical results of the data analysis in Section \ref{sec_data_analysis}, which \textcolor{black}{were} illustrated \textcolor{black}{in} Figures \ref{figure_ate} and \ref{figure_qte}, \textcolor{black}{in the course of our discussion of the analysis and the results.} \begin{table}[H] \def~{\hphantom{0}} \caption{$95\%$ confidence intervals of the ATE and the QTE in the HIV Drug Resistance data. Here\textcolor{black}{,} $m$ is the position of mutatio\textcolor{black}{n} regarded as the treatment. In the first row of the table, the notation\textcolor{black}{s} \textcolor{black}{of the form} \textcolor{black}{`A-B'} \textcolor{black}{refer to} estimating the propensity score and the outcome model by the methods \textcolor{black}{`A'} and \textcolor{black}{`B'}, respectively. Lin stands for logistic regression of $T$ vs. ${\mathbf X}$; KS$_2$ \textcolor{black}{--} kernel smoothing on the two directions selected by \textcolor{black}{sliced} inverse regression, PR \textcolor{black}{--} parametric regression\textcolor{black}{;} and RF \textcolor{black}{--} random forest. The abbreviations Sup and SS refer to supervised and SS estimators, respectively.
The \textbf{\textcolor{navyblue}{blue}} color \textcolor{black}{indicates} the shortest SS confidence interval in each case.}{ \resizebox{\textwidth}{!}{ \begin{tabular}{cc|cc|cc|cc} \hline & \multirow{2}{*}{$m$} & \multicolumn{2}{c|}{\bf{Lin-KS$_2$}} & \multicolumn{2}{c|}{\bf{Lin-PR}} & \multicolumn{2}{c}{\bf{RF-RF}} \\ & & Sup & \bf{SS} & Sup & \bf{SS} & Sup & \bf{SS} \\ \hline \multirow{8}{*}{ATE} & 39 & $[ 0.13 , 0.43 ]$ & $[ 0.13 , 0.38 ]$ & $[ 0.10 , 0.41 ]$ & $[ 0.11 , 0.36 ]$ & $[ 0.13 , 0.32 ]$ & $\textcolor{navyblue}{\bf [ 0.13 , 0.32 ]}$ \\ & 69 & $[ 0.12 , 0.44 ]$ & $[ 0.19 , 0.44 ]$ & $[ 0.10 , 0.42 ]$ & $[ 0.18 , 0.43 ]$ & $[ 0.19 , 0.40 ]$ & $\textcolor{navyblue}{\bf [ 0.24 , 0.43 ]}$ \\ & 75 & $[ 0.02 , 0.29 ]$ & $[ 0.08 , 0.32 ]$ & $[ 0.04 , 0.33 ]$ & $[ 0.07 , 0.33 ]$ & $[ 0.14 , 0.33 ]$ & $\textcolor{navyblue}{\bf [ 0.17 , 0.35 ]}$ \\ & 98 & $[ \hbox{-}0.02 , 0.37 ]$ & $[ 0.06 , 0.37 ]$ & $[ 0.01 , 0.40 ]$ & $[ 0.05 , 0.36 ]$ & $[ 0.10 , 0.29 ]$ & $\textcolor{navyblue}{\bf [ 0.13 , 0.33 ]}$ \\ & 123 & $[ \hbox{-}0.16 , 0.15 ]$ & $[ \hbox{-}0.12 , 0.13 ]$ & $[ \hbox{-}0.15 , 0.17 ]$ & $[ \hbox{-}0.10 , 0.15 ]$ & $[ \hbox{-}0.15 , 0.04 ]$ & $\textcolor{navyblue}{\bf [ \hbox{-}0.15 , 0.05 ]}$ \\ & 162 & $[ \hbox{-}0.16 , 0.19 ]$ & $[ \hbox{-}0.14 , 0.12 ]$ & $[ \hbox{-}0.16 , 0.18 ]$ & $[ \hbox{-}0.14 , 0.13 ]$ & $[ \hbox{-}0.13 , 0.07 ]$ & $\textcolor{navyblue}{\bf [ \hbox{-}0.12 , 0.09 ]}$ \\ & 184 & $[ 2.02 , 2.36 ]$ & $[ 2.08 , 2.35 ]$ & $[ 2.03 , 2.37 ]$ & $[ 2.03 , 2.30 ]$ & $[ 2.08 , 2.30 ]$ & $\textcolor{navyblue}{\bf [ 2.12 , 2.31 ]}$ \\ & 203 & $[ 0.08 , 0.50 ]$ & $[ 0.17 , 0.51 ]$ & $[ 0.00 , 0.45 ]$ & $[ 0.08 , 0.45 ]$ & $[ 0.14 , 0.33 ]$ & $\textcolor{navyblue}{\bf [ 0.20 , 0.38 ]}$ \\ \hline \multirow{8}{*}{QTE} & 39 & $[ 0.07 , 0.43 ]$ & $[ 0.12 , 0.38 ]$ & $[ 0.05 , 0.42 ]$ & $[ 0.09 , 0.36 ]$ & $[ \hbox{-}0.01 , 0.32 ]$ & $\textcolor{navyblue}{\bf [ 0.05 , 0.30 ]}$ \\ & 69 & $[ \hbox{-}0.14 , 0.16 ]$ & $\textcolor{navyblue}{\bf [ 
\hbox{-}0.06 , 0.18 ]}$ & $[ \hbox{-}0.14 , 0.17 ]$ & $[ \hbox{-}0.06 , 0.19 ]$ & $[ \hbox{-}0.13 , 0.22 ]$ & $[ \hbox{-}0.06 , 0.20 ]$ \\ & 75 & $[ \hbox{-}0.06 , 0.29 ]$ & $\textcolor{navyblue}{\bf [ \hbox{-}0.01 , 0.26 ]}$ & $[ \hbox{-}0.09 , 0.26 ]$ & $[ \hbox{-}0.04 , 0.23 ]$ & $[ 0.03 , 0.42 ]$ & $[ 0.11 , 0.39 ]$ \\ & 98 & $[ 0.01 , 0.34 ]$ & $[ 0.00 , 0.29 ]$ & $[ 0.03 , 0.38 ]$ & $[ 0.00 , 0.28 ]$ & $[ \hbox{-}0.04 , 0.37 ]$ & $\textcolor{navyblue}{\bf [ 0.02 , 0.30 ]}$ \\ & 123 & $[ \hbox{-}0.16 , 0.21 ]$ & $\textcolor{navyblue}{\bf [ \hbox{-}0.12 , 0.15 ]}$ & $[ \hbox{-}0.16 , 0.22 ]$ & $[ \hbox{-}0.13 , 0.15 ]$ & $[ \hbox{-}0.17 , 0.29 ]$ & $[ \hbox{-}0.10 , 0.18 ]$ \\ & 162 & $[ \hbox{-}0.25 , 0.07 ]$ & $\textcolor{navyblue}{\bf [ \hbox{-}0.23 , 0.02 ]}$ & $[ \hbox{-}0.23 , 0.09 ]$ & $[ \hbox{-}0.20 , 0.05 ]$ & $[ \hbox{-}0.22 , 0.16 ]$ & $[ \hbox{-}0.15 , 0.11 ]$ \\ & 184 & $[ 2.16 , 2.50 ]$ & $[ 2.22 , 2.49 ]$ & $[ 2.15 , 2.49 ]$ & $\textcolor{navyblue}{\bf [ 2.17 , 2.44 ]}$ & $[ 2.14 , 2.50 ]$ & $[ 2.23 , 2.50 ]$ \\ & 203 & $[ \hbox{-}0.15 , 0.34 ]$ & $[ 0.06 , 0.41 ]$ & $[ \hbox{-}0.14 , 0.34 ]$ & $[ 0.06 , 0.40 ]$ & $[ 0.01 , 0.40 ]$ & $\textcolor{navyblue}{\bf [ 0.09 , 0.36 ]}$ \\ \hline \end{tabular}} } \label{table_data_analysis} \end{table} \end{appendix} \bibliographystyle{imsart-nameyear}
\section{INTRODUCTION} \acp{UAV} introduced a new challenge to cellular networks by acting as flying \acp{UE} that have much higher elevation than ground users. In \ac{5G} and beyond networks, \acp{UAV} will be used for providing network services to ground and flying \acp{UE}. In \cite{Ahmadi_NFP17}, we introduced an architecture for \ac{NFP} in \ac{5G} and beyond networks. In this work, we aim to introduce and investigate the issues related to the resilience of airborne networks and in particular the introduced \acp{NFP}. \vspace*{-4mm} \section{Network Resilience} In the literature there are several definitions for the resilience of networks. In \cite{alliance20155g}, resilience is defined as the capability of the network to recover from failures. Sterbenz et al. in \cite{sterbenz2010resilience} define resilience as the ability of the network to provide and maintain an acceptable level of service in the face of various faults and challenges to normal operation. According to \cite{sterbenz2010resilience}, resilience disciplines are classified into two classes: challenge-tolerance-related and trustworthiness-related disciplines. The first class relates to the design of the system and includes survivability, disruption tolerance and traffic tolerance. The class of trustworthiness disciplines relates to system performance and includes dependability, security and performability. In studying the resilience of airborne networks, we follow the definition provided by \cite{sterbenz2010resilience} and consider all the mentioned disciplines. However, due to the special case of airborne networks, we need to emphasize some of the disciplines and add new ones. This is mainly because \cite{sterbenz2010resilience} focuses on fixed networks and misses features of wireless networks such as multi-operator environments and spectrum/infrastructure sharing.
\vspace*{-2mm} \section{\ac{NFP} features affecting the resilience} \vspace*{-1mm} When discussing the resilience of any type of network, we have to consider the specifications and limitations of the network and of its components. \acp{NFP} have unique features that affect their resilience. \vspace*{-4mm} \subsection{Mobility} \vspace*{-1mm} In an \ac{NFP}, e.g. the 3-layer architecture in \cite{Ahmadi_NFP17}, the \acp{HAP}, \acp{MAP} and \acp{LAP} are not fixed and are able to change their position and possibly their altitude. On the one hand, mobility introduces challenges like possible collisions among the flying platforms, backhaul challenges, and connection loss. On the other hand, mobility enables the network to proactively respond to unpredicted events like \ac{UAV} failures or a sudden appearance of a demand hotspot. In the first scenario, the platforms, especially the \acp{LAP}, can re-organise to perform self-healing, while in the second scenario an \ac{LAP} can move closer to the demand hotspot, reducing the distance between the access point and the \acp{UE}, while the other \acp{LAP} reshape to cover the rest of the area. Although mobility is not considered in the classification of \cite{sterbenz2010resilience}, it will have a significant influence on challenge-tolerance-related disciplines like disruption tolerance and traffic tolerance, as can be seen in the examples above. \vspace*{-2mm} \subsection{Energy limitations} \vspace*{-1mm} Most of the existing work \cite{Ahmadi_NFP17,naqvi2018drone} considers battery-powered \acp{UAV} as the \acp{LAP}. This means that the \acp{UAV} have a limited operation time and need to fly to charging stations, imposing a (predictable) disruption on the network. This feature clearly illustrates the importance of reliable self-organising mechanisms in the network.
In these scenarios, the self-organising system can either seamlessly replace the leaving \ac{UAV} with another (redundant) \ac{UAV}, or change the network parameters, including the positions of the \acp{LAP}, to deal with the disruption. \vspace*{-1mm} \subsection{Physical vulnerabilities} Flying platforms are physically more vulnerable to accidental and intentional disruptions than fixed network components. Accidents that can take \acp{LAP} down include lightning, strong wind, and collisions with birds. Flying platforms can also be targets of intentional disruptions like shooting or spoofing. Moreover, intruding drones pretending to be members of the \ac{NFP} can disrupt the network functionality without causing problems to any single platform. \vspace*{-1mm} \subsection{Multi-operator environment} Open air is not a restricted area, except for zones restricted by the authorities, and several \ac{NFP} operators and other professional/amateur drone operators can coexist. This dynamic environment will increase the chance of collisions, turbulence, interference, and line-of-sight blockage, which affects both challenge-tolerance- and trustworthiness-related features of the network. \subsection{Secondary duties} Depending on the design and needs of the system, flying platforms, especially \acp{LAP} and \acp{MAP}, can have secondary duties like surveillance or protecting the network by spoofing intruding \acp{UAV} \cite{naqvi2018drone}. As in the energy-limitation case, this may cause the \acp{UAV} to leave their network duties. Unlike battery recharging, however, secondary duties are not always predictable, especially in the case of intrusion protection, which makes it more challenging to provide a redundant \ac{UAV} that seamlessly takes over the network duties of the leaving \ac{UAV}. In these scenarios, self-healing mechanisms help the network to maintain its \ac{QoS}. Figure~\ref{fig:SystemMOdel} shows the classification of the aforementioned features.
\begin{figure} \centering \includegraphics[width=0.8\columnwidth]{PIMRCpaper.png} \setlength{\belowcaptionskip}{-6mm} \caption{Classification of NFP features} \label{fig:SystemMOdel} \end{figure} \section{Strategies to enhance resilience of \acp{NFP}} Most of the aforementioned features of \acp{NFP} that influence their resilience are classified as challenge-tolerance-related, which are not directly measurable. The challenges related to these features should be addressed in the design and engineering of the network. Although the resilience of a network cannot be measured based on its design and engineering, these aspects affect the dependability, security and performability of the network, which are measurable. The dynamic environment and the duties of \acp{NFP} require an architecture that enables autonomous reactions to different disruptions. Therefore, \acp{NFP} can benefit from \ac{SON} technologies. As defined in \cite{valente2017SON_survey}, \acp{SON} are adaptive and autonomous, and they are able to independently decide when or how to trigger certain actions based on interaction with the environment. In \cite{Ahmadi_NFP17} we proposed a multi-layer architecture for \acp{NFP} and studied \ac{NFP}-specific \ac{SON} features. To achieve better resilience for \acp{NFP}, we can use machine learning and blockchain technologies. \vspace*{-2mm} \subsection{Machine learning} \vspace*{-1mm} A survey of machine learning techniques used in \ac{SON} for wireless networks and their applications is provided in \cite{valente2017SON_survey}. To the best of our knowledge, there is no existing work that studies the application of \textit{machine learning} in self-organising airborne networks. The ResiliNets project \cite{sterbenz2010resilience} proposes a two-phase resilience strategy where the first phase is responsible for dealing with the disruption and maintaining an acceptable level of service, while the second phase aims to help the system evolve and prevent and/or prepare for similar future disruptions.
Phase one consists of the activities detect, defend, remediate and recover, while phase two comprises the activities diagnose and refine. A resilient airborne network can quickly detect disruptions and remediate. However, an \ac{NFP} can be even more resilient using carefully trained learning algorithms that can predict disruptions like battery depletion or even possible intrusions. Optimization and game-theoretic modeling are the most common methods in the existing works \cite{Ahmadi_NFP17} for planning the movement and position of flying platforms to maximize the coverage area or the delivered data rate to \acp{UE}. Several parameters affecting the decisions of these algorithms, which are traditionally set to an empirical mean value or chosen inaccurately, can instead be learnt by machine learning algorithms from previous experience \cite{ML5G}. This leads to faster and more accurate reactions to disruptions. \vspace*{-2mm} \subsection{Blockchain and smart contracts} Blockchain can be defined as a resilient, reliable, transparent and decentralized way of storing and distributing a database across all nodes of a network \cite{malki2016automating}. Blockchain can assist with the security of \acp{NFP} against intruders pretending to be members of the network and/or against spoofing attempts. In a multi-operator environment, smart contracts can significantly help to manage space and spectrum sharing. A smart contract is a contract whose terms are enforced and executed automatically as computer code among the participating entities, without the need for an enforcer or a third party. Smart contracts can facilitate the deployment of automated charging stations on the roofs of buildings, reducing the flight distance and time needed for \acp{UAV} to recharge their batteries. \vspace*{-2mm} \section{Conclusions} In this paper we introduced specific features of \acp{NFP} that affect their resilience.
Most of these features are related to the design and engineering of the network and are not easily measurable. We also identified machine learning and blockchain as two promising technologies that can improve the resilience of airborne networks. \vspace*{-2mm} \input{abbreviations} \bibliographystyle{IEEEtran}
\section{Introduction} \label{Sec:Intro} Statistical inverse problems arise naturally in many applications in physics, imaging, tomography, and generally in engineering and throughout the sciences. A prototypical example involves a domain $\mathcal O \subset \mathbb R^d$, some function $f: \mathcal O \to \mathbb R$ of interest, and indirect measurements $G(f)$ of $f$, where $G$ is a given solution (or `forward') operator of some partial differential equation (PDE) governed by the unknown coefficient $f$. A natural statistical observational model postulates data \begin{equation}\label{discrete} Y_i = G(f)(X_i) + \sigma W_i,~~i=1, \dots, N, \end{equation} where the $X_i$'s are design points at which the PDE solution $G(f)$ is measured, and where the $W_i$'s are standard Gaussian noise variables scaled by a noise level $\sigma>0$. The aim is then to infer $f$ from the data $(Y_i, X_i)_{i=1}^N$. The study of problems of this type has a long history in applied mathematics, see the monographs \cite{EHN96, KNS08}, although explicit \textit{statistical} noise models have been considered only more recently \cite{KS04,BHM04,BHMR07, HP08}. Recent survey articles on the subject are \cite{BB18,AMOS19} where many more references can be found. For many of the most natural PDEs -- such as the divergence form elliptic equation (\ref{Eq0}) considered below -- the resulting maps $G$ are \textit{non-linear} in $f$, and this poses various challenges: Among other things, the negative log-likelihood function associated to the model (\ref{discrete}), which equals the least squares criterion (see (\ref{Eq:JointLogLikelihood}) below for details), is then possibly \textit{non-convex}, and commonly used statistical algorithms (such as maximum likelihood estimators, Tikhonov regularisers or MAP estimates) defined as optimisers in $f$ of likelihood-based objective functions cannot reliably be computed by standard convex optimisation techniques.
While iterative optimisation methods (such as Landweber iteration) may overcome such challenges \cite{HNS95, Q00, KNS08, KSS09}, an attractive alternative methodology arises from the Bayesian approach to inverse problems advocated in an influential paper by Stuart \cite{S10}: One starts from a \textit{Gaussian process prior} $\Pi$ for the parameter $f$ or in fact, as is often necessary, for a suitable vector-space valued re-parameterisation $F$ of $f$. One then uses Bayes' theorem to infer the best posterior guess for $f$ given data $(Y_i, X_i)_{i=1}^N$. Posterior distributions and their expected values can be approximately computed via Markov Chain Monte Carlo (MCMC) methods (see, e.g., \cite{CRSW13, CMPS16, BGLFS17} and references therein) as soon as the forward map $G(\cdot)$ can be evaluated numerically, avoiding optimisation algorithms as well as the use of (potentially tedious, or non-existent) inversion formulas for $G^{-1}$; see Subsection \ref{Rem:Computation1} below for more discussion. The Bayesian approach has been particularly popular in application areas as it not only delivers an estimator for the unknown parameter $f$ but simultaneously provides uncertainty quantification methodology for the recovery algorithm via the probability distribution of $f|(Y_i, X_i)_{i=1}^N$ (see, e.g., \cite{DS16}). Conceptually related is the area of `probabilistic numerics' \cite{G19} in the noise-less case $\sigma=0$, with key ideas dating back to work by Diaconis \cite{D88}.
Without such guarantees, the interpretation of posterior-based inferences remains vague: the randomness of the prior may have propagated into the posterior in a way that does not `wash out' even when very informative data is available (e.g., small noise variance and/or large sample size $N$), rendering Bayesian methods potentially ambiguous for the purposes of valid statistical inference and uncertainty quantification. In the present article we attempt to advance our understanding of this problem area in the context of the following basic but representative example for a non-linear inverse problem: Let $g$ be a given smooth `source' function, and let $f: \mathcal O \to \mathbb R$ be an unknown conductivity parameter determining solutions $u=u_f$ of the PDE \begin{equation} \label{Eq0} \begin{cases} \nabla\cdot(f\nabla u)=g & \textrm{on}\ \Ocal,\\ u=0 & \textrm{on}\ \partial\Ocal, \end{cases} \end{equation} where we denote by $\nabla\cdot$ the divergence and by $\nabla$ the gradient operator, respectively. Under mild regularity conditions on $f$, and assuming that $f\ge K_{min}>0$ on $\Ocal$, standard elliptic theory implies that \eqref{Eq0} has a unique classical $C^2$-solution $G(f) \equiv u_f$. Identification of $f$ from an observed solution $u_f$ of this PDE has been considered in a large number of articles both in the applied mathematics and statistics communities -- we mention here \cite{R81, F83, HS85, KS85, A86, KL88, IK94, K01, S10, DS11, SS12, V13, DS16, BCDPW17, BGLFS17, NVW18, G19} and the many references therein. The main contributions of this article are as follows: We show that posterior means arising from a large class of Gaussian (or conditionally Gaussian) process priors for $f$ provide statistically consistent recovery (with explicit polynomial convergence rates as the number $N$ of measurements increases) of the unknown parameter $f$ in (\ref{Eq0}) from data in (\ref{discrete}).
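To make the observation model $Y_i = G(f)(X_i) + \sigma W_i$ concrete, here is a minimal one-dimensional analogue of the forward map $G$: a finite-difference solver for $(fu')'=g$ on $(0,1)$ with zero boundary conditions, from which noisy point evaluations are sampled. This is only an illustrative sketch under our own choices (the paper works on a $d$-dimensional domain $\Ocal$; the grid size, uniform design and solver are not the paper's).

```python
import numpy as np

def solve_pde_1d(f, g, n=200):
    """Finite-difference solution of (f u')' = g on (0,1), u(0) = u(1) = 0.

    One-dimensional sketch of the divergence-form equation; `f` (the
    conductivity, assumed bounded below by a positive constant) and the
    source `g` are callables on [0,1].
    """
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    fh = f(0.5 * (x[:-1] + x[1:]))       # conductivity at cell midpoints
    # Tridiagonal system for the interior values u_1, ..., u_{n-1}
    main = -(fh[:-1] + fh[1:]) / h**2
    off = fh[1:-1] / h**2
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, g(x[1:-1]))
    return x, u

def observe(f, g, N=100, sigma=0.01, seed=0):
    """Draw data Y_i = G(f)(X_i) + sigma * W_i at uniform design points X_i."""
    rng = np.random.default_rng(seed)
    x, u = solve_pde_1d(f, g)
    X = rng.uniform(0.0, 1.0, size=N)
    Y = np.interp(X, x, u) + sigma * rng.standard_normal(N)
    return X, Y
```

With $f\equiv 1$ and $g\equiv -2$ the exact solution is $u(x)=x(1-x)$, which this scheme reproduces up to rounding, since the central second difference is exact for quadratics.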
While we employ the theory of posterior contraction from Bayesian non-parametric statistics \cite{vdVvZ08, vdVvZ09, GvdV17}, the non-linear nature of the problem at hand leads to substantial additional challenges arising from the fact that a) the Hellinger distance induced by the statistical experiment is not naturally compatible with relevant distances on the actual parameter $f$ and that b) the `push-forward' prior induced on the information-theoretically relevant regression functions $G(f)$ is non-explicit (in particular, non-Gaussian) due to the non-linearity of the map $G$. Our proofs apply recent ideas from \cite{MNP19b} to the present elliptic situation. In the first step we show that the posterior distributions arising from the priors considered (optimally) solve the PDE-constrained regression problem of inferring $G(f)$ from data \eqref{discrete}. Such results can then be combined with a suitable `stability estimate' for the inverse map $G^{-1}$ to show that, for large sample size $N$, the posterior distributions concentrate around the true parameter generating the data at a convergence rate $N^{-\lambda}$ for some $\lambda>0$. We ultimately deduce the same rate of consistency for the posterior mean from quantitative uniform integrability arguments. The first results we obtain apply to a large class of `rescaled' Gaussian process priors similar to those considered in \cite{MNP19b}, addressing the need for additional a-priori regularisation of the posterior distribution in order to tame non-linear effects of the `forward map'. This rescaling of the Gaussian process depends on sample size $N$. From a non-asymptotic point of view this just reflects an adjustment of the covariance operator of the prior, but following \cite{D88} one may wonder whether a `fully Bayesian' solution of this non-linear inverse problem, based on a prior that does \textit{not} depend on $N$, is also possible.
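The two-step contraction argument just described can be summarised schematically; the exponents $\alpha$ and $\mu$ below are expository placeholders of our own, not the values established in the paper:

```latex
% Illustrative schematic only; $\alpha$ and $\mu$ are placeholder exponents.
\begin{gather*}
\Pi\big(f:\ \|G(f)-G(f_0)\|_{L^2} > M N^{-\alpha} \,\big|\, (Y_i,X_i)_{i=1}^N\big) \to 0
\quad \text{(regression step)},\\
\|f-f_0\| \le C\,\|G(f)-G(f_0)\|_{L^2}^{\mu},\quad \mu\in(0,1]
\quad \text{(stability estimate)},\\
\Longrightarrow\quad
\Pi\big(f:\ \|f-f_0\| > M' N^{-\alpha\mu} \,\big|\, (Y_i,X_i)_{i=1}^N\big) \to 0,
\quad \text{i.e. } \lambda = \alpha\mu.
\end{gather*}
```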
We show indeed that a hierarchical prior that randomises a finite truncation point in the Karhunen--Lo\`eve-type series expansion of the Gaussian base prior will also result in consistent recovery of the conductivity parameter $f$ in eq.~(\ref{Eq0}) from data (\ref{discrete}), at least if $f$ is smooth enough. \smallskip Let us finally discuss some related literature on statistical guarantees for Bayesian inversion: To the best of our knowledge, the only previous paper concerned with (frequentist) consistency of Bayesian inversion in the elliptic PDE (\ref{Eq0}) is by Vollmer \cite{V13}. The proofs in \cite{V13} share a similar general idea in that they rely on a preliminary treatment of the associated regression problem for $G(f)$, which is then combined with a suitable stability estimate for $G^{-1}$. However, the convergence rates obtained in \cite{V13} are only implicitly given and sub-optimal, also (unlike ours) for `prediction risk' in the PDE-constrained regression problem. Moreover, when specialised to the concrete non-linear elliptic problem (\ref{Eq0}) considered here, the results in Section 4 in \cite{V13} only hold for priors with bounded $C^\beta$-norms, such as `uniform wavelet type priors', similar to the ones used in \cite{NS17, N17, NS19} for different non-linear inverse problems. In contrast, our results hold for the more practical Gaussian process priors which are commonly used in applications, and which permit the use of tailor-made MCMC methodology -- such as the pCN algorithm discussed in Subsection \ref{Rem:Computation1} -- for computation. The results obtained in \cite{NVW18} for the maximum a posteriori (MAP) estimates associated to the priors studied here are closely related to our findings in several ways. Ultimately the proof methods in \cite{NVW18} are, however, based on variational methods and hence entirely different from the Bayesian ideas underlying our results.
Moreover, the MAP estimates in \cite{NVW18} are difficult to compute due to the lack of convexity of the forward map, whereas posterior means arising from Gaussian process priors admit explicit computational guarantees, see \cite{HSV14} and also Subsection \ref{Rem:Computation1} for more details. It is further of interest to compare our results to those recently obtained in \cite{AN19}, where the statistical version of the \textit{Calder\'on problem} is studied. There the `Dirichlet-to-Neumann map' of solutions to the PDE (\ref{Eq0}) is observed, corrupted by appropriate Gaussian matrix noise. In this case, as only boundary measurements of $u_f$ at $\partial \Ocal$ are available, the statistical convergence rates are only of order $\log^{-\gamma} (N)$ for some $\gamma>0$ (as $N \to \infty$), whereas our results show that when interior measurements of $u_f$ are available throughout $\Ocal$, the recovery rates improve to $N^{-\lambda}$ for some $\lambda>0$. There is of course a large literature on consistency in Bayesian \textit{linear} inverse problems with Gaussian priors; we only mention \cite{K11, R13, S13, KLS16, MNP17} and references therein. The non-linear case considered here is fundamentally more challenging and cannot be treated by the techniques from these papers -- however, some of the general theory we develop in the appendix provides novel proof methods also for the linear setting. \smallskip This paper is structured as follows. Section \ref{main} contains all the main results for the inverse problem arising with the PDE model (\ref{Eq0}). The proofs, which also include some theory for general non-linear inverse problems that is of independent interest, are given in Section \ref{Sec:Proofs} and Appendix \ref{App:GenInvProbl}. Finally, Appendix \ref{App:BoringThings} provides additional details on some facts used throughout the paper.
\section{Main results}\label{main} \subsection{A statistical inverse problem with elliptic PDEs} \label{Subsec:PrelimAndNotation} \subsubsection{Main notation} Throughout the paper, $\Ocal\subset\mathbb R^d,\ d \in \mathbb N$, is a given nonempty open and bounded set with smooth boundary $\partial\Ocal$ and closure $\bar\Ocal$. The spaces of continuous functions defined on $\Ocal$ and $\bar\Ocal$ are respectively denoted $C(\Ocal)$ and $C(\bar\Ocal)$, and endowed with the supremum norm $\|\cdot\|_\infty$. For positive integers $\beta\in\mathbb N$, $C^\beta(\Ocal)$ is the space of $\beta$-times differentiable functions with uniformly continuous derivatives; for non-integer $\beta>0$, $C^\beta(\Ocal)$ is defined as $$ C^\beta(\Ocal) = \Bigg\{f\in C^{\lfloor\beta\rfloor}(\Ocal):\forall |i| = \lfloor \beta\rfloor, \sup_{x,y\in\Ocal,x\neq y} \frac{|D^i f(x)-D^i f(y)|}{|x-y|^{\beta-\lfloor \beta\rfloor}}<\infty\Bigg\}, $$ where $\lfloor \beta\rfloor$ denotes the largest integer less than or equal to $\beta$, and for any multi-index $i=(i_1,\dots,i_d)$, $D^i$ is the $i$-th partial differential operator. $C^\beta(\Ocal)$ is normed by $$ \|f\|_{C^\beta(\Ocal)} = \sum_{|i|\le \lfloor \beta\rfloor}\sup_{x\in\Ocal}|D^i f(x)| +\sum_{|i| = \lfloor \beta\rfloor} \sup_{x,y\in\Ocal,\ x\neq y}\frac{|D^if(x)-D^if(y)|}{|x-y|^{\beta-\lfloor \beta\rfloor}}, $$ where the second summand is removed for integer $\beta$. We denote by $C^\infty(\Ocal)=\cap_{\beta}C^\beta(\Ocal)$ the set of smooth functions, and by $C^\infty_c(\Ocal)$ the subspace of elements in $C^\infty(\Ocal)$ with compact support contained in $\Ocal$. Denote by $L^2(\mathcal O)$ the Hilbert space of square integrable functions on $\Ocal$, equipped with its usual inner product $\langle \cdot, \cdot \rangle_{L^2(\Ocal)}$. 
For integer $\alpha\ge0$, the order-$\alpha$ Sobolev space on $\Ocal$ is the separable Hilbert space $$ H^\alpha(\Ocal) = \{f\in L^2(\Ocal) : \forall |i|\le\alpha, \ \exists\ D^if\in L^2(\Ocal)\},\ \langle f,g\rangle_{H^\alpha(\Ocal)} = \sum_{|i|\le\alpha}\langle D^i f,D^i g\rangle_{L^2(\Ocal)}. $$ For non-integer $\alpha\ge0$, $H^\alpha(\Ocal)$ can be defined by interpolation, see, e.g., \cite{Lions1972}. For any $\alpha\ge0$, $H^\alpha_c(\Ocal)$ will denote the completion of $C^\infty_c(\Ocal)$ with respect to the norm $\|\cdot\|_{H^\alpha(\Ocal)}$. Finally, if $K$ is a nonempty compact subset of $\Ocal$, we denote by $ H^\alpha_K(\Ocal)$ the closed subspace of functions in $ H^\alpha(\Ocal)$ with support contained in $K$. Whenever there is no risk of confusion, we will omit the reference to the underlying domain $\Ocal$. Throughout, we use the symbols $\lesssim$ and $\gtrsim$ for inequalities holding up to a universal constant. Also, for two real sequences $(a_N)$ and $(b_N),$ we say that $a_N\simeq b_N$ if both $a_N\lesssim b_N$ and $b_N\lesssim a_N$ for all $N$ large enough. For a sequence of random variables $Z_N$ we write $Z_N = O_{\Pr}(a_N)$ if for all $\varepsilon>0$ there exists $M_\varepsilon<\infty$ such that for all $N$ large enough, $\Pr(|Z_N| \ge M_\varepsilon a_N) <\varepsilon$. Finally, we will denote by $\mathcal L(Z)$ the law of a random variable $Z$. \subsubsection{Parameter spaces and link functions} \label{Subsec:DivFormPDE} Let $g\in C^\infty(\Ocal)$ be an arbitrary source function, which will be regarded as fixed throughout. For $f\in C^\beta(\Ocal), \ \beta>1,$ consider the boundary value problem \begin{equation} \label{Eq:DivFormPDE} \begin{cases} \nabla\cdot(f\nabla u)=g & \textnormal{on}\ \Ocal,\\ u=0 & \textnormal{on}\ \partial\Ocal. 
\end{cases} \end{equation} If we assume that $f\ge K_{min}>0$ on $\Ocal$, then standard elliptic theory (e.g., \cite{GT98}) implies that \eqref{Eq:DivFormPDE} has a classical solution $G(f)\equiv u_f \in C(\bar\Ocal)\cap C^{1+\beta}(\Ocal)$. We consider the following parameter space for $f$: For integer $\alpha>1+d/2, \ K_{min}\in (0,1)$, and denoting by $n=n(x)$ the outward pointing normal at $x\in\partial\Ocal$, let \begin{equation} \label{Eq:ParamSp} \Fcal_{\alpha,K_{min}} = \Bigg\{f\in H^\alpha(\Ocal): \textnormal{$\inf_{x \in \Ocal}f(x)>K_{min}$, $f_{|\partial \Ocal}=1$, $\frac{\partial^j f}{\partial n^j}_{|\partial \Ocal}=0$ for $1\le j\le\alpha-1$}\Bigg\}. \end{equation} Our approach will be to place a prior probability measure on the unknown conductivity $f$ and base our inference on the posterior distribution of $f$ given noisy observations of $G(f)$, via Bayes' theorem. It is of interest to use \textit{Gaussian process priors}. Such probability measures are naturally supported in linear spaces (in our case $H^\alpha_c(\mathcal O)$) and we now introduce a bijective re-parametrisation so that the prior for $f$ is supported in the relevant parameter space $\mathcal F_{\alpha, K_{min}}$. We follow the approach of using regular link functions $\Phi$ as in \cite{NVW18}. \begin{condition}\label{Cond:LinkFunction1} For given $K_{min}>0$, let $\Phi:\mathbb R\to(K_{min},\infty)$ be a smooth, strictly increasing bijective function such that $\Phi(0)=1$, $\Phi'(t)>0, \ t\in\mathbb R$, and assume that all derivatives of $\Phi$ are bounded on $\mathbb R$. \end{condition} For some of the results to follow it will prove convenient to slightly strengthen the previous condition. \begin{condition}\label{Cond:LinkFunction2} Let $\Phi$ be as in Condition \ref{Cond:LinkFunction1}, and assume furthermore that $\Phi'$ is nondecreasing and that $\liminf_{t \to -\infty}\Phi'(t)t^a>0$ for some $a>0$.
\end{condition} For $a=2$, an example of such a link function is given in Example \ref{Ex:LinkFunction} below. Note however that the choice of $\Phi = \exp$ is not permitted in either condition. Given any link function $\Phi$ satisfying Condition \ref{Cond:LinkFunction1}, one can show (cf.~\cite{NVW18}, Section 3.1) that the set $\Fcal_{\alpha,K_{min}}$ in \eqref{Eq:ParamSp} can be realised as the family of composition maps $$ \Fcal_{\alpha,K_{min}}=\{\Phi\circ F : F\in H^\alpha_c(\Ocal)\}, ~~ \alpha \in \mathbb N. $$ We then regard the solution map associated to \eqref{Eq:DivFormPDE} as one defined on $H^\alpha_c$ via \begin{equation} \label{Eq:ScriptG} \mathscr{G}: H^\alpha_c(\Ocal) \to L^2(\Ocal), \quad F\mapsto \mathscr{G}(F):= G(\Phi\circ F), \end{equation} where $G(\Phi\circ F)$ is the solution to \eqref{Eq:DivFormPDE} now with $f=\Phi\circ F \in \Fcal_{\alpha, K_{min}}$. In the results to follow, we will implicitly assume a link function $\Phi$ to be given and fixed, and understand the re-parametrised solution map $\mathscr{G}$ as being defined as in \eqref{Eq:ScriptG} for such a choice of $\Phi$. \subsubsection{Measurement model} \label{Subsec:MeasModel} Define the uniform distribution on $\Ocal$ by $\mu=dx/\textnormal{vol}(\Ocal)$, where $dx$ is the Lebesgue measure and $\textnormal{vol}(\Ocal)=\int_\Ocal dx$, and consider random design variables \begin{equation} \label{Eq:DiscrRandDesign} (X_i)_{i=1}^N \iid \mu, \quad N\in\mathbb N. \end{equation} For unknown $f\in \Fcal_{\alpha,K_{min}}$, we model the statistical errors under which we observe the corresponding measurements $\{G(f)(X_i)\}_{i=1}^N$ by i.i.d.~Gaussian random variables $W_i\sim N(0,1)$, all independent of the $X_i$'s.
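As a purely numerical illustration of this measurement model, the following sketch simulates data in a one-dimensional analogue, with $\Ocal=(0,1)$ and a basic finite-difference scheme standing in for the solution map $G$. The function names (\texttt{solve\_pde\_1d}, \texttt{simulate\_data}), the grid size and the noise level are our own choices and are not part of the paper's setup.

```python
import numpy as np

def solve_pde_1d(f, g, n=200):
    """Finite-difference solution of (f u')' = g on (0,1) with u(0)=u(1)=0.

    One-dimensional analogue of the divergence-form boundary value problem,
    for illustration only; f and g are callables acting on numpy arrays.
    """
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    xi = x[1:-1]                        # interior grid points
    f_half = f((x[:-1] + x[1:]) / 2.0)  # f evaluated at midpoints x_{i+1/2}
    # Tridiagonal system for the interior values u_1, ..., u_{n-1}:
    # [f_{i-1/2} u_{i-1} - (f_{i-1/2}+f_{i+1/2}) u_i + f_{i+1/2} u_{i+1}] / h^2 = g_i
    A = np.zeros((n - 1, n - 1))
    for i in range(n - 1):
        A[i, i] = -(f_half[i] + f_half[i + 1])
        if i > 0:
            A[i, i - 1] = f_half[i]
        if i < n - 2:
            A[i, i + 1] = f_half[i + 1]
    u = np.linalg.solve(A / h**2, g(xi))
    return xi, u

def simulate_data(f, g, N=500, sigma=0.05, rng=None):
    """Draw (X_i, Y_i), i = 1..N, with Y_i = G(f)(X_i) + sigma * W_i."""
    rng = np.random.default_rng(rng)
    xi, u = solve_pde_1d(f, g)
    X = rng.uniform(0.0, 1.0, size=N)         # uniform design on O = (0,1)
    GfX = np.interp(X, xi, u)                 # evaluate G(f) at the design points
    Y = GfX + sigma * rng.standard_normal(N)  # additive Gaussian noise
    return X, Y
```

For constant $f\equiv1$ and $g\equiv1$ the exact solution is $u(x)=x(x-1)/2$, which provides a quick sanity check of the solver.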
Using the re-parametrisation $f= \Phi\circ F$ via a given link function from the previous subsection, the observation scheme is then \begin{equation} \label{Eq:DiscrObs} Y_i=\mathscr G(F)(X_i)+\sigma W_i, \quad i=1,\dots,N, \end{equation} where $\sigma>0$ is the noise amplitude. We will often use the shorthand notation $Y^{(N)}=(Y_i)_{i=1}^N$, with analogous definitions for $X^{(N)}$ and $W^{(N)}$. The random vectors $(Y_i,X_i)$ on $\mathbb R\times\Ocal$ are then i.i.d.~with laws denoted as $P^i_{F}$. Writing $dy$ for the Lebesgue measure on $\mathbb R$, it follows that $P^i_{F}$ has Radon-Nikodym density \begin{equation} \label{Eq:SingleLikelihood} p_F(y,x) := \frac{d P^i_{F}}{dy\times d\mu}(y,x) = \frac{1}{\sqrt{2\pi\sigma^2}}e^{-[y- \mathscr G(F)(x)]^2/(2\sigma^2)}, \quad y\in\mathbb R,\ x\in\Ocal. \end{equation} We will write $P^{N}_{F}=\otimes_{i=1}^N P^i_{F}$ for the joint law of $(Y^{(N)},X^{(N)})$ on $\mathbb R^N\times \Ocal^N$, with $E^i_{F}$, $E^{N}_{F}$ the expectation operators corresponding to the laws $P_F^i,$ $P_F^N$ respectively. In the sequel we sometimes use the notation $P^N_{f}$ instead of $P^N_{F}$ when convenient. \subsubsection{The Bayesian approach} \label{Subsec:BayesAppr} In the Bayesian approach one models the parameter $F \in H^\alpha_c(\Ocal)$ by a Borel probability measure $\Pi$ supported in the Banach space $C(\Ocal)$.
Since the map $(F,(y,x))\mapsto p_{F}(y,x)$ can be shown to be jointly measurable, the posterior distribution $\Pi(\cdot|Y^{(N)},X^{(N)})$ of $F|(Y^{(N)},X^{(N)})$ arising from data in model (\ref{Eq:DiscrObs}) equals, by Bayes' formula (p.7, \cite{GvdV17}), \begin{equation} \label{Eq:PostDistr} \Pi(B|Y^{(N)},X^{(N)}) = \frac{\int_B e^{\ell^{(N)}(F)}d\Pi(F)} {\int_{C(\Ocal)} e^{\ell^{(N)}(F)}d\Pi(F)} \quad \textnormal{for any Borel set $B\subseteq C(\Ocal)$}, \end{equation} where \begin{equation} \label{Eq:JointLogLikelihood} \ell^{(N)}(F)=-\frac{1}{2\sigma^2}\sum_{i=1}^N [Y_i-\mathscr{G}(F)(X_i)]^2 \end{equation} is (up to an additive constant) the joint log-likelihood function. \subsection{Statistical convergence rates} \label{Sec:Results} In this section we will show that the posterior distribution arising from certain priors concentrates near any sufficiently regular ground truth $F_0$ (or, equivalently, $f_0$), and provide a bound on the rate of this contraction, assuming the observations $(Y^{(N)},X^{(N)})$ to be generated through model \eqref{Eq:DiscrObs} of law $P_{F_0}^N$. We will regard $\sigma>0$ as a fixed and known constant; in practice it may be replaced by the estimated sample variance of the $Y_i$'s. The priors we will consider are built around a Gaussian process base prior $\Pi'$, but to deal with the non-linearity of the inverse problem, some additional regularisation will be required. We first show how this can be done by an $N$-dependent `rescaling' step as suggested in \cite{MNP19b}. We then further show that a randomised truncation of a Karhunen-Lo\`eve-type series expansion of the base prior also leads to a consistent, `fully Bayesian' solution of this inverse problem. \subsubsection{Results with re-scaled Gaussian priors} \label{Subsec:GaussianPriors} We will freely use terminology from the basic theory of Gaussian processes and measures, see, e.g., \cite{GN16}, Chapter 2 for details.
\begin{condition}\label{Cond:BasePrior1} Let $\alpha>1+d/2$, $\beta\ge1$, and let $\Hcal$ be a Hilbert space continuously imbedded into $H^\alpha_c(\Ocal)$. Let $\Pi'$ be a centred Gaussian Borel probability measure on the Banach space $C(\Ocal)$ that is supported on a separable measurable linear subspace of $C^\beta(\Ocal)$, and assume that the reproducing-kernel Hilbert space (RKHS) of $\Pi'$ equals $\Hcal$. \end{condition} As a basic example of a Gaussian base prior $\Pi'$ satisfying Condition \ref{Cond:BasePrior1}, consider a Whittle-Mat\'ern process $M=\{M(x),\ x\in\Ocal\}$ indexed by $\Ocal$ and of regularity $\alpha$ (cf.~Example \ref{Ex:CuttedMaternProcess} below for full details). We will assume that it is known that $F_0\in H^\alpha(\Ocal)$ is supported inside a given compact subset $K$ of the domain $\Ocal$, and fix any smooth cut-off function $\chi\in C^\infty_c(\Ocal)$ such that $\chi=1$ on $K$. Then, $\Pi'=\Lcal(\chi M)$ is supported on the separable linear subspace $C^{\beta'}(\Ocal)$ of $C^\beta(\Ocal)$ for any $\beta<\beta'<\alpha-d/2$, and its RKHS $\Hcal=\{\chi F, F\in H^\alpha(\Ocal)\}$ is continuously imbedded into $H^\alpha_c(\Ocal)$ (and contains $H^\alpha_K(\Ocal)$). The condition $F_0 \in \mathcal H$ that is employed in the following theorems then amounts to the standard assumption that $F_0 \in H^\alpha(\mathcal O)$ be supported in a strict subset $K$ of $\Ocal$. \smallskip To proceed, if $\Pi'$ is as above and $F'\sim\Pi'$, we consider the `re-scaled' prior \begin{equation} \label{Eq:Prior1} \Pi_N = \mathcal L(F_N), \quad F_N = \frac{1}{N^{d/(4\alpha+4+2d)}}F'. \end{equation} Then $\Pi_N$ again defines a centred Gaussian prior on $C(\Ocal)$, and a basic calculation (e.g., Exercise 2.6.5 in \cite{GN16}) shows that its RKHS $\Hcal_N$ is still given by $\mathcal H$ but now with norm \begin{equation} \label{Eq:RKHS1} \|F\|_{\Hcal_N} = N^{d/(4\alpha+4+2d)} \|F\|_\Hcal \quad \forall F\in \Hcal.
\end{equation} Our first result shows that the posterior contracts towards $F_0$ in `prediction'-risk at rate $N^{-(\alpha+1)/(2\alpha+2+d)}$ and that, moreover, the posterior draws possess a bound on their $C^\beta$-norm with overwhelming frequentist probability. \begin{theorem}\label{Theo:FwdRates1} For fixed integer $\alpha>\beta+d/2$, $\beta\ge1$, consider the Gaussian prior $\Pi_N$ in \eqref{Eq:Prior1} with base prior $F'\sim\Pi'$ satisfying Condition \ref{Cond:BasePrior1} for RKHS $\mathcal H$. Let $\Pi_N(\cdot|Y^{(N)},X^{(N)})$ be the resulting posterior distribution arising from observations $(Y^{(N)}, X^{(N)})$ in \eqref{Eq:DiscrObs}, set $\delta_N=N^{-(\alpha+1)/(2\alpha+2+d)}$, and assume $F_0\in\Hcal$. Then for any $D>0$ there exists $L>0$ large enough (depending on $\sigma,F_0,D, \alpha, \beta$, as well as on $\Ocal,d,g$) such that, as $N \to \infty$, \begin{equation} \label{Eq:FwdRates1} \Pi_N(F:\|\mathscr{G}(F)-\mathscr{G}(F_0)\|_{L^2}>L\delta_N|Y^{(N)},X^{(N)}) =O_{P^{N}_{F_0}}(e^{-DN\delta_N^2}), \end{equation} and for sufficiently large $M>0$ (depending on $\sigma, D,\alpha, \beta$) \begin{equation} \label{Eq:Regularisation10} \Pi_N(F: \|F\|_{C^\beta}>M|Y^{(N)},X^{(N)})=O_{P^{N}_{F_0}}(e^{-DN\delta_N^2}). \end{equation} \end{theorem} \smallskip Following ideas in \cite{MNP19b}, we can combine (\ref{Eq:FwdRates1}) with the regularisation property \eqref{Eq:Regularisation10} and a suitable stability estimate for $G^{-1}$ to show that the posterior contracts about $f_0$ also in $L^2$-risk. We shall employ the stability estimate proved in \cite[Lemma 24]{NVW18}, which requires the source function $g$ in the base PDE (\ref{Eq:DivFormPDE}) to be strictly positive, a natural condition ensuring injectivity of the map $f \mapsto G(f)$, see \cite{R81}.
Denote the push-forward posterior on the conductivities $f$ by \begin{equation} \label{Eq:TildePi} \tilde\Pi_N(\cdot|Y^{(N)},X^{(N)}):=\Lcal(f), \quad f=\Phi\circ F : F\sim\Pi_N(\cdot|Y^{(N)},X^{(N)}). \end{equation} \begin{theorem}\label{Theo:L2Rates1} Let $\Pi_N(\cdot|Y^{(N)},X^{(N)})$, $\delta_N$ and $F_0$ be as in Theorem \ref{Theo:FwdRates1} for integer $\beta>1$. Let $f_0=\Phi\circ F_0$ and assume in addition that $\inf_{x \in \Ocal}g(x) \ge g_{min}>0$. Then for any $D>0$ there exists $L>0$ large enough (depending on $\sigma, f_0, D, \alpha, \beta, \Ocal$, $g_{min}, d$) such that, as $N \to \infty$, $$ \tilde\Pi_N(f:\|f-f_0\|_{L^2}>LN^{-\lambda}|Y^{(N)},X^{(N)}) = O_{P^{N}_{f_0}}(e^{-DN\delta_N^2}), \quad \lambda = \frac{(\alpha+1)(\beta-1)}{(2\alpha+2+d)(\beta+1)}. $$ \end{theorem} We note that as the smoothness $\alpha$ of $f_0$ increases, we can employ priors of higher regularity $\alpha, \beta$. In particular, if $F_0\in C^\infty=\cap_{\alpha>0} H^\alpha$, we can let the above rate $N^{-\lambda}$ be as close as desired to the `parametric' rate $N^{-1/2}$. We conclude this section by showing that the posterior mean $ E^\Pi[F|Y^{(N)},X^{(N)}]$ of $\Pi_N(\cdot|Y^{(N)},X^{(N)})$ converges to $F_0$ at the rate $N^{-\lambda}$ from Theorem \ref{Theo:L2Rates1}. We formulate this result at the level of the vector space valued parameter $F$ (instead of for conductivities $f$), as the most commonly used MCMC algorithms (such as pCN, see Subsection \ref{Rem:Computation1}) target the posterior distribution of $F$. \begin{theorem}\label{Theo:PostMeanConv1} Under the hypotheses of Theorem \ref{Theo:L2Rates1}, let $\bar F_N=E^\Pi[F|Y^{(N)},X^{(N)}]$ be the (Bochner-) mean of $\Pi_N(\cdot|Y^{(N)},X^{(N)})$. Then, as $N \to \infty$, \begin{equation} \label{Eq:PostMeanConv1} P^N_{F_0}\big(\|\bar F_N-F_0\|_{L^2}>N^{-\lambda}\big)\to0.
\end{equation} \end{theorem} The same result holds for the implied conductivities, that is, for $\|\Phi \circ \bar F_N - f_0\|_{L^2}$ replacing $\|\bar F_N-F_0\|_{L^2}$, since composition with $\Phi$ is Lipschitz. \subsubsection{Extension to high-dimensional Gaussian sieve priors}\label{sieve} It is often convenient, for instance for computational reasons as discussed in Subsection \ref{Rem:Computation1}, to employ `sieve'-priors that are concentrated on a finite-dimensional approximation of the parameter space supporting the prior. For example, a truncated Karhunen-Lo\`eve-type series expansion (or some other discretisation) of the Gaussian base prior $\Pi'$ is frequently used \cite{DS11, HSV14}. The theorems of the previous subsection remain valid if the approximation spaces are appropriately chosen. Let us illustrate this by considering a Gaussian series prior based on an orthonormal basis $\{\Psi_{\ell r}, \ \ell\ge-1,\ r\in\Z^d\}$ of $L^2(\mathbb R^d)$ composed of sufficiently regular, compactly supported Daubechies wavelets (see Chapter 4 in \cite{GN16} for details). We assume that $F_0 \in H^\alpha_K(\Ocal)$ for some compact $K \subset \Ocal$, and denote by $\Rcal_\ell$ the set of indices $r$ for which the support of $\Psi_{\ell r}$ intersects $K$. Fix any compact $K'\subset \Ocal$ such that $K\subsetneq K'$, and a cut-off function $\chi\in C^\infty_c(\Ocal)$ such that $\chi=1$ on $K'$. For any real $\alpha>1+d/2$, consider the prior $\Pi'_J$ arising as the law of the Gaussian random sum \begin{equation} \label{Eq:Prior02} \Pi'_J=\Lcal(\chi F), \quad F=\sum_{\ell\le J,r\in\Rcal_\ell}2^{-\ell\alpha}F_{\ell r}\Psi_{\ell r},\ F_{\ell r}\iid N(0,1), \end{equation} where $J=J_N \to \infty$ is a (deterministic) truncation point to be chosen. Then $\Pi'_J$ defines a centred Gaussian prior that is supported on the finite-dimensional space \begin{equation} \label{Eq:SubspaceHj} \Hcal_J=\textnormal{span}\{\chi \Psi_{\ell r}, \ \ell\le J, \ r\in\Rcal_\ell\}\subset C(\Ocal).
\end{equation} \begin{proposition} Consider a prior $\Pi_N$ as in (\ref{Eq:Prior1}) where now $F' \sim \Pi'_J$ and $J=J_N \in \mathbb N$ is such that $2^{J} \simeq N^{1/(2\alpha +2 +d)}$. Let $\Pi_N(\cdot|Y^{(N)}, X^{(N)})$ be the resulting posterior distribution arising from observations $(Y^{(N)},X^{(N)})$ in \eqref{Eq:DiscrObs}, and assume $F_0 \in H^\alpha_K(\Ocal)$. Then the conclusions of Theorems \ref{Theo:FwdRates1}-\ref{Theo:PostMeanConv1} remain valid (under the respective hypotheses on $\alpha, \beta, g$). \end{proposition} A similar result could be proved for more general Gaussian priors (not of wavelet type), but we refrain from giving these extensions here. \subsubsection{Randomly truncated Gaussian series priors} \label{Subsec:RandSeriesPriors} In this section we show that instead of rescaling the Gaussian base priors $\Pi', \Pi'_J$ in an $N$-dependent way to attain extra regularisation, one may also randomise the dimensionality parameter $J$ in (\ref{Eq:Prior02}) by a hyper-prior with suitable tail behaviour. While this is computationally somewhat more expensive (by necessitating a hierarchical sampling method, see Subsection \ref{Rem:Computation1}), it gives a possibly more principled approach to (`fully') Bayesian regularisation in our inverse problem. The theorem below will show that such a procedure is consistent in the frequentist sense, at least for smooth enough $F_0$.
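For $d=1$, the randomised truncation just described can be sketched numerically as follows. The tail of the truncation variable matches the condition \eqref{Eq:PropertiesOfJ} stated below; the cosine basis and the omission of the cut-off $\chi$ are simplifications of ours, a toy stand-in for the wavelet construction of \eqref{Eq:Prior2}, not the prior itself, and both function names are hypothetical.

```python
import numpy as np

def sample_truncation(rng):
    """Sample J >= 1 with P(J > j) = exp(-2^j * log(2^j)), the d = 1 tail
    condition, by inverse-CDF sampling: J = min{j >= 1 : u >= P(J > j)}."""
    u = rng.uniform()
    j = 1
    while u < np.exp(-(2.0**j) * j * np.log(2.0)):  # u still in the event {J > j}
        j += 1
    return j

def sample_hierarchical_prior(alpha, grid, rng):
    """One draw of a randomly truncated Gaussian series on a grid in (0, 1).

    Coefficients 2^{-l*alpha} F_{lr} with F_{lr} ~ N(0,1), roughly 2^l active
    coefficients per level l, summed up to the random truncation level J.
    """
    J = sample_truncation(rng)
    F = np.zeros_like(grid)
    for level in range(J + 1):
        for r in range(2**level):
            k = 2**level + r                      # frequency index for (level, r)
            coef = 2.0**(-level * alpha) * rng.standard_normal()
            F += coef * np.sqrt(2.0) * np.cos(np.pi * k * grid)
    return F
```

Note that $\Pr(J=1)=1-e^{-2\log 2}=0.75$, so the vast majority of draws use only the coarsest levels, which is the source of the extra regularisation.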
For the wavelet basis and cut-off function $\chi$ introduced before (\ref{Eq:Prior02}), we consider again a random (conditionally Gaussian) sum \begin{equation} \label{Eq:Prior2} \Pi=\Lcal(\chi F), \quad F=\sum_{\ell\le J,r\in\Rcal_\ell}2^{-\ell\alpha}F_{\ell r}\Psi_{\ell r},\ F_{\ell r}\iid N(0,1), \end{equation} where now $J$ is a random truncation level, independent of the random coefficients $F_{\ell r}$, satisfying the following inequalities: \begin{equation} \label{Eq:PropertiesOfJ} \Pr(J> j)=e^{-2^{jd}\log 2^{jd}}\ \forall j\ge1; \quad \Pr(J=j)\gtrsim e^{-2^{jd}\log 2^{jd}},\ j\to\infty. \end{equation} When $d=1$, a (log-) Poisson random variable satisfies these tail conditions, and for $d>1$ such a random variable $J$ can be easily constructed too -- see Example \ref{poissond} below. Our first result in this section shows that the posterior arising from the truncated series prior in \eqref{Eq:Prior2} achieves (up to a log-factor) the same contraction rate in $L^2$-prediction risk as the one obtained in Theorem \ref{Theo:FwdRates1}. Moreover, as is expected in light of the results in \cite{vdVvZ09, R13}, the posterior adapts to the unknown regularity $\alpha_0$ of $F_0$ when it exceeds the base smoothness level $\alpha$. \begin{theorem}\label{Theo:FwdRates2} For any $\alpha>1+d/2$, let $\Pi$ be the random series prior in \eqref{Eq:Prior2}, and let $\Pi(\cdot|Y^{(N)}, X^{(N)})$ be the resulting posterior distribution arising from observations $(Y^{(N)},X^{(N)})$ in \eqref{Eq:DiscrObs}. Then, for each $\alpha_0\ge\alpha$ and any $F_0\in H^{\alpha_0}_K(\Ocal)$, we have that for any $D>0$ there exists $L>0$ large enough (depending on $\sigma,F_0,D,\alpha,\Ocal,d,g$) such that, as $N \to \infty$, $$ \Pi(F:\|\mathscr{G}(F)-\mathscr{G}(F_0)\|_{L^2}>L\xi_N|Y^{(N)},X^{(N)}) = O_{P^{N}_{F_0}}(e^{-DN\xi_N^2}), $$ where $\xi_N= N^{-(\alpha_0+1)/(2\alpha_0+2+d)}\log N$.
Moreover, for $\Hcal_J$ the finite-dimensional subspaces in \eqref{Eq:SubspaceHj} and $J_N\in\mathbb N$ such that $ 2^{J_N}\simeq N^{1/(2\alpha_0+2+d)}$, we also have that for sufficiently large $M>0$ (depending on $D,\alpha$) \begin{equation} \label{Eq:Regularisation2} \Pi(F: F\in\Hcal_{J_N}, \ \|F\|_{H^\alpha}\le M 2^{J_N\alpha}N\xi_N^2 |Y^{(N)},X^{(N)})=1-O_{P^{N}_{F_0}}(e^{-DN\xi_N^2}). \end{equation} \end{theorem} We can now exploit the previous result along with the finite-dimensional support of the posterior and again the stability estimate from \cite{NVW18} to obtain the following consistency theorem for $F_0 \in H^{\alpha_0}$ if $\alpha_0$ is large enough (with a precise bound $\alpha_0 \ge \alpha^*$ given in the proof of Lemma \ref{Lemma:HalphaBound}). \begin{theorem}\label{Theo:L2Rates2} Let the link function $\Phi$ in the definition \eqref{Eq:ScriptG} of $\mathscr{G}$ satisfy Condition \ref{Cond:LinkFunction2}. Let $\Pi(\cdot|Y^{(N)},X^{(N)})$, $\xi_N$ be as in Theorem \ref{Theo:FwdRates2}, assume in addition $g \ge g_{min}>0$ on $\Ocal$, and let $\tilde\Pi(\cdot|Y^{(N)},X^{(N)})$ be the posterior distribution of $f$ as in \eqref{Eq:TildePi}. Then for $f_0=\Phi\circ F_0$ with $F_0\in H^{\alpha_0}_K(\Ocal)$ for $\alpha_0$ large enough (depending on $\alpha, d, a$) and for any $D>0$ there exists $L>0$ large enough (depending on $\sigma,f_0, D, \alpha,\Ocal, g_{min}, d$) such that, as $N \to \infty$, $$ \tilde\Pi(f:\|f-f_0\|_{L^2}>LN^{-\rho}|Y^{(N)},X^{(N)}) = O_{P^N_{f_0}}(e^{-DN\xi_N^2}), \quad \rho = \frac{(\alpha_0+1)(\alpha-1)}{(2\alpha_0+2+d)(\alpha+1)}. $$ \end{theorem} Just as before, for $f_0\in C^\infty$ the above rate can be made as close as desired to $N^{-1/2}$ by choosing $\alpha$ large enough. Moreover, the last contraction theorem also translates into a convergence result for the posterior mean of $F$.
\begin{theorem}\label{Theo:PostMeanConv2} Under the hypotheses of Theorem \ref{Theo:L2Rates2}, let $\bar F_N=E^\Pi[F|Y^{(N)},X^{(N)}]$ be the mean of $\Pi(\cdot|Y^{(N)},X^{(N)})$. Then, as $N \to \infty$, \begin{equation} \label{Eq:PostMeanConv2} P_{F_0}^N \big(\|\bar F_N-F_0\|_{L^2}>N^{-\rho} \big) \to 0. \end{equation} \end{theorem} We note that the proof of the last two theorems crucially takes advantage of the `non-symmetric' and `non-exponential' nature of the stability estimate from \cite{NVW18}, and may not hold in other non-linear inverse problems where such an estimate may not be available (e.g., as in \cite{MNP19b, AN19} or also in the Schr\"odinger equation setting studied in \cite{N17, NVW18}). \smallskip Let us conclude this section by noting that hierarchical priors such as the one studied here are usually devised to `adapt' to the unknown smoothness $\alpha_0$ of $F_0$, see \cite{vdVvZ09, R13}. Note that while our posterior distribution is adaptive to $\alpha_0$ in the `prediction risk' setting of Theorem \ref{Theo:FwdRates2}, the rate $N^{-\rho}$ obtained in Theorems \ref{Theo:L2Rates2} and \ref{Theo:PostMeanConv2} for the inverse problem \textit{does} depend on the minimal smoothness $\alpha$, and is therefore \textit{not adaptive}. Nevertheless, this hierarchical prior gives an example of a fully Bayesian, consistent solution of our inverse problem.
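Before turning to the concluding discussion, we sketch the pCN algorithm referred to throughout (see Subsection \ref{Rem:Computation1}). The sketch is ours and purely illustrative: the prior is a standard Gaussian on a finite-dimensional space and the log-likelihood is supplied by the user, so no PDE solve is involved; the function name \texttt{pcn} and all tuning choices are hypothetical.

```python
import numpy as np

def pcn(log_lik, dim, n_iter, beta=0.1, rng=None):
    """Preconditioned Crank-Nicolson sampler for a N(0, I_dim) prior.

    Proposal: F' = sqrt(1 - beta^2) * F + beta * xi, with xi ~ N(0, I_dim).
    This proposal is reversible with respect to the Gaussian prior, so the
    Metropolis-Hastings acceptance ratio reduces to the likelihood ratio.
    """
    rng = np.random.default_rng(rng)
    F = rng.standard_normal(dim)          # start from a draw of the prior
    ll = log_lik(F)
    chain = np.empty((n_iter, dim))
    for t in range(n_iter):
        prop = np.sqrt(1.0 - beta**2) * F + beta * rng.standard_normal(dim)
        ll_prop = log_lik(prop)
        if np.log(rng.uniform()) < ll_prop - ll:   # accept w.p. min(1, e^{diff})
            F, ll = prop, ll_prop
        chain[t] = F
    return chain
```

With a Gaussian likelihood and the identity as forward map the target posterior is conjugate, which allows a quick sanity check: the chain average after burn-in should lie close to the posterior mean.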
\subsection{Concluding discussion} \subsubsection{Posterior computation}\label{Rem:Computation1}\normalfont As mentioned in the introduction, in the context of the elliptic inverse problem considered in the present paper, posterior distributions arising from Gaussian process priors such as those above can be computed by MCMC algorithms, see \cite{CRSW13, CMPS16, BGLFS17}, and computational guarantees can be obtained as well: For Gaussian priors, \cite{HSV14} establish non-asymptotic sampling bounds for the `preconditioned Crank-Nicolson (pCN)' algorithm, which hold even in the absence of log-concavity of the likelihood function, and which imply bounds on the approximation error for the computation of the posterior mean. The algorithm can be implemented as long as it is possible to evaluate the forward map $F\mapsto \mathscr{G}(F)(x)$ at $x \in \mathcal O$, which in our context can be done by using standard numerical methods to solve the elliptic PDE \eqref{Eq:DivFormPDE}. In practice, these algorithms often employ a finite-dimensional approximation of the parameter space (see Subsection \ref{sieve}). In order to sample from the posterior distribution arising from the more complex hierarchical prior \eqref{Eq:Prior2}, MCMC methods based on fixed Gaussian priors (such as the pCN algorithm) can be employed within a suitable Gibbs-sampling scheme that exploits the conditionally Gaussian structure of the prior. The algorithm would then alternate, for given $J$, an MCMC step targeting the conditional posterior distribution of $F|(Y^{(N)},X^{(N)},J)$ with, given the current sample of $F$, a second MCMC step targeting the conditional posterior of $J|(Y^{(N)},X^{(N)},F)$. A related approach to hierarchical inversion is empirical Bayesian estimation.
In the present setting this would entail first estimating the truncation level $J$ from the data, via an estimator $\hat J=\hat J(Y^{(N)},X^{(N)})$ (e.g., the marginal maximum likelihood estimator), and then performing inference based on the fixed finite-dimensional prior $\Pi_{\hat J}$ (defined as in \eqref{Eq:Prior2} with $J$ replaced by $\hat J$). See \cite{K2015} where this is studied in a diagonal linear inverse problem. \subsubsection{Open problems: Towards optimal convergence rates} The convergence rates obtained in this article demonstrate the frequentist consistency of a Bayesian (Gaussian process) inversion method in the elliptic inverse problem (\ref{Eq0}) with data (\ref{discrete}) in the large sample limit $N \to \infty$. While the rates approach the optimal rate $N^{-1/2}$ for very smooth models ($\alpha \to \infty$), the question of optimality for fixed $\alpha$ remains an interesting avenue for future research. We note that for the `PDE-constrained regression' problem of recovering $\mathscr G(F_0)$ in `prediction' loss, the rate $\delta_N=N^{-(\alpha+1)/(2\alpha+2+d)}$ obtained in Theorems \ref{Theo:FwdRates1} and \ref{Theo:FwdRates2} can be shown to be minimax optimal (as in \cite[Theorem 10]{NVW18}). But for the recovery rates for $f$ obtained in Theorems \ref{Theo:PostMeanConv1} and \ref{Theo:PostMeanConv2}, no matching lower bounds are currently known. Related to this issue, in \cite{NVW18} faster (but still possibly suboptimal) rates are obtained for the modes of our posterior distributions (MAP estimates, which are not obviously computable in polynomial time), and one may loosely speculate here about computational hardness barriers in our non-linear inverse problem. These issues pose formidable challenges for future research and are beyond the scope of the present paper. \section{Proofs} \label{Sec:Proofs} We assume without loss of generality that $\textnormal{vol}(\Ocal)=1$. 
In the proof, we will repeatedly exploit properties of the (re-parametrised) solution map $\mathscr{G}$ defined in \eqref{Eq:ScriptG}, which was studied in detail in \cite{NVW18}. Specifically, in the proof of Theorem 9 in \cite{NVW18} it is shown that, for all $\alpha>1+d/2$ and any $F_1, F_2\in H^\alpha_c(\Ocal)$, \begin{equation} \label{Eq:LipCondG} \|\mathscr{G}(F_1)-\mathscr{G}(F_2)\|_{L^2(\Ocal)} \lesssim (1+\|F_1\|_{C^1(\Ocal)}^4\vee\|F_2\|_{C^1(\Ocal)}^4)\|F_1-F_2\|_{(H^1(\Ocal))^*}, \end{equation} where we denote by $X^*$ the topological dual Banach space of a normed linear space $X$. Secondly, we have (Lemma 20 in \cite{NVW18}) for some constant $c>0$ (only depending on $d,\ \Ocal$ and $K_{min}$), \begin{equation} \label{Eq:UnifBoundG} \sup_{F\in H^\alpha_c}\|\mathscr{G}(F)\|_{\infty}\le c\|g\|_{\infty}<\infty. \end{equation} Therefore the inverse problem \eqref{Eq:DiscrObs} falls into the general framework considered in Appendix \ref{App:GenInvProbl} below (with $\beta=\kappa=1$, $\gamma=4$ in \eqref{Eq:GenLipCondG} and $S=c\|g\|_\infty$ in \eqref{Eq:GenUnifBoundG}); in particular, Theorems \ref{Theo:FwdRates1} and \ref{Theo:FwdRates2} then follow as particular cases of the general contraction rate results derived in Theorem \ref{Theo:GenFwdRates1} and Theorem \ref{Theo:GenFwdRates2}, respectively. It thus remains to derive Theorems \ref{Theo:L2Rates1} and \ref{Theo:PostMeanConv1} from Theorem \ref{Theo:FwdRates1}, and Theorems \ref{Theo:L2Rates2} and \ref{Theo:PostMeanConv2} from Theorem \ref{Theo:FwdRates2}, respectively.
To do so we recall here another key result from \cite{NVW18}, namely their stability estimate Lemma 24: For $\alpha>2+d/2$, if $G(f)$ denotes the solution of the PDE (\ref{Eq:DivFormPDE}) with $g$ satisfying $\inf_{x\in\Ocal}g(x)\ge g_{min}>0$, then for fixed $f_0\in \Fcal_{\alpha,K_{min}}$ and all $f\in\Fcal_{\alpha,K_{min}}$ \begin{equation} \label{Eq:StabEstim} \|f-f_0\|_{L^2(\Ocal)}\lesssim \|f\|_{C^1(\Ocal)}\|G(f)-G(f_0)\|_{H^2(\Ocal)}, \end{equation} with multiplicative constant independent of $f$. \subsection{Proofs for Section \ref{Subsec:GaussianPriors}} \label{Subsec:ProofsGaussPriors} \paragraph{Proof of Theorem \ref{Theo:L2Rates1}.} The conclusions of Theorem \ref{Theo:FwdRates1} can readily be translated for the push-forward posterior $\tilde\Pi_N(\cdot|Y^{(N)},X^{(N)})$ from \eqref{Eq:TildePi}. In particular, \eqref{Eq:FwdRates1} implies, for $f_0=\Phi\circ F_0$, as $N\to\infty$, \begin{equation} \label{Eq:L2FwdContrRate1} \tilde\Pi_N(f : \|G(f)-G(f_0)\|_{L^2}>L \delta_N |Y^{(N)},X^{(N)}) = O_{P^N_{f_0}}(e^{-D N\delta_N^2 }); \end{equation} and using Lemma 29 in \cite{NVW18} and \eqref{Eq:Regularisation10} we obtain for sufficiently large $M'>0$ \begin{equation} \label{Eq:NormBound1} \tilde\Pi_N(f : \|f\|_{C^\beta}>M'|Y^{(N)},X^{(N)}) \le \Pi_N( F : \|F\|_{C^{\beta}}>M|Y^{(N)},X^{(N)}) = O_{P^N_{f_0}}(e^{-D N\delta_N^2 }). \end{equation} From the previous bounds we now obtain the following result. \begin{lemma}\label{Lemma:FwdH2Risk1} For $\Pi_N(\cdot|Y^{(N)},X^{(N)}), \delta_N$ and $F_0$ as in Theorem \ref{Theo:FwdRates1}, let $\tilde\Pi_N(\cdot|Y^{(N)},X^{(N)})$ be the push-forward posterior distribution from \eqref{Eq:TildePi}. Then, for $f_0=\Phi\circ F_0$ and any $D>0$ there exists $L>0$ large enough such that, as $N\to\infty$, $$ \tilde\Pi_N(f:\|G(f)-G(f_0)\|_{H^2}>L \delta_N ^{(\beta-1)/(\beta+1)}|Y^{(N)},X^{(N)}) = O_{P^N_{F_0}}(e^{-D N\delta_N^2 }). 
$$ \proof Using the continuous imbedding $C^\beta \subset H^\beta$, $\beta \in \mathbb N$, and \eqref{Eq:NormBound1}, for some $M'>0$ as $N \to \infty$, $$ \tilde\Pi_N(f : \|f\|_{H^\beta}>M'|Y^{(N)},X^{(N)})=O_{P^N_{F_0}}(e^{-DN\delta_N^2}). $$ Now if $f\in H^\beta$ with $\|f\|_{H^\beta}\le M'$, Lemma 23 in \cite{NVW18} implies $G(f), G(f_0)\in H^{\beta+1}$, with $$ \|G(f_0)\|_{H^{\beta+1}}\lesssim 1+\|f_0\|_{H^\beta}^{\beta(\beta+1)}<\infty, \quad \|G(f)\|_{H^{\beta+1}}\lesssim 1+\|f\|_{H^\beta}^{\beta(\beta+1)} <M''<\infty; $$ and by the usual interpolation inequality for Sobolev spaces \cite{Lions1972}, \begin{align*} \|G(f)-G(f_0)\|_{H^2} &\lesssim \|G(f)-G(f_0)\|^{(\beta-1)/(\beta+1)}_{L^2}\|G(f)-G(f_0)\| ^{2/(\beta+1)}_{H^{\beta+1}} \\ &\lesssim \|G(f)-G(f_0)\|^{(\beta-1)/(\beta+1)}_{L^2}. \end{align*} Thus, by what precedes and \eqref{Eq:L2FwdContrRate1}, for sufficiently large $L>0$ \begin{align*} &\tilde\Pi_N(f:\|G(f)-G(f_0)\|_{H^2} > L \delta_N ^{(\beta-1)/(\beta+1)}|Y^{(N)},X^{(N)}) \\ &\ \le \tilde\Pi_N(f:\|G(f)-G(f_0)\|_{L^2}> L' \delta_N |Y^{(N)},X^{(N)}) +\tilde\Pi_N(f:\|f\|_{H^{\beta}}> M''|Y^{(N)},X^{(N)})\\ &\ = O_{P^N_{F_0}}(e^{-D N\delta_N^2 }), \end{align*} as $N \to \infty$. \endproof \end{lemma} To prove Theorem \ref{Theo:L2Rates1} we use \eqref{Eq:StabEstim}, \eqref{Eq:NormBound1} and Lemma \ref{Lemma:FwdH2Risk1} to the effect that for any $D>0$ we can find $L, M>0$ large enough such that, as $N \to \infty$, \begin{align*} &\tilde\Pi_N(f:\|f-f_0\|_{L^2}> L \delta_N ^\frac{\beta-1}{\beta+1}|Y^{(N)},X^{(N)})\\ &\ \le \tilde \Pi_N(f:\|G(f)-G(f_0)\|_{H^2}> L' \delta_N ^\frac{\beta-1}{\beta+1} |Y^{(N)},X^{(N)})+\tilde\Pi_N(f:\|f\|_{C^\beta}> M|Y^{(N)},X^{(N)})\\ &\ = O_{P^N_{F_0}}(e^{-D N\delta_N^2 }). 
\end{align*} \paragraph{Proof of Theorem \ref{Theo:PostMeanConv1}.} The proof largely follows ideas of \cite{MNP19b} but requires a slightly more involved, iterative uniform integrability argument to also control the probability of events $\{F:\|F\|_{C^\beta} >M\}$ on whose complements we can subsequently exploit regularity properties of the inverse link function $\Phi^{-1}$. Using Jensen's inequality, it is enough to show, as $N\to\infty$, \begin{align*} P_{F_0}^N\Big(E^\Pi[\|F-F_0\|^2_{L^2}|Y^{(N)},X^{(N)}]>N^{-\lambda}\Big)\to0. \end{align*} For $M>0$ sufficiently large to be chosen, we decompose \begin{align} \label{Eq:TwoThings} E^\Pi[\|F-F_0\|_{L^2}|Y^{(N)},X^{(N)}] &= E^\Pi[\|F-F_0\|_{L^2}1_{\|F\|_{C^\beta}\le M} |Y^{(N)},X^{(N)}]\nonumber\\ &\quad+ E^\Pi[\|F-F_0\|_{L^2}1_{\|F\|_{C^\beta}> M} |Y^{(N)},X^{(N)}]. \end{align} Using the Cauchy-Schwarz inequality we can upper bound the expectation in the second summand by \begin{align*} & \sqrt{E^\Pi[\|F-F_0\|^2_{L^2}|Y^{(N)},X^{(N)}]} \sqrt{\Pi_N(F:\|F\|_{C^\beta}> M|Y^{(N)},X^{(N)})}. \end{align*} In view of \eqref{Eq:Regularisation10}, for all $D>0$ we can choose $M>0$ large enough to obtain \begin{align*} P_{F_0}^N&\Big(E^\Pi[\|F-F_0\|^2_{L^2}|Y^{(N)},X^{(N)}] \Pi_N(F:\|F\|_{C^\beta}> M|Y^{(N)},X^{(N)})>N^{-2\lambda}\Big)\\ &\le P_{F_0}^N\Big(E^\Pi[\|F-F_0\|^2_{L^2}|Y^{(N)},X^{(N)}] e^{-DN\delta_N^2}>N^{-2\lambda}\Big)+o(1). \end{align*} To bound the probability in the last line, let $\Bcal_N$ be the sets defined in \eqref{Eq:Bn} below, and note that Lemma \ref{Lemma:SmallBall1} and Lemma \ref{Lemma:HellingerAndL2SmallBalls} below jointly imply that $\Pi_N(\Bcal_N)\ge ae^{-A N\delta_N^2 }$ for some $a,A>0$. Also, let $\nu(\cdot)=\Pi_N(\cdot\cap\Bcal_N)/\Pi_N(\Bcal_N)$, and let $\Ccal_N$ be the event from \eqref{Eq:Cn}, for which Lemma 7.3.2 in \cite{GN16} implies that $P^N_{F_0}(\Ccal_N)\to1$ as $N\to\infty$. 
Then \begin{align*} &P_{F_0}^N\Big(E^\Pi[\|F-F_0\|^2_{L^2}|Y^{(N)},X^{(N)}] e^{-DN\delta_N^2}>N^{-2\lambda}\Big)\\ &\ \le P_{F_0}^N\Bigg(\frac{\int_{C(\Ocal)}\|F-F_0\|^2_{L^2}\prod_{i=1}^Np_F/p_{F_0}(Y_i,X_i)d\Pi_N(F)} {\Pi_N(\Bcal_N)\int_{\Bcal_N}\prod_{i=1}^Np_F/p_{F_0}(Y_i,X_i)d\nu(F)} e^{-DN\delta_N^2}>N^{-2\lambda},\Ccal_N\Bigg)+o(1)\\ &\ \le P_{F_0}^N\Big(\int_{C(\Ocal)}\|F-F_0\|^2_{L^2}\prod_{i=1}^Np_F/p_{F_0}(Y_i,X_i)d\Pi_N(F) >N^{-2\lambda}ae^{(D-A-2)N\delta_N^2} \Big)+o(1) \end{align*} which is upper bounded, using Markov's inequality and Fubini's theorem, by \begin{align*} &\frac{1}{a}e^{-(D-A-2)N\delta_N^2}N^{2\lambda} \int_{C(\Ocal)}\|F-F_0\|^2_{L^2}E_{F_0}^N\Bigg(\prod_{i=1}^N\frac{p_F}{p_{F_0}}(Y_i,X_i)\Bigg)d\Pi_N(F). \end{align*} Taking $D>A+2$ (and $M$ large enough in \eqref{Eq:TwoThings}), using the fact that $E_{F_0}^N\big(\prod_{i=1}^N$ $p_F/p_{F_0}(Y_i,X_i)\big)=1$, and that $E^{\Pi_N}\|F\|^2_{L^2}<\infty$ (by Fernique's theorem, e.g., \cite[Exercise 2.1.5]{GN16}), we then conclude \begin{equation} \label{idk1} P_{F_0}^N\Big(E^\Pi[\|F-F_0\|^2_{L^2}1_{\|F\|_{C^\beta}>M}|Y^{(N)},X^{(N)}]>N^{-\lambda}\Big)\to0, \quad N\to\infty. \end{equation} To handle the first term in \eqref{Eq:TwoThings}, let $f=\Phi\circ F$ and $f_0=\Phi\circ F_0$. Then for all $x\in\Ocal$, by the mean value and inverse function theorems, \begin{align*} |F(x)-F_0(x)| = |\Phi^{-1}\circ f(x)-\Phi^{-1}\circ f_0(x)| &= \frac{1}{|\Phi'(\Phi^{-1}(\eta))|}|f(x)-f_0(x)| \end{align*} for some $\eta$ lying between $f(x)$ and $f_0(x)$. If $\|F\|_{C^\beta}\le M$ then, as $\Phi$ is strictly increasing, necessarily $ f(x)=\Phi(F(x)) \in [\Phi(-M), \Phi(M)] $ for all $x\in\Ocal$. 
Similarly, the range of $f_0$ is contained in the compact interval $[\Phi(-M), \Phi(M)]$ for $M \ge \|F_0\|_\infty$, so that \begin{align*} |\Phi^{-1}\circ f(x)-\Phi^{-1}\circ f_0(x)| &\le \frac{1}{\min_{z \in [-M,M]}\Phi'(z)} |f(x)-f_0(x)| \lesssim |f(x)-f_0(x)| \end{align*} for a multiplicative constant not depending on $x\in\Ocal$. It follows that \begin{align*} \|F-F_0\|_{L^2} 1_{\|F\|_{C^\beta}\le M} &\lesssim \|f-f_0\|_{L^2}1_{\|F\|_{C^\beta}\le M}, \end{align*} and $$ E^\Pi[\|F-F_0\|_{L^2}1_{\|F\|_{C^\beta}\le M} |Y^{(N)},X^{(N)}] \lesssim E^{\tilde\Pi}[\|f-f_0\|_{L^2} |Y^{(N)},X^{(N)}]. $$ Noting that for each $L>0$ the last expectation is upper bounded by \begin{align*} L&N^{-\lambda} + E^{\tilde\Pi}\Big[\|f-f_0\|_{L^2} 1_{\|f-f_0\|_{L^2}>LN^{-\lambda}}|Y^{(N)},X^{(N)}\Big]\\ &\le LN^{-\lambda}+ \sqrt{E^{\tilde\Pi}[\|f-f_0\|^2_{L^2}|Y^{(N)},X^{(N)}]} \sqrt{\tilde\Pi_N(f:\|f-f_0\|_{L^2}>LN^{-\lambda}|Y^{(N)},X^{(N)})}, \end{align*} we can repeat the above argument, with the event $\{F:\|F\|_{C^\beta} > M\}$ replaced by the event $\{f:\|f-f_0\|_{L^2}>LN^{-\lambda}\}$, to deduce from Theorem \ref{Theo:L2Rates1} that for $D>A+2$ there exists $L>0$ large enough such that \begin{align*} P_{F_0}^N&\Big(E^{\tilde\Pi}[\|f-f_0\|^2_{L^2}| Y^{(N)},X^{(N)}] \tilde\Pi_N(f:\|f-f_0\|_{L^2}>LN^{-\lambda}|Y^{(N)},X^{(N)}) >N^{-\lambda}\Big)\\ &\lesssim e^{-(D-A-2)N\delta_N^2}N^{2\lambda} \end{align*} which combined with \eqref{idk1} and the definition of $\delta_N$ concludes the proof. \subsection{Sieve prior proofs} The proof only requires minor modifications of the proofs of Section \ref{Subsec:GaussianPriors}. We only discuss here the main points: One first applies the $L^2$-prediction risk Theorem \ref{Theo:GenFwdRates1} with a sieve prior. 
In the proof of the small ball Lemma \ref{Lemma:SmallBall1} one uses the following observations: the projection $P_{\Hcal_J}(F_0)\in \Hcal_J$ of $F_0 \in H^\alpha_K$ defined in \eqref{Eq:Projections} satisfies by (\ref{Eq:DualNormApprox}) $$\|F_0-P_{\Hcal_J}(F_0)\|_{(H^1(\Ocal))^*} \lesssim 2^{-J(\alpha+1)};$$ hence choosing $J$ such that $2^J\simeq N^{1/(2\alpha+2+d)}$, and noting also that $\|P_{\Hcal_J}(F_0)\|_{C^1}\le \|F_0\|_{C^1}<\infty$ for all $J$ by standard properties of wavelet bases, it follows from \eqref{Eq:LipCondG} that $$ \|\mathscr{G} (F_0)-\mathscr{G} (P_{\Hcal_J}(F_0))\|_{L^2} \lesssim \|F_0-P_{\Hcal_J}(F_0)\|_{(H^1)^*} \lesssim N^{-(\alpha+1)/(2\alpha+2+d)} = \delta_N. $$ Therefore, by the triangle inequality, $$ \Pi_N(F:\|\mathscr{G} (F)-\mathscr{G} (F_0)\|_{L^2}\le \delta_N/q)\ge \Pi_N(F:\|\mathscr{G} (F)-\mathscr{G} (P_{\Hcal_J}(F_0))\|_{L^2}\le q'\delta_N). $$ The rest of the proof of Lemma \ref{Lemma:SmallBall1} then carries over (with $P_{\Hcal_J}(F_0)$ replacing $F_0$), upon noting that (\ref{Eq:Norms}) and a Sobolev imbedding imply $$ \sup_{J\in\mathbb N} E^{\Pi'_J}\|F\|^2_{C^1}<\infty,~\text{ as well as }~\|F\|_{H^\alpha}\le c\|F\|_{\Hcal_J} \text{ for all } F\in\Hcal_J $$ for some constant $c>0$ independent of $J$. Moreover, the last two properties are sufficient to prove an analogue of Lemma \ref{Lemma:ApproxSets1} as well, so that Theorem \ref{Theo:GenFwdRates1} indeed applies to the sieve prior. The proof from here onwards is identical to those of Theorems \ref{Theo:FwdRates1}-\ref{Theo:PostMeanConv1} for the unsieved case, using also that what precedes implies that $\sup_J E^{\Pi'_J}\|F\|^2_{L^2}<\infty$, relevant in the proof of convergence of the posterior mean. 
\subsection{Proofs for Section \ref{Subsec:RandSeriesPriors}} \label{Subsec:ProofsRandSeriesPrior} Inspection of the proofs for rescaled priors implies that Theorems \ref{Theo:L2Rates2} and \ref{Theo:PostMeanConv2} can be deduced from Theorem \ref{Theo:FwdRates2} if we can show that posterior draws lie in an $\alpha$-Sobolev ball of fixed radius with sufficiently high frequentist probability. This is the content of the next result. \begin{lemma}\label{Lemma:HalphaBound} Under the hypotheses of Theorem \ref{Theo:L2Rates2}, there exists $\alpha^* >0$ (depending on $\alpha, d$ and $a$) such that for each $F_0\in H^{\alpha_0}_K(\Ocal), \alpha_0>\alpha^*,$ and any $D>0$ we can find $M>0$ large enough such that, as $N \to \infty$, $$ \Pi(F: \|F\|_{H^\alpha}\le M|Y^{(N)},X^{(N)}) = 1-O_{P^N_{F_0}}(e^{-DN\xi_N^2}). $$ \proof Theorem \ref{Theo:FwdRates2} implies that for all $D>0$ and sufficiently large $L,M>0$, if $J_N\in\mathbb N$ is such that $2^{J_N}\simeq N^{1/(2\alpha_0+2+d)}$ and we denote \begin{align*} \mathcal{A}_N &= \{F\in \Hcal_{J_N} : \| F\|_{H^\alpha}\le M2^{J_N\alpha}\sqrt{N}\xi_N, \ \|\mathscr{G}(F)-\mathscr{G} (F_0)\|_{L^2}\le L\xi_N\}, \end{align*} then, as $N\to \infty$, \begin{equation} \label{Eq:AvarepsilonDecay} \Pi(\mathcal{A}_N|Y^{(N)},X^{(N)}) =1-O_{P^N_{F_0}}(e^{-DN\xi_N^2}). 
\end{equation} Next, note that if $F\in\Hcal_{J_N}$, then by standard properties of wavelet bases (cf.~\eqref{Eq:EquivOfNorms}), $ \| F\|_{H^\alpha} \lesssim 2^{J_N\alpha}\| F\|_{L^2} $ for all $N$ large enough. Thus, for $P_{\Hcal_{J_N}}(F_0)$ the projection of $F_0$ onto $\Hcal_{J_N}$ defined in \eqref{Eq:Projections}, $$ \|F\|_{H^\alpha} \le \|F-P_{\Hcal_{J_N}}(F_0)\|_{H^\alpha}+\|P_{\Hcal_{J_N}}(F_0)\|_{H^\alpha} \lesssim 2^{J_N\alpha}\|F-F_0\|_{L^2}+\|F_0\|_{H^\alpha}, $$ and a Sobolev imbedding further gives $\|F\|_{L^\infty}\le M' 2^{J_N\alpha}\sqrt{N}\xi_N$, for some $M'>0$. Now letting $f=\Phi\circ F$ and $f_0=\Phi\circ F_0$, by a similar argument as in the proof of Theorem \ref{Theo:PostMeanConv1} combined with monotonicity of $\Phi'$, we see that for all $N$ large enough \begin{align*} \|F-F_0\|_{L^2} \le\frac{1}{\Phi'(-M'2^{J_N\alpha}\sqrt{N}\xi_N)} \|f-f_0\|_{L^2}. \end{align*} Then, using the assumption on the left tail of $\Phi$ in Condition \ref{Cond:LinkFunction2}, and the stability estimate \eqref{Eq:StabEstim}, \begin{align*} \|F-F_0\|_{L^2} &\lesssim (2^{J_N\alpha}\sqrt{N}\xi_N)^a\|f\|_{H^\alpha}\|G(f)-G(f_0)\|_{H^2}. \end{align*} Finally, by the interpolation inequality for Sobolev spaces \cite{Lions1972} and Lemma 23 in \cite{NVW18}, \begin{align*} \|G(f)-G(f_0)\|_{H^2} &\lesssim \|G(f)-G(f_0)\|_{L^2}^{(\alpha-1)/(\alpha+1)} \|G(f)-G(f_0)\|^{2/(\alpha+1)}_{H^{\alpha+1}}\\ &\lesssim \xi_N^{(\alpha-1)/(\alpha+1)} (\|G(f)\|_{H^{\alpha+1}}+\|G(f_0)\|_{H^{\alpha+1}})^{2/(\alpha+1)}\\ &\lesssim \xi_N^{(\alpha-1)/(\alpha+1)} (1+\|f\|^{\alpha^2+\alpha}_{H^{\alpha}})^{2/(\alpha+1)}. 
\end{align*} In conclusion, for each $F\in\mathcal{A}_N$ and sufficiently large $N$, \begin{align*} \| F\|_{H^\alpha} &\lesssim 1+ 2^{J_N\alpha}(2^{J_N\alpha}\sqrt{N}\xi_N)^a\|f\|_{H^\alpha} \xi_N^{\frac{\alpha-1}{\alpha+1}} (1+\|f\|_{H^\alpha}^{\alpha^2+\alpha})^\frac{2}{\alpha+1}. \end{align*} The last term is bounded, using Lemma 29 in \cite{NVW18}, by a multiple of \begin{align*} \xi_N^\frac{\alpha-1}{\alpha+1} 2^{J_N\alpha} (2^{J_N\alpha}\sqrt{N}\xi_N)^{2\alpha^2+2\alpha+a} = N^{-\frac{(\alpha_0+1)(\alpha-1)} {(2\alpha_0+2+d)(\alpha+1)}} N^\frac{2\alpha^3+(2+d)\alpha^2+(1+a+d)\alpha+ad/2}{2\alpha_0+2+d}, \end{align*} the last identity holding up to a log factor. Hence, if $$ \alpha_0 > \alpha^* := \frac{[2\alpha^3+(2+d)\alpha^2+(1+a+d)\alpha+ad/2](\alpha+1)}{(\alpha-1)} $$ then we conclude overall that $\|F\|_{H^\alpha}\lesssim 1+o(1)$ as $N\to\infty$ for all $F\in \mathcal{A}_N$, proving the claim in view of \eqref{Eq:AvarepsilonDecay}. \endproof \end{lemma} Replacing $\beta$ by $\alpha$ in the conclusion of Lemma \ref{Lemma:FwdH2Risk1}, the proof of Theorem \ref{Theo:L2Rates2} now proceeds as in the proof of Theorem \ref{Theo:L2Rates1} without further modification. 
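For the reader's convenience, we record the routine exponent arithmetic behind the threshold $\alpha^*$ in the preceding proof; here $2^{J_N}\simeq N^{1/(2\alpha_0+2+d)}$ as above, and $\xi_N\simeq N^{-(\alpha_0+1)/(2\alpha_0+2+d)}$ up to logarithmic factors.

```latex
% Exponent bookkeeping for the bound preceding the definition of \alpha^*:
\begin{align*}
2^{J_N\alpha}\sqrt{N}\,\xi_N
 &\simeq N^{\frac{\alpha}{2\alpha_0+2+d}}\,
   N^{\frac{1}{2}-\frac{\alpha_0+1}{2\alpha_0+2+d}}
  = N^{\frac{\alpha+d/2}{2\alpha_0+2+d}},\\
2^{J_N\alpha}\big(2^{J_N\alpha}\sqrt{N}\,\xi_N\big)^{2\alpha^2+2\alpha+a}
 &\simeq N^{\frac{\alpha+(\alpha+d/2)(2\alpha^2+2\alpha+a)}{2\alpha_0+2+d}}
  = N^{\frac{2\alpha^3+(2+d)\alpha^2+(1+a+d)\alpha+ad/2}{2\alpha_0+2+d}},
\end{align*}
% so the product with \xi_N^{(\alpha-1)/(\alpha+1)} is o(1) (up to log factors)
% as soon as (\alpha_0+1)(\alpha-1)/(\alpha+1) exceeds the numerator of the
% last exponent, which holds whenever \alpha_0 > \alpha^*.
```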
Likewise, Theorem \ref{Theo:PostMeanConv2} can be shown following the same argument as in the proof of Theorem \ref{Theo:PostMeanConv1}, noting that for $\Pi$ the random series prior in \eqref{Eq:Prior2}, it also holds that $E^\Pi\|F\|^2_{L^2}<\infty$.
\section{Brenner's model, compressible flows with mass diffusion} \section{Introduction} \label{I} The concept of measure-valued solution to partial differential equations was introduced by DiPerna~\cite{DiP} in the context of conservation laws. He used Young measures in order to conveniently pass to the artificial viscosity limit and in some situations (e.g.\ the scalar case) proved a posteriori that the measure-valued solution is atomic, i.e.\ it is in fact a solution in the sense of distributions. For general systems of conservation laws there is no hope to obtain (entropy) solutions in the distributional sense and therefore there seems to be no alternative to the use of measure-valued solutions or related concepts. In the realm of inviscid fluid dynamics, the existence of measure-valued solutions has been established for a variety of models~\cite{DiMa, Neustup, Gwi}. Measure-valued solutions to problems involving \emph{viscous} fluids were introduced in the early nineties in \cite{MNRR} and may seem obsolete nowadays in the light of the theory proposed by P.-L. Lions \cite{LI4} and extended by Feireisl et al.\ \cite{FNP} in the framework of weak solutions for the barotropic Navier-Stokes system \begin{equation}\label{NS} \begin{aligned} \partial_t \vr + \Div (\vr \vu) &=0 \\ \partial_t (\vr \vu) + \Div (\vr \vu \otimes \vu) + \Grad p(\vr) &= \Div \mathbb{S} (\Grad \vu), \end{aligned} \end{equation} where $\vr$ is the density, $\vu$ the velocity, $p$ the given pressure function, and $\mathbb{S}$ the Newtonian viscous stress. 
The reason we consider measure-valued solutions nevertheless is twofold: First, the results of this paper pertain to any adiabatic exponent greater than one, whereas the known existence theory for weak solutions requires $\gamma>3/2$; second, there remains a vast class of approximate problems including systems with higher order viscosities and solutions to certain numerical schemes for which it is rather easy to show that they generate a measure-valued solution whereas convergence to a weak solution is either not known or difficult to prove. This motivates the present study, where we introduce a new concept of (dissipative) measure-valued solution to the system~\eqref{NS}. The main novelty is that we have to deal with nonlinearities both in the velocity and its derivative, since we need to make sense of the energy inequality \[ \partial_t \int_{\Omega} \left[ \frac{1}{2} \vr |\vu|^2 + P(\vr) \right] + \int_\Omega \left[ \mathbb{S} (\Grad \vu) : \Grad \vu\right] \leq 0 \] in the measure-valued framework. Indeed, Neustupa~\cite{Neustup} considered measure-valued solutions of~\eqref{NS}, but his theory does not involve the energy. Young measures do not seem suitable to describe the limit distributions of a map and its gradient simultaneously, as it is unclear how the information that one component of the measure is in some sense the gradient of the other component is encoded in the measure. We solve this issue by introducing a ``dissipation defect'' (see Definition \ref{DD1}), which encodes all conceivable concentration effects in the density and the velocity, and concentration \emph{and} oscillation effects in the gradient of the velocity. It then turns out that postulating a Poincar\'e-type inequality (see \eqref{KoPo}), which is satisfied by any measure generated by a reasonable approximating sequence of solutions for~\eqref{NS}, already suffices to ensure weak-strong uniqueness. 
As a side effect, we thus avoid the notationally somewhat heavy framework of Alibert and Bouchitt\'e~\cite{AlBo} and give the most extensive definition of dissipative measure-valued solution that still complies with weak-strong uniqueness (cf.\ also the discussion in Section~\ref{dissipative}). Indeed, the proof of weak-strong uniqueness for our dissipative measure-valued solutions is the main point of this paper (Theorem~\ref{TT1}). Weak-strong uniqueness means that classical solutions are stable within the class of dissipative measure-valued solutions. For the incompressible Navier-Stokes equations, a weak-strong uniqueness principle was shown for the first time in the classical works of Prodi~\cite{Pro} and Serrin~\cite{Ser}. Surprisingly, even in the measure-valued setting, weak-strong uniqueness results have been proved: For the incompressible Euler equations and bounded solutions of conservation laws this was done in~\cite{BrDLSz}, and for the compressible Euler system and a related model for granular flow in~\cite{GwSwWi}. In the context of elastodynamics, dissipative measure-valued solutions and their weak-strong uniqueness property were studied in~\cite{DeStTz}. Here, we give the first instance of weak-strong uniqueness for measure-valued solutions of a viscous fluid model. We also identify a large class of problems generating dissipative measure-valued solutions including the pressure-density equations of state that are still beyond the reach of the current theory of weak solutions. We make a similar observation for certain numerical schemes, thus adopting the viewpoint of Fjordholm et al.\ \cite{FjKaMiTa}, who argue (in the context of hyperbolic systems of conservation laws) that dissipative measure-valued solutions are a more appropriate solution concept compared to weak entropy solutions, because the former are obtained as limits of common numerical approximations whereas the latter are not. 
As a further application of weak-strong uniqueness, we show (Theorem~\ref{TT2}) that every approximate sequence of solutions of~\eqref{NS} with uniformly bounded density converges to the unique smooth solution. \section{Definition and existence of dissipative measure-valued solutions} \subsection{Motivation: Brenner's model in fluid dynamics} \label{BM} To motivate our definition of measure-valued solution, we consider a model of a viscous compressible fluid proposed by Brenner \cite{BREN}, where the density $\vr = \vr(t,x)$ and the velocity $\vu = \vu(t,x)$ satisfy \begin{eqnarray} \label{B1} \partial_t \vr + \Div (\vr \vu) &=& K \Delta \vr \\ \label{B2} \partial_t (\vr \vu) + \Div (\vr \vu \otimes \vu) + \Grad p(\vr) &=& \Div \mathbb{S} (\Grad \vu)+ K \Div (\vu \otimes \Grad \vr), \end{eqnarray} where $K > 0$ is a parameter, and $\mathbb{S}$ the standard Newtonian viscous stress \begin{equation} \label{B3} \mathbb{S}(\Grad \vu) = \mu \left( \Grad \vu + \Grad^t \vu - \frac{2}{3} \Div \vu \mathbb{I} \right) + \eta \Div \vu \mathbb{I},\ \mu > 0,\ \eta \geq 0. \end{equation} Note that $\mathbb{S}$ depends only on the symmetric part of its argument. Problem (\ref{B1}--\ref{B3}) may be supplemented by relevant boundary conditions, here \begin{equation} \label{B4} \vu|_{\partial \Omega} = 0, \ \Grad \vr \cdot \vc{n}|_{\partial \Omega} = 0, \end{equation} where $\Omega \subset R^N$, $N=2,3$, is a regular bounded domain. In addition, sufficiently smooth solutions of (\ref{B1}--\ref{B4}) obey the total energy balance: \begin{equation} \label{B5} \partial_t \intO{ \left[ \frac{1}{2} \vr |\vu|^2 + P(\vr) \right] } + \intO{ \left[ \tn{S} (\Grad \vu) : \Grad \vu + K P''(\vr) |\Grad \vr|^2 \right]} = 0, \end{equation} where $P$ denotes the pressure potential, \[ P(\vr) = \vr \int_1^\vr \frac{p(z)}{z^2} \ {\rm d}z. 
\] Leaving aside the physical relevance of Brenner's model, discussed and criticized in several studies (see, e.g., \"{O}ttinger, Struchtrup, and Liu \cite{OeStLi}), we examine the limit of a family of solutions $\{ \vr^K, \vu^K \}_{K > 0}$ for $K \to 0$. Interestingly, system (\ref{B1}--\ref{B3}) is almost identical to the approximate problem used in \cite{EF70} in the construction of weak solutions to the barotropic Navier-Stokes system; in particular, the existence of $\{ \vr^K, \vu^K \}_{K > 0}$ for a fairly general class of initial data may be established by the method detailed in \cite[Chapter 7]{EF70}. A more general model of a heat-conducting fluid based on Brenner's ideas has also been analyzed in \cite{FeiVas}. We suppose that the energy of the initial data is bounded \[ \intO{ \left[ \frac{1}{2} \vr^K_0 |\vu^K_0|^2 + P(\vr^K_0) \right] } \leq c \] uniformly for $K \to 0$. In order to deduce uniform bounds, certain coercivity assumptions must be imposed on the pressure term: \begin{equation} \label{BB6} p \in C[0, \infty) \cap C^2(0, \infty), \ p(0) = 0, \ p'(\vr) > 0 \ \mbox{for}\ \vr > 0, \ \liminf_{\vr \to \infty} p'(\vr) > 0,\ \liminf_{\vr \to \infty} \frac{P(\vr)}{p(\vr)} > 0. \end{equation} Seeing that $P''(\vr) = p'(\vr)/\vr$, we deduce from the energy balance (\ref{B5}) the following bounds \begin{equation} \label{BB7} \begin{split} \sup_{\tau \in [0,T]} \intO{ P(\vr^K) (\tau, \cdot) } \leq c \ & \Rightarrow \sup_{\tau \in [0,T]} \intO{ \vr^K \log(\vr^K) (\tau, \cdot) } \leq c, \\ \sup_{\tau \in [0,T]} \intO{ \vr^K |\vu^K |^2 (\tau, \cdot) } & \leq c, \\ \int_0^T \intO{ \mathbb{S}(\Grad \vu^K) : \Grad \vu^K } \leq c \ & \Rightarrow \ \mbox{(Korn inequality)}\ \int_0^T \intO{ | \Grad \vu^K |^2 } \leq c \\ & \Rightarrow \ \mbox{(Poincar\' e inequality)}\ \int_0^T \intO{ | \vu^K |^2 } \leq c,\\ K \int_0^T \intO{ \frac{p'(\vr^K)}{\vr^K} |\Grad \vr^K |^2 } & \leq c \end{split} \end{equation} uniformly for $K \to 0$. 
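For the reader's convenience we verify the identity $P''(\vr)=p'(\vr)/\vr$ used above; we also note, as a purely illustrative example (not needed in what follows), that the isentropic pressure $p(\vr)=a\vr^\gamma$, $a>0$, $\gamma>1$, satisfies hypothesis (\ref{BB6}).

```latex
% Direct verification of P''(\vr) = p'(\vr)/\vr from the definition of P:
\[
P'(\vr) = \int_1^\vr \frac{p(z)}{z^2}\,{\rm d}z + \frac{p(\vr)}{\vr}, \qquad
P''(\vr) = \frac{p(\vr)}{\vr^2} + \frac{p'(\vr)\,\vr - p(\vr)}{\vr^2}
         = \frac{p'(\vr)}{\vr}.
\]
% Illustrative example: the isentropic pressure p(\vr) = a\vr^\gamma, \gamma>1:
\[
P(\vr) = \vr \int_1^\vr a z^{\gamma-2}\,{\rm d}z
       = \frac{a(\vr^\gamma-\vr)}{\gamma-1},
\qquad
p'(\vr) = a\gamma\vr^{\gamma-1} \to \infty, \quad
\frac{P(\vr)}{p(\vr)} \to \frac{1}{\gamma-1} > 0 \quad (\vr\to\infty),
\]
% so all requirements in (BB6) hold in this case.
```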
Now, system (\ref{B1}, \ref{B2}) can be written in the weak form \begin{equation} \label{wB1} \left[ \intO{ \vr^K \psi} \right]_{t = 0}^{t = \tau} = \int_0^\tau \intO{ \Big[ \vr^K \partial_t \psi + \vr^K \vu^K \cdot \Grad \psi - K \Grad \vr^K \cdot \Grad \psi \Big] } \ \dt \end{equation} for any $\psi \in C^1([0,T] \times \Ov{\Omega})$, \begin{equation} \label{wB2} \begin{split} &\left[ \intO{ \vr^K \vu^K \cdot \varphi} \right]_{t = 0}^{t = \tau} = \int_0^\tau \intO{ \Big[ \vr^K \vu^K \cdot \partial_t \varphi + \vr^K (\vu^K \otimes \vu^K ) : \Grad \varphi + p(\vr^K) \Div \varphi \Big] } \ \dt\\ & - \int_0^\tau \intO{ \Big[ \mathbb{S}(\Grad \vu^K ) : \Grad \varphi + K (\vu^K \otimes \Grad \vr^K) : \Grad \varphi \Big] } \ \dt \end{split} \end{equation} for any $\varphi \in C^1([0,T] \times \Ov{\Omega})$, $\varphi|_{\partial \Omega} = 0$. The first observation is that the $K$-dependent quantities vanish in the asymptotic limit $K \to 0$ as long as (\ref{BB7}) holds. To see this, note that \begin{equation} \begin{split} \label{IB1} K \int_0^\tau \intO{ \Grad \vr^K \cdot \Grad \psi } \ \dt &= \sqrt{K} \int_0^\tau \intO{ \sqrt{K} \frac{\Grad \vr^K}{\sqrt{\vr^K}} \cdot \sqrt{\vr^K} \Grad \psi } \ \dt,\\ K \int_0^\tau \intO{ (\vu^K \otimes \Grad \vr^K ) : \Grad \varphi } \ \dt &= \sqrt{K} \int_0^\tau \intO{ \left( \sqrt{\vr^K} \vu^K \otimes \sqrt{K} \frac{\Grad \vr^K}{\sqrt{\vr^K}}\right ) : \Grad \varphi }; \end{split} \end{equation} whence, by virtue of hypothesis (\ref{BB6}), these integrals are controlled by (\ref{BB7}) at least on the set where $\vr^K \geq 1$. In order to estimate $\Grad \vr^K$ on the set where $\vr^K$ is small, we multiply (\ref{B1}) by $b'(\vr^K)$ obtaining \begin{equation} \label{RB1} \partial_t b(\vr^K) + \Div (b(\vr^K) \vu^K ) + \left( b'(\vr^K) \vr^K - b(\vr^K) \right)\Div \vu^K = K \Div \left( b'(\vr^K) \Grad \vr^K \right) - Kb''(\vr^K) |\Grad \vr^K|^2. 
\end{equation} Such a step can be rigorously justified for the solutions of Brenner's problem discussed in \cite{FeiVas} provided, for instance, $b \in C^\infty_c[0, \infty)$. Thus taking $b$ such that $b(\vr) = \vr^2$ for $\vr \leq 1$, integrating (\ref{RB1}) and using (\ref{BB7}) we deduce that \[ K \int\int_{ \{ \vr^K \leq 1 \} } |\Grad \vr^K |^2 \ \dxdt \leq c \ \mbox{uniformly for}\ K \to 0, \] which provides the necessary bounds for the integrals in (\ref{IB1}) on the set where $\vr^K \leq 1$. Indeed, using the fact that $b$ is bounded and the bounds established in (\ref{BB7}) we deduce \[ \left| K \int_0^T \intO{ b''(\vr^K) |\Grad \vr^K |^2 } \ \dt \right| \leq c. \] On the other hand, thanks to our choice of $b$, \[ K \int\int_{ \{ \vr^K \leq 1 \} } |\Grad \vr^K |^2 \ \dxdt = K \int_0^T \intO{ b''(\vr^K) |\Grad \vr^K |^2 } \ \dt - K \int\int_{ \{ \vr^K > 1 \} } b''(\vr^K) |\Grad \vr^K |^2 \ \dxdt, \] where the right-most integral is bounded in view of (\ref{BB7}), hypothesis (\ref{BB6}) and the fact that $b''(\vr^K)$ vanishes for large $\vr^K$. Consequently, we may, at least formally, let $K \to 0$ in (\ref{wB1}), in (\ref{wB2}) and also in (\ref{B5}) obtaining a measure-valued solution to the barotropic Navier-Stokes system: \begin{eqnarray} \label{I1} \partial_t \vr + \Div (\vr \vu) &=& 0, \\ \label{I2} \partial_t (\vr \vu) + \Div (\vr \vu \otimes \vu) + \Grad p(\vr) &=& \Div \mathbb{S} (\Grad \vu), \\ \label{I3} \vu|_{\partial \Omega} &=& 0. \end{eqnarray} More specifically, as all integrands in (\ref{wB1}), (\ref{wB2}) admit uniform bounds at least in the Lebesgue norm $L^1$, it is convenient to use the well developed framework of parametrized measures associated to the family of equi-integrable functions $\{ \vr^K, \vu^K \}_{K > 0}$ generating a Young measure \[ \nu_{t,x} \in \mathcal{P} \left([0, \infty) \times R^N \right) \ \mbox{for a.a.}\ (t,x) \in (0,T) \times \Omega, \] cf. Pedregal \cite[Chapter 6, Theorem 6.2]{PED1}. 
We will systematically use the notation \[ \Ov{ F(\vr, \vu) }(t,x) = \langle \nu_{t,x} ; F(s, \vc{v}) \rangle \ \mbox{for the dummy variables}\ s \approx \vr, \ \vc{v} \approx \vu. \] Focusing on the energy balance (\ref{B5}) we first take advantage of the no-slip boundary conditions and rewrite the viscous dissipation term in a more convenient form \[ \intO{ \mathbb{S} (\Grad \vu^K) : \Grad \vu^K } = \intO{ \left[ \mu |\Grad \vu^K |^2 + \lambda |\Div \vu^K |^2 \right] },\ \lambda = \frac{\mu}{3} + \eta > 0. \] Now, we identify \[ \left[ \frac{1}{2} \vr^K |\vu^K|^2 + P(\vr^K) \right] (\tau, \cdot) \in \mathcal{M} (\Ov{\Omega}) \ \mbox{bounded uniformly for}\ \tau \in [0,T]; \] \[ \left[ \mu |\Grad \vu^K |^2 + \lambda |\Div \vu^K |^2 \right] \ \mbox{bounded in}\ \mathcal{M}^+ ([0,T] \times \Ov{\Omega}); \] whence, passing to a subsequence as the case may be, we may assume that \[ \begin{split} \left[ \frac{1}{2} \vr^K |\vu^K|^2 + P(\vr^K) \right] (\tau, \cdot) & \to E \ \mbox{weakly-(*) in}\ L^\infty_{\rm weak}(0,T; \mathcal{M}(\Ov{\Omega})),\\ \left[ \mu |\Grad \vu^K |^2 + \lambda |\Div \vu^K |^2 \right] &\to \sigma \ \mbox{weakly-(*) in}\ \mathcal{M}^+ ([0,T] \times \Ov{\Omega}). 
\end{split} \] Thus, introducing new (non-negative) measures \[ E_\infty = E - \langle \nu_{t,x} ; \frac{1}{2} s |\vc{v}|^2 + P(s) \rangle \ \dx, \ \sigma_\infty = \sigma - \left[ \mu |\nabla \langle \nu_{t,x};\vc{v}\rangle|^2 + \lambda \left( {\rm tr}\, \nabla \langle \nu_{t,x};\vc{v}\rangle\right)^2 \right] \ \dxdt, \] we may perform the limit $K \to 0$ in the energy balance (\ref{B5}) obtaining \begin{equation} \label{mvEI} \begin{split} &\intO{ \Ov{ \left( \frac{1}{2} \vr |\vu|^2 + P(\vr) \right) }(\tau, \cdot) } + E_\infty(\tau)[\Ov{\Omega}] + \int_0^\tau \intO{ \mu |\Grad\Ov{\vu}|^2 + \lambda |\Div \Ov{\vu}|^2 } \ \dt + \sigma_\infty [[0,\tau] \times \Ov{\Omega}] \\ &\leq \intO{ \Ov{ \left( \frac{1}{2} \vr_0 |\vu_0|^2 + P(\vr_0) \right) } } + E_\infty(0)[\Ov{\Omega}] \end{split} \end{equation} for a.a. $\tau \in (0,T)$. Applying a similar treatment to (\ref{wB1}) we deduce \begin{equation} \label{mvB1} \left[ \intO{ \Ov{\vr} \psi} \right]_{t = 0}^{t = \tau} = \int_0^\tau \intO{ \Big[ \Ov{\vr} \partial_t \psi + \Ov{\vr \vu} \cdot \Grad \psi \Big] } \ \dt \end{equation} for any $\psi \in C^1([0,T] \times \Ov{\Omega})$. Note that (\ref{mvB1}) holds for any $\tau$ as the family $\{ \vr^K \}_{K > 0}$ is precompact in $C_{\rm weak}([0,T]; L^1(\Omega))$. Indeed, precompactness follows from the uniform bound for $\{\vr^K\}$ in $L\log L$ in \eqref{BB7}. Our final goal is to perform the limit $K \to 0$ in (\ref{wB2}). This is a bit more delicate as both the convective term $\vr^K \vu^K \otimes \vu^K$ and $p(\vr^K)$ are bounded only in $L^\infty(L^1)$. We use the following result: \begin{Lemma}\label{Lmeas} Let $\{ \vc{Z}_n \}_{n = 1}^\infty$, $\vc{Z}_n : Q \to R^N$ be a sequence of equi-integrable functions generating a Young measure $\nu_y$, $y \in Q$, where $Q \subset R^M$ is a bounded domain. 
Let \[ G: R^N \to [0, \infty) \] be a continuous function such that \[ \sup_{n \geq 1} \| G(\vc{Z}_n) \|_{L^1(Q)} < \infty, \] and let $F$ be a continuous function such that \[ F : R^N \to R, \ \ |F(\vc{Z})| \leq G(\vc{Z}) \ \mbox{for all}\ \vc{Z} \in R^N. \] Denote \[ F_\infty = \tilde F - \left< \nu_y ; F(\vc{Z}) \right> {\rm d}y ,\ G_\infty = \tilde G - \left< \nu_y ; G(\vc{Z}) \right> {\rm d}y, \] where $\tilde F \in \mathcal{M}(\Ov{Q})$, $\tilde G \in \mathcal{M} (\Ov{Q})$ are the weak-star limits of $\{ F(\vc{Z}_n) \}_{n \geq 1}$, $\{ G(\vc{Z}_n) \}_{n \geq 1}$ in $\mathcal{M}(\Ov{Q})$. Then \[ | F_\infty | \leq G_\infty. \] \end{Lemma} \bProof For $\phi \in C(\Ov{Q})$, $\phi \geq 0$, write \[ \left< \tilde F; \phi \right> = \lim_{n \to \infty} \int_{ |\vc{Z}_n| \leq M} F(\vc{Z}_n) \phi \ {\rm d}y + \lim_{n \to \infty} \int_{ |\vc{Z}_n| > M} F(\vc{Z}_n) \phi \ {\rm d}y, \] \[ \left< \tilde G; \phi \right> = \lim_{n \to \infty} \int_{ |\vc{Z}_n| \leq M} G(\vc{Z}_n) \phi \ {\rm d}y + \lim_{n \to \infty} \int_{ |\vc{Z}_n| > M} G(\vc{Z}_n) \phi \ {\rm d}y. \] Applying the Lebesgue dominated convergence theorem, we get \[ \lim_{M \to \infty} \left( \lim_{n \to \infty} \int_{ |\vc{Z}_n| \leq M} F(\vc{Z}_n) \phi \ {\rm d}y \right) = \int_Q \left< \nu_y ; F(\vc{Z}) \right> \phi \ {\rm d}y, \] \[ \lim_{M \to \infty} \left( \lim_{n \to \infty} \int_{ |\vc{Z}_n| \leq M} G(\vc{Z}_n) \phi \ {\rm d}y \right) = \int_Q \left< \nu_y ; G(\vc{Z}) \right> \phi \ {\rm d}y. \] Consequently, \[ \left< F_\infty ; \phi \right> = \lim_{M \to \infty} \left( \lim_{n \to \infty} \int_{ |\vc{Z}_n| > M} F(\vc{Z}_n) \phi \ {\rm d}y \right), \ \left< G_\infty ; \phi \right> = \lim_{M \to \infty} \left( \lim_{n \to \infty} \int_{ |\vc{Z}_n| > M} G(\vc{Z}_n) \phi \ {\rm d}y \right). \] As $|F| \leq G$, the desired result follows.
\qed Seeing that \[ |\vr u_i u_j| \leq \vr |\vu|^2 \ \mbox{and, by virtue of hypothesis (\ref{BB6})},\ p(\vr) \leq a P(\vr) \ \mbox{for}\ \vr \gg 1, \] we may let $K \to 0$ in (\ref{wB2}) to deduce \begin{equation} \label{mvB2} \begin{split} &\left[ \intO{ \Ov{\vr \vu} \cdot \varphi} \right]_{t = 0}^{t = \tau} = \int_0^\tau \intO{ \Big[ \Ov{\vr \vu} \cdot \partial_t \varphi + \Ov{ \vr (\vu \otimes \vu ) } : \Grad \varphi + \Ov{p(\vr)} \Div \varphi \Big] } \ \dt\\ & - \int_0^\tau \intO{ \Ov{ \mathbb{S}(\Grad \vu ) } : \Grad \varphi } \ \dt + \int_0^\tau \left< {r}^M ; \Grad \varphi \right> \ \dt \end{split} \end{equation} for any $\varphi \in C^1([0,T] \times \Ov{\Omega})$, $\varphi|_{\partial \Omega} = 0$, where \[ r^M = \left\{ r^M_{i,j} \right\}_{i,j=1}^N,\ r^M_{i,j} \in L^\infty_{\rm weak} (0,T; \mathcal{M}(\Ov{\Omega})),\ |r^M_{i,j}(\tau)| \leq c E_\infty (\tau) \ \mbox{for a.a.} \ \tau \in (0,T). \] \subsection{Dissipative measure-valued solutions to the Navier-Stokes system}\label{dissipative} Motivated by the previous considerations, we introduce the concept of \emph{dissipative measure-valued solution} to the barotropic Navier-Stokes system. \begin{Definition} \label{DD1} We say that a parameterized measure $\{ \nu_{t,x} \}_{(t,x) \in (0,T) \times \Omega }$, \[ \nu \in L^{\infty}_{\rm weak}\left( (0,T) \times \Omega; \mathcal{P} \left([0,\infty) \times R^N \right) \right),\ \left< \nu_{t,x}; s \right> \equiv \vr,\ \left< \nu_{t,x}; \vc{v} \right> \equiv \vu \] is a dissipative measure-valued solution of the Navier-Stokes system (\ref{I1}--\ref{I3}) in $(0,T) \times \Omega$, with the initial conditions $\nu_0$ and dissipation defect $\mathcal{D}$, \[ \mathcal{D} \in L^\infty(0,T), \ \mathcal{D} \geq 0, \] if the following holds.
\begin{itemize} \item {\bf Equation of continuity.} There exists a measure $r^C\in L^1([0,T];\mathcal{M}(\Ov{\Omega}))$ and $\chi\in L^1(0,T)$ such that for a.a.\ $\tau\in(0,T)$ and every $\psi \in C^1([0,T] \times \Ov{\Omega})$, \begin{equation} \left| \langle r^C (\tau) ; \Grad\psi \rangle \right| \leq \chi(\tau) \mathcal{D} (\tau) \| \psi \|_{C^1(\Ov{\Omega})} \end{equation} and \begin{equation} \label{dmvB1} \begin{split} \intO{ \langle \nu_{\tau,x}; s \rangle \psi (\tau, \cdot) } &- \intO{ \langle \nu_{0}; s \rangle \psi (0, \cdot) } \\ &= \int_0^\tau \intO{ \Big[ \langle \nu_{t,x}; s \rangle \partial_t \psi + \langle \nu_{t,x}; s \vc{v} \rangle \cdot \Grad \psi \Big] } \ \dt + \int_0^\tau \langle r^C; \Grad \psi \rangle \ \dt. \end{split} \end{equation} \color{black} \item {\bf Momentum equation.} \[ \vu = \left< \nu_{t,x}; \vc{v} \right> \in L^2(0,T; W^{1,2}_0 (\Omega;R^N)), \] and there exists a measure $r^M\in L^1([0,T];\mathcal{M}(\Ov{\Omega}))$ and $\xi\in L^1(0,T)$ such that for a.a.\ $\tau\in(0,T)$ and every $\varphi \in C^1([0,T] \times \Ov{\Omega}; R^N)$, $\varphi|_{\partial \Omega} = 0$, \begin{equation} \left| \langle r^M (\tau) ; \Grad\varphi \rangle \right| \leq \xi(\tau) \mathcal{D} (\tau) \| \varphi \|_{C^1(\Ov{\Omega})} \end{equation} and \begin{equation} \label{dmvB2} \begin{split} &\intO{ \langle \nu_{\tau,x}; s \vc{v} \rangle \cdot \varphi (\tau, \cdot) } - \intO{ \langle \nu_{0}; s \vc{v} \rangle \cdot \varphi (0, \cdot) }\\ &= \int_0^\tau \intO{ \Big[ \langle \nu_{t,x} ; s \vc{v} \rangle \cdot \partial_t \varphi + \langle \nu_{t,x}; s (\vc{v} \otimes \vc{v} ) \rangle : \Grad \varphi + \langle \nu_{t,x} ; p(s) \rangle \Div \varphi \Big] } \ \dt\\ & - \int_0^\tau \intO{ \mathbb{S}({\Grad \vu }) : \Grad \varphi } \ \dt + \int_0^\tau \left< {r}^M ; \Grad \varphi \right> \ \dt. 
\end{split} \end{equation} \color{black} \item{\bf Energy inequality.} \begin{equation} \label{dmvEI} \begin{split} \intO{ \left< \nu_{\tau,x}; \left( \frac{1}{2} s |\vc{v}|^2 + P(s) \right) \right> } &+ \int_0^\tau \intO{ \mathbb{S}(\Grad \vu) : \Grad \vu } \ \dt + \mathcal{D}(\tau) \\ &\leq \intO{ \left< \nu_0; \left( \frac{1}{2} s |\vc{v}|^2 + P(s) \right) \right> } \end{split} \end{equation} for a.a. $\tau \in (0,T)$. In addition, the following version of ``Poincar\' e's inequality'' holds for a.a. $\tau \in (0,T)$: \begin{equation} \label{KoPo} \int_0^\tau \intO{ \left< \nu_{t,x} ; |\vc{v} - \vu|^2 \right> } \ \dt \leq c_P \mathcal{D}(\tau). \end{equation} \end{itemize} { \begin{Remark} Hypothesis \eqref{KoPo} is motivated by the following observation: Suppose that $$ \vu^\varepsilon\to \vu\ \mbox{weakly in } \ L^2(0,T;W^{1,2}_0(\Omega;R^N)), $$ then \begin{equation*} \begin{split} \int_0^\tau \intO{ \left< \nu_{t,x} ; |\vc{v} - \vu|^2 \right> } \ \dt&=\lim\limits_{\varepsilon\to0} \int_0^\tau\intO{|\vu^\varepsilon-\vu|^2}\ \dt\le c_P\lim\limits_{\varepsilon\to0}\int_0^\tau\intO{|\nabla \vu^\varepsilon-\nabla\vu|^2}\ \dt\\ &=c_P\lim\limits_{\varepsilon\to0}\int_0^\tau\intO{|\nabla \vu^\varepsilon|^2-|\nabla\vu|^2}\ \dt \leq c_P \mathcal{D}(\tau), \end{split} \end{equation*} provided the dissipation defect $\mathcal{D}$ ``contains'' the oscillations and concentrations in the velocity gradient. \end{Remark} } \end{Definition} \color{black} We tacitly assume that all integrals in (\ref{dmvB1}--\ref{KoPo}) are well defined, meaning that all integrands are measurable and at least integrable. Notice that $\mathbb{S}(\Grad \vu): \Grad \vu \geq 0$, so that the dissipative term in the energy inequality is nonnegative. \color{black} The function $\mathcal{D}$ represents a dissipation defect usually attributed to (hypothetical) singularities that may appear in the course of the fluid evolution.
The measure-valued formulation contains a minimal piece of information encoded in system (\ref{I1}--\ref{I3}). In contrast with the definition introduced by Neustupa \cite{Neustup}, the oscillatory and concentration components are clearly separated and, more importantly, the energy balance is included as an integral part of the present approach. Although one often uses the framework of Alibert and Bouchitt\'e~\cite{AlBo} in order to handle oscillations and concentrations (for instance in \cite{Gwi, BrDLSz, GwSwWi}), we choose here to give a somewhat simpler representation of the concentration effects, thereby avoiding the use of the concentration-angle measure. Indeed, the generalized Young measures of Alibert-Bouchitt\'e capture information on \emph{all} nonlinear functions of the generating sequence with suitable growth, whereas for our purposes this information is not fully needed, as we deal only with specific nonlinearities (such as $\rho\vc{u}\otimes\vc{u}$, $|\Grad\vc{u}|^2$, etc.), which are all encoded in the dissipation defect $\mathcal{D}$. This approach is inspired by~\cite{DeStTz}. We feel that the present formulation improves readability, and, more importantly, considerably extends the class of possible applications of the weak-strong uniqueness principle stated below. Indeed, it is possible to define dissipative measure-valued solutions in the framework of Alibert-Bouchitt\'e and show that they give rise to dissipative measure-valued solutions as defined above, but presumably not vice versa. Let us also point out that an analogue of our dissipative measure-valued solutions could also be considered for the incompressible and compressible Euler systems and might thus lead to a slight simplification and generalization of the results in~\cite{BrDLSz} and \cite{GwSwWi}.
\color{black} The considerations of Section~\ref{BM} immediately yield the following existence result: \color{black} \begin{Theorem} Suppose $\Omega$ is a regular bounded domain in $R^2$ or $R^3$, and suppose the pressure satisfies~\eqref{BB6}. If $(\rho_0,\vc{u}_0)$ is initial data with finite energy, then there exists a dissipative measure-valued solution with initial data \begin{equation} \nu_0=\delta_{(\rho_0,\vc{u}_0)}. \end{equation} \end{Theorem} \begin{proof} For every $K>0$, we find a weak solution to Brenner's model with initial data $\vc{u}_0^K\in C_c^{\infty}(\Omega)$ and $\rho^K_0\in C^\infty(\Ov{\Omega})$ such that $\Grad\rho^K_0\cdot\vc{n}|_{\partial \Omega} = 0$ and such that $\rho_0^K\to\rho_0$, $\rho_0^K\vc{u}^K_0\to\rho_0\vc{u}_0$, and \[ \frac{1}{2}\rho_0^K|\vc{u}^K_0|^2+P(\rho_0^K)\to\frac{1}{2}\rho_0|\vc{u}_0|^2+P(\rho_0) \] in $L^1(\Omega)$, respectively. Indeed, it is easy to see that such an approximation of the initial density exists (use a simple truncation and smoothing argument). Then, the arguments of Section~\ref{BM} yield a dissipative measure-valued solution with \[ \mathcal{D}(\tau)=E_\infty(\tau)[\Ov{\Omega}]+\sigma_\infty[[0,\tau]\times\Ov{\Omega}] \] for a.a.\ $\tau\in(0,T)$. Moreover, we have $r^C=0$ and $\chi\equiv0$, $\xi\equiv c$. The Poincar\'e-Korn inequality~\eqref{KoPo} is an easy consequence of the respective inequality for each $\vc{u}^K$. \end{proof} Note that our definition of dissipative measure-valued solutions is arguably broader than necessary: For instance, any approximation sequence with a uniform bound on the energy will not concentrate in the momentum, whence $r^C=0$. We choose to include such an effect in our definition anyway since even in this potentially larger class of measure-valued solutions we can still show weak-strong uniqueness: a measure-valued and a smooth solution starting from the same initial data coincide as long as the latter exists. 
In other words, the set of classical (smooth) solutions is stable in the class of dissipative measure-valued solutions. Showing this property is the main goal of the present paper. \section{Relative energy} The commonly used form of the relative energy (entropy) functional in the context of weak solutions to the barotropic Navier-Stokes system reads \[ \mathcal{E} \left( \vr, \vu \ \Big| r, \vc{U} \right) = \intO{ \left[ \frac{1}{2} \vr |\vu - \vc{U}|^2 + P(\vr) - P'(r) (\vr - r) - P(r) \right] }, \] where $[\vr, \vu]$ is a weak solution and $r$ and $\vc{U}$ are arbitrary ``test'' functions mimicking the basic properties of $\vr$, $\vu$, specifically, $r$ is positive and $\vc{U}$ satisfies the relevant boundary conditions, see Feireisl et al.\ \cite{FeJiNo}, Germain \cite{Ger}, Mellet and Vasseur \cite{MeVa1}, among others. Here, the crucial observation is that \[ \mathcal{E} \left( \vr, \vu \ \Big| r, \vc{U} \right) = \intO{ \left[ \frac{1}{2} \vr |\vu|^2 + P(\vr) \right] } - \intO{ \vr \vu \cdot \vc{U} } + \intO{ \frac{1}{2} \vr |\vc{U}|^2 } - \intO{ P'(r) \vr } + \intO{ p(r) }, \] where all integrals on the right-hand side may be explicitly expressed by means of either the energy inequality or the field equations. Accordingly, a relevant candidate in the framework of (dissipative) measure-valued solutions is \[ \begin{split} &\mathcal{E}_{mv} \left( \vr, \vu \ \Big| r, \vc{U} \right)(\tau) = \intO{ \left< \nu_{\tau,x}; \frac{1}{2} s |\vc{v} - \vc{U}|^2 + P(s) - P'(r) (s - r) - P(r) \right> } \\ &= \intO{ \left< \nu_{\tau,x}; \frac{1}{2} s |\vc{v}|^2 + P(s) \right> } - \intO{ \left< \nu_{\tau,x}; s \vc{v} \right> \cdot \vc{U} } + \intO{ \frac{1}{2} \left< \nu_{\tau,x} ; s \right> |\vc{U}|^2 }\\ &- \intO{ \left< \nu_{\tau,x} ; s \right> P'(r) } + \intO{ p(r) }.
\end{split} \] Our goal in the remaining part of this section is to express all integrals on the right-hand side in terms of the energy balance (\ref{dmvEI}) and the field equations (\ref{dmvB1}), (\ref{dmvB2}). \subsection{Density dependent terms} Using the equation of continuity (\ref{dmvB1}) with test function $\frac{1}{2}|\vc{U}|^2$, we get \begin{equation} \label{RE1} \begin{split} &\intO{ \frac{1}{2} \left< \nu_{\tau,x}; s \right> |\vc{U}|^2(\tau, \cdot) } - \intO{ \frac{1}{2} \left< \nu_{0,x}; s \right> |\vc{U}|^2(0, \cdot) } \\ & = \int_0^\tau \intO{ \left[ \left< \nu_{t,x}; s \right> \vc{U} \cdot \partial_t \vc{U} + \left< \nu_{t,x}; s \vc{v} \right> \cdot \vc{U} \cdot \Grad \vc{U} \right] } \ \dt + \int_0^\tau \left< r^C; \frac{1}{2} \Grad |\vc{U}|^2 \right> \ \dt \end{split} \end{equation} provided $\vc{U} \in C^1([0,T] \times \Ov{\Omega}; R^N)$. \color{black} Similarly, testing with $P'(r)$ we can write \begin{equation} \label{RE2} \begin{split} & \intO{ \left< \nu_{\tau,x} ;s \right> P'(r) (\tau, \cdot) } - \intO{ \left< \nu_{0,x} ;s \right> P'(r) (0, \cdot) }\\ &= \int_0^\tau \intO{ \left[ \left< \nu_{t,x}; s \right> P''(r) \partial_t r + P''(r) \left< \nu_{t,x}; s \vc{v} \right> \cdot \Grad r \right] } \ \dt + \int_0^\tau \left< r^C ; \Grad P'(r) \right> \ \dt\\ &= \int_0^\tau \intO{ \left[ \left< \nu_{t,x}; s \right> \frac{p'(r)}{r} \partial_t r + \frac{p'(r)}{r} \left< \nu_{t,x}; s \vc{v} \right> \cdot \Grad r \right] } \ \dt + \int_0^\tau \left< r^C ; \Grad P'(r) \right> \ \dt \end{split} \end{equation} \color{black} provided $r > 0$ and $r \in C^1([0,T] \times \Ov{\Omega})$, and $P$ is twice continuously differentiable in $(0, \infty)$.
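The substitution $P''(r) = p'(r)/r$ performed in the last step of (\ref{RE2}) is worth recording explicitly. The pressure potential $P$ satisfies the standard relation \[ P'(\vr) \vr - P(\vr) = p(\vr) \ \mbox{for}\ \vr > 0; \] differentiating with respect to $\vr$ we obtain \[ P''(\vr) \vr = p'(\vr), \ \mbox{meaning}\ P''(\vr) = \frac{p'(\vr)}{\vr}, \ \mbox{and, consequently,}\ \Grad P'(r) = \frac{p'(r)}{r} \Grad r \] for any positive $r \in C^1([0,T] \times \Ov{\Omega})$.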
\subsection{Momentum dependent terms} Analogously to the preceding part, we use (\ref{dmvB2}) to compute \begin{equation} \label{RE3} \begin{split} &\intO{ \left< \nu_{\tau,x} ; s \vc{v} \right> \cdot \vc{U} (\tau, \cdot) } - \intO{ \left< \nu_{0,x} ; s \vc{v} \right> \cdot \vc{U} (0, \cdot) } \\ &= \int_0^\tau \intO{ \left< \nu_{t,x}; s \vc{v} \right> \cdot \partial_t \vc{U} } \ \dt + \int_0^\tau \int_{\Ov{\Omega}} \left[ \left< \nu_{t,x}; s \vc{v} \otimes \vc{v} \right> : \Grad \vc{U} + \left< \nu_{t,x}; p(s) \right> \Div \vc{U} \right] \dx \ \dt\\ & - \int_0^\tau \intO{ \mathbb{S}(\Grad \vu) : \Grad \vc{U} } \ \dt + \int_0^\tau \left< r^M; \Grad \vc{U} \right> \ \dt \end{split} \end{equation} for any $\vc{U} \in C^1([0,T] \times \Ov{\Omega}; R^N)$, $\vc{U}|_{\partial \Omega} = 0$.
\color{black} \subsection{Relative energy inequality} Summing up the previous discussion, we may deduce a measure-valued analogue of the relative energy inequality: \begin{equation} \label{RE5} \begin{split} \mathcal{E}_{mv} \left( \vr, \vu \ \Big| r , \vc{U} \right) &+ \int_0^\tau \intO{ \mathbb{S}(\Grad \vu) : \left( \Grad \vu - \Grad \vc{U} \right) } \ \dt + \mathcal{D}(\tau) \\ &\leq \intO{ \left< \nu_{0,x}; \frac{1}{2} s |\vc{v} - \vc{U}_0|^2 + P(s) - P'(r_0) (s - r_0) - P(r_0) \right> } \\ & - \int_0^\tau \intO{ \left< \nu_{t,x}; s \vc{v} \right> \cdot \partial_t \vc{U} } \ \dt\\ & - \int_0^\tau \int_{\Ov{\Omega}} \left[ \left< \nu_{t,x}; s \vc{v} \otimes \vc{v} \right> : \Grad \vc{U} + \left< \nu_{t,x}; p(s) \right> \Div \vc{U} \right] \dx \ \dt\\ & + \int_0^\tau \intO{ \left[ \left< \nu_{t,x}; s \right> \vc{U} \cdot \partial_t \vc{U} + \left< \nu_{t,x}; s \vc{v} \right> \cdot \vc{U} \cdot \Grad \vc{U} \right] } \ \dt\\ &+ \int_0^\tau \intO{ \left[ \left< \nu_{t,x} ; \left(1 - \frac{s}{r} \right) \right> p'(r) \partial_t r - \left< \nu_{t,x}; s \vc{v} \right> \cdot \frac{p'(r)}{r} \Grad r \right] } \ \dt\\ &+ \int_0^\tau \left< r^C; \frac{1}{2} \Grad |\vc{U}|^2 - \Grad P'(r) \right> \ \dt - \int_0^\tau \left< r^M ; \Grad \vc{U} \right> \dt. \end{split} \end{equation} \color{black} As already pointed out, the relative energy inequality (\ref{RE5}) holds for any $r \in C^1([0,T] \times \Ov{\Omega})$, $r > 0$, and any $\vc{U} \in C^1([0,T] \times \Ov{\Omega}; R^N)$, $\vc{U}|_{\partial \Omega} = 0$.
Moreover, in accordance with Definition \ref{DD1}, we have \[ \begin{split} &\left| \int_0^\tau \left< r^C; \frac{1}{2} \Grad |\vc{U}|^2 - \Grad P'(r) \right> \ \dt - \int_0^\tau \left< r^M ; \Grad \vc{U} \right> \dt \right| \\ &\leq c \left( \left\| \Grad \vc{U} \right\|_{C([0, T] \times \Ov{\Omega}; R^{N \times N})} + \left\| r \right\|_{C([0,T] \times \Ov{\Omega}) } + \left\| \Grad r \right\|_{C([0, T] \times \Ov{\Omega}; R^{N})} \right) \int_0^\tau (\chi(t)+\xi(t)) \mathcal{D}(t) \ \dt. \end{split} \] Thus, the validity of (\ref{RE5}) can be extended to the following class of test functions by a simple density argument: \color{black} \begin{equation} \label{TEST} \vc{U} , \Grad \vc{U}, r, \Grad r \in C([0,T] \times \Ov{\Omega}),\ \partial_t r, \ \partial_t \vc{U} \in L^1(0,T; C(\Ov{\Omega})),\ r > 0,\ \vc{U}|_{\partial \Omega} = 0. \end{equation} \section{Weak-strong uniqueness} Now, we suppose that the test functions $r$, $\vc{U}$ belong to the class (\ref{TEST}), and, in addition, solve the Navier-Stokes system (\ref{I1}--\ref{I3}). Our goal is to show that the measure-valued solution and the strong one are close in terms of the ``distance'' of the initial data. We proceed in several steps. \subsection{Continuity equation} In addition to the general hypotheses that guarantee (\ref{RE5}), suppose that $r, \vc{U}$ satisfy the equation of continuity \bFormula{RE6} \partial_t r + \Div (r \vc{U} ) = 0.
\eF Accordingly, we may rewrite (\ref{RE5}) as \begin{equation} \label{RE7} \begin{split} \mathcal{E}_{mv} \left( \vr, \vu \ \Big| r , \vc{U} \right) &+ \int_0^\tau \intO{ \mathbb{S}(\Grad \vu) : \left( \Grad \vu - \Grad \vc{U} \right) } \ \dt + \mathcal{D}(\tau) \\ &\leq \intO{ \left< \nu_{0,x}; \frac{1}{2} s |\vc{v} - \vc{U}_0|^2 + P(s) - P'(r_0) (s - r_0) - P(r_0) \right> } \\ & - \int_0^\tau \intO{ \left< \nu_{t,x}; s \vc{v} \right> \cdot \partial_t \vc{U} } \ \dt - \int_0^\tau \int_{\Ov{\Omega}} \left< \nu_{t,x}; s \vc{v} \otimes \vc{v} \right> : \Grad \vc{U} \dx \ \dt\\ & + \int_0^\tau \intO{ \left[ \left< \nu_{t,x}; s \right> \vc{U} \cdot \partial_t \vc{U} + \left< \nu_{t,x}; s \vc{v} \right> \cdot \vc{U} \cdot \Grad \vc{U} \right] } \ \dt\\ & + \int_0^\tau \intO{ \left< \nu_{t,x}; s \vc{U} - s \vc{v} \right> \cdot \frac{p'(r)}{r} \Grad r } \ \dt \\ &- \int_0^\tau \intO{ \left< \nu_{t,x} ; p(s) - p'(r)(s -r) - p(r) \right> \Div \vc{U} } \ \dt\\ &+ c \int_0^\tau (\chi(t)+\xi(t)) \mathcal{D}(t) \dt \end{split} \end{equation} where the constant $c$ depends only on the norms of the test functions specified in (\ref{TEST}). \subsection{Momentum equation} In addition to (\ref{RE6}), suppose that $r, \vc{U}$ also satisfy the momentum balance \[ \partial_t \vc{U} + \vc{U} \cdot \Grad \vc{U} + \frac{1}{r} \Grad p(r) = \frac{1}{r} \Div \mathbb{S} (\Grad \vc{U}). \] Indeed, it is easily seen that this follows from the momentum equation in conjunction with~\eqref{RE6}.
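For the reader's convenience, let us record this elementary reduction: expanding the material derivative in the momentum equation, \[ \partial_t (r \vc{U}) + \Div (r \vc{U} \otimes \vc{U}) = r \Big( \partial_t \vc{U} + \vc{U} \cdot \Grad \vc{U} \Big) + \Big( \partial_t r + \Div (r \vc{U}) \Big) \vc{U}, \] where the second term on the right-hand side vanishes by virtue of (\ref{RE6}); dividing the momentum balance by $r > 0$ then yields the stated form.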
Accordingly, relation (\ref{RE7}) reduces to \begin{equation} \label{RE8} \begin{split} \mathcal{E}_{mv} \left( \vr, \vu \ \Big| r , \vc{U} \right) &+ \int_0^\tau \intO{ \mathbb{S}(\Grad \vu) : \left( \Grad \vu - \Grad \vc{U} \right) } \ \dt + \mathcal{D}(\tau) \\ &\leq \intO{ \left< \nu_{0,x}; \frac{1}{2} s |\vc{v} - \vc{U}_0|^2 + P(s) - P'(r_0) (s - r_0) - P(r_0) \right> } \\ & + \int_0^\tau \intO{ \left< \nu_{t,x}; s \vc{v} - s \vc{U} \right> \cdot \Grad \vc{U} \cdot \vc{U} } \ \dt - \int_0^\tau \int_{\Ov{\Omega}} \left< \nu_{t,x}; s \vc{v} \otimes \vc{v} \right> : \Grad \vc{U} \dx \ \dt\\ & + \int_0^\tau \intO{ \left< \nu_{t,x}; s \vc{v} \right> \cdot \Grad \vc{U}\cdot \vc{U}} \ \dt\\ & + \int_0^\tau \intO{ \left< \nu_{t,x}; s \vc{U} - s \vc{v} \right> \cdot \frac{1}{r} \Div \mathbb{S} (\Grad \vc{U}) } \ \dt \\ &- \int_0^\tau \intO{ \left< \nu_{t,x} ; p(s) - p'(r)(s -r) - p(r) \right> \Div \vc{U} } \ \dt\\ &+ c \int_0^\tau (\chi(t)+\xi(t)) \mathcal{D}(t) \dt \end{split} \end{equation} where, furthermore, \color{black} \[ \begin{split} &\int_0^\tau \intO{ \left< \nu_{t,x}; \left( s \vc{v} - s \vc{U} \right) \right> \cdot \Grad \vc{U} \cdot \vc{U} } \ \dt - \int_0^\tau \int_{{\Omega}} \left< \nu_{t,x} ; s \vc{v} \otimes \vc{v} \right> : \Grad \vc{U} \dx\ \dt\\ &+ \int_0^\tau \intO{ \left< \nu_{t,x}; s \vc{v} \right> \cdot \Grad \vc{U} \cdot \vc{U} } \ \dt\\ &= \int_0^\tau \intO{ \left< \nu_{t,x}; \left( s \vc{v} - s \vc{U} \right) \right> \cdot \Grad \vc{U} \cdot \vc{U} } \ \dt + \int_0^\tau \int_{{\Omega}} \left< \nu_{t,x}; s \vc{v} \cdot \Grad \vc{U}\cdot(\vc{U} - \vc{v} ) \right> \dx\ \dt\\ &= \int_0^\tau \int_{{\Omega}} \left< \nu_{t,x}; s (\vc{v} - \vc{U}) \cdot \Grad \vc{U}\cdot (\vc{U} - \vc{v} ) \right> \dx\ \dt.
\end{split} \] Thus, finally, (\ref{RE8}) can be written as \begin{equation} \label{RE9} \begin{split} \mathcal{E}_{mv} \left( \vr, \vu \ \Big| r , \vc{U} \right) &+ \int_0^\tau \intO{ \mathbb{S}(\Grad \vu -\Grad\vc{U}):(\Grad \vu -\Grad\vc{U}) } \ \dt+ \mathcal{D}(\tau) \\ &\leq \intO{ \left< \nu_{0,x}; \frac{1}{2} s |\vc{v} - \vc{U}_0|^2 + P(s) - P'(r_0) (s - r_0) - P(r_0) \right> } \\ & + \int_0^\tau \int_{{\Omega}} \left< \nu_{t,x}; s (\vc{v} - \vc{U})\cdot \Grad \vc{U} \cdot (\vc{U} - \vc{v} ) \right> \dx\ \dt\\ & - \int_0^\tau \intO{ (\Grad \vu - \Grad \vc{U} ) : \mathbb{S} (\Grad \vc{U}) } \ \dt\\ & + \int_0^\tau \intO{ \left< \nu_{t,x}; s \vc{U} - s \vc{v} \right> \cdot \frac{1}{r} \Div \mathbb{S} (\Grad \vc{U}) } \ \dt \\ &- \int_0^\tau \intO{ \left< \nu_{t,x} ; p(s) - p'(r)(s -r) - p(r) \right> \Div \vc{U} } \ \dt\\ &+ c \int_0^\tau (\chi(t)+\xi(t)) \mathcal{D}(t) \dt. \end{split} \end{equation} \color{black} \subsection{Compatibility} Our last goal is to handle the difference \[ \int_0^\tau \intO{ \left< \nu_{t,x}; s \vc{U} - s \vc{v} \right> \cdot \frac{1}{r} \Div \mathbb{S} (\Grad \vc{U}) } \ \dt - \int_0^\tau \intO{ (\Grad \vu - \Grad \vc{U} ) : \mathbb{S} (\Grad \vc{U}) } \ \dt. \] To this end, we need slightly more regularity than required in (\ref{TEST}), namely \[ \Div \tn{S}(\Grad \vc{U}) \in L^2(0,T; L^\infty({\Omega}; R^{N})) \ \mbox{or, equivalently,}\ \partial_t \vc{U} \in L^2(0,T; L^\infty ({\Omega}; R^N)).
\] \color{black} Now, since \[ \vu \in L^2(0,T; W^{1,2}_0(\Omega;R^N)) \ \mbox{and}\ \vc{U}|_{\partial \Omega} = 0, \] we may integrate by parts to obtain \[ \begin{split} &\int_0^\tau \intO{ \left< \nu_{t,x}; s \vc{U} - s \vc{v} \right> \cdot \frac{1}{r} \Div \mathbb{S} (\Grad \vc{U}) } \ \dt - \int_0^\tau \intO{ (\Grad \vu - \Grad \vc{U} ) : \mathbb{S} (\Grad \vc{U}) } \ \dt\\ & = \int_0^\tau \intO{ \left< \nu_{t,x}; \left( s \vc{U} - s \vc{v} + r \vc{v} - r \vc{U} \right) \right> \cdot \frac{1}{r} \Div \mathbb{S} (\Grad \vc{U}) } \ \dt\\ & = \int_0^\tau \intO{ \left< \nu_{t,x}; (s - r) (\vc{U} - \vc{v} ) \right> \cdot \frac{\Div \mathbb{S} (\Grad \vc{U})}{r} } \ \dt. \end{split} \] \color{black} Now, we write \[ \begin{split} &\left< \nu_{t,x}; (s - r) (\vc{U} - \vc{v} ) \right> \\ &= \left< \nu_{t,x} ; \psi (s) (s - r) (\vc{U} - \vc{v} ) \right> + \left< \nu_{t,x}; (1 - \psi (s)) (s - r) (\vc{U} - \vc{v} ) \right>, \end{split} \] where \[ \psi \in C^\infty_c(0, \infty),\ 0 \leq \psi \leq 1, \ \psi(s) = 1 \ \mbox{for} \ s \in \left[ \inf r, \sup r \right]. \] Consequently, we get \[ \left| \left< \nu_{t,x}; \psi (s) (s - r) (\vc{U} - \vc{v} ) \right> \right| \leq \frac{1}{2}\left< \nu_{t,x}; \frac{\psi (s)}{\sqrt{s}} (s - r)^2 \right> + \frac{1}{2}\left< \nu_{t,x}; \frac{\psi (s)}{\sqrt{s}} {s} |\vc{U} - \vc{v} |^2 \right>, \] where, as $\psi$ is compactly supported in $(0, \infty)$, both terms can be controlled in (\ref{RE9}) by $\mathcal{E}_{mv}$, as is easily verified. Next, we write \[ \begin{split} &\left< \nu_{t,x}; (1 - \psi (s)) (s - r) (\vc{U} - \vc{v} ) \right> \\ & = \left< \nu_{t,x}; \omega_1 (s) (s - r) (\vc{U} - \vc{v} ) \right> + \left< \nu_{t,x}; \omega_2 (s) (s - r) (\vc{U} - \vc{v} ) \right>, \end{split} \] where \[ {\rm supp}[\omega_1] \subset [0, \inf r), \ {\rm supp}[\omega_2] \subset (\sup r, \infty), \ \omega_1+\omega_2=1-\psi.
\] Accordingly, \[ \left| \left< \nu_{t,x}; \omega_1 (s) (s - r) (\vc{U} - \vc{v} ) \right> \right| \leq c(\delta) \left< \nu_{t,x} ; \omega_1^2(s) (s -r)^2 \right> + \delta \left< \nu_{t,x}; |\vc{U} - \vc{v}|^2 \right> \] where the former term on the right-hand side is controlled by $\mathcal{E}_{mv}$ while the latter can be absorbed by the left-hand side of (\ref{RE9}) by virtue of the Poincar\' e inequality stipulated in (\ref{KoPo}) provided $\delta > 0$ has been chosen small enough. Indeed, \[ \begin{split} \left< \nu_{t,x}; |\vc{U} - \vc{v} |^2 \right> &= |\vc{U}|^2 - 2 \vu \cdot \vc{U} + |\vu|^2 + \left< \nu_{t,x} ; |\vc{v}|^2 - |\vu|^2 \right>\\ & = |\vc{u} - \vc{U} |^2 + \left< \nu_{t,x} ; |\vc{v} - \vu|^2 \right>; \end{split} \] whence, by virtue of (\ref{KoPo}) and the standard Poincar\' e-Korn inequality, \[ \int_0^\tau \intO{ \left< \nu_{t,x}; |\vc{U} - \vc{v} |^2 \right> } \ \dt \leq c_P \left( \int_0^\tau \intO{ \left( \mathbb{S} (\Grad \vu - \Grad \vc{U}) \right) : \left(\Grad \vu - \Grad \vc{U} \right) } \ \dt + \mathcal{D}(\tau) \right). \] \color{black} Finally, \[ \left| \left< \nu_{t,x}; \omega_2 (s) (s - r) (\vc{U} - \vc{v} ) \right> \right| \leq c \ \left< \nu_{t,x}; \omega_2 (s) \left( s + s |\vc{U} - \vc{v} |^2 \right) \right>, \] where both integrals are controlled by $\mathcal{E}_{mv}$. Summing up the previous discussion, we deduce from (\ref{RE9}) that \[ \begin{split} &\mathcal{E}_{mv} \left( \vr, \vu \ \Big| r , \vc{U} \right) + \frac{1}{2} \mathcal{D}(\tau) \\ &\leq \intO{ \left< \nu_{0,x}; \frac{1}{2} s |\vc{v} - \vc{U}(0, \cdot) |^2 + P(s) - P'(r(0,\cdot)) (s - r(0,\cdot)) - P(r(0,\cdot)) \right> }\\ &+ c \left( \int_0^\tau \mathcal{E}_{mv} \left( \vr, \vu \ \Big| r , \vc{U} \right)\ \dt + \int_0^\tau (\chi(t)+\xi(t))\mathcal{D}(t) \ \dt \right).
\end{split} \] Thus applying Gronwall's lemma, we conclude that \begin{equation} \label{RE10} \begin{split} &\mathcal{E}_{mv} \left( \vr, \vu \ \Big| r , \vc{U} \right) (\tau) +\mathcal{D}(\tau) \\ &\leq c(T) \intO{ \left< \nu_{0,x}; \frac{1}{2} s |\vc{v} - \vc{U}(0, \cdot) |^2 + P(s) - P'(r(0,\cdot)) (s - r(0,\cdot)) - P(r(0,\cdot)) \right> } \end{split} \end{equation} for a.a. $\tau \in [0,T]$. \color{black} We have shown the main result of the present paper. \begin{Theorem} \label{TT1} Let $\Omega \subset R^N$, $N=2,3$ be a bounded smooth domain. Suppose the pressure $p$ satisfies (\ref{BB6}). Let $\{ \nu_{t,x}, \mathcal{D} \}$ be a dissipative measure-valued solution to the barotropic Navier-Stokes system (\ref{I1}--\ref{I3}) in $(0,T) \times \Omega$, with the initial state represented by $\nu_0$, in the sense specified in Definition \ref{DD1}. Let $[r, \vc{U}]$ be a strong solution of (\ref{I1}--\ref{I3}) in $(0,T) \times \Omega$ belonging to the class \[ r, \ \Grad r, \ \vc{U},\ \Grad \vc{U} \in C([0,T] \times \Ov{\Omega}),\ \partial_t \vc{U} \in L^2(0,T; C(\overline{\Omega};R^N)),\ r > 0,\ \vc{U}|_{\partial \Omega} = 0. \] Then there is a constant $\Lambda = \Lambda(T)$, depending only on the norms of $r$, $r^{-1}$, $\vc{U}$, $\chi$, and $\xi$ in the aforementioned spaces, such that \[ \begin{split} & \intO{ \left< \nu_{\tau,x}; \frac{1}{2} s |\vc{v} - \vc{U}|^2 + P(s) - P'(r) (s - r) - P(r) \right> } \\ &+ \int_0^\tau \intO{ | \Grad \vu - \Grad \vc{U} |^2 } \ \dt + \mathcal{D}(\tau) \\ &\leq \Lambda(T) \intO{ \left< \nu_{0,x}; \frac{1}{2} s |\vc{v} - \vc{U}(0, \cdot) |^2 + P(s) - P'(r(0,\cdot)) (s - r(0,\cdot)) - P(r(0,\cdot)) \right> } \end{split} \] for a.a. $\tau \in (0,T)$. In particular, if the initial states coincide, meaning \[ \nu_{0,x} = \delta_{[ r(0,x), \vc{U}(0,x) ]} \ \mbox{for a.a.} \ x \in \Omega \] then $\mathcal{D} = 0$, and \[ \nu_{\tau,x} = \delta_{[ r(\tau,x), \vc{U}(\tau,x) ]} \ \mbox{for a.a.}\ \tau \in (0,T),\ x \in \Omega. 
\] \end{Theorem} \color{black} \section{Examples of problems generating measure-valued solutions} \label{E} Besides the model of Brenner discussed in Section \ref{I}, there is a vast class of problems -- various approximations of the barotropic Navier-Stokes system (\ref{I1}--\ref{I3}) -- generating (dissipative) measure-valued solutions. Below, we mention three examples among many others. \subsection{Artificial pressure approximation} The theory of weak solutions proposed by Lions \cite{LI4} and later developed in \cite{FNP} does not cover certain physically interesting cases. For the sake of simplicity, consider the pressure $p$ in its iconic form \[ p(\vr) = a \vr^\gamma,\ a > 0,\ \gamma \geq 1 \] obviously satisfying (\ref{BB6}). The existence of weak solutions is known in the following cases: \[ N = 2,\ \gamma \geq 1\ \mbox{and }\ N = 3,\ \gamma > \frac{3}{2}. \] Note that the critical case $\gamma = 1$ for $N = 2$ was solved only recently by Plotnikov and Weigant \cite{PloWei}. This motivates the following approximate problem: \begin{eqnarray} \label{AI1} \partial_t \vr + \Div (\vr \vu) &=& 0, \\ \label{AI2} \partial_t (\vr \vu) + \Div (\vr \vu \otimes \vu) + \Grad p(\vr) + \delta \Grad \vr^\Gamma &=& \Div \mathbb{S} (\Grad \vu), \\ \label{AI3} \vu|_{\partial \Omega} &=& 0, \end{eqnarray} where $\delta > 0$ is a small parameter and $\Gamma > 1$ is large enough to ensure the existence of weak solutions. Repeating the arguments applied in Section \ref{BM} to Brenner's model, it is straightforward to check that a family of weak solutions $\{ \vr_\delta, \vu_{\delta} \}_{\delta > 0}$, satisfying the energy inequality, generates a dissipative measure-valued solution in the sense of Definition \ref{DD1}. Indeed, it is enough to observe that \[ p(\vr) + \delta \vr^\Gamma \leq c \left( P(\vr) + \frac{\delta}{\Gamma - 1} \vr^{\Gamma} \right) \ \mbox{for all}\ \vr \geq 1, \] where the constant is uniform with respect to $\delta$.
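Note that $\frac{\delta}{\Gamma - 1} \vr^\Gamma$ is precisely the pressure potential associated with the artificial pressure component $\delta \vr^\Gamma$; indeed, setting $P_\delta (\vr) = \frac{\delta}{\Gamma - 1} \vr^\Gamma$, we check directly that \[ P'_\delta(\vr) \vr - P_\delta (\vr) = \frac{\delta \Gamma}{\Gamma - 1} \vr^{\Gamma} - \frac{\delta}{\Gamma - 1} \vr^\Gamma = \delta \vr^\Gamma; \] accordingly, the energy inequality associated with (\ref{AI1}--\ref{AI3}) controls $P(\vr) + \frac{\delta}{\Gamma - 1} \vr^\Gamma$, uniformly for $\delta > 0$.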
We conclude that the weak solutions of the problem with vanishing artificial pressure generate a dissipative measure-valued solution. In particular, as a consequence of Theorem \ref{TT1}, they converge to the (unique) strong solution provided it exists and the initial data are conveniently adjusted. We remark that strong solutions to the barotropic Navier-Stokes system exist at least locally in time provided \begin{itemize} \item the pressure is a sufficiently smooth function of $\vr$, \item the domain $\Omega$ has a regular boundary, and \item the initial data are smooth enough and satisfy the necessary compatibility conditions as the case may be, \end{itemize} see e.g.\ Cho, Choe and Kim \cite{ChoChoeKim}, Valli and Zaj{\c{a}}czkowski \cite{VAZA}. \subsection{Multipolar fluids} The theory of multipolar fluids was developed by Ne\v cas and \v Silhav\' y \cite{NESI} in the hope of providing an alternative approach to regularity for compressible fluids. The problems may take various forms depending on the shape of the viscous stress \[ \mathbb{T}(\vu, \Grad \vu, \ \Grad^2 \vu, \dots) = \mathbb{S}( \Grad \vu) + \delta \sum_{j = 1}^{k-1} \left( (-1)^j \mu_j \Delta^j (\Grad \vu + \Grad^t \vu) + \lambda_j \Delta^j \Div \vu \ \mathbb{I} \right) + \ \mbox{non-linear terms.} \] The resulting system has a nice variational structure and, for $k$ large enough, admits global-in-time smooth solutions, see Ne\v cas, Novotn\' y and \v Silhav\' y \cite{NeNoSil}. It is natural to conjecture that the (smooth) solutions of the multipolar system will converge to their weak counterparts as $\delta \to 0$, at least in the cases where the pressure complies with the requirements of Lions' theory. However, this is, to the best of our knowledge, an open problem.
Instead, such a process may and does generate (dissipative) measure-valued solutions, at least for a certain class of boundary conditions studied in \cite{NeNoSil} that may be schematically written as \[ \ \mbox{no-slip}\ \vu|_{\partial \Omega} = 0 \ +\ \mbox{natural boundary conditions of Neumann type.} \] Then the proof is basically the same as for Brenner's model. \subsection{Numerical schemes} Theorem \ref{TT1} may be useful in the study of convergence of certain dissipative numerical schemes for the barotropic Navier-Stokes system, meaning schemes preserving some form of the energy inequality. Such a scheme was proposed by Karlsen and Karper \cite{KarKar1}, and a rigorous proof of convergence to weak solutions was finally established by Karper \cite{Karp}. Karper's result applies to a certain class of pressures, notably \[ p(\vr) = a \vr^\gamma \ \mbox{for}\ \gamma > 3 ,\ N =3. \] On the other hand, the consistency estimates cover the larger range $\gamma > 3/2$, see Gallou{\"e}t et al.\ \cite{GalHerMalNov}. It can be shown that the consistency estimates imply that the family of numerical solutions generates a (dissipative) measure-valued solution. In accordance with the conclusion of Theorem \ref{TT1}, the numerical solutions will converge to a classical exact solution as soon as the latter exists. In fact, this has been shown in \cite{FeHoMaNo} by means of a discrete analogue of the relative energy inequality. \section{Measure-valued solutions with bounded density field} We conclude our discussion with a simple example indicating that measure-valued solutions may indeed be an artifact of the theory as long as they emanate from sufficiently regular initial data.
The following result is a direct consequence of Theorem \ref{TT1} and a regularity criterion proved by Sun, Wang, and Zhang \cite{SuWaZh1} stating that solutions of the barotropic Navier-Stokes system starting from smooth initial data remain smooth as long as their density component remains bounded. Since Theorem \ref{TT1} requires slightly better regularity than \cite{SuWaZh1}, we restrict ourselves to very regular initial data for which the necessary local existence result was proved in \cite[Proposition 2.1]{FeHoMaNo}. \begin{Theorem} \label{TT2} In addition to the hypotheses of Theorem \ref{TT1}, suppose that $\mu > 0$, $\eta = 0$, and $\{\nu_{t,x},\mathcal{D}\}$ is a dissipative measure-valued solution to the barotropic Navier-Stokes system in $(0,T) \times \Omega$ emanating from smooth data, specifically, \[ \nu_{0,x} = \delta_{[r_0(x), \vc{U}_0(x)]} \ \mbox{for a.a.}\ x \in \Omega, \] where \[ r_0 \in C^3(\Ov{\Omega}),\ r_0 > 0, \ \vc{U}_0 \in C^3(\Ov{\Omega}),\ \vc{U}_0|_{\partial \Omega} = 0, \ \Grad p(r_0) = \Div \mathbb{S} (\Grad \vc{U}_0). \] Suppose that the measure-valued solution $\nu_{t,x}$ has a bounded density component, meaning the support of the measure $\nu_{t,x}$ is confined to a strip \[ 0 \leq s \leq \Ov{\vr}\ \mbox{for a.a.}\ (t,x) \in (0,T) \times \Omega. \] Then $\mathcal{D}=0$ and $\nu_{t,x}= \delta_{[ r(\tau,x), \vc{U}(\tau,x) ]} \ \mbox{for a.a.}\ \tau \in (0,T),\ x \in \Omega$, where $[r, \vc{U}]$ is a classical smooth solution of the barotropic Navier-Stokes system in $(0,T) \times \Omega$. \end{Theorem} \begin{Remark} Note that $\Grad p(r_0) = \Div \mathbb{S} (\Grad \vc{U}_0)$ is the standard compatibility condition associated to~\eqref{I3}. \end{Remark} \color{black} \bProof As stated in \cite[Proposition 2.1]{FeHoMaNo}, the compressible Navier-Stokes system (\ref{I1}--\ref{I3}) endowed with the initial data $[r_0, \vc{U}_0]$ admits a local-in-time classical solution fitting the regularity class required in Theorem \ref{TT1}.
Thus the measure-valued solution coincides with the classical one on its life span. However, as the density component is bounded, the result of Sun, Wang, and Zhang \cite{SuWaZh1} asserts that the classical solution can be extended up to the time $T$. \qed \begin{Remark} \label{rrem} The assumption that the bulk viscosity $\eta$ vanishes is a technical hypothesis used in~\cite{SuWaZh1}. \end{Remark}
\section{Introduction} \label{intro} When a collection of atoms, for example $^6$Li, is cooled to degeneracy in an equal mixture of two hyperfine ground states, labeled $|\uparrow\rangle$ and $|\downarrow\rangle$, Feshbach resonances make it possible to tune the interactions by varying an external magnetic field. The system exhibits an interesting phase diagram, including a highly challenging strongly interacting regime, the BEC-BCS crossover. Using a highly anisotropic trapping potential, it is also possible to explore two-dimensional (2D) geometries, which are intriguing and important in connection with, for example, high temperature superconductivity \cite{RevModPhys.78.17}, Dirac fermions in graphene \cite{RevModPhys.81.109}, topological superconductors \cite{RevModPhys.83.1057}, and nuclear ``pasta'' phases \cite{PhysRevLett.90.161101} in neutron stars. In three dimensions (3D), there exists a unitarity point where the two-body scattering length diverges and the gas properties become universal in the dilute limit. The unitarity point resides in the heart of the so-called crossover regime, and provides a natural boundary between the strongly interacting BEC regime (positive scattering length and molecular bound state) and the weakly interacting BCS regime (negative scattering length and no bound state) \cite{RevModPhysStringari}. In 2D a two-body bound state always exists, and the scattering length, $a$, is always positive. The relevant parameter is $\log(k_F a)$, with $k_F$ being the Fermi momentum, which is controlled by the particle density of the dilute gas. In the weakly interacting BCS regime at $\log(k_F a) > 1$, the attraction between particles with opposite ``spin'' induces a pairing similar to the one observed in ordinary superconductors.
As the interaction strength is increased (and correspondingly $\log(k_F a)$ decreases), a crossover is observed leading to a BEC regime where the Cooper pairs are so tightly bound that the system behaves as a gas of bosonic molecules. While both the BCS and the BEC regimes are well understood on the grounds of the celebrated BCS theory, the crossover regime provides a challenging example of a strongly interacting quantum many-body system \cite{RevModPhysZwerger,RevModPhysStringari}. An array of ground-state properties in 2D has already been studied, both theoretically and experimentally \cite{PhysRevLett.112.045301,PhysRevLett.114.230401,PhysRevLett.116.045303,nature_Feld,PhysRevA.94.031606,Bauer,PhysRevA.92.023620,Klawunn20162650,PhysRevA.93.023602,Giorgini}, although much less is available for 2D than for 3D systems. Exact calculations have recently been achieved using auxiliary-field quantum Monte Carlo (AFQMC) methodologies, both for ground-state properties \cite{Hao-2DFG} and excited states \cite{Ettore-2DFG}. The main focus of this paper is the theoretical calculation of response functions of the 2D Fermi gas. The computation of these quantities is challenging, and significantly more difficult than the computation of static properties, since the full spectrum of the microscopic Hamiltonian is relevant for the response of the system to external perturbations. The importance of computing such properties from first principles can hardly be overstated: they allow for a direct comparison with experiments, and they provide insight into the behavior of the many-body physical system. In particular, the density and spin structure factors, which can be measured for a cold gas in two-photon scattering experiments \cite{PhysRevLett.109.050403}, will be the main topic of this paper. We discuss two theoretical approaches to compute response functions: the dynamical BCS theory and AFQMC methods.
For calculations within dynamical BCS theory \cite{PhysRev.139.A197,Nozieres,PhysRevA.74.042717}, we follow the approach of Ref.~\cite{PhysRevA.74.042717}, which studied the 3D Fermi gas, and obtain systematic results as a function of the interaction strength in the 2D gas. We then describe our recently proposed approach for computing imaginary-time correlation functions with AFQMC \cite{ettoreGAP} and discuss how we apply it to compute the density and spin response functions. The AFQMC approach yields exact numerical results for imaginary-time correlation functions of these strongly interacting systems, which can provide useful benchmarks for dynamical BCS and other theoretical approaches \cite{1367-2630-18-11-113044}. We will in particular use such exact estimates to assess the accuracy of dynamical BCS predictions. The rest of the paper is organized as follows. In Sec.~\ref{hcgr}, we introduce the basic notation, the Hamiltonian for the system, and its regularization. In Sec.~\ref{dBCS} we first give a brief introduction to the dynamical BCS theory within the framework of linear response theory and then show the results for the dynamical structure factors. In Sec.~\ref{AFQMC} we introduce the auxiliary-field quantum Monte Carlo (AFQMC) methodology and discuss the details of the unbiased calculation of two-body correlation functions in imaginary time. In Sec.~\ref{COMP}, we compare the results from dynamical BCS theory and AFQMC. We conclude in Sec.~\ref{CONC}. \section{Hamiltonian for cold atoms and regularization} \label{hcgr} We model the system as a two-component ($\uparrow$ and $\downarrow$) Fermi gas interacting through a zero-range attractive interaction acting only between particles with opposite ``spins'': $v_{\uparrow\downarrow}(\vec{r}_1, \vec{r}_2) = -g \delta( \vec{r}_{1} -\vec{r}_{2})$.
The basic Hamiltonian is thus: \begin{equation} \label{ham_r} \hat{H} = \int d\vec{r} \sum_{\sigma} \hat{\psi}_{\sigma}^{\dagger}(\vec{r}) \left( - \frac{\hbar^2 \nabla^2}{2m} - \mu \right) \hat{\psi}_{\sigma}^{}(\vec{r}) - g \int d\vec{r}\, \hat{\psi}_{\uparrow}^{\dagger}(\vec{r}) \hat{\psi}_{\downarrow}^{\dagger}(\vec{r}) \hat{\psi}_{\downarrow}^{}(\vec{r}) \hat{\psi}_{\uparrow}^{}(\vec{r})\,. \end{equation} Introducing, as usual, a supercell $\Omega = [-L/2,L/2]^2$ with volume $V = L^2$, we can write this Hamiltonian in momentum space as: \begin{equation} \label{ham_mom} \begin{split} & \hat{H} = \sum_{\vec{k},\sigma= \uparrow,\downarrow} \left( \frac{\hbar^2 |\vec{k}|^2}{2m} - \mu \right) \hat{c}^{\dagger}_{\vec{k}, \sigma} \, \hat{c}^{}_{\vec{k}, \sigma} \\ & - \frac{g}{V} \sum_{\vec{k}, \vec{k}',\vec{\lambda}} \, \hat{c}^{\dagger}_{\vec{k} + \vec{\lambda}/2, \uparrow} \, \hat{c}^{\dagger}_{-\vec{k}+ \vec{\lambda}/2, \downarrow} \hat{c}^{}_{-\vec{k}'+ \vec{\lambda}/2, \downarrow} \, \hat{c}^{}_{\vec{k}' + \vec{\lambda}/2, \uparrow} \end{split} \end{equation} where, if we choose periodic boundary conditions, the momenta are discretized as $\vec{k} = \frac{2 \pi}{L} \vec{n}$, $\vec{n} \in \mathbb{Z}^2$. Due to the singular nature of the interacting potential, which leads to divergences in summations over momentum space, some further regularization is needed. In this paper we will use a lattice regularization, which is particularly useful for QMC. We introduce a cutoff: \begin{equation} \sum_{\vec{k}} \longrightarrow \sum_{\vec{k} \in \mathcal{D} } \end{equation} where $\mathcal{D} = [-\pi/b,\pi/b) \times [-\pi/b,\pi/b)$ is the Brillouin zone of a finite square lattice $\mathcal{L} = (b \mathbb{Z})^2 \cap \Omega$, containing $\mathcal{N}_s = L/b \times L/b$ sites, and $b$ is the lattice parameter. 
Consistently, all functions in real space are defined on $\mathcal{L}$, and the Hamiltonian is mapped onto a lattice Hamiltonian $\hat{H}_{\mathcal{L}} = \hat{T} + \hat{V} - \mu \hat{N}$, which we conveniently write as follows: \begin{equation} \label{ham_latt} \hat{H}_{\mathcal{L}} = \sum_{\vec{k} \in \mathcal{D},\sigma= \uparrow,\downarrow} \xi(\vec{k}) \, \hat{c}^{\dagger}_{\vec{k}, \sigma} \, \hat{c}^{}_{\vec{k}, \sigma} - g_{\mathcal{L}} \sum_{i \in \mathcal{L}} \, \hat{n}_{i,\uparrow} \hat{n}_{i,\downarrow} \end{equation} In the interaction part the label $i$ runs over the sites of the lattice and $\hat{n}_{i,\sigma}$ denotes the spin-resolved particle density at site $i$.
The dispersion relation can be either of Hubbard type, $ \xi(\vec{k}) = t ( 4 - 2 \cos (k_x b) - 2 \cos (k_y b)) - \mu $, or quadratic, $ \xi(\vec{k}) = t ((k_x b)^2 + (k_y b)^2) - \mu$, with the hopping constant given by $t = \hbar^2 / 2 m b^2 $. Other forms of the dispersion can be used to produce the desired two-particle scattering properties, as long as the value of the on-site interaction strength, $g_{\mathcal{L}}$, is tuned accordingly \cite{PhysRevA.84.061602} so that they converge to the same continuum limit when $b \to 0$. It can be shown that the value for 2D is \cite{PhysRevA.86.013626}: \begin{equation} \label{uofeta} \frac{g_{\mathcal{L}}}{t} = \frac{4\pi}{\ln(k_F a) - \ln(\mathcal{C} \sqrt{n})}\,, \end{equation} where $n=N/\mathcal{N}_s$ is the particle density on the lattice, $k_F = \frac{\sqrt{2 \pi n}}{b}$ is the Fermi momentum, and $\ln(k_F a) $ is the interaction strength, containing the scattering length $a$, defined as the position of the first node in the zero-energy $s$-wave solution of the two-body problem. Finally, $\mathcal{C}$ is a constant whose precise value depends on the choice of the dispersion relation (for example $\mathcal{C} = 0.80261$ for the quadratic dispersion). To summarize, in order to study the basic cold-gas Hamiltonian \eqref{ham_r}, we first introduce, as usual, a supercell of finite volume $V$ and consider the Hamiltonian \eqref{ham_mom}; in order to avoid divergences, the use of the lattice Hamiltonian \eqref{ham_latt} is necessary, and the properties have to be extrapolated to the continuum limit $b \to 0$. Finally, extrapolation to the thermodynamic limit $N\to \infty$ yields the results for the physical system. We will refer to the Hamiltonian \eqref{ham_mom} when writing equations, but we will implicitly assume that the lattice regularization is used and the results are extrapolated.
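As an illustration of how Eq.~\eqref{uofeta} is used in practice, the following sketch (ours, not part of the published calculations) evaluates the on-site coupling for a given interaction strength and lattice filling, taking the value $\mathcal{C} = 0.80261$ quoted above for the quadratic dispersion:

```python
import numpy as np

def lattice_coupling(log_kF_a, n, C=0.80261):
    """g_L/t from Eq. (uofeta): interaction strength log(k_F a), filling n = N/N_s.

    C = 0.80261 is the constant quoted in the text for the quadratic dispersion.
    """
    return 4.0 * np.pi / (log_kF_a - np.log(C * np.sqrt(n)))

# In the dilute limit n -> 0 the denominator grows, so the bare on-site
# coupling needed to reproduce a fixed log(k_F a) shrinks toward zero.
g_dense, g_dilute = lattice_coupling(0.0, 1e-2), lattice_coupling(0.0, 1e-4)
```

This makes explicit that the same physical interaction strength $\ln(k_F a)$ corresponds to different bare couplings at different lattice fillings, which is precisely the tuning required for the continuum extrapolation.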
\section{Dynamical BCS theory} \label{dBCS} Our focus here is on response functions, describing the physical response of the system to external perturbations. We will now briefly sketch the general linear response framework, which will allow us to introduce the dynamical BCS approximation, as well as to emphasize the connection with dynamical structure factors, which can be estimated from AFQMC calculations discussed in the next section. \subsection{Linear response framework and dynamical BCS approximation} Let us assume the system is in the many-body ground state $|\Psi_0\rangle$ of \eqref{ham_mom}. We switch on a periodic external potential $U(\vec{r},t) = \frac{1}{V} \delta U_{}(\vec{q},\omega) e^{i (\vec{q} \cdot \vec{r} - \omega t)} + c.c.$, with a well defined momentum $\vec{q}$ and frequency $\omega$, which couples to either the density $n$ or the spin-density $S_z$ of the system to give rise to a coupling of the form: \begin{equation} \label{perturbation} \hat{U}_{n, S_{z}}(t) = \frac{1}{V} \delta U_{}(\vec{q},\omega) e^{- i \omega t} \sum_{\vec{k}} \, \sum_{\sigma} (\pm 1)^{\sigma} \, \hat{c}^{\dagger}_{\vec{k} - \vec{q}/2, \sigma} \, \hat{c}^{}_{\vec{k} + \vec{q}/2, \sigma} + h. c.\,, \end{equation} where the $+$ sign is for the density, while the $-$ sign is for the spin density. The system, at first order in the strength of the perturbation, will respond by generating a density (spin density) modulation $\delta n(\vec{r},t) = \frac{1}{V} \delta n(\vec{q},\omega) e^{i (\vec{q} \cdot \vec{r} - \omega t)} + c.c $ (and correspondingly $\delta S_z(\vec{r},t)$ for spin density) with: \begin{equation} \delta n(\vec{q},\omega) = \chi_{nn}(\vec{q},\omega) \, \delta U(\vec{q},\omega), \quad \delta S_z(\vec{q},\omega) = \chi_{S_zS_z}(\vec{q},\omega) \, \delta U(\vec{q},\omega) \end{equation} where $\chi_{nn}(\vec{q},\omega)$ and $\chi_{S_zS_z}(\vec{q},\omega)$ are the density and spin-density response functions of the system.
These functions are related to the density and spin-density structure factors via the celebrated fluctuation-dissipation theorem which, at zero temperature, reads: \begin{equation} \Im\left( \chi_{nn}(\vec{q},\omega + i 0^{+}) \right) = -\pi n\, S(\vec{q},\omega), \quad \Im\left( \chi_{S_zS_z}(\vec{q},\omega + i 0^{+}) \right) = -\pi n\, S_s(\vec{q},\omega)\,. \end{equation} For cold gases, the dynamical structure factors can be directly measured with two-photon Bragg spectroscopy. Essentially, dynamical BCS theory attempts to replace the time-dependent Hamiltonian $\hat{H} + \hat{U}(t)$ with a time-dependent effective Hamiltonian $\hat{H}_{BCS}(t)$ built self-consistently \cite{PhysRev.139.A197,Nozieres,PhysRevA.74.042717}. Below we describe the theory briefly, after a short section reviewing a few aspects of equilibrium BCS theory. \subsection{Nambu formalism} We recall that the ground-state BCS theory for the homogeneous Fermi gas relies on the following approximation for the interaction term in the Hamiltonian: \begin{equation} \hat{V} \simeq \hat{V}_{BCS} = - \frac{V |\Delta|^2}{g} + \sum_{\vec{k}} \Delta^{\star} \, \hat{c}^{}_{-\vec{k}, \downarrow} \hat{c}^{}_{\vec{k}, \uparrow} + \Delta \, \hat{c}^{\dagger}_{\vec{k}, \uparrow} \hat{c}^{\dagger}_{-\vec{k}, \downarrow}\,, \end{equation} where it is assumed that only singlet zero-momentum Cooper pairs are formed.
The order parameter $\Delta$ is determined self-consistently via the gap equation: \begin{equation} \Delta = -\frac{g}{V} \sum_{\vec{k}} \langle \hat{c}^{}_{-\vec{k}, \downarrow} \hat{c}^{}_{\vec{k}, \uparrow} \rangle\,, \end{equation} where the brackets denote the average over the ground state of the one-body Hamiltonian $\hat{H}_{BCS} = \hat{T} + \hat{V}_{BCS}$. It is useful to introduce the Nambu spinor: \begin{equation} \hat{\Psi}^{}(\vec{k}) = \left( \begin{array}{c} \hat{c}^{}_{\vec{k}, \uparrow} \\ \hat{c}^{\dagger}_{-\vec{k}, \downarrow} \end{array} \right), \quad \hat{\Psi}^{\dagger}(\vec{k}) = \left( \hat{c}^{\dagger}_{\vec{k}, \uparrow}, \hat{c}^{}_{-\vec{k}, \downarrow} \right)\,, \end{equation} which allows us to express: \begin{equation} \hat{H}_{BCS} = \sum_{\vec{k}} \, \hat{\Psi}^{\dagger}(\vec{k}) \left( \begin{array}{cc} \xi(k) & \Delta^{\star} \\ \Delta & -\xi(k) \end{array} \right)^{T} \hat{\Psi}^{}(\vec{k}) - \frac{V |\Delta|^2}{g} + \sum_{\vec{k}} \xi(k)\,. \end{equation} We will neglect the constant from now on. Moreover, we will assume that $\Delta$ is real, and so we will drop the complex conjugation. Note that $\Delta < 0$ in our notation.
The mean-field Hamiltonian can be straightforwardly diagonalized through a Bogoliubov transformation into quasi-particle creation and annihilation operators: \begin{equation} \hat{\Psi}^{}(\vec{k}) = \mathcal{W}_{\vec{k}} \, \hat{\Phi}^{}(\vec{k}), \quad \hat{\Phi}^{}(\vec{k}) = \left( \begin{array}{c} \hat{\alpha}^{}_{\vec{k}} \\ \hat{\beta}^{\dagger}_{-\vec{k}} \end{array} \right) \end{equation} The transformation matrix can be written in the simple form: \begin{equation} \mathcal{W}_{\vec{k}} = \left( \begin{array}{cc} u_{k} & v_{k} \\ -v_{k} & u_{k} \end{array} \right), \quad u_k = \sqrt{\frac{1}{2}\left( 1 + \frac{\xi(k)}{E(k)}\right)}, \quad v_k = \sqrt{\frac{1}{2}\left( 1 - \frac{\xi(k)}{E(k)}\right)}\,, \end{equation} where the quasi-particle dispersion is given by: \begin{equation} E(k) = \sqrt{\xi(k)^2 + \Delta^2}\,. \end{equation} That is, \begin{equation} \mathcal{W}^{\dagger}_{\vec{k}} \, \left( e^{(0)}(k) \right)^T \, \mathcal{W}_{\vec{k}} = \left( \begin{array}{cc} E(k) & 0 \\ 0 & -E(k) \end{array} \right), \quad {\rm with}\ e^{(0)}(k) \equiv \left( \begin{array}{cc} \xi(k) & \Delta \\ \Delta & -\xi(k) \end{array} \right)\,. \end{equation} \subsection{Time-dependent formalism} Suppose now we switch on a periodic time-dependent external field with wave-vector $\vec{q}$ and frequency $\omega$, as defined in \eqref{perturbation}, coupled to the particle or spin density of the system. We will focus on the particle density; the formalism for the spin density case is similar, and simpler.
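Before turning to the time-dependent problem, the equilibrium diagonalization can be checked numerically for a single momentum mode (a sketch of ours with illustrative parameter values; recall $\Delta < 0$ in our convention, and note that $e^{(0)}$ is real symmetric, so the transpose is immaterial):

```python
import numpy as np

# Illustrative values for one momentum mode; Delta < 0 per the text's convention.
xi, Delta = 0.3, -0.5
E = np.sqrt(xi**2 + Delta**2)

u = np.sqrt(0.5 * (1.0 + xi / E))
v = np.sqrt(0.5 * (1.0 - xi / E))

e0 = np.array([[xi, Delta], [Delta, -xi]])   # e^{(0)}(k), real symmetric
W = np.array([[u, v], [-v, u]])              # Bogoliubov matrix W_k

D = W.T @ e0 @ W   # should equal diag(E, -E)
```

The off-diagonal entries of `D` vanish precisely because $\Delta < 0$, in line with the sign convention stated above, while $u_k^2 + v_k^2 = 1$ guarantees the transformation is canonical.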
The central idea is that, at time $t$, as the system responds to the perturbation, it develops a time-dependent order parameter self-consistently. The Cooper pairs will now be allowed to have total momentum $\vec{q}$. Using Nambu formalism, we can write a time-dependent BCS Hamiltonian in the form: \begin{equation} \hat{H}_{BCS}(t) = \sum_{\vec{k}} \, \hat{\Psi}^{\dagger}(\vec{k}) \left( e^{(0)}(\vec{k}) \right)^T \hat{\Psi}^{}(\vec{k}) + \sum_{\vec{k}} \, \hat{\Psi}^{\dagger}(\vec{k} - \vec{q}/2) \, \left(\delta e (\vec{q},t) \right)^T \, \hat{\Psi}^{}(\vec{k} + \vec{q}/2) + h.c. \end{equation} where we have introduced the time-dependent matrix: \begin{equation} \delta e (\vec{q},t) = \left( \begin{array}{cc} \frac{1}{V}\delta U(\vec{q},\omega) e^{-i \omega t} & \left( \Delta_{-\vec{q}}(t) \right)^{\star}\\ \Delta_{\vec{q}}(t) & -\frac{1}{V} \delta U(\vec{q},\omega) e^{-i \omega t} \end{array} \right) \end{equation} The matrix $\delta e (\vec{q},t)$ contains both the external perturbation and the self-consistently generated time-dependent gap function: \begin{equation} \Delta_{\vec{q}}(t) = - \frac{g}{V} \sum_{\vec{k}} \langle \hat{c}^{}_{-(\vec{k} - \vec{q}/2),\downarrow} \hat{c}^{}_{\vec{k} + \vec{q}/2, \uparrow} \rangle \end{equation} where the brackets denote an average over the time-dependent ground state of $\hat{H}_{BCS}(t)$.
This dynamical gap equation can be combined with the well known equilibrium gap equation: \begin{equation} \frac{1}{g} = \frac{1}{V} \sum_{\vec{k}} \frac{1}{2E(k)} \end{equation} to give the conditions: \begin{equation} \label{self_consistency} \begin{split} & \sum_{\vec{k}} \left( \delta n_{2,1}(\vec{k},\vec{q},t) + \frac{1}{2E(k)} \Delta_{\vec{q}}(t) \right) = 0 \\ & \sum_{\vec{k}} \left( \delta n_{1,2}(\vec{k},\vec{q},t) + \frac{1}{2E(k)} \left( \Delta_{-\vec{q}}(t) \right)^{\star} \right) = 0\,, \end{split} \end{equation} which have to be fulfilled by the time dependent fluctuating part of the density matrix: \begin{equation} \delta n_{i,j}(\vec{k},\vec{q},t) = \left\langle \hat{\Psi}^{\dagger}_i(\vec{k} - \vec{q}/2) \, \hat{\Psi}^{}_j(\vec{k} + \vec{q}/2) \right\rangle\,. \end{equation} In order to compute dynamical response functions, the key ingredient is the time derivative: \begin{equation} i \frac{d}{dt} \hat{\Psi}^{\dagger}_i(\vec{k} - \vec{q}/2) \, \hat{\Psi}^{}_j(\vec{k} + \vec{q}/2) = \left[ \hat{\Psi}^{\dagger}_i(\vec{k} - \vec{q}/2) \, \hat{\Psi}^{}_j(\vec{k} + \vec{q}/2) \, , \, \hat{H}_{BCS}(t) \right] \end{equation} which, to first order in the perturbation strength, leads to the following kinetic equation for the $2 \times 2$ density matrix $\delta n_{\vec{k}}(\vec{q},\omega) = \delta n(\vec{k},\vec{q},t) e^{i \omega t}$: \begin{equation} \label{kinetic_equation} \begin{split} & \omega \delta n_{\vec{k}}(\vec{q},\omega) = \delta n_{\vec{k}}(\vec{q},\omega) e^{(0)}(\vec{k} + \vec{q}/2) - e^{(0)}(\vec{k} - \vec{q}/2) \delta n_{\vec{k}}(\vec{q},\omega) \\ & + n^{(0)}(\vec{k} - \vec{q}/2) \delta e (\vec{q},\omega) - \delta e (\vec{q},\omega) n^{(0)}(\vec{k} + \vec{q}/2)\,, \end{split} \end{equation} where $n^{(0)}_{i,j}(\vec{k}) = \left\langle \hat{\Psi}^{\dagger}_i(\vec{k}) \, \hat{\Psi}^{}_j(\vec{k}) \right\rangle$ is the equilibrium density matrix. 
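As a side remark, the equilibrium gap equation $1/g = (1/V)\sum_{\vec{k}} 1/2E(k)$ is easily solved numerically on a finite $\vec{k}$-grid. The sketch below (our illustration, with arbitrary units $\hbar^2/2m = 1$ and grid parameters of our choosing) uses bisection for the gap magnitude, exploiting that the right-hand side decreases monotonically in $\Delta$:

```python
import numpy as np

mu = 1.0
ks = np.linspace(-3.0, 3.0, 41)
KX, KY = np.meshgrid(ks, ks)
xi = KX**2 + KY**2 - mu          # quadratic dispersion, hbar^2/2m = 1
V = xi.size                      # number of modes plays the role of V

def rhs(Delta):
    """(1/V) sum_k 1/(2 E(k)) with E(k) = sqrt(xi(k)^2 + Delta^2)."""
    return np.sum(1.0 / (2.0 * np.sqrt(xi**2 + Delta**2))) / V

def solve_gap(g, lo=1e-8, hi=50.0):
    """Bisection for the gap magnitude: rhs is strictly decreasing in Delta."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if rhs(mid) > 1.0 / g:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

As a consistency check, choosing $g = 1/\mathrm{rhs}(1)$ the bisection recovers a gap magnitude of $1$ to high accuracy.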
Equations~\eqref{kinetic_equation}, complemented with the conditions \eqref{self_consistency}, can be solved to obtain the density-density response function. We skip the simple but lengthy algebra here and give directly the final result: \begin{equation} \chi_{nn} = -\frac{1}{V} \left\{ I^{\prime\prime} + \frac{\Delta^2}{I_{11} I_{22} - \omega^2 I^2_{12}} \left(2\omega^2 I \, I_{12} \, I^{\prime} -\omega^2 I_{22} I^{\prime \, 2} - I^2 \, I_{11} \right) \right\}\,, \end{equation} where \begin{equation} \begin{split} &I''(\vec{q},\omega) = \sum_{\vec{k}} \frac{E_{+} + E_{-}}{\left( E_{+} + E_{-} \right)^2 - \omega^2} \left( \frac{E_{-}E_{+}-\xi_{-}\xi_{+}+\Delta^2}{E_{-}E_{+}}\right) \\ &I(\vec{q},\omega) = \sum_{\vec{k}} \frac{E_{+} + E_{-}}{\left( E_{+} + E_{-} \right)^2 - \omega^2} \left( 
\frac{\xi_{+} + \xi_{-}}{E_{-}E_{+}}\right) \\ & I'(\vec{q},\omega) = \sum_{\vec{k}} \frac{E_{+} + E_{-}}{\left( E_{+} + E_{-} \right)^2 - \omega^2} \left( \frac{1}{E_{-}E_{+}}\right) \\ & I_{12}(\vec{q},\omega) = \sum_{\vec{k}}\frac{1}{\left( E_{+} + E_{-} \right)^2 - \omega^2} \left( \frac{E_{+} \xi_{-} + E_{-} \xi_{+}}{E_{-} E_{+} }\right) \\ & I_{11}(\vec{q},\omega) = \sum_{\vec{k}}\frac{E_{+} + E_{-}}{\left( E_{+} + E_{-} \right)^2 - \omega^2} \left(\frac{E_{-} E_{+} + \xi_{-}\xi_{+} + \Delta^2}{E_{-} E_{+}} \right) - \frac{1}{E} \\ & I_{22}(\vec{q},\omega) = \sum_{\vec{k}}\frac{E_{+} + E_{-}}{\left( E_{+} + E_{-} \right)^2 - \omega^2} \left(\frac{E_{-} E_{+} + \xi_{-}\xi_{+} - \Delta^2}{E_{-} E_{+}} \right) - \frac{1}{E} \end{split} \end{equation} We use the shorthand $E_{\pm} = E(\vec{k} \pm \vec{q}/2)$ and $\xi_{\pm} = \xi(\vec{k} \pm \vec{q}/2)$. The time-dependent self-consistent order parameter in the case of a perturbation coupled to the particle density is given by: \begin{equation} \delta \Delta_{\vec{q}}(t) = \frac{e^{-i\omega t} \, \delta U(\vec{q},{\omega}) \, \Delta}{V \, \left( I_{11} I_{22} - \omega^2 I^2_{12}\right)}\left( -\omega I \, I_{12} + I \, I_{11} -\omega^2 I' \, I_{12} + \omega I' \, I_{22} \right) \end{equation} The computation of the spin-density response function leads to a much simpler result: \begin{equation} \chi_{S_zS_z} = -\frac{1}{V} I_s^{\prime\prime} \,, \end{equation} where \begin{equation} I''_s(\vec{q},\omega) = \sum_{\vec{k}} \frac{E_{+} + E_{-}}{\left( E_{+} + E_{-} \right)^2 - \omega^2} \left( \frac{E_{-}E_{+}-\xi_{-}\xi_{+} -\Delta^2}{E_{-}E_{+}}\right)\,. 
\end{equation} \begin{figure}[ptb] \begin{center} \includegraphics[width=10cm, angle = 270]{density-eps-converted-to.pdf} \caption{(color online) Color plot of the dynamical structure factor $S(\vec{q},\omega)$ (units $1/\varepsilon_F = (2m/\hbar^2) (1/ 2 \pi n )$) of a $2D$ Fermi gas for four values of the interaction parameter: $\log(k_F a) = 0.0$ (top left panel), $0.5$ (top right panel), $1.0$ (bottom left panel) and $1.5$ (bottom right panel). We also show a horizontal line at $2|\Delta|$, the threshold to break Cooper pairs, the free-atom dispersion $e_a(q) = \hbar^2|\vec{q}|^2/ 2 m$, and the free-molecule dispersion $e_m(q) = \hbar^2|\vec{q}|^2/ 4 m$. } \label{fig:dBCS_dens} \end{center} \end{figure} \begin{figure}[ptb] \begin{center} \includegraphics[width=10cm, angle = 270]{spin-eps-converted-to.pdf} \caption{(color online) Color plot of the spin dynamical structure factor $S_s(\vec{q},\omega)$ (units $1/\varepsilon_F$) of a $2D$ Fermi gas. The setup is the same as in Fig.~\ref{fig:dBCS_dens}. } \label{fig:dBCS_spin} \end{center} \end{figure} The results for the dynamical structure factor for the density, $S(\vec{q},\omega)$, and the spin density, $S_s(\vec{q},\omega)$, are shown in Figs.~\ref{fig:dBCS_dens} and \ref{fig:dBCS_spin}, respectively, for four values of the interaction parameter, from the BEC regime $\log(k_F a) = 0.0$ through to the BCS regime $\log(k_F a) =1.5$. At $\log(k_F a) = 0.0$ the system is expected to behave as a gas of molecules. This is confirmed by the high-momentum behavior $\hbar^2|\vec{q}|^2/ 2 m \gg |\Delta|$ of $S(\vec{q},\omega)$, where we have a spectrum of free molecules, $S(\vec{q},\omega) \simeq \delta ( \omega - e_m(\vec{q}))$, with $e_m(\vec{q}) = \hbar^2|\vec{q}|^2/ 4 m$ containing the mass of the molecules ($2m$). We observe that this behavior is not present in the spin structure factor $S_s(\vec{q},\omega)$: in order to create a modulation in spin density, it is necessary to first break a Cooper pair.
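To give a feeling for how the mean-field spin response is evaluated in practice, here is a minimal numerical sketch (our illustration; the parameters and units are arbitrary, and a small broadening $\eta$ replaces the $i0^{+}$ prescription) that computes a quantity proportional to $S_s(\vec{q},\omega)$ from $I''_s$ on a finite $\vec{k}$-grid:

```python
import numpy as np

mu, Delta, eta = 1.0, -0.5, 0.05      # illustrative values; eta broadens i0+
qx, qy = 0.5, 0.0

ks = np.linspace(-4.0, 4.0, 81)
KX, KY = np.meshgrid(ks, ks)

def xi(kx, ky):
    return kx**2 + ky**2 - mu          # quadratic dispersion, hbar^2/2m = 1

def E(kx, ky):
    return np.sqrt(xi(kx, ky)**2 + Delta**2)

xp, xm = xi(KX + qx/2, KY + qy/2), xi(KX - qx/2, KY - qy/2)
Ep, Em = E(KX + qx/2, KY + qy/2), E(KX - qx/2, KY - qy/2)

def I2s(omega):
    """I''_s(q, omega + i*eta) as a lattice sum (no volume normalization)."""
    w = omega + 1j * eta
    coherence = (Em * Ep - xm * xp - Delta**2) / (Em * Ep)
    return np.sum((Ep + Em) / ((Ep + Em)**2 - w**2) * coherence)

# S_s(q, omega) is proportional to -Im(chi_SzSz) = Im(I2s)/V, up to constants
omegas = np.linspace(0.0, 8.0, 40)
spectrum = np.array([I2s(w).imag for w in omegas])
```

The resulting `spectrum` is non-negative and its weight sits above the pair-breaking threshold, mirroring the behavior discussed for Fig.~\ref{fig:dBCS_spin}.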
This is the reason that the appreciable values of $S_s(\vec{q},\omega)$ lie above the line $2 |\Delta|$, which is the estimated energy cost of breaking a molecule. In the BCS regime at $\log(k_F a) = 1.5$, on the other hand, the dynamical structure factors are more similar to the non-interacting results. The spectrum is centered around the free-atom dispersion $e_{a}(\vec{q}) = \hbar^2|\vec{q}|^2/ 2 m$, with a broadening due to the particle-hole continuum. We also see that the theory predicts a smooth interpolation between the two regimes, consistent with a BEC-BCS crossover. From comparison with AFQMC calculations, which we discuss next, it is seen that the results are in reasonably good agreement, although quantitative differences exist. \section{Quantum Monte Carlo Approach} \label{AFQMC} Because of the mean-field nature of the approximations involved in dynamical BCS theory, it is not clear \emph{a priori} how reliable the results are, especially in strongly interacting regimes. Quantum Monte Carlo (QMC) approaches provide an alternative. In particular, for the spin-balanced cold Fermi gas there is no fermion sign problem in the auxiliary-field quantum Monte Carlo (AFQMC) method \cite{AFQMC-lecture-notes-2013,hubbard_benchmark,PhysRevB.94.085103,PhysRevB.88.125132}. The AFQMC method is able to provide unbiased, numerically exact results for any observable on the ground state of the cold Fermi gas \cite{Hao-2DFG,PhysRevA.84.061602}. We have recently shown \cite{ettoreGAP,Ettore-2DFG} that it is possible to reach beyond static properties and compute imaginary-time correlation functions such as \begin{equation} \label{dynamical_general} F(\vec{q},\tau) = \frac{1}{N} \frac{ \langle \Psi_0 | \, \hat{n}_{\vec{q}} \, e^{-\tau (\hat{H} - E_0) } \, \hat{n}_{-\vec{q}} \, | \Psi_0 \rangle } {\langle \Psi_0 \, | \, \Psi_0 \rangle } \end{equation} where $N$ is the number of particles and $\hat{n}_{\vec{q}}$ the Fourier component of the density (or spin-density) fluctuation operator. The exact zero-temperature relation \begin{equation} \label{dynamical_density_tau} F(\vec{q},\tau) = \int_{0}^{+\infty} d\omega \, e^{-\tau \omega} S(\vec{q},\omega) \end{equation} allows us to then obtain predictions about the dynamical structure factor (density and spin density) of the system using analytic continuation techniques.
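The forward relation \eqref{dynamical_density_tau} also encodes the moments of $S(\vec{q},\omega)$ in the behavior of $F(\vec{q},\tau)$ near $\tau=0$: $F(\vec{q},0)$ is the static structure factor (zeroth moment), and $-\partial_\tau F|_{\tau=0}$ is the first moment entering the f-sum rule. A minimal numerical sketch of the forward transform (the single-mode Gaussian model spectrum and the grids are illustrative assumptions, not the physical $S(\vec{q},\omega)$):

```python
import numpy as np

# Illustrative model spectrum: a single broadened mode at omega0 (a stand-in
# for S(q, w), chosen only to exercise Eq. (dynamical_density_tau)).
omega, dw = np.linspace(0.0, 60.0, 30001, retstep=True)
omega0, sigma = 4.0, 0.3
S = np.exp(-((omega - omega0) ** 2) / (2 * sigma ** 2))
S /= S.sum() * dw  # normalize the zeroth moment to 1

def F(tau):
    """F(tau) = int_0^inf dw e^{-tau w} S(w), Eq. (dynamical_density_tau)."""
    return np.sum(np.exp(-tau * omega) * S) * dw

# F(0) is the static structure factor (here normalized to 1), while
# -dF/dtau at tau = 0 is the first moment entering the f-sum rule.
first_moment = -(F(1e-5) - F(0.0)) / 1e-5
```

Because the model spectrum is sharply peaked at $\omega_0$, the first moment evaluates to approximately $\omega_0$, as expected.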
In the following we will introduce the AFQMC method and discuss how to efficiently compute $F(\vec{q},\tau)$ for the Fermi gas system. \subsection{AFQMC formalism and static properties} We will introduce the basic notation of the methodology using the attractive Hubbard model Hamiltonian \eqref{ham_latt} introduced in Sec.~\ref{hcgr}. The AFQMC methodology relies on the following \cite{AFQMC-lecture-notes-2013}: \begin{equation} | \, \Psi_0 \rangle \propto \lim_{\beta \to +\infty} e^{-\beta ( \hat{H} - E_0)} |\phi_T\rangle\,, \end{equation} where $E_0$ is an estimate of the ground state energy, and $|\phi_T\rangle$ a trial wave function which is not orthogonal to the many-body ground state $ | \,\Psi_0 \, \rangle$. The Trotter-Suzuki breakup together with the Hubbard-Stratonovich (HS) transformation provides the following: \begin{equation} \label{propagator} e^{-\beta ( \hat{H} - E_0)} = \left(e^{-\delta\tau ( \hat{H} - E_0)} \right)^{M} \simeq \left(\int d{\bf{x}} p({\bf{x}}) \hat{B}({\bf{x}})\right)^{M}\,, \end{equation} which becomes exact in the limit $M\to \infty$ ($\delta\tau = \beta/M$ is a {\it{time-step}}). At each time slice, there is one set of auxiliary fields, ${\bf{x}} = (x_1, \dots, x_{\mathcal{N}_s})$, which are a discrete set of Ising fields on the lattice. $\hat{B}({\bf{x}})$ is a one-particle propagator, and the function $p({\bf{x}})$ is a discrete probability density.
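The imaginary-time projection $e^{-\beta(\hat{H}-E_0)}|\phi_T\rangle \to |\Psi_0\rangle$ underlying this construction can be illustrated deterministically on a toy matrix, applying the propagator exactly (no Trotter error and no auxiliary-field sampling; the six-level spectrum and all parameters are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "Hamiltonian" with known spectrum 0, 1, ..., 5 in a random basis.
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
evals = np.arange(6.0)
H = Q @ np.diag(evals) @ Q.T
ground = Q[:, 0]

# Apply e^{-dtau (H - E0)} M times to a trial state, mimicking the M time
# slices of Eq. (propagator); here beta = M * dtau.
dtau, M = 0.1, 200
B = Q @ np.diag(np.exp(-dtau * (evals - evals[0]))) @ Q.T
phi = rng.standard_normal(6)  # trial state with generic ground-state overlap
for _ in range(M):
    phi = B @ phi
    phi /= np.linalg.norm(phi)

overlap = abs(phi @ ground)  # approaches 1 as beta grows
```

Excited-state contamination decays as $e^{-\beta\,\mathrm{gap}}$, so the overlap converges to unity exponentially fast in $\beta$.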
A key point of the methodology is that $\hat{B}({\bf{x}})$ in Eq.~\eqref{propagator}, which is the result of the HS transformation, is the exponential of a one-body operator; its application on a Slater determinant $|\phi\rangle$ results in another Slater determinant $|\phi'\rangle$, given in matrix form by \begin{equation} \mathcal{B}({\bf{x}}) \Phi =\Phi'\,, \label{eq:Thouless} \end{equation} where $\Phi=\Phi_\uparrow\otimes \Phi_\downarrow$, with $\Phi_\sigma$ being the $\mathcal{N}_s\times N_\sigma$ matrix containing the spin-$\sigma$ orbitals of the Slater determinant $|\phi\rangle$, and similarly for $|\phi'\rangle$. The standard path-integral AFQMC method allows us to evaluate ground state expectation values: \begin{equation} \label{exp_val} \langle \hat{O} \rangle = \frac{ \langle \Psi_0 | \, \hat{O} \, | \Psi_0 \rangle } {\langle \Psi_0 \, | \, \Psi_0 \rangle } \end{equation} by casting them in the integral form: \begin{equation} \label{exp_val2} \langle \hat{O} \rangle = \int d{\bf{X}} \, \mathcal{W}({\bf{X}}) \, \mathcal{O}({\bf{X}})\,. \end{equation} In Eq.~\eqref{exp_val2}, ${\bf X} = ({\bf x}(1), \dots, {\bf x}(M))$ denotes a {\it{path}} in auxiliary-field space. Choosing the trial wave function $|\phi_T\rangle$ as a single Slater determinant and introducing the notation: \begin{equation} \label{phi_L} \langle \phi_L | \, = \langle \phi_T | \,\hat{B}({\bf x}(M)) \dots \hat{B}({\bf x}(l)) \end{equation} and \begin{equation} \label{phi_R} | \phi_R \rangle = \hat{B}({\bf x}(l-1)) \dots \hat{B}({\bf x}(1)) \,| \phi_T \rangle\,, \end{equation} we have: \begin{equation} \mathcal{W}({\bf{X}}) \propto \, \langle \phi_L \, | \, \phi_R \rangle \, \prod_{i=1}^{M} p({\bf{x}}(i))\,, \end{equation} while the estimator is \begin{equation} \label{static_estimator} \mathcal{O}({\bf{X}}) = \frac{ \langle \phi_L | \, \hat{O} \, | \phi_R \rangle } {\langle \phi_L \, | \, \phi_R \rangle }\,.
\end{equation} For attractive interaction and $N_\uparrow=N_\downarrow$, $ \mathcal{W}({\bf{X}}) $ remains non-negative for all possible auxiliary-field path configurations. The calculation is sign-problem free, allowing us to obtain exact results. We use a Metropolis sampling of the paths, exploiting a force bias \cite{Hao-2DFG,AFQMC-lecture-notes-2013} that allows a high acceptance ratio in the updates. The infinite variance problem is eliminated with a bridge link approach \cite{Hao-inf-var}. \subsection{Dynamical properties} \label{ssec:method-dynamical} The AFQMC methodology allows us to also compute dynamical correlation functions in imaginary time at zero temperature: \begin{equation} \label{dynamical} f(\tau) = \frac{ \langle \Psi_0 | \, \hat{O} \, e^{-\tau (\hat{H} - E_0) } \, \hat{O}^{\dagger} \, | \Psi_0 \rangle } {\langle \Psi_0 \, | \, \Psi_0 \rangle }\,, \end{equation} where $\hat{O}$ can be any operator. We will focus here on one-body operators such as the particle density or the spin density. The imaginary-time propagator between the operators $\hat{O}$ and $\hat{O}^{\dagger}$ can be expressed using Eq.~\eqref{propagator}. We insert an extra segment into the path: a number $N_{\tau} = \tau/\delta\tau$ of {\it{time-slices}}, ${\bf \tilde{x}}(1), \dots , {\bf \tilde{x}}(N_{\tau})$. The static estimator in Eq.~\eqref{static_estimator} is replaced by the following dynamical estimator: \begin{equation} \label{dynamical estimator} f({\bf{X}},\tau) = \frac{ \langle \phi_L | \, \hat{O} \, \hat{B}({\bf{\tilde{x}}}(N_{\tau})) \dots \hat{B}({\bf{\tilde{x}}}(1)) \,\,\hat{O}^{\dagger} | \phi_R \rangle } {\langle \phi_L \, | \hat{B}({\bf{\tilde{x}}}(N_{\tau})) \dots \hat{B}({\bf{\tilde{x}}}(1)) \,| \, \phi_R \rangle }\,. \end{equation} Below we will write $\hat{B}_i$ instead of $\hat{B}({\bf \tilde{x}}(i))$ for notational simplicity.
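The ingredients of these weights and estimators are inexpensive to manipulate numerically: by Eq.~\eqref{eq:Thouless}, applying a one-body propagator to a Slater determinant multiplies its orbital matrix, and a same-spin overlap reduces to the determinant of a small $N_\sigma\times N_\sigma$ matrix, $\det(\Phi^{\dagger}_{L}\Phi^{}_{R})$. A sketch for one spin sector (the sizes and the random one-body matrix are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
Ns, N = 12, 3  # lattice sites and particles in one spin sector (illustrative)

# Orthonormal orbital matrices (Ns x N) of two Slater determinants.
PhiR, _ = np.linalg.qr(rng.standard_normal((Ns, N)))
PhiL, _ = np.linalg.qr(rng.standard_normal((Ns, N)))

# One-body propagator B = e^{-dtau h} for a random Hermitian one-body matrix.
h = rng.standard_normal((Ns, Ns))
h = (h + h.T) / 2
w, v = np.linalg.eigh(h)
B = v @ np.diag(np.exp(-0.05 * w)) @ v.T

PhiRp = B @ PhiR  # Eq. (Thouless): the result is again a Slater determinant
overlap = np.linalg.det(PhiL.T @ PhiRp)  # same-spin overlap <phi_L | phi'_R>
```

Note that the cost is set by the tall-skinny orbital matrices, which is what makes the dilute limit $N \ll \mathcal{N}_s$ advantageous.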
Standard approaches \cite{assaad_prb,hirsch-stable,jel_dyn_method,jel_dyn_method2} to evaluate the expression in Eq.~(\ref{dynamical estimator}) require a computational cost of $\mathcal{O}(\mathcal{N}^3_s)$. We have recently introduced an approach \cite{ettoreGAP} which allows computation of the matrix elements of dynamical Green functions and (spin) density correlation functions with a computational cost of $\mathcal{O}(\mathcal{N}_s\,N^2)$. For dilute systems such as the ultracold Fermi gas that we are concerned with here, the number of particles is significantly smaller than the number of lattice sites, $N \ll \mathcal{N}_s $. The approach thus offers a significant advantage which enables us to reach realistic limits. Since the estimator \eqref{dynamical estimator} is computed by sampling the same probability distribution as for static properties \eqref{static_estimator}, a finite imaginary time $\tau > 0$ does not introduce any bias. We will now sketch the method, which explicitly propagates fluctuations during the random walk. \subsection{Two-body correlation functions} Let us focus on the estimator: \begin{equation} \label{dynamical estimator_new_gen_tb} n({\bf{X}},\tau) =\frac{ \langle \phi_L | \, \hat{n}_{j,\sigma'} \,\hat{B}_{N_{\tau}} \dots \hat{B}_1 \,\,\hat{n}_{i,\sigma} | \phi_R \rangle } {\langle \phi_L \, | \,\hat{B}_{N_{\tau}} \dots \hat{B}_1 \,\,| \, \phi_R \rangle } \end{equation} where, as usual, $\hat{n}_{i,\sigma} = \hat{c}^{\dagger}_{i,\sigma} \hat{c}^{}_{i,\sigma} $ is the fermion spin-resolved density operator.
Since $\hat{n}^{2}_{i,\sigma} = \hat{n}^{}_{i,\sigma}$, it is straightforward to show that \begin{equation} \label{smart} \hat{n}_{i,\sigma} = \frac{e^{\hat{n}_{i,\sigma}} - 1}{e-1}\,. \end{equation} This implies that, if $| \phi_R \rangle = \hat{c}^{\dagger}_{|u_1,\uparrow\rangle} \dots \hat{c}^{\dagger}_{|u_{N_\uparrow},\uparrow\rangle} \hat{c}^{\dagger}_{|v_1,\downarrow\rangle} \dots \hat{c}^{\dagger}_{|v_{N_\downarrow},\downarrow\rangle} |0\rangle$, the numerator in Eq.~\eqref{dynamical estimator_new_gen_tb} can be viewed as the propagation of two Slater determinants: \begin{equation} \hat{n}_{i,\uparrow}| \phi_R \rangle = \frac{ | \phi'_R(i) \rangle - | \phi_R \rangle}{e-1} \end{equation} where: \begin{equation} | \phi_R'(i) \rangle = \hat{c}^{\dagger}_{|e^{\hat{n}_{i,\uparrow}}u_1,\uparrow\rangle} \dots \hat{c}^{\dagger}_{|e^{\hat{n}_{i,\uparrow}}u_{N_\uparrow},\uparrow\rangle} \hat{c}^{\dagger}_{|v_1,\downarrow\rangle} \dots \hat{c}^{\dagger}_{|v_{N_\downarrow},\downarrow\rangle} |0\rangle\,. \end{equation} Consequently, Eq.~\eqref{dynamical estimator_new_gen_tb} can be broken into two pieces: \begin{equation} \label{dynamical estimator_new_gen_tb-broken} n({\bf{X}},\tau) = \frac{1}{e-1} \left(n_1({\bf{X}},\tau) - n_2({\bf{X}},\tau)\right)\,, \end{equation} which, writing $\hat{B} \equiv \hat{B}_{N_{\tau}} \dots \hat{B}_1$, can be expressed as: \begin{equation} n_1({\bf{X}},\tau) = \frac{ \langle \phi_L | \, \hat{n}_{j,\sigma'} \hat{B} \,\, | \phi'_R(i) \rangle } {\langle \phi_L \, | \hat{B} \,\,| \, \phi'_R (i)\rangle } \,\, \frac{ \langle \phi_L | \, \hat{B} \,\, | \phi'_R(i) \rangle } {\langle \phi_L \, | \hat{B} \,\,| \, \phi_R \rangle } \end{equation} and \begin{equation} n_2({\bf{X}},\tau) = \frac{ \langle \phi_L | \, \hat{n}_{j,\sigma'} \hat{B} \,\, | \phi_R \rangle } {\langle \phi_L \, | \hat{B} \,\,| \, \phi_R \rangle }\,. \end{equation} Both $n_1({\bf{X}},\tau)$ and $n_2({\bf{X}},\tau)$ can be calculated with straightforward manipulations of Slater determinants.
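Equation \eqref{smart} is a direct consequence of idempotency: $\hat{n}^{2}_{i,\sigma}=\hat{n}^{}_{i,\sigma}$ implies $e^{\hat{n}_{i,\sigma}} = 1 + (e-1)\,\hat{n}_{i,\sigma}$. A quick numerical check on a single-mode occupation operator:

```python
import numpy as np

# Occupation-number operator of a single fermionic mode: diag(0, 1).
n = np.diag([0.0, 1.0])
exp_n = np.diag(np.exp(np.diag(n)))  # e^{n} is diagonal in this basis

# Idempotency n^2 = n gives e^{n} = 1 + (e - 1) n, i.e. Eq. (smart):
recovered = (exp_n - np.eye(2)) / (np.e - 1.0)
```

The same identity holds for any projector, which is what lets the density operator be absorbed into an exponential of a one-body operator acting on a Slater determinant.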
The average of $n({\bf{X}},\tau)$ over an ensemble of paths in the configuration space of the auxiliary field provides the estimate of the density or spin-density imaginary-time correlation function. \begin{figure}[ptb] \begin{center} \includegraphics[width=8cm, angle = 270]{qmc_vs_ED-eps-converted-to.pdf} \caption{(color online) Imaginary-time on-site density-density correlation function $\langle \hat{n}_{i} \, e^{-\tau(\hat{H} - E_0)} \, \hat{n}_{i} \rangle$ for the lattice Hamiltonian in Eq.~\eqref{ham_latt} on a $4 \times 4$ lattice hosting $10$ particles with $g_{\mathcal{L}}/t = 4$. Comparison between the QMC computation (circles) and the exact diagonalization result (dotted line). AFQMC error bars are shown but are smaller than the symbol size.} \label{fig:dBCS_vs_ED} \end{center} \end{figure} In Fig.~\ref{fig:dBCS_vs_ED} we show a comparison between AFQMC and exact diagonalization results. We compute the on-site density-density correlation in imaginary time for a $4 \times 4$ lattice hosting $N=10$ particles with $g_{\mathcal{L}}/t = 4$. Excellent agreement is seen, providing a strong validation of the AFQMC algorithm. \section{Comparison between dynamical BCS theory and AFQMC} \label{COMP} \begin{figure}[ptb] \begin{center} \includegraphics[width=10cm, angle = 270]{qmc_vs_dBCS-eps-converted-to.pdf} \caption{(color online) Comparison of the imaginary-time density-density correlation $F(\vec{q},\tau)$ at $q = 4 k_F$, obtained from dynamical BCS theory (blue full line) and AFQMC (red open circles). The top panel is in the crossover regime, at $\log(k_F a) = 0.5$, while the bottom panel is in the BCS regime, at $\log(k_F a) = 1.5$. The insets show the comparison for larger values of the imaginary time.
The imaginary-time window considered is large enough to capture the behavior of the density fluctuation mode at $q = 4 k_F$.} \label{fig:dBCS_vs_QMC} \end{center} \end{figure} With the algorithm described above, we can compute unbiased imaginary-time correlation functions $F(\vec{q},\tau)$ for the 2D Fermi gas. The connection with dynamical structure factors is provided by the relation in Eq.~\eqref{dynamical_density_tau}, which has to be inverted to estimate $S(\vec{q},\omega)$. This is an analytic continuation problem, which can be delicate. However, a major advantage here is that the quality of the imaginary-time data is very high. State-of-the-art analytic continuation approaches have been shown to provide very accurate estimates under such circumstances, for instance in the realm of cold bosonic systems \cite{giftREV}. A comprehensive study of the dynamical structure factors from AFQMC for the 2D gas will be presented elsewhere \cite{Ettore-2DFG}. Here we directly compare the result from dynamical BCS theory transformed into the imaginary-time domain with the exact AFQMC results. We stress that the direct Laplace transform, from the frequency domain to the imaginary-time domain, can be performed with arbitrary accuracy, while the inverse transformation is much more challenging. In order to assess the quality of the dynamical BCS theory against unbiased AFQMC results, we choose a system of $N = 18$ particles moving on a lattice with $25 \times 25$ sites, and compute $F(\vec{q},\tau)$. We choose a high momentum transfer of $q = 4 k_F$, which provides an interesting probe of the BCS-BEC crossover as the atomic spectrum evolves into the molecular spectrum. In Fig.~\ref{fig:dBCS_vs_QMC} we present the comparisons in the BCS regime and in the crossover regime.
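The asymmetry between the two directions of the transform is easy to demonstrate numerically: two visibly different model spectra can yield nearly indistinguishable imaginary-time data, which is precisely what makes the inversion delicate. A sketch (the Gaussian model spectra and grids are illustrative choices):

```python
import numpy as np

omega, dw = np.linspace(0.0, 20.0, 8001, retstep=True)

def gaussian(w0, sigma):
    S = np.exp(-((omega - w0) ** 2) / (2 * sigma ** 2))
    return S / (S.sum() * dw)  # unit zeroth moment

def laplace(S, taus):
    """Forward transform of Eq. (dynamical_density_tau): stable and easy."""
    return np.array([np.sum(np.exp(-t * omega) * S) * dw for t in taus])

# One narrow peak versus two split peaks: clearly different spectra ...
S1 = gaussian(4.0, 0.3)
S2 = 0.5 * (gaussian(3.6, 0.3) + gaussian(4.4, 0.3))
taus = np.linspace(0.0, 3.0, 31)
# ... whose imaginary-time data differ by far less; the inverse problem
# (analytic continuation) must resolve such small differences.
dF = np.max(np.abs(laplace(S1, taus) - laplace(S2, taus)))
dS = np.max(np.abs(S1 - S2))
```

This is why high statistical quality of the imaginary-time data is essential before attempting the inversion.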
We see that, in the BCS regime at $\log(k_F a) = 1.5$, the dynamical BCS theory prediction is compatible with the exact result, except for a narrow window at small imaginary time, where all the excited states of the microscopic Hamiltonian play an important role. On the other hand, in the crossover regime at $\log(k_F a) = 0.5$, the discrepancy is larger, although agreement is still restored in the large imaginary-time region. This suggests that many-body correlations beyond dynamical mean field have important effects on response functions in the strongly correlated BEC-BCS crossover. Considering sum rules, we observe that the zeroth moment, the static structure factor (that is, the value of the correlation function at $\tau = 0$), is significantly different between the two approaches. On the other hand, the dynamical BCS theory strictly imposes the first-moment sum rule (the ``f-sum rule''), which is also satisfied by the QMC result within statistical uncertainties. Finally, we comment that the dynamical BCS theory calculations here provide a rigorous route to benchmark analytic continuation methods. By applying an analytic continuation method to the dynamical BCS data in the imaginary-time domain, one could directly compare the results against the corresponding results in the frequency domain. Together with the availability of exact AFQMC results in the imaginary-time domain, the problem of the Fermi gas provides an excellent potential benchmark for methods to treat dynamical properties in strongly interacting many-fermion systems. \section{Conclusions} \label{CONC} We have studied the response functions, and more precisely the dynamical structure factors, of a two-dimensional cold gas with zero-range attractive interactions at zero temperature. We computed $S(\vec{q},\omega)$ within the framework of dynamical BCS theory, which provides explicit approximate expressions involving integrals over momentum space.
We also described an efficient algorithm to compute imaginary-time density and spin correlations with AFQMC for dilute gases, and used the results to benchmark dynamical BCS theory. The results of dynamical BCS theory show a spectrum of density fluctuations made of a fermionic particle-hole continuum superimposed on a gapless bosonic mode which, at high momentum, gives the response of a gas of free molecules on the BEC side of the crossover. The weight of this bosonic mode naturally decreases as we move towards the BCS regime, where the system is similar to an ordinary superconductor. The spin-density fluctuation spectrum has a simpler structure. It displays a gap corresponding to the energy needed to break a Cooper pair, which is necessary to induce a spin-density modulation at zero temperature. At higher energies, the spectrum displays the typical particle-hole continuum. We have discussed the importance and outlook of AFQMC calculations for imaginary-time correlations and response functions. Our AFQMC results are used to assess the accuracy of dynamical BCS theory. Discrepancies arise in the BEC-BCS crossover, although many qualitative features appear to be correctly described by dynamical BCS theory. Outside the strongly correlated regime, the theory is accurate and captures both the fermionic particle-hole excitations and the bosonic dynamics. The smaller differences in this regime tend to be more pronounced at shorter imaginary times. A very interesting perspective is to study the possibility of using effective parameters \cite{PhysRevB.94.235119} in the dynamical BCS theory to improve the agreement with exact results. \begin{acknowledgements} We gratefully acknowledge support from the National Science Foundation (NSF) under grant number DMR-1409510 and from the Simons Foundation.
The calculations were carried out at the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by NSF grant number ACI-1053575 and the computational facilities at the College of William and Mary. \end{acknowledgements}
\section{\label{sec:level1}Introduction} Quantum annealing (QA) is a method to solve combinatorial optimization problems by exploiting quantum properties \cite{Apolloni_1989, Finnila_1994, Kadowaki_1998, Farhi_2000, Farhi_2001, Arnab_2008, Albash_2018}. The solution of a combinatorial optimization problem can be embedded in the ground state of an Ising Hamiltonian~\cite{Lucas_2014, Lechner_2015}, while a transverse-field Hamiltonian is used to induce quantum fluctuations. We gradually decrease the transverse field while we gradually increase the Ising Hamiltonian, and we obtain the ground state of the Ising Hamiltonian if the so-called adiabatic condition is satisfied \cite{Kato_1950, messiah_2014, Morita_2008, Amin_2009, Kimura_2022, mackenzie2006perturbative}. This condition is given as follows: \begin{align} \label{eq1} \frac{|\braket{e(t)|\partial_{t}{\hat H(t)}|g(t)}|}{\Delta(t)^2}\ll 1, \end{align} where $H(t)$, $\ket{g(t)}$, $\ket{e(t)}$, and $\Delta(t)$ denote the Hamiltonian, the ground state, the first excited state, and the energy gap between these states, respectively. Throughout this paper, we call the left-hand side of Eq.~\eqref{eq1} the adiabatic condition term. The energy gap is believed to be related to the computational complexity, and the relationship between them has been studied \cite{Seki_2012, Seki_2015, susa2022nonstoquastic, Nishimori_2017, Susa_2018, Susa_2018_2, Susa_2020}. Several methods to estimate and control the energy gap have been proposed in order to improve the performance of QA \cite{Matsuzaki_2021, Imoto_2021, Mori_2022, Schiffer_2022, Imoto_2022, kadowaki2021greedy}. It is known that a phase transition can occur if there is a competition between the quantum fluctuation and the magnetic interactions. In this case, the energy gap vanishes at the phase transition point when the system size goes to infinity.
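As a concrete illustration of Eq.~\eqref{eq1}, the adiabatic condition term can be evaluated for a two-level Landau-Zener sketch, $H(t)=(1-2t/T)\hat{\sigma}^{z}+g\hat{\sigma}^{x}$, where the values of $g$ and $T$ are illustrative choices; the term peaks at the avoided crossing $t=T/2$, where the gap takes its minimum value $2g$:

```python
import numpy as np

sz = np.array([[1.0, 0.0], [0.0, -1.0]])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
g, T = 0.05, 10.0  # illustrative coupling and annealing time

def eta(t):
    """Adiabatic condition term, Eq. (1), for H(t) = (1 - 2t/T) sz + g sx."""
    H = (1 - 2 * t / T) * sz + g * sx
    dH = (-2.0 / T) * sz  # dH/dt is constant for this linear schedule
    evals, evecs = np.linalg.eigh(H)
    gap = evals[1] - evals[0]
    return abs(evecs[:, 1] @ dH @ evecs[:, 0]) / gap ** 2

# At the avoided crossing the term equals (2/T) / (2g)^2; away from it,
# both the transition matrix element and 1/gap^2 are strongly suppressed.
eta_mid, eta_edge = eta(T / 2), eta(0.0)
```

This toy model already shows the two competing ingredients of Eq.~\eqref{eq1}, the transition matrix element and the gap, whose separate roles are the subject of this paper.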
As is well known in statistical mechanics, the order of a system is characterized by an order parameter. In a second-order phase transition, the order parameter is continuous, whereas in a first-order phase transition it is discontinuous at the transition point. If a first-order phase transition occurs, the energy gap is expected to become exponentially small at the transition point. When tackling hard problems (that cannot be efficiently solved by classical algorithms) with QA, a first-order phase transition typically occurs. For example, when we solve the exact cover and the database search problem by QA, such a phase transition occurs \cite{Altshuler_2010, Young_2010}. As these previous studies show, the performance of QA has been believed to be determined simply by the energy gap. In this paper, we challenge this common wisdom, and propose a systematic method to construct counter-intuitive models where QA with a constant annealing time fails despite a constant energy gap. In our models, it is not the energy gap but the transition matrix in the adiabatic condition term of Eq.~\eqref{eq1} that causes the failure of QA (see Table~I). The key idea of our proposal is to add a penalty term to the Hamiltonian, which does not change the eigenstates of the Hamiltonian but shifts the eigenvalues. By adding such a penalty term, we analytically show that the transition matrix becomes exponentially large while the energy gap remains constant. We also numerically perform QA on the Grover search and the ferromagnetic $p$-spin model. The success probability of QA in these models becomes exponentially small as we increase the problem size $L$, despite an energy gap that scales as $O(L^0)$ during QA. Therefore, we highlight the importance of the transition matrix, which has often been overlooked in earlier work on QA. Our paper is organized as follows. In Sec.~II, we review QA and the adiabatic Grover search. In Sec.
III, we introduce the general framework to construct cases in which the transition matrix exponentially increases. In Sec.~IV, as a first example, we apply our general theory to the adiabatic Grover search. In Sec.~V, as a second example, we perform a numerical analysis of the ferromagnetic $p$-spin model. Finally, Sec.~VI is devoted to the conclusion. {\renewcommand\arraystretch{1.4} \begin{table} \caption{The problem size $L$ (the number of qubits) dependence of the energy gap and the transition matrix in the conventional case and in our model.} \centering \begin{ruledtabular} \begin{tabular}{c|l|ll} & \ \ \ \ \ \ Energy gap & \ \ \ Transition matrix & \\ & \ \ \ \ \ \ \ \ \ \ \ $\Delta(t)$ &\ \ \ \ $\braket{e(t)|\partial_{t}{\hat H(t)}|g(t)}$ & \\ \hline Conventional &\ exponentially small & \ ${\rm poly}(L)$ & \\ Our model &\ $O(L^{0})$ & \ exponentially large &\\ \end{tabular} \end{ruledtabular} \end{table} } \section{Quantum annealing and adiabatic Grover search} Let us review QA and the adiabatic Grover search in this section. \subsection{Quantum annealing} In QA, the total Hamiltonian is given as follows \cite{Farhi_2000, Farhi_2001}: \begin{align} \label{eq2} H_{QA}(t)=\frac{t}{T} H_{p}+\left(1-\frac{t}{T}\right) H_{d}, \end{align} where $H_{p}$, $H_{d}$ and $T$ are the problem Hamiltonian, the driver Hamiltonian, and the annealing time, respectively. We prepare the ground state of $H_{d}$, and let this state evolve under the total Hamiltonian. If the initial state at $t=0$ evolves with a sufficiently large $T$ to satisfy the adiabatic condition, we obtain the ground state of $H_{p}$ at $t=T$. Throughout this paper, the Hamiltonian is given in units of GHz and time in units of ns. \subsection{Adiabatic Grover search} Let us consider the problem of finding a specific element in a database composed of $N$ elements. On a classical computer, on average, we need to check half of the elements to find the target element.
Grover proposed a quantum algorithm to find the target element where only $O(\sqrt{N})$ evaluations are required \cite{Grover_1997}. We can adopt an adiabatic algorithm to search the database, and such an algorithm is called the adiabatic Grover search \cite{Farhi_2000, Roland_2002, Albash_2018}. In the adiabatic Grover search, the problem Hamiltonian is given by \begin{align} \label{eq3} H_{p}=\hat{I}-\ket{m}\bra{m}. \end{align} Here, the solution to be found is denoted by $|m\rangle $, and this is represented in the computational basis ($\ket{\uparrow}$ and $\ket{\downarrow}$). The number of qubits is $L$, i.e., the dimension of the Hilbert space is $2^L$. The driver Hamiltonian is described as \begin{align} \label{eq4} H_{d}=\hat{I}-\ket{++\cdots+}\bra{++\cdots+}, \end{align} where $\ket{+}$ is the eigenstate of the Pauli matrix ${\hat \sigma_{x}}$, i.e., $\ket{+}=(\ket{\uparrow}+\ket{\downarrow})/\sqrt{2}$. Since the Hamiltonian of the adiabatic Grover search can be block-diagonalized, we can analytically obtain the eigenvalues and eigenstates by diagonalizing a two-by-two matrix. By using the desired state $\ket{m}$ and its orthogonal state $\ket{m^{\perp}}$, the total Hamiltonian $H_{QA}(t)$ can be effectively described as \begin{align} \label{eq6} H_{QA}(t)=\frac{1}{2}{\hat I}-\frac{\Delta_{0}(t)}{2}{\rm cos}\theta\ \tilde{\sigma}^{z}-\frac{\Delta_{0}(t)}{2}{\rm sin}\theta\ \tilde{\sigma}^{x}, \end{align} where we define Pauli matrices represented in the $\ket{m}$ and $\ket{m^{\perp}}$ basis as $\tilde{\sigma}^{z}\equiv\ket{m}\bra{m}-\ket{m^{\perp}}\bra{m^{\perp}}$ and $\tilde{\sigma}^{x}\equiv\ket{m}\bra{m^{\perp}}+\ket{m^{\perp}}\bra{m}$.
Here, ${\rm cos}\theta(t)$, ${\rm sin}\theta(t)$ and $\Delta_{0}(t)$ are given as \begin{align} {\rm cos}\theta(t)=\frac{1}{\Delta_{0}(t)}\left[1-2\left(1-t/T\right)(1-2^{-L}) \right], \end{align} \begin{align} {\rm sin}\theta(t)=\frac{2}{\Delta_{0}(t)}\left(1-t/T\right)\sqrt{2^{-L}(1-2^{-L})}, \end{align} \begin{align} \Delta_{0}(t)=\sqrt{\left(1-\frac{2t}{T}\right)^2+\frac{2^{-L+2}t}{T}\left(1-\frac{t}{T}\right)}, \end{align} where we choose the branch of ${\rm Tan}^{-1}(x)$ given by $-\pi/2 \leq {\rm Tan}^{-1}(x) \leq \pi/2$ and we define $\theta(t)={\rm Tan}^{-1}({\rm sin}\theta(t)/{\rm cos}\theta(t))$. The ground state $\ket{g(t)}$ and the first excited state $\ket{e(t)}$ of the Hamiltonian \eqref{eq6} are given as follows: \begin{align} \label{eq10} \ket{g(t)}={\rm cos}\frac{\theta(t)}{2}\ket{m}+{\rm sin}\frac{\theta(t)}{2}\ket{m^{\perp}}, \end{align} \begin{align} \label{eq11} \ket{e(t)}=-{\rm sin}\frac{\theta(t)}{2}\ket{m}+{\rm cos}\frac{\theta(t)}{2}\ket{m^{\perp}}. \end{align} The energies of these states are given by \begin{align} E_{g}(t)=\frac{1}{2}(1-\Delta_{0}(t)), \end{align} \begin{align} E_{e}(t)=\frac{1}{2}(1+\Delta_{0}(t)). \end{align} Thus, the energy gap is $E_{e}(t)-E_{g}(t)=\Delta_{0}(t)$. From the expression for $\Delta_{0}(t)$, the energy gap scales as $O(2^{-L/2})$ at $t=T/2$. The numerator of the adiabatic condition term in Eq.~\eqref{eq1} for the adiabatic Grover search is $O(L^0)$ \cite{Roland_2002}. The annealing time $T$ should scale as $O(2^L)$ to satisfy the adiabatic condition. This means that the time necessary to find the solution by the adiabatic Grover search is the same as that of a classical search. However, Roland and Cerf showed that the same quadratic speedup as in Grover's algorithm can be attained by choosing an optimal scheduling function \cite{Roland_2002}. \section{General Framework} In this section, we show a general method to construct models where QA with a constant annealing time has an exponentially small success probability despite a constant energy gap.
Suppose we have a system where the transition matrix scales polynomially with $L$ and the energy gap becomes exponentially small with increasing $L$. Then, we show that, by adding a specific penalty term to the Hamiltonian, we can systematically construct a model with a constant energy gap where the transition matrix becomes exponentially large as the size $L$ increases. The adiabatic condition term can be written in the following form: \begin{align} \label{eq14} \eta&=\frac{|\braket{e(t)|\partial_{t}{\hat H(t)}|g(t)}|}{\Delta(t)^2}\nonumber\\ &=\left[\frac{|\braket{e(t)|\partial_{t}{\hat H(t)}|g(t)}|}{\Delta(t)}\right]\frac{1}{\Delta(t)}. \end{align} The square bracket on the right-hand side of Eq.~(\ref{eq14}) yields \begin{align} \label{eq15} \frac{|\braket{e(t)|\partial_{t}{\hat H(t)}|g(t)}|}{\Delta(t)}=|\braket{e(t)|{d}/{dt}|g(t)}|. \end{align} Let us consider the following penalty term (to be added to the Hamiltonian $ H_{QA}(t)$): \begin{align} \label{eq16} H_{\rm pena}(t)= \sum^{N-1}_{i=0}C_{i}(t)\ket{E_{i}(t)}\bra{E_{i}(t)}, \end{align} where $N=2^L$ denotes the dimension of the Hilbert space and $\ket{E_{i}(t)}$ is the $i$-th eigenstate of $H_{QA}(t)$. The time-dependent coefficients $C_{i}(t)$ serve to shift the eigenenergies $E_{i}(t)$. In order to keep the energy gap constant during QA, we choose $C_{i}(t)$ as follows: \begin{align} C_{i}(t)=E_{i}(t=0)-E_{i}(t). \end{align} From Eq.~(\ref{eq15}), we see that $\frac{|\braket{e(t)|\partial_{t}{\hat H(t)}|g(t)}|}{\Delta(t)}$ is unchanged by adding the penalty term to the QA Hamiltonian, because the penalty term is diagonal in the energy eigenstate basis. This means that if we add the penalty term in order to open up an exponentially small energy gap, the transition matrix becomes exponentially large. Note also that the adiabatic condition term in Eq.~\eqref{eq14} carries an extra factor of $1/\Delta(t)$ relative to Eq.~\eqref{eq15}; by fixing the gap to a constant with the penalty term, we therefore achieve a speedup of $\Delta(t)$.
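The construction above can be verified on a small toy annealing path: adding the penalty of Eq.~\eqref{eq16} with $C_i(t)=E_i(0)-E_i(t)$ pins every eigenvalue to its $t=0$ value while leaving the instantaneous eigenstates, and hence the matrix element in Eq.~\eqref{eq15}, untouched. A sketch (the random $4\times 4$ Hamiltonians and the chosen time are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
Ra, Rb = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
H0, H1 = (Ra + Ra.T) / 2, (Rb + Rb.T) / 2

def H(t):
    """Toy annealing path for t in [0, 1]."""
    return (1 - t) * H0 + t * H1

E0 = np.linalg.eigvalsh(H(0.0))
t = 0.37  # arbitrary intermediate time
Et, Vt = np.linalg.eigh(H(t))

# Penalty term of Eq. (16): diagonal in the instantaneous eigenbasis,
# with coefficients C_i(t) = E_i(0) - E_i(t).
Hpen = Vt @ np.diag(E0 - Et) @ Vt.T
Ep, Vp = np.linalg.eigh(H(t) + Hpen)
# Eigenvalues are pinned to their t = 0 values; eigenvectors are unchanged
# (up to sign), so only the spectrum, not the states, is modified.
```

Since only the spectrum is modified, any matrix element between instantaneous eigenstates, including $\braket{e(t)|d/dt|g(t)}$, is exactly preserved.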
For example, let us consider a QA model where the scaling of $|\braket{e(t)|\partial_{t}{\hat H(t)}|g(t)}|$ ($\Delta(t)$) is given as $O(L^{0})$ ($O(2^{-L/2})$). In this case, as will be shown later, by adding the penalty term \eqref{eq16}, we change the energy gap $\Delta(t)$ from $O(2^{-L/2})$ to $O(L^{0})$ while we change the transition matrix $|\braket{e(t)|\partial_{t}{\hat H(t)}|g(t)}|$ from $O(L^{0})$ to $O(2^{L/2})$. Thus, we achieve a speedup of $2^{-L/2}$. \section{Adiabatic Grover search with penalty term} As a first example, we apply our theory to the adiabatic Grover search~\footnote{ It should be noted that, as mentioned in Sec.~II, the conventional adiabatic Grover search exhibits a quadratic speedup over the classical algorithm by choosing an optimal schedule. However, our current formalism is limited to the case of linear scheduling. We thus need to generalize our formalism to investigate such a case with a complicated schedule of QA, which will be discussed in more detail in a forthcoming paper.}. In particular, we show that the transition matrix becomes exponentially large in the adiabatic Grover search with the penalty term \eqref{eq16}. In this section, first, we analytically investigate the scaling of the adiabatic condition term in this model. Subsequently, we numerically show that the success probability of QA becomes exponentially small. \subsection{Scaling of adiabatic condition term} \begin{figure} \includegraphics[scale=0.45,bb=580 30 0 400]{eddtg.pdf} \caption{Plot of the adiabatic condition term $\eta$ for the adiabatic Grover search as a function of the normalized time $t/T$, where $L$ is the number of qubits.} \label{fig1} \end{figure} We consider the Hamiltonian (\ref{eq6}), and the penalty term is given by \begin{align} H_{\rm pena}(t)=-E_{g}(t)\ket{g(t)}\bra{g(t)}+\{1-E_{e}(t)\}\ket{e(t)}\bra{e(t)}, \end{align} where we use Eqs.~(\ref{eq10}) and (\ref{eq11}).
Also, the total Hamiltonian is written as follows: \begin{align} \label{eq18} H(t)&=H_{QA}(t)+H_{\rm pena}(t)\nonumber\\ &=\frac{1}{2}{\hat I}-\frac{1}{2}{\rm cos}\theta(t)\ \tilde{\sigma}^{z}-\frac{1}{2}{\rm sin}\theta(t)\ \tilde{\sigma}^{x}. \end{align} We can calculate the energy of the ground state $E'_{g}(t)$ (first excited state $E'_{e}(t)$) as $E'_{g}(t)=0$ ($E'_{e}(t)=1$) in the model \eqref{eq18}. Since we have $E'_{e}(t)-E'_{g}(t)=1$, the energy gap of our model does not depend on the number of qubits $L$. Fig.~\ref{fig1} shows the adiabatic condition term as a function of $t/T$. At $t=T/2$, the adiabatic condition term becomes exponentially large as $L$ increases. Let us analyze the scaling of the adiabatic condition term. From Eqs.~(\ref{eq10}), (\ref{eq11}) and (\ref{eq15}), we obtain \begin{align} \label{eq20} |\braket{e(t)|d/dt|g(t)}|=\frac{\sqrt{(1-2^{-L})2^{-L}}}{T(1+4\left(\frac{t}{T}\right)^2(1-2^{-L})-4\frac{t}{T}(1-2^{-L}))}. \end{align} At $t=T/2$, Eq.~({\ref{eq20}}) yields \begin{align} \label{eq21} |\braket{e(t)|d/dt|g(t)}|=\frac{1}{T}\sqrt{2^L-1}\sim O(2^{L/2}). \end{align} Since the energy gap is always constant due to the penalty term, the scaling of the adiabatic condition term is $\eta\sim$ $O(2^{L/2})$ at $t=T/2$. This indicates that the adiabatic condition term is improved by a factor of $\Delta_{0}^{-1}$ compared to the case without the penalty term. From Eqs.~(\ref{eq15}) and (\ref{eq21}), the transition matrix becomes \begin{align} \label{eq22} \braket{e(t)|\partial_{t}H(t)|g(t)}\sim O(2^{L/2}). \end{align} Hence, as expected from our general framework, the divergence of the adiabatic condition term in Fig.~\ref{fig1} stems from the exponentially large transition matrix \eqref{eq22}. As is well known, quantum many-body systems often exhibit a gap closing due to a quantum phase transition \cite{Santoro_2002, Heim_2015, Jorg_2010, Seki_2012}.
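Both scalings can be cross-checked from the closed-form mixing angle: Eqs.~\eqref{eq10} and \eqref{eq11} give $|\braket{e(t)|d/dt|g(t)}| = |\dot{\theta}(t)|/2$, which at $t=T/2$ reproduces $\sqrt{2^{L}-1}/T$ of Eq.~\eqref{eq21}, while the unpenalized gap obeys $\Delta_{0}(T/2)=2^{-L/2}$. A numerical sketch with $T=1$ (the finite-difference step is an illustrative choice; \texttt{arctan2} is used to keep $\theta(t)$ continuous):

```python
import numpy as np

L = 10  # number of qubits (illustrative)
eps = 2.0 ** (-L)

def theta(s):
    """Mixing angle at reduced time s = t/T (here T = 1)."""
    c = 1 - 2 * (1 - s) * (1 - eps)                # Delta_0 cos(theta)
    sn = 2 * (1 - s) * np.sqrt(eps * (1 - eps))    # Delta_0 sin(theta)
    return np.arctan2(sn, c)

# |<e(t)| d/dt |g(t)>| = |dtheta/dt| / 2 at t = T/2, by central differences.
h = 1e-8
matel = abs(theta(0.5 + h) - theta(0.5 - h)) / (2 * h) / 2
expected = np.sqrt(2.0 ** L - 1.0)  # Eq. (21) with T = 1

gap_min = np.sqrt((1 - 2 * 0.5) ** 2 + 4 * eps * 0.5 * 0.5)  # Delta_0(T/2)
```

The matrix element thus grows as $2^{L/2}$ while the unpenalized gap shrinks as $2^{-L/2}$, which is exactly the trade-off exploited by the penalty term.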
In the adiabatic Grover search as well, the competition between the driver Hamiltonian and the problem Hamiltonian causes a first-order quantum phase transition from the paramagnetic phase to the ferromagnetic phase. The energy gap at this phase transition point becomes exponentially small as we increase the size $L$ and vanishes in the thermodynamic limit $L\rightarrow \infty$. On the other hand, the energy gap in our model with the penalty term does not close in the thermodynamic limit, which may appear to indicate that the first-order phase transition is avoided by adding the penalty term. To check whether the first-order phase transition exists, we analyze the total magnetization as the order parameter. This analysis is essentially equivalent to the mean-field analysis of a $p$-spin model for $p\rightarrow \infty$ \cite{Jorg_2010}. Unlike Ref. \cite{Jorg_2010}, however, we directly obtain the magnetization by using the ground state of our model (\ref{eq18}). The total magnetization of the ground state is given by \begin{align} \label{eq23} \braket{g(t)|\sum_{i=1}^{L}{\hat \sigma^{z}_{i}}|g(t)}={\rm cos}^{2}\frac{\theta(t)}{2}-\frac{1}{2^{L}-1}{\rm sin}^{2}\frac{\theta(t)}{2}, \end{align} where ${\hat \sigma}^{z}$ is the $z$ component of the Pauli matrix, ${\hat \sigma}^{z}=\ket{\uparrow}\bra{\uparrow}-\ket{\downarrow}\bra{\downarrow}$. The existence of the first-order phase transition manifests itself as a discontinuity of the magnetization at $t=T/2$ in the thermodynamic limit. Taking the thermodynamic limit $L \rightarrow \infty$ of Eq. (\ref{eq23}) gives \begin{align} \lim_{L \to \infty}\left({\rm cos}^{2}\frac{\theta(t)}{2}-\frac{1}{2^{L}-1}{\rm sin}^{2}\frac{\theta(t)}{2}\right)&=\frac{1}{2}+\frac{1}{2}\lim_{L \to \infty}{\rm cos}\theta(t) \nonumber\\ &=\frac{1}{2}-\frac{1-2(t/T)}{2|1-2(t/T)|}.
\end{align} Thus, we obtain \begin{align} \lim_{t \to \frac{T}{2}-0}\lim_{L \to \infty}\braket{g(t)|\sum_{i=1}^{L}{\hat \sigma^{z}_{i}}|g(t)}=0, \end{align} \begin{align} \lim_{t \to \frac{T}{2}+0}\lim_{L \to \infty}\braket{g(t)|\sum_{i=1}^{L}{\hat \sigma^{z}_{i}}|g(t)}=1. \end{align} Therefore, in our model, the first-order phase transition occurs at $t=T/2$ even though the gap is $O(L^{0})$. \subsection{Numerical analysis} \begin{figure*}[!t] \includegraphics[scale=0.7,bb=680 30 0 500]{Ene_Fide.pdf} \caption{The energy spectrum and the fidelity for the adiabatic Grover search as a function of $t/T$. The energy diagram for $L=10$ (a) without the penalty term and (b) with the penalty term. The fidelity (c) without the penalty term and (d) with the penalty term. We set the annealing time $T=20$.} \label{fig2} \end{figure*} \begin{figure}[t] \includegraphics[scale=0.35, bb=750 0 0 400]{Gap_Fide.pdf} \caption{The scaling of (a) the energy gap at $t=T/2$ and (b) the fidelity at $t=T$ for the adiabatic Grover search. The purple line and the green line correspond to the cases without and with the penalty term, respectively. We set the annealing time $T=20$.} \label{fig3} \end{figure} To investigate the effect of the exponential increase of the transition matrix on QA, we perform numerical calculations. Fig. \ref{fig2}(a) and (b) show the energy spectrum. In the case without the penalty term, the energy gap becomes minimum at $t=T/2$. On the other hand, in the case with the penalty term, the energy gap is constant and does not depend on the number of qubits $L$, as shown in Fig. \ref{fig3}(a). We numerically solve the Schr\"{o}dinger equation and obtain a state $\ket{\Psi_{0}(t)}$. We show the fidelity as a function of $t$ in Fig. \ref{fig2}(c) and (d), where we define the fidelity as $|\braket{\Psi_{0}(t)|g(t)}|^2$. As $L$ increases, the fidelity decreases more rapidly around $t=T/2$ due to the non-adiabatic transition to the first excited state. Fig.
\ref{fig3}(a) and (b) show the scaling of the energy gap at $t=T/2$ and of the fidelity at $t=T$, respectively. In the case without the penalty term, the energy gap becomes exponentially small. As mentioned in Section II, the scaling of the transition matrix is $O(L^0)$ in the case without the penalty term. Therefore, the exponential decrease of the fidelity (the purple line in Fig. \ref{fig3}(b)) originates from the gap closing at $t=T/2$. On the other hand, in the case with the penalty term, the energy gap does not depend on $L$. Thus, the decrease of the fidelity (the green line in Fig. \ref{fig3}(b)) originates from the exponential increase of the transition matrix. We note that, in Fig. \ref{fig3}(b), the fidelity with the penalty term is larger than that without the penalty term. This behavior is consistent with our general framework, which shows that the adiabatic condition term with the penalty term is smaller than that without it. \section{Ferromagnetic $p$-spin model} \begin{figure*}[!t] \includegraphics[scale=0.7,bb=680 30 0 500]{pspin.pdf} \caption{The energy spectrum and the fidelity as a function of $t/T$ for the ferromagnetic $p$-spin model. The energy diagram for $L=40$ (a) without the penalty term and (b) with the penalty term. The fidelity (c) without the penalty term and (d) with the penalty term. We set the annealing time $T=20$ and also set $p=5$.} \label{fig4} \end{figure*} \begin{figure}[t] \includegraphics[scale=0.35, bb=750 0 0 400]{Gap_Fide_pspin.pdf} \caption{(a) The minimum of the energy gap and (b) the scaling of the fidelity at $t=T$ for the ferromagnetic $p$-spin model. The purple line and the green line denote the cases without and with the penalty term, respectively. We set the annealing time $T=20$ and also set $p=5$.} \label{fig5} \end{figure} As a second example, we apply our theory to the ferromagnetic $p$-spin model \cite{Jorg_2010, Seki_2012, Bapst_2012}.
The problem Hamiltonian is given by \begin{align} H_{p}=-L\left(\frac{1}{L}\sum^{L}_{i=1}\hat{\sigma}^{z}_{i}\right)^{p}. \end{align} We use the transverse field as the driver Hamiltonian, given by $H_{d}=-\sum^{L}_{i=1}\hat{\sigma}^{x}_{i}$. The Hamiltonian of QA is given as follows: \begin{align} H_{QA}(t)=\frac{t}{T}H_{p}+\left(1-\frac{t}{T}\right) H_{d}. \end{align} The penalty term is given as \begin{align} H_{\rm pena}(t)=\sum_{i=0}^{2S}\{E_{i}(t=0)-E_{i}(t)\}\ket{E_{i}(t)}\bra{E_{i}(t)}, \end{align} where $S$ is the maximum angular momentum and $\ket{E_{i}(t)}$ ($E_{i}(t)$) is the eigenstate (eigenenergy) of $H_{QA}(t)$. We numerically diagonalize $H_{QA}(t)$ at each time and construct the penalty term. The total Hamiltonian is given as \begin{align} H(t)=H_{QA}(t)+H_{\rm pena}(t). \end{align} The components of the total spin operator are $\hat{S}^{z}=\sum^{L}_{i=1}\hat{\sigma}^{z}_{i}/2$ and $\hat{S}^{x}=\sum^{L}_{i=1}\hat{\sigma}^{x}_{i}/2$. Since the total spin operator $\hat{\bm{S}}$ is conserved, we consider only the subspace of $2S=L$. The ferromagnetic $p$-spin model with a transverse field for $p=2$ ($p\geq 3$) has a second-order (first-order) phase transition point in QA \cite{Jorg_2010, Seki_2012, Bapst_2012}. Throughout this paper, we set $p=5$. Fig. \ref{fig4}(a) and (b) show the energy spectrum for $L=40$. The minimum of the energy gap is at $t/T\sim 0.463$ (Fig. \ref{fig4}(a)). We numerically solve the Schr\"{o}dinger equation and plot the fidelity without and with the penalty term in Fig. \ref{fig4}(c) and (d), respectively. As shown in Fig. \ref{fig4}(b), the energy gap is always constant in the case with the penalty term. At $t/T\sim 0.463$, however, the fidelity rapidly decreases as we increase the number of qubits (Fig. \ref{fig4}(d)). We plot the scaling of the minimum energy gap and of the fidelity at $t=T$ in Fig. \ref{fig5}(a) and (b), respectively.
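The reduction to the maximal-spin subspace makes this model cheap to diagonalize: $H_{QA}(t)$ becomes an $(L+1)\times(L+1)$ matrix. The following Python sketch (illustrative only; the grid resolution and the comparison sizes $L=20,40$ are our assumptions, not the settings used for the figures) builds the collective spin operators and scans for the minimum gap of $H_{QA}$ without the penalty term:

```python
import numpy as np

def collective_ops(L):
    """S^z and S^x in the maximal-spin subspace (S = L/2, dimension L+1)."""
    S = L / 2.0
    m = np.arange(S, -S - 1.0, -1.0)                 # m = S, S-1, ..., -S
    Sz = np.diag(m)
    # S^- |S,m> = sqrt(S(S+1) - m(m-1)) |S,m-1>
    low = np.sqrt(S * (S + 1) - m[:-1] * (m[:-1] - 1.0))
    Sminus = np.diag(low, -1)
    Sx = 0.5 * (Sminus + Sminus.T)
    return Sz, Sx

def min_gap(L, p, ns=2001):
    """Minimum gap of H_QA(s) = s*H_p + (1-s)*H_d and its location s = t/T."""
    Sz, Sx = collective_ops(L)
    Hp = -L * np.diag((2.0 * np.diag(Sz) / L) ** p)  # -L ((1/L) sum sigma^z)^p
    Hd = -2.0 * Sx                                   # -sum sigma^x
    best = (np.inf, None)
    for s in np.linspace(0.001, 0.999, ns):
        w = np.linalg.eigvalsh(s * Hp + (1.0 - s) * Hd)
        if w[1] - w[0] < best[0]:
            best = (w[1] - w[0], s)
    return best

gap40, s40 = min_gap(40, 5)
gap20, s20 = min_gap(20, 5)
print(s40, gap40)       # minimum near t/T ~ 0.46 for L = 40, p = 5 (cf. Fig. 4(a))
print(gap40 < gap20)    # first-order transition: the gap shrinks with L
```

The shrinking minimum gap with increasing $L$ reflects the first-order transition of the $p\geq 3$ model.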
As clearly seen, these results are consistent with the prediction of our general framework, i.e., despite the constant energy gap, the fidelity decays exponentially due to the exponential increase of the transition matrix in the case with the penalty term. In addition, the decrease of the fidelity with the penalty term is milder than that without the penalty term. \section{Conclusion} In conclusion, we propose a general method to construct models where QA with a constant annealing time fails despite a constant energy gap. In our framework, we choose a known model showing an exponentially small energy gap during QA and add a penalty term to the Hamiltonian. In the modified model, the transition matrix in the adiabatic condition term becomes exponentially large as the number of qubits increases, while the energy gap is constant. Based on our framework, we investigated two models as concrete examples: the adiabatic Grover search and the ferromagnetic $p$-spin model. In the adiabatic Grover search, we analytically showed that the transition matrix becomes exponentially large and that the magnetization has a discontinuity, i.e., a first-order phase transition occurs although the energy gap is always constant during QA. Moreover, in both the adiabatic Grover search and the ferromagnetic $p$-spin model, we numerically showed that the success probability decays exponentially due to the exponential increase of the transition matrix. Since it is believed that the computational speed of QA is limited by the energy gap, which appears in the denominator of the adiabatic condition term, our results challenge this common wisdom and will lead to a deeper understanding of QA performance. \begin{acknowledgments} We would like to thank Yuki Susa, Ryoji Miyazaki, Yuichiro Mori and Tadashi Kadowaki for insightful discussions. This paper was based on results obtained from a project, JPNP16007, commissioned by the New Energy and Industrial Technology Development Organization (NEDO), Japan.
This work was also supported by the Leading Initiative for Excellent Young Researchers, MEXT, Japan, and JST Presto (Grant No. JPMJPR1919), Japan. \end{acknowledgments}
\section{Introduction}\label{sec:intro} Soft X-ray transient (SXT) binary systems that occasionally go into episodes of high activity or {\em outbursts} are some of the best laboratories to study accretion flows near compact ($GM/R\sim c^2$) objects. Coordinated multi-wavelength campaigns have been carried out in recent years to observe SXTs simultaneously over a broad frequency range of nine decades or more from radio to X-rays \citep[see, e.g.,][for a review]{fender2006}. These campaigns have provided extremely valuable insights about the accretion in/outflows and helped to constrain the theoretical models predicting accretion geometry and physical emission processes. Outbursts of SXTs are usually explained in terms of a disc instability model \citep[see, e.g.,][for a review]{l2001}, postulating a dramatic increase in mass accretion via an accretion disc towards the central compact object. The luminosity of a black hole transient (BHT) can vary by as much as eight orders of magnitude between quiescence and peak outburst, within timescales of weeks to months. Such systems are also ideal for studying the formation of jets, which are a by-product of accretion, yet poorly understood so far. Matter spiralling into the gravitational potential well of the compact object forms an accretion disc, and radiates via dissipation of the accumulated gravitational potential energy. The observed UV and soft X-ray spectra of these sources, during epochs of high mass accretion rate, are well modeled by viscous dissipation from the inner regions of a geometrically thin, optically thick ``Shakura--Sunyaev disc'' \citep{ss1973}. However, in the optical through infrared (OIR), the viscously dissipating disc model sometimes significantly underpredicts the observed spectral energy distribution (SED), and the excess is attributed to irradiation of the outer disc by the intense X-ray radiation from the inner region close to the compact object.
This irradiation mechanism has been well explored theoretically \citep{c1976,vrtilek1990,kkb1996,d1999,d2001} and its effects have been observed at optical and near-IR wavelengths \citep[e.g.][]{vrtilek1990,vPM1994,m1995, hynes2002,hynes2005,r2006,mb2008,g2009}. The most prominent, observable effect of irradiation is to heat the outer regions of the accretion disc to temperatures higher than what would be possible solely by viscous dissipation of a standard Shakura-Sunyaev disc. This leads to a deviation in the shape of the SED of the accretion disc (from that of a standard disc), creating a `bump' at OIR wavelengths. As the quality of the OIR data improves, the deviation (i.e. the bump) from the Shakura-Sunyaev spectrum becomes more and more apparent, and the need to incorporate irradiation in the model becomes important. In Roche lobe overflowing BHTs where the accretor is at least 10 times heavier than the donor, X-ray irradiation of the donor is small. Furthermore, irradiation causes the outer disc to flare, shielding most of the donor star from irradiation \citep{m1978}. While most models assume an azimuthally symmetric accretion disc, warping of the accretion disc \citep[see, e.g.,][and references therein]{od2001} can introduce additional uncertainties in shielding the donor from irradiation. Not all the mass accreted inwards falls onto the compact object. Outflows, in the form of steady compact jets or transient, ballistic ejecta, are often seen in compact binary systems \citep[see, e.g.,][for a recent review]{fender2006}. Steady, compact jets have been observed from an increasing number of compact binary systems when they are in an {\em X-ray hard state} (see below). The hallmark signature of such a jet is its flat to slightly inverted spectrum which, following the classic work by \citet{bk1979}, is usually attributed to a superposition of self-absorbed synchrotron emission from segments of a collimated jet.
For stellar BHTs the flat spectrum of the jet is expected to extend from radio frequencies up to the IR \citep[$\sim 10^{13-14}$ Hz;][]{markoff2003}, beyond which the jet synchrotron emission becomes optically thin \citep[see, e.g.,][]{cf2002}. At X-ray frequencies, inverse Compton (IC) upscattering at the jet base of synchrotron as well as external photons from the accretion disc can become comparable and may even dominate the spectrum. As shown in \citet{mnw2005}, synchrotron and IC at the jet base can result in spectra very similar to those produced by compact Comptonizing coronal models \citep{p1998,c1999}. During the course of an outburst, the X-ray spectral and temporal features show an enormous amount of diversity. However, extensive observations and their analyses over the past few decades have enabled the classification of spectral and temporal features into relatively few groups known as {\em X-ray states} \citep[see, e.g.,][for extensive definitions and review of X-ray states]{mr2006,hb2005}. In particular, two canonical states are often seen: a ``soft'' state characterized by strong thermal emission from the accretion disc and low variability in the light curve, and a ``hard'' state characterized by non-thermal power law emission, high variability and probably a collimated outflow. Besides these canonical states, a host of intermediate and extreme states with varying ratios of thermal and non-thermal radiation are also seen. Recent OIR observations \citep[e.g.][]{mb2008,russell2008} covering complete outbursts of many XRBs have shown that while irradiation usually plays a dominant role in the evolution of the OIR spectrum during the entire outbursting phase, the contribution from a non-thermal source, e.g. a jet, can become dominant when XRBs (most notably black holes) are in the hard state. Thus any model which aims to understand disc-jet coupling must include irradiation.
We have developed a new spectral model which calculates the continuum emission from an irradiated accretion disc and, in conjunction with a modified version of the compact jet model developed by \citet{mnw2005}, allows us to constrain the geometrical and radiative properties not only of the jet, but simultaneously those of the accretion disc. The thermal photons from the irradiated disc are also included in the photon field of the jet for Compton scattering, although due to Doppler redshifting in the jet frame, this external Compton emission is usually much smaller than the synchrotron self-Compton (SSC) emission. The new model is fully integrated with the standard spectral analysis software ISIS \citep{houck2002} and parallelizable for use in large cluster computing environments (also see \S\ref{sec:parallel}). We have used this new model to analyze broad band quasi-simultaneous observations of two galactic BHT systems, XTE~J1118+480 and GX~339$-$4. Simultaneous or quasi-simultaneous observations of both sources, during the hard X-ray states of their outbursts, have revealed broad band continuum emission ranging from radio to X-rays, strongly suggesting the presence of jets \citep{mff2001,markoff2003,h2005,mnw2005}. By studying short time-scale variability at near-IR wavelengths from XTE~J1118+480, \citet{h2006} also suggested a non-thermal origin of the near-IR radiation, with a power law index typical of optically thin synchrotron emission from a jet. The known physical properties of the two sources XTE~J1118+480 and GX~339$-$4 are briefly summarized in \S\ref{sec:data}. Since the analysis of multi-wavelength data requires careful calibration of data from various ground-based as well as space-borne instruments, we also present the data reduction and calibration procedures in \S\ref{sec:data}.
A brief outline of the model, with somewhat more emphasis on the newly introduced irradiated disc component, is given in \S\ref{sec:model}, along with a description of the fitting procedure. The results of our broad band SED modeling suggest that despite the differences in size, mass, orbital parameters or accretion state, some of the fundamental physical parameters characterizing the geometrical and radiative properties of the jet are similar for both sources. This similarity may be a global property of jets formed near compact objects and is discussed in \S\ref{sec:discussion}. \section{Sources, Observations \& Data Reduction}\label{sec:data} XTE~J1118+480 and GX~339$-$4 are two prototype galactic BHTs with markedly different properties. Both binary systems contain high mass compact objects, $8.5\pm 0.6\ \rm{M}_\odot$ for XTE~J1118+480 \citep{gelino2006} and $>6\ \rm{M}_\odot$ for GX~339$-$4 \citep{munoz2008}, strongly suggesting that the compact objects are black holes in both cases. XTE~J1118+480 lies $62^\circ.3$ north of the galactic plane, in the halo. It has an orbital period of $4.04$ hours \citep{gelino2006}. The distance and interstellar column density to XTE~J1118+480 are reasonably well known from OIR photometry during quiescence to be $1.72\pm0.1$ kpc and $1.3\times10^{20}$ cm$^{-2}$, respectively \citep{gelino2006}. We adopted a black hole mass of $8.5\ \rm{M}_\odot$ for this source. The orbital inclination of XTE~J1118+480, as measured by \citet{gelino2006}, is $68\pm2$ degrees. GX~339$-$4 lies near the galactic plane, only $4^\circ.3$ south of the galactic equator, and has an orbital period of $42.14$ hours \citep{h2003}. We adopted a black hole mass of $7\ \rm{M}_\odot$ for GX~339$-$4 in this work. The secondary star in the GX~339$-$4 system has eluded detection so far, making estimates of the masses, distance or line-of-sight extinction difficult.
Here we have used a distance of 6 kpc and $N_{\rm H}=6\times10^{21}$ cm$^{-2}$ for GX~339$-$4, which are consistent with the limits given by \citet{h2003,h2004} and \citet{munoz2008}. \subsection{OIR and radio data}\label{sec:oirdata} XTE~J1118+480 went through an outburst during January--February 2005. A discussion of the OIR light curve morphology of the 2005 outburst of XTE~J1118+480 is presented by \citet{z2006}. It was noted by \citet{z2006} that the 5--12 keV / 3--5 keV hardness ratio from the {\em All Sky Monitor} \citep[ASM;][]{levine1996} onboard the {\em Rossi X-Ray Timing Explorer} \citep[RXTE;][]{brs1993} satellite suggested that the source was in an X-ray {\em hard} state during the entire outburst of 2005. For this source we used quasi-simultaneous optical (from the Liverpool telescope), near-IR (from UKIRT) and 15 GHz radio data (from the Ryle telescope). The full set of multi-wavelength observations obtained during this outburst of XTE~J1118+480 will be presented in Brocksopp et al. (in prep). XTE~J1118+480 went through an outburst in 2000 as well. During this outburst the source was extensively observed by many ground based telescopes as well as space-borne missions \citep[see e.g.][]{hynes2000,mcc2001,frontera2001, c2003}. In particular, the source was observed quasi-simultaneously around 2000 April 18 using the Ryle radio telescope, UKIRT, HST, EUVE, Chandra and RXTE, giving unprecedented broadband spectral coverage. This excellent data set has been used in several works \citep[e.g.][]{mff2001,esin2001, yuan2005} to infer properties of the source. We revisit this data set to constrain the geometry and radiative properties of both the jet and the disc using our new disc+jet model. We used the OIR and radio SED of GX~339$-$4 reported by \citet{h2005}, which was obtained during a multi-wavelength campaign to observe its outburst of 2002.
For all data sets we used the photometric OIR magnitudes and de-reddened them using the interstellar extinction law of \citet{ccm1989}. In the absence of a definitive measurement of the ratio of total to selective extinction ($R_V [=A_V/E(B-V)]$) in the direction of either of the sources, we have taken $R_V=3.1$, the widely used value measured by \citet{rl1985} towards the galactic center and standardly adopted for these sources. It must therefore be kept in mind that if the ``true'' value of $R_V$ is significantly different from $3.1$ (given that neither source is very close to the galactic center), it might introduce additional uncertainty in the reddening correction. \subsection{X-ray data}\label{sec:xraydata} For both sources, we used data from the {\em Proportional Counter Array} \citep[PCA;][]{jahoda2006} and the {\em High Energy X-Ray Timing Experiment} \citep[HEXTE;][]{roth1998} onboard the RXTE satellite. The X-ray data were downloaded from the NASA HEASARC's public archive\footnote{http://heasarc.gsfc.nasa.gov/docs/archive.html} and reduced using an in-house script following the standard reduction procedures outlined in the {\em RXTE cook book\footnote{http://heasarc.nasa.gov/docs/xte/recipes/cook\_book.html}} using HEASOFT software \citep[v6.1.2][]{arnaud96}. To avoid any terrestrial contamination, we considered data only for elevation angles of $10^{\circ}$ or higher. Any data with a pointing offset of more than $0^{\circ}.02$, taken within 30 minutes of an SAA passage, or with a trapped-electron contamination ratio of more than 0.1 were also rejected. Since both sources were bright (PCU2 count rate $>40$ counts/s/PCU), the model {\em pca\_bkgd\_cmbrightvle\_eMv20051128.mdl} was used for background subtraction. Data from all the layers of the active PCUs during the observations were extracted to create the spectra. A systematic uncertainty of 0.5\% was added in quadrature to the PCA data to account for uncertainties in the calibration.
For the HEXTE instrument, data from both clusters were added. Both PCA and HEXTE data were binned to achieve a minimum $S/N$ of 4.5. For the PCA we used the energy range of 3--22 keV and for HEXTE 20--200 keV. The normalization of the OIR and radio data was tied to that of the PCA. The HEXTE normalization was allowed to vary. \section{Model, Analysis and Results}\label{sec:model} The continuum jet model used in this work is based on \citet{mnw2005} (henceforth referred to as the {\tt agnjet} model). Given the importance of irradiation in the observed spectra of XRBs, we have modified the model to include additional physics (see \S\ref{sec:irrad} for details) that allows us to calculate the effect of irradiation on the outer disc due to high energy photons from (1) the jet, and (2) the inner regions close to the compact object, impinging on the disc. We also compute the inverse Compton scattering of thermal disc photons by the electrons in the jet plasma, although their effect on the total spectrum is negligible, except near the jet base, due to the Doppler redshifting of the disc photons in the jet frame. \subsection{Jet parameters}\label{sec:jetpars} The details of the physics of the jet model as well as a full description of its main input parameters are given in the appendix of \citet{mnw2005}. Therefore we only give a brief summary here, and outline the modifications made to the \citet{mnw2005} model: The main parameters that determine the properties of the jet are the input jet power ($N_j$), the electron temperature of the relativistic thermal plasma entering at the jet base ($T_e$), the ratio of magnetic to particle energy density (a.k.a. the equipartition factor $k$), the physical dimensions of the jet base (assumed to be cylindrical with radius $r_0$ and height $h_0$), and the location of the point on the jet ($z_{acc}$) beyond which a significant fraction of the leptons is accelerated to a power law energy distribution.
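For orientation, the lepton number density at the jet base, $n=N_{\rm j}L_{\rm Edd}/(4\gamma_{\rm s}\beta_{\rm s}m_{\rm p}c^{3}\pi r_{0}^{2})$ (see below), and the implied Thomson optical depth can be evaluated for representative parameter values. The short Python sketch below uses an assumed jet power of $N_j=10^{-2}$ in Eddington units and the XTE~J1118+480 black-hole mass; it is an order-of-magnitude illustration, not a fit result:

```python
import numpy as np

# Order-of-magnitude sketch of jet-base conditions (cgs units).
# N_j = 1e-2 is an assumed, fit-like value, NOT a result from this work.
M_BH    = 8.5                 # black-hole mass [M_sun], XTE J1118+480
L_edd   = 1.26e38 * M_BH      # Eddington luminosity [erg/s]
N_j     = 1.0e-2              # input jet power in Eddington units (assumed)
r0      = 1.0e7               # jet-base radius [cm]
m_p     = 1.6726e-24          # proton mass [g]
c       = 2.998e10            # speed of light [cm/s]
sigma_T = 6.652e-25           # Thomson cross-section [cm^2]

Gamma   = 4.0 / 3.0                               # adiabatic index
beta_s  = np.sqrt((Gamma - 1.0) / (Gamma + 1.0))  # sound speed ~ 0.38c
gamma_s = 1.0 / np.sqrt(1.0 - beta_s ** 2)        # bulk Lorentz factor at base

n   = N_j * L_edd / (4.0 * gamma_s * beta_s * m_p * c ** 3 * np.pi * r0 ** 2)
tau = n * sigma_T * r0
print(f"n ~ {n:.1e} cm^-3, tau ~ {tau:.1e}")      # ~ 5e14 cm^-3, ~ 3e-3
```

The resulting $\tau\ll 1$ is why single Compton scattering suffices in the model.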
$N_j$, parameterized in terms of the Eddington luminosity, determines the power initially input into the particles and magnetic field at the base of the jets. In the absence of a full understanding of the mechanism for energizing the jets, $N_j$ plays a similar role to the compactness parameter in thermal Comptonization models \citep{c1999}. A typical value of the number density of leptons ($n$) at the base of the jet (derived from fits) is $n=N_{\rm j}L_{\rm Edd}/(4\gamma_{\rm s}\beta_{\rm s} m_{\rm p} c^3 \pi r_{\rm 0}^2) \sim 10^{14-15}$ cm$^{-3}$. Here $L_{\rm Edd}$ is the Eddington luminosity of the source, $\beta_{\rm s}\sim 0.4$ is the sound speed at the base (see below), $\gamma_{\rm s}=(1-\beta_{\rm s}^2)^{-1/2}$, $m_{\rm p}$ is the proton rest mass, and $r_{\rm 0}\sim 10^7$ cm is the radius of the jet base, implying an optical depth $\tau=n\sigma_T r_{\rm 0}\sim 0.001-0.01$. Since the optical depth is small, the probability of multiple Compton scattering is very small and we only consider single scattering. For the non-thermal power law electrons, the lower cutoff in the Lorentz factor is assumed to be equal to the peak of the relativistic Maxwell--J\"uttner distribution, i.e. $\gamma_{e;min}=2.23 kT_e/(m_ec^2)$. Following the prescription of \citet{fb1995,f1996} for maximally efficient jets, we assume that the bulk speed of the plasma (with adiabatic index $\Gamma=4/3$) at the base of the jet is the sound speed given by $\beta_{\rm s}=\sqrt{(\Gamma-1)/(\Gamma+1)}\sim0.4$. We do not model the particle acceleration in the jet, but just start at the sonic point. Each jet then accelerates longitudinally along its axis due to the pressure gradient and expands laterally with its initial sound speed. The bulk velocity profile, magnetic field strength, electron density and electron Lorentz factor along the jet are calculated by solving the adiabatic, relativistic Euler equation. In previous versions of the jet model \citep[e.g.
in][]{mnw2005} we used the ratio of the scattering mean free path to the gyroradius of the particles in the plasma through which the shock moves, $f_{sc}$, as a model parameter. However, given the current lack of physical understanding of the actual shock propagation mechanism, in this paper we fixed the shock speed in the plasma ($\beta_{sh}$) to $0.6$ and used the quantity $\epsilon_{sc}=\beta_{sh}^2/f_{sc}$ as a model parameter instead of $f_{sc}$. The parameter $\epsilon_{sc}$ can be physically interpreted as a parametrization of the shock acceleration rate \citep[$t_{acc}\propto \epsilon_{sc}^{-1}$; see eq.1 of][]{mff2001}. The dominant cooling processes within the jet are synchrotron and synchrotron self-Compton radiation. If the inner edge of the accretion disc is sufficiently close to the base of the jet, then the soft photons from the accretion disc, acting as seed photons for external Comptonization, can also cool the particles in the jet. The jet continuum as observed by an observer on Earth is a superposition of spectra from individual segments along the jet axis (taken to be perpendicular to the accretion disc plane), calculated by solving the radiative transfer equation, after taking into account relativistic beaming effects. \subsection{The irradiated disc}\label{sec:irrad} Since the coupling between the disc and the jet is not well understood, we assume an independent classical thermal viscous ``Shakura-Sunyaev'' accretion disc, with a radial temperature profile $T(R)\propto R^{-3/4}$, modeled by the parameters $T_{\rm in}$ and $r_{\rm in}$, the temperature and radius at the inner edge of the disc. If the outer parts of the disc are irradiated, then as discussed below, it is further possible to constrain the temperature and radius at the outer edge of the disc.
Given the mounting evidence for the existence of relativistic outflows from black holes, we also explore the influence of the jet acting as a source of irradiation heating of the accretion disc. \subsubsection{Can jets influence the energetics of the accretion disc?}\label{sec:jet_irrad} We computed the radiative jet flux incident on the accretion disc as a function of radial distance in the disc plane, taking into account special relativistic beaming effects. In order to obtain an {\em upper limit} on the jet-induced heating of the accretion disc, we assumed that the entire incident radiative flux from the jet $f_{\nu}(R)$ is thermalized upon being absorbed by the disc, thus heating the disc locally at radius $R$ at a rate $H_{\rm jet}(R)$, given by \begin{equation} H_{\rm jet}(R) = \int_0^\infty f_{\nu}(R) \, d\nu . \label{jet_irrad_eqn} \end{equation} We calculated $H_{\rm jet}(R)$ at ten logarithmically spaced radii spanning from $R_{\rm in}$ to $R_{\rm out}$. The ratio of $H_{\rm jet}(R)$ to the local viscous heating ($H_{\rm visc}(R)$) is shown in Fig.~\ref{fig:jet_irrad}. At large radii ($R>100$ $R_g$), $H_{\rm jet}(R)$ has a $\sim R^{-2.4}$ radial profile, which is somewhat steeper than that expected from irradiation by a static corona or the inner accretion disc (for which the irradiation heating falls as $\sim R^{-1.6}$ to $R^{-2.0}$; see, e.g., \S\ref{sec:disc_irrad} and \citet{hynes2005}). However, the local viscous heating rate drops outward even faster ($H_{\rm visc}(R) \propto R^{-3}$), causing the ratio $H_{\rm jet}(R)/H_{\rm visc}(R)$ to slowly increase with increasing $R$. As the disc radius becomes small and approaches that of the jet base, the solid angle subtended by the jet increases rapidly, causing the jet heating to saturate. The viscous heating, however, continues to rise at smaller radii, causing $H_{\rm jet}(R)/H_{\rm visc}(R)$ to drop there.
However, as evident from Fig.~\ref{fig:jet_irrad}, even the maximum possible jet-induced heating of the disc is about seven orders of magnitude smaller than the viscous heating. This confirms that relativistic beaming causes only a very small fraction of the flux emitted by the jet to fall back on the disc, and the resulting influence of the jet on the energetics of the disc is negligible. Light bending effects near the black hole can enhance the flux incident on the disc, but by not more than a few tens of percent of the numbers calculated above (which assume a Minkowski metric instead of a more realistic Kerr or Schwarzschild metric). \subsubsection{An irradiation source near the compact object}\label{sec:disc_irrad} Analysis of the OIR light curves of XTE~J1118+480 by \citet{z2006} suggests that there might have been a significant thermal contribution to the optical flux. The near-IR flux, however, is reported to be significantly non-thermal in nature, based on studies of rapid time variability in the light curve as well as the nearly flat IR SED \citep{z2006,h2006}. As discussed in the previous paragraph and shown in Fig.~\ref{fig:jet_irrad}, the jet's contribution to heating the disc is negligible; we have therefore added an irradiation heating term due to a more static source of irradiating X-rays near the black hole. The physical origin of this irradiating source could be the inner accretion disc near the black hole. Since we assume a radial temperature profile and do not solve the local disc structure \citep[which in itself is a detailed MHD problem, see, e.g.,][and references therein]{hbk2007}, our model cannot discern the source of the irradiating X-rays. The temperature due to irradiation heating ($T_{\rm irrad}$) is usually assumed to have a power law radial dependence of the form $R^{-n}$.
Depending on the initial assumptions about disc structure and the geometry of the irradiating source, theoretical models predict $n$ to be in the range of 0.4--0.5 \citep{vrtilek1990,kks1997,d1999}. Given the uncertainties in de-reddening and the scarcity of data points in the OIR, the exact choice of $n$ is not vital for a global understanding of the broad-band SED. We used $n = 3/7$ (vertically isothermal disc with disc height $h\propto r^{9/7}$), mainly for consistency with earlier work \citep[e.g. by][]{vrtilek1990,hynes2002,hynes2005}. We assume that the total disc heating is a sum of local viscous heating and heating due to a static source of irradiation near the black hole. The effective temperature of the disc ($T_{\rm eff}$) is therefore related to the viscous temperature ($T_{\rm visc}$) and the irradiation temperature ($T_{\rm irrad}$) as follows: \begin{equation} T_{\rm eff}^4(R) = T_{\rm visc}^4(R) + T_{\rm irrad}^4(R){\rm ,} \end{equation} where $T_{\rm visc} = T_{\rm in}(R/R_{\rm in})^{-3/4}$ and $T_{\rm irrad}= T_{\rm out}(R/R_{\rm out})^{-3/7}$, with $T_{\rm in/out}$, $R_{\rm in/out}$ as free parameters. Other parameters, viz. the masses, inclination and distance to the binary system, are taken from values published elsewhere and kept fixed during the fitting. The radius $R_{\rm irrad}$ where irradiation heating becomes equal to viscous heating can be estimated by setting $T_{\rm visc}(R)=T_{\rm irrad}(R)$, which gives \begin{equation} R_{\rm irrad}=\left[ (T_{\rm in}/T_{\rm out})\ R_{\rm out}^{-3/7}/R_{\rm in}^{-3/4}\right]^{28/9}.
\end{equation} Since viscous dissipation dominates for $R < R_{\rm irrad}$, the disc height at $R_{\rm irrad}$ is estimated from the Shakura-Sunyaev $\alpha$-disc solution \citep[see e.g.][]{ss1973, fkr2002} as \begin{equation} H \ (R_{\rm irrad})=1.7\times 10^8 \ \alpha^{-1/10} \ \dot M_{16}^{3/20} (M/{M}_\odot)^{-3/8} \ R_{\rm irrad;10}^{9/8} \ f^{3/5} \label{eq:H_irrad} \end{equation} where $\dot M_{16}=\dot M/(10^{16} \ {\rm g/s})$, $R_{\rm irrad;10}=R_{\rm irrad}/(10^{10} \ {\rm cm})$, $f=[1-2GM/(c^2 R_{\rm irrad})]^{1/4}$, and we have chosen $\alpha=0.05$. Similarly, since $H\propto R^{9/7}$ when $R > R_{\rm irrad}$, the disc height at $R_{\rm out}$ is given by \begin{equation} H \ (R_{\rm out}) = H(R_{\rm irrad})\times(R_{\rm out} / R_{\rm irrad})^{9/7}. \label{eq:H_out} \end{equation} The solid angle subtended by the outer, irradiation dominated disc as seen from the compact object is computed from equations~\ref{eq:H_irrad} and~\ref{eq:H_out}. The {\tt agnjet} model computes the broadband continuum from the jet and the irradiated disc but not line emission or reflection features. Therefore we used an additional {\tt Gaussian} line profile to fit the iron K$\alpha$ emission complex near 6.5 keV and the convolution model {\tt reflect} \citep{mz1995} for the excess in the range of 10--30 keV, both features being usually attributed to reflection by the relatively cold accretion disc. The column density, taken from previously published values, was fixed during the fits. For the {\tt reflect} model, which accounts for reflection from neutral material in the accretion disc, we used solar abundance and tied the inclination to that of the disc (and also the jet axis). The energy of the iron line in the {\tt Gaussian} model was allowed to vary between 6--7 keV and its width was fixed to 0.5 keV. The spectral coverage, along with the column density ($N_{\rm H}$) towards the sources, distances, and masses of the compact objects, are given in Table~\ref{tab:data}.
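The disc-height estimates of equations~\ref{eq:H_irrad} and \ref{eq:H_out} can be evaluated with a minimal numerical sketch. The input values below (black hole mass, accretion rate, radii) are illustrative placeholders, not fit results from this work:

```python
# Minimal numerical sketch of the disc scale-height estimates.
# All input values below are illustrative placeholders, not fit results.
G = 6.674e-8          # gravitational constant, cgs
c = 2.998e10          # speed of light, cm/s
M_SUN = 1.989e33      # solar mass, g

def disc_height_irrad(alpha, mdot, m_bh, r_irrad):
    """Shakura-Sunyaev alpha-disc scale height at R_irrad, in cm."""
    mdot16 = mdot / 1e16                      # Mdot in units of 10^16 g/s
    r10 = r_irrad / 1e10                      # R in units of 10^10 cm
    f = (1.0 - 2.0 * G * m_bh / (c**2 * r_irrad))**0.25
    return (1.7e8 * alpha**-0.1 * mdot16**0.15
            * (m_bh / M_SUN)**-0.375 * r10**1.125 * f**0.6)

def disc_height_out(h_irrad, r_irrad, r_out):
    """Extrapolate to R_out using H proportional to R^(9/7)."""
    return h_irrad * (r_out / r_irrad)**(9.0 / 7.0)

# Illustrative case: 8 M_sun black hole, Mdot = 10^16 g/s, alpha = 0.05,
# R_irrad = 10^10 cm, R_out = 10^11 cm
h_ir = disc_height_irrad(0.05, 1e16, 8 * M_SUN, 1e10)
h_out = disc_height_out(h_ir, 1e10, 1e11)
print(h_ir, h_out)
```

For these placeholder values the inner height comes out near $10^8$ cm, i.e. $H/R\sim 0.01$, as expected for a thin disc.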
\subsection{Model fits}\label{sec:parallel} Spectral fittings were performed using ISIS \citep[v1.4.9-4;][]{houck2002}. Evaluation of confidence intervals for any physical model is computationally expensive, and this is true for the {\tt agnjet} model as well. Therefore we have used the parallelization technique described in \citet{n2006} and distributed the task of computing confidence intervals\footnote{Using our own custom versions of the {\tt cl\_master} and {\tt cl\_slave} scripts described at http://space.mit.edu/cxc/isis/single\_param\_conf\_limits.html} for $f$ free parameters over $f/2$ or $(f+1)/2$ nodes (depending on whether $f$ is even or odd; each node is an Intel Xeon 3.4GHz processor) of a Parallel Virtual Machine \citep{Geist:1994:PPV} running on the LISA cluster in Almere, Netherlands. This reduced the runtime to under 24 hours, a greater than 75\% speedup; it also allowed us to discern finer features in parameter space by increasing the tolerance resolution by a factor of 500, while keeping the overall runtime to approximately half the serial runtime of 96 hours. While these are welcome improvements, reducing the model runtime even further would enable us to analyze new data more quickly. For this reason we are experimenting with ways of utilizing more than one processor per parameter, such as partitioning parameter space more finely, and exploring the use of OpenMP \citep{OpenMP.98} to parallelize internal loops within the model source code. Using the distributed parallel computing technique described above, we determined the best-fit parameter values as well as their confidence intervals. Our confidence intervals correspond to $\Delta \chi^2 = 2.71$ for a given parameter (which for a normal distribution would imply a $90\%$ single-parameter confidence limit).
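The work-distribution scheme described above can be sketched as follows. This is a hedged sketch of the partitioning logic only, not the actual ISIS/PVM {\tt cl\_master}/{\tt cl\_slave} scripts: Python's \texttt{ThreadPoolExecutor} stands in for the PVM nodes, and the parameter names and the \texttt{conf\_interval()} placeholder are purely illustrative.

```python
import math
from concurrent.futures import ThreadPoolExecutor

# Sketch of the work distribution only: f free parameters are handed out two
# per node, i.e. over f/2 or (f+1)/2 workers depending on whether f is even
# or odd. ThreadPoolExecutor stands in for the PVM nodes.
def partition_parameters(params):
    """Group the free parameters two per worker."""
    return [params[i:i + 2] for i in range(0, len(params), 2)]

def conf_interval(param):
    """Placeholder for the expensive per-parameter Delta-chi^2 = 2.71 search."""
    return (param, "90% interval")

def distributed_conf_limits(params):
    groups = partition_parameters(params)
    assert len(groups) == math.ceil(len(params) / 2)   # f/2 or (f+1)/2 workers
    with ThreadPoolExecutor(max_workers=len(groups)) as pool:
        per_group = pool.map(lambda g: [conf_interval(p) for p in g], groups)
        return [r for group in per_group for r in group]

# Illustrative parameter names, not the actual free-parameter list of a fit
results = distributed_conf_limits(["N_j", "p", "T_e", "r_0", "z_acc"])
print(len(results))  # prints 5: one interval per free parameter
```

Since each per-parameter search is independent, this embarrassingly parallel split is what makes the near-linear speedup quoted above possible.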
\subsubsection{XTE~J1118+480 during 2005 outburst} Of the three data sets presented in this work, the data from the 2005 outburst of XTE~J1118+480 were the faintest and had a steep power law slope in the X-rays. The flux in the HEXTE bands was extremely low for this data set, and while regrouping the data we found that no flux with S/N $>4.5$ could be detected above 60 keV. Since the single point in the radio frequency does not constrain the length of the jet, we fixed it to $10^{13.1}$ cm, which is a lower limit on the radiative length of the jet assuming that the radio data point is still in the optically thick synchrotron regime with a flat-to-slightly-inverted spectral index. While the fits require a high fraction of non-thermal electrons in the post-shock jet ($>0.5$), they are not very sensitive to the exact value of this fraction, and we fixed its value to 0.75. The data cannot be described by a non-thermal jet continuum alone, and an excess in the OIR data requires additional flux which we modeled using the irradiated accretion disc model described in \S\ref{sec:irrad}. This thermal excess in the optical has also been suggested from the OIR coverage of the outburst by \citet{z2006} and \citet{h2006}. The X-ray data (3--60 keV) can be well fit ($\chi^2/\nu=88.8/55$) with only a power law; however, systematic residuals in the 5--7 keV range suggest that a weak iron line may be present. Adding a Gaussian line to the power law improves the fits to $\chi^2/\nu=79.5/52$, with the line center near 6.6 keV and width of 0.1 keV. Given the weak strength of the line, we fixed the line energy as well as the width to these values, and did not use any reflection model when modeling the broad-band SED of XTE~J1118+480. Fits performed by allowing $r_{\rm in}$ and $T_{\rm in}$ of the accretion disc to vary freely result in good fits with quite small values of both $r_{\rm in}$ and $T_{\rm in}$.
There are uncertainties in our understanding of the physical properties of the innermost regions of the disc close to the ISCO, in the dereddening of interstellar extinction, and also in the instrumental response and calibration. These concerns make the detection of a cold disc with small $r_{\rm in}$, $T_{\rm in}$, and its physical origin, open to debate \citep{mhm2006,miller2006,rykoff2007,rs2007,g2008,dangelo2008,reis2009}. Another issue with the XTE~J1118+480 data is the jet/disc inclination. While we usually fix the inclination to a value published elsewhere, the extremely flat radio to IR spectrum of XTE~J1118+480 requires a much smaller inclination than the published value of $\sim 70^\circ$ \citep[see, e.g.,][]{gelino2006,c2003,m2001,z2002}. Allowing the inclination to vary for this source results in a jet inclination of $\sim25^\circ$. Therefore we tried two different models for this data set of XTE~J1118+480. In the first model (Model 1; second column in Table~\ref{tab:fits}), apart from the other regular parameters described above, the jet/disc inclination as well as $r_{\rm in}$ and $T_{\rm in}$ were allowed to vary freely. In the second, somewhat more conservative model (Model 2; third column in Table~\ref{tab:fits}), the jet inclination was fixed to $30^\circ$, and $r_{\rm in}$ and $T_{\rm in}$ were fixed to $30$ $R_g$ and $0.1$ keV respectively. The second model also gives jet parameters similar to those of the first model, although with a slightly higher jet power and a steeper electron distribution index. The second model also favors a somewhat smaller and hotter outer disc edge. \subsubsection{XTE~J1118+480 during 2000 outburst}\label{sec:j1118_2000} We used the broadband SED data of XTE~J1118+480 obtained around 2000 April 18, published in \citet{mcc2001} and \citet{hynes2000}. While analyzing this data set, a broad dip in the combined EUVE and Chandra spectra was noted between 0.15--2.5 keV (see e.g.
the residuals in Fig.~\ref{fig:j1118_2000fit}) which cannot be modeled using the standardly used column density for this source ($1.3\times 10^{20}\ {\rm cm}^{-2}$). This feature has been noted in previous works as well, and is attributed to metal absorption in a partially ionized gas \citep[see e.g.][]{esin2001}. We have therefore excluded the energy range between 0.15--2.5 keV from our broadband fitting. Results of the fitting indicate that during this observation emission from the jet dominates the radio and IR regions. The contribution from the accretion disc starts becoming dominant in the optical. However, as discussed in \citet{hynes2000}, the EUVE fluxes are extremely sensitive to the extinction law, and the slope of the dereddened EUVE SED can range between $+2 \geq \alpha \geq -4$ ($F_{\nu}\sim\nu^{\alpha}$) for the permissible range of N$_{\rm H}$. Here we have used the EUVE fluxes reported by \citet{hynes2000} and \citet{mcc2001} and a column depth of N$_{\rm H}=1.3\times10^{20}$ cm$^{-2}$, which results in a very steep EUVE slope. This cutoff in the EUVE data would imply a cold, truncated inner disc with r$_{\rm in}\sim 340$ R$_g$ and T$_{\rm in}\sim 2.9\times 10^5\ {\rm K}$. Given the low temperature of the inner disc and the consequent absence of irradiating soft X-ray photons, it is not unexpected that we do not find any signature of irradiation in this case (i.e. T$_{\rm out}$ is not constrained). As in the case of the 2005 data of this source, the X-ray emission in this model is entirely dominated by optically thin synchrotron emission from the jet. While the fits require a small nozzle ($h_0/r_0 \sim 0.2-0.6$), they are not very sensitive to the exact value of this parameter, and we fixed its value to 0.4. We note that this value is about a factor of three smaller than what is typically found in other sources, or even during the 2005 outburst of this source.
\subsubsection{GX~339$-$4 during 2002 outburst}\label{sec:gx339} The X-ray data of GX~339$-$4 cannot be well fit by a power law alone. Using a simple power law + Gaussian model gives an unacceptable fit to the data, with a reduced $\chi^2$ of 5.9. Convolving the power law + Gaussian with a reflection model gives better fits, but still with a rather large reduced $\chi^2$ of 2.3. We found that the GX~339$-$4 data can be fitted well without including any thermal component, and the entire broadband SED is well described by the superposition of synchrotron and inverse Compton photons from the pre-shock region near the jet base as well as optically thick synchrotron emission from post-shock regions farther from the jet base. From the radio-IR slope, the required inclination is between 45--50 degrees, but it is not well constrained and hence was fixed at 47 degrees. The length of the jet was fixed to $10^{14.2}$ cm and the fraction of non-thermal electrons was fixed to 0.75, for the same reasons given for XTE~J1118+480. Since the spectrum of GX~339$-$4 could be entirely dominated by jet emission, and the source was in a bright, hard X-ray state \citep{h2005}, the input jet power is much larger than that of XTE~J1118+480. In fact the jet power for this observation is larger than for any of the previous observations of this source \citep{markoff2003,mnw2005}. The acceleration region also starts much farther out for the GX~339$-$4 data set. If the temperature of the thermal electrons ($T_{\rm e}$) at the base is left completely free, as in ``Model 1'' (fifth column in Table~\ref{tab:fits}), then $T_{\rm e}$ for the best-fit model comes out to be rather high, and suggests that the SED at energies $>10$ keV is almost entirely due to SSC emission from the base (see top two panels in Fig.~\ref{fig:gx339fit}).
The geometry of the base is rather compact and the source is in a bright hard state; therefore a strong SSC emission could imply that the photon density at the base becomes high enough for pair processes to become important. Since pair production is not included in the model, we estimated the importance of pair processes for the best-fit models as follows: the pair production rate ($\dot n_{pp}$) was calculated from the photon field at the base of the jet using the angle-averaged pair production cross section \citep{gs1967,bs1997}; the pair annihilation rate ($\dot n_{pa}$) was calculated following \citet{svn1982}. For ``Model 1'' of GX~339$-$4, $\dot n_{pp}\sim 10^{17}$ cm$^{-3}$~s$^{-1}$ and $\dot n_{pa}\sim 10^{12}$ cm$^{-3}$~s$^{-1}$, suggesting that pair processes could be important in this case. Therefore we explored the parameter space, limiting the electron temperature to be less than $5\times 10^{10}$ K and allowing a somewhat less compact base (larger r$_{\rm 0}$, h$_{\rm 0}$), to search for solutions which also satisfy the condition $\dot n_{pp}<\dot n_{pa}$ (see ``Model 2'', sixth column in Table~\ref{tab:fits} and bottom panels in Fig.~\ref{fig:gx339fit}). The fit-derived number density ($n$), $\dot n_{pa}$ and $\dot n_{pp}$ for the different models presented in Table~\ref{tab:fits} are given in Table~\ref{tab:pairs}. In addition to the continuum jet model, an additional Gaussian line centered at 6.3 keV and reflection from the accretion disc are needed to obtain good fits for the GX~339$-$4 data. The presence of the line and reflection suggests the presence of an accretion disc. However, as in the case of XTE~J1118+480, the disc is cold and therefore not detected by the PCA. The best-fit model parameters, associated confidence intervals, $\chi^2$ statistic and corresponding chance probabilities for both sources are shown in Table~\ref{tab:fits}.
Given the simplicity of the model, it is hardly surprising that the reduced $\chi^2$ values obtained from the fitting are not very close to 1 in some cases. For instance, the viscous+irradiated disc model does not include details of the disc atmosphere or ionization balance equations, and is therefore not able to predict the spectral breaks in the optical data. Because of its simplicity, this is not meant to be a realistic model of the disc, although it incorporates the dominant ongoing physical processes and can reproduce the general shape of the broadband continuum. Moreover, while fitting a data set spanning $\sim$10 decades in frequency (or energy) space, the long lever arm between radio and X-rays constrains the jet model more strongly than the overall $\chi^2$ value alone would suggest \citep[see e.g.][]{markoff2008}. Best fit models along with data and residuals are shown in Figures~\ref{fig:j1118_2005fit}, \ref{fig:j1118_2000fit} and~\ref{fig:gx339fit}. \section{Discussion and Conclusions}\label{sec:discussion} We have analyzed broad band multi-wavelength observations of the galactic black hole transient systems XTE~J1118+480 and GX~339$-$4: during the 2000 and 2005 outbursts of XTE~J1118+480, and during the 2002 outburst of GX~339$-$4. The conclusions from our study are summarized below. The data for XTE~J1118+480 cannot be fit well with a jet continuum model alone, due to an excess in the OIR fluxes. During these observations the source was at least 3.5 magnitudes brighter in the OIR than in quiescence; therefore the contribution from the M1 V secondary star \citep[][and references therein]{c2003} is small. Modeling the data from the 2000 and 2005 outbursts of this source, we found that the outer disc was strongly irradiated in the 2005 data. It appears that the inner edge of the accretion disc was much closer to the central black hole during the observations of 2005, thus providing an ample source of soft X-ray photons to irradiate the outer disc.
On the other hand, the sharp cutoff in the EUVE data and the absence of any signature of irradiation from the outer disc suggest a recessed, cold accretion disc for the 2000 data. However, as discussed in \S\ref{sec:j1118_2000}, the slope of the EUVE data depends very strongly on the assumed column depth, and a small decrease in N$_{\rm H}$ can flatten the EUVE SED significantly \citep[see e.g.][]{hynes2000}. Such a flat EUVE SED would be expected from a disc with small r$_{\rm in}$ and comparatively higher T$_{\rm in}$ \citep{reis2009}. Our calculations in \S\ref{sec:jet_irrad} show that at any point on the surface of the accretion disc, the heating of the disc by the jet is seven or more orders of magnitude smaller than the viscous heating. Two factors largely reduce the amount of jet radiation incident on the disc. The first is Doppler de-beaming: since the bulk motion of the jet is relativistic, special relativistic Doppler beaming concentrates most of the emergent radiation from the jet in the direction of its motion, i.e. away from the disc. A second, but significant, factor that also contributes to the diminution of jet flux incident on the disc is the fact that the post-shock jet synchrotron spectrum peaks at increasingly longer wavelengths at increasingly larger distances from the base of the jet, thus creating fewer X-ray photons that can heat the disc. This is illustrated in Fig.~\ref{fig:jet_irrad}, where we show the ratio of jet heating to viscous heating as a function of radius. Since heating by the jet alone cannot produce the OIR excess in Fig.~\ref{fig:j1118_2005fit}, we surmise that the OIR excess is indeed caused by the more conventional form of irradiation, i.e. irradiation by soft X-ray photons from the inner disc. From an energetics point of view, it is easy to see that the inner accretion disc alone (see Fig.~\ref{fig:j1118_2005fit}, at around 1 keV) radiates at least two orders of magnitude more energy than the outer region.
Furthermore, it is well known that the outer edge of the accretion disc can flare up or warp \citep[e.g.][]{od2001}, thus making the outer parts of the disc more easily visible from the inner regions. Fits to the 2005 data of XTE~J1118+480 using the jet model + irradiated disc (described in \S\ref{sec:disc_irrad}) suggest a cold disc ($T_{\rm in}\sim 0.2$ keV) with inner radius close to the innermost stable circular orbit ($6$ $R_g$ for a Schwarzschild black hole, and smaller if spinning). Given the uncertainties associated with using a simple $T\sim R^{-3/4}$ model which ignores correction factors like spectral hardening \citep{st1995}, the full disc structure, as well as proper boundary conditions at the inner edge \citep{z2005}, the errors associated with the estimates of the inner disc parameters are most likely an underestimate. Another source of uncertainty comes from the fact that we assume the jet to be perpendicular to the disc. A misaligned jet \citep{mac2002}, as seems likely for XTE~J1118+480, could introduce additional error in the determination of $r_{\rm in}$. Along with the uncertainties in instrument calibration, modeling and jet-disc alignment, the sharp low energy cutoff of PCA sensitivity below $3$ keV makes the detection of a cold disc with small $r_{\rm in}$ even more difficult than that of a recessed cold disc. Nevertheless the values of $r_{\rm in}$ and $T_{\rm in}$ we obtained for XTE~J1118+480 are similar to those obtained by several others for black hole transients in their hard state \citep[e.g.][]{disalvo2001,mhm2006,miller2006,rs2007,rykoff2007,reis2009}. If such a cold disc with small r$_{\rm in}$ is indeed realised, then Compton scattering of the disc photons could be strongly anisotropic, with most of the scattered emission beamed toward the disc \citep{hp1997}. This could partly compensate for the beaming effects due to the relativistic motion of the gas, and increase the illumination of the disc (both irradiation and reflection effects).
In this case there could also be radiative feedback due to the local illumination of the disc by the base of the jet, similar to the situation in standard accretion disc corona models \citep{hm1993}; such feedback would act as a thermostat, keeping the temperature at the base lower than what is assumed here. A somewhat more conservative model, where the temperature and radius of the inner edge of the accretion disc were fixed to $0.1$ keV and $30$ R$_{\rm g}$ respectively (Model 2 in Table~\ref{tab:fits}), also describes the data well, showing the difficulty of detecting a cold disc with small r$_{\rm in}$ from the available data. The best-fit outer radii of XTE~J1118+480 for both {\em Model 1} and {\em Model 2} are smaller than the circularization radius computed using the latest values of the orbital parameters published by \citet{gelino2006}. These results are therefore consistent with the assumption that accretion is occurring via Roche lobe overflow. The observation was made during the decline from the outburst peak. Hence the discrepancy between the circularization radius and the $R_{out}$ inferred from the fits could, in the context of standard disc instability models, be due to an inward-moving cooling wave. A decreasing trend in the outer {\em radiative disc radius} was also observed by \citet{hynes2002} for the system XTE~J1859+226. However, the value of $R_{out}$ in our case depends largely on the five OIR data points, all of which lie in the Rayleigh-Jeans region of the irradiated disc. Given the plethora of spectral features seen in OIR spectra of X-ray binaries in outburst \citep[see, e.g.,][]{hynes2005,cc2006}, the limitation of using a few bands is obvious. Analysis of the data from the 2000 outburst of XTE~J1118+480 by \citet{esin2001} and \citet{yuan2005} showed that the UV and higher energy data can be explained by an ADAF model. The flux from the ADAF model alone however falls sharply at energies below optical and underestimates the IR and any lower energy data (e.g.
radio), thus requiring an additional jet component \citep{yuan2005,nm2008}. In the jet-ADAF model \citep{yuan2005}, the X-ray photons have a thermal Comptonization origin, whereas in the current work, the X-ray photons can originate from either synchrotron (e.g. in XTE~J1118+480), or inverse Compton scattering, or a combination of both (e.g. in GX~339$-$4). Better simultaneous broadband coverage and future X-ray polarization studies can help to estimate the relative importance of these two emission components which dominate the X-ray spectra of X-ray binaries in the hard state. From the available data, the X-ray spectrum of XTE~J1118+480 during both outbursts shows little or no curvature at energies higher than $\sim10$ keV. Reflection features like the iron line complex and the reflection ``bump'' near 20 keV are also extremely weak. Detailed analysis of the X-ray spectra taken by Chandra and RXTE during the 2000 outburst of this source by \citet{miller2002} also showed extremely weak reflection features, as would be expected in a jet + recessed disc scenario. From a phenomenological point of view, since the curvature is small, using a broken power law to describe data of X-ray binaries in the hard state has been shown to work remarkably well, e.g., for GX~339$-$4 by \citet{nowak2005} and for Cyg~X$-$1 by \citet{wilms2006}. Residuals of a power law fit to the X-ray continuum in the 2005 data set of XTE~J1118+480 show small systematic variations near 5--7 keV, which improve slightly upon the addition of a Gaussian line. However, given the weakness of the line, its energy and width cannot be constrained well. The total contribution from various sources of inverse Comptonization (SSC within the jet base and external Comptonization of soft thermal photons from the accretion disc) is also much smaller than that of GX~339$-$4. The X-ray region of the SED of XTE~J1118+480 is dominated by post-shock synchrotron photons for both the 2000 and 2005 outbursts.
We note that the electron acceleration region seems to be in the nozzle of the jet in the case of the 2005 outburst data of XTE~J1118+480, while it is farther out in the jet in GX~339$-$4 as well as in the 2000 outburst of XTE~J1118+480. While exhaustive searching of the $\chi^2$ space confirms that this is required by the data, it is not clear why the acceleration region is inside the nozzle for the 2005 data of XTE~J1118+480. Another unexpected result is that the base of the jet seems to be quite small ($h_0/r_0\sim0.4$) for the 2000 outburst data of XTE~J1118+480, whereas $h_0/r_0\sim1.5$ for other outbursts of XTE~J1118+480 as well as for other sources \citep[see e.g.][]{mnw2005,gallo2007,mig2007}. Given the uncertainties in our understanding of jet formation and the acceleration region, and the simplicity of the model, here we report the best fit parameters as obtained. The particle energy distribution index ($p$) of the relativistic post-shock electrons for data from both outbursts of XTE~J1118+480 is $\sim$ 2.5--2.6. This value is much steeper than what would be expected from an initial distribution of particles accelerated by a diffusive shock process \citep[$\sim 1.5-2$;][]{hd1988}. However it is possible that for XTE~J1118+480 the observed X-ray bandpass is at a higher energy than the cooling break energy, beyond which the particle distribution steepens from $p$ to $p+1$ due to enhanced cooling. One of the key assumptions in our model is the time independence of the particle distributions. However, if the steep power law drop-off of the observed photon index is indeed due to enhanced cooling, or more generally a departure from the assumed equilibrium between the heating and cooling processes, a more thorough time-dependent analysis of particle evolution is necessary. We will present time-dependent particle evolution in the context of jets in a forthcoming paper. The jet inclination of XTE~J1118+480 is still a matter of concern.
The optically measured system inclination is near $70^{\circ}$ \citep{gelino2006,c2003,m2001,z2002}. However, given the flat radio-to-IR fluxes, jet inclinations higher than $\sim30^{\circ}$ would be very difficult to fit, which might suggest a large misalignment between the disc and the jet \citep{mac2002}. Evidence of misalignment between disc and jet has been seen in several stellar X-ray binary systems, e.g., GRO~J1655$-$40 \citep{gbo2001,hr1995}, SAX~J1819$-$2525 \citep{h2000,o2001}, XTE~J1550$-$564 \citep{h2001,o2002}, as well as in several AGNs, e.g., NGC 3079 \citep{k2005}, NGC 1068 \citep{cap2006} and NGC 4258 \citep{cap2007}. Allowing the inclination to vary for XTE~J1118+480 during the fits gives a best-fit inclination of $\sim25^{\circ}$, and it is almost impossible to find statistically good fits (e.g. with $\chi^2/\nu < 2$) for inclinations larger than $30^{\circ}$. Previous fits by \citet{mff2001} also required a similar, small inclination. In our model we assume the lateral expansion and longitudinal acceleration of the jet to be driven solely via the solutions of the relativistic Euler equation for adiabatic expansion. If this is not the case, and, e.g., the jet radius and acceleration are determined by some other process(es) leading to a stronger magnetic field and/or smaller bulk velocity along the jet, that could lead to a flatter radio-IR spectrum as well \citep[also see the discussion in \S3.3.3 of][]{mig2007}. The data for GX~339$-$4 do not formally require a thermal component (e.g. an accretion disc or emission from the secondary star), and the entire broad band emission from radio to X-rays is satisfactorily fit by non-thermal photons originating from pre- and post-shock synchrotron, and synchrotron self-Compton emission mainly from the jet base. Previous fits to the 3--100 keV X-ray data by \citet{h2005} also suggested that the disc contribution within this energy range is $\sim 1\%$ of the total luminosity or less.
However, the presence of the line near 6.5 keV and the signature of reflection suggest a cold accretion disc near the black hole, whose thermal spectrum cuts off exponentially at an energy significantly below the lower detection limit of the RXTE/PCA. Since the X-ray spectrum has a luminosity of $\sim 5\%$ L$_{\rm Edd}$, the disc luminosity is presumably $\sim 0.05\%$ L$_{\rm Edd}$ or smaller. The data for GX~339$-$4 were taken when the source was in a bright, X-ray hard state \citep{h2005}. The jet power $N_j$ is much higher than that obtained by \citet{mnw2005}, who studied data from previous outbursts of the same source. We also obtain a higher equipartition factor ($k$) and a shallower particle distribution index. It is interesting to note that $r_0$ and $z_{acc}$ for GX~339$-$4 are always found to be higher than those found for other sources. If the electron temperature at the base is left unconstrained (e.g. ``Model 1'' in the fifth column of Table~\ref{tab:fits}), then the photon field density from the best-fit model is high enough that pair processes may become important. In this case many photons are up-scattered above the pair production threshold, and the rate at which pairs are produced is much larger than the rate at which they are annihilated, thereby changing the density at the base of the jet. Constraining the electron temperature to be less than $5\times 10^{10}$ K, we present another fit (``Model 2'', sixth column of Table~\ref{tab:fits}) where the photon density at the base is small and the pair production rate is much smaller than the pair annihilation rate. For this fit the fractional pair annihilation rate $(\dot n_{\rm pa}/n)$ is also much smaller than unity, so that pair processes are not important. Using the irradiated disc + jet model presented in this paper, we have shown that in certain cases (e.g. the 2000 outburst data of XTE~J1118+480) there are not enough X-ray photons from the inner regions to irradiate the disc appreciably.
The 2005 outburst data of XTE~J1118+480 present an intermediate case, where the contribution from an irradiated outer disc is comparable to the jet emission in the OIR. The 2002 data of GX~339$-$4 present the extreme case, where the entire broadband SED is dominated by emission from the jet. The jet model has been used to analyze broad band SEDs of the stellar mass black holes XTE~J1118+480 \citep[][this work]{mff2001}, GX~339$-$4 \citep[][this work]{mnw2005}, Cyg~X-1 \citep{mnw2005}, A0620-00 \citep{gallo2007}, GRO~J1655-40 \citep{mig2007}, as well as those of the supermassive black holes in the nucleus of our own galaxy \citep[Sgr A*;][]{m2007} and that of M81 \citep{markoff2008}. From the analysis of the spectra of these sources we note that the geometry of the jet base, scaled in units of gravitational radius, does not vary greatly. The radius of the jet base, $R_0$, lies in the range of 10--100 $R_g$, and the ratio $h_0/r_0$ at the nozzle lies within 0.4--11, both suggesting a relatively compact jet base and the universal nature of mass scaling in accretion onto compact objects. The fraction of particles accelerated to a power law distribution beyond $z_{acc}$, while not very well constrained by the fits, prefers a value in the range of 0.6--0.9 and was fixed to 0.75. Considering that most of the known acceleration mechanisms in astrophysics do not have such a high efficiency, this number seems rather high and requires further exploration. The particle acceleration index ($p$) lies within 2.2--2.7, which is steeper than that expected from a relativistic shock \citep[1.5--2; see, e.g.,][]{hd1988}, suggesting that in these cases the X-ray energies may lie beyond the {\em cooling break} \citep{k1962}, so that the power law index of the observed SED steepens by an additional amount of 0.5. The ratio of magnetic-to-particle energy density at the jet base ($k$) is often found to be greater than unity.
MHD simulations of relativistic jet launching \citep{mg2004,nm2004,devilliers2005,fm2008} suggest that near the jet base, close to the launching points, the jets are mostly Poynting flux dominated and have small plasma fractions (the ratio of gas pressure to magnetic pressure is less than unity). We are focusing on inner regions where this is likely still to be the case. In GRBs and AGN, most data are only relevant to regions well beyond the magnetosonic fast points, where the flow has been driven towards equipartition via conversion of magnetic energy to particle energy. The temperature of the thermal electrons at the jet base ($T_e$) is in the range of $2-7\times10^{10}$ K for the stellar black holes and $\sim 10^{11}$ K for the supermassive black holes. The flow near the jet base could be a radiatively inefficient accretion flow (RIAF), i.e. an ADAF \citep{narayan1995,rees1982,shapiro1976}, or in light of the magnetic domination, an MDAF \citep{meier2005}. In such a case, the ion temperature can be as high as the virial temperature, with electrons at $\sim 10^{9-10}$ K \citep{shapiro1976,nym1996}. Keep in mind that for an outflow model, the particles are also not required to be gravitationally bound, and processes in the jet launching can, in theory, heat the particles beyond the virial temperature. Most likely the similarity in the derived physical parameters among the diverse sources and their diverse accretion states is not coincidental, and may hint towards an inherent similarity in the process of formation and propagation of compact jets. The results presented above show the importance of quasi-simultaneous, multi-wavelength, broad band SEDs in constraining geometrical and radiative parameters associated with accretion flows near compact objects. \vspace{5mm} \emph{Acknowledgements}. This work was supported mainly by the Netherlands Organisation for Scientific Research (NWO) grant number 614000530. JW acknowledges DLR grant 50OR0701. 
We would like to thank Guy Pooley for helping with the radio data, Michelle Buxton and Mickael Coriat for the OIR data reduction. We would also like to thank the anonymous referee for his/her constructive criticisms that have greatly improved the paper. DM thanks Asaf Pe'er for a discussion on pair processes in plasmas. It is a pleasure to acknowledge the hospitality of International Space Science Institute (ISSI) in Bern where a significant amount of this work was carried out. This research has made use of data obtained from the High Energy Astrophysics Science Archive Research Center (HEASARC), provided by NASA's Goddard Space Flight Center. This work has also made use of the United Kingdom Infrared Telescope (UKIRT) which is operated by the Joint Astronomy Centre on behalf of the Science and Technology Facilities Council of the U.K. The Liverpool Telescope is operated on the island of La Palma by Liverpool John Moores University in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias with financial support from the UK Science and Technology Facilities Council. We have also used radio data from the Ryle telescope operated by Mullard Radio Astronomy Observatory. Numerical computations for this work were done on the LISA workstation cluster, which is hosted by Stichting Academisch Rekencentrum Amsterdam (SARA) computing and networking services, Amsterdam. 
\begin{table*} \begin{tabular}{llll} \hline \hline & XTE~J1118+480 (2000 outburst) & XTE~J1118+480 (2005 outburst) & GX 339$-$4 (2002 outburst) \\ \hline MJD of observation & 51652 & 53393 & 52367-68 \\ Spectral Coverage & & & \\ Radio: & 15 GHz & 15 GHz & 5 GHz \\ UV,Opt/IR: & M, L, K, H, J-bands, EUVE and HST spectra & K, H, J, V, B-band & H, I, V-band \\ X-ray: & Chandra (0.24--7 keV), RXTE (3--110 keV) & RXTE (3--70 keV) & RXTE (3--200 keV) \\ $N_{\rm H}$ ($10^{22}\ cm^{-2}$) & 0.013 & 0.013 & 0.6 \\ $M_{\rm BH}$ (${M}_\odot$) & 8.5 & 8.5 & 7 \\ Distance (kpc) & 1.7 & 1.7 & 6 \\ \hline \end{tabular} \caption{Spectral coverage and source parameters used for the fits. For the 2000 outburst of XTE~J1118+480 the radio data were obtained using the Ryle telescope, IR using the UKIRT, optical spectra using the HST, UV spectra using EUVE, X-ray spectra using Chandra and RXTE. The SED was constructed from data published in \citet{mcc2001}. For the 2005 outburst of XTE~J1118+480 the radio data were obtained using the Ryle telescope, IR using the UKIRT and optical using the Liverpool telescope. The column density, mass of the black hole and distances were taken from the recent observations by \citet{gelino2006} for the source XTE~J1118+480. 
For GX~339$-$4, the SED was constructed from data published in \citet{h2005}, black hole mass is lower limit by \citet{munoz2008}, distance and column density from the ranges given by \citep{h2004}.} \label{tab:data} \end{table*} \begin{table*} \begin{tabular}{llllll} \hline \hline Parameter (Unit) & XTE~J1118+480 & XTE~J1118+480 & XTE~J1118+480 & GX~339$-$4 & GX~339$-$4 \\ & (2005 outburst; Model 1) & (2005 outburst; Model 2) & (2000 outburst) & (2002 outburst; Model 1) & (2002 outburst; Model 2) \\ \hline $N_{\rm j}$ ($10^{-3}$ ${\rm L}_{\rm Edd}$) & $ 2.32 _{-0.06}^{+0.02}$ & $ 2.77 _{-0.05}^{+0.01}$ & $ 7.2 _{-0.5}^{+0.1}$ & $ 55 _{-9}^{+16}$ & $ 83.8 _{-0.2}^{+0.2}$ \\ & & & & \\ $r_0$ (R$_{\rm g}$) & $ 9.39 _{-0.17}^{+0.18}$ & $ 11.8 _{-0.2}^{+0.1}$ & $ 20.8 _{-1.4}^{+1.0}$ & $ 33 _{-2}^{+3}$ & $ 114 _{-11}^{+7}$ \\ & & & & \\ $h_0/r_0$ & $ 1.589 _{-0.002}^{+0.013}$ & $ 1.39 _{-0.02}^{+0.03}$ & $0.4\star$ & $ 1.7 _{-0.1}^{+0.2}$ & $ 1.51 _{-0.03}^{+0.02}$ \\ & & & & \\ $T_{\rm e}$ ($10^{10}$ K) & $ 4.22 _{-0.02}^{+0.11}$ & $ 4.04 _{-0.02}^{+0.01}$ & $ 3.76 _{-0.07}^{+0.09}$ & $ 6.7_{-0.5}^{+0.4}$ & $ 3.8 _{-0.2}^{+0.01}$ \\ & & & & \\ $z_{\rm acc}$ (R$_{\rm g}$) & $ 12.6 _{-0.0}^{+1.5}$ & $ 12.6 _{-0.0}^{+2.8}$ & $ 8.2 _{-0.2}^{+3.3}$ & $ 252 _{-130}^{+212}$ & $ 526 _{-41}^{+85}$ \\ & & & & \\ $p$ & $ 2.63 _{-0.01}^{+0.01}$ & $ 2.71 _{-0.01}^{+0.01}$ & $ 2.48 _{-0.01}^{+0.02}$ & $ 2.28 _{-0.03}^{+0.03}$ & $ 2.20 _{-0.01}^{+0.02}$ \\ & & & & \\ $\epsilon_{\rm sc}$ ($10^{-3}$) & $ 9.9 _{-0.1}^{+0.02}$ & $ 29.9 _{-3.0}^{+0.3}$ & $ 5.5 _{-0.8}^{+1.4}$ & $ 0.16 _{-0.08}^{+0.08}$ & $ 44 _{-1}^{+2}$ \\ & & & & \\ $k$ & $ 1.9 _{-0.1}^{+0.1}$ & $ 2.3 _{-0.2}^{+0.1}$ & $ 2.1 _{-0.1}^{+0.1}$ & $ 2.2 _{-0.8}^{+1.4}$ & $ 2.38 _{-0.01}^{+0.01}$ \\ & & & & \\ Inclination (deg) & $ 25.1 _{-0.0}^{+0.1}$ & $30\star$ & $30\star$ & $47\star$ & $47\star$ \\ & & & & \\ $r_{\rm in}$ (R$_{\rm g}$) & $ 2.6 _{-0}^{+5}$ & $ 30\star$ & $ 341 _{-22}^{+4}$ & Not constrained & Not 
constrained \\ & & & & \\ $T_{\rm in}$ (keV) & $ 0.254_{-0.008}^{+0.004}$ & $ 0.1\star$ & $0.025_{-0.002}^{+0.001}$ & Not constrained & Not constrained \\ & & & & \\ $r_{\rm out}$ (R$_{\rm g}$) & $ 15700 _{-350}^{+160}$ & $ 8850 _{-1550}^{+1800}$ & $ 29400 _{-5750}^{+610}$ & Not constrained & Not constrained \\ & & & & \\ $T_{\rm out}$ (K) & $ 18100 _{-600}^{+1000}$ & $ 30500 _{-100}^{+200}$ & $ 650 _{-550}^{+2150}$ & Not constrained & Not constrained \\ & & & & \\ $f \ (10^{-4})$ & $1.2$ & $1.0$ & Not constrained & Not used & Not used \\ & & & & \\ $E_{\rm Line}$ (keV) & $6.63\star$ & $6.63\star$ & Not used & $ 6.3 _{-0.2}^{+0.2}$ & $ 6.3 _{-0.2}^{+0.3} $ \\ & & & & \\ $W_{\rm Line}$ (eV) & $39.3$ & $27.7$ & Not used & $93.5$ & $58.7$ \\ & & & & \\ $\Omega/2\pi$ & Not used & Not used & Not used & $ 0.22 _{-0.14}^{+0.10}$ & $ 0.35 _{-0.02}^{+0.02}$ \\ & & & & \\ $\chi^2/\nu$ [Q] & $70/50$ [0.032] & $90/53$ [0.0011] & $140/137$ [0.406] & $137/112$ [0.054] & $157/112$ [0.0032] \\ \hline \end{tabular} \caption{Best fit parameters for XTE~J1118+480 and GX~339$-$4, and $\Delta \chi^2 < 2.71$ confidence intervals. A star ($\star$) next to a number indicates that the parameter was frozen to that value. $N_{\rm j}$ = jet normalization; $r_{\rm 0}$ = nozzle radius; $h_{\rm 0}/r_{\rm 0}$ = height-to-radius ratio at base; $T_{\rm e}$ = pre-shock electron temperature; $z_{\rm acc}$ = location of acceleration region along the jet; $p$ = spectral index of post-shock electrons; $\epsilon_{\rm sc}$ = 0.36/ratio of scattering mean free path to gyroradius (see \S\ref{sec:jetpars}); $k$ = ratio between magnetic and electron energy densities; Inclination = orbital inclination as seen from Earth (also the jet inclination); $r_{\rm in/out}$ = radius of the inner/outer edge of the irradiated accretion disc; $T_{\rm in/out}$ = temperature at the inner/outer edge of the irradiated accretion disc. 
While $T_{out}$ is in Kelvin, $T_{\rm in}$ is reported in $keV$ as is customary for X-ray observers; $f$ = $(2\pi)^{-1}\times$ the solid angle subtended by the outer, irradiation dominated disc as seen from the inner disc; $E_{\rm Line}$ = energy of the iron K$\alpha$ line complex from the {\tt gaussian} model; $W_{\rm Line}$ = equivalent width of the line; $\Omega/2\pi$ = reflection fraction from the {\tt reflect} model. $\chi^2/\nu$ is the reduced chi-squared and $Q$ is the corresponding chance probability. For fitting the data of XTE~J1118+480 from 2000, the energy range of $0.15-2.5$ keV was excluded \citep[see text, and also][]{esin2001}. Also the disc inner edge parameters (r$_{\rm in}$, T$_{\rm in}$) are extremely sensitive to the extinction column depth (\S\ref{sec:j1118_2000}). } \label{tab:fits} \end{table*} \begin{table*} \begin{tabular}{lccc} \hline Source/Outburst/Model & log ($n$ [cm$^{-3}$] ) & log ($\dot n_{pa}$ [cm$^{-3}$ s$^{-1}$] ) & log ($\dot n_{pp}$ [cm$^{-3}$ s$^{-1}$] ) \\ \hline XTE~J1118+480/2005/1 & 13.9 & 12.0 & 7.6 \\ XTE~J1118+480/2005/2 & 13.8 & 11.8 & 6.8 \\ XTE~J1118+480/2000 & 13.7 & 11.7 & 5.8 \\ GX~339$-$4/2002/1 & 14.2 & 12.4 & 17.1 \\ GX~339$-$4/2002/2 & 13.4 & 11.0 & 6.1 \\ \hline \end{tabular} \caption{The lepton number density ($n$), pair annihilation rate ($\dot n_{pa}$), and pair production rate ($\dot n_{pp}$) at the base of the jet for the various models presented in Table~\ref{tab:fits}.} \label{tab:pairs} \end{table*} \clearpage \begin{figure*} \includegraphics[width=0.75\textwidth, angle=-90]{irrad_temp1.ps} \caption{Comparing viscous heating ($H_{\rm visc}$) and {\em maximum possible} ``jet-heating'' ($H_{\rm jet}$) for XTE~J1118+480. The filled circles are the computed upper limits of the ratio of jet induced heating to viscous heating ($H_{\rm jet}/H_{\rm visc}$), as discussed in \S\ref{sec:jet_irrad}, joined by a smooth spline. 
$T_{\rm in}$, $R_{\rm in}$ and $R_{\rm out}$ for the disc were determined from broadband ``Model 1'' fits to the SED. For $R>100\ R_g$, the jet induced irradiation heating term falls more slowly ($\sim R^{-2.4}$) than the viscous heating ($\sim R^{-3}$), which causes the ratio $H_{\rm jet}/H_{\rm visc}$ to slowly increase with increasing radius. For $R_{\rm in} < r_0$, the jet heating saturates but the viscous heating continues to increase for smaller radii, causing $H_{\rm jet}/H_{\rm visc}$ to drop sharply with decreasing radius. \label{fig:jet_irrad}} \end{figure*} \begin{figure*} \includegraphics[width=0.48\textwidth]{plot_all_j1118_1.ps} \includegraphics[width=0.48\textwidth]{plot_xray_j1118_1.ps} \includegraphics[width=0.48\textwidth]{plot_all_j1118_2.ps} \includegraphics[width=0.48\textwidth]{plot_xray_j1118_2.ps} \caption{Jet model fits and residuals for the XTE~J1118+480 data from its 2005 outburst. Left panels show the entire broad band SED from radio through X-rays, and a zoom of the X-ray region is shown in the right panels. Radio and IR data from the Ryle telescope and UKIRT, respectively, are shown by purple squares, RXTE/PCA data are shown by blue circles and RXTE/HEXTE data by orange triangles. The top panels are for ``Model 1'' in Table~\ref{tab:fits} (second column) where, along with other parameters (see text), the jet inclination, $r_{\rm in}$ and $T_{\rm in}$ were also allowed to vary. The value of the reduced chi-squared for the best fit parameters of ``Model 1'' is 1.41. The bottom panels are for ``Model 2'' in Table~\ref{tab:fits} (third column) where jet inclination=$30^\circ$, $r_{\rm in}$=$30$ $R_g$ and $T_{\rm in}$=$0.1$ keV. The value of the reduced chi-squared for the best fit parameters of ``Model 2'' is 1.69. The dark-green dash-dotted curve shows the post-shock synchrotron contribution, the green dashed curve shows the pre-shock synchrotron, and the blue dotted curve shows the Compton upscattered component. 
Flux from the irradiated accretion disc is shown by the orange dash-dotted curve. The solid grey line shows the total jet+disc model continuum spectrum without convolving through detector responses, interstellar extinction, the iron line or reflection. The iron line near 6.6 keV is shown by the thick, solid green line. The red line shows the properly forward-folded models taking into account detector responses, interstellar extinction, the iron line and reflection. Since interstellar extinction is small for XTE~J1118+480, and the iron line and reflection features are very weak too, the forward-folded model (red line) is almost indistinguishable from the jet+disc model continuum (solid grey) in this figure as well as in Fig.~\ref{fig:j1118_2000fit}. However for GX~339$-$4 (Fig.~\ref{fig:gx339fit}), where the iron line as well as reflection features are stronger, and the extinction is also larger, the difference between the unfolded and folded models (grey and red lines) becomes apparent. \label{fig:j1118_2005fit}} \end{figure*} \clearpage \begin{figure*} \includegraphics[width=0.48\textwidth]{plot_all_2000.ps} \includegraphics[width=0.48\textwidth]{plot_xray_2000.ps} \caption{Jet model fits and residuals for the XTE~J1118+480 data from its 2000 outburst \citep[taken from][]{mcc2001} are shown in the left panel. The radio data from the Ryle telescope and IR data from UKIRT are shown by red circles, HST spectrum using green, EUVE data using orange triangles, Chandra spectrum using blue circles and RXTE data using purple squares. Note the dip in the 0.15--2.5 keV region which is attributed to the presence of a warm absorber \citep{esin2001}. This region was excluded from our fits. A zoom of the X-ray region after excluding 0.15--2.5 keV data is shown in the right panel. The colour coding for the model components are the same as in Fig.~\ref{fig:j1118_2005fit}. 
\label{fig:j1118_2000fit}} \end{figure*} \begin{figure*} \includegraphics[width=0.48\textwidth]{plot_all_gx339_1.ps} \includegraphics[width=0.48\textwidth]{plot_xray_gx339_1.ps} \includegraphics[width=0.48\textwidth]{plot_all_gx339_2.ps} \includegraphics[width=0.48\textwidth]{plot_xray_gx339_2.ps} \caption{Same as Fig.~\ref{fig:j1118_2005fit}, but for the source GX~339$-$4. The models presented here are purely jet models without any contribution from an accretion disc. The top panels are for ``Model 1'' (Table~\ref{tab:fits}; fifth column) where along with other parameters the temperature of the thermal electrons at the base of the jet ($T_{\rm e}$) was allowed to vary freely. The reduced chi-squared for this model is 1.22. Note the relatively strong inverse Comptonization component in the X-rays, originating at the jet base due to SSC emission in this model, compared to that in the case of XTE~J1118+480. The bottom panels are for ``Model 2'' (Table~\ref{tab:fits}; sixth column) where we constrained $T_{\rm e}$ to be less than $5\times10^{10}$ K, reducing the SSC emission at the base. The base is less compact and acceleration starts farther out along the jet for ``Model 2'' compared to ``Model 1''. The reduced chi-squared for this second model is 1.40. While the first model is a statistically better fit than the second, the photon density at the base can become high enough for pair processes to become important in the first model (see \S\ref{sec:gx339},\S\ref{sec:discussion} and Table~\ref{tab:pairs}), which is not the case for the second model. Also note the strong iron line near $6.3$ keV and the reflection ``bump'' near $\sim20$ keV, suggesting the presence of a cold accretion disc not detected at energies to which RXTE is sensitive. \label{fig:gx339fit}} \end{figure*} \clearpage
\section{Introduction} \label{sec:intro_general} A series of liquid argon time projection chamber~(LArTPC) detectors have been or are being deployed at Fermilab as part of the Short-Baseline Neutrino~(SBN) program~\cite{sbn_proposal} along the Booster Neutrino Beamline~(BNB~\cite{BNB}) and as part of the long-baseline program of the Deep Underground Neutrino Experiment~(DUNE)~\cite{dune_proposal}. The MicroBooNE experiment~\cite{ub_detector}, part of the Fermilab SBN program, has been operating since 2015, collecting data accumulated during beam-on and beam-off time periods. MicroBooNE operates a 170 ton~(85 ton active) LArTPC placed 470~m from the BNB target at Fermilab. The LArTPC is 10.4~m long, 2.6~m wide and 2.3~m high. The detector has three readout wire planes, with 2400 readout wires on each of the two induction planes and 3456 readout wires on the collection plane~\cite{wirepaper}. The two induction planes are oriented at $\pm60^{\circ}$ with respect to the vertical collection plane, at a wire pitch of 3~mm. An array of 32 PMTs is installed behind the collection plane to detect the scintillation light from argon ionization caused by charged final state particles from neutrino interactions~\cite{pmtpaper}. The TPC readout time window is 4.8~ms and is digitized into 9600 readout time ticks. Charged particles in liquid argon produce ionization electrons, which drift to the readout wire planes in an electric field of 273~V/cm. It takes 2.3~ms for an ionization electron to drift across the full width of the detector. The MicroBooNE LArTPC continuously records the drifted charge and its arrival time on each wire. A software trigger, based on PMT signals, records an event triggered by the BNB beam spill if the interaction light detected by the PMT array is above a set threshold. Each event consists of data collected from 1.6~ms before the trigger and 3.2~ms after the trigger. Therefore, each event has three sets of TPC data for each wire on all three planes. 
A truncation of the wire readout is performed around the trigger, so that the two induction planes have resolutions of 2400 wires~$\times$~6048 readout ticks, while the collection plane has a resolution of 3456 wires~$\times$~6048 readout ticks. Wire and time data can be converted into an image format (charge on each wire versus drift time) using the software toolkits LArSoft~\cite{uboonecode} and LArCV~\cite{larcv} while maintaining high resolution in wire, time and charge amplitude space. These information-rich LArTPC images are suitable for applying deep learning tools. In consideration of computing resources, images for deep learning tools are compressed along the time tick axis by a factor of six, with pixel values merged by a simple sum. Images become 2400 wires~$\times$~1008~ticks and 3456 wires~$\times$~1008 ticks for the induction and collection planes, respectively. This corresponds to an effective position resolution of 3.3~mm~\cite{e_velocity} and 3~mm~\cite{wirepaper} along the time tick and wire number directions, respectively. Convolutional neural networks~(CNNs), deep learning networks commonly applied to image processing applications, are currently used across neutrino and high energy physics experiments~\cite{dl_nature}. For accelerator neutrino experiments, NOvA has applied a CNN as a neutrino event classifier~\cite{nova_cvn} in its $\nu_\mu\to\nu_e$ oscillation measurement~\cite{nova_osc, nova_osc_new} and its neutral-current~(NC) coherent $\pi^0$ production measurement~\cite{nova_pi}. NOvA has also demonstrated a context-enriched particle identification network~\cite{nova_context}. MINERvA has developed CNN tools to determine neutrino interaction vertices and study possible biases due to models used in the large simulated training sample~\cite{minerva}. The NEXT experiment has also used a CNN classifier to perform particle content studies at candidate neutrinoless double beta decay vertices~\cite{next}. 
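The six-fold time-axis compression described above, with pixel values merged by a simple sum, amounts to a reshape-and-sum over groups of six adjacent ticks. A minimal NumPy sketch of the stated procedure follows; this is our illustration, not MicroBooNE production code, and the function name is ours:

```python
import numpy as np

def compress_time_axis(image: np.ndarray, factor: int = 6) -> np.ndarray:
    """Downsample a (wires, ticks) image along the tick axis by summing
    groups of `factor` adjacent ticks (pixel values merged by a simple sum)."""
    wires, ticks = image.shape
    assert ticks % factor == 0, "tick count must divide evenly by the factor"
    return image.reshape(wires, ticks // factor, factor).sum(axis=2)

# In MicroBooNE terms a 3456-wire x 6048-tick collection-plane image
# becomes 3456 x 1008; a small toy array shows the behaviour:
toy = np.arange(12, dtype=np.float32).reshape(2, 6)   # 2 wires x 6 ticks
out = compress_time_axis(toy)
assert out.shape == (2, 1)
assert out[0, 0] == 15.0 and out[1, 0] == 51.0        # total charge preserved
```

Because the merge is a plain sum, the total charge per wire is preserved, which is why the compression costs time resolution but not charge information.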
A variety of deep learning techniques have been used in neutrino LArTPC experiments. In MicroBooNE, a CNN for assigning probabilities of particle identities for single particles in the MicroBooNE LArTPC has been demonstrated on simulated data in Ref.~\cite{ub_singlePID}. A semantic segmentation network for LArTPC data~\cite{ssnet} has been used for $\pi^0$ event reconstruction~\cite{ub_ccpi0}, vertex finding, and track reconstruction~\cite{vtxfinding}. The DUNE experiment has recently presented an updated long-baseline neutrino oscillation sensitivity study incorporating a CNN for neutrino event selection and background rejection~\cite{dune_cnn}. In this article, we present the development and application in MicroBooNE of a multiple particle identification~(MPID) network, which treats particle identification as a set of simultaneous binary logistic regression problems. It is the first demonstration of the performance of a CNN on LArTPC data including systematic uncertainties, and the first particle identification network applied to LArTPC datasets. The MPID network extends the functionality of MicroBooNE's previously-described single PID CNN~\cite{ub_singlePID}. It does not require pre-processing of image data to identify and filter selected pixels in an image assumed to be produced by a specific particle. The network provides simultaneous prediction scores for particle existence probabilities in the same image among five different particle species: electrons ($e^-$), photons~($\gamma$), muons ($\mu^-$), charged pions ($\pi^\pm$) and protons ({\it p}). The network is a particularly useful tool for data analysis of particle interactions in LArTPC detectors, since the region of an interaction vertex often contains many particles. The MPID algorithm takes as input a LArTPC image with a fixed 512$\times$512 pixel scale. A detailed description of the network design and training for MPID is given in Section~\ref{sec:intro_cnn}. 
When used in MicroBooNE's deep learning based low-energy excess $\nu_e$ (LEE $1e${\small-}$1p$) search analysis, the MPID network is primarily applied to images that contain candidate reconstructed neutrino interaction vertices as well as all reconstructed topologically connected activity. MPID predictions are derived based on the full information of all energy depositions topologically connected to the vertex, particularly the first few centimeters of final-state particles' trajectories, which are critical for particle identification. In the $\nu_e$ search, the network is also applied to more inclusive images roughly cropped around the interaction vertex. This is a new feature compared with the single PID network, which takes as input only images containing filtered, reconstructed hits. Cropping around the interaction vertex allows re-evaluation of charge missing from the former topologically-connected image, but is nonetheless present near the vertex, such as photon showers from final-state $\pi^0$s. This feature of the MPID network can help MicroBooNE suppress important photon backgrounds to a LEE search, as observed by MiniBooNE~\cite{miniboone}. We demonstrate this feature's robustness against the presence of LArTPC activity such as cosmic ray tracks that are uncorrelated with signal features of interest. In this paper, we demonstrate the expected MPID performance on simulated test images containing one muon and one proton ($1\mu${\small-}$1p$) and one electron and one proton ($1e${\small-}$1p$) in Section~\ref{sec:mc}. Both are signal interactions of a MicroBooNE deep learning based LEE analysis. We then demonstrate the MPID's achieved level of $\mu^-$ and $p$ identification in a filtered $\nu_\mu$ charged current~(CC) dataset and compare results between data and simulation in Section~\ref{sec:data}. 
We similarly demonstrate the MPID's $e^-$ and $\gamma$ separation capability using a filtered sample of $\pi^0$-containing events, and compare data and simulation results. In Section~\ref{sec:nue_mc}, we demonstrate the value of MPID for a LEE measurement in $e^-$ and $\gamma$ separation, $e^-$ and $\mu^-$ separation and {\it p} and cosmic ray separation expected from a set of simulated $\nu_e$ interaction images. \section{Multiple particle convolutional neural network} \label{sec:intro_cnn} \subsection{Network design} The MPID network applies a typical CNN~\cite{cnn} structure for the task of multiple object classification, which is summarized in block diagram form in Fig.~\ref{fig:mpid_design}. Input images have a resolution of 512 $\times$ 512~(1.5~m$\times$1.5~m) pixels, which generally matches the size of neutrino-induced activity in MicroBooNE. A series of ten convolutional layers are applied to the image for extracting high-level features. \begin{figure}[htb!pb] \centering \includegraphics[width=6.5cm]{images/mpid_design.png} \caption{MPID network scheme. The output has five numbers. Each of the values is between 0 and 1, representing the probabilities of corresponding particles in the given LArTPC image. \label{fig:mpid_design}} \end{figure} The first convolutional layer has a stride~(shift unit of the convolution calculation) of two, with the goal of reducing the LArTPC images' sparsity and increasing feature abundance at the beginning of the algorithm. The subsequent convolutional layers have a stride of one: a block of two convolutional layers with a kernel size of three, followed by a pooling layer, is repeated five times. An average pooling layer is applied after every other convolutional layer to contract the spatial dimension. Each pooling layer is followed by a rectifier activation function~(ReLU)~\cite{relu}, adding non-linearities to the network, as well as a group normalization operator~\cite{GN} to avoid early overfitting. 
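The convolutional structure just described, together with the network's fully connected head and multi-label training objective, can be sketched in PyTorch. This is a simplified illustration, not the MicroBooNE implementation: the per-block channel counts are our assumptions, while the 512$\times$512 input, the stride-two first layer, the 3$\times$3 kernels, average pooling, ReLU, group normalization, fully connected sizes of 192$\times$8$\times$8 and 192$\times$8 nodes, and the five non-exclusive sigmoid outputs trained with a binary cross entropy loss follow the paper's description:

```python
import torch
import torch.nn as nn

class MPIDSketch(nn.Module):
    """Simplified sketch of the MPID structure; channel counts per block
    are illustrative assumptions, not the paper's values."""

    def __init__(self, n_particles: int = 5):
        super().__init__()
        layers, in_ch = [], 1
        for i, out_ch in enumerate([12, 24, 48, 96, 192]):
            layers += [
                # Only the very first convolution uses stride 2 (sparsity reduction).
                nn.Conv2d(in_ch, out_ch, 3, stride=2 if i == 0 else 1, padding=1),
                nn.Conv2d(out_ch, out_ch, 3, padding=1),
                nn.AvgPool2d(2),           # contract the spatial dimension
                nn.ReLU(inplace=True),     # non-linearity
                nn.GroupNorm(4, out_ch),   # regularization against overfitting
            ]
            in_ch = out_ch
        self.features = nn.Sequential(*layers)
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(192 * 8 * 8, 192 * 8),
            nn.ReLU(inplace=True),
            nn.Linear(192 * 8, n_particles),  # one raw score per species
        )

    def forward(self, x):
        return self.head(self.features(x))    # logits; sigmoid applied below

model = MPIDSketch()
logits = model(torch.zeros(1, 1, 512, 512))   # one 512x512 image
scores = torch.sigmoid(logits)                # per-particle probabilities
truth = torch.tensor([[1., 1., 0., 0., 0.]])  # e.g. a 1e-1p image
# Sigmoid + binary cross entropy summed over the five particle types,
# so the prediction categories are not mutually exclusive.
loss = nn.BCEWithLogitsLoss(reduction="sum")(logits, truth)
# Monitoring accuracy: fraction of labels matching truth at a 0.5 threshold.
accuracy = ((scores > 0.5).float() == truth).float().mean()
```

Feeding a 512$\times$512 image through this sketch yields a five-component logit vector whose sigmoid plays the role of the tabulated per-particle scores.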
Two fully connected layers with 192$\times$8$\times$8 nodes and 192$\times$8 nodes are applied to combine the features derived by the convolutional layers. The output of the fully connected layers is a vector of five floating point numbers, each representing a confidence score for a target particle type to be present in an image. The score is interpreted as a normalized probability after applying a sigmoid function~\cite{pytorch_sigmoid}. The algorithm is optimized by minimizing the sum of the binary cross entropy loss~\cite{pytorch_bce} across target particle types. In this way, the prediction categories are not mutually exclusive between particles. Figure~\ref{fig:mpid-1e1p-exp} shows one example of the input and output of the MPID network during inference. In this case, the input image has one $e^-$ and one {\it p} concatenated at the same vertex, a typical signal interaction topology for an interaction-channel-exclusive $1e${\small-}$1p$ search, as implemented in the MicroBooNE deep learning based LEE analysis. The MPID network calculates as output the five floating point numbers described in the previous paragraph, or ``particle scores,'' that correspond to the inferred probability to have each type of particle present in the image. In this example, high scores of 0.99 and 0.98 are given for {\it p} and $e^-$ in the image and low scores of 0.06, 0.01 and 0.02 are provided for $\gamma$, $\mu^-$ and $\pi^\pm$. \begin{figure}[htbp!] \centering \includegraphics[width=8.6cm]{images/1e1p_exp.png} \begin{tabular}{cccccc} \hline & {\it p} & $e^-$ & $\gamma$ & $\mu^-$ & $\pi^\pm$ \\ \hline MPID Score & 0.99 & 0.98 & 0.06 & 0.01 & 0.02 \\ \hline \end{tabular} \caption{MPID example of a $1e${\small-}$1p$ topology with a tabulated output of particle scores. This image is generated by concatenating a {\it p} and an $e^-$ at the same vertex. Scores indicate high probabilities of having a {\it p} and $e^-$ in the image. The image applied to MPID has 512~$\times$~512 pixels. 
A zoom-in image of 250~$\times$~250 pixels is shown for visualization.\label{fig:mpid-1e1p-exp}} \end{figure} \begin{figure}[htpb!] \centering \includegraphics[width=8.6cm]{images/nc_exp.png} \begin{tabular}{cccccc} \hline & {\it p} & $e^-$ & $\gamma$ & $\mu^-$ & $\pi^\pm$ \\ \hline MPID Score & 0.89 & 0.95 & 0.85 & 0.06 & 0.17 \\ \hline \end{tabular} \caption{MPID example of a $1e${\small-}$1\gamma${\small-}$1p$ topology with a tabulated output of particle scores. This image is generated by concatenating three particles at the same vertex. Scores indicate higher probabilities of having {\it p}, $e^-$ and $\gamma$ in the image. The image applied to MPID has 512~$\times$~512 pixels. A zoom-in image of 250~$\times$~250 pixels is shown for visualization. \label{fig:mpid-1g1p-exp}} \end{figure} Figure~\ref{fig:mpid-1g1p-exp} shows another example of the input and output for the network during inference. The input image has one $\gamma$, one $e^-$ and one {\it p} produced at the same vertex, which would in principle be rejected in an exclusive $1e${\small-}$1p$ search. Again, the MPID calculates scores that correspond to containing each particle in the image. High scores of 0.89, 0.95 and 0.85 are found for {\it p}, $e^-$ and $\gamma$ in the image, and low scores of 0.06 and 0.17 are found for $\mu^-$ and $\pi^\pm$ in the image. We also note for clarity that the photon particle score is indicative not of the predicted \emph{total number} of photons in the image, but rather the probability that \emph{any} photons are present in the image. The former judgement, as well as the capability to identify the particle content of specific sub-features within an image, is not within the scope of the MPID algorithm. \subsection{Training and Test Samples} \label{training_sample} Training and test samples for the MPID CNN are produced with a customized event generator that uses LArSoft~\cite{uboonecode} and LArCV~\cite{larcv}. 
Detector processes are simulated with the GEANT4~\cite{geant4_1,geant4_2,geant4_3} simulation tool. The first generator step produces a 3D vertex uniformly distributed in the MicroBooNE LArTPC. The second step generates a random number of particles from the $e^-$, $\gamma$, $\mu^-$, $\pi^\pm$, and $p$ options. All particles are generated at the vertex from the first step with isotropic directions. The multiplicities for the total number of particles allowed in each image are randomly distributed between two and four. The multiplicity for each particle type is allowed to vary randomly between zero and two. Such a configuration includes as a subset the final-state interaction vertex topologies that we are searching for or trying to reject in MicroBooNE analyses, such as $1e${\small-}$1p$, $1\mu${\small-}$1p$ and $1\gamma${\small-}$1p$, as well as non-signal ones, such as $2\mu$ or $2e$. This generation strategy purposefully does not rely on any of the standard neutrino final-state generators~\cite{genie}, to avoid possibly biasing the MPID network via the inclusion of possibly-incorrect kinematic or multiplicity information provided by the generator. Moreover, this training model will produce a more robust particle identification tool capable of producing unbiased results for a much broader range of vertex-generating physics processes. Finally, high multiplicity topologies generated in these randomized training samples help the network to activate more nodes and learn more parameters for classification. Each particle is generated with a single particle simulation package, where no neutrino interaction model kinematics are assumed. For 80\% of the training and test samples, particles are simulated with kinetic energies between 60~MeV and 400~MeV for protons and between 30~MeV and 1~GeV for other particles. 
For the other 20\% of the training and test samples, particles are simulated with kinetic energies between 30~MeV and 100~MeV for protons and between 30~MeV and 100~MeV for other particles. Particles are generated with a flat energy distribution. Energy ranges are chosen based on the BNB neutrino energy distribution and the analysis priority towards the lower energy range. We generated 60,000 simulated events for training and 20,000 images for testing. The images are intentionally generated without overlaying cosmic rays on simulated images to retain separation capabilities for $\mu^-$. Images used for training, testing and inference are from the collection plane only. \subsection{Network Training} The loss of the network is defined using the BCEWithLogitsLoss~\cite{pytorch_bce} function in PyTorch, taking the output layer~(five floating point numbers) as input. The BCEWithLogitsLoss function combines a sigmoid~\cite{pytorch_sigmoid} operator with the binary cross entropy calculation. During training, we applied an initial learning rate of 0.001. Batch sizes of 32 and 16 are chosen for the training and test processing, respectively. Training is performed with a single NVIDIA 1080 Ti graphics card. Regularization methods of dropout~\cite{dropout} and group normalization~\cite{GN} are applied to avoid early overfitting during training. \begin{figure}[htb!pb] \centering \includegraphics[width=8.6cm]{images/training_loss.png} \includegraphics[width=8.6cm]{images/training_accuracies.png} \caption[]{Losses of training and test events during training~(top). Accuracies of training and test events during training~(bottom).} \label{fig:mpid_training} \end{figure} \begin{figure*}[htb!pb] \centering \includegraphics[width=7.38cm]{images/occlusion_new_raw.png} \par \includegraphics[width=8.6cm]{images/occlusion_new_proton.png} \includegraphics[width=8.6cm]{images/occlusion_new_gamma.png} \caption[]{Simulated $1e${\small-}1$\gamma${\small-}1{\it p} final state event example~(top). 
{\it p} score map~(bottom left), {\it p} scores decrease as the occluded region crosses the {\it p} pixels. $\gamma$ score map~(bottom right), $\gamma$ scores decrease as pixels in the broken shower are occluded and increase as the trunk region of the $e^-$ shower is occluded.} \label{fig:occlusion} \end{figure*}
In addition to the loss, an accuracy is calculated while the training is monitored. Accuracy is defined as the fraction of predicted labels matching the truth labels with a threshold value of 0.5 per event. MPID training curves of accuracies and losses are shown in Fig.~\ref{fig:mpid_training}. After epoch 29, the test curve continues to improve but does not keep up with the training curve. To avoid introducing bias from the training sample, we checked weights around epoch 29 and selected the one with the best accuracy on the test sample.
\subsection{MPID Occlusion Analysis}
We applied an occlusion analysis~\cite{occlusion} to determine whether the MPID network has calculated its predictions using image features associated with the underlying physics, for example dE/dx at the first pixels of a particle~(referred to as the trunk region of a particle), as opposed to other extraneous features in the image. The strategy is to feed the network a partially masked image and check how the MPID responds to it. The occlusion analysis places a 9$\times$9 pixel box in the top left corner of the image, which masks all pixels in the occlusion box with zero values. With this box placed, we then apply the MPID network to the masked image and plot the produced score value at the box's center pixel. This process is then repeated for each pixel as the occlusion box scans along the x and y axes of the image. Figure~\ref{fig:occlusion} shows an example of the occlusion box placed on the image. After scanning the whole image, we obtain score maps showing the MPID responses to each occlusion box placement location.
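The scanning procedure described above can be sketched in a few lines of Python. This is an illustrative sketch only, not the analysis code: the `score_fn` callable stands in for the trained MPID network (with its sigmoid applied) evaluated for one particle hypothesis.

```python
import numpy as np

def occlusion_map(score_fn, image, box=9):
    """Scan a (box x box) zero-valued mask across `image` and record, at
    each box-center pixel, the score returned by `score_fn` on the masked
    image. In the analysis, box=9 and `score_fn` would be the network's
    sigmoid output for one of the five particle hypotheses."""
    h, w = image.shape
    half = box // 2
    scores = np.zeros((h, w), dtype=np.float32)
    for y in range(h):
        for x in range(w):
            masked = image.copy()
            # occlude the box, clipped at the image boundary
            masked[max(0, y - half):y + half + 1,
                   max(0, x - half):x + half + 1] = 0.0
            scores[y, x] = score_fn(masked)
    return scores
```

Evaluating this map once per particle hypothesis yields score maps of the kind shown in Fig.~\ref{fig:occlusion}.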
A lowered score for a particular pixel in occlusion images indicates that the masked region contains topological information valuable for determining the identity of that particular particle. A simulated interaction image with one $e^-$, one $\gamma$ and one {\it p} at the same vertex, shown in Fig.~\ref{fig:occlusion}, is chosen to demonstrate the occlusion analysis. The bottom left panel in Fig.~\ref{fig:occlusion} shows the {\it p} scores from the occlusion study on the input image. The {\it p} score drops significantly as the proton track's Bragg peak region, where strong {\it p} dQ/dx features exist, is masked. This indicates that the MPID network is properly identifying and leveraging features associated with the {\it p}'s unique energy deposition density profile. It can also be seen that a few pixels in the circle with very high pixel values in the pictured shower are mildly misinterpreted as {\it p}-like features. The bottom right panel in Fig.~\ref{fig:occlusion} shows the $\gamma$ scores from the occlusion analysis of the same input image. From the occlusion image, it is clear that a few key physics features of $\gamma$-containing images have been properly learned by the MPID network. There are two critical features in the particle trunk region for $e^-/\gamma$ separation: the projected trunk region dE/dx difference and the presence or absence of a gap between the trunk and the interaction vertex. One can see the $\gamma$ score drops to near 0.3 when the trunk region of the $\gamma$ (rather than the gap region between the $\gamma$ and the vertex) is masked. We also applied an occlusion analysis to single-$\gamma$ images to confirm that $\gamma$ scores drop and $e^-$ scores increase as the $\gamma$ trunk region is masked. We observe in this example that the $\gamma$ score also increases to near one when nearby pixels connecting the {\it p} and $e^-$ are masked, since this produces more gaps between different particles.
The $e^-$ score does not change much as we move the occlusion box around, since there are overwhelming $e^-$-like features in the image from both the $e^-$ and the $\gamma$.
\section{Performance on Simulation} \label{sec:mc}
To provide a first look at the capabilities of the trained MPID algorithm, we present particle score results returned from the test images generated using the same method as applied in producing the training images. This section is divided into discussions of individual final state vertex and particle topologies of interest to MicroBooNE physics analyses. We generated two test samples with particles of $1\mu${\small-}$1p$ and $1e${\small-}$1p$ in the final state, which are not used in training. 10,000 events are generated in each sample. These samples are generated with the same customized event generator described in Section~\ref{training_sample}. Vertices are uniformly distributed in the detector with one proton and one corresponding lepton. Kinetic energies of the protons are between 50~MeV and 400~MeV, while kinetic energies of leptons are generated between 50~MeV and 1~GeV. The $1e${\small-}$1p$ final state dataset has a similar final state as the target events of MicroBooNE's deep learning based LEE $1e${\small-}$1p$ analysis. The $1\mu${\small-}$1p$ dataset has a final state similar to a MicroBooNE $\nu_{\mu}$ selection analysis, described in Section~\ref{sec:1u1p}, that will be used to constrain the beam-intrinsic backgrounds in the LEE search. Studying the performance of MPID with these two simulated datasets shows the general level of expected performance of the network when signal (or background) pixels in input $\nu_e$ and $\nu_{\mu}$ images are included (or removed).
\subsection{$1\mu${\small-}$1p$ Simulated Sample}
Figure~\ref{fig:mpid-val-1mu1p} shows stacked MPID scores of five particle hypotheses for the $1\mu${\small-}$1p$ simulated test dataset.
One can see that the MPID network provides good separation between track-like and shower-like particles with {\it p} and $\mu^-$ scores concentrated near one and $e^-$ and $\gamma$ piled up near zero. The plot also shows a good separation between $\mu^-$ and $\pi^\pm$ using MPID, with a low score distribution for $\pi^\pm$. Separation between $\mu^-$ and $\pi^\pm$ comes from the fact that $\pi^\pm$ have higher rates of nuclear scattering than $\mu^-$, and the $\pi^\pm$ can have a kink point where they decay, as noted in Ref.~\cite{ub_singlePID}. The network is likely keying primarily on visible kinks in a particle's trajectory in order to identify $\pi^\pm$, and on the absence of visible kinks to identify $\mu^-$.
\begin{figure}[htb!pb] \centering \includegraphics[width=8.6cm]{images/MPID_stack_1mu1p.png} \caption{~MPID score distributions for the probabilities of {\it p}, $\mu^-$, $e^-$, $\gamma$, $\pi^\pm$ on the $1\mu${\small-}$1p$ validation sample. \label{fig:mpid-val-1mu1p}} \end{figure}
\begin{figure}[htb!pb] \centering \includegraphics[width=8.6cm]{images/1mu1p_effi_misid_tracks.png} \vspace{1pt} \includegraphics[width=8.6cm]{images/1mu1p_effi_misid_showers.png} \caption[]{MPID efficiency and mis-ID curves for track-like particles~(top) of {\it p}, $\mu^-$ and $\pi^\pm$ on the $1\mu${\small-}$1p$ validation sample. MPID efficiency and mis-ID curves for shower-like particles~(bottom) of $e^-$ and $\gamma$ on the $1\mu${\small-}$1p$ validation sample.} \label{fig:mpid-effi-1mu1p} \end{figure}
To perform particle identification as part of a neutrino event selection analysis, a set of selections is usually applied to particle score variables; these cuts will have associated impacts on total signal selection efficiencies and on the probability of mis-identifying background events as signal events.
Figure~\ref{fig:mpid-effi-1mu1p} shows the correlation curves between efficiency/misidentification~(mis-ID) rates and MPID scores for track-like particles in the $1\mu${\small-}$1p$ dataset. The cut value for each particle score is varied between 0 and 1 with a step size of 0.01. For example, the green dotted line shows the efficiency of correctly predicting a {\it p} in the image at each {\it p} score cut value. Meanwhile, the orange dotted line shows the mis-ID rate of falsely predicting a $\pi^\pm$ in the image at different $\pi^\pm$ score cut values. Figure~\ref{fig:mpid-effi-1mu1p} also shows the correlation curves between mis-ID rates and MPID scores for shower-like particles in the $1\mu${\small-}$1p$ dataset. The mis-ID rates are both extremely low for having either shower-like particle in the $1\mu${\small-}$1p$ sample. Table~\ref{tab:effi_1mu1p} provides another view of Fig.~\ref{fig:mpid-effi-1mu1p}, showing the efficiencies of correctly labelling {\it p} and $\mu^-$, as well as mis-ID rates of falsely labelling $e^-$ and $\gamma$ in the $1\mu${\small-}$1p$ dataset at score cut values of 0.1, 0.3, 0.5, 0.7 and 0.9. At a cut score of 0.5, efficiencies of identifying a {\it p} and $\mu^-$ are at 82\% and 85\%, while mis-ID rates of shower-like particles are 1\% or less. It is also worth noting that a cut score of 0.5 on $\pi^{\pm}$ produces a sufficiently low mis-ID rate of 13\%.
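Curves of this kind can be produced from per-event scores with a simple scan over cut values. The sketch below is illustrative only, assuming nothing beyond arrays of scores and truth flags; it is not the analysis code itself.

```python
import numpy as np

def rate_vs_cut(scores, event_mask, cuts=None):
    """Fraction of selected events whose score exceeds each cut value.
    With `event_mask` selecting events that truly contain the particle,
    this gives the efficiency; with the complementary mask selecting
    events that do not, it gives the mis-ID rate."""
    if cuts is None:
        cuts = np.arange(0.0, 1.0, 0.01)  # step size of 0.01, as in the text
    sel = np.asarray(scores)[np.asarray(event_mask)]
    return cuts, np.array([(sel > c).mean() for c in cuts])
```

Plotting the returned rates against the cut values for each particle hypothesis reproduces the structure of the efficiency and mis-ID curves.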
\begin{table}[htb!pb] \caption{\label{tab:effi_1mu1p} Efficiencies and mis-ID rates for the $1\mu${\small-}$1p$ sample at specific selection scores.} \begin{ruledtabular} \begin{tabular}{c|cp{1cm}|ccc} MPID & \multicolumn{2}{c}{Efficiency} & \multicolumn{3}{c}{mis-ID rate} \\ Cut Value & {\it p} & $\mu^-$ & $e^-$ & $\gamma$ & $\pi^\pm$\\ \hline 0.1 & 0.95 & 0.97 & 0.07 & 0.05 & 0.54\\ 0.3 & 0.87 & 0.92 & 0.03 & 0.00 & 0.24\\ 0.5 & 0.82 & 0.85 & 0.01 & 0.00 & 0.13\\ 0.7 & 0.76 & 0.75 & 0.00 & 0.00 & 0.06\\ 0.9 & 0.62 & 0.45 & 0.00 & 0.00 & 0.01\\ \end{tabular} \end{ruledtabular} \end{table}
Figure~\ref{fig:mu_score_mu_eng} shows the correlation between $\mu^-$/$\pi^\pm$ scores and the $\mu^-$ kinetic energy using the same $1\mu${\small-}$1p$ simulation of 10,000 events. One can see that when the $\mu^-$ particles have low kinetic energy and produce fewer $\mu^-$-like pixels in the image, the $\mu^-$ score decreases. Meanwhile, $\pi^\pm$ scores for the same dataset appear to be comparatively stable across all tested muon energies.
\begin{figure}[htb!pb] \centering \includegraphics[width=8.6cm]{images/mu_score_mu_eng.png} \vspace{1pt} \includegraphics[width=8.6cm]{images/pi_score_mu_eng.png} \caption[]{Muon score vs. muon kinetic energy~(top) and charged pion score vs. muon kinetic energy~(bottom) for the $1\mu${\small-}$1p$ simulation. Red dots indicate the average score in the vertical bin.} \label{fig:mu_score_mu_eng} \end{figure}
\subsection{$1e${\small-}$1p$ Simulated Sample}
Figure~\ref{fig:mpid-val-1e1p} shows stacked MPID score distributions for the simulated $1e${\small-}$1p$ dataset. MPID correctly calculates high scores for the signal particles {\it p} and $e^-$. The network also shows good separation for the track-like hypotheses, deriving low scores for $\mu^-$ and $\pi^\pm$.
The MPID CNN also shows good separation between shower-like particles when $e^-$'s are present in the image: derived scores for $\gamma$ are clustered close to zero, while $e^-$ scores are clustered around unity. The correlation curves between efficiency/mis-ID rates and MPID scores for track-like particles in the $1e${\small-}$1p$ dataset are given in Fig.~\ref{fig:mpid-effi-1e1p}. Efficiencies for {\it p} in the image are much higher than the mis-ID rates for containing a $\mu^-$ or $\pi^\pm$ in the image. The capability to discriminate between {\it p} and $\mu^-$ appears to be particularly high, while {\it p}/$\pi^\pm$ separation also remains high. This difference in performance between $\mu^-$ and $\pi^\pm$ should not be too surprising given the level of $\pi^\pm$-$\mu^-$ separation demonstrated in the previous section. Figure~\ref{fig:mpid-effi-1e1p} also shows correlations between efficiency/mis-ID rates and MPID scores for the shower-like particles in the $1e${\small-}$1p$ dataset. This plot shows that MPID should produce good separation between $e^-$ and $\gamma$ containing events.
\begin{figure}[htb!pb] \centering \includegraphics[width=8.6cm]{images/MPID_stack_1e1p.png} \caption{~MPID score distributions for the probabilities of {\it p}, $\mu^-$, $e^-$, $\gamma$, $\pi^\pm$ on the $1e${\small-}$1p$ validation sample. \label{fig:mpid-val-1e1p}} \end{figure}
\begin{figure}[htb!pb] \centering \includegraphics[width=8.6cm]{images/1e1p_effi_misid_tracks.png} \vspace{1pt} \includegraphics[width=8.6cm]{images/1e1p_effi_misid_showers.png} \caption[]{MPID efficiency and mis-ID curves for track-like particles~(top) of {\it p}, $\mu^-$ and $\pi^\pm$ on the $1e${\small-}$1p$ validation sample.
MPID efficiency and mis-ID curves for shower-like particles~(bottom) of $e^-$ and $\gamma$ on the $1e${\small-}$1p$ validation sample.} \label{fig:mpid-effi-1e1p} \end{figure}
\begin{table}[htb!pb] \caption{\label{tab:effi_1e1p}% Efficiencies and mis-ID rates for the $1e${\small-}$1p$ sample at specific selection scores.} \begin{ruledtabular} \begin{tabular}{c|cp{1cm}|ccc} MPID & \multicolumn{2}{c}{Efficiency} & \multicolumn{3}{c}{mis-ID rate} \\ Cut Value & {\it p} & $e^-$ & $\mu^-$ & $\gamma$ & $\pi^\pm$\\ \hline 0.1 & 0.95 & 0.98 & 0.15 & 0.68 & 0.44\\ 0.3 & 0.87 & 0.95 & 0.03 & 0.31 & 0.18\\ 0.5 & 0.82 & 0.91 & 0.01 & 0.15 & 0.09\\ 0.7 & 0.76 & 0.84 & 0.00 & 0.06 & 0.04\\ 0.9 & 0.63 & 0.64 & 0.00 & 0.02 & 0.01\\ \end{tabular} \end{ruledtabular} \end{table}
Table~\ref{tab:effi_1e1p} shows the efficiencies for labelling {\it p} and $e^-$, and mis-ID rates for labelling $\mu^-$ and $\gamma$ in the $1e${\small-}$1p$ dataset at particle score cut values of 0.1, 0.3, 0.5, 0.7 and 0.9. Efficiencies of correctly finding the {\it p} and $e^-$ are at 82\% and 91\% with particle score cut values at 0.5, while impressively small mis-ID rates are found for $\mu^-$ (1\%) and $\gamma$ (15\%). As in the previous section for the $1\mu${\small-}$1p$ selection, $\pi^\pm$ mis-ID rates remain low, with a 9\% mis-ID rate at a cut score of 0.5. Figure~\ref{fig:e_score_e_eng} shows the correlation between $e^-$/$\gamma$ scores and $e^-$ kinetic energy. One can see the MPID network has an overall high $e^-$ score until the $e^-$ kinetic energy falls towards the critical energy in liquid argon and the $e^-$ becomes less shower-like. In a related sense, $\mu^-$ scores for low energy $1e${\small-}$1p$ interactions are found to be slightly higher than for high energy ones. Meanwhile, the $\gamma$ score for these events has a positive correlation with $e^-$ kinetic energy, since high energy $e^-$ are more likely to experience substantial amounts of radiative energy loss.
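The average-score points overlaid on the score-versus-energy distributions (the red dots of Figs.~\ref{fig:mu_score_mu_eng} and \ref{fig:e_score_e_eng}) amount to a profile of the 2D distribution. A minimal numpy sketch, with illustrative bin edges and not tied to the actual plotting code:

```python
import numpy as np

def score_profile(energies, scores, bin_edges):
    """Mean particle score in each kinetic-energy bin, i.e. the profile
    overlaid as red dots on the 2D score-vs-energy distributions."""
    idx = np.digitize(energies, bin_edges) - 1
    means = np.full(len(bin_edges) - 1, np.nan)  # NaN marks empty bins
    for b in range(len(bin_edges) - 1):
        in_bin = idx == b
        if in_bin.any():
            means[b] = float(np.mean(np.asarray(scores)[in_bin]))
    return means
```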
\begin{figure}[htb!pb] \centering \includegraphics[width=8.6cm]{images/e_score_e_eng.png} \vspace{1pt} \includegraphics[width=8.6cm]{images/gamma_score_e_eng.png} \caption[]{Electron score vs. electron kinetic energy~(top) and photon score vs. electron kinetic energy~(bottom) for the $1e${\small-}$1p$ simulation. Red dots indicate the average score in the vertical bin.} \label{fig:e_score_e_eng} \end{figure}
\section{Use of MPID In A Low Energy Excess Measurement} \label{sec:nue_mc}
In the previous sections, we have demonstrated the MPID network's utility in particle identification for both track and shower topologies in LArTPC images; its equivalent performance on data and simulated events is demonstrated in Section~\ref{sec:data}. We will now apply the trained MPID network to simulated BNB $\nu_e$ and $\nu_{\mu}$ interactions overlaid with beam-off cosmic event images to demonstrate the ability of the MPID network to aid in event selection for MicroBooNE's deep learning-based $1e${\small-}$1p$ low-energy excess search.
\subsection{Simulated Intrinsic $\nu_e$ vs. $\nu_\mu$CCQE and $\nu_\mu\pi^0$}
We generated simulated neutrino events to evaluate the performance of MPID in the $1e${\small-}$1p$ selection at separating intrinsic $\nu_e$ events from the beam-intrinsic BNB backgrounds of $\nu_{\mu}$CCQE interactions and of neutrino interactions with one or more $\pi^0$s in the final state~($\nu_\mu\pi^0$). Samples for these three datasets are produced using the standard GENIE v3.0.6~\cite{genie} neutrino interaction generator and filtered using truth-level information. In these samples, we require the lepton kinetic energy to be greater than 35~MeV and the {\it p} kinetic energy to be greater than 60~MeV. The minimum kinetic energy thresholds were set in order to choose events whose lepton and {\it p} trajectories are long enough to be reconstructed by our deep learning based vertex finding and particle reconstruction algorithms~\cite{vtxfinding}.
Samples are then processed using the reconstruction algorithms to identify candidate interaction vertices and nearby related particles. Finally, input images are generated with pixels from only the reconstructed interaction final-state particles; each interaction is required to have two particles at this stage. Images are centered at the pixel-weighted center of reconstructed interactions. No other selection cuts beyond the truth-level filtration described above are applied to the samples.
\begin{figure}[htb!pb] \centering \includegraphics[width=8.6cm]{images/eminus_nue_numupi0.png} \caption{Electron score of $\nu_e$ intrinsic events and $\nu_\mu\pi^0$ events. Both datasets are generated with the GENIE neutrino generator and filtered using truth-level information. Presented events have a reconstructed vertex.\label{fig:mc_eminus_nue_pi}} \end{figure}
Figure~\ref{fig:mc_eminus_nue_pi} shows the $e^-$ score distribution of reconstructed events from the $\nu_e$ and $\nu_\mu\pi^0$ datasets. A good separation is visible between these two event classes. For example, with only an $e^-$ cut score of 0.5, 83\% of $\nu_\mu$NC$\pi^0$ and 86\% of $\nu_\mu$CC$\pi^0$ events are rejected, while 81\% of true $1e${\small-}$1p$ events are selected. It seems likely that further gains in background rejection could be achieved by also considering scores for other particles and by using different input-image pixel inclusion settings. Previous discussion from the occlusion analysis presented in Section~\ref{sec:mc} provides some level of insight into the causes of the substantial discrimination shown in Fig.~\ref{fig:mc_eminus_nue_pi}. In particular, $\nu_e$ interactions will contain a shower-like object with a trunk directly connected to another particle, a feature that was clearly learned by the MPID network. This is not the case for most $\gamma$ rays present in $\nu_\mu\pi^0$ interactions.
Another critical parameter for separating $e^-$- and $\pi^0$-including events is the energy deposition density, dE/dx, along this vertex-connected shower trunk; the trunk region information is usually well-reconstructed, since it is almost always directly attached to the neutrino candidate vertex. Some of the discrimination in Fig.~\ref{fig:mc_eminus_nue_pi} may thus also arise from the network's ability to recognize the high trunk dE/dx of vertex-connected showers arising from quickly-converting $\pi^0$ $\gamma$ rays.
\begin{figure}[htb!pb] \centering \includegraphics[width=8.6cm]{images/eminus_numuccqe_nue.png} \caption{Electron score between $\nu_e$ intrinsic events and $\nu_\mu$ CCQE events. Both datasets are generated with the GENIE neutrino generator and filtered using truth-level information. Presented events have a reconstructed vertex.\label{fig:mc_eminus_nue_ccqe}} \end{figure}
The $e^-$ score can also be applied to separate $1e${\small-}$1p$ and $1\mu${\small-}$1p$ events. The separation is shown in Fig.~\ref{fig:mc_eminus_nue_ccqe}. The $\nu_e$ and $\nu_{\mu}$CCQE events are well separated using the $e^-$ score calculated by the MPID network. For example, with only an $e^-$ cut score of 0.2, 91\% of true $1e${\small-}$1p$ events are selected, while 95\% of $\nu_{\mu}$CCQE events are rejected. This discrimination ability almost certainly arises from the lack of shower-like topologies in the $\nu_{\mu}$CCQE interaction images.
\subsection{Simulated Intrinsic $\nu_e$ vs. Cosmic Event}
Due to the lack of substantial overburden and the long readout time, cosmic rays could provide a substantial background to a BNB-based $1e${\small-}$1p$ $\nu_e$ measurement in MicroBooNE. As most of this cosmic ray activity is induced by $\mu^-$, it is expected that the presence of a {\it p} in the signal's final state will aid in distinguishing the two categories.
To test the MPID network's ability to discriminate the signal's {\it p} particle content, we generated a simulated intrinsic $\nu_e$ dataset with cosmic data overlay, in addition to another event set consisting purely of beam-off cosmic triggers. For both datasets, we applied the vertex finding and particle reconstruction algorithms developed for two-track events, as described in Ref.~\cite{vtxfinding}; in particular, each image is required to have exactly two reconstructed particles connected to the candidate neutrino interaction vertex. As in the sub-section above, no selection cuts are applied beyond truth-level event filtration. Figure~\ref{fig:mc_proton_nue_cosmic} shows the {\it p} score distributions on images from the intrinsic $\nu_e$ dataset with cosmic overlay and the pure cosmics dataset. One can see that the majority of pure cosmic dataset events reconstructed as two-particle signal events have {\it p} scores below 0.2. Meanwhile, the majority of reconstructed $\nu_e$ intrinsic events have {\it p} scores near one. For example, with only a {\it p} cut score of 0.5, 81\% of true $1e${\small-}$1p$ events are selected, while 79\% of cosmic events are rejected.
\begin{figure}[htb!pb] \centering \includegraphics[width=8.6cm]{images/proton_nue_cosmic.png} \caption{Proton score of $\nu_e$ intrinsic events and beam-off cosmic data. The $\nu_e$ dataset is generated with the GENIE neutrino generator and filtered using truth-level information. Presented events have a reconstructed vertex. \label{fig:mc_proton_nue_cosmic}} \end{figure}
Investigation of information from prior reconstruction stages and hand-scanning of event displays indicates that the small peak in {\it p} score close to zero in the $\nu_e$ intrinsic dataset is due to inefficiencies in {\it p} reconstruction, as shown in Fig.~21(a) of~Ref.~\cite{vtxfinding}.
Similar investigations show that the small peak of {\it p} score close to one in the cosmic sample is introduced by cosmics with small incident angles relative to the collection plane; these non-{\it p} tracks are often topologically compressed by reconstruction algorithms, giving them the appearance of short tracks with a proton-like Bragg peak. Thus, future improvements in lower-level signal processing and particle reconstruction are likely to further improve the cosmic discrimination shown in Fig.~\ref{fig:mc_proton_nue_cosmic}.
\section{Comparison of Data/Simulation Performance} \label{sec:data}
We prepared two different MicroBooNE LArTPC data samples to validate the performance of the MPID network on data. The MPID network was not employed in the selection of these data samples. The first data sample is a $1\mu${\small-}$1p$ enriched selection that uses a hybrid selection of a series of reconstruction algorithms~\cite{vtxfinding} and MicroBooNE's semantic segmentation network~\cite{ssnet}. This dataset is intended to be used in a MicroBooNE LEE $1e${\small-}$1p$ analysis to provide a data-based constraint on the BNB neutrino beam's intrinsic $\nu_e$ contamination. The second sample contains $\nu_\mu$ charged current interactions with a final-state $\pi^0$ ($\nu_\mu$CC$\pi^0$) as defined in Ref.~\cite{ub_ccpi0}. In this section we demonstrate that the MPID network works well on real LArTPC images. We show good agreement in MPID scores between data and simulation for the selected datasets. To enable data/simulation comparisons for these two event classes, we simulate neutrino interactions using the GENIE v3.0.6~\cite{genie} neutrino Monte Carlo generator. To accurately include on-surface cosmogenic backgrounds present in all MicroBooNE LArTPC images, beam-off data containing only cosmic rays is overlaid on simulated neutrino interaction images. Beam-off data is taken with cosmic ray triggers.
An overlay sample is a combination of GENIE simulated beam events and cosmic data events. This ensures that the reported particle score distributions for data and simulated images will be equally affected by the presence of cosmic rays. In the study of the $1\mu${\small-}$1p$ dataset, we apply the MPID network to processed images containing only wire signal activity associated with particles reconstructed at a candidate neutrino interaction vertex. In the study of the $\nu_\mu$CC$\pi^0$ dataset, we instead apply the MPID network to images made with all pixels near the reconstructed vertex; in this case, particle scores are completely independent of any previous reconstruction. The data/simulation comparison therefore also demonstrates the robustness of the MPID network to both `cleaned'~(input images containing only the reconstructed interactions) and potentially `polluted'~(input images also containing cosmic rays) input image types.
\subsection{$1\mu${\small-}$1p$ Enriched Data} \label{sec:1u1p}
The $1\mu${\small-}$1p$ enriched dataset is selected from a set of MicroBooNE beam-on data corresponding to $4.4 \times 10^{19}$ protons on target~(POT) in the BNB beam. These events consist of exactly two reconstructed particles -- ideally one {\it p} and one $\mu^-$ -- at the candidate interaction vertex. The selection consists of two steps. The first step involves a set of preliminary cuts based on optical information and interaction topology. Candidate $1\mu${\small-}$1p$ interactions are required to have a number of photoelectrons in the beam trigger window above a signal threshold. Interaction topology selections require candidates to be located inside the TPC with exactly two fully-contained reconstructed tracks. Topology selections also require an opening angle greater than 0.5~radians. The second step involves two boosted decision trees~(BDT) to make a final $1\mu${\small-}$1p$ selection.
The first BDT is trained to separate $1\mu${\small-}$1p$ from the cosmic backgrounds using a simulated $\nu_\mu$ sample and a beam-off cosmic ray only dataset. The second BDT is trained to separate $1\mu${\small-}$1p$ from non-signal neutrino interactions (i.e., non-charged current quasi-elastic (CCQE) $\nu_\mu$ interactions, off-vertex $\nu_\mu$ interactions and interactions missing more than 20\% energy in reconstruction) using a simulated $\nu_\mu$ sample. The preliminary and BDT selections will be documented in detail in future publications. The selection described above produces 478 data and 466 simulated input images for processing by the MPID network. In the simulated dataset, 94\% of these images contain true neutrino interactions. Among these, 314 events~(67\% of total images) contain exactly one reconstructable final-state {\it p} and one $\mu^-$. We produce the input images in three steps. First, the interaction vertex is located and any associated track-like particles are reconstructed using algorithms described in Ref.~\cite{vtxfinding}; two and only two reconstructed tracks are required. Next, a 512$\times$512 image is produced, centered at the pixel-weighted center of the reconstructed $1\mu${\small-}$1p$ event, computed with a flat weight for all non-zero pixels. Finally, to address noise-related features, a threshold is placed on the images with a minimum and maximum pixel value of 10 and 500, respectively. This procedure removes effects from pixels of unrelated interactions near the neutrino interaction vertex. Figure~\ref{fig:mpid-1u1p-evd} shows an example of a $1\mu${\small-}$1p$ image fed into the MPID network. The image is from the collection plane.
\begin{figure}[htb!pb] \centering \includegraphics[width=8.6cm]{images/1u1p_evd.png} \caption[]{Example of the input data image from the $1\mu${\small-}$1p$ selection. The image is centered at the non-zero pixel weight center. The image has 512$\times$512 pixels.
A zoom-in image of $250\times250$ pixels is shown for visualization.} \label{fig:mpid-1u1p-evd} \end{figure}
\begin{figure}[htb!pb] \centering \includegraphics[width=8.6cm]{images/proton_1mu1p_data.png} \vspace{1pt} \includegraphics[width=8.6cm]{images/muon_1mu1p_data.png} \caption[]{MPID proton score distribution (top) and muon score distribution (bottom) for selected $1\mu${\small-}$1p$ interactions. Simulation-predicted score distributions show satisfactory agreement with those realized in the $1\mu${\small-}$1p$ selection applied to MicroBooNE data. Plot error bars indicate data statistical errors, while hatched bands indicate statistical and/or systematic uncertainties in the simulated dataset. The $\chi^2$ calculation incorporates contributions from systematic and statistical uncertainties. The breakdown of interaction type is based on the predicted event classification for the initial neutrino interaction.} \label{fig:mpid-1u1p-proton-muon} \end{figure}
The top image of Fig.~\ref{fig:mpid-1u1p-proton-muon} shows the {\it p} score for the selected candidate $1\mu${\small-}$1p$ interactions, broken down into the true physics origin of each imaged vertex. The simulation predicts that true $1\mu${\small-}$1p$ charged-current neutrino interactions should cluster at high {\it p} score, with background processes (particularly cosmic processes) more evenly distributed across the score axis. In the data, a distinct peak is present at high {\it p} score, providing a strong indication of proton(s) being present in most of the images. The bottom panel of this figure shows the ratio of data to simulation versus the {\it p} score. We note that as we are primarily concerned with understanding the agreement in the distribution of scores from 0 to 1, discussion of the level of absolute agreement in normalization between data and simulation is beyond the scope of this study.
For each point, the data's statistical uncertainty is shown, along with the systematic uncertainty associated with flux and cross-section uncertainties. Beam flux uncertainties are evaluated by re-weighting events according to the properties of the hadrons that decay to produce the neutrinos. Cross section uncertainties are evaluated by re-weighting events according to the properties of the neutrino's interaction with an argon nucleus. Detector uncertainties are in development and are expected to not have a dominant systematic effect on MPID scores for $1e${\small-}$1p$ events. Good agreement is found between the data and simulation across the full range of {\it p} scores with flux and cross section uncertainties. This level of agreement was quantified by calculating the $\chi^2$ between the data and simulation distributions in the top panel of Fig.~\ref{fig:mpid-1u1p-proton-muon}. This $\chi^2$ includes both statistical and systematic uncertainties in the data and simulation. A $\chi^2$/NDF of 32.4/20 is found, indicating a comparable performance of the MPID on both data and simulation. Figure~\ref{fig:mpid-1u1p-proton-muon} also shows the $\mu^-$ score distribution for the same selected $1\mu${\small-}$1p$ interactions. A majority of events are found in the higher score region, indicating that the MPID algorithm has correctly identified the presence of $\mu^-$ in these images. The bottom panel again shows the ratio of data to simulation in the $\mu^-$ score distribution; systematic error bars are defined similarly as for the {\it p} score distribution. A $\chi^2$/NDF of 9.9/20 is found between the two distributions, indicating good MPID data-simulation agreement for the $\mu^-$ score.
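The $\chi^2$ comparison can be illustrated with a simplified bin-by-bin form in which statistical and systematic uncertainties are added in quadrature. The actual calculation may use a full covariance matrix with bin-to-bin correlations, so the sketch below is illustrative only.

```python
import numpy as np

def chi2_ndf(data, mc, stat_err, syst_err):
    """Diagonal chi-square between binned data and simulation, combining
    statistical and systematic uncertainties in quadrature per bin.
    NDF is taken here as the number of bins (illustrative choice)."""
    data, mc = np.asarray(data, float), np.asarray(mc, float)
    var = np.asarray(stat_err, float) ** 2 + np.asarray(syst_err, float) ** 2
    chi2 = float(np.sum((data - mc) ** 2 / var))
    return chi2, chi2 / len(data)
```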
\begin{figure}[htb!pb] \centering \includegraphics[width=8.6cm]{images/pion_1mu1p_data.png} \vspace{1pt} \includegraphics[width=8.6cm]{images/eminus_1mu1p_data.png} \vspace{1pt} \includegraphics[width=8.6cm]{images/gamma_1mu1p_data.png} \caption[]{Charged pion score distribution~(top), electron score distribution~(mid), and photon score distribution~(bottom) for selected $1\mu${\small-}$1p$ interactions. Score distributions are consistent with expectations for the $1\mu${\small-}$1p$ selection. Data and simulation agree well. $\chi^2$ calculations include systematic and statistical uncertainties.} \label{fig:mpid-1u1p-eminus-gamma-pion} \end{figure}
Figure~\ref{fig:mpid-1u1p-eminus-gamma-pion} shows the score distributions for particle types expected to be absent or present only in limited quantities in the selected $1\mu${\small-}$1p$ dataset: $\pi^{\pm}$, $e^-$, and $\gamma$. For $\gamma$ and $e^-$, the score distributions are peaked very close to zero, since input images have only track-like particles, and because, as demonstrated in Section~\ref{sec:mc}, discrimination between track-like $\mu^-$ and {\it p} particles and shower-like $\gamma$ and $e^-$ particles is expected to be high. Scores for track-like $\pi^{\pm}$ particles are also clustered towards zero, but with a broader overall width; this result also matches the expectations of Section~\ref{sec:mc}. The $\chi^2$/NDF values of 22.0/20, 27.0/20, and 15.8/20 for the data/simulation comparisons for $\gamma$, $\pi^{\pm}$, and $e^-$ indicate comparable performances of MPID on data and simulation. The MPID network appears to provide similar performance on both data and simulated neutrino interaction images containing primarily track-like final-state particles. This similarity in performance is achieved despite the input image's reliance on other reconstruction algorithms to `remove' pixel content not related to final-state particles connected to the candidate neutrino interaction vertex.
This indicates that not only the MPID algorithm, but also the upstream reconstruction algorithms, treat data and simulated LArTPC images on an equal footing. \subsection{$\nu_\mu$CC$\pi^0$ Enriched Data} A study of $\pi^0$-producing charged current $\nu_\mu$ ($\nu_\mu$CC$\pi^0$) interactions is useful in providing a similar data/simulation agreement validation for images that also contain shower-like objects, as is expected from charged-current $\nu_e$ interactions. For this study, we select events from the same dataset used in MicroBooNE's previous $\nu_\mu$CC$\pi^0$ measurement~\cite{ub_ccpi0}. The primary reconstruction toolkits used to develop selection metrics for these events are Pandora~\cite{pandora} and SSNet~\cite{ssnet}. Selected events are primarily required to have two showers close to the interaction vertex. This requirement makes this dataset distinct from a $1e${\small-}$1p$ selection, where one and only one shower is allowed, which must be directly attached to the vertex. In this way, in studying MPID performance on the $\nu_\mu$CC$\pi^0$ data sample, we not only demonstrate data/simulation performance, but also show how the network can help to reduce a major intrinsic background to the $\nu_e$ channel: $\pi^0$-producing interactions. Input images from $\nu_\mu$CC$\pi^0$ candidates are generated by cropping a 512 $\times$ 512 square image centered at the reconstructed interaction vertex, rather than at the image's pixel-weighted center as in the $1\mu${\small-}$1p$ images. To ensure that $\pi^0$ decay $\gamma$s are not scrubbed from the image, no additional pixel `cleaning' is applied. This means that cosmic rays and other interactions unrelated to the vertex remain in input images, presenting an additional challenge to the MPID network's performance. The same noise filtering metric as described for the $1\mu${\small-}$1p$ dataset is applied to the images.
Figure~\ref{fig:mpid-pi0-evd} shows an example of a $\nu_\mu$CC$\pi^0$-containing image fed into the MPID network. The image is from the collection plane. \begin{figure}[htb!pb] \centering \includegraphics[width=8.6cm]{images/ccpi0_evd.png} \caption[]{Example of the input data image from the $\nu_\mu$CC$\pi^0$ selection. The image is centered at the reconstructed vertex. The image has $512\times512$ pixels. A zoomed-in image of $250\times250$ pixels is shown for visualization.} \label{fig:mpid-pi0-evd} \end{figure} The selection and dataset described above produce 2051 data and 2011 simulated input images for processing by the MPID network. According to the simulation, 41\% of total events have $\nu_\mu$CC$\pi^0$ interactions and 60\% of events contain $\pi^0$-producing interactions~(including the $\nu_\mu$CC$\pi^0$ interactions). \begin{figure}[htb!pb] \centering \includegraphics[width=8.6cm]{images/ccpi0_electron_stack_tight.png} \vspace{1pt} \includegraphics[width=8.6cm]{images/ccpi0_gamma_stack_tight.png} \vspace{1pt} \caption{Electron score distribution (top) and photon score distribution (bottom) for selected $\nu_\mu$CC$\pi^0$ interactions. Score distributions agree with the $\nu_\mu$CC$\pi^0$ selection, and data and simulation agree well. $\chi^2$ calculations include systematic and statistical uncertainties.} \label{fig:mpid-ccpi0-eminus-gamma} \end{figure} Figure~\ref{fig:mpid-ccpi0-eminus-gamma} shows the score distribution for having any $e^-$ in the images cropped from the $\nu_\mu$CC$\pi^0$ sample. The score indicates a generally low probability of having $e^-$-like features in the data and simulated images. As a comparison, Fig.~\ref{fig:mpid-ccpi0-eminus-gamma} also shows the score distribution for having any $\gamma$-rays in the images. One can see a much higher score distribution for the $\gamma$ existence case, as expected based on the event filtering criteria described above.
Figure~\ref{fig:mpid-ccpi0-muon} shows the score distribution for having any $\mu^-$ in the $\nu_\mu$CC$\pi^0$ images. The score generally indicates a high probability of having $\mu^-$-like features in data and simulation. In particular, it shows a difference between the CC- and NC-$\pi^0$ events in the low $\mu^-$ score region. \begin{figure}[htb!pb] \centering \includegraphics[width=8.6cm]{images/ccpi0_muon_stack_tight.png} \caption{ Muon score distribution for selected $\nu_\mu$CC$\pi^0$ interactions. Score distributions for data and simulation agree well using the $\nu_\mu$CC$\pi^0$ selection. $\chi^2$ calculation includes systematic and statistical uncertainties.} \label{fig:mpid-ccpi0-muon} \end{figure} The bottom panels of Fig.~\ref{fig:mpid-ccpi0-eminus-gamma} and Fig.~\ref{fig:mpid-ccpi0-muon} show the ratio of data to simulation versus the $e^-$, $\gamma$ and $\mu^-$ scores. As in the $1\mu${\small-}$1p$ dataset, we scale the normalization of $\nu_\mu$CC$\pi^0$ interactions based on preliminary measurements by the MicroBooNE collaboration. Systematic uncertainties are also included in the same manner as described for the $1\mu${\small-}$1p$ dataset. Comparable performance between data and simulation can be seen, with $\chi^2$/NDFs of 43.6/~39 for the $e^-$ score, 42.8/~39 for the $\gamma$ score and 24.0/~39 for the $\mu^-$ score. Thus, this study demonstrates that the MPID algorithm can reliably identify shower-related particle content in images without introducing biases between neutrino data and simulation predictions. This is achieved despite the presence of additional incidental pixel activity in interaction candidate images. \section{Conclusion} We have developed a CNN-based multiple particle identification network, MPID, and applied it to images of event interactions in MicroBooNE data.
This is the first demonstration of the performance of a CNN that incorporates systematic uncertainties in LArTPC data, and the first use of CNNs to perform particle identification on real LArTPC data. The network takes a 512$\times$512 LArTPC image and calculates the probability scores that the image contains a {\it p}, $e^-$, $\gamma$, $\mu^-$, or $\pi^\pm$. The training images are generated with a customized event generator that concatenates particles at the same vertex. The code for the network and for producing the training samples is made available in MPID~\cite{mpid_github} and LArSoft~\cite{uboonecode}. 10,000 $1e${\small-}$1p$ and $1\mu${\small-}$1p$ images are used to benchmark the network performance on simulated interactions. A score cut of 0.5 is applied to each particle score. In the $1\mu${\small-}$1p$ sample, 8177 and 8522 events (82\% and 85\%) passed the selection for {\it p} and $\mu^-$, respectively, while 117 and 4 events (1\% and 0\%) are mis-identified as $e^-$ and $\gamma$. In the $1e${\small-}$1p$ sample, 8165 and 9079 events (82\% and 91\%) passed the selection for {\it p} and $e^-$, while 1546 and 106 events (15\% and 1\%) are mis-identified as $\gamma$ and $\mu^-$. Satisfactory agreement in all score distributions is found between data and simulation despite the many complexities of the MicroBooNE liquid argon TPC response, including inactive wire regions~\cite{elec_noise}, electronics noise~\cite{elec_noise}, signal processing~\cite{sp_1, sp_2}, and space charge effects~\cite{spacecharge}. We also demonstrated the metrics and performance of applying the MPID network to BNB beam data from MicroBooNE, which also illustrated the MPID network's clear capabilities in particle discrimination. When we take reconstructed vertex activity as input in filtered $1\mu${\small-}$1p$ candidate event images, MPID score distributions are indeed high for $p$ and $\mu^-$, and low for $e^-$, $\gamma$ and $\pi^\pm$.
When we instead take all pixel activity as input in filtered images containing $\pi^0$-produced $\gamma$ rays, we see large differences between obtained $e^-$ and $\gamma$ scores. By applying these demonstrated particle identification capabilities to simulated BNB $\nu_e$ and $\nu_{\mu}$ interactions, we have shown that this validated tool can play an important role in achieving a successful low-energy electron-like excess measurement in MicroBooNE.
\section{Discussion} In this paper we have presented techniques that can speed up the bottleneck convolution operations in the first layers of a CNN by a factor of $2-3\times$, with negligible loss of performance. We also show that our methods reduce the memory footprint of weights in the first two layers by a factor of $2-3\times$ and in the fully connected layers by a factor of $5-13\times$. Since the vast majority of weights reside in the fully connected layers, compressing only these layers translates into significant savings, which would facilitate mobile deployment of convolutional networks. These techniques are orthogonal to other approaches for efficient evaluation, such as quantization or working in the Fourier domain. Hence, they can potentially be used together to obtain further gains. An interesting avenue of research to explore in further work is the ability of these techniques to aid in regularization either during or post training. The low-rank projections effectively decrease the number of learnable parameters, suggesting that they might improve generalization ability. The regularization potential of the low-rank approximations is further motivated by two observations. The first is that the approximated filters for the first convolutional layer appear to be cleaned-up versions of the original filters. Additionally, we noticed that we sporadically achieve better test error with some of the more conservative approximations. \section{Experiments}\label{sec:experiments} We use the 15 layer convolutional architecture of \cite{zeiler2013visualizing}, trained on the ImageNet 2012 dataset \cite{imagenet}. The network contains 4 convolutional layers, 3 fully connected layers and a softmax output layer. We evaluated the network on both CPU and GPU platforms. All measurements of prediction performance are with respect to the 20K validation images from the ImageNet12 dataset.
We present results showing the performance of the approximations described in Section \ref{sec:approx_tech} in terms of prediction accuracy, speedup gains and reduction in memory overhead. All of our fine-tuning results were achieved by training with less than 2 passes using the ImageNet12 training dataset. Unless stated otherwise, classification numbers refer to those of fine-tuned models. \subsection{Speedup} The majority of forward propagation time is spent on the first two convolutional layers (see Supplementary Material for a breakdown of time across all layers). Because of this, we restrict our attention to the first and second convolutional layers in our speedup experiments. However, our approximations could easily be applied to convolutions in upper layers as well. We implemented several CPU and GPU approximation routines in an effort to achieve empirical speedups. Both the baseline and approximation CPU code is implemented in C++ using the Eigen3 library \cite{eigenweb} compiled with Intel MKL. We also use Intel's implementation of OpenMP and multithreading. The baseline gives comparable performance to highly optimized MATLAB convolution routines and all of our CPU speedup results are computed relative to this. We used Alex Krizhevsky's CUDA convolution routines \footnote{\url{https://code.google.com/p/cuda-convnet/}} as a baseline for GPU comparisons. The approximation versions are written in CUDA. All GPU code was run on a standard nVidia Titan card. We have found that in practice it is often difficult to achieve speedups close to the theoretical gains based on the number of arithmetic operations (see Supplementary Material for a discussion of theoretical gains). Moreover, different computer architectures and CNN architectures afford different optimization strategies, making most implementations highly specific.
However, regardless of implementation details, all of the approximations we present reduce both the number of operations and number of weights required to compute the output by at least a factor of two, often more. \subsubsection{First Layer} The first convolutional layer has 3 input channels, 96 output channels and 7x7 filters. We approximated the weights in this layer using the monochromatic approximation described in Section \ref{subsec:monochromatic}. The monochromatic approximation works well if the color components span a small number of one dimensional subspaces. Figure \ref{fig:RGB_components} illustrates the effect of the monochromatic approximation on the first layer filters. \begin{figure}[t] \centering \begin{minipage}{\textwidth} \includegraphics[width=0.55\linewidth]{img/RGB_components_stacked.pdf} \includegraphics[width=0.45\linewidth]{img/denoised_stacked.pdf} \end{minipage} \vspace{-3mm} \label{fig:RGB_components} \caption{Visualization of the 1st layer filters. {\bf (Left)} Each component of the 96 7x7 filters is plotted in RGB space. Points are colored based on the output filter they belong to. Hence, there are 96 colors and $7^2$ points of each color. Leftmost plot shows the original filters and the right plot shows the filters after the monochromatic approximation, where each filter has been projected down to a line in colorspace. {\bf (Right)} Original and approximate versions of a selection of 1st layer filters.} \vspace{-3mm} \end{figure} The only parameter in the approximation is $C'$, the number of color channels used for the intermediate representation. As expected, the network performance begins to degrade as $C'$ decreases. The number of floating point operations required to compute the output of the monochromatic convolution is reduced by a factor of $2-3\times$, with the larger gain resulting for small $C'$. 
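A rough numpy sketch of this monochromatic construction follows: each output filter's color weights are projected onto a single color direction via a rank-1 SVD, and the color directions are then grouped into $C'$ intermediate channels with a naive k-means. Variable names, the clustering details, and the tensor sizes are illustrative, not the exact pipeline used in the paper.

```python
import numpy as np

def monochromatic_approx(W, n_colors, kmeans_iters=20):
    """W: (C, X, Y, F) first-layer weights. Rank-1 color projection per
    output filter, then cluster color directions into n_colors channels."""
    C, X, Y, F = W.shape
    colors = np.zeros((F, C))    # per-filter color direction (scaled)
    mono = np.zeros((F, X * Y))  # per-filter monochromatic spatial filter
    for f in range(F):
        U, s, Vt = np.linalg.svd(W[:, :, :, f].reshape(C, X * Y),
                                 full_matrices=False)
        colors[f] = U[:, 0] * s[0]   # leading color component
        mono[f] = Vt[0]              # leading spatial pattern
    # Naive k-means over color directions: filters in the same cluster
    # share one intermediate color channel.
    rng = np.random.default_rng(0)
    centers = colors[rng.choice(F, n_colors, replace=False)]
    for _ in range(kmeans_iters):
        assign = np.argmin(((colors[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(n_colors):
            if np.any(assign == j):
                centers[j] = colors[assign == j].mean(axis=0)
    # Reconstruct: W~[c,x,y,f] = centers[assign[f], c] * mono[f, xy]
    W_approx = np.einsum('fc,fs->cfs', centers[assign], mono).reshape(C, F, X, Y)
    return W_approx.transpose(0, 2, 3, 1), assign

W = np.random.default_rng(1).standard_normal((3, 7, 7, 96))
W_approx, assign = monochromatic_approx(W, n_colors=6)
```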
Figure \ref{fig:mono_speedups} shows the empirical speedups we achieved on CPU and GPU and the corresponding network performance for various numbers of colors used in the monochromatic approximation. Our CPU and GPU implementations achieve empirical speedups of $2-2.5\times$ relative to the baseline with less than 1\% drop in classification performance. \begin{figure}[t] \centering \begin{minipage}{0.9\textwidth} \includegraphics[width=0.5\linewidth]{img/layer1_CPUspeedup_vs_performance_loss_finetune_and_orig.pdf} \quad \includegraphics[width=0.5\linewidth]{img/layer1_GPUspeedup_vs_performance_loss_finetune_and_orig.pdf} \end{minipage} \caption{Empirical speedups on ({\bf Left}) CPU and ({\bf Right}) GPU for the first layer. $C'$ is the number of colors used in the approximation.} \label{fig:mono_speedups} \end{figure} \subsubsection{Second Layer} The second convolutional layer has 96 input channels, 256 output channels and 5x5 filters. We approximated the weights using the techniques described in Section \ref{subsec:clustering}. We explored various configurations of the approximations by varying the number of input clusters $G$, the number of output clusters $H$ and the rank of the approximation (denoted by $K_1$ and $K_2$ for the SVD decomposition and $K$ for the outer product decomposition). Figure \ref{fig:biclust_speedups} shows our empirical speedups on CPU and GPU and the corresponding network performance for various approximation configurations. For the CPU implementation we used the biclustering with SVD approximation. For the GPU implementation we used the biclustering with outer product decomposition approximation. We achieved promising results and present speedups of $2-2.5\times$ relative to the baseline with less than a 1\% drop in performance.
\begin{figure}[t] \centering \begin{minipage}{0.9\textwidth} \includegraphics[width=0.5\linewidth]{img/layer2_CPUspeedup_vs_performance_loss_finetune_and_orig.pdf} \quad \includegraphics[width=0.5\linewidth]{img/layer2_GPUspeedup_vs_performance_loss_finetune_and_orig.pdf} \end{minipage} \caption{Empirical speedups for the second convolutional layer. ({\bf Left}) Speedups on CPU using the biclustering ($G = 2$ and $H = 2$) with SVD approximation. ({\bf Right}) Speedups on GPU using the biclustering ($G = 48$ and $H = 2$) with outer product decomposition approximation.} \label{fig:biclust_speedups} \end{figure} \subsection{Combining approximations} The approximations can also be cascaded to provide greater speedups. The procedure is as follows. Compress the first convolutional layer weights and then fine-tune all the layers above until performance is restored. Next, compress the second convolutional layer weights that result from the fine-tuning. Fine-tune all the layers above until performance is restored and then continue the process. We applied this procedure to the first two convolutional layers. Using the monochromatic approximation with 6 colors for the first layer and the biclustering with outer product decomposition approximation for the second layer ($G = 48; H = 2; K = 8$), and fine-tuning with a single pass through the training set, we are able to keep accuracy within 1\% of the original model. This procedure could be applied to each convolutional layer, in this sequential manner, to achieve overall speedups much greater than any individual layer can provide. A more comprehensive summary of these results can be found in the Supplementary Material. \subsection{Reduction in memory overhead} In many commercial applications memory conservation and storage are a central concern. This mainly applies to embedded systems (e.g. smartphones), where available memory is limited, and users are reluctant to download large files.
In these cases, being able to compress the neural network is crucial for the viability of the product. In addition to requiring fewer operations, our approximations require significantly fewer parameters when compared to the original model. Since the majority of parameters come from the fully connected layers, we include these layers in our analysis of memory overhead. We compress the fully connected layers using standard SVD as described in Section \ref{subsubsec:svd_tensor}, using $K$ to denote the rank of the approximation. Table \ref{table:memory} shows the number of parameters for various approximation methods as a function of hyperparameters for the approximation techniques. The table also shows the empirical reduction of parameters and the corresponding network performance for specific instantiations of the approximation parameters. \begin{table}[t] \tiny \centering \begin{tabular}{|l|c|c|c|c|} \hline {\bf Approximation method} & {\bf Number of parameters} & {\bf Approximation} & {\bf Reduction} & {\bf Increase }\\ & & {\bf hyperparameters} & {\bf in weights} & {\bf in error}\\ \hline \hline Standard convolution & $CXYF$ & & &\\ \hline Conv layer 1: Monochromatic & $CC' + XYF$ & $C' = 6$ & $3\times$ & 0.43\%\\ \hline Conv layer 2: Biclustering & $GHK (\frac{C}{G} + XY + \frac{F}{H})$ & $G = 48$; $H = 2$; $K = 6$ & 5.3$\times$ & 0.68\%\\ + outer product decomposition & & & &\\ \hline Conv layer 2: Biclustering + SVD& $G H (\frac{C}{G}K_1 + K_1 X Y K_2 + K_2 \frac{F}{H})$ & $G = 2; H = 2$; $K_1 = 19$; $K_2 = 24$ & $3.9\times$ & 0.9\% \\ \hline Standard FC & $N M$ & & &\\ \hline FC layer 1: Matrix SVD & $NK + KM$ & $K = 250$ & $13.4\times$ & 0.84\%\\ & & $K = 950$ & $3.5\times$ & 0.09\%\\ \hline FC layer 2: Matrix SVD & $NK + KM$ & $K = 350 $ & $5.8\times$ & 0.19\%\\ & & $K = 650$ & $3.14\times$ & 0.06\%\\ \hline FC layer 3: Matrix SVD & $NK + KM$ & $K = 250$ & $8.1\times$ & 0.67\%\\ & & $K = 850$ & $2.4\times$ & 0.02\%\\ \hline \end{tabular} \caption{Number of
parameters expressed as a function of hyperparameters for various approximation methods and empirical reduction in parameters with corresponding network performance.} \label{table:memory} \vspace{-3mm} \end{table} \vspace{-3mm} \subsection{Fine-tuning}\label{sec:fine_tunning} The presented methods were applied to fully trained networks. Obviously, these networks were not trained with the connectivity or sparsity constraints that are present in the approximated network. This means that aggressive approximations, which can yield much larger speedups, are likely to give worse error scores. Training with the new connectivity or sparsity constraints might be quite difficult: the constraints coming from the approximation would have to be included in the optimization, and a lot of additional coding would be necessary. We propose a different strategy. It is easy to decompose the weights using any of our methods and then reconstruct them. In particular, the reconstructed weights can be approximated exactly by the same decomposition. Let us call this process projection. The proposed strategy is to take a fully trained network and iteratively repeat: (1) apply the projection, (2) train for a few epochs. In this way, the weights should become expressible in the desired form, which can be sped up by more than $2\times$, while the prediction score is preserved by the fine-tuning. From an engineering perspective, stage (1) can be implemented in any convenient external environment (e.g. MATLAB), while stage (2) can stay as it is. Stage (1) is executed only every few epochs, so it is not a computational bottleneck. \section{Introduction} Large neural networks have recently demonstrated impressive performance on a range of speech and vision tasks. However, the size of these models can make their deployment at test time problematic. For example, mobile computing platforms are limited in their CPU speed, memory and battery life.
At the other end of the spectrum, Internet-scale deployment of these models requires thousands of servers to process the hundreds of millions of images per day. The electrical and cooling costs of these servers are significant. Training large neural networks can take weeks, or even months. This hinders research and consequently there have been extensive efforts devoted to speeding up the training procedure. However, there are relatively few efforts aimed at improving the {\em test-time} performance of the models. We consider convolutional neural networks (CNNs) used for computer vision tasks, since they are large and widely used in commercial applications. These networks typically require a huge number of parameters ($\sim 10^{8}$ in \cite{sermanet2013overfeat}) to produce state-of-the-art results. While these networks tend to be hugely over-parameterized \cite{denil2013predicting}, this redundancy seems necessary in order to overcome a highly non-convex optimization \cite{hinton2012improving}. As a byproduct, the resulting network wastes computing resources. In this paper we show that this redundancy can be exploited with linear compression techniques, resulting in significant speedups for the evaluation of {\em trained} large scale networks, with minimal compromise to performance. We follow a relatively simple strategy: we start by compressing each convolutional layer by finding an appropriate low-rank approximation, and then we fine-tune the upper layers until the prediction performance is restored. We consider several elementary tensor decompositions based on singular value decompositions, as well as filter clustering methods to take advantage of similarities between learned features. Our main contributions are the following: (1) We present a collection of generic methods to exploit the redundancy inherent in deep CNNs.
(2) We report experiments on state-of-the-art Imagenet CNNs, showing empirical speedups on convolutional layers by a factor of $2-3\times$ and a reduction of parameters in fully connected layers by a factor of $5-10\times$. \noindent \textbf{Notation:} Convolution weights can be described as a $4$-dimensional tensor: $W \in \mathbb{R}^{C \times X \times Y \times F}$. $C$ is the number of input channels, $X$ and $Y$ are the spatial dimensions of the kernel, and $F$ is the target number of feature maps. It is common for the first convolutional layer to have a stride associated with the kernel which we denote by $\Delta$. Let $I \in \mathbb{R}^{C \times N \times M}$ denote an input signal where $C$ is the number of input maps, and $N$ and $M$ are the spatial dimensions of the maps. The target value, $T = I \ast W$, of a generic convolutional layer, with $\Delta = 1$, for a particular output feature, $f$, and spatial location, $(x, y)$, is \begin{align*} \label{convlayereq} T(f,x,y) = \sum_{c=1}^C \sum_{x'=1}^{X} \sum_{y'=1}^{Y} I(c,x-x',y-y') W(c,x',y',f) \end{align*} If $W$ is a tensor, $\|W \|$ denotes its operator norm, $\sup_{\|x\|=1}\|Wx\|_F $ and $\|W \|_F$ denotes its Frobenius norm. \section{Related Work} \label{relwork} Vanhoucke {\textit{et~al.~}} \cite{vanhoucke2011improving} explored the properties of CPUs to speed up execution. They present many solutions specific to Intel and AMD CPUs, however some of their techniques are general enough to be used for any type of processor. They describe how to align memory, and use SIMD operations (vectorized operations on CPU) to boost the efficiency of matrix multiplication. Additionally, they propose the linear quantization of the network weights and input. This involves representing weights as 8-bit integers (range $[-127, 128]$), rather than 32-bit floats. This approximation is similar in spirit to our approach, but differs in that it is applied to each weight element independently.
By contrast, our approximation approach models the structure within each filter. Potentially, the two approaches could be used in conjunction. The most expensive operations in CNNs are the convolutions in the first few layers. The complexity of this operation is linear in the area of the receptive field of the filters, which is relatively large for these layers. However, Mathieu {\textit{et~al.~}} \cite{mathieu2013fast} have shown that convolution can be efficiently computed in the Fourier domain, where it becomes element-wise multiplication (and there is no cost associated with the size of the receptive field). They report a forward-pass speedup of around $2\times$ for convolution layers in state-of-the-art models. Importantly, the FFT method can be used jointly with most of the techniques presented in this paper. The use of low-rank approximations in our approach is inspired by the work of Denil {\textit{et~al.~}} \cite{denil2013predicting} who demonstrate the redundancies in neural network parameters. They show that the weights within a layer can be accurately predicted from a small (e.g. $\sim 5\%$) subset of them. This indicates that neural networks are heavily over-parametrized. All the methods presented here focus on exploiting the linear structure of this over-parametrization. Finally, a recent preprint \cite{zisserman14} also exploits low-rank decompositions of convolutional tensors to speed up the evaluation of CNNs, applied to scene text character recognition. This work was developed simultaneously with ours, and provides further evidence that such techniques can be applied to a variety of architectures and tasks. Our work differs in several ways. First, we consider a significantly larger model. This makes it more challenging to compute efficient approximations since there are more layers to propagate through and thus a greater opportunity for error to accumulate.
Second, we present different compression techniques for the hidden convolutional layers and provide a method of compressing the first convolutional layer. Finally, we present GPU results in addition to CPU results. \section{Convolutional Tensor Compression}\label{sec:approx_tech} In this section we describe techniques for compressing 4-dimensional convolutional weight tensors and fully connected weight matrices into a representation that permits efficient computation and storage. Section \ref{reconstr_sect} describes how to construct a good approximation criterion. Section \ref{subsec:low_rank} describes techniques for low-rank tensor approximations. Sections \ref{subsec:monochromatic} and \ref{subsec:clustering} describe how to apply these techniques to approximate the weights of a convolutional neural network. \subsection{Approximation Metric} \label{reconstr_sect} Our goal is to find an approximation, $\tilde{W}$, of a convolutional tensor $W$ that facilitates more efficient computation while maintaining the prediction performance of the network. A natural choice for an approximation criterion is to minimize $\| \tilde{W} - W \|_F$. This criterion yields efficient compression schemes using elementary linear algebra, and also controls the operator norm of each linear convolutional layer. However, this criterion assumes that all directions in the space of weights equally affect prediction performance. We now present two methods of improving this criterion while keeping the same efficient approximation algorithms. {\bf Mahalanobis distance metric: } The first distance metric we propose seeks to emphasize coordinates more prone to produce prediction errors over coordinates whose effect is less harmful for the overall system. We can obtain such measurements as follows. Let $\Theta=\{W_1,\dots,W_S\}$ denote the set of all parameters of the $S$-layer network, and let $U(I; \Theta)$ denote the output after the softmax layer of input image $I$.
We consider a given input training set $(I_1,\dots,I_N)$ with known labels $(y_1,\dots,y_N)$. For each pair $(I_n, y_n)$, we compute the forward propagation pass $U(I_n, \Theta)$, and define as $\{\beta_n\}$ the indices of the $h$ largest values of $U(I_n, \Theta)$ different from $y_n$. Then, for a given layer $s$, we compute \begin{equation} \label{approxi} d_{n,l,s} = \nabla_{W_s} \left( U(I_n, \Theta) - \delta(i - l)\right)~,~n\leq N\,,\, l \in \{\beta_n\}\,,\, s\leq S~, \end{equation} where $\delta(i-l)$ is the Dirac distribution centered at $l$. In other words, for each input we back-propagate the difference between the current prediction and the $h$ ``most dangerous'' mistakes. The Mahalanobis distance is defined from the covariance of $d$: $\| W \|_{maha}^2 = w \Sigma^{-1} w^T~,$ where $w$ is the vector containing all the coordinates of $W$, and $\Sigma$ is the covariance of $(d_{n,l,s})_{n,l}$. We do not report results using this metric, since it requires inverting a matrix of size equal to the number of parameters, which can be prohibitively expensive in large networks. Instead we use an approximation that considers only the diagonal of the covariance matrix. In particular, we propose the following, approximate, Mahalanobis distance metric: \begin{equation} \label{poormansmaha} \| W \|_{\widetilde{maha}} := \sum_p \alpha_p W(p) ~,\text{ where } \alpha_p = \Big( \sum_{n,l} d_{n,l,s}(p)^2 \Big)^{1/2}~ \end{equation} where the sum runs over the tensor coordinates.
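Under this diagonal approximation, compression in the reweighted metric amounts to scaling the weights elementwise, applying an ordinary $L_2$ low-rank approximation, and unscaling. A minimal numpy sketch of this reduction (with a random matrix and a hypothetical per-weight sensitivity array $\alpha$ standing in for the back-propagated quantities):

```python
import numpy as np

def reweighted_lowrank(W, alpha, t):
    """Approximate W under a diagonally reweighted metric:
    scale by alpha, truncate the SVD at rank t, then unscale."""
    Wp = alpha * W                                 # W' = alpha .* W
    U, s, Vt = np.linalg.svd(Wp, full_matrices=False)
    Wp_approx = (U[:, :t] * s[:t]) @ Vt[:t, :]     # standard L2 rank-t approximation
    return Wp_approx / alpha                       # W~ = alpha^-1 .* W~'

rng = np.random.default_rng(0)
W = rng.standard_normal((32, 48))
alpha = rng.uniform(0.5, 2.0, size=(32, 48))  # hypothetical per-weight sensitivities
W_tilde = reweighted_lowrank(W, alpha, t=8)

# The approximation error measured in the reweighted metric equals the
# Frobenius norm of the truncated singular-value tail of alpha .* W.
err = np.linalg.norm(alpha * (W - W_tilde))
```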
Since (\ref{poormansmaha}) is a reweighted Euclidean metric, we can simply compute $W' = \alpha~ .* W$, where $.*$ denotes element-wise multiplication, then compute the approximation $\tilde{W'}$ on $W'$ using the standard $L_2$ norm, and finally output $\tilde{W} = \alpha^{-1} .* \tilde{W'}~.$ {\bf Data covariance distance metric: } One can view the Frobenius norm of $W$ as $\| W \|_F^2 = \mathbb{E}_{x \sim \mathcal{N}(0,I)} \| W x \|_F^2 ~.$ Another alternative, similar to the one considered in \cite{zisserman14}, is to replace the isotropic covariance assumption by the empirical covariance of the input of the layer. If $W \in \mathbb{R}^{C \times X \times Y \times F}$ is a convolutional layer, and $\widehat{\Sigma} \in \mathbb{R}^{CXY \times CXY}$ is the empirical estimate of the input data covariance, it can be efficiently computed as \begin{equation} \| W \|_{data} = \| \widehat{\Sigma}^{1/2} W_F \|_F~, \end{equation} where $W_F$ is the matrix obtained by folding the first three dimensions of $W$. As opposed to \cite{zisserman14}, this approach adapts to the input distribution without the need to iterate through the data. \subsection{Low-rank Tensor Approximations}\label{subsec:low_rank} \subsubsection{Matrix Decomposition}\label{subsubsec:svd} Matrices are $2$-tensors which can be linearly compressed using the Singular Value Decomposition. If $W \in \mathbb{R}^{m \times k}$ is a real matrix, the SVD is defined as $W = USV^{\top}$, where $U \in \mathbb{R}^{m \times m}, S \in \mathbb{R}^{m \times k}, V \in \mathbb{R}^{k \times k}$. $S$ is a diagonal matrix with the singular values on the diagonal, and $U$, $V$ are orthogonal matrices.
If the singular values of $W$ decay rapidly, $W$ can be well approximated by keeping only the $t$ largest entries of $S$, resulting in the approximation $\tilde{W} = \tilde{U}\tilde{S}\tilde{V}^{\top}$, where $\tilde{U} \in \mathbb{R}^{m \times t}, \tilde{S} \in \mathbb{R}^{t \times t}$, and $\tilde{V} \in \mathbb{R}^{k \times t}$. Then, for $I \in \mathbb{R}^{n \times m}$, the approximation error $\| I \tilde{W} - I W \|_F$ satisfies $\| I \tilde{W} - I W \|_F \leq s_{t+1} \| I \|_F~,$ and thus is controlled by the decay along the diagonal of $S$. Now the computation $I\tilde{W}$ can be done in $O(nmt + nt^2 + ntk)$, which, for sufficiently small $t$, is significantly smaller than $O(nmk)$. \subsubsection{Higher Order Tensor Approximations}\label{subsubsec:svd_tensor} SVD can be used to approximate a tensor $W \in \mathbb{R}^{m \times n \times k}$ by first folding all but two dimensions together to convert it into a $2$-tensor, and then considering the SVD of the resulting matrix. For example, we can approximate $W_m \in \mathbb{R}^{m \times (nk)}$ as $\tilde{W}_m \approx \tilde{U}\tilde{S}\tilde{V}^{\top}$. $W$ can be compressed even further by applying SVD to $\tilde{V}$. We refer to this approximation as the SVD decomposition and use $K_1$ and $K_2$ to denote the rank used in the first and second application of SVD respectively. Alternatively, we can approximate a 3-tensor, $W_S \in \mathbb{R}^{m \times n \times k}$, by a rank 1 3-tensor by finding a decomposition that minimizes \begin{equation} \label{eq:rank1} \| W - \alpha \otimes \beta \otimes \gamma \|_F~, \end{equation} where $\alpha \in \mathbb{R}^m$, $\beta \in \mathbb{R}^{n}$, $\gamma \in \mathbb{R}^k$ and $\otimes$ denotes the outer product operation. Problem (\ref{eq:rank1}) is solved efficiently by performing alternate least squares on $\alpha$, $\beta$ and $\gamma$ respectively, although more efficient algorithms can also be considered \cite{rankonetensors}. 
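The alternating least squares scheme for problem (\ref{eq:rank1}) takes only a few lines of NumPy; each update below is the exact least-squares solution for one factor with the other two held fixed (this is our illustrative sketch, not code from the paper):

```python
import numpy as np

def rank1_als(W, iters=100):
    """Alternating least squares for min ||W - a (x) b (x) c||_F,
    where W is a 3-tensor and (x) denotes the outer product."""
    rng = np.random.default_rng(0)
    b = rng.standard_normal(W.shape[1])
    c = rng.standard_normal(W.shape[2])
    for _ in range(iters):
        # Each step solves a linear least-squares problem in closed form.
        a = np.einsum('ijk,j,k->i', W, b, c) / ((b @ b) * (c @ c))
        b = np.einsum('ijk,i,k->j', W, a, c) / ((a @ a) * (c @ c))
        c = np.einsum('ijk,i,j->k', W, a, b) / ((a @ a) * (b @ b))
    return a, b, c
```

On an exactly rank-1 tensor the iteration recovers the factorization up to rescaling of the factors after a single sweep.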
This easily extends to a rank $K$ approximation using a greedy algorithm: Given a tensor $W$, we compute $(\alpha, \beta, \gamma)$ using (\ref{eq:rank1}), and we update $W^{(k+1)} \leftarrow W^{(k)} - \alpha \otimes \beta \otimes \gamma$. Repeating this operation $K$ times results in \begin{equation} \label{eq:rankK} \tilde{W_S} = \sum_{k = 1}^{K} \alpha_k \otimes \beta_k \otimes \gamma_k ~. \end{equation} We refer to this approximation as the outer product decomposition and use $K$ to denote the rank of the approximation. \vspace{-3mm} \begin{figure}[ht] \centering \mbox{ \subfigure[][]{ \includegraphics[width=0.3\linewidth]{img/monochromatic_illustration.pdf} \label{fig:monochromatic} } \hspace{1mm} \subfigure[][]{ \includegraphics[width=0.3\linewidth]{img/bicluster_illustration.pdf} \label{fig:biclustering} } \hspace{1mm} \subfigure[][]{ \includegraphics[width=0.3\linewidth]{img/tensor_sum.pdf} \label{fig:tensor_sum} } } \vspace{-3mm} \caption{ A visualization of monochromatic and biclustering approximation structures. {\bf (a)} The monochromatic approximation, used for the first layer. Input color channels are projected onto a set of intermediate color channels. After this transformation, output features need only to look at one intermediate color channel. {\bf (b)} The biclustering approximation, used for higher convolution layers. Input and output features are clustered into equal sized groups. The weight tensor corresponding to each pair of input and output clusters is then approximated. {\bf (c)} The weight tensors for each input-output pair in (b) are approximated by a sum of rank 1 tensors using techniques described in Section~\ref{subsubsec:svd_tensor}.} \end{figure} \subsection{Monochromatic Convolution Approximation}\label{subsec:monochromatic} Let $W \in \mathbb{R}^{C \times X \times Y \times F}$ denote the weights of the first convolutional layer of a trained network. We found that the color components of trained CNNs tend to have low dimensional structure. 
In particular, the weights can be well approximated by projecting the color dimension down to a 1D subspace. The low-dimensional structure of the weights is illustrated in Figure \ref{fig:RGB_components}. The monochromatic approximation exploits this structure and is computed as follows. First, for every output feature, $f$, we consider the matrix $W_f \in \mathbb{R}^{C \times (XY) }$, where the spatial dimensions of the filter corresponding to the output feature have been combined, and find the SVD, $W_f = U_f S_f V_f^{\top}$, where $U_f \in \mathbb{R}^{C \times C}, S_f \in \mathbb{R}^{C \times XY}$, and $V_f \in \mathbb{R}^{XY \times XY}$. We then take the rank $1$ approximation of $W_f$, $\tilde{W}_f = \tilde{U}_f \tilde{S}_f \tilde{V}_f^{\top} ~,$ where $\tilde{U}_f \in \mathbb{R}^{C \times 1}, \tilde{S}_f \in \mathbb{R}, \tilde{V}_f \in \mathbb{R}^{1 \times XY}$. We can further exploit the regularity in the weights by sharing the color component basis between different output features. We do this by clustering the $F$ left singular vectors, $\tilde{U}_f$, of each output feature $f$ into $C'$ clusters, for $C' < F$. We constrain the clusters to be of equal size as discussed in section \ref{subsec:clustering}. Then, for each of the $\frac{F}{C'}$ output features, $f$, that is assigned to cluster $c_f$, we can approximate $W_f$ with $\tilde{W}_f = U_{c_f} \tilde{S}_f \tilde{V}_f^{\top}$ where $U_{c_f} \in \mathbb{R}^{C \times 1}$ is the cluster center for cluster $c_f$ and $\tilde{S}_f$ and $\tilde{V}_f$ are as before. This monochromatic approximation is illustrated in the left panel of Figure \ref{fig:monochromatic}. Table \ref{table:ops} shows the number of operations required for the standard and monochromatic versions. 
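The per-filter rank-1 color/spatial factorization described above can be sketched in a few lines of NumPy; for brevity this illustration (shapes assumed as in the text) omits the subsequent clustering of the color bases into $C'$ shared components:

```python
import numpy as np

def monochromatic_approx(W):
    """W: first-layer weights of shape (C, X, Y, F).
    Replaces each output filter by its best rank-1 factorization
    into a color vector times a spatial map."""
    C, X, Y, F = W.shape
    # One (C, X*Y) matrix per output filter.
    Wf = W.transpose(3, 0, 1, 2).reshape(F, C, X * Y)
    out = np.empty_like(Wf)
    for f in range(F):
        U, S, Vt = np.linalg.svd(Wf[f], full_matrices=False)
        out[f] = S[0] * np.outer(U[:, 0], Vt[0])  # rank-1 truncation
    return out.reshape(F, C, X, Y).transpose(1, 2, 3, 0)
```

Sharing the color basis then amounts to replacing each filter's leading left singular vector $U[:, 0]$ by its cluster center.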
\subsection{Biclustering Approximations}\label{subsec:clustering} We exploit the redundancy within the 4-D weight tensors in the higher convolutional layers by clustering the filters, such that each cluster can be accurately approximated by a low-rank factorization. We start by clustering the rows of $W_C \in \mathbb{R}^{C \times (XYF)}$, which results in clusters $C_1, \dots, C_a$. Then we cluster the columns of $W_F \in \mathbb{R}^{(CXY) \times F}$, producing clusters $F_1, \dots, F_b$. These two operations break the original weight tensor $W$ into $ab$ sub-tensors $\{W_{C_i, F_j}\}_{i = 1, \dots, a, j = 1, \dots, b}$ as shown in Figure \ref{fig:biclustering}. Each sub-tensor contains similar elements, and thus is easier to fit with a low-rank approximation. In order to exploit the parallelism inherent in CPU and GPU architectures, it is useful to constrain clusters to be of equal sizes. We therefore perform the biclustering operations (or clustering for monochromatic filters in Section \ref{subsec:monochromatic}) using a modified version of the $k$-means algorithm which balances the cluster count at each iteration. It is implemented with the Lloyd algorithm, by modifying the Euclidean distance with a subspace projection distance. After the input and output clusters have been obtained, we find a low-rank approximation of each sub-tensor using either the SVD decomposition or the outer product decomposition as described in Section \ref{subsubsec:svd_tensor}. We concatenate the $X$ and $Y$ spatial dimensions of the sub-tensors so that the decomposition is applied to the 3-tensor, $W_S \in \mathbb{R}^{C \times (XY) \times F}$. While we could look for a separable approximation along the spatial dimensions as well, we found the resulting gain to be minimal. Using these approximations, the target output can be computed with significantly fewer operations. 
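The equal-size constraint on clusters can be enforced in several ways; the following greedy assignment is our illustrative stand-in (not the balanced $k$-means variant used in the text): confident points are assigned first, each to the nearest center that still has spare capacity.

```python
import numpy as np

def balanced_assign(points, centers):
    """Greedy equal-size assignment of points to cluster centers.
    Assumes len(points) is divisible by len(centers)."""
    n, k = len(points), len(centers)
    cap = n // k
    # Pairwise squared Euclidean distances, shape (n, k).
    D = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    assign = -np.ones(n, dtype=int)
    counts = np.zeros(k, dtype=int)
    for i in np.argsort(D.min(axis=1)):      # most confident points first
        for c in np.argsort(D[i]):           # nearest center with capacity left
            if counts[c] < cap:
                assign[i], counts[c] = c, counts[c] + 1
                break
    return assign
```

Alternating such a balanced assignment step with center updates yields an equal-size variant of Lloyd-style iteration.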
The number of operations required is a function of the number of input clusters, $G$, the number of output clusters, $H$, and the rank of the sub-tensor approximations ($K_1, K_2$ for the SVD decomposition; $K$ for the outer product decomposition). The number of operations required for each approximation is described in Table \ref{table:ops}. \begin{table}[t] \centering \begin{tabular}{|lc|} \hline {\bf Approximation technique} & {\bf Number of operations} \\ \hline \hline No approximation & $X Y C F N M \Delta^{-2}$\\ Monochromatic & $C' C N M + X Y F N M \Delta^{-2}$\\ Biclustering + outer product decomposition & $G H K (N M \frac{C}{G} + X Y N M \Delta^{-2} + \frac{F}{H} N M \Delta^{-2})$ \\ Biclustering + SVD & $G H N M (\frac{C}{G}K_1 + K_1 X Y K_2 \Delta^{-2} + K_2\frac{F}{H})$\\ \hline \end{tabular} \caption{Number of operations required for various approximation methods.} \label{table:ops} \vspace{-3mm} \end{table} \subsection{Fine-tuning} Many of the approximation techniques presented here can efficiently compress the weights of a CNN with negligible degradation of classification performance, provided the approximation is not too harsh. Alternatively, one can use a harsher approximation that gives greater speedup gains but hurts the performance of the network. In this case, the approximated layer and all those below it can be fixed and the upper layers can be fine-tuned until the original performance is restored. \section{Forward propagation time breakdown} Table \ref{evaluation_time} shows the time breakdown of forward propagation for each layer in the CNN architecture we explored. Close to 90\% of the time is spent on convolutional layers, and within these layers the majority of time is spent on the first two. 
\begin{table}[h] \small \parbox{.5\linewidth}{ \centering \begin{tabular}{rrr} \hline {\bf Layer} & {\bf Time per batch (sec)} & {\bf Fraction} \\ \hline Conv1 & $2.8317 \pm 0.1030 $ & 21.97\% \\ MaxPool & $0.1059 \pm 0.0154$ & 0.82\% \\ LRNormal & $0.1918 \pm 0.0162$ & 1.49\% \\ Conv2 & $4.2626 \pm 0.0740 $ & 33.07\% \\ MaxPool & $0.0705 \pm 0.0029$ & 0.55\% \\ LRNormal & $0.0772\pm 0.0027$ & 0.60\% \\ Conv3 & $1.8689\pm 0.0577$ & 14.50\% \\ MaxPool & $0.0532\pm 0.0018 $ & 0.41\% \\ Conv4 & $1.5261\pm 0.0386$ & 11.84\% \\ Conv5 & $1.4222\pm 0.0416$& 11.03\% \\ MaxPool & $0.0102\pm 0.0006 $ & 0.08\% \\ FC & $0.3777\pm 0.0233$ & 2.93\% \\ FC & $0.0709 \pm 0.0038$ & 0.55\% \\ FC & $0.0168 \pm 0.0018$ & 0.13\% \\ Softmax & $0.0028 \pm 0.0015$ & 0.02\%\\ \hline Total & $12.8885$ & \\ \hline \end{tabular} } \parbox{.5\linewidth}{ \centering \begin{tabular}{rrr} \hline {\bf Layer} & {\bf Time per batch (sec)} & {\bf Fraction} \\ \hline Conv1 & $0.0604 \pm 0.0112$ & 5.14\% \\ MaxPool & $0.0072 \pm 0.0040$ & 0.61\% \\ LRNormal & $0.0041 \pm 0.0043$ & 0.35\% \\ Conv2 & $0.4663 \pm 0.0072$ & 39.68\% \\ MaxPool & $0.0032 \pm 0.0000$ & 0.27\% \\ LRNormal & $0.0015 \pm 0.0003$ & 0.13\% \\ Conv3 & $0.2219 \pm 0.0014$ & 18.88\% \\ MaxPool & $0.0016 \pm 0.0000$ & 0.14\% \\ Conv4 & $0.1991 \pm 0.0001$ & 16.94\% \\ Conv5 & $0.1958 \pm 0.0002$ & 16.66\% \\ MaxPool & $0.0005 \pm 0.0001$ & 0.04\% \\ FC & $0.0077 \pm 0.0013$ & 0.66\% \\ FC & $0.0017 \pm 0.0001$ & 0.14\% \\ FC & $0.0007 \pm 0.0002$ & 0.06\% \\ Softmax & $0.0038 \pm 0.0098$ & 0.32\%\\ \hline Total & 1.1752 & \\ \hline \end{tabular} } \caption{Evaluation time in seconds per layer on CPU (left) and GPU (right) with batch size of 128. Results are averaged over 8 runs.} \label{evaluation_time} \end{table} \section{Theoretical speedups} We can measure the theoretically achievable speedups for a particular approximation in terms of the number of floating point operations required to compute the target output. 
While it is unlikely that any implementation would achieve speedups equal to the theoretically optimal level, the number of necessary floating point operations still provides an informative upper bound on the gains. Table \ref{table:mono_perf} shows the theoretical speedup of the monochromatic approximation. The majority of the operations result from the convolution part of the computation. In comparison, the number of operations required for the color transformation is negligible. Thus, the theoretically achievable speedup decreases only slightly as the number of color components used is increased. Figure \ref{fig:biclustering_theory} plots the theoretically achievable speedups against the drop in classification performance for various configurations of the biclustering with outer product decomposition technique. For a given setting of input and output clusters numbers, the performance tends to degrade as the rank is decreased. \begin{table}[t] \small \centering \begin{tabular}{ccccc} \hline {\bf Number of colors} & & {\bf Increase in test error} & & {\bf Theoretical speedup}\\ & & & &\\ & {\bf Original} & {\bf $\|W\|_{data}$ distance metric} & {\bf Fine-tuned} & \\ \hline 4 & 24.1\% & 5.9\% & 1.9\% & 2.97$\times$ \\ 6 & 16.1\% & 2.4\% & 0.4\% & 2.95$\times$ \\ 8 & 9.9\% & 1.4\% & 0.2\% & 2.94$\times$\\ 12 & 3.5\% & 0.7\% & 0\% & 2.91$\times$\\ 16 & 1.99\% & 0.8\% & - & 2.88$\times$\\ 24 & 1.43\% & 0.4\% & - & 2.82$\times$\\ \hline \end{tabular} \caption{Performance when first layer weights are replaced with monochromatic approximation and the corresponding theoretical speedup. Classification error on ImageNet12 validation images tends to increase as the approximation becomes harsher (i.e. fewer colors are used). 
Theoretical speedups vary only slightly as the number of colors used increases since the color transformation contributes relatively little to the total number of operations.} \label{table:mono_perf} \end{table} \begin{figure}[t] \centering \begin{minipage}{0.75\textwidth} \includegraphics[width=\linewidth]{img/layer2_theoreticalspeedup_vs_performance_loss.pdf} \end{minipage} \caption{Theoretically achievable speedups vs. classification error for various biclustering approximations.} \label{fig:biclustering_theory} \end{figure} \section{Combined results} We used the monochromatic approximation with 6 colors for the first layer. Table \ref{table:combined} summarizes the results after fine-tuning for 1 pass through the ImageNet12 training data using a variety of second layer approximations. \begin{table}[b] \centering \begin{tabular}{|cc|c|} \hline \multicolumn{2}{|c|}{{\bf Layer 2 }} & {\bf Increase in error}\\ {\bf Method} & {\bf Hyperparameters}& \\ \hline \hline Biclustering & $G = 48; H = 2; K = 8$ & 1\%\\ + outer product decomposition & & \\ \hline Biclustering & $G = 48; H = 2; K = 6$ & 1.5\%\\ + outer product decomposition & & \\ \hline Biclustering + SVD & $G = 2; H = 2; K_1 = 19; K_2 = 64$ & 1.2\%\\ \hline Biclustering + SVD & $G = 2; H = 2; K_1 = 19; K_2 = 51$ & 1.4\%\\ \hline \end{tabular} \caption{Cascading approximations.} \label{table:combined} \end{table}
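The theoretical speedups in the tables above follow directly from the operation counts in Table \ref{table:ops}. A small calculator for the monochromatic case (the AlexNet-like first-layer dimensions used in the check below are our assumption, not stated in the text):

```python
def monochromatic_speedup(C, Cp, X, Y, F, N, M, stride=1):
    """Ratio of the exact convolution cost X*Y*C*F*N*M/stride^2 to the
    monochromatic cost C'*C*N*M + X*Y*F*N*M/stride^2 (Table of op counts;
    stride plays the role of Delta)."""
    d2 = stride ** 2
    exact = X * Y * C * F * N * M / d2
    mono = Cp * C * N * M + X * Y * F * N * M / d2
    return exact / mono
```

With $C=3$ input colors, $11\times 11$ filters, $F=96$ output features and stride $4$, using $C'=6$ intermediate colors gives roughly a $2.9\times$ theoretical speedup, in line with the values reported above; the color transformation term $C'CNM$ is indeed negligible next to the convolution term.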
\section{Introduction}\label{Introduction} Heavy ion collisions at the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC) have produced a new state of matter called the quark-gluon plasma (QGP) \cite{starcol2005,phenix2005,RHIC2005}. One of the most important experimental signatures of QGP formation is the dissociation of quarkonia, such as $c\bar{c}$, in the medium \cite{Shuryak1980}. Most studies over the past years have found that the main mechanism responsible for this dissociation is color screening \cite{mats1986}; however, recent studies suggest a mechanism more important than screening: the imaginary part of the heavy quark potential, ImV \cite{Thakur2017}. Moreover, this quantity can be used to estimate the thermal width, an important subject in QGP studies. Calculations of ImV relevant to QCD and heavy ion collisions were performed for static pairs using pQCD \cite{mlop,Laine2007}; first-principles, non-perturbative lattice QCD studies were carried out in \cite{arth,gaca,gcs} before the advent of the gauge/gravity duality. Theoretical analysis and experimental data demonstrate that the QGP is strongly coupled \cite{Shuryak2004}, so non-perturbative methods are required. The equation of state of the QGP at zero and finite temperature is given in \cite{Borsanyi2014,Bazavov2014} using non-perturbative lattice QCD. The AdS/CFT correspondence is another helpful non-perturbative approach, although it does not describe real QCD. The AdS/CFT conjecture originally relates type IIB string theory on $AdS_5 \times S^5$ space-time to the four-dimensional $\mathcal{N} = 4$ SYM gauge theory \cite{adscft}. In the holographic description of AdS/CFT, a strongly coupled field theory at the boundary of the AdS space is mapped onto a weakly coupled gravitational theory in the bulk of AdS \cite{holo}. The SYM plasma is a conformal system, while the QGP produced in heavy ion collisions is non-conformal. 
There is a relatively broad band of temperatures where the system changes the nature of its relevant degrees of freedom, and several different pseudocritical temperatures may be associated with characteristic points of different physical observables. Although the actual QGP is clearly different from the SYM plasma, at high temperatures the trace anomaly slowly approaches zero. Notice that classical gauge/gravity models are always strongly coupled, so such models flow at asymptotically high temperatures to a nontrivial, strongly coupled UV fixed point. They are not asymptotically free as in actual QCD but are asymptotically safe. The bottom-up approach begins with a five-dimensional effective field theory somehow motivated by string theory and tries to fit it to QCD as much as possible. In the gravitational dual of QCD, the presence of probe branes in the AdS bulk breaks the conformal symmetry and sets the energy scales, so corrections in $AdS_5$ are useful for finding more phenomenological results. The gluon condensation (GC) model is a holographic toy model with phenomenological applicability as an effective model for the QGP. Originally, the gluon condensate was a measure of the non-perturbative physics in zero-temperature QCD \cite{Shifman}; it serves as an order parameter for (de)confinement and hence can signal the phase transition. The usual order parameter for the deconfinement transition at finite temperature is the Polyakov loop. Also, the Wilson loop can be used to identify the (de)confined phases of pure YM theory by its area law behavior. However, there is no order parameter for the real-world QGP, since lattice QCD already established that there is no actual phase transition at zero and moderate baryon densities. The GC model is useful for studying the nonperturbative nature of the QGP \cite{lee89, MD2003, Mil2007,QCDcon,action,Colangelo2013,Castorina2007,kim2007,Kopnin2011,chen2019}, such as in RHIC physics \cite{Brown2007}. 
In the references mentioned above, it is shown that QCD sum rules can be used to study the nonperturbative physics of the strong interaction at zero temperature. In this approach, the nonperturbative nature of the vacuum is summarized in terms of quark and gluon condensates. To study hot systems, one generalizes the technique to finite temperature. The nonperturbative physics remaining even at high temperatures is manifested through the nonvanishing of some of the vacuum condensates. This model describes a holographic plasma in equilibrium which is traversed by a probe heavy $Q\bar{Q}$ pair. Using the holographic gluon condensation model, the thermodynamic properties of the system are discussed in \cite{kim2007}, in which the well-known Stefan-Boltzmann law in the no-condensation case is recovered; the energy density at high temperature is also given, while at low temperature another background dominates. In the same reference, the dilaton (or gluon condensation) contribution to the energy momentum tensor is identified as the difference between the total and the thermal gluon contributions; the gluon condensation contributes a negative energy, reminiscent of the zero temperature result of Shifman, Vainshtein and Zakharov \cite{Shifman}. In both cases, the negativeness comes from the renormalization. Also, in \cite{kim2007} the pressure, the trace anomaly and the entropy density are given in the presence of gluon condensation; as expected, the entropy in the condensed state is less than that in the thermal state. The potential of the pair describes the interaction energy between quark and anti-quark, and the thermal width of the $Q\bar{Q}$ is estimated by the imaginary part of the interaction energy at finite temperature \cite{nbma,ybm}. Using a holographic approach \cite{Noronha2009}, one can adopt the saddle point approximation and discuss the motion of a heavy quarkonium in a plasma and its thermal width. 
The thermal width of the quark antiquark pair results from thermal fluctuations due to the interactions between the quarks and the strongly coupled medium. By integrating out thermal long wavelength fluctuations in the path integral of the Nambu-Goto action in the background spacetime, a formula for the imaginary part of the Wilson loop can be found in this approach that is valid for any gauge theory dual to classical gravity. Different AdS/QCD models have been applied to study properties of hadron physics, including the thermal width of the $Q\bar{Q}$ \cite{mst,msd,mmk,gac,aliakbari,sara1412,ziqiang,sara2015,zhao2020,alba2008,Bitaghsir2013,Bitaghsir2015, Bitaghsir2016,Braga2016,Erlich2005, Teramond2005,kim114008,Escobedo2014,Feng2020,Hayata2013,Hashimoto2014,Burnier2009,Dudal2015,Braga2018, Bellantuono2019,Bellantuono2017}. In this work we extend the study of the quark potential using the holographic gluon condensate model. Our main purpose is to consider how gluon condensation in the QGP affects the heavy quark potential. Note that the effect of the medium on the motion of a $Q\bar{Q}$ should be taken into account, and the pair's rapidity through the plasma affects their interactions. At the LHC, heavy quarkonia are not only produced in large numbers but also with high momenta, so it is essential to consider the effect of the bound state's speed on dissociation \cite{Ejaz2008}. The gluon condensate dependence of the heavy quark potential was studied in \cite{Kim2008}, and the results indicate that the potential becomes deeper as the gluon condensate in the deconfined phase decreases, and the mass of the quarkonium drops near $T_c$ (the deconfinement temperature). The gluon condensate dependence of the jet quenching parameter and the drag force was analyzed in \cite{zhangdrag}, and it was found that both quantities decrease as the gluon condensate decreases in the deconfined phase, indicating that the energy loss decreases near $T_c$. 
In \cite{zhangPLB803} it is shown that the dropping gluon condensate near $T_c$ increases the entropic force and thus enhances quarkonium dissociation. In \cite{boyed96, boyedmiller} the gluon condensate shows a drastic change near $T_c$ in pure gauge YM theory, whose first order phase transition corresponds to a pure gluon plasma in the deconfined phase. This paper is organized as follows: in Section \ref{ReV} we study the real part of the potential for the cases in which the dipole moves transversely and parallel to the dipole axis; in Section \ref{se:ImV} we calculate the imaginary part of the potential for the same two cases; Section \ref{sec:Conclusions} contains results and conclusions. \section{Potential of moving $Q\bar{Q}$ in presence of gluon condensation }\label{ReV} In this section we evaluate the real part of the potential energy of the moving quark-antiquark pair. The heavy quark potential (the vacuum interaction energy) is related to the vacuum expectation value of the Wilson loop \cite{wilson,Gervais,polygaugering} as, \begin{equation}\label{W} \lim_{_\mathcal{T}\longrightarrow \infty}\langle W(\mathcal{C})\rangle_{0}\sim e^{i\mathcal{T} V_{Q\bar{Q}}(L)}, \end{equation} where $\mathcal{C}$ is a rectangular loop of spatial length $L$, extended over $\mathcal{T}$ in the time direction. The expectation value of the Wilson loop can be evaluated in a thermal state of the gauge theory at temperature $T$. From this point of view, $V_{Q\bar{Q}}(L)$ is the heavy quark potential at finite temperature, and its imaginary part defines a thermal decay width. To estimate this thermal width, one can use worldsheet fluctuations of the Nambu-Goto action \cite{Noronha2009}.\\ Note that although the Nambu-Goto action on top of a background with a nontrivial dilaton field contains a coupling of this field with the Ricci scalar, as shown in \cite{Gursoy2007}, the contribution of this term is small. 
Therefore, the well-known modified holographic model introducing gluon condensation in the boundary theory is given by the following background action, \begin{equation}\label{main action} S=-\frac{1}{2k^2} \int d^5x \sqrt{g} \left(\mathcal{R}+ \frac{12}{L^2}-\frac{1}{2}\partial_{\lambda}\varphi \partial^{\lambda}\varphi\right), \end{equation} where $k$ is the gravitational coupling in $5$ dimensions, $\mathcal{R}$ is the Ricci scalar, $L$ is the radius of the asymptotic $AdS_5$ spacetime, and $\varphi$ is a massless scalar coupled to the gluon operator on the boundary. With the following ansatz, the equations of motion of the above action can be solved \cite{keh1999,Cski2007,Bak2004}, \begin{equation}\label{eq:metric} ds^2= \frac{R^2}{z^2}( A(z)dx_i^2-B(z)dt^2+dz^2), \end{equation} where in this dilaton black hole background $A(z)$, $B(z)$ and $f$ are defined as, \begin{eqnarray}\label{eq:AB} A(z)&=&(1+fz^4)^{\frac{f+a}{2f}}(1-fz^4)^{\frac{f-a}{2f}},\nonumber\\ B(z)&=&(1+fz^4)^{\frac{f-3a}{2f}}(1-fz^4)^{\frac{f+3a}{2f}},\nonumber\\ f^2&=&a^2+c^2, \end{eqnarray} $a$ is related to the temperature by $a=\frac{(\pi T)^4}{4}$, and the dilaton field is given by, \begin{equation}\label{eq:dilaton} \phi(z)=\frac{c}{f}\sqrt{\frac{3}{2}} \ln \dfrac{1+fz^4}{1-fz^4}+\phi_0. \end{equation} In \eqref{eq:metric}, $i = 1, 2, 3$ labels the orthogonal spatial boundary coordinates, $z$ denotes the 5th-dimensional radial coordinate, and $z = 0$ sets the boundary. $\phi_0$ in \eqref{eq:dilaton} is a constant. We work in units where $R=1$. Note that the dilaton black hole solution is well defined only in the range $0<z<f^{-1/4}$, where $z_f=f^{-1/4}$ determines the position of the singularity and behaves as an IR cutoff. For $a = 0$, it reduces to the dilaton-wall solution. Meanwhile, for $c = 0$, it becomes the Schwarzschild black hole solution. 
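The limits just mentioned are easy to check numerically. A sketch (our illustration; $\phi_0=0$ and $f>0$ assumed) evaluating $A(z)$, $B(z)$ and the dilaton of \eqref{eq:AB}, \eqref{eq:dilaton}:

```python
import numpy as np

def metric_functions(z, a, c, phi0=0.0):
    """A(z), B(z) and the dilaton phi(z) of the dilaton black hole background,
    valid for 0 < z < f^{-1/4} with f = sqrt(a^2 + c^2) > 0.
    For c = 0 the dilaton is constant (Schwarzschild black hole limit)."""
    f = np.sqrt(a ** 2 + c ** 2)
    u = f * z ** 4
    A = (1 + u) ** ((f + a) / (2 * f)) * (1 - u) ** ((f - a) / (2 * f))
    B = (1 + u) ** ((f - 3 * a) / (2 * f)) * (1 - u) ** ((f + 3 * a) / (2 * f))
    phi = (c / f) * np.sqrt(1.5) * np.log((1 + u) / (1 - u)) + phi0
    return A, B, phi
```

For $c=0$ the exponents collapse to $A=1+az^4$, $B=(1-az^4)^2/(1+az^4)$ and $\phi=\phi_0$; for small $z$ the dilaton expands as $\phi\approx\phi_0+\sqrt{6}\,c\,z^4$, matching the near-boundary expansion quoted below.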
Also, for both solutions, expanding the dilaton profile near $z = 0$ gives, \begin{equation}\label{dilatonexpansion} \varphi(z)=\varphi_0+\sqrt{6}\, c\, z^4+.... \end{equation} $c$ is nothing but the holographic gluon condensation parameter. As discussed in \cite{kim2007}, there exists a Hawking–Page transition between the dilaton wall solution and the dilaton black hole solution at some critical value of $a$. The former describes the confined phase, while the latter describes the deconfined phase. The gluon condensate $G_2$ is the vacuum expectation value of the operator $\frac{\alpha_s}{\pi} G^a_{\mu\nu}G^{a,\mu\nu}$, where $G^a_{\mu\nu}$ is the gluon field strength tensor. A non-zero trace of the energy-momentum tensor appears in the full quantum theory of QCD. The anomaly implies a non-zero gluon condensate, which can be calculated as \cite{Colangelo2013,Leutwyler1992,Castorina2007}, \begin{equation}\label{G} \Delta G_2(T)= G_2(T)-G_2(0)=-(\varepsilon(T)-3\,P(T)), \end{equation} where $G_2(T)$ denotes the thermal gluon condensate, $G_2(0)$, equal to the condensate value at the deconfinement transition temperature, is the zero temperature condensate value, $\varepsilon(T)$ is the energy density, and $P(T)$ is the pressure of the QGP system. To account for the effect of rapidity, one starts from a reference frame where the plasma is at rest and the dipole moves with a constant velocity, and then boosts to a reference frame where the dipole is at rest but the plasma moves past it \cite{sif}. Consider a $Q\bar{Q}$ pair moving along the $x_3$ direction with rapidity $\eta$. Correspondingly, we can consider a reference frame in which the plasma is at rest and the dipole moves with a constant rapidity $-\eta$ in the $x_3$ direction. 
Consider the following boost to a reference frame in which the dipole is at rest but the plasma is moving past it \cite{sif}, \begin{eqnarray}\label{eq:boost} dt&\rightarrow& dt \cosh \eta - dx_3 \sinh \eta, \nonumber \\ dx_3 &\rightarrow& -dt \sinh \eta + dx_3 \cosh \eta. \end{eqnarray} Transforming the metric \eqref{eq:metric} with \eqref{eq:boost} we obtain, \begin{eqnarray}\label{eq:metricboosted} ds^2&=& \frac{1}{z^2}\Big{(} A(z)\, dx_{i}^2+[\cosh^2 \eta \, A(z)-\sinh^2 \eta \, B(z)] \, dx_3^2-[\cosh^2 \eta \, B(z)-\sinh^2 \eta \, A(z)]\, dt^2\nonumber \\ &-&2[A(z)- B(z)]\,\sinh \eta \cosh \eta \, dx_3 \, dt +dz^2\Big{)}, \end{eqnarray} where now $i=1,2$. From now on, we consider the dipole in the gauge theory which has a gravitational dual with metric \eqref{eq:metricboosted}. \subsection{Pair alignment transverse to the axis of the quarks, ReV } Consider a dipole moving transverse to the dipole axis. The spacetime target functions are $X^{\mu}=(\tau=t,\sigma= x_1, \text{const},\text{const}, z(x,t))$, and in the static gauge we take $z(x,t)=z(x)$. The heavy quark-antiquark potential energy $V_{Q\bar{Q}}$ of this system is related to the expectation value of a rectangular Wilson loop, \begin{equation}\label{eq:wilsonloohol} \langle W(\mathcal{C}) \rangle \sim e^{-i S_{str}}, \end{equation} where $S_{str}$ is the classical Nambu-Goto action of a string in the bulk, \begin{equation} \label{eq:nambugoto} S_{str} = \frac{1}{2\pi \alpha'} \int d\sigma d\tau\, e^{\frac{\phi(z)}{2}}\sqrt{-\det\left(G_{\mu\nu} \partial_{\alpha} X^{\mu} \partial_{\beta} X^{\nu}\right)}. \end{equation} Plugging $S_{str}$ \eqref{eq:nambugoto} back into \eqref{eq:wilsonloohol}, we extract the real part of $V_{Q\bar{Q}}$. 
Starting from the metric \eqref{eq:metricboosted}, the dilaton field \eqref{eq:dilaton}, and the above $X^{\mu}$, we get, \begin{equation} \label{eq:nambugotoperpstatic} S_{str} = \frac{\mathcal{T}}{2\pi \alpha'} \int_{-L/2}^{L/2} d\sigma \sqrt{f_1(z)\cosh^2 \eta- f_2(z)\sinh^2 \eta + (f_3(z)\cosh^2 \eta- f_4(z)\sinh^2 \eta)z'^2(\sigma) }. \end{equation} The quarks are located at $x_1 = \frac{L}{2}$ and $x_1 = -\frac{L}{2}$, $z'=\frac{dz}{d \sigma}$, and we defined, \begin{eqnarray}\label{eq:f} f_1(z)&=&\frac{\omega^2(z)}{z^4}\, A(z)\,B(z),\nonumber\\ f_2(z)&=&\frac{\omega^2(z)}{z^4} \, A^2(z),\nonumber\\ f_3(z)&=&\frac{\omega^2(z)}{z^4}\, B(z),\nonumber\\ f_4(z)&=&\frac{\omega^2(z)}{z^4}\, A(z), \end{eqnarray} and, \begin{equation}\label{eq:lambda} \omega(z)= e^{\frac{\phi(z)}{2}}= \left(\frac{1+fz^4}{1-fz^4}\right)^{\frac{c}{f}\sqrt{\frac{3}{8}}}. \end{equation} We also write, \begin{eqnarray}\label{F,G} F(z)&=&f_1(z)\cosh^2 \eta- f_2(z)\sinh^2 \eta, \nonumber\\ G(z)&=& f_3(z)\cosh^2 \eta- f_4(z)\sinh^2 \eta, \end{eqnarray} so the action \eqref{eq:nambugotoperpstatic} can be written as, \begin{equation}\label{eq:finalaction} S_{str} = \frac{\mathcal{T}}{2\pi \alpha'} \int_{-L/2}^{L/2} d\sigma \sqrt{F(z) + G(z)z'^2(\sigma) }. \end{equation} The Lagrangian does not depend explicitly on $\sigma=x$, so the associated Hamiltonian is a constant of the motion. With $z_*$ denoting the deepest position of the string in the bulk, the Hamiltonian is, \begin{equation}\label{hamil} H=\frac{F(z)}{\sqrt{F(z) + G(z)z'^2(\sigma) }}=\text{const}=\sqrt{F(z_*)}. \end{equation} From the Hamiltonian \eqref{hamil}, we can write the equation of motion for $z(x)$ as, \begin{equation} \label{eq:eomperp} \frac{dz}{dx}=\Big{[}\frac{F(z)}{G(z)}\Big{(} \frac{F(z)}{F(z_*)}-1 \Big{)} \Big{]}^{\frac{1}{2}}. 
\end{equation} Therefore, \begin{equation} \label{eq:x,L} dx=\Big{[}\frac{F(z)}{G(z)}\Big{(} \frac{F(z)}{F(z_*)}-1 \Big{)} \Big{]}^{-\frac{1}{2}} dz, \end{equation} and we can relate $L$ to $z_*$ as follows, \begin{equation} \label{eq:x,z} \frac{L}{2}=\int_{0}^{z_*}\Big{[}\frac{F(z)}{G(z)}\Big{(} \frac{F(z)}{F(z_*)}-1 \Big{)} \Big{]}^{-\frac{1}{2}} dz. \end{equation} From \eqref{eq:x,z} we find the length of the line connecting both quarks as, \begin{eqnarray}\label{eq:L,z} L&=&2\sqrt{F(z_*)}\int_{0}^{z_*}\Big{[} \frac{G(z)}{F(z)(F(z)-F(z_*))}\Big{]}^{\frac{1}{2}} dz. \end{eqnarray} In the literature \cite{LiuPRL,LiuJHEP} the maximum value of the above length has been used to define a dissociation length for the moving $Q\bar{Q}$ pair, where the dominant configuration for $S_{str}$ is two straight strings (two heavy quarks) running from the boundary to the horizon. If we put \eqref{eq:eomperp} in \eqref{eq:finalaction} the action is written as follows, \begin{equation} \label{eq:Sperpnreg} S_{str} = \frac{\mathcal{T}}{\pi \alpha'} \int_{0}^{z_*} dz \, \sqrt{G(z)} \sqrt{\frac{F(z)}{F(z_*)}} \left[\frac{F(z)}{F(z_*)}-1 \right]^{-1/2}. \end{equation} To regularize the above integral, we write, \begin{align} \label{eq:Sperpreg} S^{reg}_{str} & = \frac{\mathcal{T}}{\pi \alpha'} \int_{0}^{z_*} dz \, \sqrt{G(z)} \sqrt{\frac{F(z)}{F(z_*)}} \left[\frac{F(z)}{F(z_*)}-1 \right]^{-1/2} - \frac{\mathcal{T}}{\pi \alpha'} \int_{0}^{\infty} dz \sqrt{f^0_3(z)}, \end{align} where $f^0_3(z)=f_3(z)\mid_{a\longrightarrow 0}$ (quark self energy). Finally, we proceed from $\mathrm{Re}\,V_{Q\bar{Q}} = S^{reg}_{str}/\mathcal{T}$ to, \begin{align} \label{eq:Revz} \mathrm{Re}\,V_{Q\bar{Q}} & = \frac{\sqrt{\lambda}}{\pi} \int_{0}^{z_*} dz \, \sqrt{G(z)} \sqrt{\frac{F(z)}{F(z_*)}} \left[\frac{F(z)}{F(z_*)}-1 \right]^{-1/2} - \frac{\sqrt{\lambda}}{\pi} \int_{0}^{\infty} dz \sqrt{f^0_3(z)}, \end{align} \begin{figure}[h!] 
\begin{minipage}[c]{1\textwidth} \tiny{(a)}\includegraphics[width=8cm,height=6cm,clip]{vt1} \tiny{(b)}\includegraphics[width=8cm,height=6cm,clip]{vt2} \end{minipage} \caption{$\mathrm{Re}\,V_{Q\bar{Q}}$ as a function of $L$ for a $Q\bar{Q}$ pair oriented transverse to the axis of the quarks, from top to bottom for $\eta=0.8,\, 0.4,\,0$ respectively, in the presence of gluon condensation, for a) $c=0.02$\, GeV$^4$\, and\, b) $c=0.9$\, GeV$^4$. } \label{Revtrans} \end{figure} where $\lambda=\frac{1}{\alpha'^2}$ is the 't Hooft coupling of the gauge theory. Figure \ref{Revtrans} shows the real part of the potential as a function of $L$ with the $Q\bar{Q}$ pair oriented transverse to the axis of the quarks, in the presence of gluon condensation. The results show that increasing rapidity leads to a decrease in the dissociation length, while $c$ has the opposite effect. \subsection{Pair alignment parallel to the axis of the quarks, ReV} We now consider a dipole moving parallel to the dipole axis. The target-space embedding is $X^{\mu}=(\tau=t,\, x_1=\mathrm{const},\, x_2=\mathrm{const},\, \sigma=x_3,\, z(x,t))$, and in the static gauge we take $z(x,t)=z(x)$. Following the same steps that led to \eqref{eq:finalaction}, the action for the new worldsheet is, \begin{equation}\label{eq:finalactionpar} S_{str} = \frac{\mathcal{T}}{2\pi \alpha'} \int_{-L/2}^{L/2} d\sigma \sqrt{f_1(z) + G(z)z'^2(\sigma) }, \end{equation} where $G(z)$ and $f_1(z)$ are defined in \eqref{F,G} and \eqref{eq:f}.
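Both the transverse length integral \eqref{eq:L,z} and its parallel analogue have an integrable inverse-square-root singularity at $z=z_*$. As an illustrative numerical sanity check (ours, not part of the derivation), the sketch below evaluates such length integrals for user-supplied profile functions. The pure-AdS forms $F(z)=G(z)=1/z^4$, corresponding by assumption to $c=0$, $\eta=0$, $\omega=1$ and the zero-temperature limit $A(z)=B(z)=1$, are used only because the result is known there in closed form.

```python
import numpy as np
from math import gamma, pi, sqrt

def dipole_length(F, G, zs, n=4000):
    """L = 2 sqrt(F(z*)) * Int_0^{z*} sqrt(G / (F (F - F(z*)))) dz.

    The substitution z = z*(1 - u**2) removes the inverse-square-root
    singularity at z = z*, so a plain trapezoidal rule converges."""
    Fs = F(zs)
    u = np.linspace(1e-6, 1.0 - 1e-6, n)
    z = zs * (1.0 - u**2)
    # Jacobian |dz/du| = 2 z* u cancels the endpoint divergence.
    integrand = np.sqrt(G(z) / (F(z) * (F(z) - Fs))) * (2.0 * zs * u)
    return 2.0 * np.sqrt(Fs) * 0.5 * np.sum((integrand[1:] + integrand[:-1]) * np.diff(u))

# Illustrative pure-AdS test case (assumed A = B = 1, c = 0, eta = 0).
zs = 1.0
L_num = dipole_length(lambda z: z**-4, lambda z: z**-4, zs)
L_exact = zs * sqrt(pi) * gamma(0.75) / (2.0 * gamma(1.25))
```

Any quadrature that handles the endpoint singularity would do equally well; the substitution is what makes the trapezoidal rule adequate here.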
Similar to the transverse case, we find the length of the line connecting both quarks as, \begin{eqnarray}\label{eq:L,zpar} L&=&2\sqrt{f_1(z_*)}\int_{0}^{z_*}\Big{[} \frac{G(z)}{f_1(z)(f_1(z)-f_1(z_*))}\Big{]}^{\frac{1}{2}} dz, \end{eqnarray} and the real part of the potential as, \begin{align} \label{eq:Revzpar} \mathrm{Re}\,V_{Q\bar{Q}} & = \frac{\sqrt{\lambda}}{\pi} \int_{0}^{z_*} dz \, \sqrt{G(z)} \sqrt{\frac{f_1(z)}{f_1(z_*)}} \left[\frac{f_1(z)}{f_1(z_*)}-1 \right]^{-1/2} - \frac{\sqrt{\lambda}}{\pi} \int_{0}^{\infty} dz \sqrt{f^0_3(z)}. \end{align} \begin{figure}[h!] \begin{minipage}[c]{1\textwidth} \tiny{(a)}\includegraphics[width=8cm,height=6cm,clip]{vp1} \tiny{(b)}\includegraphics[width=8cm,height=6cm,clip]{vp2} \end{minipage} \caption{$\mathrm{Re}\,V_{Q\bar{Q}}$ as a function of $L$ for a $Q\bar{Q}$ pair oriented parallel to the axis of the quarks, from top to bottom for $\eta=0.8,\, 0.4,\,0$ respectively, in the presence of gluon condensation, for a) $c=0.02$\, GeV$^4$\, and\, b) $c=0.9$\, GeV$^4$.} \label{Revpara} \end{figure} Figure \ref{Revpara} shows the real part of the potential as a function of $L$ for several choices of $\eta$, with the $Q\bar{Q}$ pair oriented parallel to the axis of the quarks, in the presence of gluon condensation. As in the previous case, increasing the rapidity decreases the dissociation length, while $c$ has the opposite effect. \begin{figure}[h!] \begin{center}$ \begin{array}{cccc} \includegraphics[width=10cm]{Vcom} \end{array}$ \end{center} \caption{$\mathrm{Re}\,V_{Q\bar{Q}}$ as a function of $L$, for fixed values of $\eta$ and $c$, as a comparison between the parallel and the transverse cases. The solid black line shows the parallel case and the dashed red line the transverse case. } \label{fig:VpvsVs} \end{figure} Figure \ref{fig:VpvsVs} shows a comparison between the real part of the potential for the parallel and the transverse cases.
Although the difference is not significant, the plots show that the effect of the gluon condensation is slightly stronger for the parallel case. In other words, increasing $c$ increases the dissociation length in both the transverse and the parallel cases (previous figures), but this effect appears stronger when the dipole moves parallel to the axis of the quarks. \section{Imaginary potential of moving $Q\bar{Q}$ in presence of gluon condensation }\label{se:ImV} In this section, we calculate the imaginary potential using the thermal worldsheet fluctuations method for both the transverse and parallel cases. \subsection{ Pair alignment transverse to the axis of the quarks, ImV } Consider worldsheet fluctuations around the classical configuration, working in the coordinate $r=\frac{1}{z}$, \begin{equation}\label{zfluc} r(x) = r_*(x) \rightarrow r(x) = r_*(x) + \delta r (x), \end{equation} so that the fluctuations enter the string partition function as, \begin{equation}\label{Zwithfluc} Z_{str} \sim \int \mathcal{D} \delta r(x) e^{i S_{NG} (r_*(x) + \delta r (x))}. \end{equation} An imaginary part of the potential then arises whenever the action acquires an imaginary contribution. Dividing the interval of $x$ into $2N$ points (where $N\longrightarrow\infty$) we obtain, \begin{equation}\label{Zwithflucfin} Z_{str} \sim \lim_{N\to \infty}\int d [\delta r(x_{-N})] \ldots d[ \delta r(x_{N})] \exp{\left[ i \frac{\mathcal{T} \Delta x}{2 \pi \alpha'} \sum_j \sqrt{\tilde{G} \, r_j^{'2} +\tilde{F}}\right]}, \end{equation} where $\tilde{G}$ and $\tilde{F}$ are functions of $r_j$.
We expand $r_*(x_j)$ around $x=0$ and keep terms only up to second order, since thermal fluctuations matter mainly around $r_\ast$, i.e. around $x=0$, \begin{equation}\label{zstarexpan} r_*(x_j) \approx r_\ast + \frac{x_j^2}{2} r_*''(0). \end{equation} Considering small fluctuations we have, \begin{equation}\label{Fwithfluc} \tilde{F} \approx \tilde{F}_* + \delta r \tilde{F}'_* + r_*''(0) \tilde{F}'_* \frac{x_j^2}{2} + \frac{\delta r^2}{2} \tilde{F}''_*, \end{equation} where $\tilde{F}_\ast\equiv \tilde{F}(r_\ast)$ and $\tilde{F}'_\ast\equiv \tilde{F}'(r_\ast)$. The action is written as, \begin{equation}\label{actionwithroots} S^{NG}_j = \frac{\mathcal{T} \Delta x}{2 \pi \alpha'} \sqrt{C_1 x_j^2 + C_2}, \end{equation} where $C_1$ and $C_2$ are given as follows, \begin{equation}\label{C1} C_1 = \frac{r_*''(0)}{2} \left[ 2 \tilde{G}_* r_*''(0) + \tilde{F}_*' \right], \end{equation} \begin{equation}\label{C2} C_2 = \tilde{F}_* + \delta r \tilde{F}'_* + \frac{\delta r^2}{2} \tilde{F}''_*. \end{equation} To have $\mathrm{Im}\, V_{Q\bar{Q}}\neq 0$, the function under the square root in \eqref{actionwithroots} must be negative. We therefore consider the $j$-th contribution to $Z_{str}$, \begin{equation}\label{Ij} I_j \equiv \int\limits_{\delta r_{j min}}^{\delta r_{j max}} D(\delta r_j) \, \exp{\left[ i \frac{\mathcal{T} \Delta x}{2 \pi \alpha'} \sqrt{C_1 x_j^2 + C_2} \right]}, \end{equation} with, \begin{equation}\label{Ddeltaz} D(\delta r_j) \equiv C_1 x_j^2 + C_2(\delta r_j), \end{equation} which is extremized at, \begin{equation} \label{deltaz} \delta r = - \frac{\tilde{F}'_*}{\tilde{F}''_*}. \end{equation} Hence $D(\delta r_j)<0$, i.e. $-x_*<x_j<x_*$, leads to an imaginary part of the square root, where, \begin{equation}\label{xstar} x_* = \sqrt{\frac{1}{C_1}\left[\frac{\tilde{F}'^2_*}{2\tilde{F}''_*} - \tilde{F}_* \right]}. \end{equation} If the argument of this square root is negative we set $x_*=0$.
With all these conditions we can approximate $D(\delta r)$ by $D(-\tilde{F}'_{\ast}/\tilde{F}''_{\ast})$ in $I_j$ as, \begin{equation}\label{Ijapprox} I_j \sim \exp \left[ i \frac{\mathcal{T} \Delta x}{2 \pi \alpha'} \sqrt{C_1 x_j^2 + \tilde{F}_* - \frac{\tilde{F}'^2_*}{2\tilde{F}''_*}} \right]. \end{equation} The total contribution to the imaginary part is obtained in the continuum limit, \begin{equation}\label{ImV} \mathrm{Im} \, V_{Q\bar{Q}} = -\frac{1}{2\pi \alpha'} \int\limits_{|x|<x_*} dx \sqrt{-x^2 C_1 - \tilde{F}_* + \frac{\tilde{F}'^2_*}{2\tilde{F}''_*}},\, \end{equation} which leads to, \begin{equation}\label{ImVr} \mathrm{Im} \, V_{Q\bar{Q}} = -\frac{1}{2 \sqrt{2} \alpha'} \sqrt{\tilde{G}_*} \left[\frac{\tilde{F}'_*}{2\tilde{F}''_*}-\frac{\tilde{F}_*}{\tilde{F}'_*} \right]. \end{equation} Note that \eqref{ImVr} gives the imaginary potential in the $r$ coordinate. Changing variables back to $z=\frac{1}{r}$, appropriate to our background, we obtain, \begin{equation}\label{ImVz} \mathrm{Im} \, V_{Q\bar{Q}} = -\frac{1}{2 \sqrt{2} \alpha'} \sqrt{G_*} z^2_*\left[\frac{F_*}{z^2_*F'_*}-\frac{z^2_* F'_*}{4z^3_* F'_*+2 z_*^4\, F''_*} \right], \end{equation} where $F$ is again a function of $z$. In \eqref{ImVz} the following condition should be satisfied for the square root, \begin{equation}\label{ImVcondition} \frac{B(z_*)}{A(z_*)}> \tanh^2 \eta. \end{equation} \begin{figure}[h!]
\begin{minipage}[c]{1\textwidth} \tiny{(a)}\includegraphics[width=8cm,height=6cm,clip]{ivt1} \tiny{(b)}\includegraphics[width=8cm,height=6cm,clip]{ivt2} \end{minipage} \caption{$\mathrm{Im}\,V_{Q\bar{Q}}$ as a function of $LT$ for a $Q\bar{Q}$ pair oriented transverse to the axis of the quarks, from left to right for $\eta=0.8,\, 0.4,\, 0$ respectively, in the presence of gluon condensation, for a) $c=0.02$\, GeV$^4$\, and\, b) $c=0.9$\, GeV$^4$.} \label{Imvtrans} \end{figure} Figure \ref{Imvtrans} shows the imaginary potential as a function of $LT$ for several choices of $\eta$, with the $Q\bar{Q}$ pair oriented transverse to the axis of the quarks, in the presence of gluon condensation. With increasing rapidity the imaginary part of the potential becomes nonzero at smaller values of $LT$ and its magnitude is diminished, implying that quarkonium melts more easily at higher rapidity, consistent with the results of \cite{Fadafan2016}. Thus our results show that the $Q\bar{Q}$ pair's thermal width decreases with increasing rapidity relative to the plasma, while $c$ has the opposite effect. \subsection{Pair alignment parallel to the axis of the quarks, ImV} Starting from the action \eqref{eq:finalactionpar} and following the same approach that led to \eqref{ImVz}, we obtain the imaginary potential of a pair moving parallel to the axis of the quarks as, \begin{equation}\label{ImVzpar} \mathrm{Im} \, V_{Q\bar{Q}} = -\frac{1}{2 \sqrt{2} \alpha'} \sqrt{G_*} z^2_*\left[\frac{f_{1*}}{z^2_*f'_{1*}}-\frac{z^2_* f'_{1*}}{4z^3_* f'_{1*}+2 z_*^4\, f''_{1*}} \right]. \end{equation} \begin{figure}[h!]
\begin{minipage}[c]{1\textwidth} \tiny{(a)}\includegraphics[width=8cm,height=6cm,clip]{ivp1} \tiny{(b)}\includegraphics[width=8cm,height=6cm,clip]{ivp2} \end{minipage} \caption{$\mathrm{Im}\,V_{Q\bar{Q}}$ as a function of $LT$ for a $Q\bar{Q}$ pair oriented parallel to the axis of the quarks, from left to right for $\eta=0.8,\, 0.4,\, 0$ respectively, in the presence of gluon condensation, for a) $c=0.02$\, GeV$^4$\, and\, b) $c=0.9$\, GeV$^4$.} \label{Imvpara} \end{figure} Figure \ref{Imvpara} shows the imaginary potential as a function of $LT$ for several choices of $\eta$, with the $Q\bar{Q}$ pair oriented parallel to the axis of the quarks, in the presence of gluon condensation. The parameter $c$ of the gravitational dual encodes the gluon condensation on the QCD side of the duality. With increasing gluon condensation the imaginary part of the potential becomes nonzero at larger values of $LT$, i.e. the onset of the imaginary potential shifts to larger $LT$. Our results thus indicate that the thermal width of the $Q\bar{Q}$ pair increases with increasing gluon condensation. Similar to the transverse case, these effects are opposite to those of the rapidity. \begin{figure}[h!] \begin{center}$ \begin{array}{cccc} \includegraphics[width=10cm]{ImVcom} \end{array}$ \end{center} \caption{$\mathrm{Im}\,V_{Q\bar{Q}}$ as a function of $LT$, for fixed values of $\eta$ and $c$, as a comparison between the parallel and the transverse cases. The black line (right) shows the parallel case and the red line (left) the transverse case. } \label{fig:ImVpvsImVs} \end{figure} Figure \ref{fig:ImVpvsImVs} shows a comparison between the imaginary part of the potential for the parallel and the transverse cases. Similar to $\mathrm{Re}\,V$ in figure \ref{fig:VpvsVs}, the plots show that the effect of the gluon condensation is stronger for the parallel case. As shown previously, increasing $c$ increases the thermal width in both the transverse and the parallel cases.
This effect appears stronger when the dipole moves parallel to the axis of the quarks. While the effects of a magnetic field \cite{ziqiang} and of a chemical potential were found to be more important for the transverse case, the gluon condensation has a stronger impact in the parallel case. \section{Conclusions}\label{sec:Conclusions} In this work we investigated the potential of a moving quark-antiquark pair in a plasma, considering the effect of gluon condensation. Taking into account the thermal fluctuations of the worldsheet of the holographic Nambu-Goto string, we calculated the thermal width of the moving quark-antiquark pair for the cases where the axis of the $Q\bar{Q}$ pair is transverse or parallel to its direction of motion in the plasma. Our results indicate that increasing gluon condensation results in an increase in the dissociation length. We found the dependence of $\mathrm{Im}\,V_{Q\bar{Q}}$ on the rapidity and the gluon condensation. While the thermal width of the pair is strongly suppressed with increasing $\eta$, $c$ acts as an amplifying parameter for the thermal width. In other words, the thermal width of the $Q\bar{Q}$ pair increases with increasing gluon condensation, which is the opposite of the rapidity effect. As found in \cite{zhangPLB803}, in the presence of gluon condensation increasing temperature leads to easier quarkonium melting, and the drop of the gluon condensate near $T_c$ enhances quarkonium dissociation, in agreement with our results. It would be interesting to check analytically whether the nonzero imaginary potential found by continuing the string configurations into the complex plane \cite{alba2008} agrees with the present results. We hope to work on this topic in the future.\\ \\ \textbf{Acknowledgement}\\ The authors would like to thank Kazem Bitaghsir Fadafan for useful comments.
This work was supported by Strategic Priority Research Program of Chinese Academy of Sciences (XDB34030301) and the CAS President's International Fellowship Initiative, PIFI (2021PM0065).
\section{Introduction} Relaxation dynamics are a paradigm for describing complex dynamical phenomena spanning condensed matter \cite{datt87}, polymeric \cite{doi86}, granular \cite{edwa94} and single-molecule systems \cite{noe11,chen07}, and even cellular regulatory networks \cite{walc12}. Relaxation concepts underlie most spectroscopic methods \cite{datt87}. Moreover, our understanding of metastability is built entirely on the properties of relaxation spectra \cite{lang69,bovi15,biro01,tana03,tana04}. Complementary to relaxation processes are the statistics of the first passage time, the time a random process reaches a prescribed threshold value for the first time. First passage time statistics in turn are central to the kinetics of chemical reactions \cite{haen90,redn01,metz14a,kope88,szab80,beni10,ben93,osha95,beni14,guer16,meji11}, signaling in biological cells \cite{beni14,guer16,meji11,holc14,gode16a,gode15,gode16,gode17,greb16,greb17,greb18,vacc15}, transport in disordered media \cite{avra00}, the foraging behavior of bacteria and animals \cite{berg93,bell90,paly14}, up to the spreading of diseases \cite{lloy01,hufn04} or stock market dynamics \cite{mant00}. Further important applications of first passage concepts include the persistence properties in non-equilibrium systems \cite{bray13,maju01,maju02} and stochastic thermodynamics \cite{neri17,garr17,rold17,ging17,fuch16,nguy17}. Both relaxation and first passage processes are essential for theories building on a diffusive exploration of (free) energy landscapes $U(x)$ \cite{wale04}, which have proven to be particularly invaluable in explaining the kinetics of chemical reactions \cite{kram40,schu81}, protein dynamics \cite{frau91,onuc97} incl. recent single protein folding experiments \cite{neup16}, and the dynamics of supercooled liquids and glasses \cite{biro01,tana03,tana04,garr02}. 
Whereas relaxation processes can be understood intuitively in terms of the eigenmodes and eigenvalues of the underlying Fokker-Planck or Kramers operators \cite{biro01,tana03,tana04,bark14}, general, and in particular intuitive results about the full first passage time statistics are much sparser, and currently do not reach beyond a crude division between so-called direct and indirect first-passage trajectories for the simplest smooth potential landscapes \cite{beni10,beni14,gode16a,gode16,gode17}. Our general understanding of first passage phenomena would therefore substantially benefit from a deeper connection to the corresponding relaxation process. Indeed, in the limit of high energy barriers a well-known link relates $\lambda_1^{-1}$, the longest relaxation time, and the mean first passage time to surmount the highest barrier in the landscape \cite{lang69,bovi15,biro01,tana03,tana04,matk81}. However, in spite of the immense success and universal applicability of this approximate relationship, an explicit bridge between first passage and relaxation time-scales has not been explored further. Here we establish such a link rigorously for microscopically reversible Markovian dynamics. We prove that the first-passage process to an effectively one-dimensional target is in fact the \emph{dual} to the corresponding relaxation process. The duality takes the form of a \emph{spectral interlacing} of characteristic time-scales, in which each pair of successive relaxation time-scales encloses a first-passage time-scale. We establish an explicit relationship between relaxation and first-passage spectra, and express the full statistics of first passage time exactly in terms of the relaxation eigensystem. As a case study we consider a diffusive exploration of a triple-well potential. Moreover, exploiting the duality we disentangle first passage time statistics in general rugged energy landscapes. 
We argue why our results are important for a quantitative understanding of the occurrence of diseases related to protein misfolding. The paper is organized as follows. In Sec.~\ref{sec:main} we expose the duality between first passage and relaxation processes. In Sec.~\ref{sec:triple} we determine the first passage time statistics in a simple triple-well potential and demonstrate that knowing the full probability density is mandatory in studies of many-particle first passage problems in the few encounter limit. In Sec.~\ref{sec:rugged} we determine the first passage time density for a truly rugged energy landscape generated by a truncated Karhunen-Lo\`eve expansion of a Wiener process. The large deviation limit of the first passage time distribution, which is relevant for single-molecule first passage problems, is presented in Sec.~\ref{sec:mu1}. We conclude in Sec.~\ref{sec:conclusion}. A proof of the duality between first passage and relaxation is relegated to \ref{Appendix1}. \section{First passage time density from the relaxation spectrum} \label{sec:main} We consider reversible Markovian dynamics in continuous time governed by a Fokker-Planck operator $\mathbf{L}=\partial_x D(x)[\beta U'(x)+\partial_x]$, where $x$ is the position, $U(x)$ a potential with $U'(x)\equiv \partial_x U(x)$, and $D(x)$ the diffusion landscape. We assume $\beta$ to be the inverse temperature, such that according to the fluctuation-dissipation theorem $\beta D(x)$ is the inverse friction coefficient. For any initial condition $x_0$ the dynamics governed by the Fokker-Planck operator $\mathbf{L}$ relaxes to the Boltzmann distribution $P_{\mathrm{eq}}(x)=\mathrm{e}^{-\beta U(x)}/\int\mathrm{e}^{-\beta U(x)}{\rm d} x$. 
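As a minimal numerical illustration (a sketch of ours, not taken from the paper), the operator $\mathbf{L}$ can be discretized on a uniform grid as a reversible birth-death rate matrix whose nearest-neighbour rates obey detailed balance with respect to $\mathrm{e}^{-\beta U}$; its stationary state is then the discrete Boltzmann distribution and all remaining eigenvalues are negative. The quadratic potential below is a hypothetical test case.

```python
import numpy as np

def fokker_planck_generator(U, beta=1.0, D=1.0, dx=1.0):
    """Discretize L = d/dx D (beta U' + d/dx) as a rate matrix acting on
    probability column vectors, dP/dt = L P, with reflecting ends; the
    hopping rates satisfy detailed balance with respect to exp(-beta U)."""
    n = len(U)
    L = np.zeros((n, n))
    for i in range(n - 1):
        dU = beta * (U[i + 1] - U[i])
        w_f = (D / dx**2) * np.exp(-0.5 * dU)   # rate i -> i+1
        w_b = (D / dx**2) * np.exp(+0.5 * dU)   # rate i+1 -> i
        L[i + 1, i] += w_f; L[i, i] -= w_f
        L[i, i + 1] += w_b; L[i + 1, i + 1] -= w_b
    return L

x = np.linspace(-3.0, 3.0, 201)
U = x**2                                # hypothetical test potential (beta U)
L = fokker_planck_generator(U, dx=x[1] - x[0])
p_eq = np.exp(-U); p_eq /= p_eq.sum()   # discrete Boltzmann distribution
```

By construction $\mathbf{L}\,P_{\mathrm{eq}}=0$ holds exactly, and the remaining spectrum is strictly negative, mirroring the continuum statement above.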
Adopting the bra-ket notation we expand $\mathbf{L}$ in a complete bi-orthogonal set of left and right eigenstates, $\mathbf{L}=-\sum_{k}\lambda_k|\psi^\mathrm{R}_k\rangle \langle\psi^\mathrm{L}_k|$, $\lambda_k$ denoting the eigenvalues and $\langle\psi^\mathrm{L}_k|\psi^\mathrm{R}_l\rangle=\delta_{kl}$ with $|\psi^\mathrm{L}_k\rangle\equiv\mathrm{e}^{\beta U(x)}|\psi^\mathrm{R}_k\rangle$. The propagator encoding the probability to be at $x$ at a time $t$ after starting from $x_0$ at $t_0=0$, is defined as \begin{equation} P(x,t|x_0)\equiv \langle x| \mathrm{e}^{\mathbf{L}t}|x_0\rangle=\sum_{k}\langle x |\psi^\mathrm{R}_k\rangle\langle\psi^\mathrm{L}_k|x_0\rangle\mathrm{e}^{-\lambda_kt}. \label{propagator} \end{equation} Since we assumed temporally homogeneous dynamics, we can define the first passage time probability density from some $x_0$ to a target at $a$, $\wp_a(t|x_0)$, by the renewal theorem \cite{sieg51} \begin{equation} P(x,t|x_0)=\int_0^t\wp_a(\tau|x_0)P(x,t-\tau|a)d\tau, \label{renewal} \end{equation} where either $x_0<a\le x$, or symmetrically $x_0>a\ge x$. Eq.~(\ref{renewal}) follows from a direct enumeration of paths between $x_0$ and $x$, which by construction must pass through $a$. Laplace transforming Eq.~(\ref{renewal}) we obtain $\tilde{\wp}_a(s|x_0)=\tilde{P}(x,s|x_0)/\tilde{P}(x,s|a)$, which is the starting point of our analysis. Using Eq.~(\ref{propagator}), which after Laplace transform reads $\tilde P(x,s|x_0)=\sum_{k}(s+\lambda_k)^{-1}\langle x |\psi^\mathrm{R}_k\rangle\langle\psi^\mathrm{L}_k|x_0\rangle$, yields \begin{equation} \tilde{\wp}_a(s|x_0)=\frac{\sum_{k}(s+\lambda_k)^{-1}\langle x |\psi^\mathrm{R}_k\rangle\langle\psi^\mathrm{L}_k|x_0\rangle}{\sum_{k}(s+\lambda_k)^{-1}\langle x |\psi^\mathrm{R}_k\rangle\langle\psi^\mathrm{L}_k|a\rangle}. 
\label{fpt} \end{equation} The Laplace transform of the first passage time density $\tilde{\wp}_a(s|x_0)$ is a meromorphic function having simple poles $-\mu_k$ on the negative real axis \cite{keil64}. Moreover, the poles have no accumulation point in the left half plane ($\mathrm{Re}(s)<0$). Similarly, $\tilde{P}(y,s|x)$ is meromorphic with simple poles $-\lambda_k$ arranged along the non-positive real axis. In particular, $\lambda_0=0$ and $\langle x |\psi^\mathrm{R}_0\rangle\langle\psi^\mathrm{L}_0|x_0\rangle=P_{\mathrm{eq}}(x)$. The Laplace transforms of the propagators $\tilde{P}(x,s|x_0)$ and $\tilde{P}(x,s|a)$ have coinciding poles, while the poles of $\tilde{\wp}_a(s|x_0)$ are those zeroes of $\tilde{P}(x,s|a)$ that are different from the zeroes of $\tilde{P}(x,s|x_0)$. Generally, $\tilde{P}(x,s|x_0)$ and $\tilde{P}(x,s|a)$ have infinitely many coinciding zeroes alongside the distinct ones (see proof in \cite{hart18a_arxiv}), because the region beyond $a$ cannot affect the first passage time from $x_0$, whereas it must affect the relaxation. However, all common zeroes result in a vanishing residue. One can prove that setting $x=a$ in Eqs.~(\ref{renewal}) and (\ref{fpt}) guarantees that \emph{all relevant} eigenvalues (i.e., those satisfying $\langle a|\psi^\mathrm{R}_k\rangle\neq0$) of the relaxation and first passage processes interlace \begin{equation} \lambda_{k-1} < \mu_k <\lambda_{k}, \quad \forall k\ge1, \label{interlacing} \end{equation} which is due to the fact that $\langle a |\psi^\mathrm{R}_k\rangle\langle\psi^\mathrm{L}_k|a\rangle>0$ (see also \cite{hart18a_arxiv}). Based on the interlacing in Eq.~(\ref{interlacing}) we are now in a position to determine the entire first passage time statistics from the relaxation eigenspectrum, $\{|\psi^\mathrm{R}_k\rangle,\langle\psi^\mathrm{L}_k| ,\lambda_k\}$. The calculation is rather involved and is sketched in \ref{Appendix1} (see also \cite{hart18a_arxiv}); here we simply state the result.
Introducing $\bar{\mu}_k=(\lambda_k+\lambda_{k-1})/2$ the corresponding first passage eigenvalues $\mu_k$ are given exactly in the form of a convergent Newton's series \begin{equation} \mu_k=\bar{\mu}_k+\sum_{n=1}^{\infty}f_0(k)^nf_1(k)^{1-2n}\frac{\mathrm{det}\boldsymbol{\mathcal{A}}_n(k)}{(n-1)!}, \label{eigenv} \end{equation} with the almost triangular $(n-1)\times(n-1)$ matrix $\boldsymbol{\mathcal{A}}_n(k)$ with elements \begin{equation} \label{element} \mathcal{A}^{i,j}_n(k)=\frac{f_{(i-j+2)}(k)\Theta(i-j+1)}{(i-j+2)!}\Big[n(i-j+1)\Theta(j-2) +i\Theta(1-j)+j-1\Big], \end{equation} where $\Theta(l)$ denotes the discrete Heaviside step function ($\Theta(l)=1$ if $l\ge0$), and symbolically we set $\mathrm{det}\boldsymbol{\mathcal{A}}_1\equiv 1$. Setting $k^{\ast}=k$ if $\tilde{P}(a,-\bar{\mu}_k|a)<0$ and $k^{\ast}=k-1$ otherwise, $f_n(k)$ in Eqs.~(\ref{eigenv}-\ref{element}) are defined by \begin{equation} \label{coefs} \begin{aligned} f_0(k)&=\langle a |\psi^\mathrm{R}_{k^{\ast}}\rangle\langle\psi^\mathrm{L}_{k^{\ast}}|a\rangle+\sum_{l\ne k^{\ast}}{\langle a |\psi^\mathrm{R}_l\rangle\langle\psi^\mathrm{L}_l|a\rangle\frac{(\bar{\mu}_k-\lambda_{k^{\ast}})}{(\bar{\mu}_k-\lambda_l)}},\\ f_{n\ge 1}(k)&=n!\sum_{l\ne k^{\ast}}{\langle a |\psi^\mathrm{R}_l\rangle\langle\psi^\mathrm{L}_l|a\rangle \frac{(\lambda_l-\lambda_{k^{\ast}})}{(\bar{\mu}_k-\lambda_l)^{n+1}}}. \end{aligned} \end{equation} Using Eq.~(\ref{coefs}) the determinant $\det \boldsymbol{\mathcal{A}}_n(k)$ of the almost triangular matrix with elements \eqref{element} is fully characterized, which in turn determines the first passage eigenvalue Eq.~\eqref{eigenv}. 
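The interlacing \eqref{interlacing} can be made tangible in a discrete setting (an illustration of ours, using a hypothetical double-well potential): for a reversible nearest-neighbour chain, the relaxation generator with a reflecting boundary at the target and the first-passage generator with the target site made absorbing differ by the deletion of one row and column, so Cauchy's interlacing theorem for the symmetrized (Jacobi) matrix reproduces $\lambda_{k-1}<\mu_k<\lambda_k$.

```python
import numpy as np

def generator(U, dx):
    """Reversible birth-death generator with reflecting ends (dP/dt = L P)."""
    n = len(U); L = np.zeros((n, n))
    for i in range(n - 1):
        wf = np.exp(-0.5 * (U[i + 1] - U[i])) / dx**2   # rate i -> i+1
        wb = np.exp(+0.5 * (U[i + 1] - U[i])) / dx**2   # rate i+1 -> i
        L[i + 1, i] += wf; L[i, i] -= wf
        L[i, i + 1] += wb; L[i + 1, i + 1] -= wb
    return L

x = np.linspace(-1.5, 1.5, 140)          # target a = right end of the grid
U = 5.0 * (x**2 - 1.0)**2                # hypothetical double-well, beta*U
L = generator(U, x[1] - x[0])

# Symmetrize with Boltzmann weights so both spectra are manifestly real.
s = np.exp(-U / 2.0)
H = L * s[None, :] / s[:, None]          # H = P_eq^{-1/2} L P_eq^{1/2}
lam = np.sort(-np.linalg.eigvalsh(H))            # relaxation rates, lam[0] ~ 0
mu = np.sort(-np.linalg.eigvalsh(H[:-1, :-1]))   # target absorbed: first passage
```

The deleted row/column corresponds to the target site, so the first-passage rates $\mu_k$ fall strictly between consecutive relaxation rates.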
It is now easy to obtain $\wp_a(t|x_0)$ by inverting the Laplace transform in Eq.~(\ref{fpt}) using Cauchy's residue theorem yielding \cite{hart18a_arxiv} \begin{equation} \wp_a(t|x_0)=\sum_{k\ge 1}w_k(x_0)\mu_k\mathrm{e}^{-\mu_kt}, \label{first} \end{equation} where the spectral weights as a function of the initial condition $x_0$, $w_k(x_0)$, are given by \begin{equation} w_k(x_0)=\frac{\sum_{l}(1-\lambda_l/\mu_k)^{-1}\langle a |\psi^\mathrm{R}_l\rangle\langle\psi^\mathrm{L}_l|x_0\rangle}{\sum_{l}(1-\lambda_l/\mu_k)^{-2}\langle a |\psi^\mathrm{R}_l\rangle\langle\psi^\mathrm{L}_l|a\rangle}, \label{weight} \end{equation} where we only sum over relevant $\lambda_l$, and the weights are normalized $\sum_{k\ge 1}w_k(x_0)=1$. Moments of the first passage time follow immediately, \begin{equation} \langle t^n_a(x_0)\rangle=n!\sum_{k\ge 1}w_{k}(x_0)\mu_k^{-n} \label{eq:moments} \end{equation} We note that while the first weight is necessarily positive, $w_1(x_0)>0$, the other weights $w_k(x_0)$ can be negative for $k>1$. In the following section we will show how this duality between first passage and relaxation can be used to determine and understand more deeply the full first passage time distribution. For the sake of completeness we briefly comment on the ``reverse'' direction of the duality. The starting point is the renewal theorem \eqref{renewal} for $x=a$ in Laplace space, which reads $\tilde{\wp}_x(s|x_0)=\tilde P(x,s|x_0)/\tilde P(x,s|x)$. Using standard Green's function theory \cite{hart18a_arxiv} it can be shown after some straightforward but tedious algebra that \begin{equation} \tilde P(x,s|x_0)=\sigma_{\pm}\frac{\e^{\beta U(x_0)-\beta U(x)}\tilde \wp_{x_0}(s|x)}{D\frac{\partial}{\partial x_0}\ln[\tilde\wp_{x_0}(s|x)\tilde\wp_{x}(s|x_0)]} \label{dualityInv} \end{equation} holds, where $\sigma_{\pm}=-1$ if $x_0<x$ and $\sigma_{\pm}=+1$ if $x_0>x$. 
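Equations (\ref{first})-(\ref{eq:moments}) can be checked numerically on a discretized chain (again an illustration of ours, with a hypothetical double-well potential): the relaxation eigensystem with a reflecting boundary at the target yields the weights via Eq.~(\ref{weight}), and the resulting mean first passage time $\sum_k w_k/\mu_k$ can be compared with a direct linear solve on the absorbing sub-generator.

```python
import numpy as np

def generator(U, dx):
    """Reversible birth-death generator, dP/dt = L P, reflecting at both ends."""
    n = len(U); L = np.zeros((n, n))
    for i in range(n - 1):
        wf = np.exp(-0.5 * (U[i + 1] - U[i])) / dx**2
        wb = np.exp(+0.5 * (U[i + 1] - U[i])) / dx**2
        L[i + 1, i] += wf; L[i, i] -= wf
        L[i, i + 1] += wb; L[i + 1, i + 1] -= wb
    return L

x = np.linspace(-1.5, 1.5, 100)
U = 5.0 * (x**2 - 1.0)**2            # hypothetical double-well, beta*U
L = generator(U, x[1] - x[0])
a, x0 = len(x) - 1, 15               # target = rightmost site, start in left well

# Relaxation eigensystem with a reflecting boundary at the target.
s = np.exp(-U / 2.0)
H = L * s[None, :] / s[:, None]                  # symmetrized generator
lam_h, phi = np.linalg.eigh(H)
lam = -lam_h                                     # relaxation rates (one ~ 0)
R = phi * s[:, None]                             # right eigenvectors <x|psi_R_k>
Lft = phi / s[:, None]                           # left eigenvectors  <psi_L_k|x>

# First-passage rates mu_k: target site made absorbing.
mu = np.sort(-np.linalg.eigvalsh(H[:-1, :-1]))

# Spectral weights, Eq. (weight), and the mean first passage time, Eq. (moments).
w = np.empty(len(mu))
for k in range(len(mu)):
    num = np.sum(R[a] * Lft[x0] / (1.0 - lam / mu[k]))
    den = np.sum(R[a] * Lft[a] / (1.0 - lam / mu[k]) ** 2)
    w[k] = num / den
t_spec = np.sum(w / mu)

# Direct mean first passage time from the absorbing sub-generator.
t_direct = np.linalg.solve(-L[:-1, :-1].T, np.ones(len(x) - 1))[x0]
```

The weights sum to one and the spectral mean reproduces the direct solve, which is the discrete counterpart of the duality stated above.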
For the remainder of the paper we will focus solely on the explicit forward duality, since it allows us to efficiently determine the full first passage time density. \section{Triple-well potential} \label{sec:triple} \subsection{First passage time density} \begin{figure} \includegraphics[width=\textwidth]{multiplot} \caption{(a) Triple well potential $\beta U(x)=(x^6 -6x^4 + 0.15x^3 + 8x^2)/2$ (line) and corresponding invariant measure $P_{\mathrm{eq}}(x)\propto\psi^\mathrm{R}_0(x)$ (shaded). Highlighted are the initial conditions $x_0=-1.2$ and $x_1=0.6$ alongside three target positions $a_1=-0.2$, $a_2=0.9$ and $a_3=1.7$; (b) Right eigenvectors $\psi^\mathrm{R}_k(x)\equiv\langle x |\psi^\mathrm{R}_k\rangle$ for the four slowest eigenmodes of the relaxation process with a reflecting boundary at $a_3$; (c) Spectra of characteristic time scales for relaxation ($\lambda_k^{-1}$, filled gray circles), and first passage ($\mu_k^{-1}$, open color symbols) processes, with the boundaries at $a_1$ to $a_3$, respectively; we used reflecting boundaries for the determination of $\lambda_k^{-1}$ and absorbing boundaries for $\mu_k^{-1}$; (d) First passage time probability densities -- the lines correspond to Eq.~(\ref{first}) and the symbols to Brownian dynamics simulations, each of an ensemble of $2\times 10^6$ trajectories with an integration step $\Delta t=10^{-5}$.} \label{fg1} \end{figure} As a case study we analyze the first passage time statistics in a triple well potential (see Fig.~\ref{fg1}a). Understanding the diffusive exploration of multi-well potentials is important from a biophysical perspective, as it underlies e.g. the folding \cite{neup16,bryn89}, misfolding \cite{yu15,dee16}, conformational dynamics \cite{noe11,chen07} and aggregation of proteins and peptides \cite{dobs03,zhen13} as well as (bio)chemical reactions \cite{kram40,schu81}.
We computed the first 40 left and right relaxation eigenvectors, $\langle x|\psi^\mathrm{R}_k\rangle$ and $\langle \psi^\mathrm{L}_k|a\rangle$, and eigenvalues $\lambda_k$ numerically using a reflecting boundary condition at the target $a$. The four lowest $\langle x|\psi^\mathrm{R}_k\rangle$ of the relaxation process are depicted in Fig.~\ref{fg1}b. From $\{\lambda_k,|\psi^\mathrm{R}_k\rangle,\langle \psi^\mathrm{L}_k|\}$ we calculate the first 30 $\mu_k$ and $w_k(x_0)$ using the duality, i.e. Eqs.~(\ref{eigenv}) and (\ref{weight}). The spectrum of first passage eigenvalues is depicted in Fig.~\ref{fg1}c, with the corresponding first passage time probability densities shown in Fig.~\ref{fg1}d (lines) and compared to the result of Brownian dynamics simulations (symbols). We find an excellent agreement between theory and simulations. Note that the deviations of the theoretical results from simulations observed on extremely short timescales are a direct consequence of truncating the sums in Eqs.~(\ref{propagator}) and (\ref{first}) (i.e., we considered 40 eigenvalues in the relaxation spectrum and 30 first passage eigenvalues). We now link metastability to the first passage time behavior. A potential $U(x)$ has metastable states if the minima are separated by high barriers $>k_\mathrm{B}T$. The probability mass in the ground state $P_{\mathrm{eq}}(x)$ is concentrated around these minima. The barriers give rise to a separation of time-scales between inter-well (see, e.g., $\psi^\mathrm{R}_1,\psi^\mathrm{R}_2$ in Fig.~\ref{fg1}b) and intra-well dynamics (see $\psi^\mathrm{R}_{k>2}$), and thus create gaps in the relaxation spectrum \cite{bovi15,biro01,tana03,tana04}. As a result we observe in Fig.~\ref{fg1}c (see filled gray circles) two gaps $0=\lambda_0\ll\lambda_1\ll\lambda_2<\lambda_3$ when the reflecting boundary is at $a_1$, corresponding to the crossing of a single barrier. 
Conversely, three gaps, $0\ll\lambda_1\ll\lambda_2\ll\lambda_3<\lambda_4$, appear when the reflecting boundary is at $a_3$, corresponding to the global relaxation to $P_{\mathrm{eq}}(x)$, to direct transitions between the leftmost and right-most wells, and to the transition to the central well from both sides, respectively (see Fig.~\ref{fg1}b). These gaps are independent of $x_0$. Due to the interlacing (Eq.~(\ref{interlacing})), and because $\mu_1\ne 0$, the $N$ gaps in the relaxation spectrum reflecting all the metastable basins translate to $N-1$ gaps in the first passage spectrum due to the $N-1$ barriers. The first passage spectrum is shifted to shorter times, since, contrary to relaxation, \emph{all trajectories} must surmount the barriers. The spectral weights $w_k$ depend on the initial position, gauging the contribution of each relaxation mode with respect to the given first passage time-scale $\mu_k^{-1}$ (see Eq.~(\ref{weight})). The four lowest $w_k$ for the first passage process $x_0\to a_3$ are shown in Fig.~\ref{fg_w}a. \begin{figure} \centering \includegraphics[width=\textwidth]{weights} \caption{First passage in the triple-well potential from Fig.~\ref{fg1} to fixed target position $a_3=1.7$. (a)~Lowest four weights $w_k(x_0)$ determined from Eq.~(\ref{weight}) for the first passage process from $x_0$ to fixed target $a=a_3$ in the triple-well potential depicted in Fig.~\ref{fg1}a as a function of the starting point $x_0$. (b)~Mean first passage time (solid black line) corroborated by Brownian simulations (symbols) versus slowest time-scale approximation $\langle t_a \rangle_1^{\rm slowest}=w_1(x_0)/\mu_1$ from Eq.~(\ref{tslowest}). (c)~$\phi_N=\langle t_a \rangle_N/\langle t_a \rangle_N^{\rm slowest}$ comparing the mean first passage time of $N$ particles, $\langle t_a (x_0)\rangle_N$, with the one-scale approximation $\langle t_a (x_0)\rangle_N^{\rm slowest}=w_1(x_0)^N/(N\mu_1)$ (see Eq.~(\ref{tslowest})) for various $N$.
Lines denote the theory and symbols the simulation results.} \label{fg_w} \end{figure} In view of \cite{gode16,gode17} (see also \cite{beni14}) we now separate all first passage trajectories into two classes --- the so-called `globally indirect' and the rest. The class of `globally indirect' trajectories includes those exploring the entire accessible phase space prior to absorption. These trajectories therefore arrive on the slowest time-scale $\mu_1^{-1}$ and their associated weight $w_1$ is approximately the fraction of all first passage trajectories that reach quasi-equilibrium before hitting the target. Correspondingly, $w_1$ -- the weight of globally indirect trajectories -- decreases as the starting position $x_0$ approaches the target at $a$ (see e.g. blue solid line in Fig.~\ref{fg_w}a for $a=1.7$). In other words, the closer $x_0$ is to the target the less likely globally indirect trajectories become. Pushing this picture even further we can also identify in Fig.~\ref{fg1}d ($x_1\to a_3$) a second pronounced time-scale $\mu_2^{-1}$ with weight $w_2$, reflecting what we may call `locally indirect' trajectories -- those that first equilibrate locally within the central well but cross the second barrier without returning to the left, deepest well. Comparing the second weight $w_2$ from Fig.~\ref{fg_w}a (see dash-dotted red line) and the potential landscape Fig.~\ref{fg1}a we find that `locally indirect' trajectories are most pronounced, in the sense that $w_2$ attains its largest value, if the starting position is within the central well. The locally indirect trajectories account for local equilibration prior to absorption and become relevant as soon as the potential landscape has more than one deep free energy basin, such as the one depicted in Fig.~\ref{fg1}.
Our work therefore extends the present understanding of first passage processes \cite{beni14,gode16,gode17} by explicitly identifying locally indirect trajectories -- those equilibrating only locally prior to absorption. For $x_0$ within the central well the fraction of globally indirect trajectories decreases, and locally indirect trajectories become likelier, i.e. $w_2$ increases. Concurrently, higher spectral weights also grow, rendering direct trajectories more likely. As a result, an additional time-scale appears, giving rise to a second `bump' in $\wp_a(t)$ (see Fig.~\ref{fg1}d, blue lines). This reasoning extends to arbitrary landscapes; $w_k,\mu_k$ reflect a hierarchy of time-scales, on which trajectories equilibrate locally in the sequence of all intervals between consecutive basins and $a$, before hitting $a$. The highest modes encode direct trajectories. \subsection{Few encounter kinetics require the full first passage time distribution} The full first passage time statistics are crucial for kinetics in the few-encounter limit, when only the first of many particles needs to find the target \cite{gode16,gode17}. We illustrate this with the first passage time statistics of a non-interacting $N$-particle system. The $N$-particle survival probability --- the probability that none of the $N$ particles starting from $x_0$ has reached the target until time $t$ --- is simply given by \begin{equation} \label{survival} \mathcal{P}_a(t|x_0)^N\equiv\left[\int_t^{\infty}\wp_a(\tau|x_0)d\tau\right]^N=\left[\sum_{k>0}w_k(x_0)\e^{-\mu_k t}\right]^N, \end{equation} where we have inserted Eq.~(\ref{first}). We note that if the initial conditions were not identical, i.e. if particle $i$ started from $x_i\neq x_0$ for $i=1,\ldots, N$, one would replace the survival probability $\mathcal{P}_a(t|x_0)^N$ by the product $\prod_{i=1}^N \mathcal{P}_a(t|x_i)$. For convenience, we will restrict our discussion to the scenario in which all particles start from the same position.
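The $N$-particle survival probability of Eq.~(\ref{survival}) is straightforward to evaluate once a first passage spectrum is at hand. The following sketch uses placeholder values for $\{\mu_k, w_k\}$ (not the spectrum computed for Fig.~\ref{fg1}); the only structural requirement is $\sum_k w_k=1$, so that $\mathcal{P}_a(0|x_0)=1$.

```python
import math

# Hypothetical spectral data (mu_k, w_k) standing in for the first passage
# spectrum of Eq. (first); the weights sum to one, so that the
# single-particle survival probability starts at P_a(0|x0) = 1.
mu = [0.1, 2.0, 15.0]
w = [0.7, 0.2, 0.1]

def survival(t, N=1):
    """N-particle survival probability, Eq. (survival)."""
    single = sum(wk * math.exp(-mk * t) for wk, mk in zip(w, mu))
    return single ** N

print(survival(0.0, N=5))                        # 1.0: nobody has arrived at t = 0
print(survival(1.0, N=1) > survival(1.0, N=5))   # True: more particles decay faster
```

Raising the single-particle survival to the power $N$ is all that is needed; the decay visibly accelerates with $N$.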
Using the survival probability \eqref{survival} the $N$-particle first passage time density follows directly from the single particle case \begin{equation} \wp_a^{(N)}(t|x_0)\equiv-\frac{\partial}{\partial t} \mathcal{P}_a(t|x_0)^N= N\wp_a(t|x_0) \mathcal{P}_a(t|x_0)^{N-1}, \label{Ndensity} \end{equation} which is the probability density that one of $N$ particles reaches the target $a$ at time $t$ under the condition that none of the remaining $N-1$ particles has arrived before. Obviously, $N$ particles will find the target on average in a shorter time than a single particle. More precisely, the mean first passage time in the many particle setting reads \begin{equation} \langle t_a(x_0)\rangle_N\equiv\int_0^{\infty}t\wp_a^{(N)}(t|x_0){\rm d} t=\int_0^{\infty}\mathcal{P}_a(t|x_0)^N{\rm d} t, \end{equation} where we have inserted Eq.~(\ref{Ndensity}) and performed an integration by parts in the last step. Let us now focus on the mean first passage time and start with single-particle exploration $(N=1)$, in which case the mean according to Eq.~\eqref{eq:moments} is simply given by $ \langle t_a(x_0)\rangle_1=\sum_{k>0}w_k(x_0)/\mu_k$. If there are free energy barriers between the initial position of the particle and the target, which lead to the emergence of a local equilibrium before reaching $a$, i.e., $\mu_2\gg\mu_1$, we expect the mean first passage time to be well approximated by the slowest timescale $ \langle t_a(x_0)\rangle_1\approx w_1(x_0)/\mu_1\equiv \langle t_a(x_0)\rangle_1^\text{slowest}$. The dominance of the single slowest time-scale is fully corroborated by simulations as depicted in Fig.~\ref{fg_w}b (compare solid black line and dash-dotted magenta line). Notably, the approximation $\langle t_a(x_0)\rangle_1\approx w_1(x_0)/\mu_1$ $(\gg w_2(x_0)/\mu_2)$ can be accurate even if the barriers are not located between $x_0$ and $a$ as can be seen in Fig.~\ref{fg_w} for $x_0\gtrsim 1$ (see Fig.~\ref{fg1}a for potential landscape).
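The spectral formula $\langle t_a(x_0)\rangle_1=\sum_{k>0}w_k(x_0)/\mu_k$ and its slowest-scale truncation can be compared directly. The spectrum below is synthetic, chosen only to exhibit a gap $\mu_2\gg\mu_1$ with $w_1/\mu_1$ dominating the sum.

```python
import math

# Synthetic first passage spectrum with a pronounced gap mu_2 >> mu_1,
# mimicking a high barrier; values are illustrative, not from the paper.
mu = [0.01, 5.0, 40.0]
w = [0.95, 0.04, 0.01]

mfpt_exact = sum(wk / mk for wk, mk in zip(w, mu))   # <t>_1 = sum_k w_k / mu_k
mfpt_slowest = w[0] / mu[0]                          # <t>_1 ~ w_1 / mu_1

rel_err = abs(mfpt_exact - mfpt_slowest) / mfpt_exact
print(rel_err)   # small: the slowest time-scale dominates the mean
```

The relative error of the one-term truncation is set by the subdominant ratios $w_k/\mu_k$, in line with the single-particle accuracy seen in Fig.~\ref{fg_w}b.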
This can be explained intuitively by the fact that $w_{1}(x_0)$ is approximately the splitting probability that the particle will reach the deepest potential well at $x^\dagger\approx-1.8$ before hitting the target $a=a_3$, multiplied by the average time to leave the deepest basin in the potential, which is $1/\mu_1$ \cite{gode16}. In the $N$ particle setting the ``slowest'' first passage rate is simply $\mu_1^{(N)}=N\mu_1$, such that the long-time limit of the first passage time density is given by $\wp^{(N)}_a(t|x_0)\simeq w_{1}^{(N)}(x_0)\mu_1^{(N)}\e^{-\mu_1^{(N)}t}$ with weight $w_{1}^{(N)}(x_0)=w_{1}(x_0)^N$. Utilizing only long-time asymptotics in the $N$-particle system gives \begin{equation} \langle t_a(x_0)\rangle_N^\text{slowest}=\frac{w_1(x_0)^N}{N\mu_1}. \label{tslowest} \end{equation} Comparing the exact $\langle t_a (x_0) \rangle_N$ with this approximation in terms of $\phi_N=\langle t_a (x_0) \rangle_N/\langle t_a (x_0) \rangle_N^{\rm slowest}$ (see Fig.~\ref{fg_w}c) reveals, however, that the long-time approximation can be orders of magnitude off, despite its accuracy in the single particle setting. In particular, it underestimates $\langle t_a (x_0) \rangle_N$ for distant $x_0$ up to approximately the point $x_0\approx-1.1$ (see curves with $1\le N\le20$), where $w_2$ changes sign, from which point on it overestimates $\langle t_a (x_0) \rangle_N$. Increasing $N$ beyond $N=20$ shifts the first passage towards shorter time scales, rendering higher modes corresponding to $w_{k\ge 3}$ more relevant, which finally yields a systematically longer mean first passage time than expected from the single time-scale estimate (see magenta line with filled rectangles, $N=100$, in Fig.~\ref{fg_w}c). The large discrepancy as a result of neglecting direct and locally indirect trajectories grows further with increasing $N$, and highlights the importance of understanding the full first passage time statistics.
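The breakdown of Eq.~(\ref{tslowest}) beyond the single-particle case can be reproduced on a toy two-mode spectrum (illustrative values, not fitted to the triple-well potential): already for $N=2$ the exact $\langle t_a\rangle_N=\int_0^\infty \mathcal{P}_a(t|x_0)^N\,{\rm d}t$ deviates markedly from the single-scale estimate.

```python
import math

# Two-mode toy spectrum (illustrative); compare the exact N-particle mean
# first passage time with the long-time estimate w_1^N / (N mu_1), Eq. (tslowest).
mu1, mu2 = 0.5, 5.0
w1, w2 = 0.6, 0.4
N = 2

def surv(t):
    return (w1 * math.exp(-mu1 * t) + w2 * math.exp(-mu2 * t)) ** N

# trapezoidal quadrature of the N-particle survival probability
dt, T = 1e-3, 60.0
ts = [i * dt for i in range(int(T / dt) + 1)]
mfpt_N = dt * (sum(surv(t) for t in ts) - 0.5 * (surv(0.0) + surv(T)))

mfpt_slowest = w1 ** N / (N * mu1)
phi_N = mfpt_N / mfpt_slowest
print(phi_N)   # > 1: here the long-time estimate underestimates <t_a>_N
```

For $N=2$ the integral can also be done in closed form, $w_1^2/(2\mu_1)+2w_1w_2/(\mu_1+\mu_2)+w_2^2/(2\mu_2)$, against which the quadrature can be checked.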
The lines in Fig.~\ref{fg_w}c are obtained with the duality relation presented in Sec.~\ref{sec:main} and are fully corroborated by Brownian dynamics simulations (symbols). We now inspect the shape of the distribution upon increasing the number of particles in Fig.~\ref{fig_narrowing}. We can identify in Fig.~\ref{fig_narrowing}a two competing effects that eventually lead to a canonical narrowing of the first passage time distribution for $N\gg 1$. First, according to Eq.~(\ref{Ndensity}) the $N$-particle density is proportional to $\mathcal{P}_a(t|x_0)^{N-1}$, with $\mathcal{P}_a(t|x_0)$ being a strictly monotonically decaying function that formally satisfies $\mathcal{P}_a(0|x_0)=1$ and $\mathcal{P}_a(\infty|x_0)=0$. By increasing the number of particles $N$ the weight of the survival probability $\mathcal{P}_a(t|x_0)^{N-1}$ is progressively shifted towards shorter time-scales (see Fig.~\ref{fig_narrowing}b), i.e., the long-time asymptotics are shifted towards shorter times for increasing $N$, thereby decreasing the width of the probability density. Second, at short times the single-particle first passage probability density for a generic diffusion process vanishes \cite{kamp93,beij03,redn01}, $\lim_{t\to 0}\wp_a(t|x_0)=0$. We will refer to this feature as the ``short-time cutoff'', which according to Eq.~\eqref{Ndensity} prevails for any number of particles $N$. Hence the combination of the suppression of long-time asymptotics and the short-time cutoff inevitably leads to a narrowing of the first passage time distribution, irrespective of the details of the underlying dynamics. Further studies specifically targeting the short time limit of first passage time distributions can be found in \cite{kamp93,beij03}. \begin{figure} \centering \includegraphics{narrowing} \caption{Narrowing of the first passage time density in the few-encounter limit. (a)~$N$-particle density for $x_0\to a_1$.
The thick solid line corresponds to the solid blue line in Fig.~\ref{fg1}d. The diamonds depict the respective mean first passage times $\langle t_a(x_0)\rangle_N$. (b)~The survival probability $\mathcal{P}_a(t|x_0)^{N-1}$ that none of the remaining $N-1$ particles has reached the target. According to Eq.~(\ref{survival}) each colored curve in (a) is the product of the thick curve $N=1$ and the corresponding survival probability in (b). } \label{fig_narrowing} \end{figure} In general, the `$N$-particle' first passage problem is essential for describing nucleation kinetics, since the occurrence of the first stable nucleus triggers the spontaneous growth of the new phase (see e.g. \cite{kamp93,beij03}). A particular form thereof is misfolding-triggered protein aggregation, which results in many diseases \cite{yu15,dee16,dobs03,zhen13}. Namely, in many-protein systems the free energy minimum does not correspond to a folded state, but rather to an aggregate of misfolded proteins \cite{dobs03,zhen13}. Misfolding of a single protein, which indeed occurs by slow diffusion in a rough energy landscape \cite{yu15,dee16}, seeds aggregation similar to a nucleation phenomenon. To predict the onset of aggregation and hence disease from the protein's energy landscape, an understanding of the full first passage time statistics is required, and our work provides the foundations to do so. In the following section we briefly show that our exact theory from Sec.~\ref{sec:main} can also be applied to systems with truly rugged energy landscapes. \section{Rugged energy landscapes} \label{sec:rugged} In the previous section we have demonstrated that our theory from Sec.~\ref{sec:main} can readily be used to obtain first passage time densities for multi-well barrier crossing problems with barrier heights $>k_{\rm B}T$.
Moreover, we have discussed the few-encounter limit, for which it is imperative to have access to the full first passage time distribution, since any attempt to explain many-particle first passage kinetics by single-particle moments is prone to fail. To model a rugged energy landscape containing, in addition to high barriers, also barriers which are $\lesssim k_{\rm B} T$, we use a parabolic potential plus a Karhunen-Lo\`eve expansion of a realization of a Brownian motion \begin{equation} U(x)=x^2/4+\sum_{k=1}^Kz_k\frac{\sin[(2k-1)x]}{(2k-1)}, \label{UKL} \end{equation} where we have truncated the potential after $K$ terms and where $z_k$ are Gaussian random numbers. Once the $z_k$ are generated we keep them constant. In Fig.~\ref{fig_rugged}a we depict the potential generated from Eq.~(\ref{UKL}) with $K=16$. As before, we determine the eigenvalues $\{\lambda_k\}$ and eigenfunctions $\{\psi^\mathrm{R}_k(x)\}$ of the relaxation process with $\psi^\mathrm{R}_0$ being the equilibrium Boltzmann density (see left panel of Fig.~\ref{fig_rugged}). Exploiting the theory from Sec.~\ref{sec:main} we obtain the first passage time density $\wp_a(t|x_0)$ in Fig.~\ref{fig_rugged}c (see solid black line), which is corroborated by extensive Brownian dynamics simulations (see blue open circles). The inset of Fig.~\ref{fig_rugged}c depicts the first passage density on a linear scale. In order to indicate the short-time cutoff, which is dominated by diffusive transport, we also plot the short-time asymptotics for free diffusion $(U(x)=0)$. In Fig.~\ref{fig_rugged}d we depict the corresponding $N$-particle first passage time densities for the few-encounter limit, which clearly reveal the drastic narrowing of the first passage time distribution arising from the aforementioned interplay between the diffusive short-time cutoff and the suppression of the long-time asymptotics for increasing $N$.
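A frozen realization of the potential in Eq.~(\ref{UKL}) is easy to reproduce; the sketch below uses the $z_k$ listed in the caption of Fig.~\ref{fig_rugged}.

```python
import math

# z_k drawn once and then frozen (values from the caption of Fig. rugged)
z = [-0.2, -0.83, -0.93, 1.05, -0.79, 0.55, 0.61, 1.96,
     -0.88, -1.41, -1.53, -0.31, -0.75, -0.43, -0.46, 1.34]

def U(x, K=16):
    """Rugged potential of Eq. (UKL): harmonic well plus a truncated
    Karhunen-Loeve expansion of a Brownian-motion realization."""
    return x * x / 4.0 + sum(zk * math.sin((2 * k - 1) * x) / (2 * k - 1)
                             for k, zk in zip(range(1, K + 1), z))

print(U(0.0))   # 0.0: every sine term vanishes at the origin
```

Setting $K=0$ recovers the bare parabola $x^2/4$, so the ruggedness can be switched on and off for testing.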
This example illustrates that our theory can readily be applied to arbitrarily rough potential landscapes. \begin{figure} \includegraphics{rugged} \caption{First passage in a rugged energy landscape. (a)~Potential from Eq.~(\ref{UKL}) with the corresponding equilibrium measure $\psi^\mathrm{R}_0(x)\propto P_{\rm eq}(x)$, $K=16$ and $(z_1,\ldots,z_{16})=(-0.2,$ $-0.83,$ $-0.93,$ $1.05,$ $-0.79,$ $0.55,$ $0.61,$ $1.96,$ $-0.88,$ $-1.41,$ $-1.53,$ $-0.31,$ $-0.75,$ $-0.43,$ $-0.46,$ $1.34)$. The first passage from $x_0=1.5$ to $a=2.4$ is considered. (b)~First four excited-state eigenfunctions of the relaxation process. (c)~First passage time density $\wp_a(t|x_0)$ on a log-log-scale with the corresponding plot on a linear scale depicted in the inset. The solid black line is determined using the duality Eqs.~(\ref{eigenv}-\ref{weight}) while the blue symbols are obtained by simulating $10^6$ trajectories. The dashed green line corresponds to the short-time asymptotics of the density $\wp_a^\text{free}(t|x_0)=(2\sqrt{\pi t^3})^{-1}|a-x_0|\exp[-(a-x_0)^2/(4t)]$ for a free Brownian motion ($U(x)=0$; see, e.g., Ref.~\cite{gode16}). (d)~$N$-particle first passage time density $\wp_a^{(N)}(t|x_0)$.} \label{fig_rugged} \end{figure} \section{Large deviation limit} \label{sec:mu1} For single-particle problems the mean first passage time as well as higher moments are typically dominated by the long-time asymptotics of the first passage time distribution, which we have also demonstrated in Fig.~\ref{fg_w}b for the triple-well potential. The long-time limit is encoded in the principal first passage eigenvalue $\mu_1$. As we demonstrate in \ref{Appendix1}, the principal eigenvalue $\mu_1$ can be obtained in a simplified manner by formally setting $\bar\mu_k=0$ and $k^\ast=0$ in Eq.~(\ref{coefs}). 
Moreover, a powerful approximation can be obtained by truncating in Eq.~(\ref{coefs}) all coefficients with $n>2$ (for a formal justification see last paragraph of \ref{Appendix1}), \begin{equation} \mu_1\simeq\tilde{\mu}_1=\frac{\sigma_1(a)}{2\sigma_2(a)}\left[\sqrt{1+4\frac{P_{\mathrm{eq}}(a)\sigma_2(a)}{\sigma_1(a)^2}}-1\right], \label{principal} \end{equation} where we introduced $\sigma_n(a)\equiv \sum_{l\ge 1}\langle a |\psi ^R_l\rangle\langle\psi ^L_l|a\rangle/\lambda_l^{n}$. Since Eq.~\eqref{principal} is derived from a Taylor expansion around $s=0$ (see Eq.~\eqref{eq:A_F0}) it is expected to be quite accurate as soon as the formal condition $\mu_1\ll\lambda_1$ is met, which in turn translates self-consistently into $\tilde\mu_1\ll\lambda_1$. The relative error $\epsilon=|\mu_1-\tilde\mu_1|/\mu_1$ is expected to scale as $\epsilon\propto(\tilde\mu_1/\lambda_1)^2$. For example, in the presence of at least one high barrier and as long as $a$ is not the deepest point of $U(x)$ the condition $\lambda_1\gg \mu_1$ is indeed satisfied (see, e.g. Fig. \ref{fg1}a,c). Thus, rescaling $\wp_a(t)$ according to Eq.~(\ref{principal}), all curves must collapse for long times onto a unit exponential $\wp_a(t=\theta/\tilde{\mu}_1)/(w(x_0)\tilde{\mu}_1)=\mathrm{e}^{-\theta}$, which is indeed fully confirmed in Fig.~\ref{fg2}. The relative errors for the triple-well potential (see open colored symbols) are strictly bounded, $|\mu_1-\tilde{\mu}_1|/\mu_1<0.02$ for any $a$. \begin{figure} \centering \includegraphics{quadratic} \caption{Rescaled first passage time probability density, $\wp(t)/(w(x_0)\tilde{\mu}_1)$, obtained by rescaling the simulation results from Fig.~\ref{fg1}d (colored open symbols) and Fig.~\ref{fig_rugged}c (filled magenta rectangles) by Eqs.~(\ref{weight}) and (\ref{principal}). The line denotes a unit exponential.
} \label{fg2} \end{figure} More generally, Eq.~(\ref{principal}) holds for relaxation spectra obtained under a reflecting boundary at $a$, as well as for natural boundary conditions if there is no deeper minimum beyond $a$. If furthermore $P_{\mathrm{eq}}(a)\to 0$ in Eq.~(\ref{principal}) (i.e. the case of `rare-event' absorption), then $\tilde{\mu}_1\simeq P_{\mathrm{eq}}(a)/\sigma_1(a)$, where particularly \begin{equation} \sigma_1(a)=\int_0^\infty [P(a,t|a)-P_{\mathrm{eq}}(a)]{\rm d} t. \end{equation} Eq.~(\ref{principal}) generalizes the `Poissonization' phenomenon observed in \cite{gode16,gode17}. We note that Eq.~(\ref{principal}) can also accurately describe the long-time first passage asymptotics in rugged energy landscapes with an arbitrary number of lower barriers (i.e. $<k_{\rm B}T$; see closed magenta rectangles in Fig.~\ref{fg2}). Further technical remarks including an extension to discrete state systems can be found in \cite{hart18a_arxiv}. \section{Conclusion} \label{sec:conclusion} This paper establishes rigorously the duality between relaxation and first-passage processes for ergodic reversible Markovian dynamics. Based on the duality, an intuitive explanation of first passage time statistics in general rugged energy landscapes is provided. The full first passage time statistics are shown to be required for explaining correctly the kinetics in the few-encounter limit -- particularly relevant cases thereof are the triggering of diseases by protein misfolding and related nucleation-limited phenomena. In addition, we obtained accurate large deviation asymptotics dominating the mean first passage time, which emerge from a time-scale separation in the relaxation process. We show in \cite{hart18a_arxiv} that all concepts presented here can readily be extended to discrete state-space network dynamics, which, \textit{inter alia}, extends the duality between first passage and relaxation to higher dimensional networks.
Notably, they allowed us to determine, for the first time, analytically the full first passage time statistics of the Ornstein-Uhlenbeck process (see \cite{hart18a_arxiv}). Our work provides an exact unified framework for studying the full statistics of first passage time under detailed balance conditions. Generalizations to irreversible dynamics will be pursued in our future studies. \ack The financial support from the German Research Foundation (DFG) through the \emph{Emmy Noether Program "GO 2762/1-1"} (to AG) is gratefully acknowledged.
\section{Introduction} \label{sec:intro} This work is concerned with the existence theory for the stochastic Schr\"odinger equation with fractional multiplicative random noise of the form \begin{equation} \label{SSE} \left\{ \begin{array}{l} d \Psi=i \Delta \Psi dt-i \Psi d B_t^H -ig(\Psi)dt, \qquad t>0, \qquad x \in \mathbb R^n, \quad n\leq 3\\ \Psi(t=0,\cdot)=\Psi_0, \end{array} \right. \end{equation} where $B_t^H\equiv B^H(t,x)$ is an infinite dimensional fractional Brownian motion in time and smooth in the space variables. The sense of the stochastic integral will be made precise later on and the term $g(\Psi)$ is non-linear. We limit ourselves to $n \leq 3$ for physical considerations, but the theory should hold for arbitrary $n$ with adjustments of some hypotheses. Our interest in such a problem is motivated by the study of the propagation of paraxial waves in random media that are both strongly oscillating and slowly decorrelating in the variable associated to the distance of propagation. Such media are encountered for instance in turbulent atmosphere or in the earth's crust \cite{dolan,sidi}. More precisely, it is well-known that the wave equation reduces in the paraxial approximation \cite{TAP} to the Schr\"odinger equation on the envelope function $\Psi$, which reads in three dimensions $$ i \partial_z \Psi=-\Delta_\bot \Psi + V(z,x) \Psi, \qquad z \in \mathbb R^+, \quad x\in \mathbb R^2, $$ where $z$ is the direction of propagation of the collimated beam, $x=(x_1,x_2)$ is the transverse plane, $\Delta_\bot=\partial_{x_1}^2+\partial_{x_2}^2$, and $V$ is a random potential accounting for the fluctuations of the refraction index. If $V$ is stationary and its correlation function $R(z,x):=\mathbb E \{V(z+u,x+y) V(u,y)\}$ has the property that $$ R(z,x) \underset{z \to \infty}{\sim} z^{-\alpha} R_0(x), $$ where $R_0$ is a smooth function and $0<\alpha<1$, then the process $V$ presents long-range correlations in the $z$ variable since $R$ is not integrable.
Rescaling $V$ as $V \to \varepsilon^{-\frac{\alpha}{2}} V(z/\varepsilon,x)$ and invoking the non-central limit theorem, see e.g. \cite{taqqu}, one may expect formally when $\varepsilon \to 0$ that $$ \frac{1}{\varepsilon^{\frac{\alpha}{2}}} V(\frac{z}{\varepsilon},x) \Psi^\varepsilon(z,x) \tolaw \Psi(z,x) dB^H(z,x), $$ where $B^H$ is a Gaussian process with correlation function \begin{equation} \label{corrB} \mathbb E\{B^H(z,x+y)B^H(z',x) \}=\frac{1}{2H(2H-1)} (z^{2H}+(z')^{2H}-|z-z'|^{2H})R_0(x), \end{equation} with $H=1-\frac{\alpha}{2} \in (\frac{1}{2},1)$. Proving this fact is an open problem, while the short-range case (when $R$ is integrable) was addressed in \cite{BCF-SIAP-96,Garnier-ITO}, and the limiting wave function $\Psi$ is shown to be a solution to the It\^o-Schr\"odinger equation. Our starting point here is \fref{SSE}, where we added a non-linear term $g(\Psi)$ to account for possible non-linearities arising for instance in non-linear optics. Let us be more precise now about the nature of the stochastic integral in \fref{SSE}. Since \fref{SSE} is obtained as a formal asymptotic limit of an $L^2$ norm preserving Schr\"odinger equation, one may legitimately expect the limiting equation to also preserve the $L^2$ norm. The appropriate stochastic integral should therefore be of Stratonovich type; in the context of fractional Brownian motions, such integrals are encountered in the literature as pathwise integrals of various types, e.g. symmetric, forward, or backward \cite{biagini-book,zahle}. Since in our case of interest the Hurst index $H$ is greater than $\frac{1}{2}$, all integrals are equivalent and can be seen as Riemann-Stieltjes integrals of appropriate functions, see \cite{biagini-book,zahle} and section \ref{prelim} for more details. Such an integral is well-defined for instance if both integrands are of H\"older regularity with respective indices $\beta$ and $\gamma$ such that $\beta+\gamma>1$ \cite{zahle}.
In the context of SPDEs, the infinite dimensional character of the Gaussian process is usually addressed within two frameworks, whether for standard or fractional Brownian motions: the $Q-$(fractional) Brownian type, or the cylindrical type, see \cite{daprato}. The first class is more restrictive and requires the correlation operator $Q$ in the space variables to be a positive trace class operator (or even more for fractional Brownian motions, see \cite{mas-nualart}); in the second class, it is only supposed that $Q$ is a positive self-adjoint operator on some Hilbert space with appropriate Hilbert-Schmidt embeddings. As was done in \cite{mas-nualart} for parabolic equations with multiplicative fractional noise, we will assume our noise is of $Q-$fractional type, which yields direct pathwise (almost sure) estimates on $B^H$ in some functional spaces. The cylindrical case is more difficult and our approach does not seem to generalize to it. The $Q-$fractional case actually excludes stationary in $x$ correlation functions of the form \fref{corrB} since they lead to a cylindrical type noise, which is a drawback of our assumptions. This latter case, even in the more favorable situation of parabolic equations, seems to still be open. Standard Brownian motions are more amenable to cylindrical noises since the It\^o isometry holds. In the case of fractional type integrals, the ``It\^o isometry'' involves the Malliavin derivative of the process, which is difficult to handle in the context of SPDEs with multiplicative noise. Hence, an existence theory for the Schr\"odinger equation in some average sense seems more involved to achieve, and we thus focus on a pathwise theory which requires the $Q-$Brownian assumption in our setting. Stochastic ODEs with fractional Brownian motion were investigated in great generality in \cite{nualart-rasc}. 
Stochastic PDEs with fractional multiplicative noise are somewhat difficult to study and to the best of our knowledge, the most advanced results in the field are those of Maslowski et al \cite{mas-nualart}, Duncan et al \cite{duncan-mas-mult} or Grecksch et al \cite{grecksch}. The reference \cite{duncan-mas-mult} involves finite dimensional fractional noises, which is a limitation. Several other works deal with additive noise, which is a much more tractable situation as stochastic integrals are seen as Wiener integrals \cite{duncan-mas,tindel,gautier} and cylindrical type noises are allowed. References \cite{mas-nualart,duncan-mas-mult,grecksch} consider variations of parabolic equations of the form \begin{equation} \label{equ} d u= A u dt+u d B_t^H, \end{equation} where $A$ is the generator of an analytic semigroup $S$, and the equation can be complemented with non-linear terms and a time-dependency in $A$ in \cite{duncan-mas-mult}. The noise $B^H$ in these references is a $Q-$fractional Brownian motion with possibly additional assumptions. The difficulty is naturally to make sense of the term $u d B_t^H$ and to show that $u$ is H\"older in time. In that respect, the analyticity hypothesis is crucial: indeed, the standard technique to analyze \fref{equ} is to use mild solutions of the form $$ u(t)=S(t)u_0+\int_0^t S(t-s)u(s)d B^H_s, $$ and for the integral to exist, one needs the term $S(t-s)u(s)$ to be roughly of H\"older regularity in time with index greater than $1-H$. This means that both $u$ and the semigroup $S$ need such a regularity. While the term $u(s)$ can be treated in the fixed point procedure, the semigroup $S(t-s)$ has to be sufficiently smooth in time, which holds for analytic semigroups, but not in the case of $C_0$ unitary groups generated by $i \Delta$ in the Schr\"odinger equation.
In the latter situation, one can ``trade'' some regularity in time for $S$ with some spatial regularity on $u$, but this procedure does not seem to be exploitable in a fixed point procedure. Another possibility could be to take advantage of the regularizing properties of the Schr\"odinger semigroup that provide a gain of almost half a derivative in space, and therefore of almost a quarter of a derivative in time \cite{cazenave}. It looked to us rather delicate to follow such an approach since the smoothing effects hold for particular topologies involving spatial weights which are fairly intricate to handle in our problem, even by using the classical exchange regularity/decay for the Schr\"odinger equation. The strictly linear case (i.e. when $g=0$ in \fref{SSE}) can likely be treated by brute force with iterated Wiener integrals and the Hu-Meyer formula, but this approach does not carry over to the non-linear setting. We propose in this work a different route than the mild formulation and a quite simple remedy based on two direct observations: (i) the usual change of variables formula holds for the pathwise stochastic integral and (ii) using it along with a change of phase removes the noise and leads to a Schr\"odinger equation with magnetic vector potential $A(t,x)=-\nabla B^H(t,x)$. Forgetting for the moment the non-linear term $g(\Psi)$, introducing $\varphi(t,x):=e^{i B^H(t,x)} \Psi(t,x)$ the filtered wavefunction, and supposing without loss of generality that $B^H(0,x)=0$, this yields the system \begin{equation} \label{SEM} \left\{ \begin{array}{l} i \partial_t \varphi = -\Delta_{B_t^H} \varphi, \qquad \Delta_{B_t^H} =e^{i B_t^H}\circ \Delta \circ e^{-i B_t^H}\\ \varphi(t=0,\cdot)=\Psi_0, \end{array} \right. \end{equation} which is a standard Schr\"odinger equation with a time-dependent Hamiltonian.
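For fixed $t$ and in one dimension the conjugated Laplacian expands as $e^{iB}\partial_x^2(e^{-iB}\varphi)=\varphi''-2iB'\varphi'-iB''\varphi-(B')^2\varphi$, which exhibits the magnetic vector potential $A=-\partial_x B$. The finite-difference check below verifies this expansion numerically; here $B$ and $\varphi$ are arbitrary smooth test functions, not the actual noise.

```python
import math, cmath

# Numerical 1D check of the conjugated Laplacian behind Eq. (SEM):
# e^{iB} d2/dx2 (e^{-iB} phi) = phi'' - 2i B' phi' - i B'' phi - (B')^2 phi.
B = math.sin                            # test phase, B(x) = sin(x)
phi = lambda x: math.exp(-x * x)        # test wavefunction, phi(x) = exp(-x^2)

def lhs(x, h=1e-4):
    f = lambda y: cmath.exp(-1j * B(y)) * phi(y)
    d2 = (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2   # central 2nd difference
    return cmath.exp(1j * B(x)) * d2

def rhs(x):
    p, dp, d2p = phi(x), -2 * x * phi(x), (4 * x * x - 2) * phi(x)
    dB, d2B = math.cos(x), -math.sin(x)
    return d2p - 2j * dB * dp - 1j * d2B * p - dB * dB * p

x0 = 0.3
print(abs(lhs(x0) - rhs(x0)))   # small: the identity holds up to O(h^2) error
```

The same algebra in $n$ dimensions produces the magnetic Schr\"odinger operator discussed in section \ref{secmag}.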
There is a vast literature on the subject, see \cite{RS-80-2, debouard_mag,kato0, katolinear,nakamura, michel,yajima,yajima2,zagrebnov} for a non-exhaustive list. One of the most classical assumptions on $A$ for the existence of an evolution operator generated by the Hamiltonian $\Delta_{B_t^H}$ is that $A$ is a ${\mathcal C}^1$ function in time with values in $H^1(\mathbb R^n)$. This is of course not verified for the fractional Brownian motion. The price to pay for that is to require additional spatial regularity, and one possibility (likely not optimal) is to suppose that $B^H$ has values in $H^4(\mathbb R^n)$. Assuming such a strong regularity is naturally a drawback in this approach. Regarding the treatment of the non-linearity, we suppose that it is invariant by a change of phase, that is $e^{iB^H_t} g(\Psi)=g(\varphi)$, which is verified by power non-linearities of the form $g(\Psi)=|\Psi|^\sigma \Psi$ or by $g(\Psi)=V[\Psi] \Psi$ where $V$ is the Poisson potential. Contrary to the case of non-linear It\^o-Schr\"odinger equations where various tools such as Strichartz estimates or Morawetz estimates have been successfully used to investigate focusing/defocusing phenomena in random and deterministic settings \cite{cazenave,debu2,debu3,debu4}, there are very few available techniques to study \fref{SEM} with a vector potential $A$ not smooth in time and augmented with the term $g$. There are Strichartz estimates in the context of magnetic Schr\"odinger equations, but some require $A$ to be ${\mathcal C}^1$ in time \cite{yajima}, and some others avoid such a hypothesis but assume instead that $A$ is small in some sense \cite{stefanov_mag}, which has no reason to hold here. As a result, we are led to make rather crude assumptions on $g$ in order to obtain a local existence result. Moreover, the analysis of non-linear Schr\"odinger equations generally relies in a crucial manner on energy methods.
In our problem of interest, we are only able to obtain energy conservation for smooth solutions, which turns out to be of no use when trying to obtain a global-in-time result and limits us to local results, unless the non-linearity is globally Lipschitz in the appropriate topology. This is due to the fractional noise that does not allow us to obtain $H^1$ estimates for $\Psi$ via the energy relation, as we explain further in remark \ref{ener}. The main result of the paper is therefore a local existence result of pathwise solutions to \fref{SSE} with a smooth $Q-$fractional noise $B^H_t$ and appropriate assumptions on the non-linearity $g$. The article is structured as follows: in section \ref{prelim}, we recall basic results on fractional stochastic integration, and present our main result in section \ref{main}; section \ref{secmag} is devoted to the magnetic Schr\"odinger equation \fref{SEM}, while section \ref{back} concerns the proof of our main theorem. \section{Preliminaries} \label{prelim} \noindent \textbf{Notation.} We denote by $H^k(\mathbb R^n)$ and $W^{k,q}(\mathbb R^n)$, $1 \leq n \leq 3$, the standard Sobolev spaces with the convention that $H^0(\mathbb R^n):=L^2(\mathbb R^n)$. For a Banach space $V$, $T>0$, and $0<\alpha<1$, $W_{\alpha,1}(0,T,V)$ denotes the space of measurable functions $f: [0,T] \to V$ equipped with the norm $$ \|f \|_{\alpha,1,V}=\int_0^T\left(\frac{\|f(s)\|_V}{s^\alpha}+\int_0^s\frac{\|f(s) -f(\tau)\|_V}{(s-\tau)^{\alpha+1}} d\tau \right) ds. $$ The space ${\mathcal C}^{0,\alpha}(0,T,V)$ denotes the classical H\"older space of functions with values in $V$. When $V=\mathbb C$ or $\mathbb R$, we will simply use the notations $W_{\alpha,1}(0,T)$, ${\mathcal C}^{0,\alpha}(0,T)$ and $\|\cdot \|_{\alpha,1}$. Notice that for any $\varepsilon>0$, ${\mathcal C}^{0,\alpha+\varepsilon}(0,T,V) \subset W_{\alpha,1}(0,T,V)$.
For two Banach spaces $U$ and $V$, ${\mathcal L}(U,V)$ denotes the space of bounded operators from $U$ to $V$, with the convention ${\mathcal L}(U)={\mathcal L}(U,U)$. The $L^2$ inner product is denoted by $(f, g)=\int_{\mathbb R^n} \overline{f}g dx$ where $\overline{f}$ is the complex conjugate of $f$.\\ \noindent \textbf{Fractional Brownian motion.} For some positive time $T$, we denote by $\beta^H=\{ \beta^H(t), \; t \in [0,T]\}$ a standard fractional Brownian motion (fBm) over a probability space $(\Omega, {\mathcal F},\P)$ with Hurst index $H \in (\frac{1}{2},1)$. We will denote by $L^2(\Omega)$ the space of square integrable random variables for the measure $\P$ and will often omit the dependence of $\beta^H$ on $\omega \in \Omega$ for simplicity. The process $\beta^H$ is a centered Gaussian process with covariance $$ \mathbb E\{ \beta^H_t\beta^H_s \}=\frac{1}{2} (t^{2H}+s^{2H}-|t-s|^{2H}). $$ Since $\mathbb E\{ (\beta^H_t-\beta^H_s)^2\}=|t-s|^{2H}$, $\beta^H$ admits a H\"older continuous version with index strictly less than $H$. In order to define the infinite dimensional noise $B^H(t,x)$, consider a sequence of independent fBms $(\beta_n^H)_{n \in \mathbb N}$. Let $Q$ be a positive trace class operator on $L^2(\mathbb R^n)$ and denote by $(\mu_n, e_n)_{n \in \mathbb N}$ its spectral elements. For $V=H^{q+4}(\mathbb R^n)$, $q$ non-negative integer, and $\lambda_n=\sqrt{\mu_n}$, we assume that \begin{equation} \label{assumQ} \sum_{p \in \mathbb N} \lambda_p \|e_p\|_V < \infty. \end{equation} The process $B^H(t,x)$ is then formally defined by $$ B^H(t,x):=\sqrt{Q} \sum_{p \in \mathbb N} e_p(x) \beta^H_p(t)=\sum_{p \in \mathbb N} \lambda_p e_p(x) \beta^H_p(t). $$ The sum is normally convergent in ${\mathcal C}^{0,\gamma}(0,T,V)$, $\P$ almost surely for $0\leq \gamma<H$. 
Indeed, in the same fashion as \cite{mas-nualart}, let $$ K(\omega)=\sum_{p \in \mathbb N} \lambda_p \|e_p\|_V \|\beta^H_p(\cdot, \omega)\|_{C^{0,\gamma}(0,T)} $$ so that by monotone convergence $$ \mathbb E K=\sum_{p \in \mathbb N} \lambda_p \|e_p\|_V \mathbb E \|\beta^H_p\|_{C^{0,\gamma}(0,T)}. $$ According to \cite{nualart-rasc}, Lemma 7.4, for every $T>0$ and $\varepsilon>0$, there exists a positive random variable $\eta_{\varepsilon,T,p}$, with $\mathbb E\{|\eta_{\varepsilon,T,p}|^m \}$ finite for $1 \leq m < \infty$ and independent of $p$ since the $\beta_p^H$ are identically distributed, such that $|\beta^H_p(t)-\beta^H_p(s)| \leq \eta_{\varepsilon,T,p} |t-s|^{H-\varepsilon}$ almost surely. Hence, thanks to \fref{assumQ} and picking $\gamma=H-\varepsilon$, we have $\mathbb E K<\infty$, \begin{equation} \label{defK} K(\omega) < \infty, \quad \P \quad \textrm{almost surely}, \end{equation} and $B^H$ defines almost surely an element of ${\mathcal C}^{0,\gamma}(0,T,V)$. By contrast, a cylindrical fractional Brownian motion is defined for a positive self-adjoint $Q$, which does not provide us with almost sure bounds on $B^H$ in ${\mathcal C}^{0,\gamma}(0,T,V)$. Suppose indeed that $\sqrt{Q}$ is a convolution operator of the form $\sqrt{Q} u= g*u$ for some smooth real-valued kernel $g$ and that $(e_p)_{p \in \mathbb N}$ is a real-valued basis of $L^2(\mathbb R^n)$. Then, the resulting correlation function is stationary (this follows from the convolution and is motivated by \fref{corrB}) and $$ \mathbb E \{(B^H(t,x)-B^H(s,x))^2\}= |t-s|^{2H}\sum_{p \in \mathbb N} (g*e_p(x))^2=|t-s|^{2H} \|g\|^2_{L^2} $$ so that $B^H$ belongs to ${\mathcal C}^{0,\gamma}(0,T,L^\infty(\mathbb R^n, L^2(\Omega)))$ for $0\leq \gamma \leq H$. As explained in the introduction, we are not able to handle such a noise since integration in the probability space is required beforehand in order to get some estimates. 
This is not an issue in the context of standard Brownian motions or additive fractional noise, but leads to technical difficulties here.\\ \noindent \textbf{Fractional stochastic integration.} We follow the approach of \cite{mas-nualart,nualart-rasc} based on the work of Z\"ahle \cite{zahle} and introduce the so-called Weyl derivatives defined by, for any $\alpha \in (0,1)$ and $t \in (0,T)$: \begin{eqnarray*} D_{0+}^\alpha f(t)&=&\frac{1}{\Gamma(1-\alpha)} \left(\frac{f(t)}{t^\alpha}+\alpha \int_0^t \frac{f(t) -f(s)}{(t-s)^{\alpha+1}} ds \right)\\ D_{T-}^\alpha f(t)&=&\frac{(-1)^\alpha}{\Gamma(1-\alpha)} \left(\frac{f(t)}{(T-t)^\alpha}+\alpha \int_t^T \frac{f(t) -f(s)}{(s-t)^{\alpha+1}} ds \right), \end{eqnarray*} whenever these quantities are finite. Above, $\Gamma$ stands for Euler's Gamma function. Following \cite{zahle}, the generalized Stieltjes integral of a function $f \in {\mathcal C}^{0,\lambda}(0,T)$ against a function $g \in {\mathcal C}^{0,\gamma}(0,T)$ with $\lambda+\gamma>1$, $\lambda>\alpha$ and $\gamma>1-\alpha$ is defined by \begin{equation} \label{stiel} \int_0^T f dg:=(-1)^\alpha \int_0^T D_{0+}^\alpha f(s) D_{T-}^{1-\alpha} g_{T-}(s) ds, \end{equation} with $g_{T-}(s)=g(s)-g(T-)$. The definition does not depend on $\alpha$ and $$ \int_0^t f dg:=\int_0^T f \un_{(0,t)} dg. $$ The integral can be extended to different classes of functions since, see \cite{nualart-rasc}, \begin{equation} \label{estimint} \left|\int_0^T f dg \right| \leq \|f \|_{\alpha,1} \Lambda_\alpha(g), \end{equation} where $$ \Lambda_\alpha(g):=\frac{1}{\Gamma(1-\alpha) \Gamma(\alpha)} \sup_{0<s<t<T} \left(\frac{|g(t)-g(s)|}{(t-s)^{1-\alpha}}+\alpha \int_s^t \frac{|g(\tau) -g(s)|}{(\tau-s)^{2-\alpha}} d\tau\right), $$ so that the integral is well-defined if $f \in W_{\alpha,1}(0,T)$ and $\Lambda_\alpha(g)<\infty$. 
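In particular, $\Lambda_\alpha(g)$ is finite for any $g \in {\mathcal C}^{0,\gamma}(0,T)$ with $\gamma>1-\alpha$; the following elementary bound, in which $[g]$ denotes the ${\mathcal C}^{0,\gamma}$ H\"older seminorm (a notation used only here), makes this quantitative and is what is applied below to $g=\beta^H_p$ with $\gamma=H-\varepsilon$ and $\alpha>1-H$:

```latex
% With [g] the C^{0,gamma} Hoelder seminorm and gamma+alpha-1>0:
\begin{align*}
\frac{|g(t)-g(s)|}{(t-s)^{1-\alpha}}
  &\leq [g]\,(t-s)^{\gamma+\alpha-1}\leq [g]\, T^{\gamma+\alpha-1},\\
\alpha\int_s^t \frac{|g(\tau)-g(s)|}{(\tau-s)^{2-\alpha}}\, d\tau
  &\leq \alpha\,[g]\int_s^t (\tau-s)^{\gamma+\alpha-2}\, d\tau
  = \frac{\alpha\,[g]\,(t-s)^{\gamma+\alpha-1}}{\gamma+\alpha-1},
\end{align*}
% so that, taking the supremum over 0<s<t<T:
$$
\Lambda_\alpha(g)\leq \frac{T^{\gamma+\alpha-1}}{\Gamma(1-\alpha)\Gamma(\alpha)}
\left(1+\frac{\alpha}{\gamma+\alpha-1}\right)[g].
$$
```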
Besides, the fractional integral satisfies the following change of variables formula, see \cite{zahle}: let $F \in {\mathcal C}^1(\mathbb R \times [0,T])$, $g \in {\mathcal C}^{0,\lambda}(0,T)$ and $\partial_1 F(g(\cdot),\cdot) \in {\mathcal C}^{0,\gamma}(0,T)$ with $\lambda+\gamma>1$, then \begin{equation} \label{chain} F(g(t),t)-F(g(s),s)=\int_s^t \partial_2 F(g(\tau),\tau) d\tau+ \int_s^t \partial_1 F(g(\tau),\tau) d g(\tau), \end{equation} where $\partial_j F$, $j=1,2$, denotes the partial derivative of $F$ with respect to the $j$-th coordinate. For some Banach space $U$ and an operator-valued random function $F \in W_{\alpha,1}(0,T,{\mathcal L}(V,U))$ almost surely for some $\alpha \in (1-H,\frac{1}{2})$, the stochastic integral of $F$ with respect to $B^H$ is then formally defined by \begin{equation} \label{defintsto} \int_0^t F_s d B_s^H:=\sum_{p \in \mathbb N} \lambda_p \int_0^t F_s(e_p) d\beta_p^H(s). \end{equation} The integral defines almost surely an element of $U$ for all $t\in[0,T]$ since, using \fref{estimint} (which extends to $U$-valued integrands) for the first inequality and the bound $\|F_s(e_p)\|_U \leq \|F_s\|_{{\mathcal L}(V,U)} \|e_p\|_V$ for the second, \begin{eqnarray*} \sum_{p \in \mathbb N} \lambda_p \left\| \int_0^t F_s(e_p) d\beta_p^H(s) \right\|_U &\leq& \sum_{p \in \mathbb N} \lambda_p \Lambda_\alpha(\beta^H_p) \left\| F_\cdot(e_p) \right\|_{\alpha,1,U} \\ &\leq& C \|F\|_{\alpha,1,{\mathcal L}(V,U)} \sum_{p \in \mathbb N} \lambda_p \|e_p\|_V \Lambda_\alpha(\beta_p^H) \end{eqnarray*} and as shown in \cite{mas-nualart}, \begin{equation} \label{finitelamb} \sum_{p \in \mathbb N} \lambda_p \|e_p\|_V \Lambda_\alpha(\beta_p^H(\cdot,\omega)) <\infty \qquad \P \quad \textrm{almost surely}. \end{equation} Hence \fref{defintsto} is well-defined and the convergence of the sum has to be understood as the $\P$ almost sure convergence in $U$. We will use the following two results: the first Lemma is a generalization of the change of variables formula \fref{chain} to the infinite dimensional setting, and the second is a version of the Fubini theorem adapted to the stochastic integral. 
Their proofs are given in the appendix. Below, $V=H^{q+4}(\mathbb R^n)$. \begin{lemma} \label{chain2} Let $F: V \times [0,T] \to \mathbb C$ be a continuously differentiable function. Let $\partial_1 F$ be the differential of $F$ with respect to the first argument and $\partial_2 F$ be its partial derivative with respect to the second. For every $v \in V$ and $B \in {\mathcal C}^{0,\gamma}(0,T,V)$ for any $0\leq \gamma<H$, let $\phi(t):=\partial_1 F(B_t,t)(v)$. Assume that $\phi \in {\mathcal C}^{0,\lambda}(0,T)$ with $\lambda+\gamma>1$, and that there exists a constant $C_M>0$ such that, for all $B$ with $\|B\|_{{\mathcal C}^{0,\gamma}(0,T,V)} \leq M$: \begin{equation} \label{hypphi} \|\phi\|_{{\mathcal C}^{0,\lambda}(0,T)} \leq C_M \|v\|_V. \end{equation} Then, we have the change of variables formula, $\forall (s,t) \in [0,T]^2$, $\P$ almost surely: $$ F(B_t^H,t)-F(B_s^H,s)=\int_s^t \partial_2 F(B_{\tau}^H,\tau) d\tau+ \sum_{p \in \mathbb N} \lambda_p \int_s^t \partial_1 F(B_{\tau}^H,\tau)(e_p) d \beta^H_p(\tau). $$ \end{lemma} \begin{lemma} \label{fubini} Let $F \in W_{\alpha,1}(0,T,{\mathcal L}(V,L^1(\mathbb R^n)))$ with $1-H<\alpha<\frac{1}{2}$. Then we have: $$ \sum_{p \in \mathbb N} \lambda_p \int_s^t \left(\int_{\mathbb R^n} F_{\tau,x}(e_p) dx \right) d \beta^H_p(\tau) =\int_{\mathbb R^n} \left(\int_s^t F_{\tau,x} dB_\tau^H \right)dx. $$ \end{lemma} \section{Main result} \label{main} We present in this section the main result of the paper. We first make precise in which sense \fref{SSE} is understood. 
We say that $\Psi \in {\mathcal C}^0(0,T,H^q(\mathbb R^n)) \cap {\mathcal C}^{0,\gamma}(0,T,H^{q-2}(\mathbb R^n))$, for all $0\leq \gamma<H$, $q$ non-negative integer, is a solution to \fref{SSE} if it verifies for all test functions $w\in {\mathcal C}^1(0,T,H^{q+2}(\mathbb R^n))$, for all $t \in [0,T]$ and $\P$ almost surely \begin{align} \label{defsol} \nonumber &\left( \Psi(t), w(t) \right)-\left( \Psi_0, w(0) \right) =\int_0^t \left( \Psi(s), \partial_s w(s) \right) ds \\[3mm] & -i\int_0^t \left( \Psi(s),\Delta w(s)\right) ds +i \int_0^t\left(\Psi(s), w(s) d B_s^H\right)+i\int_0^t \left(g(\Psi(s)),w(s) \right) ds, \end{align} where the term involving the stochastic integral is understood as $$ \int_0^t\left(\Psi(s), w(s) d B_s^H \right):=\sum_{p \in \mathbb N} \lambda_p \int_0^t\left(\Psi(s)e_p, w(s)\right) d \beta^H_p(s). $$ The latter is well-defined since the mapping $F_s: e_p \mapsto (\Psi e_p, w)$ belongs to ${\mathcal C}^{0,\gamma}(0,T,{\mathcal L}(V,\mathbb R))$ thanks to standard Sobolev embeddings for $n \leq 3$. We assume the following hypotheses on the non-linear term $g$:\\ \textbf{H}: We have $g( e^{i \theta(t,x)} \Psi)= e^{i \theta(t,x)} g(\Psi)$ for all real functions $\theta$, and for any $\Psi_1, \Psi_2$ in $H^q(\mathbb R^n)$ with $\|\Psi_i\|_{H^q}\leq M$, $i=1,2$, there exist $p \in \{0,\cdots,q\}$ and positive constants $C_M$ and $C'_M$ such that \begin{eqnarray*} \| g(\Psi_1) \|_{H^q} &\leq& C_M \|\Psi_1\|_{H^q}\\ \| g(\Psi_1)-g(\Psi_2) \|_{H^p} &\leq& C'_M \|\Psi_1-\Psi_2\|_{H^p}. \end{eqnarray*} The main result of this paper is the following: \begin{theorem} \label{th1} Assume that \textbf{H} is satisfied. Suppose moreover that \fref{assumQ} is verified for $V=H^{q+4}(\mathbb R^n)$, $q$ non-negative integer. 
Then, for every $\Psi_0 \in H^q(\mathbb R^n)$, there exists a maximal existence time $T_M>0$ and a unique function $\Psi \in {\mathcal C}^0(0,T_M,H^{q}(\mathbb R^n)) \cap{\mathcal C}^{0,\gamma}(0,T_M,H^{q-2}(\mathbb R^n))$, $0\leq \gamma <H$, verifying \fref{defsol} for all $t\in[0,T_M]$ $\P$ almost surely. Moreover, $\Psi$ admits the following representation formula: \begin{equation} \label{repre_th} \Psi(t)= e^{-i B^H_t} U(t,0) \Psi_0+ e^{-i B^H_t} \int_0^t U(t,s) e^{i B^H_s} g(\Psi(s))ds, \end{equation} where $U=\{U(t,s)\}$ is the evolution operator generated by the operator $$ i\Delta_{B_t^H} =i e^{i B_t^H}\circ \Delta \circ e^{-i B_t^H}. $$ If in addition $\Im g(\Psi) \overline{\Psi}=0$, then for all $t\in[0,T_M]$ the charge conservation holds: $$ \| \Psi(t)\|_{L^2}=\| \Psi(0)\|_{L^2}. $$ If $g$ is globally Lipschitz in $H^q(\mathbb R^n)$, then the solution exists for all time $T<\infty$. \end{theorem} When $n=3$, a classical example of a non-linearity satisfying \textbf{H} for $q=p=1$ is $g(\Psi)=V[\Psi] \Psi$, where $V[\Psi]$ is the Poisson potential defined by $$ V[\Psi](x)=\int_{\mathbb R^3} \frac{|\Psi(y)|^2}{|x-y|}dy. $$ Indeed, $g$ is locally Lipschitz in $H^1(\mathbb R^3)$: let $\Psi_1, \Psi_2 \in H^1(\mathbb R^3)$; thanks to the Hardy-Littlewood-Sobolev inequality \cite{RS-80-2}, Chapter IX.4, as well as standard Sobolev embeddings, we have $$ \|\nabla V[\Psi_1] -\nabla V[\Psi_2] \|_{L^3} \leq C \| |\Psi_1|^2-|\Psi_2|^2 \|_{L^{\frac{3}{2}}} \leq C \| \Psi_1 -\Psi_2 \|_{L^2}\| \Psi_1 +\Psi_2 \|_{H^1} $$ and direct computations yield $$ \| V [\Psi_1] \|_{L^\infty} \leq C \|\Psi_1\|^2_{L^2}+C\||\Psi_1|^2\|_{L^2}. 
$$ Hence, \begin{align*} &\| g(\Psi_1)-g(\Psi_2)\|_{H^1} \\ &\qquad \leq C\|V[\Psi_1] \|_{L^\infty} \|\Psi_1 -\Psi_2\|_{L^2}+ C\|\Psi_1-\Psi_2 \|_{H^1}\|\Psi_1+\Psi_2 \|_{H^1}\|\Psi_2\|_{L^2}\\ &\qquad \qquad +C\|V[\Psi_1] \|_{L^\infty} \|\nabla \Psi_1 - \nabla \Psi_2\|_{L^2}+C\| \Psi_1 -\Psi_2 \|_{L^2}\| \Psi_1 +\Psi_2 \|_{H^1} \|\Psi_2 \|_{L^6}\\ &\qquad \leq C( \| \Psi_1\|^2_{H^1}+ \| \Psi_2\|^2_{H^1}) \|\Psi_1-\Psi_2 \|_{H^1}. \end{align*} Another example is given by power non-linearities of the form $g(\Psi)=\mu |\Psi|^{2\sigma} \Psi$ for some $\mu \in \mathbb R$ and $\sigma>0$. An $L^\infty$ bound is needed on $\Psi$ for \textbf{H} to be verified. When $n>1$, we then set $q=2$ and obtain, for all $\sigma\geq \frac{1}{2}$: $$ \| g(\Psi) \|_{H^2} \leq C \| \Psi \|^{2\sigma+1}_{H^2}, \qquad \| g(\Psi_1)-g(\Psi_2) \|_{L^2} \leq C \| \Psi_2 +\Psi_1\|^{2\sigma}_{H^2}\| \Psi_1-\Psi_2 \|_{L^2}, $$ while it can be easily shown that $\textbf{H}$ is verified for $n=1$ and $q=1$ for all $\sigma \geq 0$. \begin{remark} \label{ener} In order to both lower the spatial regularity assumptions on $B^H$, $\Psi_0$, $g$ and to obtain global-in-time results, it is natural to consider the energy conservation identity (derived formally by multiplying \fref{mag} by $\overline{\partial_t \varphi}$ and integrating; it can be justified for classical solutions when $q \geq 2$ using the regularity of $\varphi$ given by Theorem \ref{th_mag} and Lemma \ref{chain2}) that reads, for $g=0$ for simplicity: $$ \frac{1}{2} \| \nabla \Psi(t) \|^2_{L^2}=\frac{1}{2} \| \nabla \Psi_0 \|^2_{L^2}-\Im \int_0^t \int_{\mathbb R^n} \overline{\Psi(s)} \nabla \Psi(s) \cdot \nabla d B^H_s dx. $$ Unfortunately, it is not clear to us how this identity can be used in order to obtain estimates on $\| \nabla \Psi\|_{W_{\alpha,1}(0,T,L^2)}$ for $1-H<\alpha<\frac{1}{2}$ that would depend only on $\| \nabla \Psi_0 \|_{L^2}$ and $\| B^H\|_{{\mathcal C}^{0,\gamma}(0,T,W^{1,\infty})}$, $\frac{1}{2}<\gamma<H$. 
Indeed, following the lines of the stochastic ODE case of \cite{nualart-rasc} in order to treat the stochastic integral and use the Gronwall Lemma, what can be deduced from the above relation is an estimate of the form $$ \left\| \| \nabla \Psi(t,\cdot) \|^2_{L^2} \right\|_{W_{\alpha,1}(0,T)} \leq C+C \int_0^T f(s) \| \nabla \Psi(s,\cdot) \|^2_{W_{\alpha,1}(0,T,L^2)}ds $$ for some positive integrable function $f$ and where the constant $C$ depends on $\| \nabla \Psi_0 \|_{L^2}$ and $\| B^H\|_{{\mathcal C}^{0,\gamma}(0,T,W^{1,\infty})}$. This does not yield the desired bound since we cannot control the term $\| \nabla \Psi(s,\cdot) \|^2_{W_{\alpha,1}(0,T,L^2)}$ by $\left\| \| \nabla \Psi(s,\cdot) \|^2_{L^2} \right\|_{W_{\alpha,1}(0,T)}$. Hence, as opposed to the standard Brownian case, energy methods do not provide us here with an $H^1$ global-in-time estimate. \end{remark} \begin{remark} \label{rem2} When $q \geq 2$, then $\Psi$ is a classical solution to \fref{SSE} in the sense that it satisfies for all $t \in [0,T_M]$, $\P$ a.s., $x$ a.e.: $$ \Psi(t)=\Psi(0)+i \int_0^t \Delta \Psi(s) ds-i \int_0^t \Psi(s) dB_s^H-i\int_0^t g(\Psi(s)) ds. $$ A proof of this result is given in the appendix. \end{remark} The rest of the paper is devoted to the proof of Theorem \ref{th1}. The starting point is to define $\varphi(t,x)=e^{i B^H(t,x)} \Psi(t,x)$, to use the invariance of $g$ with respect to a change of phase and to formally apply Lemma \ref{chain2} to arrive at \begin{equation} \label{MSE2} i\partial_t \varphi = -\Delta_{B_t^H} \varphi+g(\varphi). \end{equation} Note that $\Delta_{B_t^H}$ can formally be recast as $$ \Delta_{B_t^H} =\Delta -2i \nabla B_t^H \cdot \nabla -|\nabla B_t^H|^2 -i \Delta B_t^H . $$ In section \ref{secmag}, we construct the evolution operator $U=\{U(t,s)\}$ generated by $i\Delta_{B_t^H}$ and obtain the existence of a unique solution to the latter magnetic Schr\"odinger equation. 
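For later reference, the expansion of $\Delta_{B_t^H}$ given above follows from a direct computation on smooth functions; writing $B$ for $B_t^H$:

```latex
% Direct verification of the expansion of e^{iB} Delta e^{-iB}, for B real and u, B smooth:
\begin{align*}
\nabla\big(e^{-iB}u\big) &= e^{-iB}\big(\nabla u - i\, u\, \nabla B\big),\\
\Delta\big(e^{-iB}u\big) &= e^{-iB}\big(\Delta u - 2i\, \nabla B\cdot\nabla u
   - |\nabla B|^2 u - i\, (\Delta B)\, u\big),
\end{align*}
% so that, multiplying by e^{iB}:
$$
e^{iB}\, \Delta\big(e^{-iB}u\big)
 = \Delta u - 2i\, \nabla B\cdot\nabla u - |\nabla B|^2 u - i\, (\Delta B)\, u.
$$
```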
In section \ref{back}, we use the regularity properties of the function $\varphi$ together with Lemma \ref{chain2} to prove that $\Psi=e^{-i B^H_t}\varphi$ is the unique solution to \fref{SSE}. The existence follows from showing that $e^{-i B^H_t}\varphi$ is a solution to \fref{defsol}. The uniqueness stems from a reverse argument: given a solution $\Psi$ to \fref{defsol} with the corresponding regularity, we show that $\Psi e^{i B^H_t}$ is a solution to \fref{MSE2}. This requires some regularization, since the function $ e^{-i B^H_t} z$ for smooth $z$ cannot be used as a test function in \fref{defsol}, as well as the interpretation of a classical integral involving a full derivative as a fractional integral. \section{Existence theory of the magnetic Schr\"odinger equation} \label{secmag} The first part of this section consists in constructing the evolution operator $U$. We follow the classical methods of Kato \cite{katolinear} and \cite{pazy}. The second part is devoted to the existence theory for the linear magnetic Schr\"odinger equation, which is then used for the non-linear case. \subsection{Construction of the evolution operator} We follow here the construction of \cite{pazy}, Chapter 5. Let $X$ and $Y$ be Banach spaces with norms $\|\cdot \|$ and $\| \cdot \|_Y$, where $Y$ is densely and continuously embedded in $X$. For $t \in [0,T]$, let $A(t)$ be the infinitesimal generator of a $C_0$ semigroup on $X$. Consider the following hypotheses: \begin{itemize} \item[(H1)] There are constants $\omega_0$ and $M\geq 1$ such that $]\omega_0,\infty[ \subset \rho(A(t))$ for $t \in [0,T]$, where $\rho(A(t))$ denotes the resolvent set of $A(t)$, and $$ \left\|\prod_{j=1}^k e^{- s_j A(t_j )}\right \| \leq M e^{\omega_0 \sum_{j=1}^k s_j }, \qquad s_j \geq 0, \qquad 0 \leq t_1 \leq t_2 \leq \dots \leq T. 
$$ \item[(H2)] There is a family $\{ Q(t)\}_{t\in [0,T]}$ of isomorphisms of $Y$ onto $X$ such that for every $y\in Y$, $Q(t)y$ is continuously differentiable in $X$ on $[0,T]$ and $$ Q(t) A(t) Q(t)^{-1}=A(t)+C(t) $$ where $C(t)$, $0 \leq t \leq T$, is a strongly continuous family of bounded operators on $X$. \item[(H3)] For $t\in [0,T]$, $Y \subset D(A(t))$, $A(t)$ is a bounded operator from $Y$ into $X$ and $t \to A(t)$ is continuous in the ${\mathcal L}(Y,X)$ norm. \end{itemize} We then have the following result, see \cite{katolinear}, or \cite{pazy}, Chapter 5, Theorems 2.2 and 4.6: \begin{theorem}\label{kato} Assume that (H1)-(H2)-(H3) are verified. Then, there exists a unique evolution operator $U=\{U(t,s)\}$, defined on the triangle $\Delta_T:T\geq t\geq s\geq 0$, such that \begin{itemize} \item[(a)] $U$ is strongly continuous on $\Delta_T$ to ${\mathcal L}(X)$, with $U(s,s)=I$, \item[(b)] $U(t,r)U(r,s)=U(t,s)$, \item[(c)] $U(t,s) Y \subset Y$, and $U$ is strongly continuous on $\Delta_T$ to ${\mathcal L}(Y)$, \item[(d)] $dU(t,s)/dt=-A(t)U(t,s)$, $dU(t,s)/ds=U(t,s)A(s)$, which exist in the strong sense in ${\mathcal L}(Y,X)$, and are strongly continuous on $\Delta_T$ to ${\mathcal L}(Y,X)$. \end{itemize} \end{theorem} In the next result, we show that for suitable functions $B$, the operator $i \Delta_B=i e^{iB} \circ \Delta \circ e^{-iB}$ generates an evolution operator $U$. \begin{proposition} \label{geneevol} Let $X=L^2(\mathbb R^n)$ and $Y=H^{2k}(\mathbb R^n)$, $k \geq 1$, and let $B \in {\mathcal C}^0(0,T,H^{2k+2}(\mathbb R^n))$. Then, the operator $i \Delta_B$ generates an evolution operator $U$ satisfying Theorem \ref{kato} and $U$ is an isometry on $L^2(\mathbb R^n)$. \end{proposition} \begin{proof} We verify hypotheses (H1)-(H2)-(H3) for $A(t)=i \Delta_B$. 
Let $\Delta_{B_t} := \Delta +L(t)$ with \begin{equation} \label{defL} L(t)=-2i \nabla B_t \cdot \nabla -|\nabla B_t|^2 -i \Delta B_t .\end{equation} First, for $t$ fixed in $[0,T]$, the Kato-Rellich theorem \cite{RS-80-2} yields that $\Delta_{B_t}$ is self-adjoint on $D(\Delta)=H^2(\mathbb R^n)$. Indeed, using the regularity $B \in {\mathcal C}^0([0,T],H^4(\mathbb R^n))$, it is straightforward to verify that $L(t)$ is symmetric and $\Delta$-bounded with relative bound strictly less than one. We also obtain that $D(\Delta_{B_t})=H^2(\mathbb R^n)$, $\forall t \in [0,T]$. Stone's theorem \cite{RS-80-I} then implies that for $t$ fixed, $i \Delta_{B_t}$ is the generator of a $C_0$ unitary group on $X$. Moreover, $-\Delta_{B_t}$ is positive, so that the spectrum of $i \Delta_{B_t}$ lies in $i[0,\infty)$. We therefore conclude that the family $\{i \Delta_{B_t} \}_{t\in [0,T]}$ satisfies hypothesis (H1). Regarding (H2), let $Q=\Delta_{(k)}+I$, where $I$ is the identity operator and $$\Delta_{(k)}=(-1)^k \sum_{j=1}^n \partial^{2k}_{x_j}, \qquad k \geq 1.$$ The operator $Q$ is a positive definite self-adjoint operator on $L^2(\mathbb R^n)$ with domain $H^{2k}(\mathbb R^n)$, and an isomorphism from $H^{2k}(\mathbb R^n)$ to $L^2(\mathbb R^n)$. It is also obviously continuously differentiable in time since it does not depend on $t$. Moreover, $$ Q \Delta_{B_t} Q^{-1}=\Delta_{B_t}+[Q,\Delta_{B_t}]Q^{-1}, $$ where $[A,B]$ denotes the commutator between two operators $A$ and $B$. We have the following Lemma: \begin{lemma} For $k \geq 1$, let $B \in {\mathcal C}^0([0,T],H^{2k+2}(\mathbb R^n))$. Then $[Q,\Delta_{B_t}]Q^{-1} \in {\mathcal L}(L^2(\mathbb R^n))$. 
\end{lemma} \begin{proof} We have $[Q,\Delta_{B_t}]Q^{-1}=[\Delta_{(k)},L(t)]Q^{-1}$, and using the product rule \begin{align*} &[\Delta_{(k)},L(t)]=\\ &(-1)^k \sum_{j=1}^n \sum_{p=0}^{2k-1} \left( \begin{array}{c} 2k \\p \end{array} \right) \left(-2i \{\nabla \partial^{2k-p}_{x_j} B \}\cdot \nabla \partial^p_{x_j} - \{\partial^{2k-p}_{x_j}|\nabla B|^2 \}\partial^p_{x_j}-i\{ \Delta \partial^{2k-p}_{x_j} B \}\partial^p_{x_j} \right) \end{align*} where $\tiny{\left( \begin{array}{c} 2k \\p \end{array} \right)}$ is the binomial coefficient and, as usual, there are no terms corresponding to $p=2k$ because of the commutator. Using standard Sobolev embeddings for $H^{2k+2}(\mathbb R^n)$ when $n \leq 3$, we have for $j=1,\dots,n$ that $\partial^{2k+1}_{x_j} B \in L^\infty_t L^r_x$ for $r=6$, $r<\infty$ and $r=\infty$ when $n=3,2,1$, respectively, and $\partial^{m}_{x_j} B \in L^\infty_t L^\infty_x$ for $m \leq 2k$. Together with the fact that $Q^{-1}$ is an isomorphism from $L^2(\mathbb R^n)$ to $H^{2k}(\mathbb R^n) \subset W^{2k-2,\infty}(\mathbb R^n)$, this is enough to ensure that $[Q,\Delta_{B_t}]Q^{-1} \in {\mathcal L}(L^2(\mathbb R^n))$. \end{proof} \bigskip Hypothesis (H2) is then verified with $C(t)=[Q,\Delta_{B_t}]Q^{-1}$, the strong continuity of $C$ following from the continuity of $B$. Finally, (H3) follows easily from $H^{2k}(\mathbb R^n) \subset D(\Delta_{B_t})=H^2(\mathbb R^n)$, $k\geq 1$, and from $B \in {\mathcal C}^0(0,T,H^{2k+2}(\mathbb R^n))$. We can thus apply Theorem \ref{kato} and obtain the existence of an evolution operator $U$ generated by $i \Delta_{B_t}$. The fact that $U$ is an isometry on $L^2(\mathbb R^n)$ is a consequence of $\Re i (\Delta_{B_t} \varphi,\varphi)=0$ for every $\varphi \in H^2(\mathbb R^n)$. \end{proof} \begin{remark} When $B \in {\mathcal C}^1(0,T,H^2(\mathbb R^n))$, a classical choice \cite{pazy} for $Q$ is $Q(t)= \lambda I-A(t)$ for $\lambda$ in the resolvent set of $A$. 
This allows one to lower the spatial regularity of $B$, but is not verified when $B=B_t^H$. Notice that in the case when $B=B_t^H$, Proposition \ref{geneevol} can likely be improved in terms of the required spatial regularity of $B$ since we have not used the H\"older regularity in time of $B_t^H$ at all. \end{remark} \subsection{Application to the magnetic Schr\"odinger equation} We now apply the result of the preceding section to the differential equation \begin{equation} \label{eqgene} \partial_t u=i \Delta_Bu+f, \qquad 0<t\leq T, \qquad u(0)=v, \end{equation} where $\Delta_B=e^{i B} \circ \Delta \circ e^{-iB}$. As for \fref{SSE}, we say that $u \in {\mathcal C}^0(0,T,H^{q}(\mathbb R^n)) \cap{\mathcal C}^1(0,T,H^{q-2}(\mathbb R^n))$, $q$ non-negative integer, is a solution to \fref{eqgene} if it verifies for all $w\in {\mathcal C}^1(0,T,H^{q+2}(\mathbb R^n))$, for all $t \in [0,T]$: \begin{align} \label{defsol2} \nonumber &\left( u(t), w(t) \right)-\left( v, w(0) \right) =\int_0^t \left( u(s), \partial_s w(s) \right) ds \\ & \qquad \qquad -i\int_0^t \left( u(s),\Delta_B w(s)\right) ds +i\int_0^t \left(f,w(s) \right) ds. \end{align} We have the following result: \begin{proposition} \label{gene_exist} Let $B \in {\mathcal C}^0(0,T,H^{q+4}(\mathbb R^n))$, $q$ non-negative integer, and denote by $U$ the evolution operator of Proposition \ref{geneevol}. Then, for every $v \in H^{q}(\mathbb R^n)$ and $f \in {\mathcal C}^0(0,T,H^{q}(\mathbb R^n))$, the function \begin{equation} \label{rep} u(t)=U(t,0) v+\int_0^t U(t,s) f(s)ds \end{equation} belongs to ${\mathcal C}^0(0,T,H^{q}(\mathbb R^n)) \cap{\mathcal C}^1(0,T,H^{q-2}(\mathbb R^n))$ and is the unique solution to \fref{eqgene}. Moreover, $u$ satisfies the estimate, for all $t \in [0,T]$: \begin{equation} \label{estimu} \|u(t)\|_{H^q} \leq C \| v\|_{H^q}+C\int_0^t \| f(s) \|_{H^q}ds, \end{equation} where the constant $C$ depends on $\| B\|_{{\mathcal C}^0(0,T,H^{q+4}(\mathbb R^n))}$ when $q\neq 0$. 
\end{proposition} \begin{proof} Consider first the case $q=2k$ with $k\geq 1$. The result then follows from \cite{katolinear}, Theorem II, and from the equation \fref{eqgene} in order to obtain the regularity on $\partial_t u$. The cases $q=2k-1$, $k \geq 1$, and $q=0$ are treated by approximation: choose for instance sequences $B_\varepsilon \in {\mathcal C}^0(0,T,H^{q+9}(\mathbb R^n))$, $v_\varepsilon\in H^{q+5}(\mathbb R^n)$ and $f_\varepsilon \in {\mathcal C}^0(0,T,H^{q+5}(\mathbb R^n))$ such that as $\varepsilon \to 0$: \bea \label{convB1} B_\varepsilon &\to&B \qquad \textrm{ in} \quad {\mathcal C}^0(0,T,H^{q+4}(\mathbb R^n))\\ \label{convv} v_\varepsilon &\to&v \qquad \textrm{ in} \quad H^{q}(\mathbb R^n)\\ \label{convf} f_\varepsilon &\to&f \qquad \textrm{ in} \quad {\mathcal C}^0(0,T,H^{q}(\mathbb R^n)). \eea Applying the even-index result to these regularized data, the corresponding smooth solution $u_\varepsilon$ to \fref{eqgene} in the case $q=2k-1$ belongs to ${\mathcal C}^0(0,T,H^{2k+4}(\mathbb R^n))$ with $\partial_t u_\varepsilon \in {\mathcal C}^0(0,T,H^{2k+2}(\mathbb R^n))$, with the convention that $k=\frac{1}{2}$ when $q=0$. In order to pass to the limit, it is proven in \cite{katolinear}, Theorem V, that if $i \Delta_{B_\varepsilon}$ converges to $i \Delta_{B}$ in ${\mathcal L}(H^2(\mathbb R^n),L^2(\mathbb R^n))$ for a.e. $t$, and $\|\Delta_{B_\varepsilon}\|_{{\mathcal L}(H^2(\mathbb R^n),L^2(\mathbb R^n))} $ is uniformly bounded in $t$ independently of $\varepsilon$, then \begin{equation} \label{convU} U_\varepsilon(t,s) \to U(t,s) \quad \textrm{in} \quad {\mathcal L}(L^2(\mathbb R^n)) \quad \textrm{uniformly in } (t,s), \end{equation} where $U$ is the evolution operator associated to $B$. These latter conditions are directly satisfied because of \fref{convB1}. 
We then write: \begin{eqnarray*} u_\varepsilon(t)&=& U_\varepsilon(t,0)v_\varepsilon+\int_0^t U_\varepsilon(t,s)f_\varepsilon(s) ds, \qquad \forall t \in [0,T]\\ &=&U(t,0)v+\int_0^t U(t,s)f(s) ds+R^1_\varepsilon+R^2_\varepsilon=u+R^1_\varepsilon+R^2_\varepsilon\\ R^1_\varepsilon&=&U_\varepsilon(t,0)(v_\varepsilon-v)+\int_0^t U_\varepsilon(t,s) (f_\varepsilon(s)-f(s))\, ds\\ R^2_\varepsilon&=&(U_\varepsilon(t,0)-U(t,0))v+\int_0^t (U_\varepsilon(t,s)-U(t,s)) f(s)\, ds. \end{eqnarray*} Using \fref{convU} and the strong convergence of $v_\varepsilon$ and $f_\varepsilon$, we then obtain that $u_\varepsilon \to u$ in ${\mathcal C}^0(0,T,L^2(\mathbb R^n))$. Assume first that $q \neq 0$. In order to get the announced better regularity on $u$, we use the fact that $u_\varepsilon \in {\mathcal C}^0(0,T,{\mathcal C}^{2k+2}(\mathbb R^n))$ and $\partial_t u_\varepsilon \in {\mathcal C}^0(0,T,{\mathcal C}^{2k}(\mathbb R^n))$ thanks to standard Sobolev embeddings for $n \leq 3$. We can then differentiate equation \fref{eqgene}, and find using the representation formula \begin{equation} \label{eqDbeta} D^\beta u_\varepsilon(t)= U_\varepsilon(t,0) D^\beta v_\varepsilon+\int_0^t U_\varepsilon(t,s)(D^\beta f_\varepsilon(s)+[D^\beta,L_\varepsilon(s)]u_\varepsilon(s)) ds, \end{equation} where $1\leq |\beta| \leq q$ and $$D^\beta:=\frac{\partial^{\beta_1}}{\partial x_1^{\beta_1}} \times \cdots \times \frac{\partial^{\beta_n}}{\partial x_n^{\beta_n}}, \qquad \beta=(\beta_1,\cdots,\beta_n), \quad |\beta|=\beta_1 + \cdots + \beta_n,$$ and $L_\varepsilon(s)$ is defined in \fref{defL} with $B$ replaced by $B_\varepsilon$. Only the term involving the commutator requires some attention. Using \fref{convB1}, we can show that for all $s \in [0,T]$, $$ \| [D^\beta,L_\varepsilon(s)]u_\varepsilon(s)\|_{L^2} \leq C \|u_\varepsilon(s)\|_{H^{|\beta|}}, $$ where the constant $C$ is independent of $\varepsilon$. 
Together with \fref{convB1}-\fref{convv}-\fref{convf}-\fref{convU}-\fref{eqDbeta} and the Gronwall lemma, this yields a uniform bound for $u_\varepsilon$ in ${\mathcal C}^0(0,T,H^q(\mathbb R^n))$. Using this latter bound along with \fref{convB1}-\fref{convv}-\fref{convf}-\fref{convU}-\fref{eqDbeta} and equation \fref{eqgene} for the smooth solution $u_\varepsilon$ in order to estimate $\partial_t u_\varepsilon$, it is then not difficult to show that $(u_\varepsilon)_\varepsilon$ is a Cauchy sequence in ${\mathcal C}^0(0,T,H^{q}(\mathbb R^n)) \cap{\mathcal C}^1(0,T,H^{q-2}(\mathbb R^n))$, whose limit $u$ satisfies estimate \fref{estimu} and \fref{defsol2}. When $q=0$, it suffices to use equation \fref{eqgene} for the smooth solution $u_\varepsilon$ in order to show that $(\partial_t u_\varepsilon)_\varepsilon$ is Cauchy in ${\mathcal C}^0(0,T,H^{-2}(\mathbb R^n))$. This proves the existence, the representation formula \fref{rep} and estimate \fref{estimu}. Uniqueness is straightforward in the case $q \geq 1$ since solutions to \fref{eqgene} are regular enough to be used as test functions and to obtain after an integration by parts that $\Im (\nabla (e^{-i B_t} u),\nabla (e^{-i B_t} u ))=0$. When $q=0$, we use the adjoint formulation of \fref{eqgene}. The difference between two solutions to \fref{eqgene} satisfies, for a test function $w \in {\mathcal C}^0(0,T,H^2(\mathbb R^n)) \cap {\mathcal C}^1(0,T,L^2(\mathbb R^n))$, \begin{align*} &\left( u(t), w(t)\right)=\int_0^t \left ( u(s), \partial_s w(s)+(i\Delta_{B_s})^* w(s) \right) ds, \end{align*} where $(i\Delta_{B_s})^*=-i\Delta_{B_s}$ is the adjoint of $i \Delta_{B_s}$. Let $t \in [0,T]$, pick some $w_0 \in H^2(\mathbb R^n)$ and let $w(s)=z(t-s)$ where $z(s)$ is the solution to $\partial_s z(s)=(i \Delta_{B_{t-s}})^* z(s)$, $z(0)=w_0$, $0<s<t$. 
Adapting Proposition \ref{geneevol} to the operator $(i\Delta_{B_s})^*$, Theorem \ref{kato} yields that $z \in {\mathcal C}^0(0,T,H^2(\mathbb R^n)) \cap {\mathcal C}^1(0,T,L^2(\mathbb R^n))$. Hence, $\partial_s w(s)+(i\Delta_{B_s})^* w(s)=0$ $x$ a.e., $w(t)=w_0$ and it follows that, for all $t \in [0,T]$: $$ \left( u(t), w_0 \right)=0, \qquad \forall w_0 \in H^2(\mathbb R^n), $$ so that $u=0$. This ends the proof. \end{proof} \bigskip We use the result of the last Proposition to prove that the non-linear magnetic Schr\"odinger equation \begin{equation} \label{mag} \partial_t \varphi=i \Delta_{B^H_t} \varphi +g(\varphi), \qquad 0<t\leq T, \qquad \varphi(0)=\Psi_0, \end{equation} admits a unique solution $\P$ almost surely in the same sense as \fref{defsol2}: \begin{theorem} \label{th_mag} Assume that \textbf{H} is satisfied. Suppose moreover that \fref{assumQ} is verified for $V=H^{q+4}(\mathbb R^n)$, $q$ non-negative integer. Then, for every $\Psi_0 \in H^q(\mathbb R^n)$, there exists a maximal existence time $T_M>0$ and a unique function $\varphi \in {\mathcal C}^0(0,T_M,H^{q}(\mathbb R^n)) \cap{\mathcal C}^1(0,T_M,H^{q-2}(\mathbb R^n))$ verifying \fref{mag} for $t\in[0,T_M]$, which admits the following representation formula: $$ \varphi(t)= U(t,0) \Psi_0+ \int_0^t U(t,s) g(\varphi(s))ds, $$ where $U=\{U(t,s)\}$ is the evolution operator generated by the operator $$ i\Delta_{B_t^H} =i e^{i B_t^H}\circ \Delta \circ e^{-i B_t^H}. $$ If moreover $\Im g(\varphi) \overline{\varphi}=0$, then for all $t\in[0,T_M]$ \begin{equation} \label{charge} \| \varphi(t)\|_{L^2}=\| \varphi(0)\|_{L^2}. \end{equation} If $g$ is globally Lipschitz on $H^q(\mathbb R^n)$, then the solution exists for all time $T<\infty$. \end{theorem} \begin{proof} The proof is very classical and relies on Proposition \ref{gene_exist} and a standard fixed point procedure. 
First of all, \fref{assumQ} ensures that $\P$ almost surely, $B^H \in {\mathcal C}^0(0,T,H^{q+4}(\mathbb R^n))$, which allows us to define an evolution operator $U$ according to Proposition \ref{geneevol}. The rest of the proof follows the usual arguments of, for instance, \cite{pazy}, Theorem 1.4, Chapter 6, which we sketch here for completeness. Given $\Psi_0$ in $H^{q}(\mathbb R^n)$ and $\varphi \in {\mathcal C}^0(0,t_1,H^q(\mathbb R^n))$ for some $t_1>0$ to be fixed later on, denote by $u:=F(\varphi)$ the solution to \fref{eqgene} in ${\mathcal C}^0(0,t_1,H^{q}(\mathbb R^n)) \cap{\mathcal C}^1(0,t_1,H^{q-2}(\mathbb R^n))$ where $f=g(\varphi)$ belongs to ${\mathcal C}^0(0,t_1,H^{q}(\mathbb R^n))$ thanks to hypothesis \textbf{H}. Using the latter together with estimate \fref{estimu}, and following the aforementioned theorem of \cite{pazy}, one can establish the existence of $M>0$ and a time $t_1(M)$ such that $F$ maps the ball of radius $M$ of ${\mathcal C}^0(0,t_1,H^{q}(\mathbb R^n))$ centered at 0 into itself. In this ball, the function $g$ being uniformly Lipschitz on $H^p(\mathbb R^n)$, $0\leq p\leq q$, according to hypothesis \textbf{H}, existence and uniqueness of a fixed point of $F$ in ${\mathcal C}^0(0,t_1,H^{q}(\mathbb R^n))$, denoted by $\varphi^\star$, follow from the contraction principle. Moreover, this solution verifies the representation formula \fref{rep} with $f=g(\varphi^\star) \in {\mathcal C}^0(0,t_1,H^{q}(\mathbb R^n))$ and, according to Proposition \ref{gene_exist}, belongs in addition to ${\mathcal C}^1(0,t_1,H^{q-2}(\mathbb R^n))$ and satisfies \fref{mag}. The existence of a maximal time of existence $T_M$ is established following the same lines as \cite{pazy}. When $g$ is globally Lipschitz on $H^q(\mathbb R^n)$, the Gronwall Lemma shows that $T_M=\infty$. 
Regarding the conservation of charge \fref{charge}, the case $q \geq 1$ is direct since the solution $\varphi^\star$ is regular enough to be used as a test function in \fref{defsol2} (after interpretation of $(\cdot, \cdot )$ as the $H^{-1}-H^1$ duality pairing when $q=1$), and it then suffices to take the imaginary part of the equation. When $q=0$, we use a regularization procedure very similar to that of the proof of Proposition \ref{gene_exist}; the details are left to the reader. \end{proof} \section{Back to the stochastic Schr\"odinger equation} \label{back} We apply the result of the last section to prove Theorem \ref{th1}. Owing to the solution $\varphi$ of Theorem \ref{th_mag}, it suffices to show (i) that $e^{-i B_t^H} \varphi$ is a solution to \fref{defsol}, which will follow from the regularity of $\varphi$ and Lemma \ref{chain2}; this yields existence; and (ii) that all solutions to \fref{defsol} with the corresponding regularity read $e^{-i B_t^H} u$ where $u$ is a solution to \fref{mag}, which yields uniqueness since \fref{mag} has a unique solution. As explained earlier, the last step requires a regularization procedure since test functions of the form $w=e^{-i B_t^H} z$, with $z$ smooth, are not differentiable in time and cannot be used directly in \fref{defsol}. In the whole proof, $T$ denotes some time $T\geq T_M$.\\ \noindent \textbf{Proof of Theorem \ref{th1}.} \textit{Existence.} Let $\varphi$ be the unique solution to \fref{mag} according to Theorem \ref{th_mag} and define, for any test function $w \in {\mathcal C}^1(0,T_M,H^{q+2}(\mathbb R^n))$, $$ F(B^H_t,t)=( e^{-i B_t^H} \varphi(t), w(t)). $$ We verify that $F$ satisfies the hypotheses of Lemma \ref{chain2}. First of all, $F$ is clearly continuously differentiable w.r.t. the first variable and, for all $v \in H^{q+4}(\mathbb R^n)$, let $\phi(t):=\partial_1 F(B^H_t,t)(v)=i ( e^{-i B_t^H} \varphi(t) v, w(t))$. 
Second, we need to show that $\phi \in {\mathcal C}^{0,\lambda}(0,T)$ for $\lambda$ verifying $\lambda+\gamma>1$, together with the bound \fref{hypphi}. To this end, we have for $(t,s) \in [0,T_M]^2$: \begin{eqnarray*} \phi(t)-\phi(s)&=&i\left( (e^{-i B_t^H}-e^{-i B_s^H}) \varphi(t) v, w(t)\right)+i\left(e^{-i B_s^H} (\varphi(t)-\varphi(s)) v, w(t)\right)\\ &&+i\left(e^{-i B_s^H} \varphi(s) v, (w(t)-w(s))\right)\\ &:=&T_1+T_2+T_3. \end{eqnarray*} We treat each term separately. Using standard Sobolev embeddings for $n\leq 3$, we have: \bea \nonumber |T_1| &\leq& C \|\varphi(t)\|_{L^2} \|w(t)\|_{L^\infty} \|v\|_{L^\infty} \|B_t^H-B_s^H\|_{L^2}\\ \nonumber &\leq & C |t-s|^{\gamma} \|v\|_{H^{q+4}} \|B_t^H\|_{{\mathcal C}^{0,\gamma}(0,T_M,L^2)}\\ &\leq & C |t-s|^{\gamma} \|v\|_{H^{q+4}}, \label{estimT1} \eea for all $0\leq \gamma<H$. Regarding the term $T_2$, notice that the product $e^{i B_t^H} \overline{v} w$ belongs to $H^{q+2}(\mathbb R^n)$ when $n \leq 3$, so that, since $\partial_t \varphi \in {\mathcal C}^{0}(0,T_M,H^{q-2}(\mathbb R^n))$, we can write $$ \left(e^{-i B_s^H} (\varphi(t)-\varphi(s)) v, w(t)\right)=\int_s^t \langle \partial_\tau \varphi(\tau) , e^{ i B_s^H} \overline{v} w(t) \rangle_{H^{q-2},H^{q+2}} d\tau, $$ where, when $q\geq 2$, the pairing $\langle \cdot, \cdot \rangle_{H^{q-2},H^{q+2}}$ is replaced by the $L^2$ inner product. Hence, $$ |T_2| \leq |t-s| \|w(t)\|_{H^{q+2}} \|v\|_{H^{q+4}} \|B_t^H\|_{{\mathcal C}^{0}(0,T_M,H^{q+4})} \|\partial_t \varphi\|_{{\mathcal C}^{0}(0,T_M,H^{q-2})} \leq C |t-s| \|v\|_{H^{q+4}}. $$ The estimation of $T_3$ is straightforward and leads to a similar estimate. This, together with \fref{estimT1}, yields that $$ \| \phi\|_{{\mathcal C}^{0,\gamma}(0,T_M)} \leq C \|v\|_{H^{q+4}}. $$ Since $H>\frac{1}{2}$, we may write $H=\frac{1}{2}+\varepsilon$ with $\varepsilon>0$ and pick $\gamma=\frac{1}{2}+\frac{\varepsilon}{2}$, so that $2\gamma>1$ and the assumption on $\phi$ of Lemma \ref{chain2} is verified. 
It remains to show that $\partial_2 F$ exists and is continuous, and this is a consequence of the fact that $\partial_t \varphi \in {\mathcal C}^{0}(0,T_M,H^{q-2}(\mathbb R^n))$. Applying Lemma \ref{chain2} then yields \begin{align} \label{eqpphi} &( e^{-i B_t^H} \varphi(t), w(t))-( \Psi_0, w(0))=\int_0^t\langle \partial_\tau \varphi(\tau), e^{i B_\tau^H} w(\tau)\rangle_{H^{q-2},H^{q+2}} d\tau\\ \nonumber &+\int_0^t( e^{-i B_\tau^H} \varphi(\tau), \partial_\tau w(\tau)) d\tau+i \sum_{p \in \mathbb N} \lambda_p \int_0^t ( e^{-i B_\tau^H} \varphi(\tau)e_p, w(\tau)) d \beta^H_p(\tau). \end{align} In order to conclude, picking $w(t,x)=w(x) \in H^{q+2}(\mathbb R^n)$ in \fref{defsol2} with $f=g(\varphi)$, it follows that \fref{mag} is verified in $H^{q-2}(\mathbb R^n)$ for all $t\in [0,T_M]$ and almost surely. This yields $\partial_\tau \varphi= i \Delta_{B_\tau^H} \varphi-i g(\varphi)$ in $H^{q-2}(\mathbb R^n)$, and replacing $\partial_\tau \varphi$ by this expression in \fref{eqpphi} and setting $\Psi=e^{-i B_t^H} \varphi \in {\mathcal C}^0(0,T_M,H^q(\mathbb R^n)) \cap {\mathcal C}^{0,\gamma}(0,T_M,H^{q-2}(\mathbb R^n))$ for all $0\leq \gamma<H$ finally yields \fref{defsol}. \textit{Uniqueness, Step 1: regularization.} Starting from a solution $\Psi$ to \fref{defsol} with the above regularity, we would like to choose the test function $w=e^{-i B_\tau^H} z$ for some regular function $z$ in order to recover the weak formulation of \fref{mag}, which admits a unique solution. This is of course not allowed since $B^H$ is not differentiable. The remedy is to use the H\"older regularity of $\Psi$ in order to reinterpret the term $$ \int_0^t \left( \Psi(s), \partial_s w(s) \right) ds $$ as a fractional integral. 
To this end, let $$ B^{H,\varepsilon}(t,x):=\sum_{p \in \mathbb N} \lambda_p e_p(x) \beta_p^{H,\varepsilon}(t) $$ where $\beta^{H,\varepsilon}_p$ is a ${\mathcal C}^1$ regularization of $\beta^{H}_p$ such that $\beta^{H,\varepsilon}_p \to \beta^{H}_p$ in ${\mathcal C}^{0,\gamma}(0,T)$ almost surely for all $p$ and $\|\beta^{H,\varepsilon}_p\|_{{\mathcal C}^{0,\gamma}(0,T)} \leq \|\beta^{H}_p\|_{{\mathcal C}^{0,\gamma}(0,T)}$, $0\leq \gamma <H$. We have \begin{equation} \label{conVV} B^{H,\varepsilon}\to B^{H}\quad \textrm{in}\quad {\mathcal C}^{0,\gamma}(0,T,H^{q+4}(\mathbb R^n)),\qquad \P \quad \textrm{almost surely}. \end{equation} Indeed: $$ \| B^{H,\varepsilon}-B^{H}\|_{{\mathcal C}^{0,\gamma}(0,T,H^{q+4})} \leq \sum_{p \in \mathbb N} \lambda_p \| e_p \|_{H^{q+4}} \|\beta^{H,\varepsilon}_p -\beta^{H}_p\|_{{\mathcal C}^{0,\gamma}(0,T)} $$ and $$ \|\beta^{H,\varepsilon}_p -\beta^{H}_p\|_{{\mathcal C}^{0,\gamma}(0,T)} \leq 2 \|\beta^{H}_p\|_{{\mathcal C}^{0,\gamma}(0,T)}, $$ which, together with the convergence of $\beta_p^{H,\varepsilon}$ to $\beta_p^{H}$ in ${\mathcal C}^{0,\gamma}$, \fref{defK} and the Weierstrass rule, gives the desired result. Set $w_\varepsilon=e^{-i B_t^{H,\varepsilon}} z$ where $z \in {\mathcal C}^1(0,T,H^{q+2}(\mathbb R^n))$. Then \begin{align*} &\int_0^t \left( \Psi(s), \partial_s w_\varepsilon(s) \right) ds\\ &\qquad =\int_0^t \left( \Psi(s)e^{i B_s^{H,\varepsilon}}, \partial_s z(s) \right)ds-i\sum_{p\in \mathbb N} \lambda_p \int_0^t \left( \Psi(s)e^{i B_s^{H,\varepsilon}},z(s) e_p\right) (\beta_p^{H,\varepsilon}(s))' ds, \end{align*} where the interchange of sums and integrals is permitted since the series defining $B^{H,\varepsilon}$ is normally convergent in ${\mathcal C}^1(0,T,H^{q+4}(\mathbb R^n))$. 
We now use the fact that for a continuously differentiable function $f$, $D_{t-}^\alpha f \to f'$ as $\alpha \to 1$, see \cite{zahle} section 1, and that $D_{t-}^{\alpha+\beta} f=D_{t-}^{\alpha} D_{t-}^{\beta} f$, where the operator $D_{t-}^{\alpha}$ is defined in section \ref{prelim}. We then introduce, for $\frac{1}{2}<1-\mu<\gamma$, \begin{eqnarray*} I^\varepsilon_p&:=&\int_0^t \left( \Psi(s)e^{i B_s^{H,\varepsilon}},z(s) e_p\right) (\beta_p^{H,\varepsilon}(s)-\beta_p^{H,\varepsilon}(t))' ds\\ &=&\lim_{\alpha \to 1} I^{\varepsilon,\alpha}_p := \lim_{\alpha \to 1} \int_0^t \left( \Psi(s)e^{i B_s^{H,\varepsilon}},z(s) e_p\right) D_{t-}^{\alpha-\mu} D_{t-}^{\mu} (\beta_p^{H,\varepsilon})_{t-}(s)ds, \end{eqnarray*} where $(\beta_p^{H,\varepsilon})_{t-}(s)=\beta_p^{H,\varepsilon}(s)-\beta_p^{H,\varepsilon}(t^-)$. Owing to the fact that $( \Psi(s)e^{i B_s^{H,\varepsilon}},z(s) e_p)\in {\mathcal C}^{0,\gamma}(0,T_M)$ for any $0\leq \gamma<H$ and using the fractional integration by parts formula of \cite{zahle} section 1, we find $$ I^{\varepsilon,\alpha}_p=(-1)^{\alpha-\mu} \int_0^t D_{0+}^{\alpha-\mu} \left( \Psi(s)e^{i B_s^{H,\varepsilon}},z(s) e_p\right) D_{t-}^{\mu} (\beta_p^{H,\varepsilon})_{t-}(s)ds. $$ Moreover, we can let $\alpha$ tend to one in this identity thanks to dominated convergence, since the term $( \Psi(s)e^{i B_s^{H,\varepsilon}},z(s) e_p)$ belongs to ${\mathcal C}^{0,\gamma}(0,T_M)$ and $1-\mu<\gamma$, and obtain $$ I^\varepsilon_p=(-1)^{1-\mu} \int_0^t D_{0+}^{1-\mu} \left( \Psi(s)e^{i B_s^{H,\varepsilon}},z(s) e_p\right) D_{t-}^{\mu} (\beta_p^{H,\varepsilon})_{t-}(s)ds. 
$$ We derive below some estimates needed to pass to the limit $\varepsilon \to 0$.\\ \noindent \textit{Uniqueness, Step 2: uniform estimates.} Recall that $w_\varepsilon=e^{-i B_t^{H,\varepsilon}} z$ and define first, with $w=e^{-i B_t^{H}} z$: $$\phi^\varepsilon(s)=( \Psi(s),(w_\varepsilon(s)-w(s)) e_p):=( \Psi(s),r_\varepsilon(s)e_p).$$ In order to estimate $D_{0+}^{1-\mu} \phi^\varepsilon$, we write \begin{eqnarray*} \phi^\varepsilon(t)-\phi^\varepsilon(s)&=&( \Psi(t)- \Psi(s),r_\varepsilon(t)e_p)+( \Psi(s),(r_\varepsilon(t)-r_\varepsilon(s))e_p):=T_1+T_2. \end{eqnarray*} For the term $T_1$, we use the ${\mathcal C}^{0,\gamma}$ regularity of $\Psi$ in $H^{q-2}$, while we use that of $r_\varepsilon$ for $T_2$. With the help of standard Sobolev embeddings, we obtain: \begin{eqnarray*} |T_1| &\leq& |t-s|^\gamma \|\Psi\|_{{\mathcal C}^{0,\gamma}(0,T_M,H^{q-2})}\|r_\varepsilon(t)\|_{H^{q+2}} \|e_p\|_{H^{q+2}}\\ |T_2| &\leq& |t-s|^\gamma \|\Psi(t)\|_{L^2} \|e_p\|_{L^\infty} \| r_\varepsilon \|_{{\mathcal C}^{0,\gamma}(0,T,L^{2})}. \end{eqnarray*} Since $1-\mu<\gamma$, this gives \begin{equation} \label{estphiW} \| \phi^\varepsilon\|_{W_{1-\mu,1}(0,T_M)} \leq C \|e_p\|_{H^{q+4}} \| B^{H,\varepsilon}-B^H \|_{{\mathcal C}^{0,\gamma}(0,T,H^{q+4}(\mathbb R^n))}. \end{equation} On the other hand, using the notation of section \ref{prelim}, we find \begin{equation} \label{estimbet} |D_{t-}^{\mu} (\beta_p^{H,\varepsilon})_{t-}(s)| \leq \Lambda_{1-\mu}(\beta^{H,\varepsilon}_p) \leq C \|\beta_p^{H,\varepsilon}\|_{{\mathcal C}^{0,\gamma}(0,T)} \leq C \|\beta_p^{H}\|_{{\mathcal C}^{0,\gamma}(0,T)} < \infty, \end{equation} and \begin{equation} \label{estimbet2} |D_{t-}^{\mu} (\beta_p^{H,\varepsilon}-\beta_p^{H})_{t-}(s)| \leq \Lambda_{1-\mu}(\beta^{H,\varepsilon}_p-\beta_p^{H}) \leq C \|\beta_p^{H,\varepsilon}-\beta_p^{H}\|_{{\mathcal C}^{0,\gamma}(0,T)}. 
\end{equation} \noindent \textit{Uniqueness, Step 3: passing to the limit.} We now have all that is needed to pass to the limit in the weak formulation \fref{defsol}. Plugging in $w_\varepsilon(t)=e^{-i B_t^{H,\varepsilon}} z \in {\mathcal C}^1(0,T,H^{q+2}(\mathbb R^n))$ yields \begin{align} \label{defsol3} \nonumber &\left( \Psi(t), w_\varepsilon(t) \right)-\left( \Psi_0, z(0) \right) =\int_0^t \left( \Psi(s), \partial_s w_\varepsilon(s) \right) ds \\[3mm] & -i\int_0^t \left( \Psi(s),\Delta w_\varepsilon(s)\right) ds +i \int_0^t\left(\Psi(s),w_\varepsilon(s) d B_s^H\right)+i\int_0^t \left(g(\Psi(s)),w_\varepsilon(s) \right) ds. \end{align} We have $$ \int_0^t \left( \Psi(s), \partial_s w_\varepsilon(s) \right) ds=\int_0^t \left( \Psi(s)e^{i B_s^{H,\varepsilon}}, \partial_s z(s) \right) ds-i\sum_{p \in \mathbb N} \lambda_p I_p^\varepsilon. $$ Using \fref{estimint}-\fref{conVV}-\fref{estphiW}-\fref{estimbet}-\fref{estimbet2} as well as \fref{defK}, we can pass to the limit in the latter equation and obtain that, $\forall t \in [0,T_M]$: $$ \lim_{\varepsilon \to 0} \int_0^t \left( \Psi(s), \partial_s w_\varepsilon(s) \right) ds=\int_0^t \left( \Psi(s)e^{i B_s^{H}}, \partial_s z(s) \right) ds-i \int_0^t\left(\Psi(s)e^{i B_s^{H}}, z(s) d B_s^H\right). $$ Similar arguments can be employed to pass to the limit in the remaining terms of \fref{defsol3}. The stochastic integrals simplify and we are left with \begin{align} \label{defsol4} \nonumber &\left( \Psi(t)e^{i B_t^{H}}, z(t) \right)-\left( \Psi_0, z(0) \right) =\int_0^t \left( \Psi(s)e^{i B_s^{H}}, \partial_s z(s) \right) ds \\[3mm] & -i\int_0^t \left( \Psi(s),\Delta (e^{-i B_s^{H}}z(s))\right) ds +i\int_0^t \left(g(\Psi(s)),e^{-i B_s^{H}}z(s) \right) ds. \end{align} Hence, $\Psi e^{i B_t^{H}}$ verifies the magnetic Schr\"odinger equation \fref{defsol2} with $f=g(\Psi(s))e^{i B_s^{H}}=g(\Psi(s)e^{i B_s^{H}})$. 
Since the latter admits a unique solution $\varphi \in {\mathcal C}^0(0,T_M,H^q(\mathbb R^n)) \cap {\mathcal C}^1(0,T_M,H^{q-2}(\mathbb R^n))$ according to Theorem \ref{th_mag}, we can conclude that \fref{SSE} admits a unique solution. The representation formula \fref{repre_th} then follows without difficulty from the identification $\Psi e^{i B_t^{H}}=\varphi$ and Theorem \ref{th_mag}. This ends the proof of Theorem \ref{th1}. \section{Appendix} \subsection{Proof of Lemma \ref{chain2}} First of all, we know by section \ref{prelim} that $B_t^{H}$ belongs to $E:={\mathcal C}^{0,\gamma}(0,T,V)$, $\P$ almost surely for $0\leq \gamma<H$. We proceed by approximation in order to apply the change of variables formula \fref{chain}, valid in finite dimensions. Let $$B_t^{H,N}(x):=\sum_{p=0}^N \lambda_p e_p(x) \beta^H_p(t),$$ so that \begin{equation} \label{convB}B_t^{H,N} \to B_t^{H} \quad \textrm{in } E, \qquad \P \textrm{ almost surely,} \end{equation} thanks to \fref{assumQ} and \fref{defK}. We have moreover the bound $\|B_t^{H,N}\|_E \leq \|B_t^{H}\|_E:=M$. Since $F$ is ${\mathcal C}^1$, and $\phi \in {\mathcal C}^{0,\lambda}(0,T)$ with $\lambda+\gamma>1$, we can use \fref{chain} and find, for $0\leq s \leq t \leq T$ fixed: \begin{eqnarray*} F(B_t^{H,N},t)-F(B_s^{H,N},s)=\int_s^t \partial_2 F(B_{\tau}^{H,N},\tau) d\tau+ \sum_{p=0}^N \lambda_p \int_s^t \partial_1 F(B_{\tau}^{H,N},\tau)(e_p)\, d \beta_p^H(\tau). \end{eqnarray*} By continuity of $F$, it is direct to pass to the limit in the left hand side. The same holds for the first term of the right hand side thanks to dominated convergence and the fact that $\partial_2 F$ is continuous and $B_\tau^{H,N}$ is bounded in $E$ independently of $N$. 
Regarding the last term, let $\phi^N_p(\tau)=\partial_1 F(B_{\tau}^{H,N},\tau)(e_p)$ and $$ f_p^N:=\int_s^t \phi_p^N(\tau) \, d \beta_p^H(\tau)=(-1)^\alpha \int_s^t D_{s+}^\alpha \phi_p^N(\tau) D_{t-}^{1-\alpha} (\beta_p^H)_{t-}(s) d\tau, $$ by \fref{stiel} for some $\alpha$ verifying $\alpha<\lambda$ and $1-\alpha<\gamma$. Since $\|B_t^{H,N}\|_E \leq M$, we have by \fref{hypphi} $$ |\tau-s|^{-\alpha-1}|(\phi_p^N(\tau)-\phi_p^N(s))| \leq |\tau-s|^{\lambda-\alpha-1} \|\phi_p^N \|_{{\mathcal C}^{0,\lambda}(0,T)} \leq C_M |\tau-s|^{\lambda-\alpha-1} \|e_p \|_V, $$ where $\lambda-\alpha>0$. The latter estimate, dominated convergence, \fref{convB} together with the continuity of $\partial_1 F$ yield first that $D_{s+}^\alpha \phi_p^N(\tau) \to D_{s+}^\alpha \phi_p(\tau)$ a.e., where $\phi_p(\tau)=\partial_1 F(B_{\tau}^{H},\tau)(e_p)$. Then, since $|D_{t-}^{1-\alpha} (\beta_p^H)_{t-}(s)| \leq \Lambda_\alpha(\beta^H_p)<\infty$, $\P$ almost surely, we have \begin{equation} \label{estimDphi} |D_{s+}^\alpha \phi_p^N(\tau) D_{t-}^{1-\alpha} (\beta_p^H)_{t-}(s) | \leq C |\tau-s|^{\lambda-\alpha-1}\|e_p \|_V \end{equation} so that dominated convergence implies that $f_p^N \to f_p=\int_s^t \phi_p(\tau) \, d \beta_p^H(\tau)$, for all $p \in \mathbb N$. Finally, since $$ \lambda_p |f_p^N| \leq C \lambda_p \|e_p \|_V, $$ thanks to \fref{estimDphi}, and moreover \fref{assumQ} holds, we can apply the Weierstrass rule and conclude that $\P$ almost surely: $$ \lim_{N\to \infty} \sum_{p=0}^N \lambda_p f_p^N=\sum_{p=0}^\infty \lambda_p \int_s^t \partial_1 F(B_{\tau}^{H},\tau)(e_p)\, d \beta_p^H(\tau). $$ This ends the proof. 
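The pathwise integrals manipulated above are of Young type: when the H\"older exponents of the integrand and of the path sum to more than one, left-point Riemann-Stieltjes sums converge and the classical chain rule survives. The following small numerical sketch (it is not part of the proof; a deterministic smooth path stands in for a sample of $\beta_p^H$, and all function names are ours) illustrates why the condition $H>\frac12$ is the natural threshold: for $F(b)=b^2$ the left-point sums miss the exact increment by the quadratic variation sum $\sum_i(\Delta\beta)^2$, which vanishes under mesh refinement precisely when the path is H\"older of order greater than $\frac12$.

```python
import math

# Left-point Riemann-Stieltjes (Young) approximation of int G'(beta) dbeta.
# For G(b) = b^2, the exact increment G(beta(T)) - G(beta(0)) differs from
# the left-point sum by sum_i (Delta beta_i)^2, which tends to 0 under mesh
# refinement whenever the path is Hoelder continuous of order > 1/2.

def left_point_sum(g_prime, path, grid):
    """Left-point sum of g_prime(path(t)) d path(t) over the grid."""
    return sum(g_prime(path(grid[i])) * (path(grid[i + 1]) - path(grid[i]))
               for i in range(len(grid) - 1))

path = lambda t: math.sin(3.0 * t) + 0.5 * t   # smooth stand-in for beta^H
g = lambda b: b * b                            # G(b) = b^2
g_prime = lambda b: 2.0 * b                    # G'(b) = 2b

T = 1.0
exact = g(path(T)) - g(path(0.0))

errors = []
for n in (100, 1000, 10000):
    grid = [T * i / n for i in range(n + 1)]
    errors.append(abs(left_point_sum(g_prime, path, grid) - exact))

# The error decays like 1/n, i.e. like the quadratic variation of the grid.
print(errors)
```

For a genuine fractional Brownian path with $H<\frac12$, the analogous quadratic variation sums do not vanish, which is why the whole construction above is carried out under the standing assumption $H>\frac12$.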
\subsection{Proof of Lemma \ref{fubini}} The hypotheses on $F$ show that the integral on the left is well-defined and that $$ \sum_{p \in \mathbb N} \lambda_p \int_0^t \left(\int_{\mathbb R^n} F_{\tau,x}(e_p) dx \right) d \beta^H_p(\tau) = \lim_{N\to \infty } \sum_{p=0}^N \lambda_p \int_0^t \left(\int_{\mathbb R^n} F_{\tau,x}(e_p) dx \right) d \beta^H_p(\tau):=\lim_{N\to \infty } I_N. $$ Moreover, for $1-H<\alpha<\frac{1}{2}$, we have that $|D_{t-}^{1-\alpha} (\beta_p^H)_{t-}(s)| \leq \Lambda_\alpha(\beta^H_p)<\infty$, $\P$ almost surely, and $D_{0+}^\alpha F_{t,x}(e_p) \in L^1((0,T)\times \mathbb R^n)$ since $F_{t,x}(e_p)\in W_{\alpha,1}(0,T,L^1(\mathbb R^n))$. Hence, using the definition of the stochastic integral and the Fubini theorem, it follows that \begin{eqnarray*} I_N&=&(-1)^{|\alpha|}\sum_{p=0}^N \lambda_p \int_0^t \left[D_{0+}^\alpha \int_{\mathbb R^n} F_{\tau,x}(e_p)(\tau) dx \right]\left[D_{t-}^{1-\alpha} (\beta_p^H)_{t-}(\tau)\right] d\tau \\ &=& (-1)^{|\alpha|}\sum_{p=0}^N \lambda_p \int_{\mathbb R^n} \left(\int_0^t \left[D_{0+}^\alpha F_{\tau,x}(e_p)(\tau) \right] \left[D_{t-}^{1-\alpha} (\beta_p^H)_{t-}(\tau)\right] d\tau \right)dx\\ &=& \int_{\mathbb R^n} \left(\sum_{p=0}^N \lambda_p \int_0^t F_{\tau,x}(e_p) d \beta^H_p(\tau) \right) dx:=\int_{\mathbb R^n} f_N(x) dx. \end{eqnarray*} Moreover, $\P$ almost surely: \begin{eqnarray*} \|f_N\|_{L^1} &\leq& \sum_{p=0}^N \lambda_p \left\| \|F_{t,x}(e_p)\|_{W_{\alpha,1}(0,T)} \right\|_{L^1} \Lambda_\alpha(\beta_p^H)\\ & \leq& C \|F\|_{W_{\alpha,1}(0,T,{\mathcal L}(V,L^1))} \sum_{p=0}^N \lambda_p \|e_p\|_V \Lambda_\alpha(\beta_p^H), \end{eqnarray*} so that, thanks to \fref{finitelamb}, the series defining $f_N$ converges strongly in $L^1(\mathbb R^n)$, almost surely. This yields $$ \lim_{N\to \infty } I_N=\int_{\mathbb R^n} \left(\sum_{p=0}^\infty \lambda_p \int_0^t F_{\tau,x}(e_p) d \beta^H_p(\tau) \right) dx $$ and ends the proof. 
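To summarize the structure of the above argument in the simplest situation, suppose only one mode is present, say $\lambda_p=0$ for $p\geq 1$ (this is merely an illustration of the proof just given, not an additional claim). The fractional substitution turns the stochastic integral into a genuine Lebesgue integral in $(\tau,x)$, so that the interchange reduces to a single application of the classical Fubini theorem:
$$
\int_0^t \Big( \int_{\mathbb R^n} F_{\tau,x}(e_0)\, dx \Big) d\beta_0^H(\tau)
=(-1)^{|\alpha|}\int_{\mathbb R^n} \int_0^t \big[D_{0+}^\alpha F_{\tau,x}(e_0)\big]\big[D_{t-}^{1-\alpha}(\beta_0^H)_{t-}(\tau)\big]\, d\tau\, dx
=\int_{\mathbb R^n} \Big( \int_0^t F_{\tau,x}(e_0)\, d\beta_0^H(\tau) \Big) dx.
$$
The summability condition \fref{finitelamb} is then what allows this one-mode computation to be repeated and resummed over all modes.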
\subsection{Proof of Remark \ref{rem2}} For $q \geq 2$, using the regularity $\Psi \in {\mathcal C}^0(0,T_M,H^q(\mathbb R^n)) \cap {\mathcal C}^{0,\gamma}(0,T_M,H^{q-2}(\mathbb R^n))$ and picking $w \in L^2(\mathbb R^n)$, we can recast \fref{defsol} as \begin{align*} &\left( \Psi(t), w \right)-\left( \Psi_0, w \right)= -i\int_0^t \left( \Delta \Psi(s), w\right) ds +i \int_0^t\left(\Psi(s), w d B_s^H\right)+i\int_0^t \left(g(\Psi(s)),w \right) ds. \end{align*} Since the mapping $F: e_p \mapsto \overline{\Psi(s)} w e_p$ belongs to ${\mathcal C}^{0,\gamma}(0,T_M,{\mathcal L}(V,L^1(\mathbb R^n)))$, we can use Lemma \ref{fubini} together with the Fubini theorem to arrive at \begin{align*} &\left( \Psi(t), w \right)-\left( \Psi_0, w \right)= \\[3mm] &-i \left( \int_0^t \Delta \Psi(s) ds, w\right) +i \left( \int_0^t\Psi(s)d B_s^H, w \right)+i\left( \int_0^t g(\Psi(s))ds,w \right), \end{align*} which yields the desired result.
\section{Introduction} H. Poincar\'e considered the problem of characterizing the structure of limit sets of trajectories of analytic vector fields on the plane in 1886 \cite{poincare}. I. Bendixson improved the solution proposed by Poincar\'e in 1903 by solving the problem under the weaker hypothesis of $C^1$ vector fields \cite{bendixson}. Since then, the investigation of the asymptotic behavior of dynamical systems has been essential to understanding them. The theory of Poincar\'e-Bendixson studies so-called \textit{limit sets}. The classic version of the Poincar\'e-Bendixson Theorem states that if a trajectory is bounded and its limit set does not contain any fixed points, then the limit set is a periodic orbit \cite{perko1991differential}. Therefore, the problem of determining the existence of limit cycles in planar continuous dynamics is well understood. Hybrid systems \cite{teelSurvey} are non-smooth dynamical systems which exhibit a combination of smooth and discrete dynamics, where the flow evolves continuously on a state space, and a discrete transition occurs when the flow intersects a co-dimension one hypersurface. Due to many engineering applications, such as dynamical walking of bipedal robots \cite{collins}, \cite{grizzle}, \cite{grizzle2}, there has been an increased interest in recent years in studying the existence and stability of limit cycles in hybrid systems \cite{Lou2015OnRS}, \cite{Lou2015ResultsOS}, \cite{Lou2017ExistenceOH}. There have also been several attempts at building a foundational qualitative theory for hybrid systems (see \cite{Matveev:2000:QTH:555969}, \cite{SIMIC2002197} and references therein) where early versions of the Poincar\'e-Bendixson Theorem were developed. However, many fundamental questions solved for continuous-time systems still remain open for hybrid systems. 
The results in \cite{Matveev:2000:QTH:555969} are restricted to the situation of constant vector fields while in \cite{SIMIC2002197}, the authors considered a particular class of systems with much stricter assumptions to ensure the existence of periodic orbits. The Poincar\'e map, or first return map, is a method of studying the stability of limit cycles by reducing the dimension of the dynamics of a continuous-time dynamical system by one, and considering the result as a discrete-time system \cite{guckenheimer2002nonlinear}, \cite{perko1991differential}, \cite{strogatz2014nonlinear}. The map is constructed as follows: if $\gamma$ is a periodic orbit that intersects a hypersurface (the Poincar\'e section) transversely at a point $x_0$, then for a point $x$ near $x_0$, the solution through $x$ will cross the hypersurface again at a point $P(x)$ near $x_0$. The mapping $x\mapsto P(x)$ is called the Poincar\'e map. In practice, it is generally impossible to find the Poincar\'e map in closed form, since doing so requires the solution of the differential equation. The extension of the continuous-time Poincar\'e map to mechanical systems with impulsive effects was considered in \cite{grizzle}. The hybrid Poincar\'e map is important in applications as it is used to ensure the existence and stability properties of periodic locomotion gaits \cite{siamreview}. In this work, we study the existence and stability of periodic orbits for hybrid dynamical systems and establish further results concerning the qualitative behavior of these systems. Among these results, we present a version of the Poincar\'e-Bendixson Theorem for two dimensional hybrid systems under weaker conditions than the ones considered in \cite{Matveev:2000:QTH:555969}, \cite{SIMIC2002197}. We also derive an analytical method to compute the derivative of the hybrid Poincar\'e map to characterize the stability of periodic orbits. 
We apply our results to find a region in parameter space where one can ensure the existence of limit cycles for the rimless wheel, a popular system in locomotion research used to study essential properties of walking robots. We additionally prove a Poincar\'e-Bendixson Theorem for a general class of one dimensional hybrid dynamical systems. The paper is organized as follows: Section \ref{sec:hybrid} introduces the formulation of hybrid dynamical systems as well as hybrid $\omega$-limit sets. Section \ref{sec:limit} contains two properties of the hybrid $\omega$-limit set that are needed to properly formulate our hybrid Poincar\'e-Bendixson theorem. Section \ref{sec:poincare} introduces the hybrid Poincar\'e map and uses this to prove our hybrid Poincar\'e-Bendixson Theorem (Theorem \ref{th:HPB}). Section \ref{sec:stability} offers an analytic way to compute the derivative of the hybrid Poincar\'e map, then uses this result to study the stability of planar hybrid limit cycles. Section \ref{sec:1dim} contains a version of the Poincar\'e-Bendixson theorem for a general class of one dimensional hybrid dynamical systems. The paper ends with an application of the main theorem to find conditions for stability of periodic walking of the rimless wheel. Appendix \ref{appendix} contains some analogous results for time-continuous flows. \section{Hybrid Dynamical Systems}\label{sec:hybrid} Hybrid dynamical systems (HDS) are dynamical systems characterized by their mixed behavior of continuous and discrete dynamics, where the transition is determined by the time when the continuous flow switches from the ambient space to a co-dimension one submanifold. This class of dynamical systems is given by a $4$-tuple, $(\mathcal{X},S,f,\Delta)$. 
The pair ($\mathcal{X}$,$f$) describes the continuous dynamics as \begin{equation*} \dot{x}(t) = f(x(t)) \end{equation*} where $\mathcal{X}$ is a smooth manifold and $f$ a $C^1$ vector field on $\mathcal{X}$ with flow $\varphi_t:\mathcal{X}\rightarrow\mathcal{X}$. Additionally, ($S$,$\Delta$) describes the discrete dynamics as $x^{+}=\Delta(x^{-})$, where $S\subset\mathcal{X}$ is a smooth submanifold of co-dimension one called the \textit{impact surface}. The hybrid dynamical system describing the combination of both dynamics is given by \begin{equation}\label{sigma}\Sigma: \begin{cases} \dot{x} = f(x),& x\not\in S\\ x^+ = \Delta(x^-),& x^-\in S. \end{cases}\end{equation} The flow of the hybrid dynamical system \eqref{sigma} is denoted by $\varphi_t^H$. This may cause a little confusion around the break points, that is, where $\varphi_{t_0}(x)\in S$: is $\varphi_{t_0}^H(x) = \varphi_{t_0}(x)$ or $\varphi_{t_0}^H(x) = \Delta(\varphi_{t_0}(x))$? That is, at the time of impact with the submanifold $S$, is the state $x^-$ or $x^+$? We will take the second value, i.e., $\varphi_{t_0}^H(x)=x^+$. However, this means that the orbits will (in general) not be closed. \begin{definition} The (forward) \textit{orbit} and the \textit{$\omega$-limit set} for the hybrid flow $\varphi_t^H(x)$ are given by \begin{equation}\begin{split} &o^+_H(x) := \left\{ \varphi_t^H(x) : t\in\mathbb{R}^+\right\}\\ &\omega_H(x) := \left\{ y\in\mathcal{X} : \exists t_n\rightarrow\infty ~ s.t. 
\lim_{n\rightarrow\infty} \varphi_{t_n}^H(x) = y\right\} \end{split} \end{equation} Additionally, we define the set $\text{fix}(f)$ of fixed points of a function $f$ and the covering set of fixed points $\N{\varepsilon}$ as \begin{equation}\label{eq:fixed_sets} \begin{split} \text{fix}(f) &:= \left\{ y\in \mathcal{X} : f(y)=0\right\} \\ \N{\varepsilon} &:= \bigcup_{x\in\text{fix}(f)} \; \mathcal{B}_\varepsilon(x) \end{split} \end{equation} where $\mathcal{B}_\varepsilon (x)$ is the open ball of radius $\varepsilon$ around the point $x$. \end{definition} The main problem studied in this work is that of proving, for a suitable class of planar HDS, an analogue of the Poincar\'e-Bendixson Theorem for continuous-time planar dynamical systems. For this purpose, we will consider a slightly more general HDS form than the one studied in \cite{grizzle}. \begin{definition}\label{def:smooth_hybrid} A 4-tuple, $(\mathcal{X},S,f,\Delta)$, forms a hybrid dynamical system if \begin{enumerate} \item[(H.1)] $\mathcal{X}\subset\mathbb{R}^n$ is open and connected. \item[(H.2)] $f:\mathcal{X}\rightarrow\mathbb{R}^n$ is $C^1$. \item[(H.3)] $H:\mathcal{X}\rightarrow\mathbb{R}$ is $C^1$. \item[(H.4)] $S:=H^{-1}(0)$ is non-empty and for all $x\in S$, $\displaystyle{\frac{\partial H}{\partial x}\ne0}$ (so $S$ is $C^1$ and has co-dimension 1). \item[(H.5)] $\Delta:S\rightarrow \mathcal{X}$ is $C^1$. \item[(H.6)] $\overline{\Delta(S)}\cap S\subset \text{fix}(f)$ and they intersect transversely. \end{enumerate} \end{definition} Note that assumptions (H.1) and (H.2) are required for the continuous flow to exist and be unique. (H.3) and (H.4) make the impact surface well defined, according to \cite{grizzle}. The assumption (H.5) is included because without it, the $\omega_H$-limit set is not (in general) invariant under the flow. The last assumption, (H.6), is to rule out the Zeno phenomenon away from fixed points. 
(A flow experiences a \textit{Zeno state} (\cite{teelSurvey},\cite{SIMIC2002197}) if the flow $\varphi_t^H$ intersects $S$ infinitely often in a finite amount of time.) Assumption (H.6) is slightly weaker than as presented in \cite{grizzle}, where it is assumed that $\overline{\Delta(S)}\cap S=\emptyset$ (or, equivalently, that the set of impact times is closed and discrete). \begin{remark} Dropping hypothesis (H.5), $\omega_H(x)$ is not always invariant under the flow $\varphi_t^H$. That is, if $p\in \omega_H(x)$, then in general $o_H^+(p)\not\subset \omega_H(x)$. The following example shows this situation. \end{remark} \begin{example} Consider the following hybrid system: Let the state-space be $\mathcal{X}=[0,1]\times\mathbb{R}\subset\mathbb{R}^2$ and the continuous flow be determined by $\dot{x}=1$ and $\dot{y} = -y^2$. Let the impact surface be $S = \{ (1,y):y\in\mathbb{R}\}$ and the impact map be given by $$\Delta(1,y) = \begin{cases} (0,y) & y > 0\\ (0,y-1) & y \leq 0. \end{cases}$$ The $\omega_H$-limit set of the starting point $(0,1)$ is the interval $[0,1]\times\{0\}$, which is clearly not invariant under the flow of the system because the impact moves the flow away from $\omega_H(0,1)$. \end{example} \section{Properties of Hybrid Limit Sets}\label{sec:limit} In this section we study two relevant properties of hybrid limit sets. First, we study sufficient conditions under which the $\omega_H$-limit set is nonempty and compact, in analogy with Theorem \ref{th:perko} in the Appendix, but for hybrid flows. This result is used to show that with assumption (H.5) the $\omega_H$-limit set is indeed invariant. \begin{proposition}\label{th:closed} The $\omega_H$-limit set of a trajectory $o_H^+(x)$ is a closed set. Additionally, if $R$ is a compact and forward invariant set, then $\omega_H(x)$ is nonempty and compact for $x\in R$. \end{proposition} \begin{proof} The proof follows the same arguments as in Theorem \ref{th:perko} (see \cite{perko1991differential} pp. 193). 
First, let us prove that $\omega_H(x)$ is closed. Let $\{p_n\}_{n\in\mathbb{N}}$ be a sequence in $\omega_H(x)$ such that $p_n\rightarrow p$ as $n\to\infty$. We want to show that $p\in\omega_H(x)$. Since $p_n\in\omega_H(x)$, there exists a sequence of times, $\{t_k^{(n)}\}$, such that $\varphi_{t_k^{(n)}}^H(x)\rightarrow p_n$ as $t_k^{(n)}\rightarrow\infty$. Without loss of generality, consider $t_k^{(n+1)}>t_k^{(n)}$. Then, for all $n\geq 2$ there exists $K_n>K_{n-1}$ such that for all $k\geq K_n$ $$\left| \varphi_{t_k^{(n)}}^H(x)-p_n\right| < \frac{1}{n}.$$ Choose a sequence of times $t_n = t_{K_n}^{(n)}$. Then, by the triangle inequality, as $t_n\rightarrow\infty$ we obtain that $\varphi_{t_n}^H(x)$ converges to $p$, that is, $$\left| \varphi_{t_n}^H(x) - p\right| \leq \left| \varphi_{t_n}^H(x)-p_n\right| + \left| p_n-p\right| \leq \frac{1}{n} + \left| p_n - p \right| \rightarrow 0 \hbox{ as } n\to\infty.$$ For the second part, we have that $\omega_H(x)\subset R$, so it is compact since it is a closed subset of a compact set. To show that it is nonempty, we point out that the sequence $\{ \varphi_n^H(x)\}_{n\in\mathbb{N}}$ lies in a compact set, so by the Bolzano-Weierstrass Theorem there exists a convergent subsequence. \end{proof} \begin{remark}Note that $\omega_H(x)$ is closed but $o_H^+(x)$ is not.\end{remark} \begin{proposition} $\omega_H(x)$ is invariant under the flow $\varphi_t^H$, i.e., if $x\in\mathcal{X}$, then for all $p\in \omega_H(x)$, $o_H^+(p)\subset\omega_H(x)$. \end{proposition} \begin{proof} The proof of the analogous theorem for continuous-time systems given in \cite{perko1991differential} (see Theorem $2$, pp. 194) depends on the trajectories changing continuously based on initial conditions. This argument clearly does not work for hybrid flows, so we need to modify what continuous means. We do this by identifying points as being close if they are on opposite sides of the jump. 
Define an equivalence relation on $\mathcal{X}$ by $x\sim y$ if $x=y$ or if $x=\Delta(y)$ for $y\in S$. Re-topologize $\mathcal{X}$ by defining open balls via $$\tilde{\mathcal{B}}_\varepsilon(x) = \bigcup_{y\in [x]} \mathcal{B}_\varepsilon(y).$$ Then, under this topology, the flow $\varphi_t^H$ is continuous. Additionally, we now have continuous dependence on initial conditions. If the flow takes us away from the impact surface, we get continuous dependence from the usual continuous flow. If it takes us to the impact surface, we still have continuity because the impact map is continuous.\\ \indent Consider $q\in o_H^+(p)$. Let $t_0$ be the time such that $q = \varphi_{t_0}^H(p)$. Additionally, let $\{t_n\}_{n\in\mathbb{N}}$ be a sequence of times such that $\varphi_{t_n}^H(x)\rightarrow p$ as $t_n\to\infty$. By the semi-group property of flows and the continuity of the flow with respect to initial conditions, we get $$\varphi_{t_0+t_n}^H(x) = \varphi_{t_0}^H\circ \varphi_{t_n}^H(x) \rightarrow \varphi_{t_0}^H(p) = q. $$ \end{proof} \section{Poincar\'e-Bendixson theorem for $2$ dimensional hybrid dynamical systems}\label{sec:poincare} \subsection{Preliminary result for discrete dynamical systems} \begin{definition} Let $S$ be a smooth manifold and $P:S\rightarrow S$ be $C^1$. The \textit{discrete flow} is defined as \begin{equation}\label{eq:disc} x_{n+1}=P(x_n). \end{equation} The \textit{discrete} $\omega_d$-\textit{limit set} is defined as \begin{equation}\label{eq:discrete_omega} \omega_d(x):=\left\{ y\in S : \exists\, N_n\rightarrow\infty \text{ s.t. } \lim_{n\rightarrow\infty}P^{N_n}(x)=y \right\}. \end{equation} \end{definition} The set $\omega_d(x)$ satisfies the following property. \begin{lemma}\label{le:single} Let $P:[a,b]\rightarrow[a,b]$ be $C^1$ and injective. Then for all $x\in[a,b]$, $\omega_d(x)$ is either a single point or two points; that is, all trajectories approach a periodic orbit.
\end{lemma} \begin{proof} First, because $P$ is invertible on its image and differentiable, $P'>0$ or $P'<0$ on the entire interval. Without loss of generality, assume that it is increasing (by examining $P^2$ if $P'<0$). Next, since we are considering the $\omega_d$-limit set, we can take an iterate of $P$. This makes the system into $P:[c,d]\tilde{\rightarrow}[c,d]$ where $c=P(a)$ and $d=P(b)$. Additionally, since $P$ is a bijection and is continuous and increasing, we must have $P(c)=c$ and $P(d)=d$. Define the closed set $F:=\{x:P(x)=x\}$. Then, if $x\in F$ we are done. So assume that $x\not\in F$. Let $a_1\in F$ be the maximal element of $F$ less than $x$ and $a_2\in F$ be the minimal element greater than $x$. Also, call the invariant interval $I=(a_1,a_2)$. Since $P-\mathrm{Id}$ does not have a root on $I$ and is continuous, we have two possibilities for all $y\in I$: either $P(y)>y$ or $P(y)<y$. If $P(y)<y$ for all $y\in I$, then the sequence $P(x),P^2(x),\ldots$ is monotone decreasing and thus convergent. Likewise, if $P(y)>y$, $P^n(x)$ is a monotone increasing sequence. Thus, $P^n(x)$ always converges and its limit set must be a single point, or two points if we are dealing with $P^2$. \end{proof} Thus, if $P$ is injective, the $\omega_d$-limit set is either a periodic orbit or a fixed point. \subsection{Existence of hybrid Poincar\'e map} To study periodic orbits it is useful to take a Poincar\'e section, and it would seem natural to take $S$ as the section \cite{grizzle}. The problem is that for a given $x\in S$, it is not guaranteed that $P'(x)$ exists. The next theorem addresses this problem. \begin{theorem}\label{thm:smooth} Let $x_0\in S\setminus\text{fix}(f)$ be such that there exists a time, $T_0>0$, where $\varphi_{T_0}(\Delta(x_0))\in S$. Additionally, assume that the flow intersects the impact surface transversely at $\varphi_{T_0}(\Delta(x_0))$.
Then there exists an $\varepsilon>0$ and a $C^1$ function $\tau:\mathcal{B}_{\varepsilon}(x_0)\cap S\rightarrow \mathbb{R}^+$ such that for all $y\in \mathcal{B}_{\varepsilon}(x_0)\cap S$, $\varphi_{\tau(y)}(\Delta(y))\in S$. \end{theorem} \begin{proof} Define the function $F:(0,+\infty)\times S\rightarrow \mathbb{R}$ by $F(t,x) = H(\varphi_t(\Delta(x)))$. It follows from Theorem 1 in Section 2.5 in \cite{perko1991differential} that $(t,x)\mapsto \varphi_t(x)$ is $C^1(\mathbb{R}\times \mathcal{X})$. Combining this with the fact that both $H$ and $\Delta$ are $C^1$ functions, we get that their composition $F$ is $C^1$ as well. Since $F\in C^1( \mathbb{R}^+ \times S)$, we can use the implicit function theorem. At our point $x_0\in S$, we know that the orbit enters the set $S$ at some minimal future time, $T_0$. This gives $F(T_0,x_0)=0$. Differentiating $F$ with respect to time yields: $$\frac{\partial F}{\partial t} (T_0,x_0) = \left. \frac{\partial H}{\partial y} \right|_{y=\varphi_{T_0}(\Delta(x_0))\in S} \cdot f(\varphi_{T_0}(\Delta(x_0))) \ne 0.$$ The first factor is nonzero because of assumption (H.4) and the second is nonzero because we are away from a fixed point. Their inner product is nonzero because of the transversality condition. This lets us use the implicit function theorem (cf., e.g., Theorem 9.28 in \cite{rudin1976principles}) to show that there exists a neighborhood $\mathcal{B}_{\varepsilon}(x_0)$ of $x_0$ and a $C^1$ function $\tau$ with all the desired properties. \end{proof} \subsection{Poincar\'e-Bendixson theorem for planar HDSs} With the existence of a smooth Poincar\'e map, we can now prove the Poincar\'e-Bendixson theorem for planar HDSs. \begin{theorem}\label{th:HPB} Assume the conditions (H.1)-(H.6). Additionally, assume \begin{enumerate} \item[(C.1)] $\mathcal{X}\subset \mathbb{R}^2$. \item[(C.2)] $\Delta:S\rightarrow \mathcal{X}$ is injective.
\item[(C.3)] There exists a forward invariant, compact set $F\subset \mathcal{X}$ with $F\cap \text{fix}(f)=\emptyset$. \item[(C.4)] $F\cap S$ is diffeomorphic to an interval. \item[(C.5)] The vector field, $f$, is transverse to both $F\cap S$ and $\Delta(F\cap S)$. \end{enumerate} Then, if $x_0\in F$, $\omega_H(x_0)$ is a periodic orbit. Moreover, $\omega_H(x_0)$ intersects the impact surface, $S$, at most twice. \end{theorem} \begin{proof} For the entirety of this proof, we will redefine $S$ by $F\cap S$. That is, $S$ is diffeomorphic to an interval. Consider the Poincar\'e return map, $P:x\mapsto \varphi_{\tau(\Delta(x))}(\Delta(x))$. The domain of this differentiable function is an open subset of $S$ (by Theorem \ref{thm:smooth}). Call this set $S^1$, i.e., $$S^1 := \{ x\in S:~\exists\, t>0 \text{ with } \varphi_t(\Delta(x))\in S\}.$$ Specifically, we want to look at all points of $S$ that return back to $S$ infinitely often. Call this set $S^{\infty}$. Setting $S^0:=S$, we can define $S^{\infty}$ recursively as follows: \begin{equation*} \begin{split} S^{n+1} := \{ x\in S^n &:~\exists\, t>0 \text{ with } \varphi_t(\Delta(x))\in S^n\}\\ S^\infty := &\bigcap_{n=1}^{\infty}S^n. \end{split} \end{equation*} We have two cases for $x_0$: either $o_H^+(x_0)$ hits $S^\infty$ (and thus the impact surface infinitely often), or $o_H^+(x_0)$ avoids $S^\infty$. \begin{caseof} \case{$o_H^+(x_0)$ misses $S^\infty$.}{ In this case, there exists a time after which the flow stops being hybrid. In this setting, we can invoke the classical Poincar\'e-Bendixson theorem for continuous systems; see Theorem 1 in Chapter 3.7 of \cite{perko1991differential}. } \case{$o_H^+(x_0)$ hits $S^\infty$.}{ Because $o_H^+(x_0)\cap S^\infty\ne\emptyset$, we know that $S^\infty\ne\emptyset$. We wish to show that $S^\infty$ is either an interval or a point. This will let us use Lemma \ref{le:single} to show that $P:S^\infty\rightarrow S^\infty$ converges to a limit cycle.
We will begin by showing that for each $n\geq 0$, $S^n$ is an interval. By assumption (C.4), $S^0$ is an interval. We will continue by induction. Assume that $S^n$ is an interval and we wish to prove that $S^{n+1}$ is also an interval. Since $S^n$ is diffeomorphic to an interval, let $g_n:S^n\rightarrow [a_n,b_n]$ be a diffeomorphism. Define the points $a_{n+1}$ and $b_{n+1}$ as follows: \begin{equation} \begin{split} a_{n+1} &:= \min \left\{ x\in [a_n,b_n] : o^+(\Delta( g_n^{-1}(x)))\cap S^n\ne\emptyset \right\} \\ b_{n+1} &:= \max \left\{ x\in [a_n,b_n] : o^+(\Delta( g_n^{-1}(x)))\cap S^n\ne\emptyset \right\}. \end{split} \end{equation} We claim that $S^{n+1}$ is diffeomorphic to $[a_{n+1},b_{n+1}]$. Denote the points on the curve $S^n$ by $A:=g_n^{-1}(a_{n+1})$ and $B:= g_n^{-1}(b_{n+1})$. The claim can then be verified by constructing the set $\mathcal{B}$ under the continuous dynamics as the region bounded by the four curves: $\Delta( [A,B])$, $S^n$, $o^+(A)$, and $o^+(B)$. Using assumption (C.5), we know that for every initial condition on $\Delta([A,B])$, the flow will eventually hit the set $S^n$. Thus we can apply the Rectification Theorem \cite{arnolʹd1978ordinary} to straighten out the flow, which shows that $S^{n+1}$ is an interval. Consequently $S^\infty$, being a nested intersection of intervals, is an interval or a point, and Lemma \ref{le:single} applied to $P:S^\infty\rightarrow S^\infty$ shows that $\omega_H(x_0)$ is a periodic orbit intersecting $S$ at most twice. } \end{caseof} \end{proof} Conditions (C.2), (C.4), and (C.5) are unfortunate restrictions; however, they are necessary. If (C.4) is dropped, the flow can end up looking like a Kronecker flow; see Example \ref{ex:c4}. If (C.2) or (C.5) is dropped, mixing can be added to the system and chaos can occur; see Example \ref{ex:c5}. \begin{example}\label{ex:c4} Let $S=\{(x,y): x^2+y^2=4\}$; thus the impact surface is the circle of radius 2 centered about the origin. Let the impact map be given by $\Delta(x,y)=(x/2,y/2)$, so the image of the map is the unit circle. Lastly, define the vector field to be (in polar coordinates) $\dot{r}=\dot{\theta}=1$. Then, $F=\{ 1\leq r\leq2\}$ is a compact invariant set. But for all $x\in F$, $\omega_H(x)=F$.
\end{example} \begin{example}\label{ex:c5} Let $S = \{ x=2\}$ and define $\Delta$ as $\Delta(2,y) = (y,4y(1-y))$. Then, if the flow is $\dot{x}=1$, $\dot{y}=0$, the first return map becomes the Logistic map, which leads to chaos (see \cite{bookHSD}, p. 344 for more details). \end{example} \section{Stability of Periodic Orbits}\label{sec:stability} Now that we have a method to determine the existence of periodic orbits, we would like to be able to determine their stability. As such, we would like to compute the derivative of $P$ and obtain a result analogous to Theorem \ref{th:AA} for hybrid dynamical systems. There are a couple of differences we are faced with in the hybrid approach as opposed to the continuous-time situation. First, we do not get to choose $\Sigma$ to be normal to the flow as in \cite{perko1991differential}; we are stuck with $\Sigma=S$. Second, we are no longer dealing with a closed orbit and we have to take the geometry of the impact into consideration. We first look at a helpful result about the continuous flow that we will use in Theorem \ref{th:stability}. \begin{lemma}[\cite{perko1991differential}, p. 86]\label{perkodiv} Let $\varphi_t:\mathcal{X}\rightarrow\mathcal{X}$ be the flow of $\dot{x}=f(x)$, i.e., $\frac{d}{dt}\varphi_t(x) = f(\varphi_t(x))$, and let $x_0\in\mathcal{X}$. Then, \begin{equation}\label{eq:det_of_partials} \det \left.\frac{\partial}{\partial x} \varphi_t(x)\right|_{x=x_0}= \exp \left(\int_0^t \, \nabla \cdot f(\varphi_s(x_0))\, ds\right). \end{equation} \end{lemma} To understand the stability of our orbit, we want to look at the hybrid Poincar\'e return map, $P:S^1\rightarrow S$. As in Theorem \ref{thm:smooth}, let $\tau:\Delta(S^1)\rightarrow\mathbb{R}$ be the time required to return to the impact surface. Then, if we denote $y:=\Delta(x)$, we can write $P$ as \begin{equation}\label{eq:poincare_expanded} P(x) = \varphi_{\tau(y)}(y) = y + \int_{0}^{\tau(y)} \, f\left(\varphi_s(y)\right)\, ds.
\end{equation} \begin{theorem}\label{th:stability} Assume that we have a hybrid periodic orbit that intersects $S$ once. Suppose that $x\in S$ and $y=\Delta(x)$. Additionally, let $\theta$ be the angle $f(x)$ makes with the tangent of $S$ at $x$ and $\alpha$ be the angle of $f(y)$ with $\Delta(S)$. Assume that $\theta$ and $\alpha$ are not integer multiples of $\pi$. If we denote the continuous flow that connects $y$ to $x$ by $\gamma(t)$ and suppose that it takes time $T$ to complete the loop, the derivative of the Poincar\'e map is \begin{equation}\label{eq:finally} P'(x) = \Delta'(x)\cdot\frac{ \lVert f(y)\rVert}{\lVert f(x)\rVert} \frac{\sin\alpha}{\sin\theta} \cdot\exp \left(\int_0^T \nabla\cdot f(\gamma(t))\, dt\right). \end{equation} \end{theorem} \begin{proof} \begin{figure}[H] \includegraphics[scale=0.4]{derivative_picture} \caption{The periodic orbit of the system considered in Theorem \ref{th:stability}.}\label{deriv_pic} \end{figure} To differentiate $P$, let us first look at the continuous part (that is, starting at $y_0=\Delta(x_0)$). Let $n$ be the unit normal vector to $\Delta(S)$ at $y$ and let $p$ be the unit tangent vector. For this to be well posed, we want $\langle f(y_0),n\rangle\ne0$. \begin{equation}\label{eq:partials} \begin{split} \left.\frac{\partial}{\partial p} \varphi_{\tau(y)}(y)\right|_{y=y_0} &= \int_0^{\tau(y)}\, \left.\frac{\partial}{\partial y} f\left( \varphi_s(y)\right)\right|_{y=y_0} \, ds \cdot \frac{\partial y}{\partial p} + \frac{\partial}{\partial t} (\varphi_{\tau(y_0)}(y_0)) \cdot \frac{\partial t}{\partial p}\\ &= F(y_0)\cdot \delta y + G(y_0)\cdot \delta t \end{split} \end{equation} Now call the flow $\varphi_t(y)=:\gamma(t)$, the time $T=\tau(y)$, and recall that the final point is $\varphi_{\tau(y)}(y)=x$. Then, $G(y) = f(x)$ and $\delta y$ is the unit vector $p$ rooted at $y_0$. We need to figure out what $\delta t$ and $F(y)$ are. By Lemma \ref{perkodiv}, we know the determinant of $F(y)$.
\begin{equation}\label{eq:Fy} \det\left( F(y) \right) = \exp \left(\int_0^T \, \nabla \cdot f(\gamma(t)) \, dt\right) \end{equation} To find $F$ (in the direction of $\delta y$), we note that we know the derivative in the direction of the flow: $F(y)\cdot f(y) = f(x)$. Knowing the determinant and this direction, we can attempt to find $F(y)$ in the direction of $\delta y$. We first differentiate $H$ from (H.3) along $S$, which is zero because $S$ is a level set of $H$. \begin{equation} 0=\left.\frac{\partial}{\partial p}H(\varphi_{\tau(y)}(y))\right|_{y=y_0} = \left.\left.\frac{\partial}{\partial x}H(x)\right|_{x=x_0} \cdot \left( F(y)\cdot \delta y + f(x)\cdot \delta t \right)\right|_{y=y_0} \end{equation} This tells us that $F(y)\cdot\delta y + f(x)\cdot\delta t$ lies on the tangent to $S$ at $x$. \begin{figure}[H] \includegraphics[scale=0.27]{proj_picture} \caption{The vector $\delta x= F(y)\delta y + f(x)\delta t$, where the green line is the tangent to $S$ at the point $x$.} \end{figure} Let $V(u,v)$ be the area of the parallelogram spanned by the two vectors $u$ and $v$. Additionally, let $\Lambda=\det(F(y))$. Then, we have \begin{equation} \begin{split} V(f(y),\delta y) &= \lVert f(y)\rVert\cdot\lVert \delta y \rVert \sin\alpha\\ V(\underbrace{F(y)\cdot f(y)}_{=f(x)},F(y)\cdot \delta y) &= \Lambda \lVert f(y)\rVert\cdot\lVert \delta y \rVert \sin\alpha\\ &= V(f(x),F(y)\cdot\delta y + f(x)\cdot\delta t)\\ &= \lVert f(x) \rVert \cdot \lVert \delta x \rVert \sin\theta. \end{split} \end{equation} Collecting terms, we see that \begin{equation} \frac{\lVert \delta x \rVert}{\lVert \delta y \rVert} = \frac{\lVert f(y)\rVert}{\lVert f(x)\rVert} \frac{\sin\alpha}{\sin\theta} \Lambda. \end{equation} Combining this with equation \eqref{eq:Fy} and applying the chain rule to $P(x)=\varphi_{\tau(\Delta(x))}(\Delta(x))$, which contributes the factor $\Delta'(x)$, we arrive at equation \eqref{eq:finally}. \end{proof} \begin{corollary} Suppose now that we have a hybrid periodic orbit that intersects $S$ $n$ times. Let $x_1,\ldots,x_n \in S$ and $y_i=\Delta(x_i)$.
Additionally, let $\gamma_i$ be the flow that connects $y_i$ to $x_{i+1}$, i.e., $\gamma_i(0)=y_i$ and $\gamma_i(T_i)=x_{i+1}$. Also, let $\alpha_i$ be the angle $f(y_i)$ makes with $\Delta(S)$ and $\theta_i$ be the angle $f(x_i)$ makes with $S$. Then, the derivative of the Poincar\'e map is given by \begin{equation} (P^n)'(x_1) = \prod_{i=1}^n \, \Delta'(x_i) \frac{\lVert f(y_i)\rVert}{\lVert f(x_i)\rVert} \frac{\sin\alpha_i}{\sin\theta_i} \, \exp \left( \int_0^{T_i}\, \nabla \cdot f(\gamma_i(t))\, dt \right). \end{equation} \end{corollary} This gives a precise test for determining the stability of planar hybrid orbits. We would like to extend this to higher dimensions, but we can only calculate $\det P'(x_0)$ and not its individual eigenvalues. \begin{theorem} Assume that $\mathcal{X}=\mathbb{R}^n$ and that $\gamma(\cdot)$ is a periodic orbit of period $T$ intersecting $S$ once, with $x\in S$ and $y=\Delta(x)$. Let $\alpha$ and $\theta$ be as described in Theorem \ref{th:stability}. If $\gamma$ is stable, then \begin{equation}\label{eq:determinant} \left|\det \left( \Delta'(x) \right) \frac{\lVert f(y)\rVert}{\lVert f(x)\rVert} \frac{\sin\alpha}{\sin\theta} \cdot \exp\left( \int_0^T \, \nabla\cdot f(\gamma(t)) \, dt\right)\right|\leq 1. \end{equation} \end{theorem} \begin{proof} The expression in \eqref{eq:determinant} equals $|\det P'(x)|$. Thus, if it is greater than 1, $P'(x)$ must have an eigenvalue of modulus greater than 1 and the system is unstable. \end{proof} \begin{corollary} If the expression in \eqref{eq:determinant} has value less than 1 and the orbit, $\gamma(t)$, is unstable, then the fixed point $x_0$ of $P$ must be of saddle type. \end{corollary} \subsection{Example: Hybrid Van der Pol} Consider the Van der Pol system \begin{equation} \begin{split} \dot{x} &= y\\ \dot{y} &= \mu (1-x^2)y - x \end{split} \end{equation} If we let $z=[x;y]$ and let $f$ be such that the dynamics is given by $\dot{z}=f(z)$, then $\nabla\cdot f(z) = \mu(1-x^2)$.
This allows us to cut up the state space as $P=\{ -1<x<1\}$ and $N=\{ -\infty < x < -1\} \cup \{ 1<x<\infty\}$. The divergence of $f$ is strictly negative on $N$ and strictly positive on $P$. Additionally, it is known that the stable limit cycle of this system intersects both $P$ and $N$, as is required by Dulac's criterion (see for instance \cite{strogatz2014nonlinear}, p. 204). As such, let us take $S=\{(x,y)\in\mathbb{R}^{2}|x=1\}$ because we know the continuous limit cycle intersects $S$. \subsubsection{Numerical Simulation}\label{sub:numerical} Let $\mu=1$ and $\Delta(x,y) = (x,-1.5y)$. Let $z_0=[1;3]$. \begin{figure}[h!] \includegraphics[scale=0.25]{stable_periodic} \caption{1000 cycles of the flow from \S\ref{sub:numerical}.} \end{figure} After running 100 cycles and seeing that the flow ends up being periodic, the pre- and post-impact $y$ values are: \begin{equation}\label{eq:limit_ys} \begin{array}{rr} y^- = & -1.0498\\ y^+ = & 1.5747 \end{array} \end{equation} Now, we want to calculate the stability of this orbit. We use the following formula for the derivative of the Poincar\'e map: \begin{equation}\label{eq:deriv} P'(z) = \Delta'(y^-)\cdot\frac{\lVert f(y^+)\rVert}{\lVert f(y^-)\rVert} \frac{\sin\alpha}{\sin\theta} \cdot \exp \left( \int_0^T \, \nabla\cdot f(\gamma(t))\, dt\right) \end{equation} This can be interpreted as multiplying together the discrete part, the geometric part, and the continuous part. Numerically integrating over the limit cycle yields a derivative of \begin{equation*} |P'|=0.3338 \end{equation*} \subsubsection{Testing Instability}\label{sub:inst} Now, we will modify the impact map (while keeping the continuous flow and the impact surface fixed) to make the orbit unstable. We will do this by making the impact map be $\Delta(1,y)=(1,m(y-A)+B)$ where $A=y^-$ and $B=y^+$ as in equation \eqref{eq:limit_ys}. This allows us to control the derivative of $\Delta$ (that is, $m$) while keeping the orbit from changing.
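The hybrid simulation of \S\ref{sub:numerical} can be reproduced by alternating continuous integration with event detection and applications of the impact map. A minimal Python sketch, assuming SciPy's \texttt{solve\_ivp} in place of the Matlab ode45 setup used in the text:

```python
import numpy as np
from scipy.integrate import solve_ivp

MU = 1.0  # Van der Pol parameter, mu = 1 as in the text

def vdp(t, z):
    x, y = z
    return [y, MU * (1.0 - x**2) * y - x]

def hit_S(t, z):            # impact surface S = {x = 1}
    return z[0] - 1.0
hit_S.terminal = True       # stop the continuous flow at the impact
hit_S.direction = 0         # detect crossings in either direction

def impact(z):              # Delta(x, y) = (x, -1.5 y)
    return np.array([z[0], -1.5 * z[1]])

z = np.array([1.0, 3.0])    # z0 = [1; 3]
y_minus = []                # recorded pre-impact values y^-
for _ in range(100):        # 100 cycles, as in the text
    # step slightly off S in the direction the flow leaves, so the
    # event only fires when the orbit returns to the surface
    x0 = 1.0 + np.sign(z[1]) * 1e-9
    sol = solve_ivp(vdp, (0.0, 100.0), [x0, z[1]],
                    events=hit_S, rtol=1e-9, atol=1e-9)
    z = sol.y[:, -1]        # pre-impact state (1, y^-)
    y_minus.append(z[1])
    z = impact(z)           # apply the impact map and continue

print(round(y_minus[-1], 4))
```

Note the map $\Delta(1,y)=(1,-1.5y)$ coincides with $m=-1.5$ in the family $\Delta(1,y)=(1,m(y-A)+B)$, since $-1.5(y-A)+B=-1.5y$ when $A=y^-$ and $B=y^+$; the recorded pre-impact values should settle to $y^-\approx-1.0498$, matching equation \eqref{eq:limit_ys}.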
Using the results from equation \eqref{eq:deriv}, we see that the derivative is now \begin{equation} |P'(y^-)| = 0.2225|m| \end{equation} If we run the simulations for $m$ increasing in magnitude past $\approx4.4943$, the orbit should become unstable. Also, the sign of $m$ will determine the number of times the orbit intersects the impact surface. $$\begin{array}{l|ccccc} m & -4.6 & -4.55 & -4.5 & -4.45 & -4.4 \\ \hline y^+ & 1.6034 & 1.5898 & 1.5768 & 1.5747 & 1.5747 \end{array}$$ If we let $m$ be positive, the resulting unstable periodic orbit will intersect the impact surface twice. $$\begin{array}{l|ccccc} m & 4.4 & 4.45 & 4.5 & 4.55 & 4.6 \\ \hline y^+_1 & 1.5747 & 1.5747 & 1.5059 & 1.3758 & 1.3091 \\ y^+_2 & 1.5747 & 1.5747 & 1.6475 & 1.8119 & 1.9132 \end{array}$$ \begin{figure}[h!] \includegraphics[scale=0.35]{bifur} \caption{Displaying the locations of the jumps after performing 1000 iterations of the system in \S\ref{sub:inst}.} \end{figure} All of the numerics were performed with Matlab's ode45 differential equation solver, equation \eqref{eq:deriv} was integrated via the trapezoidal rule, and all tests ran for a duration of 1000 iterations to locate the steady state. \subsection{An Analytic Example} Consider the continuous dynamics (in polar coordinates) \begin{equation} \begin{split} \dot{r}& = 1-r\\ \dot{\theta}&=1 \end{split} \end{equation} Under these continuous dynamics, for all points $x_0=(r_0,\theta_0)$, $\omega_c(x_0) = S^1$. Additionally, the flow of the system is \begin{equation}\label{eq:flow} \varphi_t(r_0,\theta_0) = \left( (r_0-1)e^{-t}+1, \theta_0 + t\right). \end{equation} The last notable feature is that the divergence is everywhere equal to $-1$, i.e., $\nabla\cdot f(r,\theta) \equiv -1$. Let us consider the hybrid system where the impact surface is the ray from the origin at angle $\alpha$, that is, $S=\{(r,\theta)|\theta=\alpha\}$, and the impact map is given as \begin{equation}\label{eq:impact} \Delta(r,\alpha) = (\beta r,\gamma).
\end{equation} Let us now compute the Poincar\'e map both analytically and by equation \eqref{eq:deriv} to compare. We will assume that $0\leq \gamma < \alpha \leq 2\pi$. Then the time between all impacts is $\alpha-\gamma$. Using the fact that the time between impacts is constant and equations \eqref{eq:flow} and \eqref{eq:impact}, we obtain the Poincar\'e map \begin{equation} P(r_0) = \beta \left[ (r_0-1)e^{\gamma-\alpha}+1 \right]. \end{equation} If $\beta e^{\gamma-\alpha}<1$, this yields a fixed point of \begin{equation} r_0^* = \frac{\beta \left[ e^{\gamma-\alpha}-1\right]}{\beta e^{\gamma-\alpha}-1}. \end{equation} Then the derivative is \begin{equation}\label{eq:p_derivative} P'(r_0^*) = \beta e^{\gamma - \alpha}. \end{equation} Now, we compare with equation \eqref{eq:deriv}. Computing each of the three pieces, \begin{equation} \begin{split} \Delta'(y^-) = \beta\\ \frac{\lVert f(y^+)\rVert}{\lVert f(y^-)\rVert} \frac{\sin\alpha}{\sin\theta} = 1\\ \exp\left(\int_0^T \, \nabla\cdot f(\gamma(t))\, dt\right) = e^{\gamma-\alpha}, \end{split} \end{equation} which matches up with equation \eqref{eq:p_derivative}. \section{Poincar\'e-Bendixson theorem for 1-dimensional hybrid dynamical systems} \label{sec:1dim} In this section we contrast the above results with a Poincar\'e-Bendixson theorem for hybrid systems in one dimension. \begin{lemma}\label{le:1} Let $S\subset\mathbb{R}$ be a finite set and $P:S\rightarrow S$. Then, for all $x\in S$ there exist $N\ne M$ large enough such that $P^N(x)=P^M(x)$. Specifically, $\omega_d(x)$ is a periodic orbit. \end{lemma} \begin{proof} Fix $x\in S$. Define the sequence $\{x_n\}_{n\in\mathbb{N}}$ where $x_n = P^n(x)$. Since the set $S$ is compact, by Bolzano--Weierstrass there exists a convergent subsequence, $\{x_{n_k}\}_{k\in\mathbb{N}}\subset \{x_n\}_{n\in\mathbb{N}}$. Call the limit $\overline{x}$. Since $S$ is uniformly separated, there exists a $K$ large enough such that for all $p\geq K$, $x_{n_p}=\overline{x}$.
Take $N=n_K$ and $M=n_{K+1}$ and we have found our periodic orbit. \end{proof} Here, we prove a version of Poincar\'e-Bendixson for a much more general class of hybrid systems in one dimension. In this section, we drop the assumptions (H.1)-(H.6) and replace them with the following: \begin{enumerate} \item[(A.1)] $\mathcal{X}\subset\mathbb{R}$ is open and connected. \item[(A.2)] $f:\mathcal{X}\rightarrow\mathbb{R}$ is $C^1$. \item[(A.3)] $S$ is a subset of $\mathbb{R}$. \item[(A.4)] $\Delta:S\rightarrow\mathbb{R}$. \end{enumerate} Under these considerably weaker assumptions (which require the dimension restriction) we can prove the following theorem. Recall $\N{\varepsilon}$ from equation \eqref{eq:fixed_sets}. \begin{theorem}\label{th:main} Assume that one of the following holds: \begin{enumerate} \item[(S.1)] $S\subset\mathbb{R}$ is uniformly discrete, that is $$\inf_{\substack{x,y\in S\\ x\ne y}} \! |x-y|=\delta>0.$$ \item[(S.2)] The image of $\Delta$ is far from $S$ if we are away from a fixed point of $f$, that is, for $\varepsilon>0$ $$\inf_{x,y\in S\setminus \N{\varepsilon}} |\Delta(x)-y| = \eta(\varepsilon) >0.$$ \end{enumerate} Then, if $R\subset\mathbb{R}$ is a forward invariant, compact set and $x\in R$ is such that ${\omega_H(x)}\cap\text{fix}(f)=\emptyset$, $\omega_H(x)$ is a limit cycle. Moreover, $\omega_H(x)\subset \overline{o_H^+(x)}$. \end{theorem} First note that condition (S.1) is similar to (H.4) and condition (S.2) is similar to (H.6). Before we can prove this result, we need to go through some preliminary results given in the following lemmas. Do note, however, that Proposition \ref{th:closed} still holds for this class of HDSs, i.e., $\omega_H(x)$ is still a closed set. \begin{lemma}\label{le:2} Let $R$ be a compact, forward invariant set. Fix $x\in R$. Then for all $\varepsilon>0$ there exists $T>0$ such that for all $t>T$, $\varphi_t^H(x)\in \mathcal{B}_\varepsilon(\omega_H(x))$. \end{lemma} \begin{proof} Fix $\varepsilon>0$.
Suppose that for all $T>0$, there exists $t>T$ such that $\varphi_t^H(x)\not\in\mathcal{B}_\varepsilon(\omega_H(x))$. So let $T_n\rightarrow\infty$ and choose $t_n>T_n$ such that $\varphi_{t_n}^H(x)\not\in\mathcal{B}_\varepsilon(\omega_H(x))$. Then, the sequence $\{ \varphi_{t_n}^H(x)\}_{n\in\mathbb{N}}$ stays at distance at least $\varepsilon$ from $\omega_H(x)$. But, because $R$ is compact, by Bolzano--Weierstrass, there exists a convergent subsequence, $\varphi_{t_{n_k}}^H(x)\rightarrow\overline{x}$. By the definition of $\omega_H(x)$, $\overline{x}\in\omega_H(x)$, a contradiction. \end{proof} \begin{lemma}\label{le:3} If $\text{fix}(f)\cap{\omega_H(x)} = \emptyset$ and $x\in R$ as in Theorem \ref{th:main}, then $$\text{dist} \left( o_H^+(x), \text{fix}(f)\right) = \delta > 0,$$ i.e., $o_H^+(x)\cap \N{\delta}=\emptyset$. \end{lemma} \begin{proof} Because $f$ is a $C^1$ function and therefore a Lipschitz function, $\text{fix}(f)$ is a closed set. Since $\omega_H(x)\subset R$, ${\omega_H(x)}$ is compact. This implies that since $\text{fix}(f)$ and $\omega_H(x)$ are disjoint, they are uniformly separated. So there exists an $\varepsilon>0$ such that $\N{\varepsilon}\cap \omega_H(x)=\emptyset$. By Lemma \ref{le:2}, for $T>0$ large enough and all $t>T$ we have $\varphi_t^H(x)\in \mathcal{B}_{\varepsilon/2}(\omega_H(x))$. This tells us that for sufficiently large times, the forward orbit of $x$ is far away from $\text{fix}(f)$. So, we just need to examine the orbit up to time $T$. Call the set $o_H^T(x) = \{ \varphi_t^H(x) : t\in [0,T]\}$. We know that $o_H^T(x)$ is disjoint from $\text{fix}(f)$, but because $o_H^T(x)$ is not closed, we cannot say for sure that it is uniformly distant. The only points that can cause trouble are the points close to $o_H^T(x)$ but not in the set, namely the break points. However, if one of the break points of the flow were a fixed point of $f$, the flow would approach it asymptotically and the limit set would be that point, contradicting our assumption.
\end{proof} It is interesting to note that because both Lemmas \ref{le:2} and \ref{le:3} do not require assumption (A.1), they still hold for HDSs as defined by Definition \ref{def:smooth_hybrid}. \begin{lemma}\label{le:4} Let $f$, $S$, $\Delta$, $R$, and $x$ be as in Theorem \ref{th:main}. Then, for all $y\in o_H^+(x)$ there exists a time, $t_0$, such that $\varphi_{t_0}(y)\in S$. \end{lemma} \begin{proof} Assume not. Then, $o_H^+(y)$ never jumps. So we can replace it with $o^+(y)$. But, by Lemma \ref{le:3}, $o^+(y)$ is uniformly far from $\text{fix}(f)$. So the flow of $y$ is either monotonically increasing or decreasing for all time, with a speed bounded away from zero. This means that $y$ must approach either $+\infty$ or $-\infty$ as time approaches infinity. This contradicts the assumption that $o^+(y)$ is confined to a compact set. \end{proof} \begin{proof}[Proof of Theorem \ref{th:main}] First, let us assume that condition (S.1) holds. Then there exist finitely many points inside $R\cap S$. Label these points in ascending order $s_1,\ldots,s_n$. Define the set $E:=\{s\in R\cap S | \Delta^n (s) \in R\cap S, \forall n\}$. Since $E$ is a finite set with discrete dynamics, by Lemma \ref{le:1} the orbit of any point of $E$ must eventually reach a fixed point or a periodic orbit. So, if $x\in E$ then $\omega_H(x)$ is a periodic orbit. Additionally, if there exists any time where the orbit of $x$ intersects $E$, then $\omega_H(x)$ is a periodic orbit. So, let us assume that $o_H^+(x)\cap E=\emptyset$.\\ Without loss of generality, let $x\not\in S$. Then, by Lemma \ref{le:4}, there exists a point $s_{k_0}\in R\cap S$ such that the flow satisfies $\varphi_{t_0}(x) = s_{k_0}$. Now, let $x_1 := \Delta(s_{k_0})$ and let $s_{k_1}$ be the impact point $x_1$ gets mapped to. This gives dynamics on the impact points, $$s_{k_{n+1}} = \mathcal{M}(s_{k_n}).$$ But since there are only finitely many $s_j$'s, we must either end up with a periodic orbit or a fixed point (Lemma \ref{le:1}).
Thus $\omega_H(x)$ is a limit cycle.\\ Now, assume condition (S.2) holds. Since ${\omega_H(x)}$ contains no fixed points, $o_H^+(x)$ is uniformly far from roots of $f$ (Lemma \ref{le:3}). Let us rename the set $R$ to be $R=\overline{o_H^+(x)}$ (which is closed). Then, $$R\cap S = (R\cap S)\setminus \N{\delta} $$ with $\N{\varepsilon}$ as in equation \eqref{eq:fixed_sets}. So, condition (S.2) tells us that there exists some positive $\eta$ such that $$\inf_{x,y\in R\cap S} \! d(\Delta(x),y) = \eta >0.$$ Additionally, let $\xi := \displaystyle \sup_{x\in R} |f(x)|$. Then, the minimal time between consecutive impacts is bounded below by $\eta/\xi$. By concatenating the smooth dynamics between impacts and using Lemma \ref{le:4}, the orbit looks like $$o_H^+(x) = \bigsqcup_{n=0}^\infty \; [a_{k_n},b_{k_{n+1}}).$$ But each interval has a minimal length of $\eta$, and therefore, since $R$ is compact, they must eventually intersect. Additionally, if $[a_j,b_j)\cap[a_i,b_i)\ne\emptyset$ then $b_j=b_i$. This shows that only finitely many distinct $b_{k_n}$'s may exist. This allows us to define dynamics on a finite set, $$b_{k_{n+1}} = \mathcal{M}(b_{k_n}),$$ and thus a periodic orbit of $\mathcal{M}$ must exist (Lemma \ref{le:1}). Therefore $\omega_H(x)$ is a limit cycle.\\ Since we construct a periodic orbit via Lemma \ref{le:1} for both cases (S.1) and (S.2), we point out that Lemma \ref{le:1} states that we hit the periodic orbit after finitely many impacts. Thus, the forward orbit enters $\omega_H(x)$ at some finite time and $\omega_H(x)\subset \overline{o_H^+(x)}$. \end{proof} \section{Application to periodic walking: the rimless wheel}\label{sec:examples} The rimless wheel is a one-degree-of-freedom hybrid mechanical system in which the guard is reached when the swinging spoke makes contact with the inclined plane (see Figure \ref{fig:rw_schematic}).
For a rimless wheel rolling along an inclined plane, an analytically computable stable limit cycle exists \cite{Coleman1998}. \begin{figure}[h!] \centering \begin{tikzpicture}[scale=1] \draw[thick] (1,0.8) to (1.77,0.5) -- (4.85,-0.8)--(0,-0.8); \draw[thick] (3,1.1) circle (.15cm); \spoke{(3.0,1.1)}{-85}; \spoke{(3.0,1.1)}{-25}; \spoke{(3.0,1.1)}{35}; \spoke{(3,1.1)}{95}; \spoke{(3,1.1)}{155}; \spoke{(3,1.1)}{-145}; \draw[dotted,thick] (2,1) -- (2,2); \draw[->,thick] (4,-.45) to [out=-145, in =125] (4,-0.8); \node at (3.7,-0.6) {$\alpha$}; \draw[->, thick] (3.2,1.55) to [out=125, in =55] (2.7,1.5); \node at (2.8,2) {$2\delta$}; \draw[->,thick] (2.5,1.05) to [out=115, in =0] node[pos=0.5, above]{\small $\theta$} (2,1.4) ; \draw[dotted, thick] (1.3,.7) -- node [pos=0,above] {$\ell$} (2.1,2); \end{tikzpicture} \caption{The rimless wheel.} \label{fig:rw_schematic} \end{figure} For this system let $x=(\theta,\dot{\theta})$; the continuous dynamics are given by equations \eqref{eq:wheel_cont} and \eqref{eq:wheel_disc} below (see \cite{Coleman1998} and \cite{saglam2014lyapunov} for an in-depth formulation of this problem). We assume the mass $m$ is lumped into the center of the robot, the length of each leg is given by $\ell$, and each inter-leg angle is $2\delta= \frac{2\pi}{N}$, with $N$ being the number of legs. Here, $\delta$ is the angle the leg makes with the ground when it lifts off and $\alpha$ is the grade of the slope the passive walker is walking down. \begin{equation}\label{eq:wheel_cont} \dot{x} = f(x) = \left[\begin{array}{c} x_2 \\ \zeta \sin(x_1) \end{array}\right], \quad\zeta = g/\ell \end{equation} The impact surface is given by $S = \{x_1 = -\delta-\alpha\}$ and the impact map is \begin{equation}\label{eq:wheel_disc} \Delta(x) = \left[ \begin{array}{c} \delta-\alpha \\ \cos(2\delta)x_2 \end{array}\right].
\end{equation} To apply Theorem \ref{th:HPB}, we check that all the smooth hybrid assumptions (H.1)-(H.6) are satisfied, as well as conditions (C.1), (C.2), and (C.4). The transversality condition, (C.5), is satisfied as long as the trajectory stays away from the origin. To find the forward invariant compact set free of fixed points, we do an energy balance. The (potential) energy gained over a single swing is \begin{equation}\label{eq:potential} \Delta P = 2\ell g\sin\delta\sin\alpha, \end{equation} while the amount of (kinetic) energy lost at impact is \begin{equation}\label{eq:kinetic} \Delta V = \frac{1}{2}(\ell x_2^-)^2\left( 1-\cos^2 2\delta \right). \end{equation} Call the total energy of the system $E$. If $\Delta V > \Delta P$, then $E(P(x)) < E(x)$; and if $\Delta V < \Delta P$, then $E(P(x))>E(x)$. This is how we will locate a forward invariant compact (FIC) set. Clearly, for $x_2^-$ large enough, more kinetic energy is lost through impacts than is acquired over the swing phase. The remaining question is whether, for $x_2^-$ small enough, we gain more energy. If $\delta > \alpha$, then we can fail to swing forward. In this case, we can calculate the minimum velocity $x_2$ needed at the beginning of the swing phase to make it to the next one: \begin{equation} (x_2^+)^2 > 2\zeta \left( 1- \cos(\delta-\alpha)\right),\quad (x_2^-)^2 > \frac{2\zeta\left( 1- \cos(\delta-\alpha)\right)}{\cos^2(2\delta)}. \end{equation} If we start the swing with this minimal velocity, then by equations \eqref{eq:potential} and \eqref{eq:kinetic} the energy lost at the following impact is \begin{equation} \Delta V = \frac{\ell^2}{2}\left( 1-\cos^22\delta\right) \left( \frac{2\zeta\left( 1- \cos(\delta-\alpha)\right)}{\cos^2(2\delta)} \right).
\end{equation} Therefore, if $\delta>\alpha$ and \begin{equation}\label{eq:existance_walking} 2\sin\delta\sin\alpha > \left( 1-\cos^22\delta\right) \left( \frac{\left( 1- \cos(\delta-\alpha)\right)}{\cos^2(2\delta)} \right), \end{equation} then there exists at least one periodic orbit that intersects the impact surface either once or twice. In Figure \ref{fig:stability_regions} (left) we show the parameters $\alpha$ and $\delta$ for which a stable limit cycle exists, as well as (right) a trajectory and the region of attraction for the parameter choice $\delta=\pi/10$, $\alpha=\pi/30$, and $\zeta=9.8$. \begin{figure}[h!] \centering \begin{subfigure}[t]{0.45\textwidth} \includegraphics[width=\textwidth]{stability_region_plot_2} \end{subfigure} \quad \begin{subfigure}[t]{0.45\textwidth} \includegraphics[width=\textwidth]{stable_cycle_2} \end{subfigure} \caption{Left: The green region indicates values of $\alpha$ and $\delta$ for which there exists a limit cycle, as predicted by equation \eqref{eq:existance_walking}. Right: The green region is the domain of attraction for the limit cycle, whose existence is guaranteed by equation \eqref{eq:existance_walking}.} \label{fig:stability_regions} \end{figure}
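The energy balance above yields a closed-form return map on the pre-impact angular velocity $x_2^-$: each impact scales the velocity by $\cos 2\delta$ (equation \eqref{eq:wheel_disc}), and the following swing adds $4\zeta\sin\delta\sin\alpha$ to its square (equation \eqref{eq:potential}). A minimal numerical sketch of iterating this map with the parameters of Figure \ref{fig:stability_regions}; the recursion and its fixed point are our reconstruction from the energy equations, not code from the paper:

```python
import math

# Parameters from Figure (right): delta = pi/10, alpha = pi/30, zeta = g/l = 9.8
delta, alpha, zeta = math.pi / 10, math.pi / 30, 9.8
c2 = math.cos(2 * delta) ** 2

def step(v):
    """One impact + swing: velocity scaled by cos(2*delta), then energy gain Delta P.

    v is the (negative) pre-impact angular velocity x_2^-.
    Returns None if the post-impact speed cannot clear the apex.
    """
    v_plus_sq = c2 * v ** 2                     # kinetic loss at impact
    if v_plus_sq <= 2 * zeta * (1 - math.cos(delta - alpha)):
        return None                             # wheel fails to swing forward
    return -math.sqrt(v_plus_sq + 4 * zeta * math.sin(delta) * math.sin(alpha))

v = -3.0                                        # some initial pre-impact velocity
for _ in range(200):
    v = step(v)

# Analytic fixed point of the return map (energy gained = energy lost):
v_star = -math.sqrt(4 * zeta * math.sin(delta) * math.sin(alpha)) / math.sin(2 * delta)
print(v, v_star)
```

Since the squared-velocity recursion is a contraction with factor $\cos^2 2\delta < 1$, the iterates converge to the fixed point, mirroring the stable limit cycle shown in the figure.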
\section{Introduction} \label{sec:intro} First-order methods play a fundamental role in large-scale machine learning and optimization tasks. In most scenarios, the performance of a first-order method is represented by its \emph{convergence rate}: the relationship between the optimization error $\varepsilon$ and the number of gradient computations $T$. This is meaningful because in most applications, the time complexities for evaluating gradients at different points are of the same magnitude. In other words, the worst-case time complexities of first-order methods are usually proportional to a fixed parameter times $T$. In certain large-scale settings, if we have already spent time computing the (full) gradient at $x$, perhaps we can use such information to reduce the time complexity to compute full gradients at other points near $x$.
We call this the ``lingering'' of gradients, because the gradient at $x$ may be partially reused for future consideration, but will eventually fade away once we are far from $x$. In this paper, we consider an important class of optimization problems in which algorithms can exploit the lingering of gradients and thus converge faster. Formally, consider the (finite-sum) stochastic convex minimization problem: \begin{equation}\label{eqn:the-problem} \min_{x\in \mathcal{X}} \Big\{ f(x) \stackrel{\mathrm{\scriptscriptstyle def}}{=} \frac{1}{n} \sum_{i=1}^n f_i(x) \Big\} \enspace, \end{equation} where each $f_i \colon \mathbb{R}^d \to \mathbb{R}$ is convex and $\mathcal{X}\subseteq\mathbb{R}^d$ is a convex constraint set. Then, could it be possible that \emph{whenever} $x$ is sufficiently close to $y$, for at least a large fraction of indices $i\in [n]$, we have $\nabla f_i(x)\approx \nabla f_i(y)$? In other words, if $\nabla f_1(x),\dots,\nabla f_n(x)$ are already calculated at some point $x$, can we reuse a large fraction of them to approximate $\nabla f(y)$? \paragraph{Example 1} In the problem of matching customers to resources, $f_i(x)$ represents the marginal profit of the $i$-th customer under bid-price vector $x\in\mathbb{R}^d_+$ over $d$ items.
In many applications (see Section~\ref{sec:pre:revenue}), $\nabla f_i(x)$ only depends on customer $i$'s preferences under $x$. If the bid-price vector $x\in\mathbb{R}_+^d$ changes by a small amount to $y$, then for a large fraction of customers $i$, their most profitable items may not change, and thus $\nabla f_i(x) \approx \nabla f_i(y)$. Indeed, imagine if one of the items is Xbox, and its price drops by 5\%, perhaps 90\% of the customers will not change their minds about buying or not. We shall demonstrate this using real-life data. \paragraph{Example 2} In classification problems, $f_i(x)$ represents the loss value for ``how well training sample $i$ is classified under predictor $x$''. For any sample $i$ that has a large margin under predictor $x$, its gradient $\nabla f_i(x)$ may stay close to $\nabla f_i(y)$ whenever $x$ is close to $y$.
Formally, let $f_i(x) = \max\{0, 1 - \langle x, a_i \rangle\}$ be the hinge loss (or its smoothed variant if needed) with respect to the $i$-th sample $a_i \in \mathbb{R}^d$. If the margin $|1 - \langle x, a_i \rangle|$ is sufficiently large, then moving from $x$ to a nearby point $y$ should not affect the sign of $1 - \langle x, a_i \rangle$, and thus not change the gradient. Therefore, if samples $a_1,\dots,a_n$ are sufficiently diverse, then a large fraction of them should incur large margins and have the same gradients when $x$ changes by little. \subsection{Summary of Main Results and Contributions} We assume in this paper that, given any point $x\in\mathbb{R}^d$ and index $i \in [n]$, one can efficiently evaluate a ``lingering radius'' $\delta(x, i)$. The radius satisfies the condition that for every point $y$ that is within distance $\delta(x,i)$ from $x$, the stochastic gradient $\nabla f_i(y)$ is equal to $\nabla f_i(x)$.
We make two remarks: \begin{itemize} \item We use ``equal to'' for the purpose of proving theoretical results. In practice and in our experiments, it suffices to use approximate equality such as $\|\nabla f_i(x) - \nabla f_i(y)\|\leq 10^{-10}$. \item By ``efficient'' we mean $\delta(x,i)$ is computable in the same complexity as evaluating $\nabla f_i(x)$. This is reasonable because when $\nabla f_i(x)$ is an explicit function of $x$, it is usually easy to tell how sensitive it is to the input $x$. (We shall include such examples in our experiments.) \end{itemize} If we denote by $B(x,r)$ the set of indices $j$ satisfying $\delta(x,j) < r$, and if we travel to some point $y$ that is at most distance $r$ from $x$, then we only need to re-evaluate the (stochastic) gradients $\nabla f_j(y)$ for $j\in B(x,r)$. Intuitively, one should expect $|B(x,r)|$ to grow as a function of $r$ if the data points are sufficiently diverse.
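For instance, in the hinge-loss setting of Example 2, one admissible lingering radius is the distance from $x$ to sample $i$'s margin boundary: the sign of $1 - \langle y, a_i \rangle$ cannot flip while $\|y-x\| < |1 - \langle x, a_i \rangle| / \|a_i\|$. A minimal sketch (the radius formula and the toy data are our illustration, not the paper's implementation):

```python
import numpy as np

def lingering_radius(x, a):
    """Distance from x to the margin boundary {z : <z, a> = 1}.

    Within this radius the sign of (1 - <y, a>) cannot change,
    so the hinge-loss gradient of this sample is constant.
    """
    return abs(1.0 - a @ x) / np.linalg.norm(a)

def B(x, r, samples):
    """Indices whose gradients must be recomputed after moving distance r."""
    return [i for i, a in enumerate(samples) if lingering_radius(x, a) < r]

rng = np.random.default_rng(0)
samples = rng.normal(size=(1000, 5))
x = rng.normal(size=5)

# Moving a tiny step invalidates few gradients; a large step invalidates more.
small, large = len(B(x, 1e-3, samples)), len(B(x, 1.0, samples))
print(small, large)
```

With diverse data, the margins are spread out, so $|B(x,r)|$ grows gradually with $r$ rather than jumping to $n$.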
\begin{wrapfigure}{r}{0.35\textwidth} \includegraphics[page=1,height=0.2\textwidth]{rate-compare} \caption{\label{fig:svm:theory}$e^{-T^{1/3}}$ vs $1/T$} \end{wrapfigure} \paragraph{Better Convergence Rate in Theory} To present the simplest theoretical result, we modify gradient descent (GD) to take into account the lingering of gradients. At a high level, we run GD, but during its execution, we maintain a decomposition of the indices $\Lambda_0 \cup \cdots \cup \Lambda_t = \{1,2,\dots,n\}$ where $t$ is logarithmic in $n$. Now, whenever we need $\nabla f_i(x_k)$ for some $i \in \Lambda_p$, we approximate it by $\nabla f_i(x_{k'})$ for some earlier iterate $x_{k'}$ that was visited at most $2^p$ steps ago. Our algorithm makes sure that such $\nabla f_i(x_{k'})$ is available in memory. We prove that the performance of our algorithm depends on how $|B(x,r)|$ grows in $r$. Formally, let $T$ be the total number of stochastic gradient computations divided by $n$, and suppose $|B(x,r)|\leq O(r^\beta)$.
Then, our algorithm finds a point $x$ with $f(x) - f(x^*) \leq \tilde{O}(T^{-1/(1-\beta)})$ if $\beta \in (0,1)$, or $f(x) - f(x^*) \leq 2^{-\Omega(T^{1/3})}$ if $\beta=1$. In contrast, traditional GD satisfies $f(x) - f(x^*) \leq O(T^{-1})$. \paragraph{Faster Algorithm in Practice} We also design an algorithm that practically maximizes the use of gradient lingering. We take the SVRG method~\citep{JohnsonZhang2013-SVRG,MahdaviZhangJin2013-sc} as the prototype because it is widely applied in large-scale settings. Recall that SVRG uses the gradient estimator $\nabla f(\tilde{x}) - \nabla f_i(\tilde{x}) + \nabla f_i(x_k)$ to estimate the full gradient $\nabla f(x_k)$, where $\tilde{x}$ is the so-called snapshot point (which was visited at most $n$ steps ago) and $i$ is a random index. At a high level, we modify SVRG so that the index $i$ is only generated from those whose stochastic gradients need to be recomputed, and ignore those such that $\nabla f_i(x_k) = \nabla f_i(\tilde{x})$.
This can further reduce the variance of the gradient estimator, and improve the running time. \paragraph{Application to packing LPs} Our algorithms serve as tools for solving a variety of packing linear programs (LPs), including those widely used by revenue-maximization policies \citep{FMMM09, Stein2016}. In this paper, we solve a packing LP of this form on the Yahoo! Front Page Today Module application \citep{LCLS2010,Chu2009case} with 4.6 million users to $10^{-6}$ error (or $10^{-12}$ dual error) using only 6 passes of the dataset. \paragraph{Application to SVM} Our algorithms also apply to training support vector machines (SVMs), one of the most classical supervised learning models for classification tasks. On the Adult dataset of LibSVM~\citep{LibSVMdata}, we manage to minimize the SVM training objective to $10^{-5}$ error in 30 passes of the dataset.
In contrast, PEGASOS, arguably the most popular method for SVM~\citep{Shalev-Shwartz2011pegasos}, cannot minimize this objective even to $10^{-3}$ error within 90 passes. \subsection{Related Work} \label{sec:related} \paragraph{Variance Reduction} The SVRG method was independently proposed by \citet{JohnsonZhang2013-SVRG,MahdaviZhangJin2013-sc}, and belongs to the class of stochastic methods using the so-called variance-reduction technique~\citep{Schmidt2013-SAG,MahdaviZhangJin2013-sc,MahdaviZhangJin2013-nonsc,JohnsonZhang2013-SVRG,Shalev-Shwartz2013-SDCA,Shalev-Shwartz2015-SDCAwithoutDual,Shalev-ShwartzZhang2014-ProxSDCA,XiaoZhang2014-ProximalSVRG,Defazio2014-SAGA,AY2015-univr}. The common idea behind these methods is to use some full gradient of the past to approximate the future, but they do not distinguish which $\nabla f_i(x)$ can ``linger longer in time'' among all indices $i\in[n]$ for different $x$. Arguably the two most widely applied variance-reduction methods are SVRG and SAGA~\citep{Defazio2014-SAGA}. They have complementary performance depending on the internal structure of the dataset~\citep{AYS2016}, so we compare to both in our experiments.
A practical modification of SVRG is to use an approximate full gradient (as opposed to the exact full gradient) of the past to approximate the future. This is studied by \cite{harikandeh2015stopwasting,LeiJordan2016less,LeiJCJ2017}, and we refer to this method as SCSG due to \cite{LeiJordan2016less,LeiJCJ2017}. \paragraph{Reuse Gradients} Some researchers have exploited the internal structure of the dataset to speed up first-order methods. That is, they use $\nabla f_i(x)$ to approximate $\nabla f_j(x)$ when the two data samples $i$ and $j$ are sufficiently close. This is orthogonal to our setting because we use $\nabla f_i(x)$ to approximate $\nabla f_i(y)$ when $x$ and $y$ are sufficiently close. In the extreme case when all the data samples are identical, they have $\nabla f_i(x) = \nabla f_j(x)$ for every $i,j$ and thus stochastic gradient methods converge as fast as full gradient ones.
For this problem, \citet{HLM2015} introduce a variant of SAGA, and \citet{AYS2016} introduce a variant of SVRG and a variant of accelerated coordinate descent. Other authors study how to reduce gradient computations at the snapshot points of SVRG~\citep{harikandeh2015stopwasting,LeiJordan2016less}. This is also orthogonal to the idea of this paper, and can be added to our algorithms for even better performance (see Section~\ref{sec:scsg}). \paragraph{A Preliminary Version} An extended abstract of a preliminary version of this paper has appeared in the conference NeurIPS 2018, and the current paper is a significant extension of it. Specifically, the current version has three more major contributions. \begin{itemize} \item First, we now provide theories for a more general assumption on the lingering radius (the current Assumption~\ref{ass:psi} allows $\beta \in (0,1]$ while the conference version only allows $\beta=1$). \item Second, we now apply our methods also to the task of support vector machines (Section~\ref{sec:exp:svm}).
\item Third, we now provide theories showing that the assumption of lingering radius indeed holds when the data is sufficiently random (Section~\ref{sec:B}). \end{itemize} Besides these major contributions, we have additionally applied our technique to the SCSG method and conducted more thorough experiments. \subsection{Roadmap} In Section~\ref{sec:pre}, we introduce notations for this paper and give setups for our packing LP and SVM applications. In Section~\ref{sec:theory}, we prove our main theoretical result on the improved convergence rate for gradient descent under the aforementioned assumption $|B(x,r)|\leq O(r^\beta)$. In Section~\ref{sec:svrg}, we introduce our practical algorithm by incorporating the lingering of gradients into SVRG and SCSG. Using real-life datasets, we apply our algorithms to packing LP in Section~\ref{sec:exp} and to SVM in Section~\ref{sec:exp:svm}. Finally, in Section~\ref{sec:B}, we provide theoretical support for the assumption $|B(x,r)|\leq O(r^\beta)$ using randomness of the data.
\section{Notions and Problem Formulation} \label{sec:pre} We denote by $\|\cdot\|$ the Euclidean norm, and by $\|\cdot\|_\infty$ the infinity norm. Recall the notion of Lipschitz smoothness (it has other equivalent definitions, see the textbook~\cite{Nesterov2004}). \begin{definition} \label{def:smooth-sc} A function $f \colon \mathbb{R}^d \to \mathbb{R}$ is $L$-Lipschitz smooth (or $L$-smooth for short) if $$\textstyle \forall x,y\in \mathbb{R}^d\colon \|\nabla f(x) - \nabla f(y)\|\leq L \|x-y\| \enspace.$$ \end{definition} We propose the following model to capture the lingering of gradients. \begin{definition} For every $x\in \mathbb{R}^d$ and index $i\in [n]$, let $\delta(x, i) \geq 0$ be the \emph{lingering radius} of $\nabla f_i(x)$, meaning that% \footnote{Recall that, in practice, one should replace the exact equality with, for instance, $\|\nabla f_i(x) - \nabla f_i(y)\| \leq 10^{-10}$.
To present the simplest statements, we do not introduce such an extra parameter.} $$ \nabla f_i(x) = \nabla f_i(y) \text{ for all $y\in\mathbb{R}^d$ with $\|y - x\|\leq \delta(x,i)$} $$ Accordingly, for every $r\geq 0$ we use $B(x,r)$ to denote the set of indices $j$ satisfying $\delta(x,j) < r$: $$B(x, r) \stackrel{\mathrm{\scriptscriptstyle def}}{=} \big\{ j \in [n] \, \big| \, \delta(x,j) < r \big\} \enspace.$$ \end{definition} In other words, as long as we travel within distance $\delta(x,i)$ from $x$, the gradient $\nabla f_i(x)$ can be reused to represent $\nabla f_i(y)$. Our main assumption of this paper is the following: \begin{assumption}\label{ass:time} Each $\delta(x,i)$ can be computed in the same time complexity as $\nabla f_i(x)$. \end{assumption} Under Assumption~\ref{ass:time}, if at some point $x$ we have already computed $\nabla f_i(x)$ for all $i \in [n]$, then we can compute $\delta(x,i)$ as well within the same time complexity for every $i\in [n]$, and sort the indices $i\in [n]$ in increasing order of $\delta(x,i)$.
In the future, if we arrive at any point $y$, we can calculate $r = \|x-y\|$ and use $$ \textstyle \nabla' = \frac{1}{n} \Big( \sum_{i\not\in B(x,r)} \nabla f_i(x) + \sum_{i\in B(x,r)} \nabla f_i (y) \Big) $$ to represent $\nabla f(y)$. The time to compute $\nabla'$ is only proportional to $|B(x,r)|$. \begin{definition} We denote by $T_\mathsf{time}$ the gradient complexity, which equals how many times $\nabla f_i(x)$ and $\delta(x,i)$ are calculated, divided by $n$. \end{definition} In computing $\nabla'$ above, the gradient complexity is $|B(x,r)|/n$. If we always set $\delta(x,i)=0$, then $|B(x,r)| = n$ and the gradient complexity for computing $\nabla'$ remains 1. However, if the underlying problem~\eqref{eqn:the-problem} is nice enough so that $|B(x,r)|$ grows slowly in $r$, then the gradient complexity for computing $\nabla'$ can be less than $1$. We can thus hope for designing faster algorithms.
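A minimal sketch of this caching scheme, instantiated on hinge losses (where exact lingering radii exist because the gradients are piecewise constant); the toy instance is our illustration, not the paper's implementation:

```python
import numpy as np

# Illustrative instance: pure hinge losses f_i(x) = max(0, 1 - b_i <a_i, x>),
# whose gradients are piecewise constant, so exact lingering radii exist.
rng = np.random.default_rng(1)
n, d = 500, 8
A = rng.normal(size=(n, d))
b = rng.choice([-1.0, 1.0], size=n)

def grad(i, x):
    return -b[i] * A[i] if 1 - b[i] * (A[i] @ x) > 0 else np.zeros(d)

def radius(i, x):
    # distance to the margin boundary; the gradient is constant within it
    return abs(1 - b[i] * (A[i] @ x)) / np.linalg.norm(A[i])

# At x: compute and cache all gradients and radii (gradient complexity 1).
x = rng.normal(size=d)
cache = np.array([grad(i, x) for i in range(n)])
deltas = np.array([radius(i, x) for i in range(n)])

# At a nearby y: recompute only the invalidated indices B(x, r).
y = x + 0.01 * rng.normal(size=d)
r = np.linalg.norm(y - x)
Bxr = np.nonzero(deltas < r)[0]          # indices to refresh
for i in Bxr:
    cache[i] = grad(i, y)
nabla_prime = cache.mean(axis=0)         # equals the true full gradient at y

print(len(Bxr), "of", n, "gradients recomputed")
```

Here $\nabla'$ coincides with $\nabla f(y)$ while only $|B(x,r)|$ stochastic gradients were evaluated, which is the source of the gradient-complexity savings.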
\subsection{Packing Linear Program} \label{sec:pre:revenue} Consider the LP relaxation of a canonical revenue management problem in which a manager needs to sell $d$ different resources to $n$ customers. Let $b_j \geq 0$ be the capacity of resource $j\in [d]$; let $p_{i,j} \in [0,1]$ be the probability that customer $i\in[n]$ will purchase a unit of resource $j$ if offered resource $j$; and let $r_j$ be the revenue for each unit of resource $j$. We want to offer each customer one and only one candidate resource, and let $y_{i,j}$ be the probability that we offer customer $i$ resource $j$.
The following is the standard LP relaxation for this problem:% \footnote{The constraint $\sum_{j\in[d]} y_{i,j} = 1$ here can be replaced with any other positive constant without loss of generality.} \begin{align}\label{eqn:the-LP} \max_{y\geq 0} \quad & \textstyle \sum_{i\in[n]} \sum_{j\in[d]} r_j p_{i,j} y_{i,j} \nonumber \\ \text{s.t.} \quad & \textstyle \sum_{i\in[n]} p_{i,j} y_{i,j} \leq b_j \quad \forall j \in [d], \nonumber \\ & \textstyle \sum_{j\in[d]} y_{i,j} = 1 \quad \forall i \in [n]. \end{align} This LP~\eqref{eqn:the-LP} and its variants have repeatedly found many applications, including adwords/ad allocation problems \citep{Zhong2015, FMMM09, doi:10.1287/moor.2013.0621, AHL12, wangTZZ2016, devanur2012asymptotically, MGS12, HMZ11}, and revenue management for airline and service industries \citep{JK12,RW08,FSLW16,Stein2016, WTB15,CF12}. Some authors also study the online version of solving LPs \citep{AWY14, devanur2009adwords, FHKMS10, Agrawal:2015:FAO:2722129.2722222}. A standard way to reduce \eqref{eqn:the-LP} to convex optimization is by regularization; see for instance \citet{Zhong2015}. Let us subtract from the maximization objective the regularizer $$R(y) \stackrel{\mathrm{\scriptscriptstyle def}}{=} \mu \sum_{i\in[n]} \bar{p_i} \sum_{j\in [d]} y_{i,j} \log y_{i,j},$$ where $\bar{p_i} \stackrel{\mathrm{\scriptscriptstyle def}}{=} \max_{j\in [d]} p_{i,j}$ and $\mu>0$ is some small regularization weight.
Then, after transforming to the dual, we have a new minimization problem
\begin{equation}\label{eqn:the-dual}
\min_{x \geq 0} \enspace \mu \sum_{i\in[n]} \bar{p_i} \log Z_i + \langle x, b \rangle \enspace,
\end{equation}
where \[ Z_i = \sum_{j=1}^d \exp\Big( \frac{(r_j-x_j) p_{i,j}}{\bar{p_i} \mu} \Big) \enspace. \]
If we let $f_i(x) \stackrel{\mathrm{\scriptscriptstyle def}}{=} \mu n \bar{p_i} \cdot \log Z_i + \langle x, b \rangle$, then \eqref{eqn:the-dual} reduces to problem~\eqref{eqn:the-problem}. We conduct empirical studies on this packing LP problem in Section~\ref{sec:exp}.
\begin{remark}
Any solution $x$ (usually known as the \emph{bid price}) to \eqref{eqn:the-dual} naturally gives back a solution $y$ for the primal \eqref{eqn:the-LP}, by setting
\begin{equation}\label{eqn:ybyx}
y_{i,j} = \frac{1}{Z_i} \exp\Big( \frac{(r_j-x_j) p_{i,j}}{\bar{p_i} \mu} \Big) \enspace.
\end{equation}
\end{remark}
\subsection{Support Vector Machine} \label{sec:pre:svm}
Classifying data is one of the most foundational tasks in machine learning. Suppose we are given data points $a_1,\dots,a_n \in \mathbb{R}^d$, each belonging to one of two classes. We use $b_i = 1$ to denote that data point $i$ belongs to the first class, and $b_i = -1$ to denote that it belongs to the second.
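To make the reduction concrete, here is a small numerical sketch (the function name and toy data are ours, not from the paper) that evaluates $Z_i$, $f_i(x)$, and the primal recovery \eqref{eqn:ybyx}; by construction each row of the recovered $y$ sums to one, matching the constraint $\sum_{j} y_{i,j} = 1$:

```python
import numpy as np

def dual_pieces(x, r, p, b, mu):
    """Evaluate Z_i, f_i(x), and the primal recovery y_{i,j} for the packing-LP dual.

    x: (d,) bid prices; r: (d,) revenues; p: (n, d) purchase probabilities;
    b: (d,) capacities; mu: regularization weight.
    """
    n, d = p.shape
    p_bar = p.max(axis=1)                        # \bar{p_i} = max_j p_{i,j}
    expo = (r - x)[None, :] * p / (p_bar[:, None] * mu)
    Z = np.exp(expo).sum(axis=1)                 # Z_i
    f = mu * n * p_bar * np.log(Z) + x @ b       # f_i(x)
    y = np.exp(expo) / Z[:, None]                # primal recovery (eqn:ybyx)
    return Z, f, y

rng = np.random.default_rng(0)
n, d = 5, 3
r, b, x = rng.random(d), rng.random(d), rng.random(d)
p = rng.random((n, d))
Z, f, y = dual_pieces(x, r, p, b, mu=0.1)
print(np.allclose(y.sum(axis=1), 1.0))  # → True: one resource offered per customer
```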
The (soft-margin) support vector machine task is to minimize the following objective:
\begin{equation}\label{eqn:svm-obj}
\min_{x\in\mathbb{R}^d} \bigg\{ f(x) \stackrel{\mathrm{\scriptscriptstyle def}}{=} \frac{\lambda}{2}\|x\|^2 + \frac{1}{n} \sum_{i=1}^n \max\big\{0, 1 - b_i \langle x, a_i \rangle\big\} \bigg\} \enspace,
\end{equation}
where $\lambda$ is the weight of the regularizer, which encourages the objective to find a solution with large classification margin. If we set $f_i(x) = \frac{\lambda}{2}\|x\|^2 + \max\{0, 1 - b_i \langle x, a_i \rangle\}$, then \eqref{eqn:svm-obj} reduces to problem~\eqref{eqn:the-problem}. In this formulation, the SVM objective $f(x)$ is not Lipschitz smooth, which makes some of the popular practical methods inapplicable (at least in theory). For this reason, people also study the following smoothed version of SVM:%
\footnote{More generally, there is an ``optimal'' way to tweak the non-smooth objective so that essentially any smooth-objective solver applies; see \citep{AH2016-reduction}.}
\begin{equation}\label{eqn:svm-obj:smooth}
\min_{x\in\mathbb{R}^d} \bigg\{ f^\mu(x) \stackrel{\mathrm{\scriptscriptstyle def}}{=} \frac{\lambda}{2}\|x\|^2 + \frac{1}{n} \sum_{i=1}^n \max_{z\in[0,1]} \Big\{ z\big(1 - b_i \langle x, a_i \rangle\big) - \frac{\mu}{2} z^2 \Big\} \bigg\} \enspace.
\end{equation}
Above, $\mu \geq 0$ is a smoothing parameter. The larger $\mu$ is, the more Lipschitz smooth the objective $f^\mu(x)$ becomes. We conduct empirical studies on this SVM problem in Section~\ref{sec:exp:svm}.
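For intuition, the inner maximization in the smoothed loss above has a simple closed form, and it never deviates from the hinge loss by more than $\mu/2$. A minimal sketch (assuming this standard smoothing; the helper names are ours):

```python
import numpy as np

def hinge(s):
    """Hinge loss max{0, s}, with s = 1 - b_i <x, a_i>."""
    return np.maximum(0.0, s)

def smoothed_hinge(s, mu):
    """Closed form of max_{z in [0,1]} { z*s - (mu/2) z^2 }:
    0 if s <= 0;  s^2/(2 mu) if 0 <= s <= mu;  s - mu/2 if s >= mu."""
    return np.where(s <= 0, 0.0,
                    np.where(s >= mu, s - mu / 2.0, s ** 2 / (2.0 * mu)))

s = np.linspace(-2, 2, 401)
mu = 0.5
gap = hinge(s) - smoothed_hinge(s, mu)
print(gap.min() >= -1e-12, gap.max() <= mu / 2.0 + 1e-12)  # gap stays in [0, mu/2]
```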
\section{GD with Lingering Radius} \label{sec:theory}
In this section, we consider a convex function $f(x) = \frac{1}{n}\sum_{i=1}^n f_i(x)$ that is $L$-smooth. Recall from textbooks (e.g.,~\cite{Nesterov2004}) that if gradient descent (GD) is applied for $T$ iterations, starting at $x_0\in \mathbb{R}^d$, then we can arrive at a point $x$ with $f(x) - f(x^*) \leq O \big( \frac{\|x_0 - x^*\|^2}{T} \big)$. This is the $\frac{1}{T}$ convergence rate. To improve on this theoretical rate, we make the following assumption on $B(x,r)$:
\begin{assumption}\label{ass:psi}
There exist $\alpha \in [0,1]$, $\beta \in (0,1]$, and $C > 0$ such that
$$ \forall x\in \mathbb{R}^d, r\geq 0 \colon \quad \frac{|B(x,r)|}{n} \leq \psi(r) \stackrel{\mathrm{\scriptscriptstyle def}}{=} \max\{\alpha, (r / C)^\beta\} \enspace. $$
\end{assumption}
It says that $|B(x,r)|$ is a growing function in $r$, and the growth rate is $\propto r^{\beta}$. We also allow an additive term $\alpha$ to cover the case that an $\alpha$ fraction of the stochastic gradients always need to be recalculated, regardless of the distance.
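To illustrate the shape of this assumption, here is a toy sketch (our own construction; it assumes, as earlier in the paper, that $B(x,r)$ collects the indices $i$ whose lingering radius $\delta(x,i)$ is below $r$). A radius profile $\delta_i = C\,(i/n)^{1/\beta}$ keeps the fraction $|B(x,r)|/n$ below $\psi(r)$ with $\alpha = 0$:

```python
import numpy as np

def psi(r, alpha, beta, C):
    """The bound psi(r) = max{alpha, (r / C)^beta} from the assumption."""
    return np.maximum(alpha, (r / C) ** beta)

# Toy lingering-radius profile delta_i = C * (i/n)^(1/beta): the fraction of
# indices with delta_i < r (i.e., |B(x, r)| / n) then stays below (r/C)^beta.
n, beta, C = 1000, 0.5, 1.0
delta = C * (np.arange(1, n + 1) / n) ** (1.0 / beta)
ok = all(np.mean(delta < r) <= psi(r, 0.0, beta, C) for r in np.linspace(0.01, 1.0, 25))
print(ok)  # → True
```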
We shall later illustrate why Assumption~\ref{ass:psi} holds in practice, and why it holds in theory under reasonable data assumptions. Our result of this section can be summarized as follows. Hiding $\|x_0 - x^*\|$, $L$, $C$, $\beta$ in the big-$O$ notation, and letting $T_\mathsf{time}$ be the gradient complexity, we can modify GD so that it finds a point $x$ with
$$ f(x) - f(x^*) \leq \tilde{O}\Big( \frac{\alpha}{T_\mathsf{time}} + \frac{1}{T_\mathsf{time}^{1/(1-\beta)}} \Big) \quad \text{if } \beta \in (0,1) \enspace, $$
and with the second term replaced by an exponentially small $2^{-\Omega(T_\mathsf{time}^{1/3})}$ term if $\beta = 1$.
We emphasize that our modified algorithm does not need to know $\alpha$ or $\beta$.
\begin{algorithm*}[t!]
\caption{${\hyperref[alg:recycle-gd]{\mathtt{GD^{lin}}}}(f, x^{(0)}, S, C, D)$ \label{alg:recycle-gd}}
\begin{algorithmic}[1]
\Require $f(x) = \frac{1}{n} \sum_{i=1}^n f_i(x)$ convex and $L$-smooth, starting vector $x^{(0)} \in \mathbb{R}^d$, number of epochs $S \geq 1$, parameters $C,D>0$.
\Ensure vector $x \in \mathbb{R}^d$.
\For{$s \gets 1$ \textbf{to} $S$}
\State $x_0 \gets x^{(s-1)}$; \quad $m \gets \lceil \big( 1 + \frac{C^2}{16 D^2} \big)^s \rceil$; \quad and $\xi \gets \frac{C}{m}$.
\State $\mathbf{g} \gets \vec{0}$ and $\mathbf{g}_i \gets \vec{0}$ for each $i\in[n]$.
\For{$k \gets 0$ \textbf{to} $m-1$}
\State Calculate $\Lambda_k \subseteq [n]$ from $x_0,\dots,x_k$ according to Definition~\ref{def:index-set}.
\label{line:gi-update}
\For{$i \in \Lambda_k$}
\State $\mathbf{g} \gets \mathbf{g} + \frac{\nabla f_i(x_k) - \mathbf{g}_i}{n}$ and $\mathbf{g}_i \gets \nabla f_i(x_k)$.
\EndFor
\State \label{line:recycle-gd:update} $x_{k+1} \gets x_k - \min\big\{ \frac{\xi}{\|\mathbf{g}\|}, \frac{1}{L} \big\} \mathbf{g}$ \Comment{it satisfies $\mathbf{g} = \nabla f(x_k)$}
\EndFor
\State $x^{(s)} \gets x_{m}$.
\EndFor
\State \Return $x = x^{(S)}$.
\end{algorithmic}
\end{algorithm*}
\subsection{Algorithm Description}
In classical gradient descent (GD), starting from $x_0 \in \mathbb{R}^d$, one iteratively updates $x_{k+1} \gets x_k - \frac{1}{L}\nabla f(x_k)$. We propose ${\hyperref[alg:recycle-gd]{\mathtt{GD^{lin}}}}$ (see Algorithm~\ref{alg:recycle-gd}) which, at a high level, differs from GD in two ways:
\begin{itemize}
\item It performs a truncated gradient descent with travel distance $\|x_k - x_{k+1}\| \leq \xi$ per step.
\item It speeds up the calculation of $\nabla f(x_k)$ by exploiting the lingering of past gradients.
\end{itemize}
Formally, ${\hyperref[alg:recycle-gd]{\mathtt{GD^{lin}}}}$ consists of $S$ epochs $s=1,2,\dots,S$ of growing length $m = \lceil \big( 1 + \frac{C^2}{16 D^2} \big)^s \rceil$.
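The truncated update $x_{k+1} \gets x_k - \min\{\frac{\xi}{\|\mathbf{g}\|}, \frac{1}{L}\}\mathbf{g}$ caps each move at distance $\xi$; a minimal sketch on a toy objective (our own example, with $f(x) = \|x\|^2/2$ so that $L = 1$):

```python
import numpy as np

def truncated_gd_step(x, grad, xi, L):
    """x_{k+1} = x_k - min(xi / ||g||, 1/L) * g, so each step travels at most xi."""
    g_norm = np.linalg.norm(grad)
    if g_norm == 0.0:
        return x
    return x - min(xi / g_norm, 1.0 / L) * grad

# toy L-smooth objective f(x) = ||x||^2 / 2, so grad f(x) = x and L = 1
x = np.array([10.0, 0.0])
for _ in range(50):
    x_next = truncated_gd_step(x, x, xi=0.3, L=1.0)
    assert np.linalg.norm(x_next - x) <= 0.3 + 1e-12  # travel distance capped by xi
    x = x_next
print(np.linalg.norm(x) < 10.0)  # → True: the iterate still makes progress
```

Far from the optimum the cap $\xi/\|\mathbf{g}\|$ binds; close to it, the step falls back to the classical $1/L$ step of GD.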
In each epoch, it starts with $x_0\in\mathbb{R}^d$ and performs $m$ truncated gradient descent steps
$$ \textstyle x_{k+1} \gets x_k - \min\big\{ \frac{\xi}{\|\nabla f(x_k)\|}, \frac{1}{L} \big\} \cdot \nabla f(x_k) \enspace.$$
We choose $\xi = C / m$ to ensure that the worst-case travel distance $\|x_m - x_0\|$ is at most $m \xi = C$. (Recall that $r=C$ is the maximum distance for which $\psi(r) \leq 1$.) In each iteration $k=0,1,\dots,m-1$ of this epoch $s$, in order to calculate $\nabla f(x_k)$, ${\hyperref[alg:recycle-gd]{\mathtt{GD^{lin}}}}$ constructs index sets $\Lambda_0,\Lambda_1,\dots,\Lambda_{m-1} \subseteq [n]$ and recalculates only $\nabla f_i(x_k)$ for those $i\in \Lambda_k$. We formally introduce these index sets below, and illustrate them in Figure~\ref{fig:Lambda:a}.
\begin{definition}\label{def:index-set}
Given $x_0,x_1,\dots,x_{m-1} \in \mathbb{R}^d$, we define index subsets $\Lambda_0, \dots, \Lambda_{m-1} \subseteq [n]$ as follows. Let $\Lambda_0 = [n]$.
For each $k \in \{1,2,\dots,m-1\}$, if $(k_0,\dots,k_t)$ is $k$'s lowbit sequence from Definition~\ref{def:lowbit}, then (recalling $k = k_t$)
$$ \Lambda_k \stackrel{\mathrm{\scriptscriptstyle def}}{=} \bigcup_{i=0}^{t-1} \Big( B_{k_i}(k - k_i) \setminus B_{k_i}(k_{t-1} - k_i) \Big) \enspace, $$
where \[ B_k(r) \stackrel{\mathrm{\scriptscriptstyle def}}{=} \Lambda_k \cap B(x_k, r \cdot \xi) \enspace.\]
\end{definition}
In the above definition, we have used the notion of the ``lowbit sequence'' of a positive integer.%
\footnote{If implemented in C++, we have $\mathsf{lowbit}(k) = \texttt{k\&(-k)}$.}
\begin{definition}\label{def:lowbit}
For a positive integer $k$, let $\mathsf{lowbit}(k) \stackrel{\mathrm{\scriptscriptstyle def}}{=} 2^i$, where $i \geq 0$ is the maximum integer such that $k$ is an integral multiple of $2^i$. For instance, $\mathsf{lowbit}(34)=2$, $\mathsf{lowbit}(12)=4$, and $\mathsf{lowbit}(8)=8$. Given a positive integer $k$, let the \emph{lowbit sequence} of $k$ be $(k_0,k_1,\dots,k_t)$, where
$$ 0 = k_0 < k_1 < \cdots < k_t = k \qquad\text{and}\qquad k_{i-1} = k_i - \mathsf{lowbit}(k_i) \enspace \text{for each } i \in [t] \enspace. $$
For instance, the lowbit sequence of $45$ is $(0,32,40,44,45)$.
\end{definition}
\begin{figure*}[t!]
\centering
\subfigure[\label{fig:Lambda:a}] {\includegraphics[page=2,trim={50mm 30mm 44mm 0mm},clip,height=0.4\textwidth]{photo.pdf}}
\hspace{10mm}
\subfigure[\label{fig:Lambda:b}] {\includegraphics[page=1,trim={200mm 30mm 44mm 0mm},clip,height=0.4\textwidth]{photo.pdf}}
\caption{\label{fig:Lambda}Illustration of index sets $\Lambda_k$}
\end{figure*}
\subsection{Intuitions \& Properties of Index Sets}
We show in this paper that our construction of index sets satisfies the following three properties.
\begin{lemma}\label{lem:correctness}
The construction of $\Lambda_0,\dots,\Lambda_{m-1}$ ensures that $\mathbf{g} = \nabla f(x_k)$ in each iteration $k$.
\end{lemma}
\begin{claim}\label{claim:index-set:const-time}
Under Assumption~\ref{ass:time}, the gradient complexity to construct $\Lambda_0,\dots,\Lambda_{m-1}$ is $O\big( \frac{1}{n} \sum_{k=0}^{m-1} |\Lambda_k| \big)$. The space complexity is $O(n \log n)$.
\end{claim}
\begin{lemma}\label{lem:cardinality}
Under Assumption~\ref{ass:psi}, we have $\frac{1}{n} \sum_{k=0}^{m-1} |\Lambda_k| \leq O(\alpha m + m^{1-\beta} \log^2 m)\enspace.$
\end{lemma}
At a high level, Lemma~\ref{lem:correctness} ensures that ${\hyperref[alg:recycle-gd]{\mathtt{GD^{lin}}}}$ follows exactly the full gradient direction in each iteration; Claim~\ref{claim:index-set:const-time} and Lemma~\ref{lem:cardinality} together ensure that the total gradient complexity for an epoch is only $\tilde{O}(m^{1-\beta} \log^2 m)$, as opposed to the $O(m)$ needed if we always recalculated $\nabla f_1(x_k),\dots,\nabla f_n(x_k)$.
Claim~\ref{claim:index-set:const-time} is easy to verify. Indeed, for each $\Lambda_\ell$ that is calculated, we can sort its indices $j\in \Lambda_\ell$ in increasing order of $\delta(x_\ell, j)$.%
\footnote{Calculating these lingering radii $\delta(x_\ell, j)$ requires gradient complexity $|\Lambda_\ell|$ according to Assumption~\ref{ass:time}, and the time for sorting is negligible.}
Now, whenever we calculate $B_{k_i}(k - k_i) \setminus B_{k_i}(k_{t-1} - k_i)$, we have already sorted the indices in $\Lambda_{k_i}$, so we can directly retrieve those $j$ with $\delta(x_{k_i}, j) \in \big( k_{t-1} - k_i, k - k_i \big]$.
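Two small implementation ingredients of this machinery can be sketched in code (our own sketch; helper names and toy radii are hypothetical): the lowbit sequence from Definition~\ref{def:lowbit}, and the sorted retrieval of indices whose lingering radius falls in a half-open interval:

```python
import bisect

def lowbit(k):
    """Largest power of two dividing k; in C++ this is k & (-k)."""
    return k & (-k)

def lowbit_sequence(k):
    """(k_0, ..., k_t) with k_t = k, k_{i-1} = k_i - lowbit(k_i), and k_0 = 0."""
    seq = [k]
    while seq[-1] > 0:
        seq.append(seq[-1] - lowbit(seq[-1]))
    return seq[::-1]

print(lowbit(34), lowbit(12), lowbit(8))  # → 2 4 8
print(lowbit_sequence(45))                # → [0, 32, 40, 44, 45]

def sort_by_radius(indices, delta):
    """Sort the indices j of a computed Lambda_ell by increasing radius delta[j]."""
    order = sorted(indices, key=lambda j: delta[j])
    return order, [delta[j] for j in order]

def retrieve(order, radii, lo, hi):
    """Indices j with delta[j] in (lo, hi], found by two binary searches."""
    return order[bisect.bisect_right(radii, lo):bisect.bisect_right(radii, hi)]

delta = {0: 0.5, 1: 2.0, 2: 1.0, 3: 3.5, 4: 1.5}
order, radii = sort_by_radius([0, 1, 2, 3, 4], delta)
print(retrieve(order, radii, 0.5, 2.0))  # → [2, 4, 1]
```

Sorting once per computed $\Lambda_\ell$ lets every later difference set be read off in time proportional to its output size, as the claim requires.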
As for the space complexity, in any iteration $k$, we only need to store $\lceil \log_2 k \rceil$ index sets $\Lambda_\ell$ with $\ell < k$. For instance, when calculating $\Lambda_{15}$ (see Figure~\ref{fig:Lambda:b}), we only need to use $\Lambda_0,\Lambda_8, \Lambda_{12}, \Lambda_{14}$; and from $k=16$ onwards, we no longer need to store $\Lambda_1,\dots,\Lambda_{15}$.
Lemma~\ref{lem:correctness} is technically involved to prove (see Appendix~\ref{app:lem:correctness}), but we give a sketched proof by picture. Take $k=15$ as an example. As illustrated in Figure~\ref{fig:Lambda:b}, for every $j \in [n]$:
\begin{itemize}
\item If $j$ belongs to $\Lambda_{15}$ ---i.e., boxes $4, 0, 9, 7$ of Figure~\ref{fig:Lambda}--- then we have just calculated $\nabla f_j(x_{15})$, so we are fine.
\item If $j$ belongs to $\Lambda_{14} \setminus B_{14}(1)$ ---i.e., the $\oplus$ region of Figure~\ref{fig:Lambda:b}--- then $\nabla f_j(x_{15}) = \nabla f_j(x_{14})$, because $\|x_{15}-x_{14}\| \leq \xi$ and $j\not\in B_{14}(1)$. Therefore, we can safely retrieve $\mathbf{g}_j = \nabla f_j(x_{14})$ to represent $\nabla f_j(x_{15})$.
\item If $j$ belongs to $\Lambda_{12} \setminus B_{12}(3)$ ---i.e., the $\otimes$ region of Figure~\ref{fig:Lambda:b}--- then $\nabla f_j(x_{15}) = \nabla f_j(x_{12})$ for a reason similar to the above. Moreover, the most recent update of $\mathbf{g}_j$ was at iteration $12$, so we can safely retrieve $\mathbf{g}_j$ to represent $\nabla f_j(x_{15})$.
\item And so on.
\end{itemize}
In sum, for all indices $j\in [n]$, we have $\mathbf{g}_j = \nabla f_j(x_k)$, so $\mathbf{g} = \frac{\mathbf{g}_1+\cdots+\mathbf{g}_n}{n}$ equals $\nabla f(x_k)$.
Lemma~\ref{lem:cardinality} is also involved to prove (see Appendix~\ref{app:lem:cardinality}), but again it should be intuitive from the picture. The indices in boxes $1, 2, 3, 4$ of Figure~\ref{fig:Lambda} are disjoint, and belong to $B(x_0, 15\xi)$, totaling at most $|B(x_0, 15\xi)| \leq n \psi(15 \xi)$. The indices in boxes $5, 6, 7$ of Figure~\ref{fig:Lambda} are also disjoint, and belong to $B(x_8, 7\xi)$, totaling at most $|B(x_8, 7\xi)| \leq n \psi(7 \xi)$.
If we sum up the cardinalities of these boxes, carefully grouping them in this manner, then we can prove Lemma~\ref{lem:cardinality} using Assumption~\ref{ass:psi}.
\subsection{Convergence Theorem}
So far, Lemma~\ref{lem:cardinality} shows that we can reduce the gradient complexity from $O(m)$ to $\tilde{O}(m^{1-\beta})$ for every $m$ steps of gradient descent. Therefore, we wish to set $m$ as large as possible, or equivalently $\xi = C / m$ as small as possible. Unfortunately, when $\xi$ is too small, it impacts the performance of truncated gradient descent (see Lemma~\ref{lem:trunc-gd} in the appendix). This motivates us to start with a small value of $m$ and increase it epoch by epoch. Indeed, as the number of epochs grows, $f(x_0)$ becomes closer to the minimum $f(x^*)$, and thus we can choose smaller values of $\xi$. Formally, we have (proved in Appendix~\ref{app:thm:theory-main}):
\begin{theorem}\label{thm:theory-main}
Given any $x^{(0)} \in\mathbb{R}^d$ and any $D > 0$ that is an upper bound on $\|x^{(0)}-x^*\|$.
Suppose Assumptions~\ref{ass:time} and \ref{ass:psi} are satisfied with parameters $C \in (0, D]$, $\alpha \in [0,1]$, $\beta \in (0, 1]$. Then, denoting by $m_s = \lceil \big( 1 + \frac{C^2}{16 D^2} \big)^s \rceil$, we have that ${\hyperref[alg:recycle-gd]{\mathtt{GD^{lin}}}}(f, x_0, S, C, D)$ outputs a point $x\in\mathbb{R}^d$ satisfying $f(x) - f(x^*) \leq \frac{4 L D^2}{m_S}$, with gradient complexity $T_\mathsf{time} = O\big( \sum_{s=1}^S \alpha m_s + m_s^{1-\beta} \log^2 m_s \big)$.
\end{theorem}
As simple corollaries, we have (proved in Appendix~\ref{app:thm:theory-cor}):
\begin{theorem}\label{thm:theory-cor}
In the setting of Theorem~\ref{thm:theory-main}, given any $T \geq 1$, one can choose $S$ so that:
\begin{itemize}
\item If $\beta=1$, then ${\hyperref[alg:recycle-gd]{\mathtt{GD^{lin}}}}$ finds a point $x$ in gradient complexity $T_\mathsf{time} = O(T)$ s.t. $$ \textstyle f(x) - f(x^*) \leq O\big( \frac{L D^4}{C^2} \cdot \frac{\alpha}{T} \big) + \frac{L D^2}{2^{\Omega(C^2 T / D^2)^{1/3}}} \enspace.$$
\item If $\beta\in(0,1)$ is a constant, then ${\hyperref[alg:recycle-gd]{\mathtt{GD^{lin}}}}$ finds a point $x$ in gradient complexity $T_\mathsf{time} = O(T \log^2 T)$ s.t.
$$ \textstyle f(x) - f(x^*) \leq O\big( \frac{L D^4}{C^2} \cdot \frac{\alpha}{T} + \frac{L D^{2 + \frac{2}{1-\beta}}}{C^{\frac{2}{1-\beta}}} \cdot \frac{1}{T^{1/(1-\beta)}} \big) \enspace.$$
\end{itemize}
\end{theorem}
We remark that if $\psi(r) = 1$ (so there is no lingering effect for gradients), we can choose $C = D$ and $\beta=1$; in this case ${\hyperref[alg:recycle-gd]{\mathtt{GD^{lin}}}}$ recovers the convergence $f(x) - f(x^*) \leq O\big( \frac{L D^2}{T} \big)$ of GD.
\section{SVRG with Lingering Radius} \label{sec:svrg}
In this section, we use Assumption~\ref{ass:time} to improve the running time of SVRG~\citep{JohnsonZhang2013-SVRG,MahdaviZhangJin2013-sc}, one of the most widely applied stochastic gradient methods in large-scale settings. The purpose of this section is to construct an algorithm that works well \emph{in practice}: it should (1) work for any possible lingering radii $\delta(x,i)$, (2) be identical to SVRG if $\delta(x,i)\equiv 0$, and (3) be faster than SVRG when $\delta(x,i)$ is large.
Recall how the SVRG method works. Each \emph{epoch} of SVRG consists of $m$ iterations ($m=2n$ in practice).
Each epoch starts with a point $x_0$ (known as the \emph{snapshot}) at which the full gradient $\nabla f(x_0)$ is computed exactly. In each iteration $k=0,1,\dots,m-1$ of this epoch, SVRG updates $x_{k+1} \gets x_k - \eta \mathbf{g}$, where $\eta>0$ is the learning rate and $\mathbf{g}$ is the gradient estimator $\mathbf{g} = \nabla f(x_0) + \nabla f_i (x_k) - \nabla f_i(x_0)$ for some $i$ randomly drawn from $[n]$. Note that it satisfies $\E_i[\mathbf{g}] = \nabla f(x_k)$, so $\mathbf{g}$ is an unbiased estimator of the gradient. In the next epoch, SVRG starts with the point $x_m$ of the previous epoch.%
\footnote{Some authors use the average of $x_1,\dots,x_m$ to start the next epoch, but we choose this simpler version.}
We denote by $x^{(s)}$ the value of $x_0$ at the beginning of epoch $s=0,1,2,\dots,S-1$.
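The unbiasedness of the SVRG estimator can be checked numerically; a toy sketch (our own example components $f_i(x) = \frac{c_i}{2}\|x - A_i\|^2$, so averaging the estimator over all choices of $i$ must recover the full gradient):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 8, 3
c = rng.uniform(0.5, 2.0, size=n)     # per-component curvatures
A = rng.normal(size=(n, d))           # toy components f_i(x) = c_i ||x - A_i||^2 / 2

def grad_i(i, x):
    return c[i] * (x - A[i])          # gradient of f_i

def full_grad(x):
    return np.mean([grad_i(i, x) for i in range(n)], axis=0)

x0, xk = rng.normal(size=d), rng.normal(size=d)
# SVRG estimator g = grad f(x0) + grad f_i(xk) - grad f_i(x0), i uniform in [n];
# averaging over all i recovers grad f(xk), i.e., the estimator is unbiased
ests = [full_grad(x0) + grad_i(i, xk) - grad_i(i, x0) for i in range(n)]
print(np.allclose(np.mean(ests, axis=0), full_grad(xk)))  # → True
```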
\subsection{Algorithm Description}
Our algorithm ${\hyperref[alg:recycle-svrg]{\mathtt{SVRG^{lin}}}}$ maintains \emph{disjoint} subsets $H_s \subseteq [n]$, where each $H_s$ contains the indices $i$ whose gradients $\nabla f_i(x^{(s)})$ from epoch $s$ can still be safely reused at present. At the starting point $x_0$ of an epoch $s$, we let $H_s = [n] \setminus (H_0 \cup \cdots \cup H_{s-1})$ and re-calculate the gradients $\nabla f_i(x_0)$ only for $i\in H_s$; the remaining ones can be loaded from memory. This computes the full gradient $\nabla f(x_0)$. Then, we set $m=2|H_s|$ and perform only $m$ iterations within epoch $s$. We next discuss how to perform the update $x_k \to x_{k+1}$ and maintain $\{H_s\}_s$ during each iteration.
\begin{itemize}
\item In each iteration $k$ of this epoch, we claim that $\nabla f_i(x_k) = \nabla f_i(x_0)$ for every $i \in H_0 \cup \cdots \cup H_s$.%
\footnote{This is because for every $i\in H_s$, by the definition of $H_s$ we have $\nabla f_i(x_k) = \nabla f_i(x^{(s)}) = \nabla f_i(x_0)$; for every $i\in H_{s'}$ where $s' < s$, we know $\nabla f_i(x_k) = \nabla f_i(x^{(s')})$, but we also have $\nabla f_i(x_0) = \nabla f_i(x^{(s')})$ (because otherwise $i$ would have been removed from $H_{s'}$).}
Thus, we can uniformly sample $i$ from $[n] \setminus \big(H_0 \cup \cdots \cup H_s\big)$ and construct an unbiased estimator
$$\textstyle \mathbf{g} \gets \nabla f(x_0) + \left( 1 - \frac{\sum_{s'=0}^s |H_{s'}|}{n} \right)\big[\nabla f_i(x_k) - \nabla f_i(x_0)\big] $$
of the true gradient $\nabla f(x_k)$. Then, we update $x_{k+1} \gets x_k - \eta \mathbf{g}$ in the same way as SVRG. We emphasize that the above choice of $\mathbf{g}$ reduces its variance (because there are fewer random choices), and it is known that reducing variance leads to faster running times~\citep{JohnsonZhang2013-SVRG}.
\item As for how to maintain $\{H_s\}_s$: in each iteration $k$, after $x_{k+1}$ is computed, for every $s' \leq s$ we wish to remove those indices $i\in H_{s'}$ such that the current position lies outside the lingering radius of $i$, i.e., $\delta(x^{(s')}, i) < \|x_{k+1} - x^{(s')}\|$. To implement this efficiently, we need to make sure that whenever $H_{s'}$ is constructed (at the beginning of epoch $s'$), the algorithm sorts all the indices $i\in H_{s'}$ in increasing order of $\delta(x^{(s')}, i)$. We include implementation details in Appendix~\ref{app:svrg}.
\end{itemize}
\begin{algorithm*}[h!]
\caption{${\hyperref[alg:recycle-svrg]{\mathtt{SVRG^{lin}}}}(f, x^{(0)}, \eta, S)$ \label{alg:recycle-svrg}}
\begin{algorithmic}[1]
\Require $f(x) = \frac{1}{n} \sum_{i=1}^n f_i(x)$, vector $x^{(0)} \in \mathbb{R}^d$, learning rate $\eta>0$, number of epochs $S \geq 1$.
\Ensure vector $x \in \mathbb{R}^d$.
\For{$s \gets 0$ \textbf{to} $S-1$}
\State $x_0 \gets x^{(s)}$; \quad $H_s \gets [n] \setminus \big(H_0 \cup \cdots \cup H_{s-1} \big)$; \quad and $m \gets 2|H_s|$.
\label{line:define-Hs}
\State \label{line:full-g} compute the full gradient $\nabla f(x_0) = \frac{1}{n} \big[ \sum_{s'=0}^{s-1} \sum_{i \in H_{s'}} \nabla f_i(x^{(s')}) + \sum_{i \in H_s} \nabla f_i(x_0) \big]$.
\For{$k \gets 0$ \textbf{to} $m-1$}
\If{$H_0 \cup \cdots \cup H_s = [n]$}
\State $\mathbf{g} \gets \nabla f(x_0)$.
\Else
\State randomly draw $i \in [n] \setminus \big( H_0 \cup \cdots \cup H_s \big)$.
\State $\mathbf{g} \gets \nabla f(x_0) + \left( 1 - \frac{\sum_{s'=0}^s |H_{s'}|}{n} \right)[\nabla f_i(x_k) - \nabla f_i(x_0)]$.
\EndIf
\State $x_{k+1} \gets x_k - \eta \mathbf{g}$.
\ForAll{$s' \leq s$ and $i \in H_{s'}$ such that $\delta(x^{(s')}, i) < \| x^{(s')} - x_{k+1}\|$}\label{line:removal}
\State $H_{s'} \gets H_{s'} \setminus \{i\}$.
\EndFor
\EndFor
\State $x^{(s+1)} \gets x_{m}$.
\EndFor
\State \Return $x = x^{(S)}$.
\end{algorithmic}
\end{algorithm*}
\subsection{SCSG with Lingering Radius} \label{sec:scsg}
When $n$ is extremely large, it can be expensive to compute the full gradient at snapshots, so a variant of SVRG is sometimes applied in practice. That is, at each snapshot $x_0$, instead of calculating $\nabla f(x_0)$, one can approximate it by a batch average $\frac{1}{|S|} \sum_{i\in S} \nabla f_i(x_0)$ for a sufficiently large random subset $S$ of $[n]$. Then, the length of an epoch is also changed from $m=2n$ to $m=2|S|$.
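As a sanity check on the rescaled estimator of ${\mathtt{SVRG^{lin}}}$ above: when the gradients of the frozen indices genuinely do not change, the estimator remains unbiased for $\nabla f(x_k)$. A toy sketch (our own construction; frozen indices are modeled with constant gradients, i.e., an infinite lingering radius):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 10, 3
H = {0, 1, 2, 3}                      # frozen indices (gradients known not to change)
G = rng.normal(size=(n, d))           # constant gradients for the frozen components
c = rng.uniform(0.5, 2.0, size=n)
A = rng.normal(size=(n, d))

def grad_i(i, x):
    # indices in H are modeled with linear f_i (constant gradient);
    # the rest have toy quadratic gradients c_i (x - A_i) that change with x
    return G[i] if i in H else c[i] * (x - A[i])

def full_grad(x):
    return np.mean([grad_i(i, x) for i in range(n)], axis=0)

x0, xk = rng.normal(size=d), rng.normal(size=d)
h, rest = len(H), [i for i in range(n) if i not in H]
# rescaled estimator: g = grad f(x0) + (1 - h/n) [grad f_i(xk) - grad f_i(x0)],
# with i uniform over [n] \ H; averaging over all such i recovers grad f(xk)
ests = [full_grad(x0) + (1 - h / n) * (grad_i(i, xk) - grad_i(i, x0)) for i in rest]
print(np.allclose(np.mean(ests, axis=0), full_grad(xk)))  # → True
```

The rescaling factor $1 - h/n$ exactly compensates for sampling only from the non-frozen indices, whose correction terms are the only nonzero ones.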
This method is studied by \cite{harikandeh2015stopwasting,LeiJordan2016less,LeiJCJ2017}, and we refer to it as SCSG due to \cite{LeiJordan2016less,LeiJCJ2017}. Our algorithm ${\hyperref[alg:recycle-svrg]{\mathtt{SVRG^{lin}}}}$ can be easily extended to this setting, with the following modifications:
\begin{itemize}
\item We define the parameter $\bar{m}_s = \min\{ n, \bar{m}_0 \cdot 2^{s} \}$, where $\bar{m}_0$ is a given input (the intended length of the first epoch).
\item We replace Line~\ref{line:define-Hs} of ${\hyperref[alg:recycle-svrg]{\mathtt{SVRG^{lin}}}}$ as follows. If there are more than $\bar{m}_s$ elements in $[n] \setminus (H_0 \cup \cdots \cup H_{s-1})$, then
\[ H_s \gets \text{ a random subset of cardinality $\bar{m}_s$ of } [n] \setminus (H_0 \cup \cdots \cup H_{s-1}).\]
Otherwise, set $H_s$ in the same way as in ${\hyperref[alg:recycle-svrg]{\mathtt{SVRG^{lin}}}}$.
\item We replace Line~\ref{line:full-g} of ${\hyperref[alg:recycle-svrg]{\mathtt{SVRG^{lin}}}}$, the full gradient computation, by an estimate
\[ \nabla f(x_0) \approx \frac{1}{n} \left[ \sum_{s'=0}^{s-1} \sum_{i \in H_{s'}} \nabla f_i(x^{(s')}) + \frac{ \big|[n] \setminus (H_0 \cup \cdots \cup H_{s-1})\big| }{| H_s|} \sum_{ i \in H_s} \nabla f_i (x_0) \right] .\]
It can be computed using $|H_s| \leq \bar{m}_s$ computations of new gradients.
\end{itemize}
We call this algorithm ${\hyperref[alg:recycle-scsg]{\mathtt{SCSG^{lin}}}}$ and also report its practical performance in our experiments. We note that letting the epoch size grow exponentially was recommended, for instance, by the authors of SCSG~\citep{LeiJCJ2017} and others~\citep{MahdaviZhangJin2013-nonsc,AY2015-univr}.
\section{Experiments on Packing LP}
\label{sec:exp}
In this section, we construct a revenue-maximization LP \eqref{eqn:the-LP} using the publicly accessible dataset of Yahoo! Front Page Today Module \citep{LCLS2010,Chu2009case}. Based on this real-life dataset, we validate Assumption~\ref{ass:psi} and our motivation behind lingering gradients.
We also test the performance of ${\hyperref[alg:recycle-svrg]{\mathtt{SVRG^{lin}}}}$ from Section~\ref{sec:svrg} and ${\hyperref[alg:recycle-scsg]{\mathtt{SCSG^{lin}}}}$ from Section~\ref{sec:scsg} on optimizing this LP.
\subsection{Experiment Setup}
\label{sec:datasetDetails}
We use the part of the Today Module dataset corresponding to May 1, 2009. There are $d = 50$ articles, which we view as resources, and $n \approx 4.6$ million users. We estimate $p_{i,j}$ following the hybrid model in \cite{LCLS2010}. While \citet{LCLS2010} consider the online recommendation problem without any constraints on the total traffic that each article receives, we consider the \emph{offline} LP problem \eqref{eqn:the-LP} with resource capacity constraints. In practice, recommendation systems with resource constraints can better control the public exposure of any ads or recommendations \citep{Zhong2015}. In addition to estimating $p_{i,j}$ from data, we generate other synthetic parameters in order to make the LP problem \eqref{eqn:the-LP} non-trivial to solve.
From a high level, we want (i) some resources to have positive remaining capacities under optimal LP solutions, so that the LP is feasible (when \eqref{eqn:the-LP} is infeasible due to the equality constraints, the revenue-maximization problem becomes trivial because we can sell all the inventories); and (ii) some resources to have zero remaining capacities under optimal LP solutions, so that the optimal dual solution is not a (trivial) zero vector. Specifically,
\begin{itemize}
\item We arbitrarily pick a resource $k \in [d]$, and assign it infinite capacity $b_k > n$ with a relatively small revenue value $r_k = 0.05$.
\item For the other resources $i \in [d]$, we randomly draw $r_i$ from a uniform distribution over $[0.05, 0.95]$, and set $b_i = 0.01 n / d$.
\item We choose $\mu = 10^{-5}$ as the regularization weight.
\item For each algorithm, we tune learning rates from the set $\eta \in \{10^{-k}, 3\times 10^{-k}, 5\times 10^{-k}\}$, and report the best-tuned performance.
\end{itemize}
Finally, we note that the dual objective \eqref{eqn:the-dual} is a constrained optimization problem with $x\geq 0$. Although we specified our algorithm ${\hyperref[alg:recycle-svrg]{\mathtt{SVRG^{lin}}}}$ (for notational simplicity) without constraints on $x$, it is a simple exercise to generalize it (as well as the classical methods SVRG and SAGA) to the constrained setting. Namely, in each step, if the new point $x_{k+1}$ moves out of the constraint set, we project it back to the closest point of the constraint set. This is known as the proximal setting of first-order methods; see, for instance, the analysis of proximal SVRG in~\cite{XiaoZhang2014-ProximalSVRG}. We discuss implementation details of ${\hyperref[alg:recycle-svrg]{\mathtt{SVRG^{lin}}}}$ and ${\hyperref[alg:recycle-scsg]{\mathtt{SCSG^{lin}}}}$ in Appendix~\ref{app:implementation}.
\begin{wrapfigure}{r}{0.4\textwidth}
\vspace{-3mm}
\centering
\includegraphics[page=1,trim={20mm 90mm 20mm 100mm},clip,height=0.25\textwidth]{IllustrationB}
\caption{\label{fig:B}$|B(x,r)|/n$ as a function of $r$ for the packing LP. We choose $\theta=5$.
Dashed curve is for $x=\vec 0$, and solid curve is for $x$ near the optimum. \vspace{10mm}}
\end{wrapfigure}
\subsection{Illustration of Lingering Radius}
\label{sec:illustration}
We calculate lingering radii on the dual problem \eqref{eqn:the-dual}. Let $\theta>0$ be a parameter large enough so that $e^{-\theta}$ can be viewed as zero. (For instance, $\theta=20$ gives $e^{-20}\approx 2 \times 10^{-9}$.) Then, for each point $x\in\mathbb{R}^d_{\geq 0}$ and index $i\in [n]$, we let
\[\delta(x, i) = \min_{j \in [d], j \not= j^*} \frac{(r_{j^*} - x_{j^*}) p_{i,j^*} - (r_j - x_j)p_{i,j} - \theta \bar{p_i} \mu}{p_{i,j^*} + p_{i,j}}\]
where
\[ j^* = \operatornamewithlimits{arg\,max}_{j\in[d]} \big\{ (r_j - x_j) p_{i,j} \big\}.\]
It is now a simple exercise to verify that, denoting by $\mathbf{e}_j$ the $j$-th basis unit vector,%
\footnote{For any other coordinate $j\neq j^*$, it satisfies $ \frac{e^{(r_j-y_j) p_{i,j} / (\bar{p_i} \mu)}}{e^{(r_{j^*}-y_{j^*}) p_{i,j^*} / (\bar{p_i} \mu)}} \leq e^{-\theta} $ and hence is negligible.}
$$\nabla f_i(y) \approx (b_1,\dots,b_d) + n p_{i,j^*} \mathbf{e}_{j^*} \quad\text{for every}\quad \|y-x\|_\infty \leq \delta(x,i) \enspace.$$
In Figure~\ref{fig:B}, we plot $|B(x, r)| = \big|\big\{ j \in [n] \, \big| \, \delta(x,j) < r \big\} \big|$ as an increasing function
of $r$. We see that for practical data, $|B(x,r)|/n$ is indeed bounded above by an increasing function $\psi(\cdot)$. We justify Figure~\ref{fig:B} as follows. For any point $x$ and customer $i$, recall from \eqref{eqn:ybyx} that $y_{i,j} \propto \exp\big(\frac{(r_j-x_j) p_{i,j}}{\bar{p_i} \mu}\big)$ approximately captures the index $j = j^*$ which maximizes the exponent. If $\mu$ is small (recall that small $\mu$ gives more accurate solutions to the primal LP~\eqref{eqn:the-LP}), then for most customers, $(y_{i,1},\dots,y_{i,d})$ is approximately the unit vector $\mathbf{e}_{j^*}$, meaning that we assign customer $i$ to resource $j^*$ with high probability. Now, as long as $x$ stays in the lingering radius of $i$, we still offer customer $i$ the same resource $j^*$. Naturally, when customers are sufficiently diverse --- which is usually the case in practice --- one should expect the lingering radii to be evenly distributed, and thus $|B(x, r)|$ can behave like Figure~\ref{fig:B}.
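For concreteness, the radius computation just described can be vectorized as follows (an illustrative sketch; here we take $\bar{p}_i$ to be the row average of $p_{i,\cdot}$, which is our own assumption):

```python
import numpy as np

def lingering_radius_lp(x, r, P, mu, theta=5.0):
    """delta(x, i) for every customer i in the packing-LP dual.

    P is the n-by-d matrix of probabilities p_{i,j}; bar{p}_i is
    approximated by the row mean of P (an assumption of this sketch)."""
    p_bar = P.mean(axis=1)
    n = P.shape[0]
    score = (r - x) * P                        # (r_j - x_j) * p_{i,j}
    j_star = score.argmax(axis=1)              # preferred resource j* per customer
    rows = np.arange(n)
    num = score[rows, j_star][:, None] - score - theta * (p_bar * mu)[:, None]
    den = P[rows, j_star][:, None] + P
    ratio = num / den
    ratio[rows, j_star] = np.inf               # exclude j = j* from the min
    return ratio.min(axis=1)

def B_fraction(delta, radius):
    """|B(x, radius)| / n: fraction of customers whose radius is exceeded."""
    return float((delta < radius).mean())
```

Sorting the returned `delta` once per epoch gives exactly the increasing order needed to prune the sets $H_{s'}$ lazily.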
\begin{remark}
This $\delta(x,i)$ differs from our definition in Section~\ref{sec:pre} in two ways. First, it ensures $\nabla f_i(y) \approx \nabla f_i(x)$ as opposed to exact equality; for practical purposes this is not a big issue, and we choose $\theta=5$ in our experiments. Second, $\|y-x\|_\infty \leq \delta(x,i)$ gives a bigger ``safe region'' than $\|y-x\| \leq \delta(x,i)$; thus, when implementing ${\hyperref[alg:recycle-svrg]{\mathtt{SVRG^{lin}}}}$, we adopt $\|\cdot\|_\infty$ as the norm of choice.
\end{remark}
\begin{remark}
Figure~\ref{fig:B} also confirms that Assumption~\ref{ass:psi} holds in practice. Recall that Assumption~\ref{ass:psi} was used in proving the theoretical performance of ${\hyperref[alg:recycle-gd]{\mathtt{GD^{lin}}}}$. We see that $|B(x,r)|$ is indeed increasing in $r$.
\end{remark}
\begin{figure*}[t!]
\centering
\vspace{-4mm}
\subfigure[\label{fig:dualGap:1} ${\hyperref[alg:recycle-svrg]{\mathtt{SVRG^{lin}}}}$ vs. $\mathtt{SVRG}$ and $\mathtt{SAGA}$ (dual error)] {\includegraphics[page=1,trim={20mm 80mm 20mm 80mm},clip,height=0.25\textwidth]{DualSVRG-reformat}}
\hspace{10mm}
\subfigure[\label{fig:dualGap:2} ${\hyperref[alg:recycle-scsg]{\mathtt{SCSG^{lin}}}}$ vs. $\mathtt{SCSG}$ (dual error)] {\includegraphics[page=1,trim={20mm 80mm 20mm 80mm},clip,height=0.25\textwidth]{DualSCSG-reformat}}
\hspace{10mm}
\subfigure[\label{fig:primal_time:1}${\hyperref[alg:recycle-svrg]{\mathtt{SVRG^{lin}}}}$ vs.
$\mathtt{SVRG}$ and $\mathtt{SAGA}$ (primal error)] {\includegraphics[page=1,trim={25mm 90mm 20mm 100mm},clip,height=0.25\textwidth]{Primal1}}
\hspace{10mm}
\subfigure[\label{fig:primal_time:2}${\hyperref[alg:recycle-scsg]{\mathtt{SCSG^{lin}}}}$ vs. $\mathtt{SCSG}$ (primal error)] {\includegraphics[page=1,trim={25mm 90mm 20mm 100mm},clip,height=0.25\textwidth]{Primal2}}
\hspace{10mm}
\subfigure[\label{fig:running_time:1}${\hyperref[alg:recycle-svrg]{\mathtt{SVRG^{lin}}}}$ vs. $\mathtt{SVRG}$ and $\mathtt{SAGA}$ (running time)] {\includegraphics[page=1,trim={22mm 90mm 20mm 100mm},clip,height=0.25\textwidth]{Time1}}
\hspace{10mm}
\subfigure[\label{fig:running_time:2}${\hyperref[alg:recycle-scsg]{\mathtt{SCSG^{lin}}}}$ vs. $\mathtt{SCSG}$ (running time)] {\includegraphics[page=1,trim={22mm 90mm 20mm 100mm},clip,height=0.25\textwidth]{Time2}}
\vspace{-3mm}
\caption{\label{fig:exp-LP} Performance comparison for the revenue management LP.}
\end{figure*}
\subsection{Performance Comparison}
We consider solving the dual problem \eqref{eqn:the-dual}. In Figures~\ref{fig:dualGap:1} and \ref{fig:dualGap:2}, we plot the optimization error of \eqref{eqn:the-dual} as a function of $\#\mathrm{grad}/n$, the number of stochastic gradient computations divided by $n$, also known as the \emph{number of passes over the dataset}.
Figure~\ref{fig:dualGap:1} compares our ${\hyperref[alg:recycle-svrg]{\mathtt{SVRG^{lin}}}}$ to $\mathtt{SVRG}$ and $\mathtt{SAGA}$ (each with its 3 best-tuned learning rates), and Figure~\ref{fig:dualGap:2} compares our ${\hyperref[alg:recycle-scsg]{\mathtt{SCSG^{lin}}}}$ to $\mathtt{SCSG}$ (also with its 3 best-tuned learning rates).%
\footnote{Each epoch of $\mathtt{SVRG}$ consists of a full gradient computation and $2n$ iterations, totaling $3n$ computations of (new) stochastic gradients. (We do not count the computation of $\nabla f_i(0)$ at $x=0$.) Each epoch of $\mathtt{SCSG}$ needs to compute a batch gradient of size $|S|$, followed by $2|S|$ iterations, each computing 2 new stochastic gradients.}
We can see that ${\hyperref[alg:recycle-svrg]{\mathtt{SVRG^{lin}}}}$ stays close to $\mathtt{SVRG}$ and $\mathtt{SAGA}$ during the first 5--7 passes of the data. This is because initially, $x$ moves fast and usually cannot stay within the lingering radii for most indices $i$.
After that period, ${\hyperref[alg:recycle-svrg]{\mathtt{SVRG^{lin}}}}$ requires a dramatically smaller number of gradient computations: as $x$ moves more and more slowly, it becomes easier for $x$ to stay within the lingering radii. It is interesting to note that ${\hyperref[alg:recycle-svrg]{\mathtt{SVRG^{lin}}}}$ does not significantly improve the optimization error as a function of the number of epochs; the improvement primarily lies in reducing the number of gradient computations per epoch. The comparison is similar for ${\hyperref[alg:recycle-scsg]{\mathtt{SCSG^{lin}}}}$ vs. $\mathtt{SCSG}$.
\subsection{Performance Comparison on the Primal LP Objective}
When $f(x)$ is only approximately minimized, the optimization error of the dual \eqref{eqn:the-dual} does not represent the error for the primal LP \eqref{eqn:the-LP}. Therefore, the more interesting notion is the \emph{primal error}, defined as
\[ \left[\mathtt{OPT} - \sum_{j \in [d]} r_j \min\Big(b_j, \sum_{i\in[n]} p_{i,j} y_{i,j}\Big) \right] \Big/ \mathtt{OPT} ,\]
where $\mathtt{OPT}$ is the optimal objective value of \eqref{eqn:the-LP}, and $y$ is given by \eqref{eqn:ybyx}.
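Evaluating this truncated primal error is straightforward once an assignment matrix $y$ has been recovered from the dual point (a minimal sketch; variable names are illustrative):

```python
import numpy as np

def primal_error(opt, r, b, P, Y):
    """Truncated primal error of the packing LP.

    Y is the n-by-d assignment matrix y_{i,j}; demand assigned beyond a
    resource's capacity b_j earns no revenue (it is truncated)."""
    demand = (P * Y).sum(axis=0)               # sum_i p_{i,j} y_{i,j} per resource
    revenue = float((r * np.minimum(b, demand)).sum())
    return (opt - revenue) / opt
```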
This primal error captures the inefficiency caused by the infeasibility of $y$. Indeed, when $x$ is not the exact minimizer, the amount of demand assigned to a resource $j$ may exceed its capacity $b_j$. If this happens, in the above expression, we measure the primal objective with respect to $y$ by truncating all the demand that exceeds capacity. Figure~\ref{fig:primal_time:1} compares ${\hyperref[alg:recycle-svrg]{\mathtt{SVRG^{lin}}}}$ to $\mathtt{SVRG}$ and $\mathtt{SAGA}$ in terms of the primal error, while Figure~\ref{fig:primal_time:2} compares ${\hyperref[alg:recycle-scsg]{\mathtt{SCSG^{lin}}}}$ to $\mathtt{SCSG}$. (Here, for simplicity, we have only plotted the prior works with their best-tuned learning rates.) We can see that the standard stochastic descent algorithms need to spend more than 30 passes of the data in order to achieve a primal error below $10^{-4}$, while ${\hyperref[alg:recycle-svrg]{\mathtt{SVRG^{lin}}}}$ and ${\hyperref[alg:recycle-scsg]{\mathtt{SCSG^{lin}}}}$ converge to $10^{-6}$ within no more than 6 passes of the data.
Note that the primal error also contains the loss caused by regularization (recall that we have chosen $\mu = 10^{-5}$); our algorithms quickly achieve small primal errors comparable to $\mu$. In Figures~\ref{fig:running_time:1} and \ref{fig:running_time:2}, we also compare the running times of the algorithms.
\section{Experiments on SVM}
\label{sec:exp:svm}
In this section, we construct an SVM objective for the binary classification task on the real-life libsvm Adult dataset~\citep{LibSVMdata}.
\subsection{Experiment Setup}
The Adult dataset contains $n=32{,}561$ data points of $d=123$ dimensions. We re-scale the data points by a common constant so that their average Euclidean norm is 1. We choose $\lambda=1/n$ as the regularizer weight for the SVM objective.
\begin{itemize}
\item We have implemented the vanilla PEGASOS method of \citet{Shalev-Shwartz2011pegasos}, which does not require any parameter tuning and directly applies to the original SVM objective~\eqref{eqn:svm-obj}.
\item For the existing methods GD, SVRG, SCSG and our new methods ${\hyperref[alg:recycle-gd]{\mathtt{GD^{lin}}}}, {\hyperref[alg:recycle-svrg]{\mathtt{SVRG^{lin}}}}, {\hyperref[alg:recycle-scsg]{\mathtt{SCSG^{lin}}}}$, we run them both on the original SVM objective~\eqref{eqn:svm-obj} (denoted by $\mu=0$) and on the smoothed SVM objective~\eqref{eqn:svm-obj:smooth} (using $\mu = 0.01$). We point out that the theory for many of these methods does not apply to the original non-smooth SVM objective, but in practice this is not an issue.
\item For each algorithm (except PEGASOS), we tune learning rates from the set $\eta \in \{10^{-k}, 2.5\times 10^{-k}, 5\times 10^{-k}, 7.5\times 10^{-k}\}$, and report the best-tuned performance.
\item When plotting the training performance of any of these methods, we always report the vanilla SVM objective~\eqref{eqn:svm-obj}, never the smoothed alternative.%
\footnote{In machine learning, the smoothed SVM objective~\eqref{eqn:svm-obj:smooth} is usually viewed as an auxiliary objective whose purpose is to help minimize the original SVM objective~\eqref{eqn:svm-obj}.}
\end{itemize}
We discuss implementation details of ${\hyperref[alg:recycle-svrg]{\mathtt{SVRG^{lin}}}}$, ${\hyperref[alg:recycle-scsg]{\mathtt{SCSG^{lin}}}}$, and ${\hyperref[alg:recycle-gd]{\mathtt{GD^{lin}}}}$ in Appendix~\ref{app:implementation}.
\begin{wrapfigure}{r}{0.4\textwidth}
\vspace{-10mm}\centering
\includegraphics[page=1,trim={20mm 90mm 20mm 70mm},clip,height=0.25\textwidth]{IllustrationB-svm}
\caption{\label{fig:B:svm}$|B(x,r)|/n$ as a function of $r$ for SVM. Dashed curve is for $x=\vec 0$, and solid curve is for $x$ near the optimum. \vspace{-5mm}}
\end{wrapfigure}
\subsection{Illustration of Lingering Radius}
The calculation of lingering radii is simple for SVM.
Recall that for the smoothed SVM objective, each stochastic gradient (ignoring the regularizer term $\frac{\lambda}{2}\|x\|^2$) can be explicitly written as%
\footnote{The gradient of the regularizer is $\lambda x$ and can be calculated efficiently without reading the dataset. For this reason, we do not need to consider it when calculating the lingering radius.}
\begin{equation}\label{eqn:svm-grad:smooth}
\nabla f_i(x) = \begin{cases} 0, & \text{if $b_i \langle x, a_i\rangle \geq 1$;} \\ - b_i a_i, & \text{if $b_i \langle x, a_i\rangle \leq 1-\mu$;} \\ - \frac{1 - b_i \langle x, a_i\rangle}{\mu} \, b_i a_i, & \text{otherwise.} \end{cases}
\end{equation}
From this definition, we can calculate $\delta(x,i)$ as follows. If $b_i \langle x, a_i\rangle \geq 1$, we set $\delta(x,i) = \frac{b_i \langle x, a_i\rangle - 1}{\|a_i\|}$; if $b_i \langle x, a_i \rangle \leq 1 - \mu$, we set $\delta(x,i) = \frac{1 - \mu - b_i \langle x, a_i \rangle }{\|a_i\|}$; and otherwise, we set $\delta(x,i) = 0$. It is clear from the gradient formula that this definition is valid. In Figure~\ref{fig:B:svm}, we plot $|B(x, r)|/n$ as a function of $r$. (We choose the smoothed version with $\mu=0.01$; the non-smooth version is only better.)
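This piecewise rule is easy to vectorize (a sketch; rows of `A` are the $a_i$, `y` holds the labels $b_i$, and inside the smoothing band we use radius zero since the gradient there varies with $x$):

```python
import numpy as np

def svm_lingering_radius(x, A, y, mu):
    """delta(x, i) for the smoothed hinge loss, one value per data point.

    The stochastic gradient at point i stays constant while x moves by
    less than delta(x, i); inside the smoothing band the radius is zero."""
    margin = y * (A @ x)                  # b_i <x, a_i>
    norms = np.linalg.norm(A, axis=1)
    delta = np.zeros(len(y))
    hi = margin >= 1.0                    # gradient is 0 in this region
    lo = margin <= 1.0 - mu               # gradient is -b_i a_i here
    delta[hi] = (margin[hi] - 1.0) / norms[hi]
    delta[lo] = (1.0 - mu - margin[lo]) / norms[lo]
    return delta
```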
We see that for practical data, $|B(x,r)|/n$ is indeed bounded from above by an almost-linear function of $r$, as the theory predicts (see Section~\ref{sec:B}).
\begin{figure*}[t!]
\centering
\vspace{-4mm}
\subfigure[\label{fig:svm:svrg}${\hyperref[alg:recycle-svrg]{\mathtt{SVRG^{lin}}}}$ vs. $\mathtt{SVRG}$, $\mathtt{SAGA}$ and $\mathtt{PEGASOS}$] {\includegraphics[page=1,trim={20mm 90mm 20mm 90mm},clip,height=0.20\textwidth]{svm-SVRG}}
\hspace{10mm}
\subfigure[\label{fig:svm:svrg-time}${\hyperref[alg:recycle-svrg]{\mathtt{SVRG^{lin}}}}$ vs. $\mathtt{SVRG}$, $\mathtt{SAGA}$ and $\mathtt{PEGASOS}$] {\includegraphics[page=1,trim={20mm 90mm 20mm 90mm},clip,height=0.20\textwidth]{svm-SVRG-time}}
\hspace{10mm}
\subfigure[\label{fig:svm:scsg}${\hyperref[alg:recycle-scsg]{\mathtt{SCSG^{lin}}}}$ vs. $\mathtt{SCSG}$] {\includegraphics[page=1,trim={20mm 90mm 20mm 90mm},clip,height=0.20\textwidth]{svm-SCSG}}
\hspace{10mm}
\subfigure[\label{fig:svm:scsg-time}${\hyperref[alg:recycle-scsg]{\mathtt{SCSG^{lin}}}}$ vs. $\mathtt{SCSG}$] {\includegraphics[page=1,trim={20mm 90mm 20mm 90mm},clip,height=0.20\textwidth]{svm-SCSG-time}}
\hspace{10mm}
\subfigure[\label{fig:svm:gd}${\hyperref[alg:recycle-gd]{\mathtt{GD^{lin}}}}$ vs. $\mathtt{GD}$] {\includegraphics[page=1,trim={20mm 90mm 20mm 90mm},clip,height=0.20\textwidth]{svm-GD}}
\hspace{10mm}
\subfigure[\label{fig:svm:gd-time}${\hyperref[alg:recycle-gd]{\mathtt{GD^{lin}}}}$ vs. $\mathtt{GD}$] {\includegraphics[page=1,trim={20mm 90mm 20mm 90mm},clip,height=0.20\textwidth]{svm-GD-time}}
\vspace{-3mm}
\caption{\label{fig:svm:in-pass} Performance comparison for training SVM.}
\end{figure*}
\subsection{Performance Comparison}
We consider two types of performance charts.
The first is the optimization error of the original SVM objective~\eqref{eqn:svm-obj} as a function of $\#\mathrm{grad}/n$, also known as the number of passes over the dataset; the second is the optimization error as a function of the running time.
\begin{itemize}
\item Figures~\ref{fig:svm:svrg} and \ref{fig:svm:svrg-time} compare ${\hyperref[alg:recycle-svrg]{\mathtt{SVRG^{lin}}}}$ to $\mathtt{SVRG}$ and $\mathtt{SAGA}$ (each with its best-tuned learning rates) as well as to $\mathtt{PEGASOS}$. Due to the non-smooth nature of SVM, the existing methods all have a hard time achieving high accuracy, such as $10^{-5}$ error; in contrast, ${\hyperref[alg:recycle-svrg]{\mathtt{SVRG^{lin}}}}$ easily achieves this accuracy within 30 passes of the data. Note also that if an algorithm is trained on the smoothed SVM objective~\eqref{eqn:svm-obj:smooth} with $\mu>0$, then its optimization error does not converge to zero on the original SVM objective~\eqref{eqn:svm-obj}.
\item Figures~\ref{fig:svm:scsg} and~\ref{fig:svm:scsg-time} compare our ${\hyperref[alg:recycle-scsg]{\mathtt{SCSG^{lin}}}}$ (with best-tuned learning rate) to $\mathtt{SCSG}$ (with two of the best-tuned learning rates). Again, there is a clear performance gain from taking the lingering of gradients into account. However, for this task of SVM, ${\hyperref[alg:recycle-scsg]{\mathtt{SCSG^{lin}}}}$ and $\mathtt{SCSG}$ do not seem to outperform the corresponding ${\hyperref[alg:recycle-svrg]{\mathtt{SVRG^{lin}}}}$ and $\mathtt{SVRG}$. \item Figures~\ref{fig:svm:gd} and~\ref{fig:svm:gd-time} compare our ${\hyperref[alg:recycle-gd]{\mathtt{GD^{lin}}}}$ (with best-tuned learning rate) to $\mathtt{GD}$ (with two of the best-tuned learning rates). Recall that ${\hyperref[alg:recycle-gd]{\mathtt{GD^{lin}}}}$ serves as \emph{theoretical} evidence that exploiting the lingering of gradients can drastically improve the \emph{convergence rate} of the method. From these two plots, one can verify that this is indeed so, and the performance gain is very similar to what theory predicts in Figure~\ref{fig:svm:theory}.
\end{itemize} \section{Theoretical Evidence for Assumption~\ref{ass:psi}} \label{sec:B} Recall that when proving our theoretical results in Section~\ref{sec:theory}, we made Assumption~\ref{ass:psi}, which says that $|B(x,r)|$ is bounded by an explicit increasing function in $r$. In this section, in the context of SVM, we demonstrate why this is so under a natural randomness assumption on the data. \begin{assumption}\label{ass:svm-data} Suppose there exist parameters $\sigma>0$ and $\kappa \geq 1$ such that the following holds. Each data point $a_i \in \mathbb{R}^d$ is drawn independently from a Gaussian distribution $a_i \sim \mathcal{N}(\mu_i, \Sigma_i)$ with the property that $\Sigma_i \preceq \frac{\sigma^2}{d} \mathbf{I}$ and $\sup_{x\in\mathbb{R}^d} \big\{ \frac{|x^\top \mu_i |}{\sqrt{x^\top \Sigma_i x}} \big\} \leq \kappa$. \end{assumption} As a simple example, if $\Sigma_i = \mathbf{I}$ and $\|\mu_i\|\leq 10$, then one can choose $\sigma=\sqrt{d}$ and $\kappa = 10$. \begin{remark} \fistulare{ass:svm-data} is quite natural in the following sense. The bound $\Sigma_i \preceq \frac{\sigma^2}{d} \mathbf{I}$ ensures that the data point $a_i$ has a bounded Euclidean norm.
The bound $\frac{|x^\top \mu_i |}{\sqrt{x^\top \Sigma_i x}} \leq \kappa$ ensures that $a_i$ cannot be very adversarial: the projection of $\mu_i$ onto any direction $x$ must be bounded by the amount of randomness in that direction. \end{remark} The main theorem of this section shows that Assumption~\ref{ass:psi} holds with $\beta=1$ when the data is sufficiently random: \begin{theorem}\label{thm:B} Under Assumption~\ref{ass:svm-data}, for every $\alpha_0 \in (0,1)$ and $B > 0$, as long as $n \geq \Omega(\frac{d}{\alpha_0} \log \frac{B \sigma \kappa}{\alpha_0})$, with probability at least $1-e^{-\Omega(\alpha_0 n)}$ the following holds: for every $x\in\mathbb{R}^d$ with $\|x\|\leq B$ and every $r\geq 0$, \begin{align* \end{theorem} \subsection{Proof of Theorem~\ref{thm:B}} The proof of Theorem~\ref{thm:B} consists of several steps of careful probabilistic derivations. In the first lemma below, we fix some vector $x\in\mathbb{R}^d$, some radius $r\geq 0$, and some data point $i\in [n]$.
Using the randomness of $a_i$, this lemma upper bounds the probability that the value $b_i \langle x, a_i \rangle$ lies in a ``bad'' interval and that $\|a_i\|$ is too large. \begin{lemma}\label{lem:B:prob} Under Assumption~\ref{ass:svm-data}, for every $\alpha_0 \in (0,1)$ there exists a parameter $\xi \geq 1$ such that, for every $i\in[n]$, every $x\in\mathbb{R}^d$, and every $r\geq 0$, it satisfies \mnemonismo \end{lemma} \begin{proof}[Proof of Lemma~\ref{lem:B:prob}] We first note by the tail bound for the chi-square distribution that $\Pr[\|a_i\| > \xi \sigma] \leq e^{-\xi^2 d / 5}$. As for the other probability, \begin{align* Now, notice that $\langle x, a_i \rangle \sim \mathcal{N}(x^\top \mu, x^\top \Sigma x)$. Therefore, defining $g = \frac{\langle x, a_i\rangle - x^\top \mu}{\sqrt{x^\top \Sigma x}}$, we have $g\sim \mathcal{N}(0,1)$. Therefore \begin{align* On the one hand, by properties of the standard Gaussian, there exists $t$ such that the above probability satisfies $$\Xi = \Pr\Big[ g \in \Big[t, t+ \frac{4 r \xi \sigma + \mu}{\sqrt{x^\top \Sigma x}} \Big] \Big] \leq \frac{1}{\sqrt{2\pi}} \frac{4 r \xi \sigma + \mu}{\sqrt{x^\top \Sigma x}} \enspace. $$ On the other hand, by our assumption $\frac{|x^\top \mu|}{\sqrt{x^\top \Sigma x}} \leq \kappa$.
If $\frac{1}{\sqrt{x^\top \Sigma x}} \geq 2\kappa$, then we know that $g$ lying in the above range implies $g \geq \frac{1/2}{\sqrt{x^\top \Sigma x}} - \frac{2 r \xi \sigma + \mu}{\sqrt{x^\top \Sigma x}}$. Thus, if both $\frac{1}{\sqrt{x^\top \Sigma x}} \geq 2\kappa$ and $2 r \xi \sigma + \mu \leq 1/4$ are satisfied, the above probability is at most \begin{align* Finally, let $\nu\geq 2\kappa$ be a parameter to be chosen later. Consider the following two possibilities: $\frac{1}{\sqrt{x^\top \Sigma x}} \geq \nu$ and $\frac{1}{\sqrt{x^\top \Sigma x}} \leq \nu$. In the former case we have $\Xi \leq e^{-\Omega(\nu^2)}$, and in the latter case we have $\Xi \leq O\big( (r \xi \sigma + \mu) \nu \big)$. In sum, we have \erotizzazione For a similar reason, we also have \motoaratrice Together, we conclude that for any parameters $\xi \geq 1$ and $\nu \geq 2\kappa$, \vigile If we choose $\xi = 1 + \Theta( \sqrt{ d^{-1} \log\frac{1}{\alpha_0} } )$ and $\nu = 2 \kappa + \Theta( \sqrt{ \log \frac{1}{\alpha_0} }) $, then \rilavorazione \end{proof} The next lemma extends Lemma~\ref{lem:B:prob} in two directions.
First, it applies a Chernoff bound to turn the probability upper bound of Lemma~\ref{lem:B:prob} into an upper bound on the actual number of data points; second, it applies a standard epsilon-net argument to turn Lemma~\ref{lem:B:prob} into a ``for all'' statement with respect to all $x$ and all $r$. \begin{lemma}\label{lem:B:count} Under Assumption~\ref{ass:svm-data}, for every $\alpha_0 \in (0,1)$ there exists some parameter $\xi \geq 1$ such that, as long as $n \geq \Omega(\frac{d}{\alpha_0} \log \frac{B \sigma \kappa}{\alpha_0})$, with probability at least $1-e^{-\Omega(\alpha_0 n)}$, for every $x\in\mathbb{R}^d$ with $\|x\|\leq B$ and every $r\geq 0$, \esterificazione \end{lemma} \begin{proof}[Proof of Lemma~\ref{lem:B:count}] We first apply Lemma~\ref{lem:B:prob} and a Chernoff bound to derive that, for fixed $x\in \mathbb{R}^d$ and fixed $r\geq 0$, with probability at least $1 - e^{-\Omega(\alpha_0 n)}$ over the randomness of $a_1,\dots,a_n$, \begin{align}\label{eqn:B:count:single Next, we essentially want to take a union bound with respect to all possible $x$ and $r$.
As for $r\geq 0$, it suffices for us to consider only $\Omega\big( \frac{\alpha_0}{\sigma \kappa \log (1/\alpha_0) } \big) \leq r \leq O \big( \frac{1}{\sigma\kappa \log(1/\alpha_0)} \big)$. This is because if $r$ is larger than this upper bound then the lemma becomes trivial, and if $r$ is smaller than this lower bound then we can replace it with the lower bound and proceed with the proof. Furthermore, because we are hiding constants inside the big-$O$ notation, it suffices to consider only finitely many values of $r$ in this interval: for instance, $r = r_0 \cdot 1.1^k$ for every $k\in\{0,1,2,\dots\}$, where $r_0 = \Theta\big( \frac{\alpha_0}{\sigma \kappa \log (1/\alpha_0) } \big)$. As for $x$, we cover the space of $\|x\|\leq B$ by an $\varepsilon$-net with $\varepsilon=r_0 / 10$. This net has $e^{O(d \log(B/r_0))}$ many points $x$ and satisfies that each $x$ with $\|x\|\leq B$ is $\varepsilon$-close to at least one point in the net.
Applying a union bound, we know that as long as $\alpha_0 n \geq \Omega(d \log \frac{B \sigma \kappa}{\alpha_0})$, the above \eqref{eqn:B:count:single} holds for every $x$ in this $\varepsilon$-net and every $r = r_0 \cdot 1.1^k$ in the interval. It is not hard to derive that, as a consequence, for all $x$ with $\|x\|\leq B$ and all $\Omega\big( \frac{\alpha_0}{\sigma \kappa \log (1/\alpha_0) } \big) \leq r \leq O \big( \frac{1}{\sigma\kappa \log(1/\alpha_0)} \big)$, it satisfies \begin{align}\label{eqn:B:count:final \eqref{eqn:B:count:final} then implies that for all $x$ with $\|x\|\leq B$ and all $r\geq 0$, \begin{align* \end{proof} We are now ready to prove Theorem~\ref{thm:B}. \begin{proof}[Proof of Theorem~\ref{thm:B}] We first observe that for each $i\in [n]$ and $x\in \mathbb{R}^d$, \begin{align* This is because if $b_i \langle x, a_i \rangle \leq 1-r \xi \sigma - \mu$ and $\|a_i\| \leq \xi \sigma$, then $b_i \langle y, a_i \rangle \leq 1- \mu$ and hence $\nabla f_i(x) = \nabla f_i(y) = - b_i a_i$; similarly, if $b_i \langle x, a_i \rangle \geq 1 + r \xi \sigma$ and $\|a_i\| \leq \xi \sigma$, then $\nabla f_i(x) = \nabla f_i(y) = b_i a_i$.
As a result, \begin{align* \end{proof} \section*{Acknowledgements} We would like to thank Greg Yang, Ilya Razenshteyn, and S\'{e}bastien Bubeck for discussing the motivation of this problem. \clearpage
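As an illustrative aside (not part of the formal proof), the conclusion of Theorem~\ref{thm:B} can be checked numerically on synthetic Gaussian data. For the SVM losses, a gradient $\nabla f_i$ can only change within a ball of radius $r$ around $x$ if $b_i\langle a_i, x\rangle$ falls within roughly $r\|a_i\|$ of the hinge point $1$, so counting such indices gives a proxy for $|B(x,r)|/n$, which should grow roughly linearly in $r$. The sketch below (a hypothetical setup; the dimension, sample size, and reference point are arbitrary choices) uses this proxy:

```python
import random, math

random.seed(0)
d, n = 20, 5000

# Data satisfying Assumption ass:svm-data with Sigma_i = I/d (so sigma = 1, mu_i = 0).
def gauss_vec():
    return [random.gauss(0.0, 1.0 / math.sqrt(d)) for _ in range(d)]

A = [gauss_vec() for _ in range(n)]
b = [random.choice((-1.0, 1.0)) for _ in range(n)]
x = [1.0] * d  # a fixed reference point, ||x|| = sqrt(d)

def boundary_fraction(r):
    """Fraction of i with b_i<a_i, x> within r*||a_i|| of the hinge point 1 --
    a proxy for |B(x, r)| / n."""
    cnt = 0
    for ai, bi in zip(A, b):
        ip = bi * sum(u * v for u, v in zip(ai, x))
        norm = math.sqrt(sum(u * u for u in ai))
        if abs(ip - 1.0) <= r * norm:
            cnt += 1
    return cnt / n

fractions = [boundary_fraction(r) for r in (0.05, 0.1, 0.2, 0.4)]
# The proxy for |B(x,r)|/n is nondecreasing in r (the intervals are nested).
assert all(f1 <= f2 for f1, f2 in zip(fractions, fractions[1:]))
```

For small $r$ the computed fractions grow roughly proportionally to $r$, consistent with the $\beta=1$ behavior the theorem predicts.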
\section{Introduction} Experimental studies (see, e.g., Refs.~\onlinecite{Kl-Mu,Kl-Mu2}) proved that layered superconductors can be treated as periodic structures where thin superconducting layers (with thicknesses $s$ of about 0.2~nm) are coupled through thicker dielectric layers (with thicknesses $d$ of about 1.5~nm and a dielectric constant $\varepsilon \sim 15$) via the \emph{intrinsic Josephson effect}. Strongly anisotropic high-temperature superconductor crystals $\rm Bi_2Sr_2CaCu_2O_{8+\delta}$ or artificial compounds ${\rm Nb/Al}$--${\rm AlO}_x/{\rm Nb}$ are examples of such materials. They are of great interest from both technological and fundamental-science viewpoints. For fundamental science, the interest in layered superconductors is related to the specific type of plasma formed there due to the layered structure, the so-called Josephson plasma. Because of the essential difference in the nature of the currents along and across the layers, this plasma is strongly anisotropic. Indeed, the strong currents along the layers are of the same nature as in bulk superconductors, while the relatively small currents across the layers are related to the Josephson effect. The strong current anisotropy provides the possibility for the propagation of specific Josephson plasma electromagnetic waves (JPWs) in layered superconductors (see, e.g., Refs.~\onlinecite{Thz-rev,rev2} and references therein). JPWs belong to the terahertz frequency range, which is important for various applications but not easily reachable with modern devices. The electrodynamics of layered superconductors is described by a set of coupled sine-Gordon equations~\cite{sine-gord,SG2,SG3,SG4,SG5,Thz-rev} for the gauge-invariant interlayer phase difference $\varphi$ of the order parameter. These equations are nonlinear due to the nonlinear relation $J\propto\sin\varphi$ between the Josephson interlayer current $J$ and $\varphi$.
Here we consider the case of weak nonlinearity, $|\varphi| \ll 1$, when $\sin\varphi$ can be expanded as $\sin\varphi \approx \varphi-\varphi^3/6$. As was shown in Refs.~\onlinecite{nl1,nl2,nl3,nl4}, even in this case non-trivial nonlinear phenomena accompanying the propagation of JPWs can be observed, e.g., the slowing down of light~\cite{nl1}, the self-focusing of terahertz pulses~\cite{nl1,nl2}, the excitation of nonlinear waveguide modes~\cite{nl3}, and the self-induced transparency of layered superconductors~\cite{nl4}. A noticeable change in the transparency of a cuprate superconductor with increasing wave amplitude was recently observed in an important experiment, Ref.~\onlinecite{dienst13}, where the excitation of Josephson plasma solitons led to an effective decrease of the Josephson resonance frequency. In this paper, we study theoretically the nonlinear interaction of electromagnetic waves with different polarizations in a slab of layered superconductor placed into a waveguide with ideal metal walls. The waveguide axis is assumed to be parallel to the superconducting layers (see Fig.~\ref{wavegAB}). Due to the strong current anisotropy of the layered superconductor-vacuum interface (the $yz$-plane), a transformation of the polarizations can be observed for the reflected and transmitted waves. We calculate the dependence of the transformation coefficients on the amplitude of the incident wave for the cases when the transverse magnetic (TM) mode transforms into the transverse electric (TE) mode and vice versa. The main result of our paper consists in revealing a specific superposition principle which is valid even in the \emph{nonlinear} case. We introduce two special mutually-orthogonal wave polarizations matched with the $y$-axis, which is perpendicular both to the waveguide axis (the $x$-axis) and to the crystallographic \textbf{c}-axis of the layered superconductor (the $z$-axis).
The magnetic field in the wave of the first polarization (we call it the H$_\perp$ polarization) and the electric field in the wave of the second polarization (we call it the E$_\perp$ polarization) are perpendicular to the $y$-axis. We show that, in the main order with respect to the anisotropy parameter, the waves of these polarizations interact with the layered superconductor \emph{independently}. The incident wave of the H$_\perp$ polarization has an electric field component parallel both to the vacuum-superconductor interface and to the crystallographic \textbf{ab}-plane. This wave excites strong shielding currents along the layers and, therefore, it penetrates into the sample over a short length in the form of an evanescent wave and reflects almost completely from the superconductor. At the same time, the wave of the E$_\perp$ polarization does not contain an electric field component parallel both to the sample surface and to the crystallographic \textbf{ab}-plane. Therefore, the shielding currents flow along the \textbf{c}-axis only, and they are relatively small in this case. So, the wave of the E$_\perp$ polarization can propagate in a layered superconductor and penetrates over long distances. This wave partially reflects from and partially transmits through the sample. We show that, in spite of the nonlinearity, the H$_\perp$ and E$_\perp$ waves practically do not interact. Therefore, to study the reflection and transmission of the TE and TM incident waves (or waves of any other polarization), we can perform the following steps: \begin{enumerate} \item Express the incident wave as the \textit{superposition} of two modes of the H$_\perp$ and E$_\perp$ polarizations. \item Study the reflection and transmission of these modes \textit{separately}. \item Represent the reflected and transmitted fields of the H$_\perp$ and E$_\perp$ modes as superpositions of the TE and TM modes.
\end{enumerate} \begin{figure}[h] \begin{center} \includegraphics [width=16cm]{f1.pdf} \caption{(Color online) Schematic geometry for waves propagating in a waveguide along the superconducting layers. Note that here S and I stand for superconducting and insulator layers, respectively. The light pink translucent layer (cut-off to show the sample inside) represents the walls of the waveguide.} \label{wavegAB} \end{center} \end{figure} This paper is organized as follows. In the next section, we derive the electromagnetic fields in the vacuum and in the layered superconductor and justify the superposition principle for the waves with H$_\perp$ and E$_\perp$ polarizations. In the third section, we study the nonlinear reflection and transmission of the E$_\perp$ waves through the slabs of layered superconductors. In the fourth section, we apply the revealed superposition principle for the cases of the TE and TM waves incidence. In the final section, we summarize the results obtained in the paper. \section{Electromagnetic field distribution} \subsection{Geometry of the problem} Consider a waveguide of lateral sizes $L_y$ and $L_z$ with a sample of layered superconductor of length $D$ inside it (see Fig.~\ref{wavegAB}). The coordinate system is chosen in such a way that the crystallographic $\mathbf{ab}$-plane of the layered superconductor coincides with the $xy$-plane, and the $\mathbf{c}$-axis is along the $z$-axis. An electromagnetic mode of frequency $\omega$ propagates in the waveguide along the $x$-axis, which is parallel to the superconducting layers. The incident wave partly reflects from the layered superconductor and partly transmits through it, as is shown schematically in Fig.~\ref{wavegAB}. 
The electric ${\vec E(\vec{r},t)}$ and magnetic ${\vec H(\vec{r},t)}$ fields in the waveguide can be expressed via the vector potential ${\vec A(\vec{r},t)}$ by the usual equations, \begin{equation}\label{HE} {\vec H}(\vec{r},t) = {\rm rot} \left[{\vec A}(\vec{r},t)\right], \quad {\vec E}(\vec{r},t)= - \frac{1}{c}\frac{\partial {\vec A}(\vec{r},t)}{\partial t}. \end{equation} The scalar potential is assumed to be equal to zero. Using the boundary conditions (the tangential components of the electric field are zero on the waveguide walls), we present the components of the vector potential in the following form: \begin{gather} A_x(\vec{r},t)=\mathcal{A}_x(x,t) \sin(k_y y)\sin(k_z z), \notag\\ A_y(\vec{r},t)=\mathcal{A}_y(x,t) \cos(k_y y)\sin(k_z z), \label{A_mult-s}\\ A_z(\vec{r},t)=\mathcal{A}_z(x,t) \sin(k_y y)\cos(k_z z), \notag \end{gather} where $k_y=\pi n_y/L_y$, $k_z=\pi n_z/L_z$; $n_y$ and $n_z$ are positive integer numbers that define the propagating mode in the waveguide. \subsection{Electromagnetic field in the vacuum} The electromagnetic field in the vacuum can be presented as a superposition of waves of the H$_\perp$ and E$_\perp$ polarizations. In the H$_\perp$-polarized wave, the magnetic field is perpendicular to the $y$-axis, \begin{equation}\label{pol_I} {\vec E}^{(1)} = \{E_x^{(1)}, E_y^{(1)}, E_z^{(1)}\}, \quad {\vec H}^{(1)} = \{H_x^{(1)}, 0, H_z^{(1)}\}, \end{equation} whereas the electric field is orthogonal to the $y$-axis in the wave of the E$_\perp$ polarization, \begin{equation}\label{pol_II} {\vec E}^{(2)} = \{E_x^{(2)}, 0, E_z^{(2)}\}, \quad {\vec H}^{(2)} = \{H_x^{(2)}, H_y^{(2)}, H_z^{(2)}\}. \end{equation} Hereafter, the superscripts (1) and (2) denote the H$_\perp$ and E$_\perp$ polarizations, respectively. The incident and reflected modes of the H$_\perp$ and E$_\perp$ polarizations propagate in the vacuum region $x<0$. 
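As a quick numerical illustration of the mode geometry above, the sketch below computes the transverse wavenumbers $k_y=\pi n_y/L_y$ and $k_z=\pi n_z/L_z$ for a given mode and recovers the longitudinal wavenumber $k_x$. The vacuum dispersion relation $k_x^2=k^2-k_y^2-k_z^2$ with $k=\omega/c$ is our assumption here (consistent with how $k$, $k_x$, $k_y$, $k_z$ enter the vector potentials); the numerical values of $L_y$, $L_z$, and $\omega$ are arbitrary illustrative choices.

```python
import math

C = 2.998e10  # speed of light, cm/s

def waveguide_kx(omega, Ly, Lz, ny=1, nz=1):
    """Transverse wavenumbers k_y = pi*n_y/L_y, k_z = pi*n_z/L_z and the
    longitudinal wavenumber k_x from the assumed vacuum dispersion
    k_x^2 = (omega/c)^2 - k_y^2 - k_z^2.  Returns None for k_x if the
    mode is below cutoff (evanescent)."""
    ky = math.pi * ny / Ly
    kz = math.pi * nz / Lz
    k = omega / C
    kx2 = k * k - ky * ky - kz * kz
    kx = math.sqrt(kx2) if kx2 > 0 else None
    return k, kx, ky, kz

# Illustrative numbers: a 0.1 cm x 0.1 cm waveguide driven at ~1 THz.
k, kx, ky, kz = waveguide_kx(omega=2 * math.pi * 1e12, Ly=0.1, Lz=0.1)
assert kx is not None and abs(kx**2 + ky**2 + kz**2 - k**2) < 1e-6 * k**2
```

For these illustrative numbers the $(n_y,n_z)=(1,1)$ mode propagates ($k_x$ is real); shrinking $L_y$ and $L_z$ pushes the mode below cutoff and $k_x$ becomes imaginary.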
The Maxwell equations give the following expressions for the amplitudes $\vec {\mathcal{A}}_{\rm inc}(x,t)$ of the vector potential of the incident wave: \begin{eqnarray} \mathcal{A}_{x\,{\rm inc}}(x,t)&=&-H_{\rm inc}^{(1)}\dfrac{k_xk_y}{k^3} \sin(k_xx-\omega t+\phi_{\rm inc}^{(1)}) \notag\\ &&-H_{\rm inc}^{(2)}\dfrac{k_z}{k^2} \sin(k_xx-\omega t+\phi_{\rm inc}^{(2)}), \notag\\ \mathcal{A}_{y\,{\rm inc}}(x,t)&=&-H_{\rm inc}^{(1)}\dfrac{k^2-k_y^2}{k^3} \cos(k_xx-\omega t+\phi_{\rm inc}^{(1)}), \label{A_inc} \\ \mathcal{A}_{z\,{\rm inc}}(x,t)&=&H_{\rm inc}^{(1)}\dfrac{k_yk_z}{k^3} \cos(k_xx-\omega t+\phi_{\rm inc}^{(1)}) \notag\\ &&-H_{\rm inc}^{(2)}\dfrac{k_x}{k^2} \cos(k_xx-\omega t+\phi_{\rm inc}^{(2)}), \notag \end{eqnarray} where $H_{\rm inc}^{(1)}$, $\phi_{\rm inc}^{(1)}$, $H_{\rm inc}^{(2)}$, and $\phi_{\rm inc}^{(2)}$ are the amplitudes and phases of the magnetic field for the incident waves of the H$_\perp$ and E$_\perp$ polarizations. Similar expressions can be written for the vector potential of the reflected waves, \begin{eqnarray} \mathcal{A}_{x\,{\rm ref}}(x,t)&=&-H_{\rm ref}^{(1)}\dfrac{k_xk_y}{k^3} \sin(k_xx+\omega t-\phi_{\rm ref}^{(1)}) \notag\\ &&+H_{\rm ref}^{(2)}\dfrac{k_z}{k^2} \sin(k_xx+\omega t-\phi_{\rm ref}^{(2)}), \notag\\ \mathcal{A}_{y\,{\rm ref}}(x,t)&=&-H_{\rm ref}^{(1)}\dfrac{k^2-k_y^2}{k^3} \cos(k_xx+\omega t-\phi_{\rm ref}^{(1)}), \label{A_ref} \\ \mathcal{A}_{z\,{\rm ref}}(x,t)&=&H_{\rm ref}^{(1)}\dfrac{k_yk_z}{k^3} \cos(k_xx+\omega t-\phi_{\rm ref}^{(1)}) \notag\\ &&+H_{\rm ref}^{(2)}\dfrac{k_x}{k^2} \cos(k_xx+\omega t-\phi_{\rm ref}^{(2)}), \notag \end{eqnarray} where $H_{\rm ref}^{(1)}$, $\phi_{\rm ref}^{(1)}$, $H_{\rm ref}^{(2)}$, and $\phi_{\rm ref}^{(2)}$ are the amplitudes and phases of the magnetic field for the reflected waves of the H$_\perp$ and E$_\perp$ polarizations. In the second vacuum region, at $x>D$, the transmitted waves of the H$_\perp$ and E$_\perp$ polarizations propagate. 
Their vector potential can be written as \begin{eqnarray} \mathcal{A}_{x\,{\rm tr}}(x,t)&=&-H_{\rm tr}^{(1)}\dfrac{k_xk_y}{k^3} \sin[k_x(x-D)-\omega t+\phi_{\rm tr}^{(1)}] \notag\\ &&-H_{\rm tr}^{(2)}\dfrac{k_z}{k^2} \sin[k_x(x-D)-\omega t+\phi_{\rm tr}^{(2)}], \notag\\ \mathcal{A}_{y\,{\rm tr}}(x,t)&=&-H_{\rm tr}^{(1)}\dfrac{k^2-k_y^2}{k^3} \notag\\ &&\times\cos[k_x(x-D)-\omega t+\phi_{\rm tr}^{(1)}], \label{A_tr} \\ \mathcal{A}_{z\,{\rm tr}}(x,t)&=&H_{\rm tr}^{(1)}\dfrac{k_yk_z}{k^3} \cos[k_x(x-D)-\omega t+\phi_{\rm tr}^{(1)}] \notag\\ &&-H_{\rm tr}^{(2)}\dfrac{k_x}{k^2} \cos[k_x(x-D)-\omega t+\phi_{\rm tr}^{(2)}], \notag \end{eqnarray} where $H_{\rm tr}^{(1)}$, $\phi_{\rm tr}^{(1)}$, $H_{\rm tr}^{(2)}$, and $\phi_{\rm tr}^{(2)}$ are the amplitudes and phases of the magnetic field for the transmitted waves of the H$_\perp$ and E$_\perp$ polarizations. \subsection{Electromagnetic field in the layered superconductor} The electromagnetic field in the layered superconductor is defined by the distribution $\varphi({\vec r},t)$ of the interlayer gauge-invariant phase difference of the order parameter. This phase difference is governed by a set of coupled sine-Gordon equations~\cite{sine-gord,SG2,SG3,SG4,SG5,Thz-rev}. Though this set does not take into account some important features of cuprate high-temperature superconductors (e.g., the d-wave pairing), it properly describes the propagation of electromagnetic waves in layered superconductors and allows for important predictions. For instance, a way to produce coherent terahertz radiation was proposed in Ref.~\onlinecite{Bul_kosh} on the basis of the coupled sine-Gordon equations and then realized experimentally~\cite{Ozyuzer}.
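The weak-nonlinearity expansion $\sin\varphi \approx \varphi - \varphi^3/6$ used throughout this paper can be checked numerically. Since the Taylor series of $\sin\varphi$ is alternating with decreasing terms for $|\varphi|<1$, the truncation error is bounded by the first omitted term, $|\varphi|^5/120$. A minimal sketch (the grid and the amplitude $0.3$ are arbitrary illustrative choices):

```python
import math

def cubic_error(phi):
    """Error of the weak-nonlinearity approximation sin(phi) ~ phi - phi^3/6."""
    return abs(math.sin(phi) - (phi - phi**3 / 6.0))

# For |phi| <= 0.3 the error stays below 0.3^5/120 ~ 2e-5, so the cubic
# truncation of the Josephson current J_c*sin(phi) is accurate to ~1e-5*J_c.
phi_max = 0.3
errs = [cubic_error(phi_max * i / 1000.0) for i in range(1001)]
assert max(errs) < phi_max**5 / 120.0
```

This is why, for $|\varphi|\ll 1$, the cubic term is the only nonlinearity that needs to be retained in the analysis below.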
In the continuum limit, the coupled sine-Gordon equations reduce to \begin{equation}\label{3} \left(1-\lambda_{ab}^2\frac{\partial^2}{\partial z^2}\right)\left(\frac{1}{\omega_J^2}\frac{\partial^2 \varphi}{\partial t^2} + \sin\varphi\right)- \lambda_c^2\left(\frac{\partial^2 \varphi}{\partial x^2}+\frac{\partial^2 \varphi}{\partial y^2}\right)=0. \end{equation} Here $\lambda_{ab}$ and $\lambda_c=c/\omega_J\varepsilon^{1/2}$ are the London penetration depths across and along the layers, respectively, $\omega_J = (8\pi e d J_c/\hbar\varepsilon)^{1/2}$ is the Josephson plasma frequency, $J_c$ is the maximal Josephson current density, and $e$ is the elementary charge. We do not take into account the relaxation terms since they are small at low temperatures and do not play an essential role in the phenomena considered here. Note that the component $E_{z}$ of the electric field causes the breakdown of electro-neutrality of the superconducting layers and results in an additional, so-called capacitive, interlayer coupling. However, this coupling does not affect the propagation of the Josephson plasma waves along the waveguide and can be safely neglected because of the smallness of the parameter $\alpha = R_D^2\varepsilon/sd$. Here $R_D$ is the Debye length for a charge in the superconductor. In this case, the gauge of the vector potential can be chosen in such a way that the order parameter is real and the gauge-invariant phase difference $\varphi$ is related to the $z$-component of the vector potential by a simple expression (see, e.g., Ref.~\onlinecite{SG3}): \begin{equation}\label{Az} A_z = - \frac{\Phi_0}{2\pi d} \varphi, \end{equation} where $\Phi_0=\pi c \hbar/e$ is the magnetic flux quantum. 
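For orientation, the characteristic scales can be estimated numerically in Gaussian units from the definitions above, $\omega_J=(8\pi e d J_c/\hbar\varepsilon)^{1/2}$ and $\lambda_c=c/\omega_J\varepsilon^{1/2}$. The interlayer spacing $d\approx 1.5$~nm and $\varepsilon\approx 15$ follow the values quoted in the Introduction; the critical current density $J_c$ below is an assumed representative value, not a number taken from this paper.

```python
import math

# CGS-Gaussian constants
e_esu = 4.803e-10      # elementary charge, esu
hbar  = 1.055e-27      # erg*s
c     = 2.998e10       # cm/s

d   = 1.5e-7           # interlayer spacing, cm (~1.5 nm, as in the Introduction)
eps = 15.0             # interlayer dielectric constant (as in the Introduction)
J_c = 1e3 * 3e9        # assumed critical current density: 10^3 A/cm^2 in esu/(s*cm^2)

# Josephson plasma frequency and c-axis penetration depth
omega_J = math.sqrt(8 * math.pi * e_esu * d * J_c / (hbar * eps))
lam_c   = c / (omega_J * math.sqrt(eps))

# omega_J/2pi lands in the sub-terahertz range and lam_c in the 100-micron
# range, in line with the terahertz scale of JPWs discussed above.
f_J = omega_J / (2 * math.pi)
assert 1e10 < f_J < 1e12          # sub-THz
assert 1e-3 < lam_c < 1e-1        # cm, i.e. between 10 um and 1 mm
```

With these assumed inputs the estimate gives $\omega_J/2\pi$ of order $0.1$~THz, which is why JPW physics sits in the terahertz band.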
Note that Eq.~(\ref{3}) can be obtained from the more general wave equation, \begin{equation}\label{wave_equation} {\rm grad}\,{\rm div}{\vec A} - \Delta {\vec A}= - \frac{\varepsilon}{c^2}\frac{\partial ^2 {\vec A}}{\partial t^2} + \frac{4\pi}{c}{\vec J}, \end{equation} with the current components \begin{equation}\label{Jab} J_x = - \frac{c}{4\pi\lambda_{ab}^2}A_x, \quad J_y = - \frac{c}{4\pi\lambda_{ab}^2}A_y, \end{equation} and \begin{equation}\label{Jc} J_z = J_c \sin \varphi = - J_c \sin\left(\frac{2\pi d}{\Phi_0}A_z\right). \end{equation} Indeed, excluding $A_x$ and $A_y$ from Eq.~(\ref{wave_equation}) and using Eqs.~(\ref{Jab}) and (\ref{Jc}), we derive Eq.~\eqref{3}. Relations (\ref{HE}), (\ref{wave_equation}), (\ref{Jab}), and (\ref{Jc}) represent a complete set of equations for finding the electromagnetic field in the layered superconductor in the continuum limit. We will use it to study the propagation of weakly nonlinear JPWs with $|\varphi|\ll 1$, when the density of the Josephson current can be presented as $J_c (\varphi-\varphi^3/6)$. It is important to emphasize that strongly nonlinear phenomena can be observed in the propagation of JPWs even in the low-amplitude case, $|\varphi|\ll 1$, if the wave frequency is close to the cutoff frequency $\omega_{\rm cut}$ (see Refs.~\onlinecite{nl1,nl2,nl3,nl4}). Here $\omega_{\rm cut}$ is the minimum frequency of the linear JPWs that can propagate in the layered superconductor. It is convenient to represent the electromagnetic field inside a layered superconductor as a sum of waves with so-called \textit{ordinary} and \textit{extraordinary} polarizations. The electric field in the ordinary wave is perpendicular to the $z$-axis. Hence, the phase difference $\varphi$ and the component $A_z$ of the vector potential are equal to zero for this wave. Thus, the ordinary wave does not produce a current along the $z$-axis and, therefore, this mode is always \textit{linear}.
Taking into account that $A_z=0$, one can solve Eq.~\eqref{wave_equation} and obtain the components of the vector potential for the ordinary modes, \begin{eqnarray} \mathcal{A}_x^{\rm ord}&=&\dfrac{k_y}{k^2}\Big[e^{-p_xx}H_-^{\rm ord}\sin(\omega t-\phi_-^{\rm ord}) \notag\\ &&+\,e^{p_x(x-D)}H_+^{\rm ord}\sin(\omega t-\phi_+^{\rm ord})\Big], \notag\\ \mathcal{A}_y^{\rm ord}&=&\dfrac{p_x}{k^2}\Big[e^{-p_xx}H_-^{\rm ord}\sin(\omega t-\phi_-^{\rm ord}) \notag\\ &&-\,e^{p_x(x-D)}H_+^{\rm ord}\sin(\omega t-\phi_+^{\rm ord})\Big], \\ \mathcal{A}_z^{\rm ord}&=&0, \notag \end{eqnarray} where $H_-^{\rm ord}$, $\phi_-^{\rm ord}$, $H_+^{\rm ord}$, and $\phi_+^{\rm ord}$ are the amplitudes and phases of the decaying and growing evanescent fields inside the superconductor, and $p_x=\lambda_{ab}^{-1}$. The extraordinary polarization is perpendicular to the ordinary one, and the magnetic field in the wave of this polarization is perpendicular to the $z$-axis. This mode exhibits the \textit{nonlinear} properties of the Josephson plasma. We seek $A_z$ in the form of a wave with an $x$-dependent amplitude $a(x)$ and phase $\eta(x)$, \begin{equation}\label{varphi} \mathcal{A}_z^{\rm ext}= \mathcal{H}_0\tilde{\Omega}\lambda_c a(x)\sin[\omega t-\eta(x)] \end{equation} with \begin{equation}\label{kappa_pm} \mathcal{H}_0=\frac{4\sqrt{2}}{3\pi}\dfrac{\Phi_0}{d\lambda_c}, \quad \tilde{\Omega}=|\Omega^2-\Omega_{\rm cut}^2|^{1/2}, \quad \Omega=\frac{\omega}{\omega_J}. \end{equation} The cutoff frequency $\Omega_{\rm cut}$ is the minimum frequency of the linear extraordinary JPW which can propagate in the layered superconductor, \begin{equation}\label{omega-cut} \Omega_{\rm cut} = \Big(1+\dfrac{k_y^2 \lambda_c^2}{1+\lambda_{ab}^2 k_z^2}\Big)^{1/2}.
\end{equation} Introducing the dimensionless coordinate and the normalized length of the sample, \begin{equation}\label{notations_ab} \xi=\frac{x}{\lambda_c}\tilde{\Omega}\,, \qquad \delta=\frac{D}{\lambda_c}\tilde{\Omega}\,, \end{equation} and substituting Eq.~\eqref{varphi} into Eq.~\eqref{wave_equation}, one can obtain the other components of the vector potential for the extraordinary modes: \begin{align} &\mathcal{A}_x^{\rm ext}=\mathcal{H}_0\tilde{\Omega}^2\lambda_{ab}^2 k_z\left[a\sin(\omega t-\eta)\right]^\prime, \\\notag &\mathcal{A}_y^{\rm ext}=\mathcal{H}_0\tilde{\Omega} \lambda_{ab}^2 \lambda_ck_yk_z a\sin(\omega t-\eta), \end{align} and two differential equations for the functions $\eta (\xi)$ and $a(\xi)$, \begin{eqnarray}\label{from_sine-Gordon} (a^2\eta^\prime)'=0,\quad a^{\prime\prime}=- \sigma a-a^3+a{\eta^\prime}^{2}. \end{eqnarray} Here $\sigma = {\rm sign} (\Omega-\Omega_{\rm cut})$ and the prime denotes differentiation with respect to $\xi$. We will use these equations for the numerical calculations of the electromagnetic field distribution inside the sample of the layered superconductor. \subsection{Superposition principle} Matching the tangential components of the electric and magnetic fields at both interfaces (at $x=0$ and $x=D$) between the vacuum regions and the layered superconductor, we obtain two sets of equations.
The boundary conditions at the interface $x=0$ give the equations, \begin{subequations}\label{sys1} \begin{eqnarray} &\mu\big[\tilde{h}_{\rm inc}^{(1)}+\tilde{h}_{\rm ref}^{(1)}\big]=i(k_x\lambda_{c})^{-1}\alpha\gamma\tilde{h}_-^{\rm ord} -i\gamma^{2} a(0) e^{i \eta(0)}, \label{sys1-1}\quad\\ &\tilde{h}_{\rm inc}^{(1)}+\tilde{h}_{\rm ref}^{(1)}-\alpha \big[\tilde{h}_{\rm inc}^{(2)}-\tilde{h}_{\rm ref}^{(2)}\big]=i a(0) e^{i \eta(0)}, \label{sys1-2}\quad\\ &\mu\big[\tilde{h}_{\rm inc}^{(2)}+\tilde{h}_{\rm ref}^{(2)}\big]=\gamma^{2}\tilde{h}_-^{\rm ord}-\beta \big[a(\xi)e^{i\eta(\xi)}\big]_{\xi=0}', \label{sys1-3}\quad\\ &\alpha\big[\tilde{h}_{\rm inc}^{(1)}-\tilde{h}_{\rm ref}^{(1)}\big]+\tilde{h}_{\rm inc}^{(2)}+\tilde{h}_{\rm ref}^{(2)}= -\tilde{h}_-^{\rm ord}. \label{sys1-4}\quad \end{eqnarray} \end{subequations} Here we introduce the normalized amplitudes of the waves of H$_\perp$ and E$_\perp$ polarizations in the vacuum and of the ordinary waves in the layered superconductor, \begin{align}\label{h_nl} &\tilde{h}_{\rm inc,\,ref,\,tr}^{(1),\,(2)}=h_{\rm inc,\,ref,\,tr}^{(1),\,(2)} \exp\big[{i\phi_{\rm inc,\,ref,\,tr}^{(1),\,(2)}}\big], \notag\\ &h_{\rm inc,\,ref,\,tr}^{(1),\,(2)}=\dfrac{k_yk_z}{\mathcal{H}_0k^3\tilde{\Omega}\lambda_c} H_{\rm inc,\,ref,\,tr}^{(1),\,(2)},\\ &\tilde{h}_{\pm}^{\rm ord}=h_{\pm}^{\rm ord}\exp({i\phi_{\pm}^{\rm ord}}),\quad h_{\pm}^{\rm ord}=\dfrac{H_{\pm}^{\rm ord}}{\mathcal{H}_0k^3\tilde{\Omega}\lambda_c\lambda_{ab}^2},\notag \end{align} and the parameters, \begin{gather} \label{const} \alpha=\dfrac{kk_x}{k_yk_z}, \; \beta=\dfrac{\tilde{\Omega}}{kk_yk_z\lambda_{c}^{3}}, \; \mu=\dfrac{k^2-k_y^2}{k_y^{2}k_z^{2}\lambda_{c}^{2}}, \; \gamma=\dfrac{\lambda_{ab}}{\lambda_c} \end{gather} The boundary conditions at the interface $x=D$ give the equations, \begin{subequations}\label{sys2} \begin{eqnarray} &\mu\tilde{h}_{\rm tr}^{(1)}=-i(k_x\lambda_{c})^{-1}\alpha\gamma\tilde{h}_+^{\rm ord}-i\gamma^{2} a(\delta) e^{i\eta(\delta)}, 
\label{sys2-1}\\ &\tilde{h}_{\rm tr}^{(1)}-\alpha \tilde{h}_{\rm tr}^{(2)}=i a(\delta) e^{i\eta(\delta)}, \label{sys2-2}\\ &\mu\tilde{h}_{\rm tr}^{(2)}=\gamma^{2}\tilde{h}_+^{\rm ord}-\beta\big[a(\xi)e^{i\eta(\xi)}\big]_{\xi=\delta}', \label{sys2-3}\\ &\alpha\tilde{h}_{\rm tr}^{(1)}+\tilde{h}_{\rm tr}^{(2)}=-\tilde{h}_+^{\rm ord}.\label{sys2-4} \end{eqnarray} \end{subequations} Here we omit the terms with $\exp(-p_xD)$ because we assume that the sample length $D$ is much larger than the London penetration depth $\lambda_{ab}$. Since $\gamma=\lambda_{ab}/\lambda_c\ll1$, the right-hand side in Eq.~\eqref{sys1-1} is relatively small. So, in the main approximation with respect to the anisotropy parameter $\gamma$, we have \begin{equation} \tilde{h}_{\rm ref}^{(1)}\approx -\tilde{h}_{\rm inc}^{(1)}. \end{equation} Correspondingly, Eq.~\eqref{sys2-1} shows that the amplitude $\tilde{h}_{\rm tr}^{(1)}$ of the transmitted wave of the H$_\perp$ polarization is much less than the incident amplitude $\tilde{h}_{\rm inc}^{(1)}$, $|\tilde{h}_{\rm tr}^{(1)}/\tilde{h}_{\rm inc}^{(1)}|\sim \gamma\ll1$. This means that the incident mode of the H$_\perp$ polarization \textit{nearly completely reflects} from the layered superconductor. Moreover, this behavior of the H$_\perp$-polarized wave \textit{does not depend on the presence of the orthogonal mode with the E$_\perp$ polarization}. 
In the main approximation with respect to $\gamma$, relations~\eqref{sys1-2}, \eqref{sys1-3},~\eqref{sys2-2}, and~\eqref{sys2-3} give the following equations, which describe the reflection and transmission of the wave with the E$_\perp$ polarization: \begin{subequations}\label{sys3} \begin{eqnarray} &-\alpha \big(\tilde{h}_{\rm inc}^{(2)}-\tilde{h}_{\rm ref}^{(2)}\big)=i a(0) e^{i \eta(0)}, \\ &\mu\big(\tilde{h}_{\rm inc}^{(2)}+\tilde{h}_{\rm ref}^{(2)}\big)=-\beta e^{i\eta(0)}\left[a^\prime(0)+ia(0)\eta^\prime(0)\right], \\ &-\alpha \tilde{h}_{\rm tr}^{(2)}=i a(\delta) e^{i\eta(\delta)}, \\ &\mu\tilde{h}_{\rm tr}^{(2)}=-\beta e^{i\eta(\delta)}\big[a^\prime(\delta)+i a(\delta)\eta^\prime(\delta)\big]. \end{eqnarray} \end{subequations} This set of equations, as well as Eq.~\eqref{from_sine-Gordon}, does not contain the parameters of the H$_\perp$-polarized wave. This means that the reflection and transmission of the wave with the E$_\perp$ polarization \textit{do not depend on the presence of the mode of the H$_\perp$ polarization}. Thus, we have shown that, in the main approximation with respect to the anisotropy parameter $\gamma$, the modes of the H$_\perp$ and E$_\perp$ polarizations reflect and transmit through the layered superconductor independently of each other. This means that the superposition principle for these specific modes holds even in the strongly nonlinear regime. After calculating the reflected and transmitted amplitudes for the modes of the H$_\perp$ and E$_\perp$ polarizations, we can use Eqs.~\eqref{sys1-4} and~\eqref{sys2-4} to determine the amplitudes $\tilde{h}_-^{\rm ord}$ and $\tilde{h}_+^{\rm ord}$ of the ordinary modes in the layered superconductor.
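Equations~\eqref{from_sine-Gordon} together with the boundary relations~\eqref{sys3} form the boundary-value problem that is solved numerically. A minimal sketch of such an integration is given below (illustrative parameter values, not those used in the figures; eliminating $\tilde{h}_{\rm tr}^{(2)}$ from the last two relations of Eqs.~\eqref{sys3} gives $a'(\delta)=0$ and $\eta'(\delta)=\mu/\alpha\beta$, which serve as starting conditions at $\xi=\delta$):

```python
# Minimal sketch (illustrative parameters, not those used in the figures):
# integrate (a^2 eta')' = 0 and a'' = -sigma*a - a^3 + a*eta'^2 backwards
# from xi = delta, where a'(delta) = 0 and eta'(delta) plays the role of
# mu/(alpha*beta).  The combination C = a^2*eta' is conserved.

def rk4_step(f, y, h):
    k1 = f(y)
    k2 = f([yi + 0.5 * h * ki for yi, ki in zip(y, k1)])
    k3 = f([yi + 0.5 * h * ki for yi, ki in zip(y, k2)])
    k4 = f([yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h * (a + 2*b + 2*c + d) / 6.0
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def integrate(a_delta, eta_prime_delta, sigma=-1.0, delta=3.0, n=3000):
    C = a_delta**2 * eta_prime_delta        # conserved by (a^2 eta')' = 0
    def rhs(y):
        a, ap, eta = y
        return [ap, -sigma*a - a**3 + C**2 / a**3, C / a**2]
    y = [a_delta, 0.0, 0.0]                 # a, a', eta at xi = delta
    h = -delta / n                          # step backwards to xi = 0
    traj = [y]
    for _ in range(n):
        y = rk4_step(rhs, y, h)
        traj.append(y)
    return C, traj

def first_integral(y, C, sigma=-1.0):
    """Conserved 'energy' of the amplitude equation."""
    a, ap, _ = y
    return 0.5*ap**2 + 0.5*sigma*a**2 + 0.25*a**4 + 0.5*C**2 / a**2

C, traj = integrate(a_delta=0.7, eta_prime_delta=0.9)
E0 = first_integral(traj[0], C)
drift = max(abs(first_integral(y, C) - E0) for y in traj)
print("first-integral drift:", drift)
```

The conserved first integral written here is precisely the relation behind the analytic phase trajectories $a'(a)$ discussed later, so monitoring its numerical drift is a convenient accuracy test for the integrator.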
In order to illustrate the superposition principle, we solve three reflection-transmission problems: (i) the incident mode has the H$_\perp$ polarization only, with $h_{\rm inc}^{(1)}=2$; (ii) the incident mode has the E$_\perp$ polarization only, with $h_{\rm inc}^{(2)}=2$; (iii) the incident mode is a superposition of the waves of H$_\perp$ and E$_\perp$ polarizations with $h_{\rm inc}^{(1)}=h_{\rm inc}^{(2)}=2$. Figure~\ref{colorplots} shows the distribution of the normalized characteristic amplitude $\bar{W}$ of the electromagnetic field inside the waveguide for these three cases, \begin{equation} \label{norm-EMF} \bar{W}=\dfrac{k_yk_z}{\mathcal{H}_0k^3\tilde{\Omega}\lambda_c} \sqrt{\big\langle|{\vec E}(\vec{r},t)|^{2}+|{\vec H}(\vec{r},t)|^{2}\big\rangle_t}, \end{equation} where $\langle\ldots\rangle_t$ denotes averaging over time $t$. \begin{figure} \begin{center} \includegraphics*[width=14cm]{f2-colorplot.pdf} \caption{(Color online) Spatial distribution (over coordinates $x$ and $y$ at $z=L_z/3$) of the normalized amplitude $\bar{W}$ of the electromagnetic field, Eq.~\eqref{norm-EMF}, inside the waveguide. Panels (a), (b), and (c) correspond to the cases when the incident wave is of the H$_\perp$ polarization only, of the E$_\perp$ polarization only, and a mix of the H$_\perp$ and E$_\perp$ polarizations, respectively. In the panel (a): $h_{\rm inc}^{(1)}=2$, $h_{\rm inc}^{(2)}=0$; in the panel (b): $h_{\rm inc}^{(1)}=0$, $h_{\rm inc}^{(2)}=2$; in the panel (c): $h_{\rm inc}^{(1)}=h_{\rm inc}^{(2)}=2$. The color determines the value of the amplitude. The vertical straight lines show the edges of the superconducting sample. The parameters used here are: $\tilde{\Omega}= 0.1$, $\sigma=-1$, $D=15\lambda_c$, $L_y=L_z=0.1$~cm, $n_y=n_z=1$, $\phi_{\rm inc}^{(1)}=\phi_{\rm inc}^{(2)}=0$, $\lambda_c=4\cdot 10^{-3}$~cm, $\lambda_{ab}=2000$~\AA, $\omega_J/2\pi=0.3$~THz.
} \label{colorplots} \end{center} \end{figure} As seen in Fig.~\ref{colorplots}(a), the wave of H$_\perp$ polarization completely reflects from the sample of the layered superconductor and does not excite a propagating mode inside it. Only the evanescent mode, which decays in a very narrow region near the surface of the sample, exists. Hence, there is also no transmitted wave. The wave of E$_\perp$ polarization, Fig.~\ref{colorplots}(b), partly reflects from the layered superconductor and partly transmits through it. When we apply the mix of the waves of the H$_\perp$ and E$_\perp$ polarizations with the same amplitudes, Fig.~\ref{colorplots}(c), the field distribution in the superconductor and in the right vacuum region is the same as in Fig.~\ref{colorplots}(b). This demonstrates that the modes of the H$_\perp$ and E$_\perp$ polarizations do not interact with each other and can be treated independently. \section{Transmission and reflection of the E$_\perp$-polarized waves}\label{Ep} In this section, we consider the nonlinear phenomena in the reflection and transmission of the E$_\perp$-polarized waves through the sample of the layered superconductor placed inside of the waveguide. Using Eqs.~\eqref{from_sine-Gordon} and~\eqref{sys3}, we calculate the amplitudes of the reflected and transmitted waves. We start from the analysis of the phase trajectories $a'(a)$ that correspond to certain solutions of these equations. Excluding $\eta'$ from Eqs.~\eqref{from_sine-Gordon} and~\eqref{sys3}, we derive the explicit equations for the phase trajectories, \begin{eqnarray} \label{phase-tr} {a'}^{2}(a) &=&\sigma\big[a^{2}(\delta)-a^{2}\big] +\dfrac{1}{2}\big[a^{4}(\delta)-a^{4}\big] \notag\\ &&+\dfrac{a^{4}(\delta)}{(\alpha\beta/\mu)^{2}} \big[a^{-2}(\delta)-a^{-2}\big]. 
\end{eqnarray} \begin{figure} \begin{center} \includegraphics*[width=14cm]{f4-phase_portr.pdf} \caption{(Color online) Phase trajectories $a'(a)$ plotted for $\delta=3$ ($D=30\lambda_c$) that correspond to the solid thick blue curve in Fig.~\ref{T(hiII)}. Other parameters are the same as in Fig.~\ref{colorplots}. (a) Unclosed phase trajectories plotted for $a(\delta)$=0.2 (red curve 1) and $a(\delta)$=0.7 (orange curve 2); nearly closed loop plotted for $a(\delta)$=1.39 (brown curve 3); the green curve 4 plotted for $a(\delta)$=1.8 is a loop with overlapping portions. The movement along the phase trajectory, when the spatial coordinate $\xi$ changes from zero to $\delta$, is shown by the arrows. (b) Phase trajectories plotted for $a(\delta)$=2.4 (purple curve 5), $a(\delta)$=2.7 (green curve 6), $a(\delta)$=3 (blue curve 8), and $a(\delta)$=3.2 (dark cyan curve 9). The black dashed curves are the envelopes for the phase trajectories. The point 7 is a shrunken phase trajectory plotted for $a(\delta) = a_{\rm cr} \approx 2.84$. } \label{phase_diagr} \end{center} \end{figure} Figure~\ref{phase_diagr} presents the phase diagram of Eq.~\eqref{phase-tr} for different values of the integration constant $a(\delta)$. The arrows show the direction of movement when increasing the coordinate $\xi$. This figure demonstrates how the phase trajectories $a'(a)$ change when increasing $a(\delta)$. The phase trajectories are open loops at $a(\delta) < a_1$ (e.g., curves 1 and 2 in Fig.~\ref{phase_diagr}). For the set of parameters considered in Fig.~\ref{phase_diagr}, $a_1 \approx 1.39$. If $a(\delta) > a_1$, the phase trajectories have overlapping portions (curves 4--9 in Fig.~\ref{phase_diagr}). We stress that there exists a special value $a(\delta) = a_{\rm cr} \approx 2.84$, where the phase trajectory $a'(a)$ shrinks into a point (see point 7 in Fig.~\ref{phase_diagr}). This point corresponds to the uniform spatial distribution of the amplitude $a(\xi)$, i.e.
$a(\xi)=a_{\rm cr}$, for all $\xi$. The uniform solution $a(\xi)=a_{\rm cr}$ occurs when the amplitude $H_{\rm inc}^{(2)}$ of the incident wave takes on a critical value $H_{\rm cr}$, \begin{equation} \label{hcr} H_{\rm cr}=\mathcal{H}_0\dfrac{k^2\lambda_c}{k_x} \sqrt{\Big[\dfrac{(k^2-k_y^2) \lambda_c}{k_x}\Big]^{2}+ \dfrac{\omega_{\rm cut}^2-\omega^2}{\omega_J^{2}}}. \end{equation} In this case, according to Eqs.~\eqref{from_sine-Gordon}, the phase $\eta$ changes linearly with $\xi$. This means that the electromagnetic field inside the sample behaves similarly to a linear wave propagating only in one direction (along the $x$-axis in Fig.~\ref{wavegAB}), and there is no wave reflected from the boundary $x=D$. The superconducting slab is totally transparent in this case. Note that the phase trajectories for $a<a_{\rm cr}$ and $a>a_{\rm cr}$ correspond to the amplitudes $H_{\rm inc}^{(2)}<H_{\rm cr}$ and $H_{\rm inc}^{(2)}>H_{\rm cr}$, respectively. Figure~\ref{T(hiII)} shows the dependence of the transmittance $T$ on the amplitude $H_{\rm inc}^{(2)}$~(normalized to $H_{\rm cr}$) of the incident E$_\perp$-polarized wave for different sizes $D$, $L_y$, and $L_z$ of the superconducting sample. \begin{figure} \begin{center} \includegraphics*[width=12cm]{f3-tr_vs_hi.pdf} \caption{(Color online) Transmittance $T$ versus the normalized amplitude $H_{\rm inc}^{(2)}/H_{\rm cr}$ (see Eq.~\eqref{hcr}) of the incident E$_\perp$-polarized wave. \\ \textbf{Main panel.} Curves $T(H_{\rm inc}^{(2)}/H_{\rm cr})$ plotted for different sizes of the sample. Solid thick blue curve corresponds to $L_y=L_z=0.1$~cm, $D=30\lambda_c$ ($\delta=3$); solid thin green curve is plotted for $L_y=L_z=0.3$~cm, $D=15\lambda_c$ ($\delta=1.5$); dashed thin red curve corresponds to $L_y=L_z=0.1$~cm, $D=15\lambda_c$ ($\delta=1.5$). Other parameters are the same as in Fig.~\ref{colorplots}. The black dash-dotted curve is the envelope for all $T(H_{\rm inc}^{(2)}/H_{\rm cr})$ curves.
\\ \textbf{Inset.} Hysteresis of the $T(H_{\rm inc}^{(2)}/H_{\rm cr})$ dependence when moving along the solid thick blue curve shown in the main panel. The red lower and upward arrows show movement along the $T(H_{\rm inc}^{(2)}/H_{\rm cr})$ curve when increasing amplitude $H_{\rm inc}^{(2)}$. The black upper and downward arrows correspond to decreasing amplitude $H_{\rm inc}^{(2)}$.} \label{T(hiII)} \end{center} \end{figure} All the curves in the main panel of Fig.~\ref{T(hiII)} have an oscillating structure, so that the transmittance $T$ reaches the maximum value $T=1$ at different amplitudes of the incident wave. Besides the considered case $H_{\rm inc}^{(2)}=H_{\rm cr}$, complete transmission through the sample ($T=1$) occurs when the phase trajectories $a'(a)$ in Fig.~\ref{phase_diagr} represent closed loops with an integer number of complete turns along them, i.e., for all cases when $a(0)=a(\delta)$. Physically, such conditions correspond to the cases when the sample length $D$ equals an integer number of wavelengths of the nonlinear mode. All the curves $T(H_{\rm inc}^{(2)}/H_{\rm cr})$ in Fig.~\ref{T(hiII)} plotted for samples with different sizes $L_y$, $L_z$, and $D$ have a common envelope curve (the dash-dotted curve). The critical amplitude $H_{\rm inc}^{(2)}=H_{\rm cr}$ is the only point where all the curves and the envelope curve touch each other. It should be noted that the $T(H_{\rm inc}^{(2)}/H_{\rm cr})$ dependence plotted for another frequency detuning ${(\omega-\omega_{\rm cut})}$ behaves similarly to the curves shown in the main panel of Fig.~\ref{T(hiII)}, although with a different envelope. The dependence $T(H_{\rm inc}^{(2)}/H_{\rm cr})$ has interesting hysteretic features. The inset in Fig.~\ref{T(hiII)} shows a fragment of such a dependence. When increasing the amplitude $H_{\rm inc}^{(2)}$, one should move along the red bottom arrow.
When the right endpoint on this branch is reached, further movement along this branch is not possible. Increasing the incident wave amplitude results in a jump to the higher branch, along the red upward arrow. A similar jump occurs when decreasing the amplitude $H_{\rm inc}^{(2)}$, see the black upper and downward arrows. At first the transmittance increases, but when the left endpoint on this branch is reached, a jump to the lower branch takes place. \section{Transmission and reflection of the TE and TM modes} In this section, we study the nonlinear transmission, reflection, and mutual transformation of the TE and TM modes in the waveguide with a sample of the layered superconductor. For the geometry shown in Fig.~\ref{wavegAB}, the TE (TM) wave is defined as a mode with the electric (magnetic) field perpendicular to the $x$-axis. The vector potential $\vec{\mathcal{A}}_{\rm inc}^{\rm (TE)}$ of the incident TE-polarized wave has the components, \begin{eqnarray}\label{TE-fields_inc} \mathcal{A}_{x\, {\rm inc}}^{\rm (TE)}&=&0, \notag\\ \mathcal{A}_{y\, {\rm inc}}^{\rm (TE)}&=&-H_{\rm inc}^{\rm (TE)}\dfrac{kk_z}{k^3} \cos[k_xx-\omega t+\phi_{\rm inc}^{\rm (TE)}], \\\notag \mathcal{A}_{z\, {\rm inc}}^{\rm (TE)}&=&H_{\rm inc}^{\rm (TE)}\dfrac{kk_y}{k^3} \cos[k_xx-\omega t+\phi_{\rm inc}^{\rm (TE)}], \end{eqnarray} where $H_{\rm inc}^{\rm (TE)}$ and $\phi_{\rm inc}^{\rm (TE)}$ are the amplitude and phase of the magnetic field in this wave.
For the incident TM-polarized wave with amplitude $H_{\rm inc}^{\rm (TM)}$ and phase $\phi_{\rm inc}^{\rm (TM)}$ of the magnetic field, we have \begin{eqnarray}\label{TM-fields_inc} \mathcal{A}_{x\, {\rm inc}}^{\rm (TM)}&=&H_{\rm inc}^{\rm (TM)}\dfrac{k^2-k_x^2}{k^3} \sin[k_xx-\omega t+\phi_{\rm inc}^{\rm (TM)}], \notag\\ \mathcal{A}_{y\, {\rm inc}}^{\rm (TM)}&=&H_{\rm inc}^{\rm (TM)}\dfrac{k_xk_y}{k^3} \cos[k_xx-\omega t+\phi_{\rm inc}^{\rm (TM)}],\notag \\ \mathcal{A}_{z\, {\rm inc}}^{\rm (TM)}&=&H_{\rm inc}^{\rm (TM)}\dfrac{k_xk_z}{k^3} \cos[k_xx-\omega t+\phi_{\rm inc}^{\rm (TM)}]. \end{eqnarray} The components of the vector potential for the reflected TE and TM modes can be written in a similar form, \begin{eqnarray}\label{TE-fields_ref} \mathcal{A}_{x\, {\rm ref}}^{\rm (TE)}&=&0, \notag\\ \mathcal{A}_{y\, {\rm ref}}^{\rm (TE)}&=&-H_{\rm ref}^{\rm (TE)}\dfrac{kk_z}{k^3} \cos[k_xx+\omega t-\phi_{\rm ref}^{\rm (TE)}], \\\notag \mathcal{A}_{z\, {\rm ref}}^{\rm (TE)}&=&H_{\rm ref}^{\rm (TE)}\dfrac{kk_y}{k^3} \cos[k_xx+\omega t-\phi_{\rm ref}^{\rm (TE)}], \end{eqnarray} \begin{eqnarray}\label{TM-fields_ref} \mathcal{A}_{x\, {\rm ref}}^{\rm (TM)}&=&-H_{\rm ref}^{\rm (TM)}\dfrac{k^2-k_x^2}{k^3} \sin[k_xx+\omega t-\phi_{\rm ref}^{\rm (TM)}], \notag\\ \mathcal{A}_{y\, {\rm ref}}^{\rm (TM)}&=&-H_{\rm ref}^{\rm (TM)}\dfrac{k_xk_y}{k^3} \cos[k_xx+\omega t-\phi_{\rm ref}^{\rm (TM)}], \notag\\ \mathcal{A}_{z\, {\rm ref}}^{\rm (TM)}&=&-H_{\rm ref}^{\rm (TM)}\dfrac{k_xk_z}{k^3} \cos[k_xx+\omega t-\phi_{\rm ref}^{\rm (TM)}]. \end{eqnarray} Finally, in the second vacuum region (for $x>D$), only the transmitted wave exists. 
In this region, the components of vector potential can be written as \begin{eqnarray}\label{TE-fields_tr} \mathcal{A}_{x\, {\rm tr}}^{\rm (TE)}&=&0, \notag\\ \mathcal{A}_{y\, {\rm tr}}^{\rm (TE)}&=&-H_{\rm tr}^{\rm (TE)}\dfrac{kk_z}{k^3} \cos[k_x(x-D)-\omega t+\phi_{\rm tr}^{\rm (TE)}], \\\notag \mathcal{A}_{z\, {\rm tr}}^{\rm (TE)}&=&H_{\rm tr}^{\rm (TE)}\dfrac{kk_y}{k^3} \cos[k_x(x-D)-\omega t+\phi_{\rm tr}^{\rm (TE)}], \end{eqnarray} \begin{eqnarray}\label{TM-fields_tr} \mathcal{A}_{x\, {\rm tr}}^{\rm (TM)}&=&H_{\rm tr}^{\rm (TM)}\dfrac{k^2-k_x^2}{k^3} \sin[k_x(x-D)-\omega t+\phi_{\rm tr}^{\rm (TM)}], \notag\\ \mathcal{A}_{y\, {\rm tr}}^{\rm (TM)}&=&H_{\rm tr}^{\rm (TM)}\dfrac{k_xk_y}{k^3} \cos[k_x(x-D)-\omega t+\phi_{\rm tr}^{\rm (TM)}], \notag\\ \mathcal{A}_{z\, {\rm tr}}^{\rm (TM)}&=&H_{\rm tr}^{\rm (TM)}\dfrac{k_xk_z}{k^3} \cos[k_x(x-D)-\omega t+\phi_{\rm tr}^{\rm (TM)}]. \end{eqnarray} Evidently, the electromagnetic field of the TE and TM waves can be presented as superpositions of the H$_\perp$- and E$_\perp$-polarized waves. 
The analysis of Eqs.~\eqref{A_inc}--\eqref{A_tr} and~\eqref{TE-fields_inc}--\eqref{TM-fields_tr} gives the following expressions for the complex dimensionless amplitudes $\tilde{h}_{\rm inc, \, ref, \, tr}^{\rm (TE),\, (TM)}$ of the incident, transmitted, and reflected TE and TM waves via the amplitudes $\tilde{h}_{\rm inc,\,ref,\,tr}^{(1),(2)}$ of the H$_\perp$- and E$_\perp$-polarized waves: \begin{eqnarray}\label{TE-TM_inc_tr} \tilde{h}_{\rm inc, \, tr}^{\rm (TE)}&=&\dfrac{kk_z\tilde{h}_{\rm inc,\,tr}^{(1)}-k_xk_y\tilde{h}_{\rm inc,\,tr}^{(2)}}{k_y^2+k_z^2}, \notag\\ \quad \tilde{h}_{\rm inc, \, tr}^{\rm (TM)} &=& -\dfrac{k_xk_y\tilde{h}_{\rm inc,\,tr}^{(1)}+kk_z\tilde{h}_{\rm inc,\,tr}^{(2)}}{k_y^2+k_z^2}, \end{eqnarray} \begin{eqnarray}\label{TE-TM_ref} \tilde{h}_{\rm ref}^{\rm (TE)}&=&\dfrac{kk_z\tilde{h}_{\rm ref}^{(1)}+k_xk_y\tilde{h}_{\rm ref}^{(2)}}{k_y^2+k_z^2}, \notag\\ \quad \tilde{h}_{\rm ref}^{\rm (TM)} &=& \dfrac{k_xk_y\tilde{h}_{\rm ref}^{(1)}-kk_z\tilde{h}_{\rm ref}^{(2)}}{k_y^2+k_z^2}. \end{eqnarray} Here $\tilde{h}_{\rm inc,\,tr,\,ref}^{\rm (TE),\,(TM)}$ is defined similarly to Eq.~\eqref{h_nl}, \begin{eqnarray}\label{TE-12} \tilde{h}_{\rm inc,\,tr,\,ref}^{\rm (TE),\,(TM)}&=&h_{\rm inc,\,tr,\,ref}^{\rm (TE),\,(TM)} \exp\big[{i\phi_{\rm inc,\,tr,\,ref}^{\rm (TE),\,(TM)}}\big], \notag\\ h_{\rm inc,\,tr,\,ref}^{\rm (TE),\,(TM)}&=&\dfrac{k_yk_z}{\mathcal{H}_0k^3\tilde{\Omega}\lambda_c} H_{\rm inc,\,tr,\,ref}^{\rm (TE),\,(TM)}. \end{eqnarray} First, we consider the case when the wave incident onto the layered superconductor is TE-polarized. 
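Because Eqs.~\eqref{TE-TM_inc_tr} define a constant linear map between the pairs $(\tilde{h}^{(1)},\tilde{h}^{(2)})$ and $(\tilde{h}^{\rm (TE)},\tilde{h}^{\rm (TM)})$, the map can be inverted in closed form. The following sketch (with arbitrary illustrative wavenumbers, not the values used in the figures) verifies the round trip numerically:

```python
# Round-trip check of the 2x2 map between (h1, h2) = (H_perp, E_perp) and
# (TE, TM) amplitudes, Eqs. (TE-TM_inc_tr); wavenumbers are illustrative.
import math

def te_tm_from_h(h1, h2, kx, ky, kz):
    k = math.sqrt(kx*kx + ky*ky + kz*kz)
    n = ky*ky + kz*kz
    h_te = (k*kz*h1 - kx*ky*h2) / n
    h_tm = -(kx*ky*h1 + k*kz*h2) / n
    return h_te, h_tm

def h_from_te_tm(h_te, h_tm, kx, ky, kz):
    # inverse of the matrix [[k kz, -kx ky], [-kx ky, -k kz]] / n;
    # its determinant is -(kx^2 + kz^2)*(ky^2 + kz^2) / n^2, nonzero here
    k = math.sqrt(kx*kx + ky*ky + kz*kz)
    n = ky*ky + kz*kz
    det = -(k*kz)**2 - (kx*ky)**2
    h1 = n * (-k*kz*h_te + kx*ky*h_tm) / det
    h2 = n * (kx*ky*h_te + k*kz*h_tm) / det
    return h1, h2

kx, ky, kz = 0.8, 1.0, 1.0
h_te, h_tm = te_tm_from_h(1.0 + 0.5j, 0.3 - 0.2j, kx, ky, kz)
h1, h2 = h_from_te_tm(h_te, h_tm, kx, ky, kz)
print(h1, h2)   # recovers the original complex amplitudes
```

Since the determinant vanishes only for degenerate wavenumbers, the decomposition of any TE/TM pair into H$_\perp$/E$_\perp$ amplitudes is unique.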
Using Eqs.~(\ref{TE-TM_inc_tr}) and (\ref{TE-TM_ref}), the superposition principle for the waves with the H$_\perp$ and E$_\perp$ polarizations, and the results of the previous sections on the nonlinear reflection and transmission of the H$_\perp$- and E$_\perp$-polarized waves, we can find the reflectance $R_{\rm TE\rightarrow TE}$ and transmittance $T_{\rm TE\rightarrow TE}$ for the TE waves, \begin{equation}\label{R} R_{{\rm TE\rightarrow TE}}=\Big|\dfrac{h_{\rm ref}^{\rm (TE)}}{h_{\rm inc}^{\rm (TE)}}\Big|^{2}, \quad T_{{\rm TE\rightarrow TE}}=\Big|\dfrac{h_{\rm tr}^{\rm (TE)}}{h_{\rm inc}^{\rm (TE)}}\Big|^{2}. \end{equation} In addition, we obtain the transformation coefficients $R_{{\rm TE \rightarrow TM}}$ and $T_{{\rm TE \rightarrow TM}}$ for the TM waves that appear in the vacuum regions $x<0$ and $x>D$, respectively, due to the anisotropy in the $yz$-plane, \begin{equation}\label{trans} R_{{\rm TE \rightarrow TM}}=\Big|\dfrac{h_{\rm ref}^{\rm (TM)}}{h_{\rm inc}^{\rm (TE)}}\Big|^{2}, \quad T_{{\rm TE \rightarrow TM}}=\Big|\dfrac{h_{\rm tr}^{\rm (TM)}}{h_{\rm inc}^{\rm (TE)}}\Big|^{2}. \end{equation} Figure~\ref{TE} shows the numerically-calculated dependences of the coefficients $R_{\rm TE\rightarrow TE}$, $T_{\rm TE\rightarrow TE}$, $R_{{\rm TE \rightarrow TM}}$, and $T_{{\rm TE \rightarrow TM}}$ on the dimensionless amplitude $h_{\rm inc}^{\rm (TE)}$ of the incident TE wave. Note that all these dependences exhibit hysteretic behavior when changing the incident amplitude, see the discussion in the previous section. \begin{figure} \begin{center} \includegraphics*[width=14cm]{f5-TE_reftr.pdf} \caption{(Color online) (a) Reflectance $R_{\rm TE\rightarrow TE}$ (red solid line) and transformation coefficient $R_{{\rm TE \rightarrow TM}}$ (blue dashed line) versus the dimensionless amplitude $h_{\rm inc}^{\rm (TE)}$ of the incident TE wave. 
(b) Transmittance $T_{\rm TE\rightarrow TE}$ (orange solid line) and transformation coefficient $T_{{\rm TE \rightarrow TM}}$ (green dashed line) versus the dimensionless amplitude $h_{\rm inc}^{\rm (TE)}$ of the incident TE wave. The parameters used here are the same as in Fig.~\ref{colorplots}.} \label{TE} \end{center} \end{figure} Similarly, we considered the case when the wave incident onto the layered superconductor is TM-polarized, and calculated numerically the reflectance $R_{\rm TM\rightarrow TM}$ and transmittance $T_{\rm TM\rightarrow TM}$ for the TM waves, \begin{equation}\label{Rtm} R_{\rm TM\rightarrow TM}=\Big|\dfrac{h_{\rm ref}^{\rm (TM)}}{h_{\rm inc}^{\rm (TM)}}\Big|^{2}, \quad T_{\rm TM\rightarrow TM}=\Big|\dfrac{h_{\rm tr}^{\rm (TM)}}{h_{\rm inc}^{\rm (TM)}}\Big|^{2}, \end{equation} as well as the transformation coefficients $R_{{\rm TM \rightarrow TE}}$ and $T_{{\rm TM \rightarrow TE}}$ for the TE waves that appear in the vacuum regions $x<0$ and $x>D$, respectively, \begin{equation}\label{trans-tm} R_{\rm TM\rightarrow TE}=\Big|\dfrac{h_{\rm ref}^{\rm (TE)}}{h_{\rm inc}^{\rm (TM)}}\Big|^{2}, \quad T_{\rm TM\rightarrow TE}=\Big|\dfrac{h_{\rm tr}^{\rm (TE)}}{h_{\rm inc}^{\rm (TM)}}\Big|^{2}. \end{equation} Figure~\ref{TM} shows the dependences of these coefficients on the dimensionless amplitude $h_{\rm inc}^{\rm (TM)}$ of the incident TM wave. \begin{figure} \begin{center} \includegraphics*[width=14cm]{f6-TM_reftr.pdf} \caption{(Color online) (a) Reflectance $R_{\rm TM\rightarrow TM}$ (blue solid line) and transformation coefficient $R_{{\rm TM \rightarrow TE}}$ (red dashed line) versus the dimensionless amplitude $h_{\rm inc}^{\rm (TM)}$ of the incident TM wave. (b) Transmittance $T_{\rm TM\rightarrow TM}$ (green solid line) and transformation coefficient $T_{{\rm TM \rightarrow TE}}$ (orange dashed line) versus the dimensionless amplitude $h_{\rm inc}^{\rm (TM)}$ of the incident TM wave. 
The parameters used here are the same as in Fig.~\ref{colorplots}.} \label{TM} \end{center} \end{figure} \section{Conclusions} In this paper, we have studied theoretically the reflection and transmission of electromagnetic waves through a finite-length layered superconductor placed inside a waveguide with ideal metal walls. We assume that the superconducting layers are parallel to the waveguide axis. We show that, even in the nonlinear regime, the superposition principle is valid for two waves with mutually orthogonal polarizations matched to the axis which is perpendicular to both the waveguide axis and the crystallographic \textbf{c}-axis of the superconductor. These two waves do not convert into each other after the reflection from the superconductor, propagate independently, and exhibit fundamentally different behavior. The wave of H$_\perp$ polarization excites a strong shielding current along the crystallographic \textbf{ab}-plane of the superconductor and, therefore, reflects nearly completely from the superconductor and excites only an evanescent mode inside it. The wave of E$_\perp$ polarization does not contain an electric field component parallel to both the sample surface and the crystallographic \textbf{ab}-plane. Therefore, it partially reflects and partially transmits through the sample. We have studied the nonlinear reflection and transmission of the wave of E$_\perp$ polarization and shown that the transmittance varies from 0 to 1 when changing the incident wave amplitude. We have also studied the nonlinear interaction and mutual transformation of the transverse electric and transverse magnetic modes in layered superconductors. \section{Acknowledgements} We gratefully acknowledge partial support from the RIKEN iTHES project, JSPS-RFBR Contract No.~12-02-92100, Grant-in-Aid for Scientific Research (S), and the Ukrainian-Japanese project ``Josephsonics'' (grant 52/417-2013).
\section{Introduction} The $\phi$-meson photoproduction offers rich information on gluonic interactions at low energies. Because of the almost pure $s\bar{s}$ components of the $\phi$-meson, meson exchanges in its interactions with nucleons are suppressed by the Okubo-Zweig-Iizuka rule, and multi-gluon exchanges are expected to be dominant. The slow rise of the total cross section with the energy $\sqrt{s}$ can be well understood by the $t$-channel exchange of gluonic objects with the vacuum quantum numbers, known as the Pomeron trajectory in the Regge phenomenology~\cite{regge}, in the framework of vector meson dominance~\cite{photopro}. The Pomeron trajectory has been discussed in connection with a glueball trajectory with $J^{PC} = 2^{++},~4^{++},~\cdots$~\cite{tensGB1, PomGB, tensGB2}, but it is still an open question what the physical particles lying on the Pomeron trajectory are. While the Pomeron exchange has successfully described the common features of diffractive hadron-hadron and photon-hadron scatterings at high energies~\cite{diffrac1,diffrac2,diffrac3}, its applicability to low energies is not completely clear~\cite{philowE1,philowE2}. In other hadronic reactions, such as $pp$ collisions or flavor-changing photoproduction (e.g., pion or kaon production), it is difficult to study the Pomeron exchange at low energies because meson exchanges become significant. Therefore, the $\phi$-meson photoproduction is unique in studying the Pomeron exchange at low energies~\cite{expPom} and searching for a new glueball-associated trajectory, i.e. a daughter Pomeron trajectory~\cite{dautPom}, as inspired by the scalar glueball~($J^{PC}=0^{++}$, $M^{2} \sim 3 ~ \text{GeV}^{2}$) predicted by lattice QCD calculations~\cite{latticeQCD1,latticeQCD2}. The LEPS Collaboration measured the $\gamma p \rightarrow \phi p$ reaction near the threshold at forward angles~\cite{LEPSphip,LEPSint}, where the $t$-channel Pomeron exchange is expected to be dominant.
The energy dependence of the forward cross section~($\theta = 0^\circ$) shows a local maximum around $E_{\gamma} \sim 2 ~ \text{GeV}$, which contradicts the monotonic behavior predicted by a Pomeron exchange model. Such a behavior was also observed by CLAS~\cite{CLASphip,CLASphip_neut}, although those data were obtained by extrapolating from the large scattering angle region. Recent measurements by LEPS extended the maximal beam energy from $2.4 ~ \text{GeV}$ to $2.9 ~ \text{GeV}$ and have confirmed an excess over the monotonic curve of a model prediction~\cite{LEPS3gev}. Several theoretical models have been proposed so far~\cite{philowE2,theor_bump_couple,*[Erratum:]theor_bump_couple_erra,theor_bump_sreso,*[Erratum:]theor_bump_sreso_erra,theor_bump_couple_2,theor_bump_diquark}, but no conclusive interpretation has been obtained yet. From measurements of the $\phi \rightarrow K^{+}K^{-}$ decay angular distributions with linearly polarized photons~\cite{LEPSphip,WC_SDM}, unnatural-parity exchanges such as the $\pi$ and $\eta$ exchanges are known to have a sizable contribution~($\sim 30 \%$) near the threshold. Coherent photoproduction with an isoscalar target is very useful for studying the Pomeron exchange at low energies since the isovector $\pi$ exchange, which is a dominant meson exchange process, is forbidden~\cite{coh_phi_omega_fromD,phi_fromD}. The LEPS data for the coherent $\gamma d \rightarrow \phi d$ reaction~\cite{LEPScohphi_fromD} show that a Pomeron exchange model including a small contribution of the $\eta$ exchange~\cite{phi_fromD} underestimates the energy dependence of the forward cross section~($\theta = 0^\circ$).
This reaction has advantages compared to the $\gamma d$ reaction: First, thanks to the $0^{+}$ target, this reaction completely eliminates unnatural-parity exchanges, since a $0^{+}$ particle cannot emit an unnatural-parity particle while remaining unchanged in spin and parity. Second, owing to the large separation energy of helium-4 nuclei, the coherent production events can be separated from the incoherent ones even more cleanly than with a deuterium target. Accordingly, we can investigate natural-parity exchanges such as the Pomeron and multi-gluon exchanges at low energies with better accuracy. \section{Experiment and analysis} The experiment was carried out at the SPring-8 facility using the LEPS spectrometer~\cite{sumi_Kprod}. Linearly polarized photons were produced via the backward Compton scattering between UV-laser photons with a wavelength of $355~ \text{nm}$ and 8-GeV electrons in the storage ring~\cite{LEPbeam}. The photon energy was determined by the momentum analysis of the recoil electrons with tagging counters. The photon energy resolution~($\sigma$) was 13.5 MeV for all energies. The degree of photon polarization varied with photon energy; 69\% at $E_{\gamma} = 1.685 ~ \text{GeV}$, and 92\% at $E_{\gamma} = 2.385 ~ \text{GeV}$. The systematic uncertainty in the polarization degree was estimated to be less than 0.1\%. The tagged photons irradiated a liquid helium-4 target with a length of 15 cm. The integrated flux of the tagged photons was $4.6 \times 10^{12}$. The systematic uncertainty of the photon flux was estimated to be 3\%. Produced charged particles were detected at forward angles, and their momenta were analyzed by the LEPS spectrometer. The momentum resolution~($\sigma$) of the spectrometer was 0.9\% in ${\delta p}/p$ for typical $\text{1-GeV}/c$ particles. More details about the experimental setup can be found in Ref.~\cite{tpc_setup}.
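For reference, the maximum tagged photon energy follows from standard inverse-Compton kinematics. The sketch below (a back-of-envelope check, not a number quoted from the experiment) reproduces the $\sim 2.4~\text{GeV}$ endpoint for 8-GeV electrons and a 355-nm laser:

```python
# Compton edge for backward scattering of laser photons off beam electrons:
# E_max = E_e * x / (1 + x),  with  x = 4 * E_e * E_laser / (m_e c^2)^2.
HC_EV_NM = 1239.84198          # h*c in eV*nm
M_E_EV = 0.51099895e6          # electron rest energy in eV

def compton_edge(e_electron_ev, laser_wavelength_nm):
    e_laser = HC_EV_NM / laser_wavelength_nm
    x = 4.0 * e_electron_ev * e_laser / M_E_EV**2
    return e_electron_ev * x / (1.0 + x)

e_max = compton_edge(8.0e9, 355.0)
print(f"Compton edge: {e_max / 1e9:.2f} GeV")
```

The result, about $2.4~\text{GeV}$, is consistent with the upper end of the tagged-photon range quoted above.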
\begin{figure}[htb] \includegraphics[clip,width=8.5cm]{mkk_mmkk}% \caption{\label{fig:m_mmkk} (a) Invariant mass spectrum for $K^{+}K^{-}$ pairs. The dashed curve shows the MC-simulated background. The arrows show cut points for selecting the $\phi$-meson events. (b) Missing mass spectrum for the ${^{4}\text{He}}(\gamma, K^{+}K^{-})X$ reaction after selecting the $\phi$-meson events. The solid histogram shows the fit result with two MC templates for the coherent and incoherent processes~(dashed histograms). } \end{figure} The production of $\phi$-mesons was identified by detecting $K^{+}K^{-}$ tracks from the $\phi \rightarrow K^{+}K^{-}$ decay. $K^{+}K^{-}$ tracks were selected according to the reconstructed mass-squared and charge by the spectrometer with a $4\sigma$ cut, where $\sigma$ is the momentum-dependent resolution of the reconstructed mass-squared. The contamination of pions due to particle misidentification was reduced to a negligible level by requiring the missing mass of the ${^{4}\text{He}}(\gamma, K^{+}K^{-})X$ reaction to be above $3.62 ~ \text{GeV}/c^{2}$. The $K^{+}K^{-}$ pairs produced inside the target were selected by imposing a cut on the $z$-positions of the reconstructed vertices of $K^{+}K^{-}$ pairs. Under this cut, the contamination from materials other than the target was estimated to be 2\% with empty-target data. Figure~\ref{fig:m_mmkk}(a) shows the invariant mass spectrum for the $K^{+}K^{-}$ pairs~[$\text{M}(K^{+}K^{-})$]. A clear signal for $\phi$-mesons was observed on top of a small background contribution from non-resonant $K^{+}K^{-}$ production. Note that the quasi-free $K^{+}\Lambda(1520)$ production followed by the $\Lambda(1520) \rightarrow K^{-}p$ decay was found to be negligible at the small momentum transfers of interest here~(${-t} < 0.2 ~ \text{GeV}^{2}$). The $\phi$-meson yields including both coherent and incoherent processes were estimated by fitting invariant mass spectra with Monte Carlo~(MC) templates. 
The spectral shapes for the $\phi$-meson and non-resonant $K^{+}K^{-}$ events were reproduced by GEANT3~\cite{g3}-based MC simulations, where the geometrical acceptance, the photon energy resolution, the momentum resolution, and the detector efficiencies were implemented. The background level under the $\phi$-meson signal was estimated to be 1--15\%, depending on the photon energy and the momentum transfer. The coherent events were disentangled from the incoherent events by fitting missing mass spectra for the ${^{4}\text{He}}(\gamma, K^{+}K^{-})X$ reaction~[$\text{MM}(K^{+}K^{-})$] after selecting the $\phi$-meson events as $1.008 < \text{M}(K^{+}K^{-}) < 1.030 ~ \text{GeV}/c^{2}$~[Fig.~\ref{fig:m_mmkk}(b)]. A clear peak for the coherent $\gamma {^{4}\text{He}} \rightarrow \phi {^{4}\text{He}}$ reaction was observed around $\text{MM}(K^{+}K^{-}) \approx 3.73 ~ \text{GeV}/c^{2}$, corresponding to the mass of helium-4 nuclei. The spectral shapes for the coherent and incoherent processes were reproduced by the MC simulations. The missing mass $\text{MM}(K^{+}K^{-})$ resolution~($\sigma$) was estimated to be 14--17~$\text{MeV}/c^{2}$, which was consistent with estimates from hydrogen-target data. To reproduce the line shape of the $\text{MM}(K^{+}K^{-})$ spectra for the incoherent process, the Fermi motion and off-shell effects of the target nucleon inside a helium-4 nucleus were simulated as follows: For the off-shell correction, we adopted the first approach in Ref.~\cite{LEPScohphi_fromD}. The Fermi momenta of the target nucleon were taken from the numerical results of variational Monte Carlo calculations for the helium-4 wave function~\cite{momDist4He}. Moreover, following Ref.~\cite{LEPScohphi_fromD}, the energy dependence of the forward cross section~($\theta = 0^\circ$) for the $\phi$-meson photoproduction from off-shell nucleons as well as the differential cross section $d\sigma/dt$ was also taken into account. 
Systematic uncertainties due to contamination from events other than the coherent ones were estimated by considering additional processes, in the $\text{MM}(K^{+}K^{-})$ fits, such as \begin{equation} \begin{split} \gamma + {\text{\textquoteleft}t\text{\textquoteright}} &\rightarrow \phi + t, \\ \gamma + {\text{\textquoteleft}d\text{\textquoteright}} &\rightarrow \phi + d, \end{split} \label{eq:sem_coh} \end{equation} \noindent where ${\text{\textquoteleft}t\text{\textquoteright}}~({\text{\textquoteleft}d\text{\textquoteright}})$ stands for the triton~(deuteron) wave function in helium-4 nuclei. The off-shell effects of the triton and deuteron clusters inside a helium-4 nucleus were simulated in the same manner as that for the incoherent process. Their Fermi momenta were taken from Ref.~\cite{momDist4He}. The acceptance of the LEPS spectrometer including all the detector efficiencies and the analysis efficiency was calculated by using the MC simulation. The detector efficiencies were evaluated from the data channel by channel, and were taken into account position-dependently in the MC simulation. The simulation was iterated so as to reproduce the measured differential cross section $d\sigma/dt$ and decay angular distributions. The validity of the acceptance calculation as well as the normalization of the photon flux was checked with hydrogen-target data taken in the same period, by comparing the differential cross sections of other reactions with the previous LEPS measurements~\cite{LEPSphip,sumi_Kprod,sumi_pi0prod}. \section{Decay Angular Distribution} First, we present the $\phi \rightarrow K^{+}K^{-}$ decay angular distributions in the Gottfried-Jackson frame. 
The three-dimensional decay angular distribution, $W(\cos\Theta, \Phi, \Psi)$, with linearly polarized photons, as a function of the polar~($\Theta$) and azimuthal~($\Phi$) angles of the $K^{+}$ and the azimuthal angle~($\Psi$) of the photon polarization with respect to the production plane, is parametrized by the nine spin density matrix elements~($\rho^{i}_{jk}$) and the degree of photon polarization~($P_{\gamma}$)~\cite{classicalSDM}. Following Ref.~\cite{titovSDM}, one obtains five one-dimensional decay angular distributions: \begin{equation} \begin{split} W(\cos\Theta) &= \frac{3}{2} \left[ \frac{1}{2}(1-\rho^{0}_{00})\sin^{2}\Theta + \rho^{0}_{00}\cos^{2}\Theta \right], \\ W(\Phi) &= \frac{1}{2\pi}(1 - 2\text{Re}\rho^{0}_{1-1}\cos2\Phi), \\ W(\Phi-\Psi) &= \frac{1}{2\pi} \left[ 1 + 2P_{\gamma}{\overline{\rho}^{1}_{1-1}}\cos2(\Phi-\Psi) \right], \\ W(\Phi+\Psi) &= \frac{1}{2\pi} \left[ 1 + 2P_{\gamma}\Delta_{1-1}\cos2(\Phi+\Psi) \right], \\ W(\Psi) &= 1 - P_{\gamma}(2\rho^{1}_{11} + \rho^{1}_{00})\cos2\Psi, \end{split} \label{eq:decayangle} \end{equation} \noindent where ${\overline{\rho}^{1}_{1-1}} \equiv (\rho^{1}_{1-1} - \text{Im}\rho^{2}_{1-1})/2$ and $\Delta_{1-1} \equiv (\rho^{1}_{1-1} + \text{Im}\rho^{2}_{1-1})/2$. These distributions were measured at $0 < |t| - |t|_{\text{min}} < 0.2 ~ \text{GeV}^{2}$ for two photon energy regions~(E1: $1.985 < E_{\gamma} < 2.185 ~ \text{GeV}$, E2: $2.185 < E_{\gamma} < 2.385 ~ \text{GeV}$), where sufficient statistics were obtained. Here, $|t|_{\text{min}}$ is the minimum $|t|$ for a helium-4 nucleus. \begin{table*}[htb] \caption{\label{tab:SDMels} Extracted spin density matrix elements for the E1 and E2 regions. 
The first uncertainties are statistical and the second systematic.} \begin{ruledtabular} \begin{tabular}{crrrrr} $E_{\gamma}$ range (GeV) & \multicolumn{1}{c}{$\rho^{0}_{00}$} & \multicolumn{1}{c}{$\text{Re}\rho^{0}_{1-1}$} & \multicolumn{1}{c}{${\overline{\rho}^{1}_{1-1}}$} & \multicolumn{1}{c}{$\Delta_{1-1}$} & \multicolumn{1}{c}{$2\rho^{1}_{11} + \rho^{1}_{00}$} \\ \hline (E1)~1.985 -- 2.185 & $-0.015 \pm 0.016 ^{+0.000}_{-0.002}$ & $0.116 \pm 0.030 ^{+0.000}_{-0.006}$ & $0.454 \pm 0.024 ^{+0.014}_{-0.000}$ & $-0.111 \pm 0.033 ^{+0.006}_{-0.000}$ & $0.132 \pm 0.066 ^{+0.000}_{-0.033}$ \\ (E2)~2.185 -- 2.385 & $0.015 \pm 0.012 ^{+0.002}_{-0.000}$ & $0.054 \pm 0.020 ^{+0.000}_{-0.004}$ & $0.436 \pm 0.014 ^{+0.004}_{-0.000}$ & $-0.034 \pm 0.017 ^{+0.009}_{-0.000}$ & $0.074 \pm 0.041^{+0.011}_{-0.000}$ \\ \end{tabular} \end{ruledtabular} \end{table*} \begin{figure}[htb] \includegraphics[clip,width=8.5cm]{decayangle}% \caption{\label{fig:decayangle} Acceptance-corrected decay angular distribution for the $\gamma {^{4}\text{He}}$ reaction. (a) $W(\cos\Theta)$ for E1 and (b) E2. (c) $W(\Phi-\Psi)$ for E1 and (d) E2. (e) $W(\Phi)$ for E1 and (f) E2. The error bars represent statistical ones only. The solid curves are the fits to the data by Eqs.~(\ref{eq:decayangle}).} \end{figure} Figures~\ref{fig:decayangle}(a) and (b) show the distribution $W(\cos\Theta)$. The extracted spin density matrix elements are summarized in Table~\ref{tab:SDMels}. For both the E1 and E2 regions, $\rho^{0}_{00}$ is consistent with zero, which is the same as those for the $\gamma p$ and $\gamma d$ reactions~\cite{LEPSphip,WC_SDM}. This indicates the dominance of helicity-conserving processes in $t$-channel. The decay asymmetry, ${\overline{\rho}^{1}_{1-1}}$, is obtained from $W(\Phi-\Psi)$~[Figs.~\ref{fig:decayangle}(c) and (d)]. 
It reflects the relative contribution of natural-parity and unnatural-parity exchanges, and gives ${+0.5}~({-0.5})$ for pure natural-parity~(unnatural-parity) exchanges when helicity conservation holds~\cite{classicalSDM,titovSDM}. As shown in Figs.~\ref{fig:decayangle}(c) and (d), quite large oscillations were observed in $W(\Phi-\Psi)$, and therefore a finite bin size could bias the values of ${\overline{\rho}^{1}_{1-1}}$ extracted by directly using Eq.~(\ref{eq:decayangle}). To avoid such finite bin size effects, a fit chi-square, $\chi^{2}$, was defined as \begin{equation} \begin{split} \chi^{2}({\overline{\rho}^{1}_{1-1}},\alpha) &= \sum_{i=1}^{N}\frac{(\hat{O}_{i} - \alpha \hat{E}_{i})^{2}}{\sigma_{i}^{2}}, \\ \hat{E}_{i} &= \frac{1}{{\Delta x}} \int_{\bar{x}_{i} - \frac{1}{2}{\Delta x}}^{\bar{x}_{i} + \frac{1}{2}{\Delta x}} W(\Phi-\Psi=x) \: dx, \end{split} \label{eq:chi2} \end{equation} \noindent where $N$ denotes the number of data points~(bins), $\hat{O}_{i}$ is the number of counts in the $i$-th bin, $\alpha$ denotes an overall normalization factor being a free parameter, $\sigma_{i}$ is the statistical error in the $i$-th bin, ${\Delta x}$ is the bin size, and $\bar{x}_{i}$ is the mean value of the $i$-th bin. We found ${\overline{\rho}^{1}_{1-1}}$ to be very close to ${+0.5}$ for both the E1 and E2 regions, indicating almost pure natural-parity exchanges. Nevertheless, ${\overline{\rho}^{1}_{1-1}}$ deviates sizably from ${+0.5}$. This can be understood by the contribution from double helicity-flip transitions from the incident photon to the outgoing $\phi$-meson~\cite{titovSDM}. In fact, a rather large oscillation of $W(\Phi)$ was observed in the E1 region~[Fig.~\ref{fig:decayangle}(e)], giving the spin density matrix element of $\text{Re}\rho^{0}_{1-1} \sim 0.11$. This means that the interference of helicity-nonflip and double helicity-flip amplitudes is non-zero~\cite{SLAC_Vmeson_linearpol}. 
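The motivation for integrating the model over each bin in Eq.~(\ref{eq:chi2}) can be seen analytically: averaging $\cos2(\Phi-\Psi)$ over a bin of width $\Delta x$ attenuates the oscillation amplitude by a factor $\sin(\Delta x)/\Delta x$, which would bias the extracted ${\overline{\rho}^{1}_{1-1}}$ low if the model were simply evaluated at bin centers. A minimal sketch with illustrative values of $P_{\gamma}$ and ${\overline{\rho}^{1}_{1-1}}$ (not the measured ones):

```python
import math

# Finite-bin-size effect on W(x) = (1/2pi)*(1 + 2*P*RHO*cos(2x)):
# the average of cos(2x) over [c - dx/2, c + dx/2] is
# cos(2c)*sin(dx)/dx, so the oscillation amplitude is attenuated.
P, RHO = 0.9, 0.45          # illustrative polarization and rho_bar
dx = math.pi / 6.0          # e.g. 12 bins over 2*pi

def w(x):
    return (1.0 + 2.0 * P * RHO * math.cos(2.0 * x)) / (2.0 * math.pi)

def w_binned(center):
    # exact analytic average of w over the bin around `center`
    return (1.0 + 2.0 * P * RHO * math.cos(2.0 * center)
            * math.sin(dx) / dx) / (2.0 * math.pi)

atten = math.sin(dx) / dx   # ~0.955 -> ~4.5% bias if ignored
print(f"amplitude attenuation per bin: {atten:.3f}")
```

Fitting bin-averaged templates, as in Eq.~(\ref{eq:chi2}), removes this attenuation exactly.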
A non-zero $\text{Re}\rho^{0}_{1-1}$ was also observed in the $\gamma p$~\cite{LEPSphip,LEPS3gev,WC_SDM} and $\gamma d$ reactions~\cite{WC_SDM}. In particular, the $\text{Re}\rho^{0}_{1-1}$ obtained here exhibits a similar energy dependence to that in Ref.~\cite{LEPSphip}. Note that the deviation of ${\overline{\rho}^{1}_{1-1}}$ is not due to the contamination from the incoherent events with ${\overline{\rho}^{1}_{1-1}} \approx 0.25$~\cite{LEPSincphi_fromD} because such a deviation does not disappear when a tight mass cut, $\text{MM}(K^{+}K^{-}) < 3.72 ~ \text{GeV}/c^{2}$, is applied. \section{Differential Cross Section} The differential cross sections as a function of momentum transfer $\tilde{t} ~ (\equiv |t| - |t|_{\text{min}})$, $d\sigma/d\tilde{t}$, were measured in the energy range $E_{\gamma} = \text{1.685--2.385 GeV}$~(Fig.~\ref{fig:dsdtall}). A strong forward-peaking behavior of $d\sigma/d\tilde{t}$ predominantly comes from the helium-4 form factor. To extract the slope of $d\sigma/d\tilde{t}$, the fit was performed with an exponential function; $(d\sigma/dt)_{0}^{\gamma {^{4}\text{He}}}\exp(-b\tilde{t})$, where $(d\sigma/dt)_{0}^{\gamma {^{4}\text{He}}}$ is $d\sigma/d\tilde{t}$ at $t = -|t|_{\text{min}}$ and $b$ the slope parameter. No strong energy dependence of the slope $b$ was found, and the common slope $b$ was determined to be $23.81 \pm 0.95(\text{stat}) ~ ^{+5.16}_{-0.00}(\text{sys}) ~ \text{GeV}^{-2}$. The slope $b$ is consistent with a simple estimate from a single-scattering assumption~\cite{phi_fromD}, in which the slope $b$ is approximately expressed as $b \approx b_{0} + b_{F}$, where $b_{0}$ is the slope for the elementary $\gamma p$ reaction~($3.38 \pm 0.23 ~ \text{GeV}^{-2}$~\cite{LEPSphip}) and $b_{F}$ the slope for the squared charge form factor of helium-4 nuclei~($\approx 22 ~ \text{GeV}^{-2}$~\cite{FF_alpha}). 
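The single-scattering estimate just described can be checked by simple arithmetic with the numbers quoted above (all values taken from the text):

```python
# Single-scattering estimate of the slope: b ~ b0 + bF, with b0 the
# elementary gamma-p slope and bF the slope of the squared He-4 charge
# form factor, both as quoted in the text.
b0 = 3.38                 # GeV^-2, elementary gamma-p slope
bF = 22.0                 # GeV^-2, He-4 squared charge form factor
b_est = b0 + bF           # ~25.4 GeV^-2

b_meas, stat, sys_hi = 23.81, 0.95, 5.16   # measured common slope
upper = b_meas + stat + sys_hi
print(f"estimate {b_est:.2f} GeV^-2 vs measured "
      f"{b_meas:.2f} + {stat:.2f}(stat) + {sys_hi:.2f}(sys)")
```

The estimate of $\approx 25.4 ~ \text{GeV}^{-2}$ indeed lies within the combined statistical and systematic uncertainty of the measured slope.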
The slope $b$ also compares well with those for the elastic scattering of other hadrons off helium-4 in the diffractive regime~\cite{ela_alpha_p,ela_pi_alpha}. Note that the systematic error on the slope $b$ comes solely from the assumption of the additional processes~[Eq.~(\ref{eq:sem_coh})] in the $\text{MM}(K^{+}K^{-})$ fits. \begin{figure}[htb] \includegraphics[clip,width=8.5cm]{dsdtall}% \caption{\label{fig:dsdtall} Differential cross section $d\sigma/d\tilde{t}$ for the $\gamma {^{4}\text{He}}$ reaction. The smaller error bars represent statistical errors, whereas the larger ones represent the sum of the statistical and systematic errors in quadrature. The dashed curves show the fit results by an exponential function with the common slope $b = 23.81~ \text{GeV}^{-2}$.} \end{figure} Figure~\ref{fig:comp_dsdt0}(a) shows the energy dependence of $(d\sigma/dt)_{0}^{\gamma {^{4}\text{He}}}$ with the common slope $b = 23.81 ~ \text{GeV}^{-2}$. The differences between the intercepts $(d\sigma/dt)_{0}^{\gamma {^{4}\text{He}}}$ with the fixed~(common) and variable~(energy-dependent) slopes were found to be within the statistical errors. Also, the systematic errors on $(d\sigma/dt)_{0}^{\gamma {^{4}\text{He}}}$ due to the assumption of the additional processes~[Eq.~(\ref{eq:sem_coh})] in the $\text{MM}(K^{+}K^{-})$ fits were found to be small~(1.5--6.5\%) compared with the statistical ones, though these are reflected in the final results. As we shall see, it is difficult to discuss the precise energy dependence of the forward cross section~($\theta = 0^\circ$) for the $\gamma p$ reaction arising from natural-parity exchanges~[$\equiv (d\sigma/dt)_{0}^{\gamma p; \text{NP}}$, where ``NP'' denotes the contribution from natural-parity exchanges] directly from the $\gamma {^{4}\text{He}}$ data due to the helium-4 form factor. 
To evaluate the contribution from natural-parity exchanges to the $\gamma p$ reaction, we constructed three different models for the energy dependence of $(d\sigma/dt)_{0}^{\gamma p; \text{NP}}$, where their overall strengths are unknown and to be determined. The first one~(model-1) is simple; that is, $(d\sigma/dt)_{0}^{\gamma p; \text{NP}}$ grows with the energy as $(k_{\phi}/k_{\gamma})^{2}$~\cite{est_dsdt0_exp}, where $k_{\phi}$~($k_{\gamma}$) is the 3-momentum of $\phi$-mesons~(photons) in the center-of-mass frame. The second one~(model-2) is a conventional Pomeron exchange model as in Ref.~\cite{phi_fromD}. The third one~(model-3) describes a threshold enhancement in the energy dependence of $(d\sigma/dt)_{0}^{\gamma p; \text{NP}}$. This could be realized by modifying the conventional Pomeron exchange model, and/or a manifestation of additional natural-parity exchanges near the threshold. For model-3, we used the Pomeron and daughter Pomeron exchange model in Ref.~\cite{philowE2}. The relative contribution from the daughter Pomeron exchange was adjusted so as to fit available low-energy $\gamma p$ data~\cite{LEPSphip,CLASphip_neut,LEPS3gev}. \begin{figure}[htb] \includegraphics[clip,width=8.5cm]{comp_dsdt0}% \caption{\label{fig:comp_dsdt0} (a) Energy dependence of $(d\sigma/dt)_{0}^{\gamma {^{4}\text{He}}}$ with the common slope $b = 23.81 ~ \text{GeV}^{-2}$. The meanings of the error bars are the same as those in Fig.~\ref{fig:dsdtall}. The solid, dashed and dash-dotted curves are the best fits for model-1, -2 and -3~(explained in the text), respectively. (b) Contribution from natural-parity exchanges to the forward cross section~($\theta = 0^\circ $) for the $\gamma p$ reaction with model-1~(solid), -2~(dashed) and -3~(dash-dotted). 
The experimental data for the $\gamma p$ reaction are represented by filled squares~\cite{LEPSphip} and open circles~\cite{LEPS3gev}.} \end{figure} A theoretical calculation for the coherent $\gamma d$ reaction has been done by A. I. Titov \textit{et al.}~\cite{phi_fromD}, in which they describe the forward cross section by using the amplitudes for the elementary $\gamma p$ reaction and the deuteron form factor. Similarly, $(d\sigma/dt)_{0}^{\gamma {^{4}\text{He}}}$ is described by using the charge form factor for helium-4~($|F_{C}|^{2}$)~\cite{FF_alpha} as \begin{equation} \left(\frac{d\sigma}{dt}\right)_{0}^{\gamma {^{4}\text{He}}} = 16|F_{C}|^{2} \left(\frac{d\sigma}{dt}\right)_{0}^{\gamma p; \text{NP}}. \label{eq:cs0} \end{equation} \noindent Here, $|F_{\text{C}}|^{2}$ is evaluated at $t= -|t|_{\text{min}}$. To fix the overall strengths for the above models, we used this relation in the fit to the $\gamma {^{4}\text{He}}$ data with the overall strengths as free parameters. The best fits for model-1, -2 and -3 are depicted in Fig.~\ref{fig:comp_dsdt0}(a) as solid, dashed and dash-dotted curves, respectively. The $\chi^{2}/\text{ndf}$'s are $48.8/5$, $39.8/5$ and $10.2/5$ for model-1, -2 and -3, respectively. Figure~\ref{fig:comp_dsdt0}(b) shows the contribution from natural-parity exchanges to the forward cross section~($\theta = 0^\circ$) for the $\gamma p$ reaction with each model, together with the experimental data by LEPS~\cite{LEPSphip,LEPS3gev}. Model-1 and -2 gave similar results, and we found both curves to be slightly above the data points for $E_{\gamma} > 2.4 ~ \text{GeV}$. On the other hand, the experimental data on the decay asymmetry ${\overline{\rho}^{1}_{1-1}}$~\cite{LEPS3gev} show a sizable 20--30\% contribution from unnatural-parity exchanges to the forward cross section for $2.4 < E_{\gamma} < 2.9 ~ \text{GeV}$. 
This suggests that destructive interference between natural-parity and unnatural-parity exchanges is needed to explain both the measurements of the forward cross section and decay asymmetry. In contrast to model-1 and -2, model-3 describes the experimental data fairly well. For $E_{\gamma} > 1.9 ~ \text{GeV}$, we found the curve to be below the data by $\sim 20\%$, except for a few data points. This can be compensated by the observed 20--40\% contribution from unnatural-parity exchanges~\cite{LEPSphip,WC_SDM,LEPS3gev}. In such a case, large interference effects between natural-parity and unnatural-parity exchanges are not needed, which is compatible with our current understanding that the interference effect between the Pomeron and $\pi$ exchanges would be small~\cite{photopro,philowE2}. Note that destructive interference between natural-parity and unnatural-parity exchanges is also needed for $E_{\gamma} < 1.9 ~ \text{GeV}$ because simply adding the unnatural-parity contribution~($\sim 30\%$) overestimates the experimental data. \section{Conclusion} In conclusion, we presented the first measurement of the differential cross sections and decay angular distributions for coherent $\phi$-meson photoproduction from helium-4 at forward angles with linearly polarized photons in the energy range $E_{\gamma} = \text{1.685--2.385 GeV}$. With the elimination of unnatural-parity exchanges, this reaction provides a unique and clean way of investigating natural-parity exchanges in $\phi$-meson photoproduction at low energies. The measurement of ${\overline{\rho}^{1}_{1-1}}$ demonstrates the strong dominance~($> 94\%$) of natural-parity exchanges in this reaction. Three different models were constructed for describing the contribution from natural-parity exchanges to the forward cross section~($\theta=0^\circ$) for the $\gamma p$ reaction near the threshold, and their overall strengths were determined from the present data. 
The comparison of them to available $\gamma p$ data suggests that enhancement of the forward cross section arising from natural-parity exchanges, and/or destructive interference between natural-parity and unnatural-parity exchanges is needed in the $\gamma p$ reaction near the threshold. Further theoretical and experimental efforts are of great help for revealing the underlying reaction mechanisms in the $\phi$-meson photoproduction at low energies. \begin{acknowledgments} The authors thank the staff at SPring-8 for supporting the LEPS experiment. We thank A. I. Titov, A. Hosaka and H. Nagahiro for fruitful discussions. The experiment was performed at the BL33LEP of SPring-8 with the approval of the Japan Synchrotron Radiation Research Institute~(JASRI) as a contract beamline~(Proposal No. BL33LEP/6001). This work was supported in part by the Ministry of Education, Science, Sports and Culture of Japan, the National Science Council of the Republic of China~(Taiwan), the National Science Foundation~(USA) and the National Research Foundation~(Korea). \end{acknowledgments}
\section{Introduction} The advent of high-precision space-based broadband optical photometry with satellites (or ensembles of satellites) such as the \textit{Microvariability and Oscillations of STars} (MOST; \citealt{2003PASP..115.1023W}) and \textit{BRIght Target Explorer} (BRITE; \citealt{2014PASP..126..573W}) missions has opened the door to a brand new picture of the photospheres of hot massive stars. In particular, recent studies have led to the detection of co-rotating bright spots on the surface of a few O stars \citep{2014MNRAS.441..910R, 2017arXiv171008414R}. The existence of such spots has been proposed \citep{1996ApJ...462..469C, 2017MNRAS.470.3672D} to explain the formation of \textit{co-rotating interaction regions} (CIRs), which in turn are postulated \citep{1986A&A...165..157M} to lead to recurring \textit{discrete absorption components} (DACs) which migrate through the velocity space of the absorption troughs of ultraviolet resonance lines, as revealed in time series of spectra obtained by the \textit{International Ultraviolet Explorer} (IUE; e.g., \citealt{1989ApJS...69..527H, 1996A&AS..116..257K}). While the physical origin of these bright spots remains contested, one popular hypothesis contends that they are caused by small-scale magnetic fields which can be generated in the subsurface convection zone due to the iron opacity bump (FeCZ; \citealt{2011A&A...534A.140C}). $\epsilon$ Ori and $\kappa$ Ori are two bright early B supergiants (respectively B0Ia and B0.5Ia) which have been observed by BRITE. Together, they constitute an interesting testbed for the study of photospheric perturbations such as bright spots, for a few reasons. First, given their magnitudes (respectively, $m_V$ = 1.69 and $m_V$ = 2.06; \citealt{2002yCat.2237....0D}), they are prime candidates for high signal-to-noise ratio (SNR) observations, allowing us to place very tight constraints on their properties. 
Secondly, their current evolutionary stage is of interest for the study of spots\footnote{It should be noted, however, that B stars are also known to be the theatre of various forms of variability, from SPB/$\beta$ Cephei pulsations to rotational modulation \citep{2011MNRAS.413.2403B}.} as the envelopes of hot supergiants are expected to host more convection than their main sequence counterparts \citep{2009A&A...499..279C}. Finally, if one naively posits that the properties of putative bright spots should be intimately related to stellar parameters, it would therefore be expected that if $\epsilon$ Ori and $\kappa$ Ori show signatures of co-rotating bright surface spots, these spots would have similar characteristics, an assertion that we can test. Conversely, any departure from that expectation informs us about the nature of these photospheric structures. $\epsilon$ Ori is a known variable star. \citet{2002A&A...388..587P} have traced the evolution of a DAC in its ultraviolet lines for at least 17h, and periods ranging from about 0.8 to 19 days have been recovered from its optical spectra (e.g., \citealt{2013AJ....145...95T}). From these, a rotational period of either $\sim$4 days or $\sim$18 days is inferred. From its radius ($24.0 R_\odot$; \citealt{2006A&A...446..279C}) and its projected rotational velocity (60 km/s; \citealt{2014A&A...562A.135S}), we derive a maximum rotational period of about 20 days. On the other hand, repeating the same calculation for $\kappa$ Ori ($ R_* = 22.2 R_\odot$; $v \sin i = 54$ km/s), we obtain a maximum period of about 21 days. \section{Observations} So far, the Orion field has been observed five times by BRITE (2013, 2014, 2015, 2016 and 2017) in both red and blue wavebands. However, we focus here on the first two observing runs. The details of the observations are presented in Table~\ref{tab:obsrun}. \begin{table} \caption{Details of the two Orion observing runs presented in this study. 
The satellites are UniBRITE (UBr; red waveband), BRITE Austria (BAb; blue waveband), BRITE Lem (BLb; blue waveband), BRITE Heweliusz (BHr; red waveband) and BRITE Toronto (BTr; red waveband).}\label{tab:obsrun} \begin{tabular}{lllll} Run & Starting date & Total length (days) & Telescopes\\ Orion I & 2013-11-07 & 131 & UBr, BAb\\ Orion II & 2014-09-24 & 174 & BAb, BLb, BTr, BHr\\ \end{tabular} \end{table} Sample light curves are shown in Fig.~\ref{fig:lc}. The main characteristic that we can observe (for both stars) is that there appear to be significant variations with a maximum amplitude of roughly 30 mmag. These variations do not exhibit an obvious periodic behaviour. The point-to-point precision is of millimagnitude order. \begin{figure} \includegraphics[width=\textwidth]{epsori_2014_lc.jpg} \caption{Orion II light curves (red and blue filters) of $\epsilon$ Ori. The blue light curve is shifted downwards. We can see that both light curves exhibit similar variations, with a maximum amplitude of roughly 30 mmag. However, no obvious repeatable pattern is observed.} \label{fig:lc} \end{figure} \section{Preliminary analysis} We perform a period search on these light curves to find any periodicity. While neither star shows clear, periodic variations, various frequencies are detected (see Fig.~\ref{fig:ft}), and a time-frequency analysis suggests that these frequencies appear and disappear over time. In the case of $\epsilon$ Ori, clumps of frequencies around $\sim$0.25 c/d, $\sim$ 0.5 c/d, $\sim$0.75 c/d and $\sim$1 c/d (therefore, roughly the first few integer multiples of a base frequency of around 0.25 c/d) are detected at any given time. 
At face value, this seems somewhat consistent with the type of observational signatures historically associated with the bright spot/co-rotating interaction regions (CIR) phenomenology (e.g., \citealt{2011ApJ...735...34C}), in which case this base frequency could be of rotational origin (meaning that $P \simeq 4$ days, as previously suggested as one of the possible periods by optical spectra). The result of this analysis on a portion of the blue Orion II light curve is shown in Fig.~\ref{fig:stft}. \begin{figure} \centering \begin{minipage}{0.48\textwidth} \includegraphics[width=\textwidth]{hd37128_blue_2014.jpg} \caption{Periodogram of the blue light curve of $\epsilon$ Ori from the Orion II run; we can see groupings of frequencies around 0.25 c/d, 0.5 c/d and 1 c/d.} \label{fig:ft} \end{minipage} \quad \begin{minipage}{0.48\textwidth} \includegraphics[width=\textwidth]{hd37128_stft_blue_2014.jpg} \caption{Time-frequency analysis of a portion of about 30 days of the Orion II blue light curve of $\epsilon$ Ori performed with an 8-day window; frequencies can notably be found around 0.25 c/d, 0.5 c/d and 1 c/d. The integrated periodogram is shown on the top.} \label{fig:stft} \end{minipage} \end{figure} However, more analysis is required in order to lend credence to the bright spot scenario. In particular, pulsations must first be investigated in depth before ruling them out as being responsible for the observed variability. A similar analysis was performed on the light curves of $\kappa$ Ori, revealing frequencies at around 0.4 c/d and 1.2 c/d. While it is too early to conclude whether these periods are due to rotational modulation, it should be noted that if the base frequencies of $\epsilon$ Ori and $\kappa$ Ori are indeed of rotational origin, both stars can then be inferred to be viewed at a rather small inclination, which might be problematic. 
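The maximum rotational periods quoted in the introduction follow from $P_{\max} = 2\pi R_* / (v \sin i)$, attained when $\sin i = 1$; a quick sanity check using the quoted radii and $v \sin i$ values (the solar-radius constant is a standard value, not from this work):

```python
import math

# Upper limit on the rotational period: P_max = 2*pi*R / (v*sin i),
# reached for an equator-on star (sin i = 1).
R_SUN_KM = 6.957e5   # nominal solar radius [km]
DAY_S = 86400.0

def p_max_days(r_solar, vsini_kms):
    return 2.0 * math.pi * r_solar * R_SUN_KM / vsini_kms / DAY_S

p_eps = p_max_days(24.0, 60.0)   # epsilon Ori -> ~20 d
p_kap = p_max_days(22.2, 54.0)   # kappa Ori   -> ~21 d
print(f"eps Ori: {p_eps:.1f} d, kap Ori: {p_kap:.1f} d")
```

A $\sim$4-day rotational period for $\epsilon$ Ori would thus require $\sin i \approx 0.2$, i.e., a nearly pole-on orientation, which is the inclination concern noted above.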
\section{Conclusions and future work} While this study remains in the early stages and much more work is required, preliminary results show encouraging parallels with the observational signatures typically ascribed to rotational modulation due to co-rotating bright surface spots. If this scenario is favored, these observations can help us learn a great deal about the properties and the nature of such photospheric perturbations. In particular, the bright spot/magnetic spot connection can hopefully be further investigated, as both stars are ideal candidates for ultra-deep magnetometry given their magnitudes and low projected rotational velocities (e.g., \citealt{2016MNRAS.456....2W}). Furthermore, advances in modelling will also prove invaluable in constraining the properties of bright spots and associated wind structures; building on our recent work \citep{2017MNRAS.470.3672D}, the next logical step will be to produce hydrodynamical models of co-rotating interaction regions in three dimensions, studying, among other things, the effects of inclination on observational signatures. Meanwhile however, a more in-depth period analysis, including the data from all 5 BRITE Orion runs, will be necessary to more robustly establish whether the observed variations are indeed consistent with bright spots. \acknowledgements{ADU gratefully acknowledges support from the \textit{Fonds qu\'{e}b\'{e}cois de la recherche en nature et technologies}.} \bibliographystyle{ptapap}
\section{Introduction.} To form the classical empirical process one starts with iid random variables, $\{X_{j}:j\ge 1\},$ with distribution function, $F$, and with \begin{align}\mathbb{P}_{n}(A)=\dfrac1{n}\sum_{j=1}^{n}I_{X_{j}\in A}\label{emp-proc} \end{align} one considers the process $\mathbb{F}_{n}(y)=\mathbb{P}_{n}((-\infty,y])$. By the classical Glivenko-Cantelli Theorem \[\sup_{y\in\mathbb{R}}|\mathbb{F}_{n}((-\infty,y])-F(y)|\longrightarrow 0 \text{ a.s.} \] By Donsker's theorem \[\{\sqrt{n}\bigl(\mathbb{F}_{n}(y)-F(y)\bigr): y\in\mathbb{R}\} \] converges in distribution in a sense described more completely below. Hence limit theorems for such processes, such as the Law of Large Numbers and the Central Limit Theorem (CLT), allow one to obtain asymptotically uniform estimates of the unknown cdf, $F(y)=\Pr(X\le y)$, from the sample data. A more general version of these processes is to replace the indicators of half-lines in (\ref{emp-proc}) by functions of a ``random variable'' taking values in some abstract space $(S,\mathcal{S})$. More specifically, \begin{align}\{\dfrac{1}{\sqrt{n}}\sum_{j=1}^n \bigl(f(X_j)-\ensuremath{\mathbb E} f(X)\bigr):f\in\mathcal{F}\}, \end{align} where the index set, $\mathcal{F}$, is a subset of $\mathcal{L}_{\infty}(S,\mathcal{S})$ or an appropriate subset of $\mathcal{L}_{2}(S,\mathcal{S},P)$. However, even in the case that the class of functions is a class of indicators, unlike the classical case, it is easy to see that there are many classes of sets, $\mathcal{C}$, for which the limit theorem does not hold. In fact, the limiting Gaussian process may fail to be continuous, for example, if $\mathcal{C}$ is the class of all Borel sets of $\mathbb{R}$, or even the class of all finite subsets of $\mathbb{R}$. Further, even if the limiting Gaussian process is continuous, the limit theorem may still fail. 
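The Glivenko-Cantelli convergence above is easy to illustrate numerically. The sketch below (an illustration only, not part of the development) draws Uniform$(0,1)$ samples, for which $F(y)=y$ and the supremum $\sup_y|\mathbb{F}_n(y)-F(y)|$ is attained at the order statistics:

```python
import random

# Kolmogorov statistic D_n = sup_y |F_n(y) - F(y)| for Uniform(0,1) data;
# for sorted data x_(1) <= ... <= x_(n) the sup occurs at an order
# statistic, so it suffices to scan i/n and (i-1)/n against x_(i).
random.seed(0)

def kolmogorov_stat(n):
    xs = sorted(random.random() for _ in range(n))
    return max(max(abs((i + 1) / n - x), abs(i / n - x))
               for i, x in enumerate(xs))

d_small = kolmogorov_stat(100)
d_large = kolmogorov_stat(100000)
print(d_small, d_large)   # sup-distance shrinks as n grows (a.s. -> 0)
```

Donsker's theorem refines this: $\sqrt{n}\,D_n$ stays stochastically bounded, converging in distribution to the supremum of a Brownian bridge.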
Luckily, in the case of sets, modulo questions of measurability, there are necessary and sufficient conditions for this sequence to converge in distribution to some mean zero Gaussian process. However, these necessary and sufficient conditions are described in terms of the asymptotic behavior of a complicated function of the sample, $\{X_{n}\}_{n=1}^{\infty}$. What we attempt to do in this paper is to obtain additional sufficient conditions that are useful when $X$ takes values in some function space $S$, and the sets in $\mathcal{C}$ involve the time evolution of the stochastic process $X$. Of course, $\mathcal{C}$ is still a class of sets, but a primary goal that emerges here is to provide sufficient conditions for a uniform CLT in terms of the process $X=\{X(t): t \in E\}$ that depend as little as possible on the parameter set $E$. However, classes of sets such as this rarely satisfy the Vapnik-\v{C}ervonenkis condition. Also, this class of examples arises naturally from the study of the median process for independent Brownian motions (see \cite{swanson-scaled-median}, \cite{swanson-fluctuations}), where Swanson studies the limiting quantile process for independent Brownian motions. This was observed by Tom Kurtz, and the follow-up questions led us to start this study. Here we concentrate on empirical process CLTs, and our main result is Theorem 3 below. Another theorem and some examples showing the applicability of Theorem 3 appear in section 7. In section 8 there are additional examples which show that some obvious conjectures one might be tempted to make regarding the CLT formulated here are false. In particular, the examples in section 8.4 motivate the various assumptions we employ in Theorem 3. As for future work, an upgraded version of \cite{vervaat-quantiles} would perhaps allow one to relate the results obtained here to those of Swanson, but this is something to be done elsewhere. 
\section{Previous Results and Some Definitions.} Let $(S, \mathcal{S},P)$ be a probability space and define $(\Omega, \Sigma,Pr)$ to be the infinite product probability space $(S^N,\mathcal{S}^N,P^N)$. If $X_j:\Omega \rightarrow S$ are the natural projections of $\Omega$ into the $j^{th}$ copy of $S$, and $\mathcal{F}$ is a subset of $\mathcal{L}^2(S,\mathcal{S},P)$ with \begin{align} \sup_{ f \in \mathcal{F}} |f(s)|< \infty, s \in S, \end{align} then we define \begin{align} \nu_n(f)=\dfrac{1}{\sqrt{n}}\sum_{j=1}^n \bigl(f(X_j)-\ensuremath{\mathbb E} f(X)\bigr), f \in \mathcal{F}. \end{align} Let $\ell_{\infty}(\mathcal{F})$ be the bounded real valued functions on $\mathcal{F}$, with the $\sup$-norm, and recall that a Radon measure $\mu$ on $\ell_{\infty}(\mathcal{F})$ is a finite Borel measure which is inner regular from below by compact sets. Then the functions $f \rightarrow f(X_j)- E(f(X_j)), j \geq 1,$ are in $\ell_{\infty}(\mathcal{F})$, and we say $\mathcal{F} \in CLT(P)$ if the stochastic processes $\{\nu_n(f),f \in \mathcal{F}\}, n \ge 1,$ converge weakly to a centered Radon Gaussian measure $\gamma_P$ on $\ell_{\infty}(\mathcal{F})$. More precisely, we have the following definition. \begin{defin} Let $\mathcal{F}\subset \mathcal{L}_{2}(P)$ and satisfy (3). Then $\mathcal{F} \in CLT(P)$, or $\mathcal{F}$ is a $P-$Donsker class, if there exists a centered Radon Gaussian measure $\gamma_P$ on $\ell_{\infty}(\mathcal{F})$ such that for every bounded continuous real valued function $H$ on $\ell_{\infty}(\mathcal{F})$ we have $$ \lim_{n \rightarrow \infty} E^{*}(H(\nu_n))= \int Hd\gamma_P, $$ where $E^{*}H$ is the usual upper integral of $H$. If $\mathcal{C}$ is a collection of subsets from $\mathcal{S}$, then we say $\mathcal{C} \in CLT(P)$ if the corresponding indicator functions are a $P$-Donsker class. 
\end{defin} The probability measure $\gamma_P$ of Definition 1 is obviously the law of the centered Gaussian process $G_P$ indexed by $\mathcal{F}$ having covariance function $$ \mathbb{E} G_{P}(f)G_{P}(g)=\mathbb{E}_{P}f g-\mathbb{E}_{P}f\mathbb{E}_{P}g, f,g \in \mathcal{F}, $$ and $L_2$ distance $$ \rho_{P}(f,g)=\bigl(\mathbb{E}_{P}\{(f-g)-\mathbb{E}_{P}(f-g)\}^2\bigr)^{\frac{1}{2}}, f,g \in \mathcal{F}. $$ Moreover, if $\gamma_P$ is as in Definition 1, then it is known that the process $G_P$ admits a version all of whose trajectories are bounded and uniformly $\rho_P$ continuous on $\mathcal{F}$. Hence we also make the following definition. \begin{defin} A class of functions $\mathcal{F}\subset \mathcal{L}_{2}(P)$ is said to be $P$-pregaussian if the mean zero Gaussian process $\{G_{P}(f): f\in\mathcal{F}\}$ with covariance and $L_2$ distance as indicated above has a version with all the sample functions bounded and uniformly continuous on $\mathcal{F}$ with respect to the $L_2$ distance $\rho_P(f,g)$. \end{defin} Now we state some results which are useful for what we prove in this paper. The first is important as it helps us establish counterexamples to natural conjectures one might make in connection to our main result appearing in Theorem 3 below. \begin{thm}\label{delta-c}(Gin\'e and Zinn \cite{empirical-specialinvited} for sufficiency and Talagrand \cite{talagrand-donsker-sets} for necessity of the $\Delta^{\mathcal{C}}$ condition in (ii)) Let $\Delta^{\mathcal{C}}(A)$ denote the number of distinct subsets of $A$ obtained when one intersects all sets in $\mathcal{C}$ with $A$. Then, modulo measurability assumptions, conditions (i) and (ii) below are equivalent.\par \begin{enumerate} \item[(i)] The central limit theorem holds for the process \[\bigl\{\dfrac1{\sqrt{n}}\sum_{j=1}^n \big[I_{X_j\in C} - \Pr(X\in C)\big]:C\in\mathcal{C}\bigr\}\] or more briefly $\mathcal{C}\in CLT(P)$. 
\item[(ii)] \begin{align} &(a)\ \dfrac{\ln \Delta^{\mathcal{C}}(X_1,\ldots,X_n)}{\sqrt{n}}\to 0 \text{ in (outer) probability, and } \label{delta-condition}\\ &(b)~\mathcal{C}~\text{is } P\text{-pregaussian}. \end{align} \end{enumerate} \end{thm} A sufficient condition for the empirical CLT, which is used in the proof of our main theorem, is given in Theorem 4.4 of \cite{clt-localcondit}. \begin{thm}\cite{clt-localcondit}\label{agoz} Let $\mathcal{F}\subset\mathcal{L}_{2}(S,\mathcal{S},P)$ and let $F=\sup_{f\in\mathcal{F}}|f(X)|$. Assume that $\|Pf\|_{\mathcal{F}}<\infty$ and \begin{enumerate} \item[(i)] $u^{2}\Pr^{*}(F >u)\to 0 \text{ as } u\to\infty$, \item[(ii)] $\mathcal{F}$ is $P$-pregaussian, and \item[(iii)] there exists a centered Gaussian process $\{G(f): f \in \mathcal{F}\}$ with $L_2$ distance $d_G$ such that $G$ is sample bounded and uniformly $d_G$ continuous on $\mathcal{F}$, and for some $K>0,$ all $f \in \mathcal{F}$, and all $\epsilon>0$, \end{enumerate} \[\bigl[\sup_{u>0}u^{2}{\Pr}^{*}(\sup_{\{g:d_G(g,f)< \epsilon\}}|f-g|>u)\bigr]^{1/2}\le K\epsilon. \] Then $\mathcal{F} \in CLT(P)$. \end{thm} In this paper we take iid copies $\{X_{j}\}_{j=1}^{\infty}$ of a process $\{X(t): t \in E \}$, and consider \begin{align}\{\dfrac{1}{\sqrt{n}}\sum_{j=1}^n \bigl[I_{X_j(t)\le y}-\Pr(X(t)\le y)\bigr]:t\in E, y \in \mathbb{R}\},\label{sets} \end{align} with the goal of determining when these processes converge in distribution in some uniform sense to a mean zero Gaussian process. For example, if the process $X$ has continuous sample paths on $E$, then $S=C(E)$ and the class of sets $\mathcal{C}$ in (i) of Theorem \ref{delta-c} consists of the sets $\{f\in C(E): f(t)\le y\}$ for $t \in E$ and $y\in\mathbb{R}$, and we examine when $\mathcal{C} \in CLT(P)$. 
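To make the class of sets in (\ref{sets}) concrete, here is a small simulation (our own illustration, with arbitrary grid choices) that evaluates $\nu_n$ over the sets $C_{t,y}=\{f: f(t)\le y\}$ for iid Brownian motions, using $\Pr(X(t)\le y)=\Phi(y/\sqrt{t})$.

```python
import math

import numpy as np

rng = np.random.default_rng(1)

def brownian_paths(n_paths, t_grid, rng):
    """Simulate iid Brownian motions on t_grid via cumulative sums of
    independent Gaussian increments (one row per path)."""
    dt = np.diff(t_grid, prepend=0.0)
    inc = rng.standard_normal((n_paths, len(t_grid))) * np.sqrt(dt)
    return np.cumsum(inc, axis=1)

t_grid = np.linspace(0.1, 1.0, 10)
X = brownian_paths(5000, t_grid, rng)  # X[j] is the j-th iid copy

def nu_n(t_idx, y):
    """nu_n over the set C_{t,y} = {f : f(t) <= y}, i.e.
    n^{-1/2} * sum_j (I_{X_j(t) <= y} - P(X(t) <= y))."""
    n = X.shape[0]
    p = 0.5 * (1.0 + math.erf(y / math.sqrt(2.0 * t_grid[t_idx])))
    return (np.sum(X[:, t_idx] <= y) - n * p) / math.sqrt(n)

vals = np.array([[nu_n(i, y) for y in np.linspace(-2.0, 2.0, 21)]
                 for i in range(len(t_grid))])
```

Each entry is an asymptotically centered Gaussian variable with variance $p(1-p)\le 1/4$, so the array stays uniformly of moderate size over the finite grid; the question addressed below is precisely when such uniform control persists over all of $E\times\mathbb{R}$.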
As a result of the previous definitions, the limiting centered Gaussian process has a version with sample paths in a separable subspace of $\ell_{\infty}(\mathcal{C})$, and as a consequence of the addendum to Theorem 1.5.7 of \cite{vw}, p. 37, almost all sample paths are uniformly $L_2$ continuous on $\mathcal{C}$ provided we identify the indicator functions of sets in $\mathcal{C}$ with $\mathcal{C}$ itself. Furthermore, we also have that $\mathcal{C}$ is totally bounded in the $L_2$ distance with this identification. In addition, the following remark is important in this setting. \begin{rem} The assumption that a centered process $\{X(t): t \in T\}$ with $L_2$ distance $d$ is sample bounded and uniformly continuous on $(T,d)$ is easily seen to follow if $(T,d)$ is totally bounded and the process is uniformly continuous on $(T,d)$. Moreover, if the process is Gaussian, the converse also holds using Sudakov's result as presented in Corollary 3.19 of \cite{led-tal-book}. \end{rem} To state and prove our result we will make use of a distributional transform that appears in a number of places in the literature, see \cite{ferguson-book}. The paper \cite{ruschendorf} provides an excellent introduction to its history, and some uses. In particular, it is used there to obtain an elegant proof of Sklar's theorem, see \cite{sklar}, and also in some related applications. Given the distribution function $F$ of a real valued random variable, $Y$, let $V$ be a uniform random variable on $[0,1]$ independent of $Y$. In this paper we use the distributional transform of $F$ defined as $$ \tilde{F}(x,V)=F(x^{-})+V(F(x)-F(x^{-})), $$ and Proposition 1 in \cite{ruschendorf} shows that \begin{align}\tilde{F}(Y,V) \text{ is uniform on $[0,1]$ }.\label{uniform} \end{align} R\"uschendorf calls $\tilde{F}(Y,V)$ the distributional transform of $Y$, and we also note that $\tilde F(x,V)$ is non-decreasing in $x$. 
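R\"uschendorf's distributional transform is easy to check numerically. The following sketch (our own, for a hypothetical three-point law) implements $\tilde{F}(x,V)=F(x^{-})+V(F(x)-F(x^{-}))$ and tests the uniformity claim (\ref{uniform}) by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(2)

def distributional_transform(y, v, support, probs):
    """tilde F(y, v) = F(y-) + v * (F(y) - F(y-)) for a discrete law with
    atoms at `support` and masses `probs`."""
    support = np.asarray(support, dtype=float)
    probs = np.asarray(probs, dtype=float)
    F_left = probs[support < y].sum()  # F(y-)
    jump = probs[support == y].sum()   # the jump F(y) - F(y-)
    return F_left + v * jump

# A hypothetical discrete law: P(Y=0)=0.2, P(Y=1)=0.5, P(Y=2)=0.3.
support, probs = [0.0, 1.0, 2.0], [0.2, 0.5, 0.3]
n = 50_000
Y = rng.choice(support, size=n, p=probs)
V = rng.random(n)                      # V uniform on [0,1], independent of Y
U = np.array([distributional_transform(y, v, support, probs)
              for y, v in zip(Y, V)])
```

On $\{Y=1\}$, say, $U=0.2+0.5V$ is uniform on $[0.2,0.7]$, and the mixture of the three conditional uniforms reassembles the uniform law on $[0,1]$; without the extra randomization $V$, the variable $F(Y)$ would only take the three values $0.2$, $0.7$, $1$.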
\section{The Main Conditions} Let $\{X(t): t \in E\}$ be a stochastic process as in (\ref{sets}), and assume $$ \rho(s,t)=(E(H_t-H_s)^2)^{\frac{1}{2}}, s,t \in E, $$ where $\{H(t): t \in E\}$ is some Gaussian process which is sample bounded and $\rho$ uniformly continuous on $E$. In our main result, see Theorem 3 below, we hypothesize the relationship between $\{X(t): t \in E\}$ and $\rho(s,t), s,t \in E$ given in (\ref{Lcondition}). The importance of this condition in the proof of our theorem is twofold. First, it allows one to establish that the limiting Gaussian process for our CLT actually exists. This verifies condition (ii) in Theorem 2 above, and is accomplished via a subtle application of the necessary and sufficient conditions for the existence and sample function regularity of a Gaussian process given in \cite{talagrand-generic}. Secondly, it also allows us to verify that the remaining non-trivial condition sufficient for our CLT, namely condition (iii) of Theorem 2, applies in this setting. This is useful in applications as condition (iii) is in terms of a single random element involved in our CLT, and hence is far easier to verify than the typical condition which depends on the full sample of the random elements as in (\ref{delta-condition}). The reader should also note that Theorem 2 is a refinement of a number of previous results sufficient for the CLT, and covers a number of important earlier empirical process papers. The existence of such a $\rho(\cdot,\cdot)$ is obtained for a number of specific processes $\{X(t): t \in E\}$ in section 7. Nevertheless, given the process $X$, determining whether a suitable $\rho$ satisfying (\ref{Lcondition}) exists, or does not exist, may be quite difficult. 
However, using (\ref{L1distance}) we may limit our choice for $\rho$ in (\ref{Lcondition}) to be such that $$ \rho(s,t) \ge c^{-1} \sup_{x \in \mathbb{R}} (\mathbb{E}( |I_{X_{t}\le x} -I_{X_{s}\le x}|^2))^{\frac{1}{2}} $$ for all $s,t \in E$ and some constant $c \in (0,\infty)$. Throughout, $\rho(s,t), s,t \in E$, denotes the $L_2$ metric of a Gaussian process indexed by $E$ and, to simplify notation, we let $$ \tilde{F}_{t}(x) \equiv \widetilde{F_{t}}(x,V), x \in \mathbb{R} $$ be the distributional transform of $F_t$, the distribution function of $X_t$. Note that this simplification of notation also includes using $X_t$ for $X(t)$ when the extra parentheses make the latter clumsy or unnecessary. Moreover, this variable notation is also employed for other stochastic processes in the paper. In addition, for each $\epsilon>0,$ let \begin{align}\label{weakLcondition}\sup_{\{s,t \in E:\rho(s,t)\le \epsilon\}}&{\Pr}^{*}(|\tilde{F}_{t}(X_{s}) -\tilde{F}_{t}(X_{t})|>\epsilon^{2})\le L\epsilon^{2},\\ &(\text{\bf the weak L condition})\notag \end{align} and \begin{align}\label{Lcondition}\sup_{t \in E}{\Pr}^{*}(\sup_{\{s:\rho(s,t)\le \epsilon\}}&|\tilde{F}_{t}(X_{s}) -\tilde{F}_{t}(X_{t})|>\epsilon^{2})\le L\epsilon^{2},\\ &(\text{\bf the L condition})\notag \end{align} \begin{rem}\label{Cepsilon^{2}}In the L conditions the probabilities involve an $\epsilon^{2}$. However, since for any constant, $C\in (0,\infty)$, an $L_{2}$ metric, $\rho$, is pre-Gaussian iff $C\rho$ is pre-Gaussian, WLOG we can change to $C\epsilon^{2}$. Moreover, note that any constant $L$ sufficient for (\ref{Lcondition}) will also suffice for (\ref{weakLcondition}), and hence to simplify notation we do not distinguish between them. \end{rem} \begin{lem} \label{weakLlemma} Let $L$ be as in (\ref{weakLcondition}), and take $s,t \in E$. Then, for all $ x \in \mathbb{R}$ \begin{align}\label{basicLineq} \Pr(X_s \leq x < X_t) \leq (L+1)\rho^2(s,t). 
\end{align} and by symmetry \begin{align}\mathbb{E} |I_{X_{t}\le x} -I_{X_{s}\le x}|&=\Pr(X_t \leq x < X_s)+\Pr(X_s \leq x < X_t) \label{L1distance}\\ &\notag\\ &\le 2(L+1)\rho^2(s,t).\notag \end{align} Further, we have \begin{align}\label{weakLdistribution}\sup_{x}|F_{t}(x)-F_{s}(x)|\le 2(L+1)\rho^2(s,t). \end{align} \end{lem} \begin{rem} As in Lemma 1, Lemmas 2 and 3 below use only the weak L condition (\ref{weakLcondition}). Actually for Lemmas 2 and 3 all we need is Lemma 1. However in Lemma 4 we need the stronger form as stated in (\ref{Lcondition}). \end{rem} \begin{proof} Since $\tilde F_t$ is non-decreasing and $x<y$ implies $F_t(x) \leq \tilde F_t(y)$, we have $$ P(X_s \leq x < X_t) \leq P(\tilde F_t(X_s) \leq \tilde F_t(x),F_t(x) \leq \tilde F_t(X_t)). $$ Thus $$ P(X_s \leq x < X_t) \leq P(F_t(x) \leq \tilde F_t(X_t) \leq F_t(x) + \rho^2(s,t),\tilde F_t(X_s) \leq \tilde F_t(x)) + $$ $$ P(\tilde F_t(X_t) > F_t(x) + \rho^2(s,t),\tilde F_t(X_s) \leq \tilde F_t(x)), $$ and hence $$ P(X_s \leq x < X_t) \leq P(F_t(x) \leq \tilde F_t(X_t) \leq F_t(x) + \rho^2(s,t)) $$ $$ ~~~~~~~~~~~~~~~~~~~~~~~~+ P(|\tilde F_t(X_t) - \tilde F_t(X_s)| >\rho^2(s,t)). $$ Now (\ref{weakLcondition}) implies for all $s,t \in E$ that $$ P(|\tilde F_t(X_t) - \tilde F_t(X_s)| >\rho^2(s,t))\le L\rho^2(s,t), $$ since its failure for $s_0,t_0 \in E$ and $\epsilon= \rho(s_0,t_0)$ in (\ref{weakLcondition}) implies a contradiction. Therefore, since $\tilde F_t(X_t)$ is uniform on $[0,1]$, we have $$ P(X_s \leq x < X_t) \leq \rho^2(s,t) + L\rho^2(s,t). $$ The final conclusion (\ref{weakLdistribution}) follows since $|F_{t}(x)-F_{s}(x)|=|\mathbb{E}(I_{X_{t}\le x}-I_{X_{s}\le x})|\le \mathbb{E}|I_{X_{t}\le x}-I_{X_{s}\le x}|$ for each $x$. \end{proof} \section{The Main Result.} Recall the relationship of the $X$ process and $\rho$ as described at the beginning of section 3. 
Then we have: \begin{thm}\label{main result} Let $\rho$ be given by $\rho^{2}(s,t)=\mathbb{E}(H(s)-H(t))^{2}$, for some centered Gaussian process $H$ that is sample bounded and uniformly continuous on $(E,\rho)$ with probability one. Furthermore, assume that for some $L<\infty,$ and all $\epsilon >0$, the $L$ condition (\ref{Lcondition}) holds, and $D(E)$ is a collection of real valued functions on $E$ such that $P(X(\cdot) \in D(E))=1.$ If $$ \mathcal{C}=\{C_{s,x}: s \in E, x\in \mathbb{R}\}, $$ where $$ C_{s,x}=\{z\in D(E): z(s) \le x\} $$ for $s \in E,x \in \mathbb{R},$ then $\mathcal{C} \in CLT(P)$. \end{thm} \begin{rem} Note that a sample function of the $X$-process is in the set $C_{s,x}$ iff $X_s \le x$. Hence, if we identify a point $(s,x) \in E \times \mathbb{R}$ with the set $C_{s,x},$ then instead of saying $\mathcal{C} \in CLT(P)$, we will often say \[\{I_{X_{s}\le x}-\Pr(X_{s}\le x): s\in E, x\in \mathbb{R}\} \] satisfies the CLT in $\ell_{\infty}(\mathcal{C})$ (or in $\ell_{\infty}(E \times \mathbb{R})$). \end{rem} \begin{rem} At this point the reader may well be questioning the various assumptions in Theorem 3. First we mention that $D(E)$ is some convenient function space. For example, typically the process $X$ has continuous sample paths on $E$, so $D(E)=C(E)$ in these situations. More perplexing, at least for most readers, is probably the appearance of the distributional transforms $\{\tilde{F}_{t}: t \in E\}$ in the L-condition (\ref{Lcondition}). If the distribution functions $F_t$ are all continuous, then $F_t= \tilde{F}_{t}, t \in E,$ and our proof obviously holds with $F_t$ replacing $\tilde{F}_{t}$ in the $L$ condition. However, when the distribution functions $F_t$ are not all assumed continuous, the methods required in our proof fail with this substitution. 
An interesting case where the distributional transforms are useful occurs when one has a point $t_0 \in E$ such that $P(X(t)=X(t_{0})\text{ for all }t\in E)=1$, and $F_{t_0}$ is possibly discontinuous. In this situation, the $L$ condition (\ref{Lcondition}) holds for the Gaussian process $H(t)=g$ for all $t \in E$, $g$ a standard Gaussian random variable, and $X(t_0)$ having any distribution function $F_{t_0}$. Thus Theorem 3 applies, and yields the classical empirical CLT when the set $S$ is the real line and the class of sets consists of half-lines for all laws $F_{t_0}$. A similar result also applies if $E$ is a finite disjoint union of non-empty sets, and the process $\{X_t: t \in E\}$ is constant on each of the disjoint pieces of $E$ regardless of the distribution functions $F_t, t \in E.$ More importantly, however, not only does our method of proof fail when the L-condition is given in terms of at least one discontinuous $F_t$, but, even when the empirical process is assumed to be pregaussian, the example in section 8.4 shows that the L-condition for the process $\{F_s(X_s): s \in E\}$ is not sufficient for the CLT of Theorem 3. One needs to assume something more, and our results show the L-condition for the process $\{ \tilde{F}_{s}(X_s): s \in E \}$ is sufficient. \end{rem} In section 7 we will provide another theorem showing how Theorem 3 can be applied, and hence the examples obtained there are motivation for its formulation in terms of the $L$ condition (\ref{Lcondition}). The following remark also motivates the presence of the process $\{\tilde{F}_{t}(X_{s}): s \in E\}$ and the $L$-condition in our CLT. In particular, we sketch an argument that for each $t \in E$ a symmetric version of this process satisfies a CLT in $\ell_{\infty}(E)$. This remark is meant only for motivation, and in its presentation we are unconcerned with a number of details. 
\begin{rem}\label{NecCond}Let \[\{I_{X_{s}\le x}-\Pr(X_{s}\le x): s\in E, x\in \mathbb{R}\} \] satisfy the Central Limit Theorem in the closed subspace of $\ell_{\infty}(E \times \mathbb{R})$ consisting of functions whose $s$-sections are Borel measurable on $\mathbb{R}$. We denote this subspace by $\ell_{\infty, m}(E\times \mathbb{R})$, and also assume the distribution functions $F_t$ are all continuous. Then, for each fixed $t \in E$ we define the bounded linear operator $\phi:\ell_{\infty,m}(E\times \mathbb{R})\longrightarrow \ell_{\infty}(E)$ given by \[\phi(f)(s)=\int f(s,x)F_{t}(dx). \] Now by the symmetrization Lemma (Lemma 2.7 in \cite{empirical-specialinvited}) we have for a Rademacher random variable $\epsilon$ independent of the empirical process variables that $\{\epsilon I_{C_{s,x}}: s \in E, x \in \mathbb{R}\}$ satisfy the CLT in $\ell_{\infty}(E \times \mathbb{R})$. Taking $f(s,x)= I_{C_{s,x}},$ we have for all $t \in E$ fixed that $$ \phi(f)(s)= 1 - F_t(z(s)^{-})= 1 - F_t(z(s)), $$ as we are assuming the $F_t$ are continuous. Therefore the continuous mapping theorem (see, e.g., Theorem 1.3.6 in \cite{vw}) implies that for each $t\in E$, \begin{align*} Z_n(s)=\frac1{\sqrt{n}}\sum_{j=1}^{n}\epsilon_{j}(1- F_t(X_j(s))), s \in E, \end{align*} satisfies the CLT in $\ell_{\infty}(E)$. In addition, since we are assuming $F_t=\tilde F_t$, we should then have ``asymptotically small oscillations'', namely, for every $\delta>0$ there exists $\epsilon>0$ such that \begin{align*}{\Pr}^*\bigl(\sup_{\rho(s,t)\le \epsilon}\frac1{\sqrt{n}}\sum_{j=1}^{n}\epsilon_{j}\bigl(\tilde{F}_{t}(X_{j}(s))-\tilde{F}_{t}(X_{j}(t))\bigr)>\delta\bigr)\le\delta. 
\end{align*} By using standard symmetry arguments this last probability dominates \begin{align*}\dfrac12 {\Pr}^* \bigl(\max_{j\le n}\sup_{\rho(s,t)\le \epsilon}|\tilde{F}_{t}(X_{j}(s))-\tilde{F}_{t}(X_{j}(t))|>\sqrt{n}\delta\bigr)\le\delta, \end{align*} which (again by standard arguments) implies (modulo multiplicative constants) \begin{align*}n{\Pr}^* \bigl(\sup_{\rho(s,t)\le \epsilon}|\tilde{F}_{t}(X_{s})-\tilde{F}_{t}(X_{t})|>\sqrt{n}\delta\bigr)\le\delta. \end{align*} While this is different from the hypotheses in our Theorem, it indicates that the quantity $\sup_{\rho(s,t)\le \epsilon}|\tilde{F}_{t}(X_{s})-\tilde{F}_{t}(X_{t})|$ is relevant to any such theorem. \end{rem} \vskip0.2in \section{Preliminaries for generic chaining.} \noindent Let $T$ be an arbitrary countable set. Then, following Talagrand in \cite{talagrand-generic} we have: \begin{mydef}An admissible sequence is an increasing sequence $(\mathcal{A}_{n})$ of partitions of $T$ such that \[{\rm Card} \mathcal{A}_{n}\le N_{n}=2^{2^{n}}.\] The partitions $(\mathcal{A}_{n})$ are increasing if every set in $\mathcal{A}_{n+1}$ is a subset of some set of $\mathcal{A}_n$. \end{mydef} We also have \begin{mydef} If $t \in T$, we denote by $A_n(t)$ the unique element of $\mathcal{A}_n$ that contains $t$. For a pseudo-metric $e$ on $T$, and $A \subseteq T$, we write $\Delta_{e}(A)$ to denote the diameter of $A$ with respect to $e$. \end{mydef} Using generic chaining and the previous definitions, Theorem 1.4.1 of \cite{talagrand-generic} includes the following: \begin{thm} Let $\{H(t): t \in T\}$ be a centered Gaussian process with $L_2$ distance $e(s,t)$ on $T$, and assume $T$ is countable. Then the process $\{H(t): t \in T\}$ is uniformly continuous and sample bounded on $(T,e)$ with probability one if and only if there is an admissible sequence of partitions of $T$ such that \begin{align}\label{Talagrand}\lim_{r\to \infty}\sup_{t \in T} \sum_{n \ge r} 2^{n/2} \Delta_e(A_n(t))=0. 
\end{align} \end{thm} If $H$ is centered Gaussian and uniformly continuous on $(T,e)$, then, recalling Remark 1, $H$ being sample bounded on $T$ is equivalent to $(T,e)$ being totally bounded. Also, an immediate corollary of this result used below is as follows. \begin{prop} Let $H_1$ and $H_2$ be mean zero Gaussian processes with $L_2$ distances $e_1, e_2$, respectively, on $T$. Furthermore, assume T is countable, and $e_1(s,t) \leq e_2(s,t)$ for all $s,t \in T$. Then, $H_2$ sample bounded and uniformly continuous on $(T,e_2)$ with probability one, implies $H_1$ is sample bounded and uniformly continuous on $(T,e_1)$ with probability one. \end{prop} \begin{rem} One can prove this using Slepian's Lemma (see, e.g., \cite{fernique-1975}). However, the immediate conclusion is that $H_{1}$ is sample bounded and uniformly continuous on $(T,e_{2})$. Then, a separate argument is needed to show the statement in this proposition. Using the more classical formulation for continuity of Gaussian processes involving majorizing measures (see, e.g., Theorem 12.9 of \cite{led-tal-book}), the result also follows similarly to what is explained below. \end{rem} \begin{proof} By the previous theorem $\{H_2(t): t \in T\}$ is sample bounded and uniformly continuous on $(T,e_2)$ with probability one if and only if there exists an admissible sequence of partitions of $T$ such that $$ \lim_{r \rightarrow \infty} \sup _{t \in T} \sum_{n \geq r} 2^{n/2} \Delta_{e_2}(A_n(t))=0. $$ Since $\Delta_{e_1}(A_n(t)) \leq \Delta_{e_2}(A_n(t))$, we have $$ \lim_{r \rightarrow \infty} \sup _{t \in T} \sum_{n \geq r} 2^{n/2} \Delta_{e_1}(A_n(t))=0, $$ and hence Theorem 4 implies $H_1$ is sample bounded and uniformly continuous on $(T,e_1)$ with probability one. Thus the proposition is proven. \end{proof} \section{The Proof of Theorem 3.} \vskip0.2in First we establish some necessary lemmas, and the section ends with the proof of Theorem 3. 
Throughout we take as given the assumptions and notation of that theorem. \subsection{Some Additional Lemmas.} In order to simplify notation, we denote the $L_2$ distance on the class of indicator functions $$ \mathcal{F}= \{I_{X_s \leq x} : s \in E, x \in \mathbb{R}\} $$ by writing $$ \tau((s,x),(t,y)) = \{E(( I_{X_s \le x} - I_{X_t \le y})^2)\}^{\frac{1}{2}}, $$ and identifying $\mathcal{F}$ with $E \times \mathbb{R}$. Our next lemma relates the $\tau$-distance and the $\rho$-distance. It upgrades (\ref{L1distance}) when $x\neq y$. \begin{lem}\label{tau estimate}Assume that (\ref{weakLcondition}) holds. Then \begin{align} \tau^{2}((s,x),(t,y)) \leq \min_{u \in \{s,t\}} |F_{u}(y)-F_{u}(x)| + (2L+2)\rho^{2}(t,s). \label{tau-rho} \end{align} Moreover, if $Q$ denotes the rational numbers, there is a countable dense set $E_0$ of $(E,\rho)$ such that $\mathcal{F}_0= \{I_{X_s \le x}: (s,x) \in E_0 \times Q\}$ is dense in $(\mathcal{F},\tau)$. \end{lem} \begin{proof} First observe that by using the symmetry in $s$ and $t$ of the right-hand side of (\ref{tau-rho}), we have by applying (\ref{basicLineq}) in the second inequality below that $$ \tau^{2}((s,x),(t,y))=\mathbb{E}|I_{X_{t}\le y}-I_{X_{s}\le x}|\le \mathbb{E}|I_{X_{t}\le y}-I_{X_{t}\le x}|+\mathbb{E}|I_{X_{t}\le x}-I_{X_{s}\le x}| $$ $$ =|F_{t}(y)-F_{t}(x)| + \Pr\bigl(X_{s}\le x<X_{t}\bigr) + \Pr\bigl(X_{t}\le x<X_{s}\bigr) $$ $$ \leq |F_{t}(y)-F_{t}(x)| + (2+2L)\rho^2(s,t). $$ Similarly, we also have by applying (\ref{basicLineq}) again that $$ \tau^{2}((s,x),(t,y)) \leq |F_{s}(y)-F_{s}(x)| + (2+2L)\rho^2(s,t). $$ Combining these two inequalities for $\tau$ completes the proof of (\ref{tau-rho}). Since $(E, \rho)$ is totally bounded, there is a countable dense set $E_0$ of $(E,\rho)$, and hence the right continuity of the distribution functions and (\ref{tau-rho}) then imply the final statement in Lemma 2. 
\end{proof} \vskip0.2in Using Lemma 2 and the triangle inequality we can estimate the $\tau$-diameter of sets as follows. \begin{cor}\label{tau diam}If $t_{B}\in B\subseteq E$ and $D\subseteq \mathbb{R}$, then \[\text{diam}_{\tau}(B\times D)\le 2\{ (2L+2)^{1/2}\text{diam}_{\rho}(B) + \sup_{x,y\in D}|F_{t_{B}}(y)-F_{t_{B}}(x)|^{1/2}\}.\] \end{cor} \begin{lem}\label{tau lower} Assume that $(s,x)$ satisfies $$ \tau((s,x),(t,y))=\|I_{X_{s}\le x}-I_{X_{t}\le y}\|_{2}\le\epsilon, $$ $\rho(s,t)\le \epsilon,$ and (\ref{weakLcondition}) holds. Then, for $c=(2L+2)^{1/2}+1$, \[|F_{t}(x) - F_{t}(y)| \le (c\epsilon)^{2}, \] or in other words \[|F_{t}(x) - F_{t}(y)| \le (c\max\{\tau((s,x),(t,y)),\rho(s,t)\})^{2}. \] \end{lem} \begin{proof} Using (\ref{basicLineq}) in the second inequality below we have $$ |F_{t}(y)-F_{t}(x)|^{1/2} =\|I_{X_{t}\le x}-I_{X_{t}\le y}\|_{2}\le\|I_{X_{s}\le x}-I_{X_{t}\le y}\|_{2}+\|I_{X_{s}\le x}-I_{X_{t}\le x}\|_{2} $$ $$ ~~~~~~~~~~~~~~~~~~~~~~~\le \epsilon +\biggl(\Pr(X_{s}\le x<X_{t})+\Pr(X_{t}\le x< X_{s})\biggr)^{1/2} $$ $$ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~\leq \epsilon+(2L\epsilon^{2}+2\epsilon^{2})^{1/2}=[(2L+2)^{1/2}+1]\epsilon \equiv c\epsilon. $$ Hence the lemma is proven. \end{proof} The next lemma is an important step in verifying the weak-$L_2$ condition in item (iii) of Theorem 2 above, i.e., see Theorem 4.4 of \cite{clt-localcondit}. \begin{lem}If (\ref{Lcondition}) holds, $c$ is as in Lemma 3, and $$ \lambda((s,x),(t,y)) = \max\{ \tau((s,x),(t,y)), \rho(s,t)\}, $$ then for all $(t,y)$ and $\epsilon>0$ \[{\Pr}^{*}\bigl(\sup_{\{(s,x):\lambda((t,y),(s,x))\le\epsilon\}}|I_{X_{t}\le y} - I_{X_{s}\le x}|>0\bigr)\le (2c^{2}+2L+1)\epsilon^{2}. 
\] \end{lem} \begin{proof}First we observe that \begin{align*}{\Pr}^*(&\sup_{\{(s,x):\lambda((t,y),(s,x))\le\epsilon\}}|I_{X_{t}\le y} -I_{X_{s}\le x}|>0)\\ &\\ &= {\Pr}^*(\sup_{\{(s,x):\lambda((t,y),(s,x))\le\epsilon\}}I_{X_{t}\le y,X_{s}>x} + I_{X_{s}\le x,X_{t}>y}>0).\notag\\ \end{align*} Again, using the fact that $x<y$ implies $F_t(x) \leq \tilde F_t(y)$, we have \begin{align*}{\Pr}^*(&\sup_{\{(s,x):\lambda((t,y),(s,x))\le\epsilon\}}|I_{X_{t}\le y} -I_{X_{s}\le x}|>0)\\ &\\ &\le{\Pr}^*(\sup_{\{(s,x):\lambda((t,y),(s,x))\le\epsilon \}}I_{\tilde{F}_{t}(X_{t})\le \tilde{F}_{t}(y), {F}_{t}(x)\le \tilde{F}_{t}(X_{s})}>0)\notag\\ &\\ &\phantom{*******}+{\Pr}^*(\sup_{\{(s,x):\lambda((t,y),(s,x))\le\epsilon \}}I_{\tilde{F}_{t}(X_{s})\le \tilde{F}_{t}(x), {F}_{t}(y)\le\tilde{F}_{t}(X_{t})}>0)=I + II,\notag\\ \end{align*} where \begin{align} I= {\Pr}^*(\sup_{\{(s,x):\lambda((t,y),(s,x))\le\epsilon\}}I_{\tilde{F}_{t}(X_{t})\le \tilde{F}_{t}(y), {F}_{t}(x)\le \tilde{F}_{t}(X_{s})}>0)\label{inf} \end{align} and \begin{align} II={\Pr}^*(\sup_{\{(s,x):\lambda((t,y),(s,x))\le\epsilon \}}I_{\tilde{F}_{t}(X_{s})\le \tilde{F}_{t}(x), {F}_{t}(y)\le\tilde{F}_{t}(X_{t})}>0) \label{I,II}. \end{align} \vskip0.3in At this point we use Lemma \ref{tau lower} to see that in (\ref{inf}) we can use \[ \inf_{\{(s,x):\lambda((t,y),(s,x))\le\epsilon\}}F_{t}(x)\ge F_{t}(y)-(c\epsilon)^{2}. \] Therefore, by again using (\ref{Lcondition}) \begin{align*}I\le {\Pr}^*(\sup_{\{(s,x):\lambda((t,y),(s,x))\le \epsilon\}}&I_{\tilde{F}_{t}(X_{t})\le {F}_{t}(y), F_{t}(y)-(c\epsilon)^{2}\le \tilde{F}_{t}(X_{s})}>0)\\ &\\ \le{\Pr}^*(\sup_{\{(s,x):\lambda((t,y),(s,x))\le \epsilon \}}&I_{\tilde{F}_{t}(X_{t})\le {F}_{t}(y), F_{t}(y)-(c\epsilon)^{2}\le\tilde{F}_{t}(X_{t})+ \epsilon^2 } >0) +L\epsilon^{2}\\ &\\ \le\Pr(&{F_{t}(y)-(c\epsilon)^{2}-\epsilon^{2}\le \tilde{F}_{t}(X_{t})\le {F}_{t}(y)}\bigr)+L\epsilon^{2}\\ &\\ \le &(c^{2}+L+1)\epsilon^{2} \text{ by } (\ref{uniform}). 
\end{align*} Now, we estimate II in (\ref{I,II}). Using the fact that $\tilde F_t(x) \leq F_t(x)$ for all $x$, Lemma 3, and our definition of $L$, we therefore have \begin{align*}{\Pr}^*(&\sup_{\{(s,x):\lambda((t,y),(s,x))\le\epsilon \}}I_{\tilde{F}_{t}(X_{s})\le \tilde{F}_{t}(x), {F}_{t}(y)\le\tilde{F}_{t}(X_{t})}>0)\\ &\\ &\le\Pr\Bigl(\tilde{F}_{t}(X_{t})-\epsilon^{2}\le F_{t}(y)+(c\epsilon)^{2}, {F}_{t}(y)\le\tilde{F}_{t}(X_{t})\Bigr)+L\epsilon^{2}\\ &\\ &\le (c^{2}+L+1)\epsilon^{2}. \end{align*} \end{proof} \subsection{The Construction and the proof of Theorem 3.} Since $(E, \rho)$ is totally bounded by Remark 1, take $E_0$ to be any countable dense subset of $E$ in the $\rho$ distance. Then by Theorem 4, Talagrand's continuity theorem, there exists an admissible sequence of partitions, $\mathcal{B}_{n}$ of $E_0$ for which \begin{align}\label{gamma-tau} \lim_{r\rightarrow \infty}\sup_{t \in E_0}\sum_{n\ge r}2^{n/2}\Delta_{\rho}(B_{n}(t)) =0. \end{align} Fix $n$. Then, for each $B\in\mathcal{B}_{n-1}$ choose $t_{B}\in B$. Fix the distribution function $F_{B}:=F_{t_{B}}$ and let $\mu_{B}$ be the associated probability measure. Put $\alpha=(\Delta_{\rho}(B) + 2^{-n})^{2}$ and set $z_{1}=\sup\{x\in\mathbb{R}: F_{B}(x) < \alpha\}$. We consider two cases. \begin{itemize} \item $F_{B}(z_{1})\le \alpha$ and \item $F_{B}(z_{1})> \alpha$. \end{itemize} In the first case in fact $F_{B}(z_{1})=\alpha$: if $F_{B}(z_{1})<\alpha$, then by right continuity there exists $w>z_{1}$ such that $F_{B}(w)<\alpha$, which contradicts the definition of $z_{1}$. In this case we consider $C_{1}=(-\infty, z_{1}]$ and $D_{1}=\emptyset$. In the second case we let $C_{1}=(-\infty, z_{1})$ and $D_{1}=\{z_{1}\}$. In either case $\mu_B(C_{1}\cup D_{1})\ge\alpha$. Now assume that we have constructed $C_{1},\ldots,C_{k}$ and $D_{1},\ldots,D_{k}$. In particular, the point $z_{k}$ has been defined. If $\mu_{B}\bigl((z_{k},\infty)\bigr)\ge \alpha$, let $z_{k+1}=\sup\{x>z_{k}: F_{B}(x)-F_{B}(z_{k})< \alpha\}$. 
If $z_{k+1}=\infty$, we set $C_{k+1}=(z_k,\infty)$ and $D_{k+1}= \emptyset$. Otherwise, if $z_{k+1} < \infty$, we again have two cases. \begin{itemize} \item $F_{B}(z_{k+1})-F_{B}(z_{k})\le \alpha$ and \item $F_{B}(z_{k+1})-F_{B}(z_{k})> \alpha$. \end{itemize} In the first case we consider $C_{k+1}=(z_{k}, z_{k+1}]$ and $D_{k+1}=\emptyset$. In the second case we let $C_{k+1}=(z_{k}, z_{k+1})$ and $D_{k+1}=\{z_{k+1}\}$. As before, $\mu_B(C_{k+1}\cup D_{k+1})\ge\alpha$. Hence, there can be at most $\dfrac1{\alpha}$ steps before $\{C_{k},D_{k}\}_{k}$ cover $\mathbb{R}$. Therefore, after eliminating any empty set, we have a cover with at most $\dfrac2{\alpha}$ sets. By our choice of $\alpha$ the cover has at most $2^{2n+1}$ sets. Hence since we have $B \in \mathcal{B}_{n-1}$, the number of sets used to cover $E_0 \times \mathbb{R}$ is less than or equal to $ 2^{2^{n-1}}2^{2n+1}$. The reader should note that the points $\{z_k\}$ depend on the set $B$, but we have suppressed that to simplify notation. We now check the $\tau$-diameters of the non-empty $B\times C_{k}$ and $B\times D_{k}$. To estimate these diameters, note that a diameter is at most twice a radius, so by the triangle inequality we may bound the $\tau$-diameter by taking one of $s$ and $t$ to be $t_{B}$. Also note that in Lemma 2, or Corollary 1, the term which contains $|F_{t_{B}}(y)-F_{t_{B}}(x)|$ would cause trouble in the case $D_{k}\neq\emptyset$, since this is only known to be $\ge \alpha$. Luckily it does not appear when $D_{k}\neq\emptyset$. First we consider the $\tau$-diameter of sets of the form $B\times C_{k}$ when $D_{k}=\emptyset$. Then $C_k=(z_{k-1,B}, z_{k,B}]$. Hence for $(s,x), (t,y)\in B\times C_{k}$, Corollary 1 implies \[\Delta_{\tau}(B\times (z_{B,k-1},z_{B,k}])\le 2\Bigg((2L+2)^{1/2}\Delta_{\rho}(B) + \Delta_{\rho}(B) + \dfrac1{2^{n}}\Bigg). 
\] When $D_{k}\neq\emptyset$, then $C_k=(z_{k-1,B}, z_{k,B})$, so again by Corollary 1 the $\tau$-diameter of $B \times C_k$ has an upper bound as in the previous case. Moreover, if $D_{k}\neq\emptyset$, then the only element of $D_{k}$ is $z_{k,B}$ and by Corollary 1 we have $$ \Delta_{\tau}(B \times D_k) \leq 2(2L+2)^{\frac{1}{2}}\Delta_{\rho}(B). $$ So, in either case \begin{align}\label{Delta}\max\bigl(\Delta_{\tau}(B\times C_{k,B}),\Delta_{\tau}(B\times D_{k,B})\bigr)\le 2\Bigg((2L+2)^{1/2}\Delta_{\rho}(B) + \Delta_{\rho}(B) + \dfrac1{2^{n}}\Bigg). \end{align} \begin{lem}\label{partitions}Let $\mathcal{G}_{n}$ be a sequence of partitions of an arbitrary parameter set $T$ with pseudo metric $e$ on $T$ satisfying both \begin{enumerate} \item ${\rm Card}(\mathcal{G}_{n})\le 2^{2^{n}}$ and \item $\lim_{r\to\infty}\sup_{t \in T}\sum_{n\ge r}2^{n/2}\Delta_{e}(G_{n}(t))=0$, \end{enumerate} and set $\mathcal{H}_{n}:=\mathcal{P}(\bigcup_{1 \le k\le n-1}\mathcal{G}_{k})$, where $\mathcal{P}(\mathcal{D})$ denotes the minimal partition generated by the sets in $\mathcal{D}$. Then the sequence $\mathcal{H}_n$ (note the $n-1$ in the union) also satisfies those conditions. \end{lem} \begin{proof}The first condition holds since a simple induction on $n$ implies the minimal partition $$ \mathcal{H}_n=\mathcal{P}( \cup_{1 \leq k \leq n-1} \mathcal{G}_k) $$ has cardinality at most $\prod_{k=1}^{n-1} 2^{2^{k}} = 2^{2^{n}-2} \leq 2^{2^n}$. The second condition holds since the partitions $\mathcal{H}_{n}$ refine the partitions $\mathcal{G}_{n-1}$, and hence $\Delta_{e}(H_{n}(t))\le \Delta_{e}(G_{n-1}(t)).$ \end{proof} \begin{lem}\label{tauPG}Let $E_0$ be a countable dense subset of $(E, \rho)$. Then there exists an admissible sequence of partitions $\{\mathcal{A}_n: n \ge 0\}$ of $E_0 \times \mathbb{R}$ such that \begin{align}\label{tauPreG} \lim_{r\to\infty}\sup_{(t,y) \in E_0\times \mathbb{R}}\sum_{n\ge r}2^{n/2}\Delta_{\tau}(A_{n}((t,y)))=0.
\end{align} \end{lem} \begin{proof} We construct the admissible sequence of partitions $\mathcal{A}_{n}$ from the construction above. More precisely, let $\{\mathcal{B}_n: n \ge 0\}$ be an increasing sequence of partitions of $E_0$ such that (\ref{gamma-tau}) holds, and for which the construction above yields (\ref{Delta}). That is, for $k \ge 1$ let $$ \mathcal{G}_k =\{B \times F:B \in \mathcal{B}_{k-1},~F \in \mathcal{E}_B\} $$ where $$ \mathcal{E}_B = \{C_{j,B},D_{j,B}:\ \text{all such nonempty sets}\} $$ and $C_{j,B},D_{j,B}$ are constructed from $B \in \mathcal{B}_{k-1}$ as above. Then, for $n \ge 3$ set $$ \mathcal{A}_n =\mathcal{P}( \cup_{2 \leq k \leq n-1} \mathcal{G}_k), $$ where $\mathcal{P}(\mathcal{D})$ is the minimal partition generated by the sets in $\mathcal{D}$, and for $n=1,2$ we take $\mathcal{A}_n$ to be the single set $E_0\times \mathbb{R}$. Since the cardinality of the partitions $\mathcal{G}_k$ defined above is less than or equal to $ 2^{2^{k-1}}2^{2k+1}$, a simple induction on $n$ implies the minimal partition $$ \mathcal{A}_n=\mathcal{P}( \cup_{2 \leq k \leq n-1} \mathcal{G}_k) $$ has cardinality at most $\prod_{k=2}^{n-1} 2^{2^{k-1}}2^{2k+1} \leq 2^{2^n}$. By (\ref{Delta}) and Lemma 5 we have \begin{align*}\sup_{(t,y)}\sum_{n\ge r}2^{n/2}\Delta_{\tau}(A_{n}(t,y))\le C\{ \sup_{t} \sum_{n\ge r}2^{n/2}\Delta_{\rho}(B_{n}(t))+ \sum_{n\geq r}2^{n/2}2^{-n}\}. \end{align*} Thus (\ref{gamma-tau}) implies $\tau$ satisfies (\ref{tauPreG}) with respect to the sequence of admissible partitions $\mathcal{A}_n$ on $E_0 \times \mathbb{R}$. \end{proof} \noindent{\sl Proof of Theorem \ref{main result}.} Let $Q$ denote the rational numbers. Then, if we restrict the partitions $\mathcal{A}_n$ of $E_0 \times \mathbb{R}$ in Lemma 6 to $E_0 \times Q$, we immediately have \begin{align}\label{PreG1} \lim_{r\to\infty}\sup_{(t,y) \in E_0\times Q}\sum_{n\ge r}2^{n/2}\Delta_{\tau}(A_{n}((t,y)))=0, \end{align} and $(E_0\times Q, \tau)$ is totally bounded.
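Here we use that taking traces can only shrink sets: for any $A \subseteq E_0 \times \mathbb{R}$,
\[
\Delta_{\tau}\bigl(A \cap (E_0 \times Q)\bigr) \le \Delta_{\tau}(A),
\]
and the traces $\{A \cap (E_0\times Q): A \in \mathcal{A}_n\}$ again form an admissible sequence of partitions of $E_0 \times Q$, so each sum in (\ref{tauPreG}) can only decrease under the restriction.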
Now let $\{G_{(s,x)}: (s,x) \in E \times \mathbb{R}\}$ be a centered Gaussian process with $E(G_{(s,x)}G_{(t,y)})= P(X_s \leq x, X_t \le y)$. Then, $G$ has $L_2$ distance $\tau$, and by (\ref{PreG1}) and Theorem 4, it is uniformly continuous on $(E_0 \times Q, \tau)$. Hence if $\{H_{(s,x)}: (s,x) \in E_0 \times Q\}$ is a centered Gaussian process with $$ E(H_{(s,x)}H_{(t,y)})= P(X_s \leq x, X_t \le y)-P(X_s\le x)P(X_t \le y), $$ then $$ E((H_{(s,x)} - H_{(t,y)})^2) =\tau^2((s,x),(t,y))- (P(X_s \le x)- P(X_t \le y))^2. $$ Hence the $L_2$ distance of $H$ is no larger than that of $G$, and therefore Proposition 1 implies the process $H$ is uniformly continuous on $(E_0 \times Q, d_H)$. By Lemma 2 the set $E_0 \times Q$ is dense in $(E \times \mathbb{R}, \tau)$, and since $$ d_H((s,x),(t,y)) \le \tau((s,x),(t,y)) $$ we also have $E_0 \times Q$ is dense in $(E \times \mathbb{R}, d_H)$. Thus the Gaussian process $\{H_{(s,x)}: (s,x) \in E \times \mathbb{R}\}$ has a uniformly continuous version, which we also denote by $H$, and since $(E \times \mathbb{R}, d_H)$ is totally bounded the sample functions are bounded on $E \times \mathbb{R}$ with probability one. If $\mathcal{F}= \{ I_{X_s \le x}: (s,x) \in E \times \mathbb{R}\}$, then since $$ d_H((s,x),(t,y))= \rho_P(I_{X_s \le x}, I_{X_t \le y}), $$ the continuity of $H$ on $(E \times \mathbb{R},d_H)$ implies condition (ii) in Theorem \ref{agoz} is satisfied. Since $I_{X_{t}\le y}$ is bounded, condition (i) in Theorem \ref{agoz} is also satisfied. Therefore, Theorem 3 follows once we verify condition (iii) of Theorem \ref{agoz}. To verify (iii) we use Lemma 4. As before, we identify the function $f=I_{X_s \le x} \in \mathcal{F}$ with the point $(s,x) \in E\times \mathbb{R}$. Hence, for the centered Gaussian process $$ \{G_f: f \in \mathcal{F}\} $$ in (iii) of Theorem \ref{agoz}, for $(s,x) \in E \times \mathbb{R}$, we take the process $$ \tilde G_{(s,x)} = G_{(s,x)} + \tilde H_{s}.
$$ In our definition of $\tilde G$ we are assuming: (a) $\{\tilde H_s: s \in E\}$ is a Gaussian process whose law is that of the process $\{H_s: s \in E\}$ given in the theorem, and independent of everything in our empirical model, and (b) $\{G_{(s,x)}: (s,x) \in E\times \mathbb{R}\}$ is a uniformly continuous and sample bounded version of the Gaussian process, also denoted by $G_{(s,x)}$, defined above on $E_0 \times Q$. The extension to all of $E \times \mathbb{R}$ again follows from the fact that $E_0 \times Q$ is dense in $(E\times \mathbb{R},\tau)$. Therefore, $\tilde G$ is sample bounded and uniformly continuous on $E \times \mathbb{R}$ with respect to its $L_2$ distance $$ d_{\tilde G}((s,x),(t,y))= \{ \tau^2((s,x),(t,y)) + \rho^2(s,t)\}^{1/2}. $$ Condition (iii) of Theorem \ref{agoz} now follows easily from Lemma 4 since for $(t,y)$ fixed, $$ \{(s,x): \lambda((s,x),(t,y)) \le \epsilon/2\} \subseteq \{(s,x): d_{\tilde G}((s,x),(t,y)) \le \epsilon\}, $$ and for a random variable $Z$ bounded by one we have $$ \sup_{t >0}t^2P(|Z|>t)\le P(|Z|>0). ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\square $$ \section{Another Theorem and some Examples.} Let $\{X_t: t \in E\}$ be a sample continuous process such that (I) $\sup_{t \in E}|F_t(x) - F_t(y)| \le k|x-y|^{\beta}$ for all $x,y \in \mathbb{R}$, some $k < \infty$, and some $\beta \in (0,1]$.\\ (II) $|X_t - X_s| \le \Gamma \phi(s,t)$ for all $s,t \in E$, and for some $\eta>0$ and all $x \ge x_0$\\ $$ P(\Gamma \ge x) \le x^{-\eta}. $$ (III) For $\alpha \in (0,\alpha_0)$ we have $(\phi(s,t))^{\alpha} \le \rho_{\alpha}(s,t), s,t \in E$, where $\rho_{\alpha}(s,t)$ is the $L_2$ distance of a sample bounded, uniformly continuous, centered Gaussian process on $(E, \rho_{\alpha})$, which we denote by $\{H^{\alpha}(t): t \in E\}$. \begin{thm} Let $\{X_t: t \in E\}$ be a sample continuous process satisfying (I-III) above.
Then \[\{I_{X_{s}\le x}-\Pr(X_{s}\le x): s\in E, x\in \mathbb{R}\} \] satisfies the Central Limit Theorem in $\mathcal{L}^{\infty}(E\times \mathbb{R})$. This CLT also holds under (I-III) provided (II) is strengthened to hold for all $\eta>0$ and $x \ge x_{\eta}$, and (III) holds for some $ \alpha \in (0, 2/\beta)$. \end{thm} \begin{rem} If the process $\{X_t: t \in E\}$ is itself a Gaussian process, then the CLT of Theorem 5 holds provided (I) is satisfied, (II) is such that $|X_t - X_s| \le \Gamma \phi(s,t)$ for all $s,t \in E$ and $\Gamma< \infty$, and (III) holds only for some $\alpha \in (0, 2/\beta)$. This follows from the final conclusion of the theorem, and the Fernique-Landau-Shepp result, which implies exponential estimates for the probability in (II). \end{rem} \begin{proof} The theorem follows by taking $\{H(t): t \in E \}$ in Theorem 3 to be a suitably chosen $\{H^{\alpha}(t): t \in E\}$ as in (III), and then verifying condition (\ref{Lcondition}). Since (I) implies the distribution functions $F_t$ are all continuous, the distributional transforms in (\ref{Lcondition}) are simply the distribution functions themselves. Therefore, applying (I), with $\alpha \in (0, \alpha_0)$ to be chosen later, we have \begin{align} {\Pr}^*(\sup_{\rho_{\alpha}(s,t) \le \epsilon}|F_t(X_s)- F_t(X_t)| \ge \epsilon^2)\le {\Pr}^*(\sup_{\rho_{\alpha}(s,t) \le \epsilon}|X_s- X_t| \ge (\frac{\epsilon^2}{k})^{\frac{1}{\beta}}).
\end{align} Hence (II) implies \begin{align} {\Pr}^*(\sup_{\rho_{\alpha}(s,t) \le \epsilon}|F_t(X_s)- F_t(X_t)| \ge \epsilon^2)\le P\Bigl( \Gamma \ge (\frac{\epsilon^2}{k})^{\frac{1}{\beta}} (\sup_{\rho_{\alpha} (s,t) \le \epsilon} \phi(s,t))^{-1}\Bigr), \end{align} and since (III) implies $$ (\sup_{\rho_{\alpha} (s,t) \le \epsilon} \phi(s,t))^{-1} \ge (\sup_{\rho_{\alpha} (s,t) \le \epsilon} \rho_{\alpha}(s,t))^{\frac{-1}{\alpha}} \ge \epsilon^{\frac{-1}{\alpha}}, $$ the tail estimate in (II) therefore yields \begin{align} {\Pr}^*(\sup_{\rho_{\alpha}(s,t) \le \epsilon}|F_t(X_s)- F_t(X_t)| \ge \epsilon^2)\le k^{\frac{\eta}{\beta}} \epsilon^{\frac{-2\eta}{\beta}} \epsilon^{\frac{\eta}{\alpha}} \le k^{\frac{\eta}{\beta}} \epsilon^{2} , \end{align} provided $\alpha >0$ is chosen sufficiently small so that $\eta(\frac{1}{\alpha} -\frac{2}{\beta}) \ge 2$ and $0< \epsilon< \epsilon_0$ is sufficiently small to imply $k^{\frac{-1}{\beta}}\epsilon^{\frac{2}{\beta} - \frac{1}{\alpha}} > x_0$. To obtain the final conclusion of the theorem take $\alpha \in (0,2/\beta)$ and $\eta$ sufficiently large that $\eta(1/\alpha -2/\beta)>2$. Then, for $0< \epsilon< \epsilon_{\eta}$ sufficiently small that $k^{\frac{-1}{\beta}}\epsilon^{\frac{2}{\beta} - \frac{1}{\alpha}} > x_{\eta}$ we again have (24). \end{proof} \begin{cor}Let $\{Y_t: t \in [0,T]\}$ be a sample continuous $\gamma$-fractional Brownian motion for $0<\gamma< 1$ such that $Y_0=0$ with probability one, and set $X_t = Y_t + Z$, where $Z$ is independent of $\{Y_t: t \in [0,T]\}$ and has a bounded density function. Then, \[\{I_{X_{s}\le x}-\Pr(X_{s}\le x): s\in [0,T], x\in \mathbb{R}\} \] satisfies the Central Limit Theorem in $\mathcal{L}^{\infty}([0,T]\times \mathbb{R})$. \end{cor} \begin{rem} The addition of the random variable $Z$ in the previous corollary suffices to make condition (I) hold, and is not used in any other way.
Furthermore, below we will see that something of this sort is necessary, since we will show that the CLT of the previous corollary fails for the fractional Brownian motion process $Y$ itself. \end{rem} \begin{proof} The $L_2$ distance for $\{X_t: t \in [0,T]\}$ is given by \begin{align} E((X_s -X_t)^2)^{\frac{1}{2}}=c_{\gamma}|s-t|^{\gamma}, s,t \in [0,T], \end{align} and without loss of generality we may assume the process to be normalized so that $c_{\gamma}=1$. Furthermore, it is well known that these processes are H\"older continuous on $[0,T]$, i.e. for every $\theta< \gamma/2$ we have \begin{align} |X_t-X_s| \leq \Gamma|t-s|^{\theta}, s, t \in [0,T], \end{align} where \begin{align} E(\exp\{c\Gamma^2\}) <\infty, \end{align} for some $c>0.$ That $\Gamma$ has exponential moments is due to the Fernique-Landau-Shepp Theorem, and hence conditions (I-III) hold provided we take the Gaussian process $H^{\alpha}$ of condition (III) to be an $\alpha\theta$-fractional Brownian motion for any $\theta<\gamma/2$. Hence the corollary follows from Theorem 5. \end{proof} \begin{cor}Let $I=[0,T]$ and $\{Y_{(s,t)}: (s,t) \in I\times I\}$ be a sample continuous Brownian sheet such that $Y_{(0,t)}=Y_{(s,0)}=0, 0\le s,t \le T$ with probability one, and set $X_{(s,t)} = Y_{(s,t)} + Z$, where $Z$ is independent of $\{Y_{(s,t)}: (s,t) \in I \times I\}$ and has a bounded density function. Then, \[\{I_{X_{(s,t)}\le x}-\Pr(X_{(s,t)}\le x): (s,t) \in I\times I, x\in \mathbb{R}\} \] satisfies the Central Limit Theorem in $\mathcal{L}^{\infty}((I\times I)\times \mathbb{R})$. \end{cor} \begin{proof} First of all observe that since $Z$ has a bounded density, and is independent of the Brownian sheet $Y$, we have that condition (I) holds. Furthermore, from Theorem 1 of \cite{yeh-wiener}, these processes are H\"older continuous on $I \times I$, i.e.
for $(s,t), (u,v) \in I \times I$ and $0< \gamma<1/2$ we have \begin{align} |X_{(s,t)}-X_{(u,v)}| \leq \Gamma((\frac{u-s}{T})^2 + (\frac{v-t}{T})^2)^{\frac{\gamma}{2}}, \end{align} where \begin{align} E(\exp\{c\Gamma^2\}) <\infty, \end{align} for some $c>0.$ That $\Gamma$ has exponential moments is due to the Fernique-Landau-Shepp Theorem, and hence conditions (I-III) hold provided we take the Gaussian process $H^{\alpha}_{(s,t)}$ of condition (III) to be \begin{align} H^{\alpha}_{(s,t)} = Y_s + Z_t + Z, (s,t) \in I \times I, \end{align} where the processes $\{Y_s: s \in I\}$ and $\{Z_t: t \in I\}$ are independent $\theta$-fractional Brownian motions, independent of the normal zero-one random variable $Z$. To determine $\theta$ we set $$ \phi((s,t),(u,v)) = ((\frac{u-s}{T})^2 + (\frac{v-t}{T})^2)^{\frac{\gamma}{2}}, $$ where $0< \gamma<1/2$. Then, $$ \phi((s,t),(u,v)) \le(|\frac{u-s}{T}|+|\frac{v-t}{T}|)^{\gamma} \le |\frac{u-s}{T}|^{\gamma} + |\frac{v-t}{T}|^{\gamma}. $$ Normalizing the $Y_s$ and $Z_t$ processes suitably, we have $$ \rho_{\alpha}^2((s,t),(u,v))=E((H^{\alpha}_{(u,v)} - H^{\alpha}_{(s,t)})^2)=( |\frac{u-s}{T}|^{2\theta} +|\frac{v-t}{T}|^{2\theta}). $$ Hence for $0<\alpha<1$, $\gamma =1/4$, and $ \theta<\alpha/8$ we have $$ \phi^{\alpha}((s,t),(u,v))\le \rho_{\alpha}((s,t),(u,v)). $$ Hence the corollary follows from Theorem 5. \end{proof} \section{Examples where our CLT fails.} \subsection{Fractional Brownian Motions.} Since the class of sets in our CLT arises using the Vapnik-Chervonenkis class of half lines, one might think that perhaps if i.i.d. copies of the process $\{X_t: t \in E\}$ satisfied the CLT in $C(E)$, then the class of sets $\mathcal{C}$ of Theorem 3 would satisfy the $CLT(P)$. Our first example shows this fails, even if the process $X_t$ is Brownian motion on $[0,1]$ tied down at $t=0$. In this example the process fails condition (I) in Theorem 5 since $P(X_0=0)=1$.
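To make the failure of (I) explicit, note that $P(X_0=0)=1$ gives $F_0 = I_{[0,\infty)}$, so for every $\delta>0$
\[
\sup_{t \in [0,1]}|F_t(0)-F_t(-\delta)| \ge F_0(0)-F_0(-\delta) = 1,
\]
and no bound of the form $k\delta^{\beta}$ with $k<\infty$, $\beta \in (0,1]$ can hold as $\delta \downarrow 0$.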
To prove this we show that the necessary condition for $\mathcal{C}$ to satisfy the $CLT(P)$ appearing in (ii-b) of Theorem 1 fails. More precisely, since measurability is an issue here, the next lemma shows that there is a countable subclass $\mathcal{C}_{Q}$ of sets in $\mathcal{C}$ such that by Theorem 3 of \cite{talagrand-donsker-sets}, $\mathcal{C}_{Q} \notin CLT(P)$. Thus $\mathcal{C}$ fails the $CLT(P)$, as otherwise every subclass of $\mathcal{C}$ would also satisfy the $CLT(P)$. \begin{lem}\label{BrownianMotEx} Let $\mathcal{C}=\{C_{t,x}: 0\leq t \leq 1,~ -\infty<x< \infty\}$, where $C_{t,x}=\{z \in C[0,1]: z(t) \leq x\}$, and assume $\{X_t: t \in [0,1]\}$ is a sample continuous Brownian motion tied down at zero. Also, let $\mathcal{C}_{Q}$ denote the countable subclass of $\mathcal{C}$ given by $\mathcal{C}_{Q} = \{ C_{t,y} \in \mathcal{C}: t,y \in Q \}$, where $Q$ denotes the rational numbers. Then, for each integer $n \geq 1$, with probability one $$ \Delta^{\mathcal{C}_{Q}} (B_1,\cdots,B_n) =2^n, $$ where $\Delta^{\mathcal{C}_{Q}} (B_1,\cdots,B_n)= {\rm card}\{C\cap \{B_1,\cdots,B_n\}: C \in \mathcal{C}_{Q} \},$ and $B_1,\cdots,B_n$ are independent Brownian motions on $[0,1]$ starting at zero. \end{lem} \begin{proof} We show that with probability one the sets in $\mathcal{C}_{Q}$ pick out precisely the $k$ functions $\{B_{j_1},\cdots,B_{j_k}\}$, where $1\leq j_1< j_2<\cdots<j_k\leq n$. Let $$ {\bf u}=(u_1, \cdots,u_n) $$ where $u_{j_1}=u_{j_2}=\cdots=u_{j_k}=1$ and all other $u_j=2$. Then $||{\bf u}|| =(\sum_{j=1}^n u_j^2)^{1/2}=(4n-3k)^{1/2}.$ Now set ${\bf v}=(v_1,\cdots,v_n)$, where $$ v_{j_1}=v_{j_2}=\cdots=v_{j_k}=\frac{1}{2(4n-3k)^{1/2}} $$ and all other $v_j=\frac{1}{(4n-3k)^{1/2}}$. Then ${\bf v}= {\bf u}/(2||{\bf u}||)$ and $||{\bf v}||=1/2$. \bigskip Let $$ W(s)= \frac{ (B_1(s),\cdots, B_n(s))}{(2sLL(1/s))^{1/2}} $$ for $0<s\leq 1$.
Then the LIL implies with probability one that $$ \liminf_{s \downarrow 0} ||{\bf v} - W(s)||=0, $$ and hence with probability one there are infinitely many rational numbers $t \downarrow 0$ such that $$ C_{t,x(t)} \cap \{B_1,\cdots, B_n\} =\{B_{j_1},\cdots, B_{j_k}\}, $$ where $x(t) \in Q$ for $t \in Q$ and $$ |x(t) -\frac{3(2tLL(1/t))^{1/2}}{4(4n-3k)^{1/2}}| < \frac{(2tLL(1/t))^{1/2}}{16(4n-3k)^{1/2}}. $$ Since $k$ and the set $\{j_1,\cdots,j_k\}$ were arbitrary, and with probability one we can pick out the subset $\{B_{j_1},\cdots,B_{j_k}\}$, the lemma follows as the intersection of $2^n$ events of probability one has probability one. \end{proof} The failure of the CLT also holds for all sample continuous fractional Brownian motions $\{X_H(t): t \in [0,1]\}$ which are tied down at zero. The proof of this again depends on the law of the iterated logarithm for $n$ independent copies of this process at $t=0$, which then allows us to prove an analogue of the previous lemma. The LIL result at $t=0$ for a single copy follows, for example, from Theorem 4.1 of \cite{goodman-kuelbs-clustering}, and then one can extend that result to $n$ independent copies by classical proofs as in \cite{kuelbs-strong-convergence}. The details of this last step are lengthy, but at this stage are more or less routine in the subject, and hence are omitted. Of course, the CLT for i.i.d. copies of these processes is obvious as they are Gaussian. \subsection{A uniform CLT Example.} In the previous examples, when the distribution function $F_t$ of $X_t$ jumped, the oscillation of the processes at that point caused a failure of our CLT. Hence another possible idea is that if the process $\{X_t: t \in [0,1]\}$ is Lip-1 on $[0,1]$, then our CLT might hold. For example, this is true for the Lip-1 process $X_t=tU, t \in [0,1]$, where $U$ is uniform on $[0,1]$. Moreover, in this example the densities of $F_t$ are still unbounded near $t=0$.
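Indeed, for $X_t=tU$ with $U$ uniform on $[0,1]$ and $0<t\le 1$ we have
\[
F_t(y)=\Pr(tU \le y)=\frac{y}{t}, ~~0 \le y \le t,
\]
so $X_t$ has density $t^{-1}I_{[0,t]}(y)$, and these densities blow up as $t \downarrow 0$.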
To see this let $X_{t,j}, j=1,\cdots,n, t \in [0,1]$ be i.i.d. copies of $X_t=tU, t \in [0,1],$ and define $$ W_n(C_{t,y})=\frac{1}{\sqrt n} \sum_{j=1}^n [I(X_{(\cdot),j} \in C_{t,y}) -P(X_{(\cdot),j} \in C_{t,y})], $$ where $\mathcal{C}=\{ C_{t,y}: t \in [0,1], y \in \mathbb{R}\}$ and $C_{t,y}= \{z \in C[0,1]: z(t) \le y\}$. Therefore, $W_n(C_{t,y})=0$ for all $y \in \mathbb{R}$ when $t=0$, and also when $y/t \geq 1$. Moreover, define $\mathcal{G}=\{ (-\infty,r] : 0 \le r \le 1\}$, and set $$ \phi(I_{C_{t,y}})=I_{(-\infty,1]} ~\text{if}~ y/t>1,~ 0 < t \le 1, ~\text{or}~ y=0 ~\text{and}~ t=0, $$ $$ \phi(I_{C_{t,y}})=I_{(-\infty,0]} ~\text{if}~ y/t \le 0, ~\text{but not}~ y=0 ~\text{and}~ t=0, $$ and $$ \phi(I_{C_{t,y}})=I_{(-\infty,y/t]} ~\text{if}~ 0< y/t<1,~ 0< t \le 1. $$ Then $\phi(\mathcal{C})= \{I_{U \leq r}: 0 \le r \le 1\} \equiv \mathcal{G}$ and $\phi$ maps $L_2$ equivalence classes of $\mathcal{C}$ onto $\mathcal{G}$ with respect to the law of $\{X_t: t \in [0,1]\}$ for sets in $\mathcal{C},$ and the law of $U$ for sets in $\mathcal{G}$. Now $\mathcal{G}$ satisfies the $CLT(\mathcal{L}(U))$ by the classical empirical CLT, i.e. see Theorem 16.4 of \cite{billingsley}, and since $\phi$ preserves covariances we thus have that $W_n(C_{t,y})$ converges weakly to the centered Gaussian process $W(C_{t,y})= Y(\phi(C_{t,y}))$ on $\mathcal{C}$, where $Y((-\infty,s])=B(s)-sB(1)$ is the tied down Wiener process on $[0,1]$, i.e. $B(\cdot)$ is a Brownian motion. \subsection{A Lip-1 Example without the CLT.} In this example we see that the Lip-1 property for $\{X_t: t \in [0,1]\}$ is not always sufficient for our CLT. Here $X_0=0$, and for $0<t\leq 1$ we define $$ X_t=t \sum_{j=1}^{\infty} (\alpha_j(t)+2)I_{E_j}(t)U, $$ where (i) $E_j= (2^{-j},2^{-(j-1)}]$ for $j \geq 1$.
\medskip (ii) $\{\alpha_j(t): j \geq 1\}$ are independent random processes with $\alpha_j(\cdot)$ defined on $E_j$ such that for $j \ge 1$ $$ P(\alpha_j (t) = \sin(2 \pi 2^{j}t), t \in E_j)=1/2, $$ and $$ P(\alpha_j (t) = \sin (2 \pi 2^{j+1}t), t \in E_j)=1/2. $$ \medskip (iii) $U$ is a uniform random variable on $[3/2,2]$, independent of the $\{\alpha_j\}$. \medskip Since the $\alpha_j$'s are zero at the endpoints of the $E_j$ and we have set $X(0)= 0,$ it is easy to see $X(t)$ has continuous paths on $[0,1]$. Moreover, $X(t)$ is Lip-1 on $[0,1]$ and $X(t)$ has a density for each $t \in (0,1]$, but our CLT fails. The failure of the empirical CLT can be shown by verifying a lemma of the sort we have above for Brownian motion, and again we see the lack of uniformly bounded densities is a determining factor. \bigskip For each integer $n \geq 1,$ let $X_1,\cdots,X_n$ be independent copies of $X$, and again take $\mathcal{C}=\{C_{t,x}: 0\leq t \leq 1,~ -\infty<x< \infty\}$, where $C_{t,x}=\{z \in C[0,1]: z(t) \leq x\}$. Also, define $\mathcal{C}_{Q}$ as in Lemma 7. Then, we have the following lemma, and combined with the argument in Section 8.1 we see the empirical CLT fails when this $X(\cdot)$ is used. \bigskip {\bf Lemma 8.} For each integer $n \geq 1$, with probability one $$ \Delta^{\mathcal{C}_{Q}} (X_1,\cdots,X_n) =2^n, $$ where $\Delta^{\mathcal{C}_{Q}} (X_1,\cdots,X_n)= {\rm card}\{C\cap \{X_1,\cdots,X_n\}: C \in \mathcal{C}_{Q}\}.$ \bigskip {\bf Proof.} We show that with probability one the sets in $\mathcal{C}_{Q}$ pick out precisely the $k$ functions $\{X_{i_1},\cdots,X_{i_k}\}$, where $1\leq i_1< i_2<\cdots<i_k\leq n$.
If we write $$ X_i(t)= t \sum_{j=1}^{\infty} (\alpha_{i,j}(t)+2)I_{E_j}(t)U_i $$ where the $\{\alpha_{i,j}: j \geq 1\}$ are independent copies of $\{\alpha_j(t): j \geq 1\}$ and $\{U_i: i \geq 1\}$ are independent copies of $U$, independent of all the $\alpha_{i,j}$'s, then this can be arranged by taking $$ \alpha_{i,j}(t) = \sin (2 \pi 2^{j+1}t),i \in \{i_1,\cdots, i_k\}, $$ and $$ \alpha_{i,j}(t) = \sin (2 \pi 2^{j}t),i \in \{1,\cdots,n\}\cap \{i_1,\cdots, i_k\}^c, $$ provided we set $t=t_j= 2^{-j} + \frac{1}{4}(2^{-(j-1)} - 2^{-j}).$ The probability of this configuration on the interval $E_j$ is $1/2^n$, and hence, since the configurations over distinct intervals $E_j$ are independent, the Borel-Cantelli lemma implies that with probability one there are infinitely many (random) $\{t_j\downarrow 0\}$ such that $$ \alpha_{i_1,j}(t_j)=\cdots=\alpha_{i_k,j}(t_j)=0, $$ and $$ \alpha_{i,j}(t_j)=1 $$ for all other $i \in \{1,\cdots,n\}$. Thus with probability one there are infinitely many rational numbers $t \downarrow 0$ such that $$ C_{t,x(t)} \cap \{X_1,\cdots, X_n\} =\{X_{i_1},\cdots, X_{i_k}\}, $$ provided $$ x=x(t)=\frac{17t}{4}. $$ Of course, $x(t)$ is then also in $Q$, and since $k$ and the set $\{i_1,\cdots,i_k\}$ were arbitrary, and we can pick out $\{X_{i_1},\cdots,X_{i_k}\}$ with probability one using sets in $\mathcal{C}_{Q}$, the lemma follows as the intersection of $2^n$ events of probability one has probability one. \bigskip To see $X(t)$ is Lip-1 on $[0,1]$, observe that since the intervals $\{E_j:j \geq 1\}$ are disjoint, $X(t)$ is differentiable on their interiors, and an easy computation implies $$ \sup_{j\geq 1} \sup_{2^{-j}<t<2^{-(j-1)}} |X'(t)| \leq [ 4 \pi + 3]U. $$ Furthermore, $X(t)$ is continuous on $[0,1]$, so we can apply the mean value theorem on each of these intervals. If $0 < s<t \leq 1$ we either have $s,t \in E_j$ for some $j\geq 1$ or $s \in E_j$ and $t\in E_k$ for $j>k$.
If $s,t \in E_j$ for some $j \ge 1$, then since $t(\alpha_j(t)+2)U$ is differentiable on the interior of $E_j$ and continuous on $E_j$, the mean value theorem implies $$ |X(t) - X(s)| \leq (4\pi + 3)U|t-s|. $$ If $s \in E_j$ and $t \in E_k$ with $j>k$, then by applying the mean value theorem in each intervening interval we obtain \begin{align*} |X(t) - X(s)| &\leq |X(s)-X(2^{-(j-1)})| + \sum_{r=1}^{j-k-1} |X(2^{-(j-1-r)}) - X(2^{-(j-r)})| + |X(2^{-k})-X(t)|\\ &\leq (4\pi+3)U|t-s|. \end{align*} If $s=0$, then since $X(0)=0$, the proof is immediate. Hence $X(t)$ is Lip-1 as claimed, and the Lipschitz constant is bounded by $(4\pi +3)U$ with probability one. Furthermore, since $U$ is uniform on $[3/2,2]$ and independent of the $\{\alpha_j\}$, $X(t)$ has a density for each $t \in (0,1]$. \subsection{Variations of the L-condition and the CLT.} Here we produce examples where the class of sets $\mathcal{C}$, or more precisely the class of indicator functions given by $\mathcal{C}$, is $P$-pregaussian, and yet $\mathcal{C} \notin CLT(P)$. More importantly, they also satisfy the modified L-condition, i.e. we say $\{X_s: s \in E\}$ satisfies the modified L-condition if for all $t \in E$ and $\epsilon>0$ \begin{align}\label{modLcondition}{\Pr}^{*}(\sup_{\rho(s,t)\le \epsilon}|{F}_{t}(X_{s}) -{F}_{t}(X_{t})|>\epsilon^{2})\le L\epsilon^{2}. \end{align} Hence these examples also provide motivation for the use of the distributional transforms in the L-condition of (\ref{Lcondition}) used in Theorem 3. Notation for the examples in this subsection is as follows. Let $E=\{1,2,3,\cdots\}$ and assume $D(E) =\{z: z(t) =0 ~\text{or}~ 1, t \in E\}$ with $\mathcal{C}=\{C_{t,y}: t \in E, y \in \mathbb{R}\}$, where $C_{t,y}=\{z \in D(E): z(t)\le y\}$.
Then, since the functions in $D(E)$ take only the values zero and one, we have $$ \mathcal{C}= \mathcal{C}_0 \cup \mathcal{C}_1, $$ where $ \mathcal{C}_a=\{\tilde C_{t,a}: t \in E\}$ and for $t \in E, a=0,1$, $\tilde C_{t,a}=\{z \in D(E): z(t)=a\}$. Also, let $\Sigma$ denote the minimal sigma-field of subsets of $D(E)$ containing $\mathcal{C}$, and let $P$ denote the probability on $(D(E), \Sigma)$ such that $P(\tilde C_{t,1}) =p_t$ and the events $\{\tilde C_{t,1}: t \in E\}$ are independent events, i.e. $P$ is a product measure on the coordinate spaces of $D(E)=\{0,1\}^{\mathbb{N}}$ with the $t^{th}$ coordinate of $D(E)$ having the two point probability that puts mass $p_t$ on one, and $1-p_t$ on zero. \begin{prop} Let $\mathcal{C}$ be defined as above. Then, (i) $\mathcal{C}$ is $P$-pregaussian whenever $p_t= o((\log(t+2))^{-1})$ as $t \rightarrow \infty$. (ii) $\mathcal{C} \in CLT(P)$ if and only if for some $r>0$ $$ \sum_{t=1}^{\infty} (p_t(1-p_t))^r < \infty. $$ (iii) If $p_t= (\log (t+2))^{-2}$, and $\{H(t):t \in E\}$ consists of centered independent Gaussian random variables with $E(H(t)^2)= (\log (t+2))^{-\frac{3}{2}}$, then $\{X_s: s \in E\}$ satisfies the modified L-condition, $\mathcal{C}$ is $P$-pregaussian, and $\mathcal{C} \notin CLT(P)$. In particular, in view of Theorem 3, it does not satisfy the L-condition. \end{prop} \begin{proof}Since the events of $\mathcal{C}_0$ are complements of those in $\mathcal{C}_1$, it is easy to see that $\mathcal{C}_0 \in CLT(P)$ if and only if $\mathcal{C}_1 \in CLT(P),$ and since Theorem 3.8.1 of \cite{Dudley-unif-clt} implies finite unions of classes of sets that satisfy the $CLT(P)$ are in $CLT(P)$, we have $\mathcal{C} \in CLT(P)$ if and only if $\mathcal{C}_1 \in CLT(P)$. Therefore, since the events of $\mathcal{C}_1$ are independent, Theorem 3.9.1 in \cite{Dudley-unif-clt}, p.
122, implies that $\mathcal{C}_1 \in CLT(P)$ if and only if for some $r>0$ $$ \sum_{t=1}^{\infty} (p_t(1-p_t))^r < \infty. $$ Hence (ii) holds. Now the centered Gaussian process indexed by $\mathcal{C}_1$ is $\{G_P(C): C \in \mathcal{C}_1\}= \{G_P(\tilde C_{t,1}): t\in E\}$, and since the random variables $\{G_P(\tilde C_{t,1}): t\in E\}$ are mean zero and $E(G_P(\tilde C_{t,1})^2) = p_t(1-p_t)$, we have $\mathcal{C}_1$ is $P$-pregaussian provided $$ p_t=o((\log(t+2))^{-1}) ~\text{as}~ t \rightarrow \infty. $$ Similarly, taking complements we have $\{G_P(C): C \in \mathcal{C}_0\}= \{G_P(\tilde C_{t,0}): t\in E\}$ is also $P$-pregaussian under this condition for $p_t$, and hence $\mathcal{C}=\mathcal{C}_0 \cup \mathcal{C}_1$ is $P$-pregaussian whenever $p_t=o((\log(t+2))^{-1})$ as $t \rightarrow \infty$. Hence (i) holds. To verify (iii) we take $p_t= (\log (t+2))^{-2}$, and $\{H(t):t \in E\}$ to be centered independent Gaussian random variables with $E(H(t)^2)= (\log (t+2))^{-\frac{3}{2}}$. If $ \rho^2(s,t)=E((H(s)-H(t))^2)$, then, for $s \neq t,$ $$ \rho^2(s,t)= (\log (s+2))^{-\frac{3}{2}}+ (\log (t+2))^{-\frac{3}{2}} \ge \max \{ (\log (s+2))^{-\frac{3}{2}}, (\log (t+2))^{-\frac{3}{2}}\}. $$ In addition, we have $|F_t(X_s)-F_t(X_t)|= 0$ if $X_s=X_t=0$ or $X_s=X_t=1$, and $|F_t(X_s)-F_t(X_t)|= p_t$ if $X_t \neq X_s$. Therefore, for all $t \in E$ fixed and $\epsilon>0$ we have $$ {\Pr}^{*} (\sup_{\{s:\rho(s,t)\le \epsilon\}} |F_t(X_s)-F_t(X_t)|>\epsilon^2)= 0 ~\text{if}~ p_t \leq \epsilon^2, $$ and $$ {\Pr}^{*} (\sup_{\{s:\rho(s,t)\le \epsilon\}} |F_t(X_s)-F_t(X_t)|>\epsilon^2) \leq {\Pr}^{*}(\sup_{\{s:\rho(s,t)\le \epsilon\}} I_{X_t \neq X_s} >0 ) ~\text{for}~ p_t > \epsilon^2. $$ Of course, $E$ countable makes the outer probabilities in the above ordinary probabilities, but for simplicity we retained the outer probability notation used in (\ref{weakLcondition}) and (\ref{Lcondition}).
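For the record, the values of $|F_t(X_s)-F_t(X_t)|$ used above follow from the explicit form of the coordinate distribution functions: under $P$ the coordinate $X_t$ takes the value one with probability $p_t$, so
\[
F_t(y)= 0 ~\text{for}~ y<0, \qquad F_t(y)=1-p_t ~\text{for}~ 0\le y<1, \qquad F_t(y)=1 ~\text{for}~ y\ge 1,
\]
and hence $|F_t(0)-F_t(1)|=p_t$.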
Now $\rho(s,t) \leq \epsilon$ implies $$ \max \{ (\log (s+2))^{-\frac{3}{4}}, (\log (t+2))^{-\frac{3}{4}}\} \le \epsilon. $$ Thus if $ p_t = (\log (t+2))^{-2}> \epsilon^2 $ we have $\{\sup_{\{s:\rho(s,t)\le \epsilon\}} I_{X_t \neq X_s} >0 \} = \emptyset$. Combining the above we have for each fixed $t \in E$ and $\epsilon>0$ that $$ {\Pr}^{*} (\sup_{\{s:\rho(s,t)\le \epsilon\}} |F_t(X_s)-F_t(X_t)|>\epsilon^2)= 0, $$ and hence the modified L-condition for $\{X_s: s \in E\}$ holds. Thus (iii) follows. \end{proof} \section{Appendix on Talagrand's Continuity Result for Gaussian Processes.} The following result appears in \cite{talagrand-generic} without proof. There is a curious wording at the end of the first sentence of its statement there, which after closer inspection suggests that cutting and pasting led to something being omitted. Hence we include a proof, where we have added the necessary assumption of total boundedness of the parameter space to the statement of the theorem. \begin{thm} \label{talagrand-generic}Let $\{X_t: t \in T\}$ be a centered Gaussian process with $L_2$ distance $d(s,t), s,t \in T$, where $T$ is countable and $(T,d)$ is totally bounded. Then, the following are equivalent: (i) $X_t$ is uniformly continuous on $(T, d)$ with probability one. (ii) We have \[ \lim_{\epsilon \rightarrow 0} E(\sup_{d(s,t) \le \epsilon}(X_s - X_t)) = 0. \] (iii) There exists an admissible sequence of partitions of $T$ such that \[ \lim_{k \rightarrow \infty} \sup_{t \in T} \sum_{n \geq k} 2^{n/2}\Delta(A_n(t)) =0. \] \end{thm} \begin{proof} First we will show (i) and (ii) are equivalent. If (ii) holds, then by Fatou's lemma we have $$ 0 = \lim_{n \rightarrow \infty} E(\sup_{d(s,t) \le 1/n}|X_s - X_t|) \ge E( \liminf_{n \rightarrow \infty} \sup_{d(s,t) \le 1/n}|X_s - X_t|).
$$ Thus, with probability one $$ \liminf_{n \rightarrow \infty} \sup_{d(s,t) \le 1/n}|X_s - X_t|=0, $$ and since the random variables $ \sup_{d(s,t) \le 1/n}|X_s - X_t|$ decrease as $n$ increases, this implies with probability one $$ \lim_{n \rightarrow \infty} \sup_{d(s,t) \le 1/n}|X_s - X_t|=0, $$ which implies (i). If we assume (i), then since $(T,d)$ is assumed totally bounded, we have $$ Z=\sup_{t \in T} |X_t| < \infty $$ with probability one, and the Fernique-Landau-Shepp Theorem implies $Z$ is integrable. Since $$ \sup_{d(s,t) \le \epsilon}|X_s- X_t| \leq 2Z, $$ and (i) implies $$ \lim_{\epsilon \rightarrow 0} \sup_{d(s,t)\le \epsilon}|X_s- X_t| =0 $$ with probability one, the dominated convergence theorem implies (ii). Thus (i) and (ii) are equivalent. Now we assume (i) and (ii), and choose $\epsilon_{k}\downarrow 0$ such that \[\sup_{s}\mathbb{E} \sup_{\{t: d(t,s)\le \epsilon_{k}\}}X_{t} \le\sup_{s}\mathbb{E}\sup_{\{t: d(t,s)\le \epsilon_{k}\}}(X_{t}-X_{s})\le\mathbb{E} \sup_{d(s,t)\le \epsilon_{k}}(X_{t}-X_{s}) \le 2^{-k}. \] Since $(T,d)$ is totally bounded, and (i) holds, by Theorem 2.1.1 of \cite {talagrand-generic} there exists an admissible sequence of partitions $\{\mathcal{\tilde A}_n: n \ge 0\}$ of $(T,d)$ such that for a universal constant $L$ we have $$ \frac{1}{2L} \sup_{t \in T} \sum_{n \geq 0} 2^{n/2}\Delta(\tilde A_n(t)) \le E(\sup_{t \in T} X_t). $$ Now choose $\{n_{k}: k \ge 1\}$ to be a strictly increasing sequence of integers such that $ n_1>4$ and \begin{align} 2^{\sum_{2 \le j\le k}1/\epsilon_{j}^{2}}\le n_{k-1}. \end{align} Based on the $n_k$'s we define an increasing sequence of partitions $\mathcal{B}_{n}$. For $0 \le n\le n_{1}$ we let $\mathcal{B}_{n}=\mathcal{\tilde A}_{n}$. For $n_{1}<n\le n_{2}$ we proceed as follows. First we choose a maximal subset $\{s_{1},\ldots, s_{N(\epsilon_{2})}\}$ of $(T,d)$ for which $d(s_{i},s_{j})\ge \epsilon_{2}.$ By Sudakov's inequality we know that $N(\epsilon_{2})\le 2^{1/\epsilon_{2}^{2}}$.
To define the partitions for $n_1<n \le n_2$ we next consider the partition of $T$ formed by the sets \begin{align} C_j= B(s_j,\epsilon_2) \cap(\cup_{k=1}^{j-1}B(s_k, \epsilon_2))^{c}, \quad 1 \le j \le N(\epsilon_2), \end{align} where the sets $B(s,\epsilon)$ are $\epsilon$-balls centered at $s$. Then by Theorem 2.1.1 of \cite{talagrand-generic} for every integer $1 \le j \le N(\epsilon_2)$ there exists an admissible sequence of partitions for $(C_j,d)$, which we denote by $\mathcal{B}_{n_1,n}^{s_j}$, such that \begin{align*}&2^{-2}\ge \mathbb{E}\sup_{\{t\in C_{j}\}}X_t \ge\frac{1}{2L} \sup_{{\{t\in C_{j}\}}}\sum_{n\ge 0}2^{n/2}\Delta(B^{s_{j}}_{n_{1},n}(t)) \\ &\\ &\ge \frac{1}{2L} \sup_{{\{t\in C_{j}\}}}\sum_{n_{1}<n\le n_{2}}2^{n/2}\Delta(A_{n_{1}}(t)\cap B^{s_{j}}_{n_{1},n}(t)).\\ \end{align*} Since the sets $C_{j}$ form a partition of $T$, if $B_{n_{1},n}$ is one of the sets, $B^{s_{j}}_{n_{1},n}(t)$, then \begin{align*} 2^{-2}\ge\frac{1}{2L} \sup_{t\in T}\sum_{n_{1}<n\le n_{2}}2^{n/2}\Delta(A_{n_{1}}(t)\cap B_{n_{1},n}(t)), \end{align*} and we define the increasing sequence of partitions $\mathcal{B}_{n_{1},n}$ to be all sets of the form $A_{n_{1}}(t)\cap B_{n_{1},n}(t), $ where $t \in T$ and $B_{n_1,n} \in \mathcal{B}^{s_{j}}_{n_{1},n}$ for some $j \in [1, N(\epsilon_2)]$. Furthermore, since the $C_j$'s are disjoint, for $n_1<n \le n_2$ we have \begin{align} {\rm Card}(\mathcal{B}_{n_1,n}) \le 2^{2^{n_{1}}}2^{2^{n}}N(\epsilon_{2})\le 2^{2^{n+1}}2^{1/\epsilon_{2}^{2}}\le 2^{2^{n+1}} n_{1}\le 2^{2^{n+1}}2^{2^{n}}=2^{2^{n+2}}, \end{align} and for $n_1<n \le n_2$ we define $\mathcal{B}_n = \mathcal{B}_{n_1,n}$.
Iterating what we have done for $n_1<n\le n_2$, we have increasing partitions $\mathcal{B}_{n_{k-1},n},~ n_{k-1}<n \le n_k,$ for which \begin{align*}& (2L)2^{-k}\ge \sup_{t}\sum_{n\ge 0} 2^{n/2}\Delta(B_{n_{k-1},n}(t))\\ &\\ &\ge\sup_{t}\sum_{n_{k-1}<n\le n_{k}}2^{n/2}\Delta(B_{n_{k-1},n}(t)\cap B_{n_{k-2},n_{k-1}}(t)\cap\cdots \cap B_{n_{1},n_{2}}(t)\cap A_{n_{1}}(t)), \end{align*} and for $n_{k-1}<n\le n_k$ we define $\mathcal{B}_n=\mathcal{B}_{n_{k-1},n}$. Therefore, we now have an increasing sequence of partitions $\{\mathcal{B}_n: n \ge 0\}$ such that \begin{align*}&(2L)\sum_{k\ge r}2^{-k}\ge \sum_{k\ge r}\sup_{t}\sum_{n_{k-1}<n\le n_{k}}2^{n/2}\Delta(B_{n_{k-1},n}(t)\cap( \cap_{j=2}^{k-1} B_{n_{j-1},n_{j}}(t))\cap A_{n_{1}}(t))\\ &\\ &\ge\sup_{t}\sum_{k\ge r}\sum_{n_{k-1}<n\le n_{k}}2^{n/2}\Delta(B_{n_{k-1},n}(t)\cap ( \cap_{j=2}^{k-1} B_{n_{j-1},n_{j}}(t))\cap A_{n_{1}}(t)), \end{align*} and, letting $B_n(t)$ denote the generic set of $\mathcal{B}_n$ containing $t$, we have \begin{align} (2L)\sum_{k\ge r}2^{-k}\ge \sup_{t}\sum_{k\ge r}\sum_{n_{k-1}<n\le n_{k}}2^{n/2}\Delta(B_{n}(t)). \end{align} We now count the number of elements in each partition. Since (1) holds, the partitions $\mathcal{B}_{n_{k-1},n}^{s_j}$ are assumed admissible, and the $C_j$'s used at the subsequent iterations are always disjoint, we have for $n_{k-1}<n\le n_{k}$ that \begin{align*}{\rm Card}(\mathcal{B}_{n_{k-1},n})\le 2^{2^{n_{1}}}[\prod_{j=2}^{k-1}2^{2^{n_{j}}}N(\epsilon_{j})]2^{2^{n}}N(\epsilon_{k}) \le 2^{\sum_{1 \le j\le k-1}2^{n_{j}}+2^n}n_{k-1}\le 2^{2^{n+2}}. \end{align*} Given the increasing sequence of partitions $\{\mathcal{B}_n: n \ge 0\}$, we now define the partitions $\mathcal{A}_n$ to be the single set $T$ for $n=0,1$ and $\mathcal{A}_n = \mathcal{B}_{n-2}$ for $n \ge 2$. Since we have ${\rm Card}(\mathcal{B}_n) \le 2^{2^{n+2}}$ for $n\ge 0$ we thus have that the $\mathcal{A}_n$'s are admissible and using (4) above they satisfy (iii). \end{proof} \bibliographystyle{amsalpha}
\section{Introduction} In the current standard cosmological model, baryonic matter and cold dark matter together contribute only about a third of the total energy density of the Universe. One of the most important puzzles in cosmology is to account for the remaining two-thirds of the energy, which is required to render the Universe approximately spatially flat, as demanded by recent observations of the Cosmic Microwave Background \citep[CMB;][]{deBer00}. The existing set of cosmological data -- consisting principally of measurements of the CMB, of the local clustering of galaxies, and of distant supernovae -- can be understood by invoking the existence of the `cosmological constant' $\Lambda$ originally envisaged by Einstein, such that it contributes a present-day fractional energy density $\Omega_\Lambda = 0.73 \pm 0.04$ \citep{Sper03}. The remarkable consequence of this model is that $\Lambda$ acts as a `repulsive gravity', driving the rate of cosmic expansion into a phase of acceleration. Equally surprisingly, this acceleration has been observed reasonably directly by an apparent dimming of distant supernovae \citep{Riess98,Perl99}. The cosmological constant component is naturally attributed to the inherent energy density of the vacuum. However, the `expected' quantum-mechanical Planck energy density is larger than that required to account for the accelerating rate of cosmic expansion by an exceptionally large dimensionless factor of order $c^5\, G^{-1} \hbar^{-1} H_0^{-2}$ $\sim 10^{122}$. This profound difficulty has motivated the development of alternative models for the `dark energy' -- i.e.\ the causative agent of accelerating expansion. Many of these models, such as `quintessence' \citep{RP88}, feature a {\it dynamic} component of dark energy whose properties evolve with time (e.g.\ a rolling scalar field $\phi$). 
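The dimensionless factor of order $10^{122}$ quoted above is easily verified. The sketch below is our own illustration, not part of the analysis: it assumes $h = 0.7$ and rounded values of the physical constants, and simply evaluates the combination $c^5\, G^{-1} \hbar^{-1} H_0^{-2}$:

```python
import math

# Evaluate the dimensionless ratio c^5 G^-1 hbar^-1 H0^-2, i.e. the Planck
# energy density relative to the observed dark energy density scale.
c = 2.998e8        # speed of light [m s^-1]
G = 6.674e-11      # Newton's constant [m^3 kg^-1 s^-2]
hbar = 1.055e-34   # reduced Planck constant [J s]
Mpc = 3.086e22     # one megaparsec [m]
H0 = 70.0e3 / Mpc  # Hubble constant for h = 0.7 [s^-1]

ratio = c**5 / (G * hbar * H0**2)
print(f"c^5 G^-1 hbar^-1 H0^-2 ~ 10^{math.log10(ratio):.0f}")
```

For $h = 0.7$ this reproduces the famous factor of order $10^{122}$.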
These predictions are commonly characterized in terms of the dark energy equation-of-state $w(z) = P/\rho$, relating its pressure $P$ to its energy density $\rho$ (in units where the speed of light $c = 1$). For the cosmological constant model, $w(z) = -1$ at all epochs. These competing models for the dark energy are essentially untested, because the current cosmological dataset is not adequate for delineating variations in the function $w(z)$ with redshift, which is the essential requirement for distinguishing quintessential cosmologies from a cosmological constant. New experiments are demanded, which must be able to recover the characteristics of dark energy with unprecedented precision, including any cosmic evolution. The study of the nature of dark energy is the current frontier of observational cosmology, and has been widely identified as having profound importance for physics as a whole \citep{QuarkCosmos}. Given the fundamental importance of accelerating cosmic expansion and the possibility of confounding systematic effects in the supernova data, other precision probes of the dark energy model are clearly desirable. In \cite{BG03}, hereafter Paper I, we suggested that the small-amplitude `baryonic oscillations', which should be present in the power spectrum of the galaxy distribution on large scales ($\gtrsim 30$ Mpc), could be used as a `standard cosmological ruler' to measure the properties of dark energy as a function of cosmic epoch, provided that a sufficiently large high-redshift ($z \gtrsim 1$) galaxy survey was available \citep[see also][]{SE03,Linder03,HH03}. The characteristic sinusoidal ruler scale encoded by the baryonic oscillations is the sound horizon at recombination, denoted $s$. This length scale is set by straightforward linear physics in the early Universe and its value is determined principally by the physical matter density ($\rho_m \propto \Omega_m h^2$).
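For orientation, the value of $s$ implied by typical parameters can be estimated from the approximate fitting formula of \cite{EH98}; this sketch is our own illustration (the analysis in this paper uses the more accurate \cite{EB99} expressions):

```python
import math

# Approximate sound horizon at recombination [Mpc], using the fitting formula
# of Eisenstein & Hu (1998); illustrates the dependence on Omega_m h^2 and
# Omega_b h^2 (the more accurate Efstathiou & Bond 1999 formulae are used in
# the actual analysis).
def sound_horizon(om_h2, ob_h2):
    return 44.5 * math.log(9.83 / om_h2) / math.sqrt(1.0 + 10.0 * ob_h2**0.75)

# Fiducial parameters of this paper: Omega_m = 0.3, h = 0.7, Omega_b/Omega_m = 0.15
s = sound_horizon(0.3 * 0.7**2, 0.045 * 0.7**2)
print(f"s = {s:.1f} Mpc")  # roughly 150 Mpc, i.e. ~105 h^-1 Mpc
```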
The cosmological uncertainty in this parameter combination is ameliorated by an advantageous cancellation of pre-factors of $\Omega_m h^2$ in the ratio of low-redshift distances to $s$ (see Paper I). The residual dependence of $s$ on $\Omega_m$ and other cosmological parameters is small \citep{EW04}, rendering observations of the acoustic oscillations a powerful geometric probe of the cosmological model. This acoustic signature has recently been identified at low redshift in the distribution of Luminous Red Galaxies in the Sloan Digital Sky Survey \citep{Dan05} \citep[see also][]{Cole05}. Although these data are insufficient for precise measurements of the dark energy, this analysis represents a striking validation of the technique. The challenge now is to create larger and deeper surveys. In Paper I we demonstrated that, given a galaxy redshift survey at $z \sim 1$ mapping a total cosmic volume several times greater than that of the Sloan main spectroscopic survey in the local Universe (we define $V_{\rm Sloan} \equiv 2 \times 10^8 \, h^{-3}$ Mpc$^3$), the equation-of-state of the dark energy could be recovered to a precision $\Delta w \approx 0.1$ (assuming a model in which $w$ is constant). The precision of this experiment scales with cosmic volume $V$ in a predictable manner (roughly in accordance with $1/\sqrt{V}$) and it is not unfeasible to imagine an ultimate `all-sky' high-redshift spectroscopic survey within $\sim 20$ years. We believe that this standard ruler technique would powerfully complement proposed future supernova searches such as the SNAP project \citep[e.g.][ {\tt http://snap.lbl.gov}]{Ald02} permitting, for example, direct tests of the `reciprocity relation' which predicts that the true luminosity distance $d_L(z)$ (measured by a standard candle) is exactly the same as the angular diameter distance $d_A(z)$ (measured by a standard ruler), up to a factor of $(1+z)^2$ \citep{BK04}. 
Furthermore, we argue that it is {\it not yet conclusively proven} that the dimming of supernovae caused by cosmic acceleration can be distinguished with sufficient accuracy from other possible systematic effects, such as changes in the intrinsic properties of supernovae with galactic environment (e.g.\ metallicity), dust extinction effects, population drift and the difficulties of sub-percent-level photometric calibration (including K-corrections) across wide wavelength ranges. It is therefore important to pursue alternative precise high-redshift probes of the cosmological model. A particular advantage of the baryonic oscillations method is that it is probably free of major systematic errors, assuming that galaxy biasing on large scales is not pathological. In Paper I we developed a Monte Carlo, semi-empirical approach to transforming synthetic galaxy redshift surveys into constraints on the value of $w$. This contrasts with, and complements, other analytical approaches to the problem such as those using Fisher-Information techniques \citep[e.g.][]{SE03,HH03}. In this present study we extend and refine our methodology to the more general case where we allow the equation-of-state of dark energy to have a dependence on redshift, $w(z) = w_0 + w_1 z$, and we recover joint constraints upon $(w_0,w_1)$. Furthermore, we apply the standard ruler independently to the radial and tangential components of the power spectrum, which has the effect of producing separate measurements of the co-moving angular-diameter distance to the effective redshift of each survey slice, $x(z)$, and the rate of change of this quantity with redshift, $x'(z) \equiv dx/dz$. We note for clarity that in a flat Universe: $$ x(z) = D_A(z) (1+z) $$ $$x'(z) = c/H(z)$$ where $D_A$ is the physical angular diameter distance and $H(z)$ is the Hubble factor (Universal expansion rate) at redshift $z$. 
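Both quantities are straightforward to evaluate numerically for the dark energy models considered in this paper. The sketch below is our own illustration (flat universe, with the $z_{\rm cut}=2$ cut-off on $w(z)$ described in Section~\ref{secmeth}), integrating the Friedmann equation on a grid:

```python
import numpy as np

C_KMS = 299792.458  # speed of light [km s^-1]

def w_de(z, w0, w1, zcut=2.0):
    """Equation of state w(z) = w0 + w1*z, frozen above z = zcut."""
    return w0 + w1 * np.minimum(z, zcut)

def hubble(z, om=0.3, h=0.7, w0=-1.0, w1=0.0):
    """H(z) in km/s/Mpc for a flat universe with the w(z) model above."""
    z = np.atleast_1d(np.asarray(z, dtype=float))
    zg = np.linspace(0.0, max(z.max(), 1e-8), 4001)
    # rho_DE(z)/rho_DE(0) = exp( 3 * int_0^z [1 + w(z')] / (1 + z') dz' )
    integ = 3.0 * (1.0 + w_de(zg, w0, w1)) / (1.0 + zg)
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (integ[1:] + integ[:-1]) * np.diff(zg))))
    f_de = np.interp(z, zg, np.exp(cum))
    return 100.0 * h * np.sqrt(om * (1.0 + z)**3 + (1.0 - om) * f_de)

def x_of_z(z, **cosmo):
    """Co-moving distance x(z) = int_0^z c dz'/H(z') in Mpc (trapezoid rule)."""
    zg = np.linspace(0.0, z, 4001)
    xp = C_KMS / hubble(zg, **cosmo)
    return float(np.sum(0.5 * (xp[1:] + xp[:-1]) * np.diff(zg)))

# Fiducial flat LambdaCDM: x(1) is ~3300 Mpc and x'(1) = c/H(1) is ~2430 Mpc
print(x_of_z(1.0), C_KMS / hubble(1.0)[0])
```

Perturbing $(w_0,w_1)$ about the fiducial model shifts $x$ and $x'$ at the few-per-cent level, which is the sensitivity probed by the wavescale fits of Section~\ref{secmeth}.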
We also examine in more detail the effect of uncertainties in cosmological parameters such as the matter density and the Hubble parameter. In addition, we present an in-depth discussion of the observational requirements and prospects for realistic surveys, based upon both spectroscopic redshifts and photometric redshifts. The plan of this paper is as follows: in Section~\ref{secmeth} we give a very detailed description of our methodology, greatly expanding on Paper I, and the approximations we made to allow us to simulate a range of large surveys, and in Section~\ref{secdetect} we quantify the size of redshift survey required to detect the oscillatory component of the power spectrum. The recovered constraints on $(w_0, w_1)$ for various simulated surveys are presented in Section~\ref{secmeas} for spectroscopic surveys and in Section~\ref{secphoto} for photometric-redshift surveys. Finally, in an Appendix we present the results of a very large computation designed to test the effects of our approximations. \section{Monte Carlo Methodology} \label{secmeth} Our methodology for simulating future galaxy redshift surveys and assessing their efficacy for measuring acoustic oscillations was summarized in Paper I. In this Section we provide a more detailed account of our procedures. In addition, we have implemented various extensions to the methodology of Paper I: \begin{enumerate} \item We now fit separate acoustic oscillation scales in the tangential and radial directions. In Paper I we fitted to the angle-averaged power spectrum, effectively assuming that the shifts in the apparent radial and transverse scales as the cosmology was perturbed about the fiducial value were the same. This is only approximately true for $z\sim 1$ and breaks down at low and high redshift (see Figure~5 of Paper~I). In the new approach we make use of the different redshift dependencies of radial and transverse scales to provide extra cosmological constraints, increasing the signal-to-noise ratio.
Specifically, the tangential scale is controlled by the co-moving distance to the effective redshift of the survey, $x(z)$, and the radial scale is determined by the rate of change of this quantity with redshift, $x'(z) \equiv dx/dz = c/H(z)$, where $H(z)$ is the Hubble constant measured by an observer at redshift $z$. This is useful because $H(z)$ is directly sensitive to the dark energy density at the redshift in question. \item We allow the equation-of-state of dark energy $w(z)$ to have a dependence on redshift, $w(z) = w_0 + w_1 z$, and we recover joint constraints upon $(w_0,w_1)$. We do not claim that this expression faithfully describes dark energy in the real Universe. In particular, models with $w_1 > 0$ become unphysical at high redshift unless we impose a cut-off for the evolving term: we assume that $w(z > z_{\rm cut}) = w_0 + w_1 z_{\rm cut}$ where $z_{\rm cut} = 2$, and we ensure that matter dominates at high redshift, i.e. $w_1 \le (-w_0)/z_{\rm cut}$. However, usage of the equation $w(z) = w_0 + w_1 z$ facilitates comparison with other work such as \cite{SE03} who use the same parameterization, empirically describes a range of dark energy models \citep{WA02}, and permits a first disproof of the cosmological constant scenario, if $w_1 \ne 0$ and/or $w_0 \ne -1$. We note that the alternative parameterization $w(z) = w_0 + w_a (1-a)$, where $a=(1+z)^{-1}$ is the usual cosmological scale factor, encodes a more physically reasonable behaviour at high redshift \citep{Linder02}. In this paper we compute $(w_0,w_a)$ constraints for one case. \item We place Gaussian priors upon the other relevant cosmological parameters, the matter density $\Omega_{\rm m}$ and the Hubble parameter $h = H(z=0)/(100$ km s$^{-1}$ Mpc$^{-1})$, rather than assuming that their values are known precisely. \end{enumerate} \subsection{Method Summary} \label{secsumm} As described in Paper I, the philosophy of our analysis is to maintain maximum independence from models. 
When measuring the acoustic oscillations from simulated data, we divide out the overall shape of the power spectrum via a smooth `reference spectrum'. We then fit a simple empirically-motivated decaying sinusoid to the remaining oscillatory signal. Hence we do not utilize any information encoded by the shape of the power spectrum. This shape may be subject to smooth broad-band systematic tilts induced by such effects as complex galaxy biasing schemes, quasi-linear growth of structure, a running primordial spectral index, and redshift-space distortions. However, it would be very surprising if any of these phenomena introduced {\it oscillatory} features in $k$-space liable to obscure the distinctive acoustic peaks and troughs. We note that any model where the probability of a galaxy forming depends only on the local density field leads to linear bias on large scales \citep{Coles93,SW98}. Furthermore, linear biasing is observed to be a very good approximation on large scales \citep[e.g.][]{PeaDod,Cole05}. This is in agreement with numerical simulations of galaxy formation which show that galaxies and/or massive haloes faithfully reproduce the acoustic oscillations \citep{Spr05,Ang05}. Of course, a full power spectrum template should be fitted to real data as well: our aim here is to derive robust, conservative lower limits to the efficacy of baryon oscillations experiments, using only the information contained in the oscillations. An important point is that the fractional error in the measured galaxy power spectrum, $\sigma_P/P$, after division by a smooth overall fit, is independent of the absolute value of $P(k)$ if the error budget is dominated by cosmic variance rather than by shot noise. In this sense, an incorrect choice of the underlying model power spectrum in our simulations does not seriously affect the results presented here.
Having secured a detection of the acoustic signature, if one is then prepared to model the underlying power spectrum -- correcting for such systematic effects as non-linear gravitational collapse, redshift-space distortions and halo bias -- then more accurate constraints on cosmological parameters would follow \citep[see][]{Dan05}. In summary (see Section \ref{secsteps} for a more detailed account): we generate a model matter power spectrum in the linear regime using the fitting formulae of \cite{EH98}, assuming a primordial spectral index $n = 1$ (as suggested by inflationary models) and fiducial cosmological parameters $\Omega_{\rm m} = 0.3$, $h = 0.7$ and baryon fraction $\Omega_{\rm b}/\Omega_{\rm m} = 0.15$, broadly consistent with the latest determinations \citep[e.g.][]{Sper03}. In Paper~I we showed that the cosmological constraints are fairly insensitive to the exact value of $\Omega_{\rm b}/\Omega_{\rm m}$ (Figures~7--8 in Paper~I; increasing $\Omega_{\rm b}$ results in a somewhat higher baryonic oscillation amplitude, hence a more precise measure of the standard ruler). We assume that the shape of $P(k)$ does not depend on the dark energy component, and take the $z = 0$ normalization $\sigma_8 = 1$. The model $P(k)$ is then used to generate Monte Carlo realizations of a galaxy survey covering a given geometry, deriving redshifts and angular co-ordinates for the galaxies using a fiducial flat cosmological constant model. The realizations are then analyzed for a grid of assumed dark energy models. $P(k)$ is measured using a Fast Fourier Transform (FFT) up to a maximum value of $k$ determined by a conservative estimate of the extent of the linear regime at the redshift in question (see Paper I, Figure 1). The measured power spectrum is fitted with a decaying sinusoid with the `wavelength' as a free parameter. 
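The character of this fit is easy to demonstrate on synthetic data. The toy sketch below is our own illustration with purely illustrative numbers: it uses the angle-averaged form of Paper I rather than the two-dimensional fit introduced later, takes the reference-spectrum division as already done, and recovers the sinusoidal wavescale by brute-force least squares:

```python
import numpy as np

rng = np.random.default_rng(1)

def model(k, A, lam):
    """Empirical damped sinusoid around unity, as in Paper I (k in h/Mpc)."""
    return 1.0 + A * k * np.exp(-(k / 0.1)**1.4) * np.sin(2.0 * np.pi * k / lam)

# Synthetic 'P(k)/P_ref' measurements with a known acoustic wavescale
k = np.linspace(0.02, 0.3, 60)
lam_true = 0.06                  # illustrative wavescale ~ 2*pi/s in h/Mpc
data = model(k, 1.4, lam_true) + rng.normal(0.0, 0.02, k.size)

# Brute-force least squares over (A, lambda): robust against the secondary
# minima of the oscillatory chi^2 surface in lambda
A_grid = np.linspace(0.5, 2.5, 81)
lam_grid = np.linspace(0.04, 0.09, 201)
chi2 = np.array([[np.sum((data - model(k, A, lam))**2) for lam in lam_grid]
                 for A in A_grid])
iA, ilam = np.unravel_index(np.argmin(chi2), chi2.shape)
print(f"fitted wavescale = {lam_grid[ilam]:.4f} h/Mpc (true value {lam_true})")
```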
By comparing the fiducial wavescale (determined using the values of $\Omega_{\rm m} h^2$ and $\Omega_{\rm b} h^2$ in conjunction with a standard fitting formula, e.g. \cite{EB99}) with the distribution of fitted wavescales across the Monte Carlo realizations, we can reject each assumed dark energy model with a derivable level of significance. We assume a flat universe (in Section~\ref{secmethlik}, Figure \ref{figcurvgen} below we indicate how our dark energy measurements weaken with declining knowledge of $\Omega_{\rm k}$). Throughout this paper we ensure that our model galaxy surveys contain sufficient objects that the contribution of shot noise to the error in the power spectrum is sub-dominant to that of cosmic variance. In an analytical treatment \citep[e.g.][]{Teggers97}, the relative contributions of shot noise and cosmic variance can be conveniently expressed in terms of the quantity $n \times P$, where $n$ is the typical number density of galaxies in the survey volume and $P$ is the galaxy power spectrum evaluated at some typical scale measured by the survey. Analytically, the errors due to shot noise and to cosmic variance are equal when $n \times P = 1$. For the simulations described in this paper, we uniformly populated the survey volume with sufficient galaxies that $n \times P = 3$, where $P$ is evaluated at a characteristic scale $k = 0.2 \, h$ Mpc$^{-1}$. The surface density of galaxies required to achieve this condition is illustrated in Figure \ref{fignp}. \begin{figure}[htbp] \begin{center} \epsfig{file=fig1.ps,angle=-90,width=7cm} \end{center} \caption{The surface density of galaxies per unit redshift, $dN/dz$, required to achieve power spectrum shot noise levels $n \times P = 1$ (dashed line; equal error contribution due to shot noise and to cosmic variance) and $n \times P = 3$ (solid line; adopted in our simulations). 
The power spectrum $P$ is evaluated at a `typical' scale $k = 0.2 \, h$ Mpc$^{-1}$; at zero redshift this value is taken to be $P = 2500 \, h^{-3}$ Mpc$^3$ (and is scaled to higher redshifts using the square of the linear growth factor $D_1(z)$). If the galaxies possess a linear bias $b$, then the amplitude of their power spectrum scales as $b^2$ and a proportionately lower surface density is required. For $b=1$ galaxies in a redshift slice of width $\Delta z = 0.2$ at $z=1$, a density $\approx 1800$ deg$^{-2}$ is required to achieve $n \times P = 3$.} \label{fignp} \end{figure} \subsection{Detailed fitting methodology} \label{secsteps} Here we outline the Monte-Carlo approach we have implemented in our code. In fact for most of our analysis we utilized a simple approximation as explained in Section \ref{secstream}: to be specific we omitted steps 8 and 9 below, which are very expensive in computational resources (we test this approximation in the Appendix). \begin{enumerate} \item A fiducial cosmology is chosen for the simulation: for example $(\Omega_{\rm m},h,w_0,w_1) = (0.3,0.7,-1,0)$. A fiducial baryon fraction is selected ($\Omega_{\rm b}/\Omega_{\rm m} = 0.15$). \item A survey redshift range $(z_{\rm min},z_{\rm max})$ and solid angle is specified: for example $1.0 < z < 1.3$ and $1000$ deg$^2$. The survey sky geometry is assumed to be bounded by lines of constant right ascension and declination of equal angular lengths. The three-dimensional geometry is therefore `conical' and the convolving effect of this window function is included. \item Using the fiducial cosmology, a cuboid for FFTs is constructed whose sides $(L_x, L_y, L_z)$ are just sufficient to bound the survey volume. Note, only the enclosed cone is populated by galaxies in order that the window function effect is treated properly. 
\item A model matter power spectrum $P_{\rm mass}(k,z=0)$ is computed for the chosen parameters $(\Omega_{\rm m},\Omega_{\rm b},h)$ from the fitting formula \citep{EH98}, assuming a $z=0$ normalization $\sigma_8 = 1$ and a primordial power-law slope $n=1$. The survey slice is assumed to have an `effective' redshift $z_{\rm eff} = (z_{\rm min} + z_{\rm max})/2$. The power spectrum is scaled to this redshift using the linear growth factor $D_1(z)$, obtained by solving the full second-order differential equation \citep[see e.g.][]{LinJen} to enable us to treat non-$\Lambda$CDM cosmologies: \begin{equation} P_{\rm gal}(k,z_{\rm eff}) = P_{\rm mass}(k,0) \, D_1(z_{\rm eff})^2 \, b^2 \end{equation} where we use a constant linear bias factor $b$ for the clustering of galaxies with respect to matter. The value $b=1$ is assumed for our surveys, unless otherwise stated. \item A set of Monte Carlo realizations (numbering 400 for all simulations presented here) is then performed to generate many different galaxy distributions consistent with $P_{\rm gal}(k)$, as described in steps 6 and 7. \item A cuboid of Fourier coefficients is constructed with grid lines set by $dk_i = 2\pi/L_i$, with a Gaussian distribution of amplitudes determined from $P_{\rm gal}(k)$, and with randomized phases. The gridding is sufficiently fine that the Nyquist frequencies in all directions are significantly greater than the smallest scale for which a power spectrum is extracted (i.e.\ the linear/non-linear transition scale at the redshift $z_{\rm eff}$). \item The Fourier cuboid is FFTed to determine the density field in the real-space box. This density field is then Poisson sampled within the survey `cone' to determine the number of galaxies in each grid cell. \item Using the fiducial cosmological parameters, this distribution is converted into a simulated catalogue of galaxies with redshifts and angular positions, for each Monte Carlo realization. 
\item We now assume a trial cosmology: for example $(\Omega_{\rm m},h,w_0,w_1) = (0.3,0.7,-0.9,0)$. The co-moving co-ordinates of the galaxies are computed in the trial cosmology as would be done by an observer without knowledge of the true cosmology. \item The power spectrum of the simulated survey for the trial cosmology is measured using standard estimation tools \citep{FKP94}. Power spectrum modes in Fourier space are divided into bins of $(k_{\rm perp},k_{\rm par})$ where, if the $x$-axis is the radial direction, $k_{\rm par} = k_x$ and $k_{\rm perp}^2 = k_y^2 + k_z^2$. \item An error bar is assigned to each power spectrum bin using the variance measured over the Monte Carlo realizations. Note that the distribution of realizations also encodes any covariances between different power spectrum bins, although the scale of correlations in $k$-space is expected to be very small (compared to the separation of the acoustic peaks) for the very large survey volumes considered here. \item The measured $P(k_{\rm perp},k_{\rm par})$ is divided by a smooth `reference spectrum' following Paper I \citep[i.e. the `no-wiggles' spectrum of][]{EH98}, and the result is fitted with a simple empirical formula, modified from Paper I to permit separate fitting of the sinusoidal scale in the radial and tangential directions: \begin{eqnarray} &\,& \frac{P(k_{\rm perp},k_{\rm par})}{P_{\rm ref}} = 1 + \nonumber \\ &\,& \hspace{-1cm} A \, k \, \exp{ \left[ - \left( \frac{k}{0.1 \, h \, {\rm Mpc}^{-1}} \right)^{1.4} \right] } \times \nonumber \\ &\,& \hspace{-1cm} \sin{ \left( 2 \pi \sqrt{ \left( \frac{k_{\rm perp}}{\lambda_{\rm perp}} \right)^2 + \left( \frac{k_{\rm par}}{\lambda_{\rm par}} \right)^2 } \right) } \end{eqnarray} where $k^2 = k_{\rm perp}^2 + k_{\rm par}^2$. The free parameters are then the tangential and radial sinusoidal wavescales $(\lambda_{\rm perp},\lambda_{\rm par})$ together with the overall amplitude $A$. 
\end{enumerate} We can now assign a probability to the trial cosmology. The Monte Carlo realizations produce a distribution of fitted wavescales $\lambda_{\rm perp}$ and $\lambda_{\rm par}$ (Figure \ref{figlamfit}) for the trial cosmology. Using these trial cosmological parameters, we can determine the length of the characteristic ruler $\lambda_{\rm theory}$ using a standard fitting formula for the sound horizon integral (Equation 1 of Paper I) in terms of $\Omega_{\rm m}$, $\Omega_{\rm b}$ and $h$ \citep{EB99}. (We remind the reader that $\lambda_{\rm theory}$ is set in the early Universe and is insensitive to dark energy parameters.) The location of the value of $\lambda_{\rm theory}$ in the distribution of tangential (radial) wavescales over the Monte Carlo realizations allows us to assign a probability for the trial cosmological parameters. For example: if $\lambda_{\rm theory}$ lies at the $16^{\rm th}$ percentile of the distribution, the rejection probability is $2 \times (50-16) = 68\%$. Note that the simulated observer does not need to know the fiducial cosmological parameters (including the dark energy model) to perform this analysis with real data. \begin{figure}[htbp] \begin{center} \epsfig{file=fig2.ps,angle=-90,width=7cm} \end{center} \caption{Distribution of fitted tangential and radial wavescales for 400 Monte Carlo realizations of a survey covering volume $10 \, V_{\rm Sloan}$ and redshift range $0.75 < z < 1.25$. The solid circle marks the expected value of the acoustic scale and the dashed lines bound regions containing $68\%$ of the fitted values.
The tangential wavescale may be determined more accurately than the radial wavescale because more tangential modes are available (i.e.\ a cylindrical shell of radius $k_{\rm perp} = \sqrt{k_y^2 + k_z^2}$ centred upon the radial ($x$-) axis).} \label{figlamfit} \end{figure} \subsection{The streamlined approach} \label{secstream} The full methodology outlined above is too computationally intensive for the exploration of a full grid of trial cosmological parameters $(\Omega_{\rm m},h,w_0,w_1)$. In practice we pursue a streamlined approach that adopts some simple approximations. In the Appendix we use a test case to demonstrate that the results are equivalent to the utilization of the full methodology. In our streamlined approach, we exploit the fact that the accuracy of measurement of $\lambda_{\rm perp}$ is a very good approximation of the precision with which we can recover the quantity $x(z_{\rm eff})/s$, where $x(z_{\rm eff})$ is the co-moving radial distance to the effective redshift of the survey and $s$ is the value of the sound horizon at recombination. This is because (in the flat-sky approximation) the value of $x$ controls physical tangential scales in the slice (as displacements $\Delta r$ are governed by $\Delta r = x \times \Delta \theta$). Similarly, the accuracy of measurement of $\lambda_{\rm par}$ is equivalent to that of $x'(z_{\rm eff})/s$, where $x'(z) \equiv dx/dz = c/H(z)$ (since $\Delta r = x' \times \Delta z$). The value of $s$ appears in the denominators because a systematic shift in the standard ruler scale implies a similar variation in the recovered physical scales $x$ and $dx/dz$: cosmic distances are measured in units of the sound horizon at recombination (equivalently, we may think of this measuring rod as the distance to the CMB: $s = D_{\rm CMB} \times \theta_A$, where $\theta_A$ is the angular scale separating the CMB acoustic peaks). 
Therefore, rather than re-constructing the galaxy distribution using a trial cosmology, we instead fitted wavelengths $\lambda_{\rm perp}$ and $\lambda_{\rm par}$ directly in the fiducial cosmology (i.e.\ omitting steps 8 and 9 above). The $68\%$ scatter in these fits across the Monte Carlo realizations was assigned as a $1\sigma$ Gaussian error in the values of $x(z_{\rm eff})/s$ and $x'(z_{\rm eff})/s$, respectively. The likelihood contours for the trial cosmological parameters were then deduced using standard expressions for $dx/dz$ and $x$ in terms of $(\Omega_{\rm m},h,w_0,w_1)$ and a fitting formula for $s$ in terms of $(\Omega_{\rm m},\Omega_{\rm b},h)$ (see Section \ref{secmethlik}). This streamlined approach assumes that: \begin{enumerate} \item The scatter in fitted wavelengths is independent of the values of the cosmological parameters. In detail, changing the cosmological parameters will alter the cosmic volume surveyed between $z_{\rm min}$ and $z_{\rm max}$, and therefore the errors in the recovered power spectrum in any bin (and hence the accuracy with which the sinusoidal scale may be determined). However, these variations can be neglected for small perturbations about a fiducial cosmology. \item The values of $[x(z_{\rm eff}),x'(z_{\rm eff})]$ control tangential and radial scales, respectively. This statement is exact for a flat sky, and holds approximately for the conical geometry assumed here (if the survey solid angle is not too large). \end{enumerate} These approximations are tested in the Appendix. \subsection{Likelihoods for dark energy models} \label{secmethlik} The procedure thus far has permitted us to recover values and statistical errors for the quantities $x(z_{\rm eff})/s$ and $x'(z_{\rm eff})/s$ for each survey redshift bin. 
These measurements are statistically independent to a good approximation (this is evidenced by the distribution of fitted wavelengths in Figure \ref{figlamfit} being close to an ellipse aligned parallel with the axes, displaying only a weak tilt). Figure \ref{figxdx} illustrates the simulated recovery of $x(z)/s$ and $x'(z)/s$ for some Monte Carlo realizations of a $1000$ deg$^2$ survey, in redshift slices of width $\Delta z = 0.5$ from $z = 0.5$ to $z = 3.5$. Results for the accuracy of recovery of $x(z)/s$ and $x'(z)/s$ in these redshift slices are listed in Table \ref{tabxdx}. \begin{figure*}[htbp] \begin{center} \epsfig{file=fig3.ps,angle=0,width=12cm} \end{center} \caption{Simulated measurements of $x(z)/s$ and $x'(z)/s$ in six redshift slices for the first four Monte Carlo realizations of a $1000$ deg$^2$ survey. The plotted values in each redshift bin are inferred from the fractional deviation of the fitted tangential and radial wavelengths from their fiducial values and the error bars represent the standard deviation across the realizations. The various model curves represent the fractional deviations of $x(z)$ and $x'(z)$ as the dark energy model is varied from the fiducial point $(w_0,w_1) = (-1,0)$. The plots illustrate that these data would be sufficient to rule out dark energy parameters $(-0.8,0)$ and $(-1,0.4)$ with high confidence; but, owing to the cosmic degeneracy between $w_0$ and $w_1$, a model $(w_0,w_1) = (-0.8,-0.4)$ cannot be confidently excluded. 
Note that the evolution of $w(z) = w_0 + w_1 z$ is cut off at $z = z_{\rm cut} = 2$.} \label{figxdx} \end{figure*} \begin{table*}[htbp] \begin{center} \begin{tabular}{ccccc} \hline Survey & Area & $z$-bin & Accuracy & Accuracy \\ & (deg$^2$) & & $x/s$ ($\%$) & $x'/s$ ($\%$) \\ \hline spec-$z$ & 1000 & $0.5 - 1.0$ & 2.7 & 3.8 \\ & & $1.0 - 1.5$ & 1.4 & 2.1 \\ & & $1.5 - 2.0$ & 1.2 & 2.0 \\ & & $2.0 - 2.5$ & 1.0 & 1.9 \\ & & $2.5 - 3.0$ & 1.2 & 2.0 \\ & & $3.0 - 3.5$ & 1.1 & 1.9 \\ \hline spec-$z$ & 10000 & $0.5 - 0.7$ & 1.7 & 2.7 \\ & & $0.7 - 0.9$ & 1.1 & 2.0 \\ & & $0.9 - 1.1$ & 1.0 & 1.5 \\ & & $1.1 - 1.3$ & 0.7 & 1.4 \\ & & $1.3 - 1.5$ & 0.6 & 1.2 \\ \hline KAOS & 1000 & $0.5 - 1.3$ & 1.6 & 2.6 \\ & 400 & $2.5 - 3.5$ & 1.2 & 2.3 \\ \hline photo-$z$ $\sigma_0 = 0.03$ & 2000 & $0.5 - 1.5$ & 2.3 & -- \\ & & $1.5 - 2.5$ & 1.4 & -- \\ & & $2.5 - 3.5$ & 1.3 & -- \\ \hline \end{tabular} \end{center} \caption{Simulated precision of recovery of the quantities $x(z_{\rm eff})/s$ and $x'(z_{\rm eff})/s$ from a series of future spectroscopic and photometric redshift surveys. For spectroscopic redshift surveys, these quoted accuracies (for area $A_1$) may be approximately scaled to other survey areas ($A_2$) by multiplying by a factor $\sqrt{A_1/A_2}$ (since the errors in the power spectrum measurement $\delta P \propto 1/\sqrt{V}$). In order to scale the photometric redshift results to surveys with different r.m.s.\ redshift scatters $\sigma_2$, we can multiply by a further factor $\sqrt{\sigma_2/\sigma_1}$ (since the number of usable Fourier modes $m$ scales as $1/\sigma$, and $\delta P \propto 1/\sqrt{m}$). These simple scalings will break down (1) if the redshift range changes significantly, owing to the changing position of the non-linear transition, and (2) in the regime where we are just resolving the oscillations, when improvement is better than $\sqrt{V}$. 
The measurement precision of $x/s$ and $x'/s$ is determined by Monte Carlo realizations and is accurate to $\pm$ 0.1\%.} \label{tabxdx} \end{table*} We parameterize the dark energy model using an equation-of-state $w(z) = w_0 + w_1 z$. The accuracies of $x(z)/s$ and $x'(z)/s$ are then used to infer joint constraints over a grid of $(w_0,w_1)$, using the standard formulae for $x'(z) = c/H(z)$ and $x(z) = \int_0^z x'(z') \, dz'$, where the Hubble constant $H(z)$ at redshift $z$ is a function of $(\Omega_{\rm m},h,w_0,w_1)$. For the sound horizon $s$ we used the formulae of Efstathiou \& Bond (1999, equations 18-20) in terms of $(\Omega_{\rm m},\Omega_{\rm b},h)$ so we effectively assume that the effect of dark energy at early times is insignificant. We explore a range of uncertainties for these quantities below. For each grid point in the $(w_0,w_1)$ plane we derive a likelihood for each redshift slice by marginalizing over Gaussian priors for $\Omega_{\rm m} $ and $\Omega_{\rm m} h^2$ (the natural variables -- see below) centered upon $\Omega_{\rm m} = 0.3$ and $h = 0.7$. The sound horizon is only weakly dependent on $\Omega_{\rm b}$, therefore we simply fixed the value $\Omega_{\rm b} h^2 = 0.022$ (we checked that the likelihood contours remained unchanged for reasonable variations in $\Omega_{\rm b} h^2$). The overall $(68\%,95\%)$ likelihood contours were then determined by multiplying together the individual likelihoods inferred from the measurements of $x(z)/s$ and $x'(z)/s$ for each redshift slice. Figure \ref{figw0w1gen} displays the resulting $(w_0,w_1)$ contours for a $10{,}000$ deg$^2$ survey. A further approximation has been used to generate this plot (also tested in the Appendix): that the accuracies of the fitted wavelengths determined for the $1000$ deg$^2$ simulation (Figure \ref{figxdx}) may be scaled by a factor $\sqrt{10}$. This is simply equivalent to sub-dividing the $10{,}000$ deg$^2$ survey into $10$ separate independently-analyzed pieces. 
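The simple area and photo-$z$ scalings invoked here (and in the caption of Table \ref{tabxdx}) can be sketched numerically as follows. This is an illustrative helper only, not survey code: the function name and the example baseline (the $1.0\%$ accuracy of $x/s$ from the $2.0 < z < 2.5$ slice of the $1000$ deg$^2$ survey) are taken from the table, and the sketch assumes the stated $\delta P \propto 1/\sqrt{V}$ and $m \propto 1/\sigma$ behaviour.

```python
import math

def scale_accuracy(acc1, area1, area2, sigma1=None, sigma2=None):
    """Scale a fractional wavelength accuracy from a survey of area1 (deg^2)
    to area2, assuming power-spectrum errors delta-P ~ 1/sqrt(V).
    For photo-z surveys, an r.m.s. redshift scatter sigma reduces the usable
    mode count m ~ 1/sigma, giving a further factor sqrt(sigma2/sigma1)."""
    acc2 = acc1 * math.sqrt(area1 / area2)
    if sigma1 is not None and sigma2 is not None:
        acc2 *= math.sqrt(sigma2 / sigma1)
    return acc2

# Scaling a 1000 deg^2 baseline accuracy of 1.0% to 10,000 deg^2 is the same
# sqrt(10) factor as sub-dividing the larger survey into 10 independent pieces.
print(scale_accuracy(1.0, 1000, 10000))  # 1.0 / sqrt(10) ~ 0.32%
```

As noted in the table caption, these scalings break down if the redshift range (and hence the non-linear transition) changes significantly, or in the regime where the oscillations are only marginally resolved.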
If we do not make this additional approximation, unfeasibly large Fourier transforms are required to handle the size of the survey cuboid. Furthermore, the statistical independence of $x(z_{\rm eff})$ and $x'(z_{\rm eff})$ in a given redshift slice becomes weaker, as this independence rests upon the flat-sky approximation. We note that: \begin{enumerate} \item There is a significant degeneracy in each redshift slice between $w_0$ and $w_1$, because approximately the same cosmology is produced if $w_0$ becomes more negative and $w_1$ becomes more positive. The axis of degeneracy is a slow function of redshift, which improves this situation somewhat as we combine different redshift slices. \item As redshift increases, the radial oscillations provide decreasingly powerful constraints upon the dark energy model, because $H(z)$ becomes independent of $(w_0,w_1)$. This conclusion is valid if we are perturbing about the cosmological constant model $(-1,0)$, but will not be true in general for models with $w_1 \ne 0$ for which dark energy may affect dynamics at higher redshift. \item The tightness of the likelihood contours in the $(w_0,w_1)$ plane depends significantly upon the fiducial dark energy model (see Figure \ref{figfid}). As $w_0$ and $w_1$ become more positive, dark energy grows more influential at higher redshifts and the simulated surveys constrain the dark energy parameters more accurately, despite the fact that the surveyed cosmic volume is decreasing. \end{enumerate} \begin{figure}[htbp] \begin{center} \epsfig{file=fig4.ps,angle=-90,width=7cm} \end{center} \caption{Likelihood contours for dark energy model parameters $(w_0,w_1)$ for a $10{,}000$ deg$^2$ survey. Contours are shown for six individual redshift slices ranging from $z=0.5$ to $z=3.5$ ($68\%$), together with the combined result ($68\%$,$95\%$). For this plot, precise knowledge of $\Omega_{\rm m}$ ($= 0.3$), $h$ ($= 0.7$) and $\Omega_{\rm k}$ ($= 0$) is assumed. 
In Figures \ref{figpriorsgen} and \ref{figcurvgen} we plot sets of contours for different priors in $\Omega_{\rm m}$, $h$ and $\Omega_{\rm k}$. The quoted errors $\Delta w_0$ and $\Delta w_1$ are produced by marginalizing over the other dark energy parameter (i.e.\ $w_1$ and $w_0$, respectively). This plot is obtained by scaling the inferred measurement accuracies of $x/s$ and $x'/s$ from a $1000$ deg$^2$ simulation by a factor $\sqrt{10}$.} \label{figw0w1gen} \end{figure} When generating Figure \ref{figw0w1gen}, we assume that the values of $\Omega_{\rm m}$ ($=0.3$) and $h$ ($=0.7$) are known perfectly. Although there is some useful cancellation between the trends of the distance scale $x(z)$ and the standard ruler scale $s$ with the value of $\Omega_{\rm m} h^2$, there is nevertheless some residual dependence of the experimental performance on the accuracy of our knowledge of both $\Omega_{\rm m}$ and $h$, independently. In this paper we choose not to combine our results with cosmological priors from specific proposed experiments (as could be achieved by combining Fisher matrix information, for example). We prefer to keep our presentation in general terms by marginalizing over different priors for $\Omega_{\rm m}$ and $h$. We recognize that this approach will not capture all the parameter degeneracies inherent in future CMB, large-scale structure or supernova surveys, but nevertheless we can robustly quantify the accuracy of knowledge required for these other cosmological parameters such that their uncertainties do not dominate the resulting error in dark energy parameters. Combinations with any specific future experiment can be achieved by using our results listed in Table \ref{tabxdx}, which represent the fundamental observables recovered by this method: $x(z)$ and $x'(z)$ in units of the sound horizon.
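The mapping from $(\Omega_{\rm m},h,w_0,w_1)$ to the observables $x(z)/s$ and $x'(z)/s$ can be sketched numerically as follows. This is a minimal illustration under stated assumptions, not our analysis code: we substitute the Eisenstein \& Hu (1998, eq.~26) fitting formula for the sound horizon in place of the Efstathiou \& Bond expressions used in the text, assume a flat universe, and implement the $z_{\rm cut} = 2$ cut-off in $w(z) = w_0 + w_1 z$ noted earlier.

```python
import math

C_KMS = 299792.458  # speed of light in km/s

def rho_de_ratio(z, w0, w1, z_cut=2.0):
    """rho_DE(z)/rho_DE(0) for w(z) = w0 + w1*z, held constant above z_cut,
    from rho_DE ~ exp(3 * integral_0^z [1+w(z')]/(1+z') dz') (analytic)."""
    zc = min(z, z_cut)
    integral = (1.0 + w0 - w1) * math.log(1.0 + zc) + w1 * zc
    if z > z_cut:  # constant w = w0 + w1*z_cut beyond the cut-off
        w_c = w0 + w1 * z_cut
        integral += (1.0 + w_c) * math.log((1.0 + z) / (1.0 + z_cut))
    return math.exp(3.0 * integral)

def hubble(z, om=0.3, h=0.7, w0=-1.0, w1=0.0):
    """H(z) in km/s/Mpc for a flat universe (Omega_k = 0)."""
    return 100.0 * h * math.sqrt(om * (1 + z) ** 3
                                 + (1 - om) * rho_de_ratio(z, w0, w1))

def x_comoving(z, n=2000, **cosmo):
    """x(z) = integral_0^z c/H(z') dz' in Mpc (trapezoidal rule),
    so that x'(z) = c/H(z)."""
    dz = z / n
    f = [C_KMS / hubble(i * dz, **cosmo) for i in range(n + 1)]
    return dz * (0.5 * f[0] + sum(f[1:-1]) + 0.5 * f[-1])

def sound_horizon(om=0.3, ob_h2=0.022, h=0.7):
    """Fitting formula for s in Mpc (Eisenstein & Hu 1998, eq. 26) --
    a stand-in for the Efstathiou & Bond (1999) expressions in the text."""
    om_h2 = om * h * h
    return 44.5 * math.log(9.83 / om_h2) / math.sqrt(1 + 10 * ob_h2 ** 0.75)

s = sound_horizon()
for z in (1.0, 3.0):
    print(z, x_comoving(z) / s, (C_KMS / hubble(z)) / s)
```

Perturbing $(w_0,w_1)$ in calls such as `x_comoving(1.0, w0=-0.8)` reproduces the fractional deviations of the model curves in Figure \ref{figxdx}.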
In general terms, one degree of freedom in other parameters is constrained by the excellent measurement of the physical matter density $\Omega_{\rm m} h^2$ afforded by the CMB angular power spectrum: accuracies of about $3\%$ ($\sigma(\Omega_{\rm m} h^2) \approx 0.004$) and $1\%$ ($\sigma(\Omega_{\rm m} h^2) \approx 0.001$) are possible with the WMAP and Planck satellites, respectively \citep{Balbi03}. However, a second independent constraint on a combination of $\Omega_{\rm m}$ and $h$ is also required. This is illustrated by Figure \ref{figpriorsgen}, in which we re-compute the overall likelihoods in the $(w_0,w_1)$ plane, marginalizing over the WMAP and Planck errors in $\Omega_{\rm m} h^2$ together with a second independent Gaussian prior on $\Omega_{\rm m}$. We conclude that for a survey of $10{,}000$ deg$^2$, $\Omega_{\rm m}$ must be known with an accuracy $\sigma(\Omega_{\rm m}) \simeq 0.01$ (in conjunction with the WMAP or Planck determination of $\Omega_{\rm m} h^2$) in order that this uncertainty is not limiting. Note that as the cosmic volume surveyed increases, the prior requirements of knowledge of $\Omega_{\rm m}$ become more stringent. For a $1000$ deg$^2$ experiment, only $\sigma(\Omega_{\rm m}) \la 0.03$ is required (see Figure \ref{figpriorskaos}). \begin{figure}[htbp] \begin{center} \epsfig{file=fig5.ps,angle=-90,width=7cm} \end{center} \caption{Likelihood contours ($68\%$) for dark energy model parameters $(w_0,w_1)$ for the same (combined) survey as Figure \ref{figw0w1gen}, considering various different Gaussian priors upon the values of $\Omega_{\rm m} h^2$ and $\Omega_{\rm m}$, with standard deviations $\sigma(\Omega_{\rm m} h^2)$ and $\sigma(\Omega_{\rm m})$. We assume $\Omega_{\rm k} = 0$. For the values of $\sigma(\Omega_{\rm m} h^2)$ we make assumptions representative of the WMAP and Planck CMB experiments, for each of these cases we further consider $\sigma(\Omega_{\rm m}) = 0.01$ and $0.03$. 
For a survey of $10{,}000$ deg$^2$ covering redshift range $0.5 < z < 3.5$, WMAP suffices to determine the value of $\Omega_{\rm m} h^2$, but we require an independent constraint $\sigma(\Omega_{\rm m}) \simeq 0.01$ in order that this uncertainty is not limiting.} \label{figpriorsgen} \end{figure} Large-scale structure and/or supernovae constraints together with the first-year WMAP CMB data currently deliver $\sigma(\Omega_{\rm m}) \approx 0.02$--0.04 assuming a flat Universe \citep[e.g.][]{Sper03,Teggers04,Cole05}. \cite{Balbi03} quote $\sigma(h) \simeq 0.02$ as attainable with Planck (equivalent to $\sigma(\Omega_{\rm m}) \simeq 0.02$), even allowing for uncertainty in the dark energy model. However, in light of Figure \ref{figpriorsgen}, this may not be sufficient for a $10{,}000$ deg$^2$ baryon oscillations survey. Table \ref{tabw0w1} lists some $68\%$ confidence ranges for dark energy parameters $(w_0,w_1)$ for a range of survey configurations and cosmological priors, assuming a fiducial model $(-1,0)$. 
\begin{table*}[htbp] \begin{center} \begin{tabular}{cccccc} \hline Survey & Configuration & $\sigma(\Omega_{\rm m} h^2)$ & $\sigma(\Omega_{\rm m})$ & $\sigma(w_0)$ & $\sigma(w_1)$ \\ \hline spec-$z$ & (10000 deg$^2$, $0.5<z<3.5$) & 0 & 0 & 0.03 & 0.06 \\ & & WMAP & 0.01 & 0.04 & 0.07 \\ & & WMAP & 0.03 & 0.08 & 0.10 \\ & & Planck & 0.03 & 0.07 & 0.09 \\ \hline KAOS & (1000 deg$^2$, $0.5<z<1.3$) + (400 deg$^2$, $2.5<z<3.5$) & 0 & 0 & 0.17 & 0.48 \\ & & WMAP & 0.03 & 0.27 & 0.63 \\ & & WMAP & 0.05 & 0.34 & 0.71 \\ & ($z \sim 1$) + (1000 deg$^2$, $1.5<z<2.5$) & 0 & 0 & 0.10 & 0.26 \\ \hline SKA & (20000 deg$^2$, $0.5<z<1.5$) & 0 & 0 & 0.04 & 0.11 \\ & & Planck & 0.01 & 0.05 & 0.13 \\ & & Planck & 0.03 & 0.11 & 0.18 \\ \hline photo-$z$ & (10000 deg$^2$, $0.5<z<3.5$, $\sigma_0=0.01$) & 0 & 0 & 0.07 & 0.19 \\ & & WMAP & 0.01 & 0.20 & 0.57 \\ & & WMAP & 0.03 & 0.30 & 0.95 \\ & (10000 deg$^2$, $0.5<z<3.5$, $\sigma_0=0.03$) & 0 & 0 & 0.12 & 0.32 \\ & & WMAP & 0.01 & 0.23 & 0.66 \\ & & WMAP & 0.03 & 0.31 & 0.95 \\ & (2000 deg$^2$, $0.5<z<3.5$, $\sigma_0=0.01$) & 0 & 0 & 0.19 & 0.51 \\ & & WMAP & 0.01 & 0.25 & 0.72 \\ & & WMAP & 0.03 & 0.31 & 0.95 \\ \hline \end{tabular} \end{center} \caption{Simulated $68\%$ confidence ranges for the dark energy parameters $(w_0,w_1)$ for a series of future spectroscopic and photometric redshift surveys, assuming a fiducial cosmology $(-1,0)$. A range of (Gaussian) priors on the values of $\Omega_{\rm m} h^2$ and $\Omega_{\rm m}$ are considered. The WMAP and Planck measurement precisions of $\sigma(\Omega_{\rm m} h^2)$ are assumed to be $0.004$ and $0.001$, respectively. We assume $\Omega_{\rm k} = 0$.} \label{tabw0w1} \end{table*} Throughout this paper we assume a spatially-flat ($\Omega_{\rm k} = 0$) cosmology. In Figure \ref{figcurvgen} we compute how the likelihood contours in the $(w_0,w_1)$ plane for our $10{,}000$ deg$^2$ survey degrade as our knowledge of $\Omega_{\rm k}$ weakens. 
We note that current determinations of the curvature \citep[$\sigma(\Omega_{\rm k}) \la 0.02$,][]{Sper03,Dan05} are almost adequate for this proposed experiment. Of interest is \cite{Bern05}, which noted that the {\it combination\/} of baryon oscillations with weak lensing constraints leads to direct breaking of degeneracies of curvature with dark energy and allows $\Omega_{\rm k}$ to be measured without any assumptions about the equation of state. \begin{figure}[htbp] \begin{center} \epsfig{file=fig6.ps,angle=-90,width=7cm} \end{center} \caption{Likelihood contours ($68\%$) for dark energy model parameters $(w_0,w_1)$ for the same (combined) survey as Figure \ref{figw0w1gen}, considering various different Gaussian priors upon the value of $\Omega_{\rm k}$ (centred upon zero) with standard deviations $\sigma(\Omega_{\rm k})$. We consider both holding fixed the values of $\Omega_{\rm m}$ and $h$, and marginalizing over these parameters.} \label{figcurvgen} \end{figure} \subsection{Comparison with the Fisher matrix methodology} It is worth comparing our methodology and results with the Fisher matrix approach utilized by \cite{SE03} for the simulation of baryonic oscillations experiments. The input data and assumptions are not entirely consistent in the two cases, but we can make a reasonably direct comparison of, for example, Figure 5 in \cite{SE03} with Figure \ref{figw0w1kaos} in this study. In such comparisons we find that the accuracies of determination of $(w_0,w_1)$ are consistent within a factor of about $1.5$, with the Fisher matrix method yielding tighter contours. This is not surprising: the Fisher matrix method uses information from the whole power spectrum shape (which will also be distorted in an assumed cosmology, e.g.\ by the Alcock-Paczynski effect) whereas in our approach, this shape is divided out and only the oscillatory information is retained.
The bulk of the Fisher information does appear to originate from the sinusoidal features (see Figure 5 in \cite{HH03} and Section 4.4 of \cite{SE03}). However, the improvement in dark energy precision resulting from fitting a full power spectrum template to the data may be up to $50\%$ compared to our `model-independent' treatment. Our results should provide a robust lower limit on the accuracy, as intended. We also note that the Fisher approach provides the {\it minimum possible} errors for an unbiased estimate of a given parameter based upon the curvature of the likelihood surface near the fiducial model. As a result the projected error contours for any combination of two parameters always form an ellipse (e.g. Figure~5 of \cite{SE03}). In our approach we explore the whole parameter space and estimate probabilities via Monte Carlo techniques: the error contours are thus larger and not necessarily elliptical (e.g.\ our Figure \ref{figw0w1kaos}). A further difference between the appearances of our Figure \ref{figw0w1kaos} and Figure 5 in \cite{SE03} is a noticeable change in the tilt of the principal degeneracy direction in the $(w_0,w_1)$ plane. The reason for this is readily identified: Seo \& Eisenstein additionally incorporate the CMB measurement of the angular diameter distance to recombination into their confidence plots. We choose not to do this in order to expose the low redshift independent cosmological constraints from galaxy surveys and to isolate our results from the effects of any unknown behaviour of the equation of state (which could extend beyond our $w_0, w_1$ formalism) between $z\sim 4$ and $z\sim 1100$. In summary, we are encouraged by the rough agreement in values of $\Delta w_0$ and $\Delta w_1$ between these two very different techniques. They represent respectively more conservative/robust and best possible dark energy measurements from future surveys for baryon oscillations. 
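The relationship between the two error estimates can be illustrated with a toy one-parameter wavelength fit. The example below is entirely schematic: a pure sinusoid stands in for the decaying sinusoid of Paper I, and the amplitude, noise level and $k$-coverage are invented for illustration. The Fisher error is the Cram\'{e}r--Rao lower bound; the true scatter of repeated Monte Carlo fits can only match or exceed it.

```python
import math
import random

random.seed(1)

# Toy model: data d_i = A*sin(k_i * lam) + Gaussian noise; fit the wavelength lam.
A, lam_true, sigma = 0.5, 10.0, 0.2
ks = [0.02 * (i + 1) for i in range(30)]

# Fisher (minimum-variance) error: sigma_lam = sigma / sqrt(sum_i (dM_i/dlam)^2)
dM = [A * k * math.cos(k * lam_true) for k in ks]
fisher_err = sigma / math.sqrt(sum(d * d for d in dM))

# Monte Carlo: scatter of grid-search best fits over noisy realizations
grid = [lam_true - 1.5 + 0.005 * j for j in range(601)]
fits = []
for _ in range(200):
    data = [A * math.sin(k * lam_true) + random.gauss(0.0, sigma) for k in ks]
    chi2 = lambda lam: sum((d - A * math.sin(k * lam)) ** 2
                           for d, k in zip(data, ks))
    fits.append(min(grid, key=chi2))
mean = sum(fits) / len(fits)
mc_err = math.sqrt(sum((f - mean) ** 2 for f in fits) / (len(fits) - 1))
print(fisher_err, mc_err)  # the true scatter is bounded below by the Fisher error
```

For this near-linear, unimodal toy problem the two estimates nearly coincide; the larger, non-elliptical contours in our $(w_0,w_1)$ analysis arise because the full parameter space is explored far from the fiducial model.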
\section{Detectability and accuracy of wavescale extraction} \label{secdetect} In this Section we take a step back from questions of dark energy and employ our simulation tools to re-consider the fundamental question of the detectability of the acoustic oscillations in $P(k)$ as a function of survey size and redshift coverage. {\it The oscillations in the galaxy power spectrum are a fundamental test of the paradigm of the origin of galaxies in the fluctuations observed in the early Universe via the CMB.} Detection of the oscillations would be an extremely important validation of the paradigm. Furthermore our standard ruler technique cannot be confidently employed unless the sinusoidal signature in the power spectrum can be observed with a reasonable level of significance. Recently, analysis of the clustering pattern of SDSS Luminous Red Galaxies at $z = 0.35$ (a volume of $\sim 3.5 V_{\rm Sloan}$) has yielded the first convincing detection of the acoustic signal and application of the standard ruler \citep{Dan05}. Although this survey does not have sufficient redshift reach to strongly constrain dark energy models, this result is an important validation of the technique. Analysis of the final database of the 2dF Galaxy Redshift Survey has also yielded some visual evidence for baryonic oscillations \citep{Cole05}. Here we make the distinction between {\it detection of oscillations} and {\it detection of a baryonic signal} $\Omega_{\rm b} \ne 0$ in the clustering pattern: baryons produce an overall shape distortion in $P(k)$ as well as the characteristic pattern of oscillations. In this Section we define the `wiggles detectability' as the confidence of rejection of a smooth `no wiggles' model (i.e.\ the best-fitting smooth reference spectrum of step 12 in Section \ref{secsteps}). This is a different quantity to the `$3.4$-sigma confidence' of observing $\Omega_{\rm b} \ne 0$ reported by \cite{Dan05}. 
However, the techniques roughly agree on the measurement accuracy of the standard ruler (see the discussion of SDSS LRGs in our Paper I, Figure 3). Figure \ref{figdetect} tracks the detection significance against the percentage accuracy of recovery of the standard ruler for surveys covering three different redshift ranges: $0.25 < z < 0.75$, $0.75 < z < 1.25$ and $2.75 < z < 3.25$. The lines connect points separated by survey volume intervals of $1 V_{\rm Sloan}$; the solid circles denote volumes $(2, 5, 10) \, V_{\rm Sloan}$. For the purposes of this Section, an angle-averaged (isotropic) power spectrum is used rather than a power spectrum separated into tangential and radial components. We also only consider the pure vacuum $\Lambda$CDM model: the detectability is primarily driven by the cosmic volume surveyed, which is a relatively slow function of dark energy parameters. The detection significance is calculated from the average over the Monte Carlo realizations of the relative probability $P_{\rm rel}$ of the smooth `no-wiggles' model and best-fitting `wiggles' model, where \begin{equation} P_{\rm rel} = \exp{[-(\chi^2_{\rm no-wig} - \chi^2_{\rm best-wig})/2]} \label{eqprel} \end{equation} \begin{figure}[htbp] \begin{center} \epsfig{file=fig7.ps,angle=-90,width=7cm} \end{center} \caption{Variation of the `wiggles detectability' (i.e.\ the significance of rejection of a no-wiggles model), and the accuracy with which the characteristic sinusoidal scale may be extracted, with survey volume. Three surveys are considered at different redshifts, centred upon $z=0.5$, $z=1$ and $z=3$. 
The lines join points separated by volume intervals of $1 V_{\rm Sloan}$; the three solid circles on each line denote volumes $(2, 5, 10) \, V_{\rm Sloan}$.} \label{figdetect} \end{figure} We note that: \begin{enumerate} \item In order to obtain a significant ($3\sigma$) rejection of a no-wiggles model without using any power spectrum shape information, several $V_{\rm Sloan}$ must be surveyed. For the surveys centred at $z=(0.5,1,3)$ the required volume (in units of $V_{\rm Sloan}$) is approximately $(7,5,5)$. For the redshift ranges listed above, this corresponds to survey areas $(2300,700,500)$ deg$^2$. \item For a fixed wiggles detection significance, the accuracy of recovery of the standard ruler increases with redshift. This is due to the larger available baseline in $k$ at higher redshift, owing to the extended linear regime. Two full oscillations are visible at $z = 1$; four are unveiled by a survey at $z = 3$. \item For a fixed wavescale accuracy, the detection significance decreases with redshift, because the amplitude of the oscillations is damped with increasing $k$. As noted above, at higher redshifts there are more acoustic peaks available, thus a less significant measurement of each individual peak may be tolerated. \item The distribution of values of $P_{\rm rel}$ (equation \ref{eqprel}) over the Monte Carlo realizations is significantly skewed (see the discussion in \cite{BB05}). The median value of $P_{\rm rel}$ represents a more confident detection than the average plotted in Figure \ref{figdetect}. \end{enumerate} Figure \ref{figpkspec} displays some Monte Carlo power spectrum realizations of three surveys: ($0.5 < z < 1.3$, $1000$ deg$^2$), ($2.5 < z < 3.5$, $400$ deg$^2$) and ($0.5 < z < 1.5$, $10{,}000$ deg$^2$). The total volumes mapped in units of $V_{\rm Sloan}$ are $(10,8,133)$, respectively. 
The total numbers of galaxies observed in each survey (to ensure $n \times P = 3$) are $(6,2,85) \times 10^6$ (assuming linear bias $b=3$ for the $z \sim 3$ survey and $b=1$ otherwise). The first two redshift surveys could be performed by a next-generation wide-field optical spectrograph such as the KAOS instrument proposed for the Gemini telescopes \citep[][ {\tt http://www.noao.edu/kaos/}]{KAOS}. The third survey is possible in 6 months using the Square Kilometre Array to detect HI emission line galaxies \citep{AR04} or from a space mission (see Section~\ref{sec:space}). \begin{figure*}[htbp] \begin{center} \epsfig{file=fig8.ps,angle=-90,width=15cm} \end{center} \caption{Power spectrum realizations of three example high-redshift surveys, varying survey volume and redshift range. In all cases, we plot the angle-averaged power spectrum divided by the smooth reference spectrum. The dashed curve is the theoretical input $P(k)$ and the solid line is the best fit of our simple decaying sinusoidal function (Paper I, equation 3). The $x$-axis is marked in units of $k$ (in $h$ Mpc$^{-1}$) and represents the extent of the linear regime at the redshift in question (i.e.\ $z_{\rm eff} = (z_{\rm min} + z_{\rm max})/2$). The rows of the Figure represent surveys with geometries ($0.5 < z < 1.3$, $1000$ deg$^2$), ($2.5 < z < 3.5$, $400$ deg$^2$) and ($0.5 < z < 1.5$, $10{,}000$ deg$^2$); the columns display the first three Monte Carlo realizations in each case.} \label{figpkspec} \end{figure*} \section{Dark energy measurements from realistic spectroscopic redshift surveys} \label{secmeas} We now consider the prospects of performing these experiments with realistic galaxy redshift surveys. As noted above and in Paper I, such surveys must cover a minimum of several hundred deg$^2$ at high redshift, cataloguing at least several hundred thousand galaxies, in order to obtain significant constraints upon the dark energy model. 
\subsection{Existing surveys} These requirements are orders of magnitude greater than what has been achieved to date. Some existing surveys of high-redshift galaxies are the Canada-France Redshift Survey \citep[CFRS; a few hundred galaxies covering $\sim 0.1$ deg$^2$ to $z \approx 1.3$;][]{Lilly95} and the survey of $z \sim 3$ Lyman-break galaxies by \cite{Stei03} (roughly a thousand galaxies across a total area $\approx 0.4$ deg$^2$). Most other high-redshift spectroscopic surveys \citep[e.g.][]{GDDS,K20,Stei04} cover equally small areas $\la 1$ deg$^2$. Some larger surveys are in progress: the DEEP2 project \citep{Davis03}, using the DEIMOS spectrograph on the Keck telescope, aims to obtain spectra for $60{,}000$ galaxies ($3.5$ deg$^2$, $0.7 < z < 1.4$); the VIRMOS redshift survey \citep{LeFev03}, using the VIMOS spectrograph at the VLT, will map $150{,}000$ redshifts over $16$ deg$^2$ (considering the largest-area component of each). Neither of these existing projects comes close to meeting our goals, primarily due to the limitations of existing instrumentation. The spectrographs used to perform these surveys have typical fields-of-view (FOV) of diameter $10-20'$ and are unable to cover hundreds of deg$^2$ in a reasonable survey duration. \subsection{New ground-based approaches (optical/IR)} Some proposed new optical instrumentation addresses this difficulty, permitting spectroscopic exposures over considerably larger FOVs using the 8-metre telescopes that are required to obtain spectra of sufficient quality at these redshift depths. For example, the KAOS project for the Gemini telescopes \citep[][ {\tt http://www.noao.edu/kaos/}]{KAOS} is a proposal for a $1.5$ deg diameter FOV, 4000 fibre-fed optical spectrograph. There are two proposed redshift surveys: $900{,}000$ ($0.5 < z < 1.3$) galaxies over $1000$ deg$^2$, and $600{,}000$ ($2.5 < z < 3.5$) galaxies across $400$ deg$^2$.
These surveys would together take $\sim 170$ clear nights using the 8-metre Gemini telescope with realistic exposure times computed for the KAOS instrument sensitivity. The redshift ranges are driven by the strong spectral features available for redshift measurement in optical wavebands in a relatively short exposure time. The $z \sim 1$ range is cut off at $z = 1.3$ by the [OII] emission line and the calcium H \& K lines shifting to red/infrared wavelengths $>0.9$ \micron\ where the airglow is severe and conventional CCD detectors have low efficiency; the $z \sim 3$ component is driven by observing Ly$\alpha$ in the blue part of the optical range. The $w(z)$ measurements resulting from the proposed KAOS surveys, computed using the methodology of Section \ref{secmeth}, are displayed in Figure \ref{figw0w1kaos} (see also Tables \ref{tabxdx} and \ref{tabw0w1}). We show both the $z \sim 1$ and $z \sim 3$ contributions separately, and the joint constraint. We assume linear bias factors $b = (1,3)$ for the $z \sim (1,3)$ simulations, respectively. The measurement precision of the dark energy parameters is $\Delta w_0 \approx 0.2$ and $\Delta w_1 \approx 0.4$, significantly better than current supernovae constraints \citep{Riess04}. In statistical terms the KAOS performance is somewhat poorer than that projected for the proposed SNAP mission \citep{Ald02}, but the acoustic oscillations method is significantly less sensitive to errors of a systematic nature. Figure \ref{figw0w1kaos} illustrates that models with $w_1 < 0$ are harder to exclude owing to the diminishing sensitivity of cosmic distances to dark energy in this region of parameter space. \begin{figure}[htbp] \begin{center} \epsfig{file=fig9.ps,angle=-90,width=7cm} \end{center} \caption{Measurement of a dark energy model $w(z) = w_0 + w_1 z$ using a simulated survey with the KAOS spectrograph consisting of two components, $0.5 < z < 1.3$ ($1000$ deg$^2$) and $2.5 < z < 3.5$ ($400$ deg$^2$). 
Constraints are displayed for each redshift component ($68\%$), and for both surveys combined ($68\%,95\%$). Note that the likelihoods are generated over a much wider $(w_0,w_1)$ space than displayed in the Figure. The solid circle denotes the fiducial cosmology, $(w_0,w_1) = (-1,0)$. There is a significant degeneracy between $w_0$ and $w_1$, particularly for the $z \sim 3$ survey if $w_1 < 0$, owing to the lack of sensitivity of $H(z = 3)$ to dark energy in this case. For this plot, perfect knowledge of $\Omega_{\rm m}$ ($= 0.3$), $h$ ($= 0.7$) and $\Omega_{\rm k}$ ($= 0$) is assumed. The standard deviations quoted for $\Delta w_0$ and $\Delta w_1$ (i.e.\ half the interval between the $16^{\rm th}$ and the $84^{\rm th}$ percentiles) result from marginalizing over the other parameter (i.e.\ $w_1$ and $w_0$, respectively).} \label{figw0w1kaos} \end{figure} We note that the KAOS measurement of $H(z \approx 3)$ from the radial component of the $z \sim 3$ power spectrum provides little information about the dark energy model if we are perturbing around the cosmological constant: at $z \sim 3$, the dynamics of the Universe are entirely governed by the value of $\Omega_{\rm m} h^2$, and $H(z)$ is almost independent of $(w_0,w_1)$. However, the value of $x(z \approx 3)$ inferred from the tangential component of the $z \sim 3$ power spectrum does depend on dark energy, because $x(z=3)$ is an integral of $dx/dz = c/H(z)$ from $z = 0$ to $z = 3$, which is influenced by dark energy at lower redshifts. The $z \sim 3$ constraint thus reduces to a significant degeneracy between $w_0$ and $w_1$, as observed in Figure \ref{figw0w1kaos}, although models with $w_1 > 0$ can still be ruled out by this redshift component. The degeneracy is less severe for the $z \sim 1$ component owing to the availability of both $H(z)$ and $x(z)$ information. Figure \ref{figw0w1kaos} assumes that we have perfect prior knowledge of $\Omega_{\rm m}$ ($= 0.3$) and $h$ ($= 0.7$). 
Figure \ref{figpriorskaos} relaxes this assumption, illustrating the effect of including Gaussian priors upon $\Omega_{\rm m}$ and $\Omega_{\rm m} h^2$ of various widths. \begin{figure}[htbp] \begin{center} \epsfig{file=fig10.ps,angle=-90,width=7cm} \end{center} \caption{Likelihood contours ($68\%$) of dark energy parameters $(w_0,w_1)$ for the same KAOS redshift surveys as Figure \ref{figw0w1kaos}, marginalizing over the cosmological parameters $(\Omega_{\rm m},\Omega_{\rm m} h^2)$. We assume a WMAP Gaussian prior upon $\Omega_{\rm m} h^2$ (as in Figure \ref{figpriorsgen}) and consider a range of different Gaussian priors for $\Omega_{\rm m}$. In each case, standard deviations are quoted for $w_0$ and $w_1$ as before. We assume $\Omega_{\rm k} = 0$.} \label{figpriorskaos} \end{figure} A drawback of this survey design is the absence of the redshift range $1.5 < z < 2.5$, sometimes called the `redshift desert'. There are no strong emission lines accessible to optical spectrographs in this interval: existing surveys of this region have required very long exposure times to secure spectra \citep{GDDS}, but this could be remedied by near-IR or near-UV spectroscopy of bright, star-forming galaxies \citep{Stei04}. We now consider the usefulness of these additional observations as regards measuring dark energy, along with the observational practicalities. First, we investigate the effect of this redshift range upon measurements of the dark energy model $w(z) = w_0 + w_1 z$ (assuming precise prior knowledge of $\Omega_{\rm m}$ and $h$ for the purposes of this comparison; Figure \ref{figpriorskaos} indicates how accurately these parameters must be known in order that their uncertainty is not limiting). In Figure \ref{figw0w1kaos2}, we remove the $z = 3$ component of the proposed KAOS experiment and extend the lower-redshift $1000$ deg$^2$ survey across the redshift range $1.5 < z < 2.5$, divided into two independent slices of width $\Delta z = 0.5$.
The likelihood constraints in $(w_0,w_1)$ space tighten appreciably, {\it by a further factor of two}, principally due to the $1.5 < z < 2.0$ component, for which $H(z)$ still yields useful information about $(w_0,w_1)$. In Figure \ref{figw0w1kaos3} we add back in the $z = 3$ data; the dark energy measurements do not significantly improve. We conclude that coverage of the redshift desert would be highly desirable if it could be achieved. Simply increasing the area of the $z\sim 3$ component does not help nearly as much ($\Delta w_1$ is improved by $\sim 25\%$ for 1000 deg$^2$ at $z\sim 3$). This simplistic analysis is of course no substitute for a proper survey optimization assuming fixed total time or cost. \begin{figure}[htbp] \begin{center} \epsfig{file=fig11.ps,angle=-90,width=7cm} \end{center} \caption{Dark energy measurements resulting from $1000$ deg$^2$ surveys spanning the redshift ranges ($0.5 < z < 1.3$) and ($1.5 < z < 2.5$). We assume perfect prior knowledge of $(\Omega_{\rm m},h,\Omega_{\rm k})$.} \label{figw0w1kaos2} \end{figure} \begin{figure}[htbp] \begin{center} \epsfig{file=fig12.ps,angle=-90,width=7cm} \end{center} \caption{Dark energy measurements resulting from $1000$ deg$^2$ surveys spanning the redshift ranges ($0.5 < z < 1.3$) and ($1.5 < z < 2.5$), together with a $400$ deg$^2$ component covering ($2.5 < z < 3.5$). We assume perfect prior knowledge of $(\Omega_{\rm m},h,\Omega_{\rm k})$.} \label{figw0w1kaos3} \end{figure} We note, however, that there are other persuasive scientific reasons to include a $z = 3$ survey component \citep{Dan02}, amongst them: \begin{enumerate} \item A galaxy redshift survey at $z = 3$ unveils the linear power spectrum down to unprecedentedly small scales $k \approx 0.5 \, h$ Mpc$^{-1}$, measuring linear structure modes that cannot be accessed using the CMB. 
\item If dark energy is insignificant at $z = 3$, then measurement of the acoustic oscillations in such a redshift slice enables the standard ruler to be calibrated in a manner independent of the CMB. \item Our present analysis assumes that the fiducial dark energy model is a cosmological constant, $(w_0,w_1) = (-1,0)$. Based on current data, we have very little information about the value of $w_1$ (for the latest supernova analysis of \cite{Riess04}, $\sigma(w_1) \approx 0.9$). Should $w_1 \ne 0$, the influence of $w(z)$ upon higher-redshift dynamics could become more significant. A general redshift survey optimization, which is beyond the scope of this paper, should address the range of $w(z)$ to be explored \citep{Bassett05}. \end{enumerate} Figure \ref{figw0wa} considers a different parameterization for dark energy $w(z) = w_0 + w_a (1-a) = w_0 + w_a z/(1+z)$ (Linder 2003), where $a=(1+z)^{-1}$ is the usual cosmological scale factor, which encapsulates a more physically realistic behaviour at high redshifts $z \gtrsim 1$ in comparison to $w(z) = w_0 + w_1 z$. The rate of change of $w$ with redshift is $dw/dz = w_1$ in the first model and $dw/dz = w_a/(1+z)^2 \le w_a$ in the second; hence for a given survey we expect the size of the error for $w_a$ to exceed that for $w_1$. Figure \ref{figw0wa} illustrates this for the same survey configuration as Figure \ref{figw0w1kaos3}. \begin{figure}[htbp] \begin{center} \epsfig{file=fig13.ps,angle=-90,width=7cm} \end{center} \caption{Dark energy measurements resulting from the same surveys as Figure \ref{figw0w1kaos3}, using the alternative parameterization $w(z) = w_0 + w_a(1-a)$. We assume perfect prior knowledge of $(\Omega_{\rm m},h,\Omega_{\rm k})$. 
Note that the likelihoods are generated over a much wider $(w_0,w_a)$ space than displayed in the Figure.} \label{figw0wa} \end{figure} Figure \ref{figfid} uses the KAOS surveys plus a $1.5 < z < 2.0$ extension to illustrate how the tightness of the likelihood contours in the $(w_0,w_1)$ plane is a strong function of the fiducial dark energy parameters, as discussed in Section \ref{secstream}. We computed the linear growth factor for non-$\Lambda$CDM models by solving the appropriate second-order differential equation \citep[e.g.][]{LinJen}. If $w_1 > 0$ then dark energy is more significant at higher redshifts and the model parameters can be constrained more tightly, despite both the resulting decrease in the available cosmic volume in a given redshift range and the movement of the linear/non-linear transition to larger scales (smaller $k$). Note that our cut-off to the evolution of $w(z) = w_0 + w_1 z$ at $z_{\rm cut} = 2$ ensures that dark energy does not dominate at high redshift for our model with $w_1 > 0$. \begin{figure}[htbp] \begin{center} \epsfig{file=fig14.ps,angle=-90,width=7cm} \end{center} \caption{The tightness of the $(68\%,95\%)$ likelihood contours in the $(w_0,w_1)$ plane is a strong function of the fiducial cosmological model. In this Figure we illustrate, for three different fiducial models $(w_0,w_1) = (-1,0)$, $(-0.8,0.4)$ and $(-1.2,-0.4)$, dark energy measurements resulting from $1000$ deg$^2$ surveys spanning the redshift ranges ($0.5 < z < 1.3$) and ($1.5 < z < 2.0$), together with a $400$ deg$^2$ component covering ($2.5 < z < 3.5$). We assume perfect prior knowledge of $(\Omega_{\rm m},h,\Omega_{\rm k})$.} \label{figfid} \end{figure} Next, we consider the practicalities of observing galaxies in the redshift range $1.5 < z < 2.5$ (the so-called `optical redshift desert'). Considering near infra-red wavebands first: the H$\alpha$ 6563\AA\ line is accessible in the 1--2$\mu$m band over the interval $0.5 < z < 2$. 
This regime is the non-thermal infra-red, in which the sky is sufficiently dark to permit high-redshift spectroscopy (e.g.\ the emission-line observations of \cite{Glz99} and \cite{Pett98}). The critical opportunity offered here is that the H$\alpha$ emission line is potentially very bright at redshifts $z \gtrsim 1$, owing to the steep evolution in the star-formation rate of galaxies in the Universe over $0 < z \lesssim 1$ \citep{Hop00}. Let us consider a redshift slice $1.5 < z < 1.7$. Our requirement for shot noise sampling ($n \times P = 3$) translates into a required surface density $(3900/b^2)$ deg$^{-2}$ within this slab (Figure \ref{fignp}), where $b$ is the linear bias factor of the surveyed galaxies. The luminosity function of H$\alpha$ emitters at $z \gtrsim 1$ has been reasonably well determined from NICMOS slitless grism surveys using the Hubble Space Telescope \citep{Hop00,Yan99}. Using the Hopkins et al.\ luminosity function, we find that we need to reach a line flux limit of $1.1 \times 10^{-16}$ ergs cm$^{-2}$ s$^{-1}$ in order to reach the required surface density (assuming $b=1$). The $z=1$ H$\alpha$ luminosity function has apparently evolved strongly in comparison with $z=0$ \citep{Gal95} but the measurements are fairly robust: we can double-check the luminosity function determinations by simply counting objects in the NICMOS surveys above our required line flux limit. In the Hopkins, Connolly \& Szalay sample there are 4 galaxies in the redshift range $1.5 < z < 1.7$ above this flux limit, spread over 4.4 arcmin$^2$, yielding a surface density $\simeq$ 3300 deg$^{-2}$. The H$\alpha$ identifications are also very reliable: the Yan et al. sample was observed in optical wavebands by \cite{Hicks02}, confirming $\ge 75\%$ of the H$\alpha$ identifications via associated [OII] emission at the same redshift. 
This agrees with expectations: analytic models of evolving line emission show that H$\alpha$ should dominate at these flux levels at 1-2\micron\ over other lines. This is encouraging, because these bright lines are accessible in relatively modest exposures. Let us assume an 8-metre telescope, a $25\%$ efficient $R = 4000$ near-IR spectrograph and detector (with a dispersion of 5\AA\ per arcsec), and consider an object observed in an aperture of size 0.8 arcsec $\times$ 0.8 arcsec, covering $2\times 2$ pixels and containing half the light (the Yan et al.\ objects have compact half-light radii of 0.2--0.7 arcsec). The signal-to-noise ratio is determined by detector readout noise, sky background and dark current. We will assume a readout noise of 4 electrons (this is the stipulated requirement of the James Webb Space Telescope (JWST) detectors) and an inter-OH sky background of 1.2 photons s$^{-1}$ nm$^{-1}$ arcsec$^{-2}$ m$^{-2}$ in the H-band (the OH night sky line forest is well-resolved at $R=4000$) which is a typical value measured at Gemini observatory\footnote{See \url{http://www.gemini.edu/sciops/ObsProcess/obsConstraints/\\ ocSkyBackground.html}}. We will neglect dark current, which is equivalent to assuming that this dark current is much lower than the sky counts. Given these assumptions, our H$\alpha$ flux limit at $z = 1.6$ corresponds to a signal-to-noise ratio of 20 in a 600 second integration, and we find the observation is background limited for any readout noise $<$ 15 electrons. This exposure time is encouragingly short, and implies that using a 1 degree FOV spectrograph, one could survey $1000$ deg$^2$ in only 20 nights. Admittedly our instrument specification is optimistic, especially for readout noise; however, it can be relaxed considerably whilst still achieving exposure times $< 1$ hour. 
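This exposure-time estimate can be reproduced to order of magnitude with a few lines of bookkeeping. The sketch below uses the instrument numbers quoted above but makes its own simplifying assumptions (all of the aperture flux falling in a single $R = 4000$ resolution element, no slit losses), so it should be read as indicative only; it lands within a factor of $\sim 2$ of the signal-to-noise ratio of 20 quoted above, the difference sitting in the line-width and aperture bookkeeping.

```python
import math

# Order-of-magnitude S/N for the 8-m near-IR spectrograph case described
# above. Simplification (ours): the full aperture flux falls in a single
# R = 4000 resolution element, with no slit losses.
D = 8.0                      # telescope diameter [m]
eff = 0.25                   # total system efficiency
t = 600.0                    # exposure time [s]
flux = 0.5 * 1.1e-16         # erg cm^-2 s^-1 in the aperture (half the light)
lam = 6563e-8 * (1.0 + 1.6)  # observed H-alpha wavelength at z = 1.6 [cm]

E_ph = 6.626e-27 * 3.0e10 / lam           # photon energy [erg]
area_cm2 = math.pi * (D / 2.0) ** 2 * 1e4
signal = flux * area_cm2 * eff * t / E_ph             # source counts [e-]

dlam_nm = (lam * 1e7) / 4000.0            # one resolution element [nm]
sky_rate = 1.2                            # photons/s/nm/arcsec^2/m^2 (inter-OH)
aper_arcsec2 = 0.8 * 0.8
sky = sky_rate * dlam_nm * aper_arcsec2 * math.pi * (D / 2.0) ** 2 * eff * t

ron, npix = 4.0, 4                        # readout noise [e-], 2x2 pixels
snr = signal / math.sqrt(signal + sky + npix * ron ** 2)
print(f"signal = {signal:.0f} e-, sky = {sky:.0f} e-, S/N = {snr:.0f}")
```

Even in this crude accounting the sky term dominates the read-noise term when the read noise is raised to 15 electrons, consistent with the background-limited behaviour noted above.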
Assuming a fibre spectrograph we estimate that observing 3900 objects simultaneously at this resolution would only require 2--3 detector arrays of size 4096$\times$4096 to cover the $H$-band. Obviously more objects or broader wavelength coverage would require more detectors or time. Finally we note that, at least in principle, the exposure times are sufficiently short that one could imagine performing such a survey on a smaller-aperture (4-metre class) telescope. The potential problem with the approach outlined above is that it is not known `a priori' which of the galaxies identified in deep images will be H$\alpha$ bright, or the redshifts of these galaxies. Possibly these data could be successfully predicted from other information (e.g.\ broad-band colours or sub-mm/radio fluxes), but this may be unreliable. Targeting fainter H$\alpha$ fluxes resulting from realistic target selections would obviously require longer exposure times. Further work on this problem is required in order to determine the distinguishing properties of known H$\alpha$-bright galaxies. A second potential difficulty is that the night-sky OH emission lines render certain redshift ranges inaccessible, in a complex pattern. Since these redshift ranges are very narrow, the result is the removal of a series of redshift spikes at known locations in the radial window function. In order to assess the likely consequences, we manufactured a synthetic $n(z)$ possessing narrow gaps where no galaxies could be observed, i.e.\ when the redshifted H$\alpha$ emission line coincided with an OH line or landed in the water-absorption hole between the J and H bands. We assumed a spectrograph operating at a resolution $R = 4000$, which implied that $68\%$ of the $1.1 < z < 1.7$ redshift interval was accessible. With our simulation tools we then recovered $P(k)$ using this $n(z)$ (employing an FFT with sufficient gridding in the radial direction to resolve these narrow spikes). 
The principal effect of the window function is to damp the amplitude of the acoustic oscillations slightly in the radial direction, leaving the tangential modes largely unaffected. The fractional error ($\Delta k_A/k_A$) with which the acoustic scale is recovered is increased by no more than $25\%$, which is mostly due to the smaller effective survey volume owing to the absence of many thin redshift shells. We conclude that the OH lines are not a factor which will significantly hamper ground-based surveys of the acoustic peaks. Bright star-forming galaxies can alternatively be observed by targeting the [OII] 3727\AA\ emission line in optical wavebands. High-resistivity CCD detectors under development can maintain quantum efficiency out to 1\micron\ \citep{CCD} corresponding to $z=1.7$. The typical H$\alpha$:[OII] intensity is 2:1; high spectral resolution is again required to observe between the OH night sky lines, in which case the exposure times would be short if one could pre-select the [OII]-bright population. This could be more problematic than with H$\alpha$ as there is considerable extra scatter introduced into the line ratio [OII]$/$H$\alpha$ due to variations in metallicity and extinction \citep{JFF01}. We also note that [OII] is accessible in the non-thermal IR up to $z=5$ and therefore is a potential probe of high-redshift acoustic oscillations. The high star-formation rate at earlier cosmic epochs also implies a considerable boost in the rest-frame UV luminosity of galaxies. High-altitude sites such as Mauna Kea have an atmospheric cutoff further into the near-UV, which can be exploited by blue-optimized spectrographs. For example, \cite{Stei04} have obtained spectra of star-forming galaxies over $1.4<z<2.5$ using exposure times of only a few hours at the 10-metre Keck telescope, reaching a lowest observed-frame wavelength of 3200\AA; this corresponds to Ly$\alpha$ at $z = 1.6$, or CIV at $z = 1.1$. 
The spectral region between Ly$\alpha$ and CIV is rich in interstellar lines and ripe for determination of accurate redshifts. It is possible that a UV approach may be superior to a near-IR approach targeting H$\alpha$: the observed surface density of the Steidel et al.\ sample is sufficiently high for our requirements, but we note that a UV-optimized design would probably require a wide-field slit spectrograph because conventional fibres considerably attenuate the UV light for long runs ($>20$m). \subsection{New ground-based approaches (radio)} Next-generation radio interferometer arrays, such as the proposed Square Kilometre Array (SKA; {\tt http://www.skatelescope.org}; planned to commence operation in about 2015), will have sufficient sensitivity to detect the HI (21cm) transition of neutral hydrogen at cosmological distances that are almost entirely inaccessible to current radio instrumentation. This will provide a very powerful means of performing a large-scale redshift survey: once an HI emission galaxy has been located on the sky, the observed wavelength of the emission line automatically provides an accurate redshift. The key advantage offered by a radio telescope is that it may be designed with an instantaneous FOV exceeding $100$ deg$^2$ (at $1.4$ GHz), vastly surpassing the possibilities of optical spectrographs. Equipped with a bandwidth of several hundred MHz, such an instrument could map out the cosmic web (probed by neutral hydrogen) at an astonishing rate: the SKA, if designed with a large enough FOV, could locate $\sim 10^9$ HI galaxies to redshift $z \approx 1.5$ over the whole visible sky in a timescale of $\sim 1$ year \citep{AR04,BABR04}. Deeper pointings could probe the HI distribution to $z \sim 3$ over smaller solid angles. A caveat is that the HI mass function of galaxies has been determined locally \citep{Zwaan03} but is currently very poorly constrained at high redshift. 
However, for a range of reasonable models, the number densities required to render shot noise negligible may be attained for HI mass limits {\it larger} than the break in the mass function \citep{AR04}. Another requirement of interferometer design is that a significant fraction of the collecting area must reside in a core of diameter a few km, to deliver the necessary surface brightness sensitivity for extended 21cm sources. Sharp angular resolution is not a pre-requisite, assuming that the observed galaxies are not confused. Figure \ref{figw0w1ska} displays measurements of the dark energy model resulting from a $20{,}000$ deg$^2$ neutral hydrogen survey over the redshift range $0.5 < z < 1.5$, analyzing acoustic peaks in redshift slices of width $\Delta z = 0.2$. We note that a smaller 21cm survey could be performed in the nearer future by SKA prototypes such as the HYFAR proposal \citep{Bun03}, which may cover several thousand deg$^2$ over a narrower bandwidth (corresponding to $0.8 < z < 1.2$). \begin{figure}[htbp] \begin{center} \epsfig{file=fig15.ps,angle=-90,width=7cm} \end{center} \caption{Likelihoods ($68\%$) of dark energy parameters $(w_0,w_1)$ for a future 21cm radio survey, using the Square Kilometre Array to map HI emission galaxies over $20{,}000$ deg$^2$, covering a redshift range $0.5 < z < 1.5$. Such a survey is possible in a timescale of less than 1 year if the SKA is designed with a sufficiently large FOV ($\sim 100$ deg$^2$ at $1.4$ GHz) and bandwidth ($\sim 200$ MHz). We marginalize over the cosmological parameters $(\Omega_{\rm m}, \Omega_{\rm m} h^2)$ using a range of different Gaussian priors. Our assumed prior for $\Omega_{\rm m} h^2$ is representative of that obtained by the Planck satellite. 
We assume $\Omega_{\rm k} = 0$.} \label{figw0w1ska} \end{figure} \subsection{Space-based approaches} \label{sec:space} An interesting alternative approach is to use a space-based dispersive but slitless survey to pick out emission-line objects directly over a broad redshift range. In space, the 1--2$\mu$m background is $1000$ times lower (in a broad band) than that observed from the ground \citep[based upon the JWST mission background simulator at distance 3 au from the Sun;][]{Petro02}; in the background-limited regime this gain is equivalent to a $1000$-fold increase in collecting area. The dispersing element could be either a large objective prism or a grism. The NICMOS surveys mentioned above already demonstrate that this technique is possible, but they lack somewhat in FOV and spectral resolution compared to what is desirable. As an illustrative example, let us consider a 0.5-metre space telescope with a $60\%$ overall system efficiency working in slitless dispersed imaging mode, again targeting $0.5''$ galaxies at redshift $z = 1.6$. We will assume a JWST-like background and that the slitless spectra are delimited by a 2000\AA-wide blocking filter. Just considering the sky background, the signal-to-noise ratio in an 1800 second exposure is 5 for our canonical flux limit of $1.1 \times 10^{-16}$ ergs cm$^{-2}$ s$^{-1}$. This signal-to-noise ratio is independent of spectral resolution for unresolved lines much brighter than the continuum (the spectral resolution must of course be sufficient for determining accurate redshifts). Highly-dispersive IR materials such as silicon could enable an objective prism approach with $R = 500$, using a simple prime focus imaging system. 
Such a satellite, if equipped with a 1 deg FOV and sensitive over 1--2\micron, could perform a spectroscopic survey over the redshift interval $0.5 < z < 2$ (using H$\alpha$), covering an area of $10{,}000$ deg$^2$ in a 4 year mission, obtaining dark energy constraints very similar to those presented in Figure \ref{figw0w1gen}. Accessing higher redshifts would be possible by either extending the wavelength range or going fainter with the [OII] line, in both cases requiring a larger diameter mirror. Ambiguous line identification is a potential problem; this could be remedied by cross-matching with a photometric-redshift imaging survey. We refer the reader to \cite{BOP} for a more detailed discussion of such a dedicated `Baryonic Oscillation Probe'. \section{Dark energy measurements from realistic photometric redshift surveys} \label{secphoto} We next explore the potential of surveys based on {\it photometric redshifts} for detecting the acoustic oscillations and placing constraints upon any cosmic evolution of the dark energy equation-of-state $w(z)$. For a detailed treatment of the significance of `wiggles detection' and the accuracy of measurement of the standard ruler with photometric redshift surveys, we refer the reader to \cite{BB05}. Here we present a summary and discuss the consequences for dark energy measurement. A photometric redshift (`photo-z') is obtained from multi-colour photometry of a galaxy: the object is imaged in several broad-band filters, ranging from the UV to the near-IR, producing a rough spectral energy distribution (SED). This observed SED is then fitted by model galaxy SEDs as a function of redshift to construct a likelihood distribution with redshift; the peak of the likelihood function indicates the best-fitting redshift. A classic example of this approach is the analysis of the Hubble Deep Field North \citep{FLY99}. 
There are obviously many options concerning the number and width of filter bands, and their placement in the UV-NIR range. Generally at least five broad bands are used, and IR coverage is essential for constraining galaxies with $1.2 < z < 2.2$ \citep{BMP00}. Some approaches have used as many as 17 broad $+$ narrow-band filters \citep[e.g.][]{COMBO17}. Many different techniques have been proposed for deriving the photo-z \citep[e.g.][]{Cs00,leB02,CL04}. The important question for our study is: what is the accuracy of the photo-z estimates? These errors can be divided into two main types. First, there is the random statistical error due to noise in the flux estimates and to the coarseness of the SED. Typically this is specified by a parameter $\sigma_0$ where \begin{equation} \sigma_0 = {\sigma_z \over (1+z) } \approx \hbox{constant} \end{equation} and $\sigma_z$ is the standard deviation of the estimated redshift. We note that $\sigma_0$ is approximately constant because it is set by the effective spectral resolution $\lambda/\Delta\lambda$ of the set of filters. \cite{Chen03} obtained $\sigma_0 = 0.08$; the COMBO17 survey achieved $\sigma_0 = 0.03$. In a theoretical study, \cite{Bud01} demonstrated that an optimized filter set produced results within the range $\sigma_0 = 0.02 - 0.05$, depending on the shape of the SED. In general, redder galaxies deliver more accurate photo-z's because the model colours change faster with redshift. The second source of photo-z error is the possibility of getting the redshift grossly wrong, either because the set of colours permits more than one redshift solution, or because the model SEDs are not sufficiently representative of real galaxies. Different authors disagree about the magnitude of this effect, which depends on the specific filter sets, photometric accuracy, spectroscopic calibration and photo-z methods used. A useful theoretical discussion is given in \cite{BMP00}. 
\cite{BB05} analyze various `realistic' redshift error distributions including outliers and systematic offsets. For our purposes we will ignore systematic errors and parameterize photo-z performance using the value of $\sigma_0$ alone. A realistic survey will contain additional systematic redshift errors, thus our results represent lower limits on the attainable errors. We will also assume that $\sigma_0$ is a constant, whereas for a realistic survey it will depend somewhat on redshift and galaxy type. What is the effect of a statistical redshift error ($\sigma_0$) on the measured power spectrum? This is fairly easy to estimate analytically. As discussed in Paper I \cite[see also][Section~4.5]{SE03}, this photo-z error represents a radial smearing of galaxy positions. For example, $\sigma_0 = 0.03$ for a $z = 1$ galaxy corresponds to an error $\sigma_x \approx 100 \, h^{-1}$ Mpc in the radial co-moving coordinate. Since smoothing (i.e.\ convolution) by a Gaussian function in real space is equivalent to multiplication by a Gaussian function in Fourier space, we can model the photo-z effect as a multiplicative damping of the 3D power spectrum $P(k_x, k_y, k_z)$ by a term $\exp{(-k_x^2 \sigma_x^2)}$ (where we choose the $x$-axis as the radial direction). In the following, we implement a {\it flat-sky approximation} and presume that there is no tangential effect (i.e.\ parallel to the $(y,z)$-plane). Prior to the smearing effect of photometric redshifts, the available Fourier structure modes in the linear regime comprise a sphere in Fourier space of radius $k < k_{\rm linear} \approx 0.2 \, h$ Mpc$^{-1}$ (where the value of $k_{\rm linear}$ depends on redshift). Afterwards, the damping term $\exp{(-k_x^2 \sigma_x^2)}$ implies that only a thin slice of this sphere with $|k_x| \lesssim 2/\sigma_x \approx 0.02 \, h$ Mpc$^{-1}$ is able to contribute useful power spectrum signal. This is illustrated schematically in Figure~\ref{figphotexample}. 
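The damping can also be verified directly with a toy Monte Carlo of our own (independent of the survey simulations): positions drawn from a sinusoidally modulated density field are scattered radially by a Gaussian of width $\sigma_x$, and the recovered mode amplitude should fall by $\exp(-k_x^2 \sigma_x^2/2)$, i.e.\ the power by $\exp(-k_x^2 \sigma_x^2)$. The units and parameter values below are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy check: radial Gaussian smearing damps a single density mode's
# amplitude by exp(-kx^2 sigma_x^2 / 2). Units are arbitrary; here
# kx * sigma_x = 1, inside the "useful" region |kx| < 2/sigma_x.
kx, sigma_x, A = 0.02, 50.0, 0.5
L = 20 * 2 * np.pi / kx          # 20 full periods, so the mode averages cleanly

# Rejection-sample positions from p(x) proportional to 1 + A sin(kx * x)
x = rng.uniform(0.0, L, 2_000_000)
keep = rng.uniform(0.0, 1.0 + A, x.size) < 1.0 + A * np.sin(kx * x)
x = x[keep]

def amplitude(pos):
    return 2.0 * np.mean(np.sin(kx * pos))   # unbiased estimate of A

smeared = (x + rng.normal(0.0, sigma_x, x.size)) % L  # photo-z scatter

ratio = amplitude(smeared) / amplitude(x)
print(ratio, np.exp(-0.5 * (kx * sigma_x) ** 2))      # both ~0.61
```

Here $k_x \sigma_x = 1$ and the mode amplitude survives at the $\sim 60\%$ level; by $k_x \sigma_x = 2$ the power is suppressed by $e^{-4}$, which is why modes beyond $|k_x| \approx 2/\sigma_x$ contribute little signal.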
Considering those modes contributing to a Fourier bin centred about scale $k$, the reduction in usable Fourier space volume corresponds to a factor $(k \times \sigma_x/2)$. At a scale $k = 0.2 \, h$ Mpc$^{-1}$, this represents a loss by a factor of 10 ($\sigma_0 = 0.03$, $z = 1$). For the same surveyed area, we can hence expect the error ranges in the derived power spectrum to worsen by a factor $\approx \sqrt{10}$ in comparison to a spectroscopic survey (note also that the scaling of the error with $k$ changes from $\delta P_{\rm spec} \propto k^{-1}$ to $\delta P_{\rm photo} \propto k^{-1/2}$). \begin{figure*}[htbp] \begin{center} \epsfig{file=fig16-small.ps,angle=0,width=15cm} \end{center} \caption{Illustration of the loss of modes in three dimensional Fourier space $(k_x, k_y, k_z)$ by smearing along the $x$ (redshift) axis. Only modes with wavelengths longer than the smearing length are unsuppressed, and the $|k|<0.2$ sphere is truncated to a thin disk.} \label{figphotexample} \end{figure*} Critically, the radial damping due to photometric redshifts results in the loss of any ability to detect the acoustic oscillations in the {\it radial} component of the power spectrum (in the above example, the power spectrum is significantly suppressed for modes with $|k_x| > 2/\sigma_x \approx 0.02 \, h$ Mpc$^{-1}$, i.e.\ the whole regime containing the oscillations). We are only able to apply the standard ruler in the tangential direction. As noted earlier, the radial component provides a direct measure of $H(z)$, contributing significantly to the measurements of the dark energy model. Thus in the above example ($\sigma_0 = 0.03$, $z = 1$) the resulting errors in the dark energy parameters will worsen by a factor closer to $\approx \sqrt{20}$ once the loss of the radial information is also accounted for. 
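The numbers entering this argument follow directly from the background cosmology; a quick check, assuming flat $\Lambda$CDM with $\Omega_{\rm m} = 0.3$ (an illustrative choice of ours):

```python
import math

def sigma_x(sigma0, z, omega_m=0.3):
    """Radial smearing in h^-1 Mpc from a photo-z scatter sigma0*(1+z),
    for flat LambdaCDM: sigma_x = sigma_z * dx/dz = sigma0*(1+z)*(c/H(z))."""
    c_over_H0 = 2997.9  # Hubble distance in h^-1 Mpc
    Ez = math.sqrt(omega_m * (1.0 + z) ** 3 + 1.0 - omega_m)
    return sigma0 * (1.0 + z) * c_over_H0 / Ez

sx = sigma_x(0.03, 1.0)
loss = 0.2 * sx / 2.0   # volume-loss factor k*sigma_x/2 at k = 0.2 h/Mpc
print(f"sigma_x ~ {sx:.0f} h^-1 Mpc, mode-loss factor ~ {loss:.0f}")
```

This reproduces the $\sigma_x \approx 100 \, h^{-1}$ Mpc smearing and the factor-of-10 mode loss quoted above.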
Alternatively, one could compensate by increasing the survey area (and hence the density of states in Fourier space) by the corresponding factor. We note that with sufficient filter coverage and for special classes of galaxy, the photometric redshift error $\sigma_0$ may also be reduced to improve dark energy performance. We simulated photo-z surveys using analogous Monte Carlo techniques to those described in Section \ref{secmeth} (see \cite{BB05} for a more detailed account). We introduced a radial Gaussian smearing \begin{equation} \sigma_x = \sigma_0 \, (1 + z_{\rm eff}) \, x'(z_{\rm eff}) \end{equation} into our Poisson-sampled density fields (for a survey slice at redshift $z_{\rm eff}$). When measuring the power spectrum we restrict ourselves to modes with $|k_x| < 2/\sigma_x$ (where the factor of 2 was determined empirically to be roughly optimal). The residual damping in the shape of $P(k)$ is divided out using the known Gaussian damping expression. We bin the power spectrum modes in accordance with the total length of the Fourier vector $k = \sqrt{k_x^2 + k_y^2 + k_z^2}$, noting that only tangential modes (with $k_x \approx 0$) are being counted. We fit a 1D decaying sinusoid (Paper I, equation 3) to the result. The scatter of the fitted wavescales across the Monte Carlo realizations is interpreted as the accuracy of measurement of the quantity $x(z_{\rm eff})/s$ (given that only tangential modes are involved). We uniformly populated our survey volumes such that $n \times P = 3$. We consider two different photo-z accuracies: $\sigma_0 = 0.03$, representing the typical fidelity of the current best photo-z studies, and $\sigma_0 = 0.01$, which we somewhat arbitrarily adopt as an upper limit to future improvements. \cite{BB05} consider a much wider range of possibilities. We note that in our methodology, reducing the value of $\sigma_0$ by some factor is equivalent to covering a proportionately larger survey area. 
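The final fitting step can be sketched in a few lines. The decaying-sinusoid form follows Paper I, equation 3, while the amplitude, noise level and fiducial $k_A$ below are illustrative values of our own:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

# Decaying sinusoid (Paper I, eq. 3) fitted to a mock divided power
# spectrum; amplitude, noise level and k_A are illustrative.
def wiggles(k, A, kA):
    return 1.0 + A * k * np.exp(-(k / 0.1) ** 1.4) * np.sin(2.0 * np.pi * k / kA)

k = np.linspace(0.02, 0.3, 60)    # h Mpc^-1
kA_true = 0.060                   # ~ 2*pi / (sound horizon)
data = wiggles(k, 1.0, kA_true) + rng.normal(0.0, 0.01, k.size)

# Initial guess for k_A motivated by the CMB-calibrated sound horizon
popt, pcov = curve_fit(wiggles, k, data, p0=[0.8, 0.058])
print(f"k_A = {popt[1]:.4f} +/- {np.sqrt(pcov[1, 1]):.4f}")  # recovers ~0.060
```

The scatter of such fitted wavescales across many realizations is what we interpret as the measurement accuracy of $x(z_{\rm eff})/s$.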
Figure \ref{figpkphot} displays some Monte Carlo realizations of measured power spectra for these photometric redshift surveys, assuming a redshift range $0.5 < z < 1.5$. We assume a survey solid angle of $10{,}000$ deg$^2$, but also consider a smaller project ($2000$ deg$^2$ with $\sigma_0 = 0.01$). \begin{figure*}[htbp] \begin{center} \epsfig{file=fig17.ps,angle=-90,width=15cm} \end{center} \caption{Power spectrum realizations for three example photometric redshift surveys, varying survey area and photo-z accuracy (parameterized by $\sigma_0$). In all cases we assume a survey redshift interval $0.5 < z < 1.5$ (i.e.\ $z_{\rm eff} = 1$). Only radial Fourier modes with $|k_x| < 2/\sigma_x$ (where $\sigma_x = \sigma_0 \, (1 + z_{\rm eff}) \, x'(z_{\rm eff})$) are binned when measuring $P(k)$, which is plotted divided by a smooth reference spectrum. As in Figure \ref{figpkspec}, the dashed curve is the theoretical input $P(k)$ and the solid line is the best fit of our simple decaying sinusoidal function (Paper I, equation 3). The $x$-axis is marked in units of $k$ (in $h$ Mpc$^{-1}$) and represents the extent of the linear regime at redshift $z_{\rm eff}$. The rows of the Figure represent surveys with parameters ($2000$ deg$^2$, $\sigma_0 = 0.01$), ($10{,}000$ deg$^2$, $\sigma_0 = 0.03$) and ($10{,}000$ deg$^2$, $\sigma_0 = 0.01$); the columns display the first three Monte Carlo realizations in each case.} \label{figpkphot} \end{figure*} Figure \ref{figw0w1phot} illustrates the resulting measurement of the dark energy parameters $(w_0,w_1)$ for these survey configurations, assuming that we can span the redshift range $0.5 < z < 3.5$ (also see Tables \ref{tabxdx} and \ref{tabw0w1}). These $(w_0,w_1)$ contours are computed using the same method as Section \ref{secmethlik}, utilizing measurements of $x(z)/s$ in three redshift bins of width $\Delta z = 1$ (for real data, narrower redshift slices would be used and the results co-added). 
Each of these redshift constraints corresponds to a degenerate line in the $(w_0,w_1)$ plane (i.e.\ constrains one degree of freedom in the dark energy model) but, as described in Section \ref{secmethlik}, the direction of degeneracy slowly rotates with redshift: the combination of the likelihoods for each redshift bin results in closed contours. \begin{figure}[htbp] \begin{center} \epsfig{file=fig18.ps,angle=-90,width=7cm} \end{center} \caption{Dark energy measurements resulting from three different photometric redshift surveys covering the interval $0.5 < z < 3.5$. We assume perfect prior knowledge of $(\Omega_{\rm m},h,\Omega_{\rm k})$. Table \ref{tabw0w1} lists results for a range of different priors.} \label{figw0w1phot} \end{figure} Note that for photometric redshift surveys, the cosmological priors on $\Omega_{\rm m} h^2$ and $\Omega_{\rm m}$ required to achieve a given measurement precision of $(w_0,w_1)$ are much tighter than for spectroscopic surveys (see Table \ref{tabw0w1}). This is because in the absence of $H(z)$ information, the photo-$z$ survey must achieve a significantly tighter measure of $x(z)$ to recover a corresponding measurement accuracy of the dark energy parameters, rendering it more susceptible to uncertainties in $\Omega_{\rm m}$ and $h$. The real figure-of-merit for comparison of practical instruments is the accuracy with which the dark energy model can be measured {\it for a fixed total observing time} or {\it at a fixed cost}. The myriad details of comparing large imaging cameras with large spectroscopic systems are beyond the scope of this paper. 
However, we note that the proposed Large Synoptic Survey Telescope \citep[LSST; ][]{LSST} could image half of the entire sky to the required depth ($V \approx 25$) in multiple colours every 25 nights; such a survey would produce dark energy constraints comparable with a spectroscopic (e.g.\ KAOS) survey of 1000 deg$^2$ (the latter requiring 170 nights) {\em if $\sigma_0 = 0.01$ could be achieved}. We regard this level of photometric redshift accuracy as unlikely for ground-based surveys. We note that the KAOS measurements additionally constrain $H(z)$ leading to qualitatively more robust measurement of dark energy. Further, the KAOS survey could be improved by adding more area, whereas once the LSST has observed the whole celestial hemisphere, there is obviously no additional gain in $w(z)$ information from further passes. However, since `all-sky' deep multi-colour surveys are being performed for other scientific reasons (e.g.\ cosmic shear analysis), it is of considerable value to utilize these data for baryonic oscillation studies. Furthermore, as discussed in detail by \cite{BB05}, a photometric redshift survey of several thousand square degrees may provide constraints competitive with spectroscopic surveys in the short term. \section{Conclusions} This study has extended the methodology of Blake \& Glazebrook (2003) to simulate measurements of the cosmic {\it evolution} of the equation-of-state of dark energy $w(z)$ from the baryonic oscillations, using the simple parameterization $w(z) = w_0 + w_1 z$. The methodology used is very similar to that of Paper I, treating the primordial baryonic oscillations in the galaxy power spectrum as a standard cosmological ruler, whilst dividing out the overall shape of the power spectrum in order to maximize model-independence. 
In this study we make the improvement of fitting independent radial and tangential wavescales, showing that this is directly equivalent to measuring $D_A(z)$ and $H(z)$ in a series of redshift slices in units of the sound horizon. This results in improved constraints upon $(w_0,w_1)$. We have tested the approximations encoded in our approach and found them all to be satisfactory, increasing our confidence in the inferred error distributions for the dark energy parameters. The simulated accuracies for $(w_0,w_1)$ are roughly consistent with other estimates in the literature based on very different analysis methods. Our baseline `KAOS-like' optical surveys of $\sim 1000$ deg$^2$, which can be realized by the next generation of spectroscopic instruments at ground-based observatories, deliver measurements of the dark energy parameters with precision $\Delta w_0 \approx 0.15-0.2$ and $\Delta w_1 \approx 0.3-0.4$. In statistical terms, these constraints are poorer than those which may be provided by a future space-based supernova project such as the SNAP proposal. We note, however, that the baryonic oscillations method appears to be substantially free of systematic error, with the principal limitation being the amount of cosmic volume mapped. In addition, any measurement of deviations from a cosmological constant model is of sufficient importance for physics that {\it entirely independent} experiments would be demanded to confirm the new model. A next-generation radio telescope with a FOV $\approx 100$ deg$^2$ at $1.4$ GHz, performing a redshift survey of 21cm emission galaxies over several $1000$ deg$^2$, may be available on a similar timescale. We have considered the observational possibilities of more extensive baryonic oscillation experiments covering a significant fraction of the whole sky. Such a survey (encompassing $0.5 < z < 3.5$) may be straightforward using a dedicated several-year space mission with slitless spectroscopy. 
In radio wavebands, the Square Kilometre Array would be able to survey the entire visible sky out to $z \approx 1.5$ in 6 months, if equipped with a sufficiently large FOV. These experiments would deliver extremely precise measurements of the dark energy model with accuracy $\Delta w_0 \approx 0.03-0.05$ and $\Delta w_1 \approx 0.06-0.1$ and would be invaluable to pursue if a significant non-vacuum dark energy signal was detected by smaller surveys. We have also explored in detail the potential of photometric redshift optical imaging surveys for performing baryonic oscillations experiments. The loss of the radial oscillatory signal, due to the damping caused by the redshift errors, implies that we can no longer recover information about the Hubble constant at high redshift. However, the baryonic oscillations can still be measured using tangential Fourier modes. A deep $2000$ deg$^2$ imaging survey with excellent photometric redshift precision ($\sigma_0 < 0.03$) would allow the oscillations to be detected (2.5$\sigma$ significance). Useful constraints upon the dark energy model are possible if $\sim 20{,}000$ deg$^2$ can be surveyed (such an experiment with $\sigma_0 = 0.03$ is roughly equivalent to a $\sim 1000$ deg$^2$ spectroscopic survey). We conclude that the baryonic oscillations in the clustering power spectrum represent one of the rare accurate probes of the cosmological model, possessing the potential to delineate cleanly any cosmic evolution in the equation-of-state of dark energy, via accurate measurements of $D_A(z)$ and $H(z)$ in a series of redshift slices. Importantly, such an experiment is likely to be substantially free of systematic error. Recent observations of SDSS Luminous Red Galaxies at $z = 0.35$ have provided the first convincing detection of the acoustic signature and validation of the technique. The challenge now is to create the large-scale surveys at higher redshifts required for mapping the properties of the mysterious dark energy. 
\acknowledgments KG and CB acknowledge generous funding from the David and Lucile Packard Foundation and the Center for Astrophysical Sciences, Johns Hopkins University. CB warmly thanks his colleagues at the University of New South Wales where most of this work took place, especially Warrick Couch, and acknowledges funding from the Australian Research Council. CB also thanks Sarah Bridle, Filipe Abdalla and Steve Rawlings for many valuable conversations. We are grateful for useful discussions with Dan Eisenstein and Eric Linder. CB acknowledges current funding from the Izaak Walton Killam Memorial Fund for Advanced Studies and the Canadian Institute for Theoretical Astrophysics.
\section{Introduction} \label{sec:intro} \footnotetext[1]{Department of Electrical and Electronic Engineering, Imperial College London, SW7 2BT, United Kingdom} We consider a specific setting of imitation learning - the task of policy learning from expert demonstrations - in which the learner only has a finite number of expert trajectories without any further access to the expert. Two broad categories of approaches to this setting are behavioral cloning (BC) \cite{pomerleau1991efficient}, which directly learns a policy mapping from states to actions with supervised learning from expert trajectories; and inverse reinforcement learning (IRL) \cite{ng2000algorithms, abbeel2004apprenticeship}, which learns a policy via reinforcement learning, using a cost function extracted from expert trajectories. Most notably, BC has been successfully applied to the task of autonomous driving \cite{bojarski2016end, bansal2018chauffeurnet}. Despite its simplicity, BC typically requires a large amount of training data to learn good policies, as it may suffer from compounding errors caused by covariate shift \cite{ross2010efficient, ross2011reduction}. BC is often used as a policy initialization step for further reinforcement learning \cite{nagabandi2018neural, rajeswaran2017learning}. IRL estimates a cost function from expert trajectories and uses reinforcement learning to derive policies. As the cost function evaluates the quality of trajectories rather than that of individual actions, IRL avoids the problem of compounding errors. IRL is effective with a wide range of problems, from continuous control benchmarks in the Mujoco environment \cite{ho2016generative}, to robot footstep planning \cite{ziebart2008maximum}.
Generative Adversarial Imitation Learning (GAIL) \cite{ho2016generative, baram2017end} connects IRL to the general framework of Generative Adversarial Networks (GANs) \cite{goodfellow2014generative}, and re-frames imitation learning as distribution matching between the expert policy and the learned policy via the Jensen-Shannon divergence. However, GAIL naturally inherits the challenges of GAN training, including possible training instability and overfitting to training data \cite{arjovsky2017towards, brock2018large}. Similarly, generative moment matching imitation learning (GMMIL) considers distribution matching via maximum mean discrepancy. Both GAIL and GMMIL iteratively update the reward function and the learned policy during training. In this paper, we propose imitation learning via expert policy support estimation, a new general framework for recovering a reward function from expert trajectories. We propose Random Expert Distillation (RED) by providing a connection between Random Network Distillation (RND) \cite{burda2018exploration} -- a method to design intrinsic rewards for RL exploration based on the ``novelty'' of states visited -- and support estimation ideas. Our method computes a fixed reward function from expert trajectories, and obviates the need for the dynamic reward-function updates found in classical IRL, GAIL and GMMIL. Evaluating RED using different reinforcement learning algorithms on both discrete and continuous domains, we show that the proposed method achieves performance comparable to or better than state-of-the-art methods on a variety of tasks, including a driving scenario (see Figure~\ref{fig:setup}). To the best of our knowledge, our method is the first to explore expert policy support estimation for imitation learning. \section{Background} We review the formulation of the Markov Decision Process (MDP) in the context of which we consider imitation learning.
We also review previous approaches to imitation learning and support estimation related to our proposed method. ~\newline\noindent{\bf Setting.} We consider an infinite-horizon discounted MDP, defined by the tuple $(S, A, P, r, p_0, \gamma)$, where $S$ is the set of states, $A$ the set of actions, $P : S \times A \times S \rightarrow [0, 1]$ the transition probability, $r : S \times A \rightarrow \mathbb{R} $ the reward function, $p_0 : S \rightarrow [0, 1]$ the distribution over initial states, and $\gamma \in (0, 1)$ the discount factor. Let $\pi$ be a stochastic policy $\pi : S \times A \rightarrow [0, 1]$ with expected discounted reward $\mathbb{E}_{\pi}(r(s, a)) \triangleq \mathbb{E}(\sum_{t=0}^{\infty} \gamma^tr(s_t, a_t))$ where $s_0 \sim p_0$, $a_t \sim \pi(\cdot|s_t)$, and $s_{t+1} \sim P(\cdot |s_t, a_t)$ for $t \geq 0$. We denote $\pi_E$ the expert policy. \subsection{Imitation Learning} We briefly review the main methods for imitation learning: ~\newline\noindent{\em Behavioral Cloning (BC)} learns a control policy $\pi : S \rightarrow A$ directly from expert trajectories via supervised learning. Despite its simplicity, BC is prone to compounding errors: small mistakes in the policy cause the agent to deviate from the state distribution seen during training, making future mistakes more likely. Mistakes accumulate, leading to eventual catastrophic errors \cite{ross2011reduction}. While several strategies have been proposed to address this \cite{ross2010efficient, sun2017deeply}, they often require access to the expert policy during the entire training process, rather than a finite set of expert trajectories. BC is commonly used to initialize control policies for reinforcement learning \cite{rajeswaran2017learning, nagabandi2018neural}. 
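The discounted objective in the setting above is easy to make concrete; as a minimal illustration (the helper name is ours, not the paper's), the return $\sum_{t=0}^{T-1} \gamma^t r_t$ of one sampled trajectory can be evaluated by backward accumulation:

```python
def discounted_return(rewards, gamma=0.99):
    """Backward accumulation of sum_t gamma^t * r_t over one trajectory."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g
```

For example, three unit rewards with $\gamma = 0.5$ give $1 + 0.5 + 0.25 = 1.75$.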
~\newline\noindent{\em Inverse Reinforcement Learning (IRL)} models the expert trajectories with a Boltzmann distribution \cite{ziebart2008maximum}, where the likelihood of a trajectory is defined as: \begin{equation} p_{\theta}(\tau) = \frac{1}{Z}\exp(-c_{\theta}(\tau)) \end{equation} where $\tau=\{s_1, a_1, s_2, a_2 ...\}$ is a trajectory, $c_\theta(\tau)=\sum_t c_\theta(s_t, a_t)$ a learned cost function parametrized by $\theta$, and the partition function $Z\triangleq \int \exp(-c_\theta(\tau))d(\tau)$ the integral over all trajectories consistent with the transition function of the MDP. The main computational challenge of IRL is the efficient estimation of the partition function $Z$. IRL algorithms typically optimize the cost function by alternating between updating the cost function and learning an optimal policy with respect to the current cost function with reinforcement learning \cite{abbeel2004apprenticeship, ng2000algorithms}. ~\newline\noindent{\em Imitation Learning via Distribution Matching} frames imitation learning as distribution matching between the distribution of state-action pairs of the expert policy and that of the learned policy. In \cite{ho2016generative, finn2016connection}, the authors connect IRL to distribution matching via Generative Adversarial Networks (GANs) \cite{goodfellow2014generative}. Known as Generative Adversarial Imitation Learning (GAIL), the method imitates an expert policy by formulating the following minimax game: \begin{equation} \min_{\pi} \max_{D \in (0, 1)}~ \mathbb{E}_{\pi} (\log D(s, a)) + \mathbb{E}_{\pi_E}(\log (1-D(s, a))) - \lambda H(\pi) \end{equation} where the expectations $\mathbb{E}_{\pi}$ and $\mathbb{E}_{\pi_E}$ denote the joint distributions over state-action pairs of the learned policy and the expert policy, respectively, and $H(\pi) \triangleq \mathbb{E}_{\pi}(-\log \pi(a|s))$ is the entropy of the learned policy.
GAIL has been successfully applied to various control tasks in the Mujoco environment \cite{ho2016generative, baram2017end}. However, GAIL inherits the challenges of GANs, including possible training instability such as vanishing or exploding gradients, as well as overfitting to training data \cite{arjovsky2017towards, brock2018large}. While numerous theoretical and practical techniques have been proposed to improve GANs (e.g.\ \cite{arjovsky2017wasserstein, salimans2016improved}), a large-scale study of GANs shows that many GAN algorithms and architectures achieve similar performance with sufficient hyperparameter tuning and random restarts, and no algorithm or network architecture stands out as the clear winner on all tasks \cite{lucic2018gans}. Similar to GAIL, \cite{kim2018imitation} proposed generative moment matching imitation learning (GMMIL) by minimizing the maximum mean discrepancy between the expert policy and the learned policy. Though GMMIL avoids the difficult minimax game, the cost of each reward function evaluation grows linearly with the amount of training data, which makes scaling the method to large datasets potentially difficult. In addition, we demonstrate in our experiments that GMMIL may fail to estimate the appropriate reward functions. \subsection{Support Estimation with Kernel Methods}\label{sec:support-estimation-kernel} As we will motivate in detail in Sec.~\ref{sec:algorithm}, we argue that estimating the support of the expert policy can lead to good reward functions for imitation learning. In this section we review one of the most well-established approaches to support estimation of a distribution from a finite number of i.i.d. samples, which relies on a kernelized version of principal component analysis \cite{scholkopf1998nonlinear}.
The idea is to leverage the {\em Separating Property} \cite{de2014universally} of suitable reproducing kernel Hilbert spaces, which guarantees the covariance operator of the embedded data to precisely capture the geometrical properties of the support of the underlying distribution. This allows us to derive a test function that is zero exclusively on points belonging to the support of the distribution. Formally, let ${\mathcal X}\subseteq{\mathbb{R}}^d$ be a set (in our setting ${\mathcal X} = S \times A$ is the joint state-action space) and let $k:{\mathcal X}\times{\mathcal X}\to{\mathbb{R}}$ be a positive definite kernel with associated reproducing kernel Hilbert space (RKHS) ${\mathcal H}$ \cite{aronszajn1950theory} and feature map $\phi:{\mathcal X}\to{\mathcal H}$, such that $k(x,x') = \scal{\phi(x)}{\phi(x')}_{\mathcal H}$ for any $x,x'\in{\mathcal X}$. For any set $U\subseteq{\mathcal X}$, denote by $\Phi(U) = \{\phi(x)~|~x\in U\}$ and $\overline{\Phi(U)}$ the closure of its span in ${\mathcal H}$. The separating property guarantees that for any closed subset $U$ of ${\mathcal X}$, $\Phi(U) = \Phi({\mathcal X}) \cap \overline{\Phi(U)}$. In other words, the separating property ensures that \begin{equation} \label{eq:sep_cond} x\in U ~~\Longleftrightarrow~~ \phi(x)\in\overline{\Phi(U)}. \end{equation} As shown in \cite{de2014universally}, several kernels allow the separating property to hold, including the popular Gaussian kernel $k(x,x') = \exp(-\nor{x-x'}/\sigma)$ for any $\sigma>0$. The separating property suggests a natural strategy to test whether a point $x\in{\mathcal X}$ belongs to a closed set $U$. Let $P_U:{\mathcal H}\to{\mathcal H}$ be the orthogonal projection operator onto $\overline{\Phi(U)}$ (which is a linear operator since $\overline{\Phi(U)}$ is a linear space), we have \eqals{ x\in U ~~\Longleftrightarrow~~ \nor{(I - P_U)\phi(x)}_{\mathcal H} = 0, } since Eq. 
(\ref{eq:sep_cond}) corresponds to $\phi(x) = P_U \phi(x)$, or equivalently $(I-P_U)\phi(x) = 0$. We can leverage the machinery introduced above to estimate the support ${\textrm{supp}}(\pi)\subseteq{\mathcal X}$ of a probability distribution $\pi$ on ${\mathcal X}$. We observe that the covariance operator $C_\pi = \mathbb{E}_\pi[\phi(x)\phi(x)^\top]$ encodes sufficient information to recover the orthogonal projector $P_{\textrm{supp}(\pi)}$ (denoted $P_\pi$ in the following for simplicity). More precisely, let $C_\pi^\dagger$ denote the pseudoinverse of $C_\pi$. A direct consequence of the separating property is that $P_\pi = C_\pi^\dagger C_\pi$ \cite{de2014universally}. When $\pi$ is unknown and observed only through a finite number $N$ of i.i.d.\ examples $\{x_i\}_{i=1}^N$, it is impossible to obtain $C_\pi$ (and thus $P_{\pi}$) exactly. A natural strategy is to consider the {\em empirical covariance} operator $\hat C = \frac{1}{N}\sum_{i=1}^N \phi(x_i)\phi(x_i)^\top$ as a proxy of the ideal one. Then, the projector is estimated as $\hat P = \hat C_m^\dagger \hat C_m$, where $\hat C_m$ is the operator comprising the $m\leq N$ principal component directions of $\hat C$; here $m$ is a hyperparameter to control the stability of the pseudoinverse when computing $\hat P$. We can then test whether a point $x\in {\mathcal X}$ belongs to ${\textrm{supp}}(\pi)$, by determining if \begin{equation}\label{eq:supp_score} \nor{(I-\hat P)\phi(x)}_{\mathcal H}^2 = \scal{\phi(x)}{(I-\hat P)\phi(x)}_{\mathcal H} = k(x,x) - \scal{\phi(x)}{\hat P\phi(x)}_{\mathcal H}, \end{equation} is greater than zero (in practice a threshold $\tau>0$ is introduced). Note that we have used the fact that $\hat P^\top\hat P = \hat P$, since $\hat P$ is an orthogonal projector.
Although $\hat C$ and $\hat P$ are operators between possibly infinite dimensional spaces, we have for any $x\in{\mathcal X}$ \citep[see][]{scholkopf1998nonlinear} \eqals{ \scal{\phi(x)}{\hat P \phi(x)}_{\mathcal H} = K_x^\top K_m^\dagger K_x, } where $K\in{\mathbb{R}}^{N\times N}$ is the empirical kernel matrix of the training examples, with entries $K_{ij} = k(x_i,x_j)$. Here, $K_m$ denotes the matrix obtained by performing PCA on $K$ and taking the first $m\leq N$ principal directions and $K_x\in{\mathbb{R}}^N$ is the vector with entries $(K_x)_i=k(x_i,x)$. Therefore $\langle{\phi(x)},{\hat P \phi(x)}\rangle_{\mathcal H}$ can be computed efficiently in practice. The theoretical properties of the strategy described above have been thoroughly investigated in the previous literature \cite{de2014universally,rudi2017regularized}. In particular, it has been shown that the Hausdorff distance between $\textrm{supp}(\pi)$ and the set induced by $\hat P$, tends to zero when $N\to+\infty$, provided that the number of principal components $m$ grows with the number of training examples. Moreover, it has been shown that the support estimator enjoys fast learning rates under standard regularity assumptions. \section{Imitation Learning via Expert Policy Support Estimation}\label{sec:algorithm} Formally, we are interested in extracting a reward function $\hat r(s, a)$ from a finite set of trajectories $D=\{\tau_i\}_{i=1}^N$ produced by an expert policy $\pi_E$ within an MDP environment. Here each $\tau_i$ is a trajectory of state-action pairs of the form $\tau_i=\{s_1, a_1, s_2, a_2, ..., s_T, a_T\}$. Assuming that the expert trajectories are consistent with some unknown reward function $r^*(s, a)$, our goal is for a policy, learned by applying RL to $\hat r(s, a)$, to achieve good performance when evaluated against $r^*(s, a)$ (the standard evaluation strategy for imitation learning).
In contrast with traditional imitation learning, we do not explicitly aim to match $\pi_E$ exactly. We motivate our approach with a thought experiment. Given a discrete action space and a deterministic expert such that $a_s^*=\pi_E(s)$, the expert's action distribution at each state $s\in S$ is a Dirac delta centered on $a_s^*$. Consider the reward function \begin{equation} r_E(s, a)=\begin{cases} 1 \text{ if } (s, a) \in \textrm{supp}(\pi_E)\\ 0 \text{ if } (s, a) \notin \textrm{supp}(\pi_E)\\ \end{cases} \end{equation} which corresponds to the indicator function of $\textrm{supp}(\pi_E)$. It follows that an RL agent trained with this reward function would be able to imitate the expert exactly, since the discrete action space allows the random exploration of the agent to discover the optimal action at each state. In practice, the expert policy is unknown and only a finite number of trajectories sampled according to $\pi_E$ are available. In this context we can use support estimation techniques to construct a reward function $\hat r$. For continuous action domains, a sparse reward such as $r_E$ is unlikely to work, since it is improbable for the agent, via random exploration, to discover the optimal actions, while all other actions are considered equally bad. Instead, since support estimation produces a score with Eq. (\ref{eq:supp_score}) for testing each input, we may directly use that score to construct the reward function $\hat r$. The rationale is that actions similar to those of the expert in any given state would still receive high scores from support estimation, allowing the RL agent to discover those actions during exploration. Based on the motivation above, we hypothesize that support estimation of the expert policy's state-action distribution provides a viable reward function for imitation learning. Intuitively, the reward function encourages the RL agent to behave similarly to the expert at each state and remain on the estimated support of the expert policy.
Further, this allows us to compute a fixed reward function based on the expert trajectories and re-frame imitation learning in the standard context of reinforcement learning. We note that for stochastic experts, the support of $\pi_E$ might coincide with the whole space $S\times A$. Support estimation would hence produce an uninformative, flat reward function $r_E$ when given an infinite amount of training data. However, we argue that an infinite amount of training data should allow BC to successfully imitate the expert, bypassing the intermediate step of extracting a reward function from the expert trajectories. \subsection{Practical Support Estimation}\label{sec:support-estimation-rnd-intuition} Following the strategy introduced in Sec.~\ref{sec:support-estimation-kernel}, we consider a novel approach to support estimation. Our method takes inspiration from kernel methods but applies to other models such as neural networks. Let ${\mathcal H}$ be a RKHS and $f\in{\mathcal H}$ a function parametrized by $\theta\in\Theta$. Consider the regression problem that admits a minimizer on ${\mathcal H}$ \eqals{ \inf_{f\in{\mathcal H}} \int (f_\theta(x) - f(x))^2 ~d\pi(x), } The minimal $\nor{\cdot}_{\mathcal H}$ solution corresponds to $f_{\pi,\theta} = C_\pi^\dagger C_\pi f_\theta = P_\pi f_\theta$, the orthogonal projection of $f_\theta$ onto the range of $C_\pi$ \citep[see e.g][]{engl1996regularization}. For any $x\in{\mathcal X}$ we have \eqals{ f_\theta(x) - f_{\pi,\theta}(x)= \scal{f_{\theta}}{(I-P_\pi)\phi(x)}_{\mathcal H}. } In particular, if $x\in\textrm{supp}(\pi)$, we have $(I-P_\pi)\phi(x) = 0$ and hence $f_\theta(x) -f_{\pi,\theta}(x) = 0$. The converse is unfortunately not necessarily true: for a point $x\not\in \textrm{supp}(\pi)$, $(I-P_\pi)\phi(x)$ may still be orthogonal to $f_\theta$ and thus $f_\theta(x) - f_{\pi,\theta}(x) = 0$. 
A strategy to circumvent this issue is to consider multiple $f_\theta$, and then average their squared errors $(f_\theta - f_{\pi,\theta})^2$. The rationale is that if $x\not\in \textrm{supp}(\pi)$, by spanning different directions in ${\mathcal H}$, at least one of the $f_\theta$ will not be orthogonal to $(I-P_\pi)\phi(x)$, which leads to a non-zero value when compared against $f_{\pi,\theta}(x)$. In particular, if we sample $\theta$ according to a probability distribution over $\Theta$, we have \eqals{ \mathbb{E}_\theta (f_\theta - f_{\pi,\theta})^2 & = \mathbb{E}_\theta \scal{(I-P_\pi)\phi(x)}{f_\theta f_\theta^\top (I-P_\pi)\phi(x)}_{\mathcal H} \\ & = \scal{(I-P_\pi)\phi(x)}{\Sigma (I-P_\pi)\phi(x)}_{\mathcal H}, } where $\Sigma = \mathbb{E}_\theta [f_\theta f_\theta^\top]$ is the covariance of the random functions $f_\theta$ generated from sampling the parameters $\theta$. Ideally, we would like to sample $\theta$ such that $\Sigma = I$, which allows us to recover the support estimator $\nor{(I-P_\pi)\phi(x)}_{\mathcal H}$ introduced in Sec.~\ref{sec:support-estimation-kernel}. However, it is not always clear how to sample $f_\theta$ so that its covariance corresponds to the identity in practice. In this sense, this approach does not offer the same theoretical guarantees as the kernelized method reviewed in Sec.~\ref{sec:support-estimation-kernel}. The discussion above provides us with a general strategy for support estimation. Consider a hypothesis space of functions $f_\theta$ parametrized by $\theta\in\Theta$ (e.g.\ neural networks). We sample $K$ parameters $\theta_1,\dots,\theta_K$ according to a distribution over $\Theta$. Given a training dataset $\{x_i\}_{i=1}^N$ sampled from $\pi$, we solve the $K$ regression problems \begin{equation} \label{eq:prac_supp} \hat\theta_k = \arg\min_{\theta\in\Theta} \frac{1}{N} \sum_{i=1}^N (f_\theta(x_i) - f_{\theta_k}(x_i))^2, \end{equation} for every $k=1,\dots,K$.
Then for any $x\in{\mathcal X}$, we can test whether it belongs to the support of $\pi$ by checking whether \begin{equation} \label{eq:supp_test} \frac{1}{K}\sum_{k=1}^K (f_{\hat\theta_k}(x) - f_{\theta_k}(x))^2, \end{equation} is larger than a prescribed threshold. As $K$ and $N$ grow larger, the estimate above approximates increasingly well the desired quantity $\mathbb{E}_\theta (f_{\pi,\theta}(x)-f_\theta(x))^2$. \subsection{Reward Function with Random Network Distillation} We may interpret the recent method of Random Network Distillation (RND) \cite{burda2018exploration} as approximate support estimation. Formally, RND sets up a randomly generated prediction problem involving two neural networks: a fixed and randomly initialized target network $f_{\theta}$, which sets the prediction problem, and a predictor network $f_{\hat{\theta}}$ trained on the state-action pairs of the expert trajectories. $f_{\theta}$ maps an observation to an embedding, $f_{\theta} : S\times A \rightarrow \mathbb{R}^K$, and similarly for $f_{\hat{\theta}}: S\times A \rightarrow \mathbb{R}^K$, which is analogous to the $K$ prediction problems defined in Eq. (\ref{eq:prac_supp}). The predictor network is trained by gradient descent to minimize the mean squared error $L(s, a) = ||f_{\hat{\theta}}(s, a) - f_{\theta}(s, a)||_2^2$ with respect to its parameters $\hat{\theta}$. After training, the score of a state-action pair belonging to the support of $\pi_E$ is predicted by $L(s, a)$, analogous to Eq. (\ref{eq:supp_test}). One concern with RND is that a sufficiently powerful optimization algorithm may recover $f_{\theta}$ perfectly, causing $L(s, a)$ to be zero everywhere. Our experiments confirm the observations in \cite{burda2018exploration} that standard gradient-based methods do not behave in this undesirable way.
With the prediction error $L(s, a)$ as the estimated score of $(s, a)$ belonging to the support of $\pi_E$, we propose a reward function that has worked well in practice, \begin{equation} \label{eq:supp_rew} r(s, a) = \exp(-\sigma_1 L(s, a)) \end{equation} where $\sigma_1$ is a hyperparameter. As $L(s, a)$ is positive, $r(s, a) \in [0, 1]$. We choose $\sigma_1$ such that the rewards $r(s, a)$ of expert demonstrations are mostly close to 1. \subsection{Terminal Reward Heuristic} For certain tasks, there are highly undesirable terminal states (e.g.\ car crashes in autonomous driving). In such contexts, we further introduce a terminal reward heuristic to penalize the RL agent for ending an episode far from the estimated support of the expert policy. Specifically, let $\bar{r} = \frac{1}{N}\sum_{i=1}^N r(s_i, a_i)$ be the average reward computed over the expert trajectories, and let $r(s_T, a_T)$ be the reward of the final state of an episode; we define \begin{equation} \label{eq:term_rew} r_{term}=\left\{\begin{array}{cl} -\sigma_2 \bar{r} & \text{ if } \sigma_3\bar{r} > r(s_T, a_T) \\ 0 & \text{ otherwise }\\ \end{array} \right. \end{equation} where $\sigma_2, \sigma_3$ are hyperparameters. We apply the heuristic to an autonomous driving task in the experiments, and demonstrate that it corresponds nicely with dangerous driving situations and encourages RL agents to avoid them. We note that the heuristic is not needed for any of the other tasks considered in the experiments. \begin{algorithm}[t] \caption{{\sc Random Expert Distillation}} \label{alg:red} \begin{algorithmic} \STATE {\bfseries Input:} data $D=\{(s_i, a_i)\}_{i=1}^N$, function class $\Theta$, initial policy $\pi_0$. \STATE~ \STATE Randomly sample $\theta\in\Theta$ \STATE $\hat\theta=${\sc Minimize}$(f_{\hat{\theta}},f_{\theta},D)$ \STATE $r(\cdot) = \exp(-\sigma_1\|f_{\hat{\theta}}(\cdot)-f_{\theta}(\cdot)\|_2^2)$ \STATE Compute $r_{\textrm{term}}$ from Eq.
(\ref{eq:term_rew}) \STATE $\pi=${\sc ReinforcementLearning}$(r,r_{\textrm{term}},\pi_0)$. \STATE~ \STATE {\bfseries Return:} $\pi$. \end{algorithmic} \end{algorithm} \subsection{Random Expert Distillation} We present our algorithm, Random Expert Distillation (RED), in Algorithm \ref{alg:red}. The algorithm computes the estimated support of the expert policy and generates a fixed reward function according to Eq. (\ref{eq:supp_rew}). The reward function is then used in RL for imitation learning. As we report in the experiments, a random initial policy $\pi_0$ is sufficient for the majority of tasks considered, while the remaining tasks require initialization via BC. We further discuss this limitation of our method in the results. \subsection{Alternative to RND} AutoEncoder (AE) networks \cite{vincent2008extracting} set up a prediction problem of $\min_\theta ||f_{\theta}(x)-x||_2^2$. AE behaves similarly to RND in the sense that it also yields low prediction errors for data similar to the training set. AE could also be seen as approximate support estimation in the context of Eq. (\ref{eq:supp_test}), by replacing the randomly generated $f_{\theta_k}$ with identity functions on each dimension of the input. Specifically, AE sets up $K$ prediction problems \begin{equation*} \min_{\theta} \frac{1}{N} \sum_{i=1}^N (f_\theta(x_i) - I_k(x_i))^2 \end{equation*} for $k=1,\dots,K$, where $I_k(x)=x_k$ and $K$ is the input size. However, as discussed in \cite{burda2018exploration}, AE with a bottleneck layer may suffer from input stochasticity or model misspecification, which are two sources of prediction error undesirable for estimating the similarity of input to the training data. Instead, we have found empirically that overparametrized AEs with an $\ell_2$ regularization term prevent the trivial solution of an identity function, allow easier fitting of the training data, and may be used in place of RND.
We include AE in our experiments to demonstrate that support estimation in general appears to be a viable strategy for imitation learning. \begin{figure*}[t] \centering \begin{subfigure}[t]{.24\textwidth} \includegraphics[width=\textwidth]{./images/toy_5.pdf} \caption{n = 5} \label{fig:toy_5} \end{subfigure} \begin{subfigure}[t]{.24\textwidth} \includegraphics[width=\textwidth]{./images/toy_10.pdf} \caption{n = 10} \label{fig:toy_10} \end{subfigure} \begin{subfigure}[t]{.24\textwidth} \includegraphics[width=\textwidth]{./images/toy_50.pdf} \caption{n = 50} \label{fig:toy_50} \end{subfigure} \begin{subfigure}[t]{.24\textwidth} \includegraphics[width=\textwidth]{./images/toy_100.pdf} \caption{n = 100} \label{fig:toy_100} \end{subfigure} \caption{True mean episodic reward during training on the simple domain, with different expert dataset sizes.} \label{fig:toy_comp} \end{figure*} \section{Experiments} We evaluate the proposed method on multiple domains. We first present a toy problem to motivate the use of expert policy support estimation for imitation learning, and to highlight the behaviors of different imitation learning algorithms. We then evaluate the proposed reward function on five continuous control tasks from the Mujoco environment. Lastly, we test on an autonomous driving task with a single trajectory provided by a human driver. The code for reproducing the experiments is available online.\footnote{\url{https://github.com/RuohanW/red.git}} \subsection{Simple Domain}\label{sec:toy} We consider a stateless task where $s \sim \text{unif}(-1, 1)$, a discrete action space $a \in \{-1, 1\}$ and the reward function $r(s, a) = as$. It is clear that the expert policy for this problem is $\pi_E(s)=\text{sign}(s)$. Using Deep Q Learning \cite{mnih2015human} as the reinforcement learning algorithm, we compare the proposed method against AE, GAIL and GMMIL with expert datasets of size $n=5, 10, 50, 100$.
\begin{figure}[t] \centering \begin{subfigure}{.32\textwidth} \includegraphics[width=\textwidth]{./images/GMMIL_toy.pdf} \caption{GMMIL} \label{fig:GMMIL_toy} \end{subfigure} \begin{subfigure}{.32\textwidth} \includegraphics[width=\textwidth]{./images/GAIL_toy.pdf} \caption{GAIL} \label{fig:GAIL_toy} \end{subfigure} \begin{subfigure}{.32\textwidth} \includegraphics[width=\textwidth]{./images/ENC_toy.pdf} \caption{AE} \label{fig:ENC_toy} \end{subfigure} \begin{subfigure}{.32\textwidth} \includegraphics[width=\textwidth]{./images/GMMIL_toy_b.pdf} \caption{GMMIL fails intermittently} \label{fig:GMMIL_toy_b} \end{subfigure} \begin{subfigure}{.32\textwidth} \includegraphics[width=\textwidth]{./images/GAIL_toy_b.pdf} \caption{\centering GAIL overfits with state near 1} \label{fig:GAIL_toy_b} \end{subfigure} \begin{subfigure}{.32\textwidth} \includegraphics[width=\textwidth]{./images/RED_toy.pdf} \caption{RED} \label{fig:RED_toy} \end{subfigure} \caption{Estimated reward functions of the expert using different imitation learning algorithms on the simple domain. For better visualization of the reward, we use $r(s, a)=1 - \alpha_1 L(s, a)$ as the reward function for AE and RED.}\label{fig:toy_reward} \end{figure} In Figure \ref{fig:toy_comp}, we show the true mean episodic reward of different algorithms during training. Except for GMMIL, all other algorithms converge to the expert policy for all sizes of the dataset. In addition, RED and AE converge to the expert policy faster as the extracted reward functions recover the correct reward function nearly perfectly (Figures \ref{fig:RED_toy}, \ref{fig:ENC_toy}). In contrast, GAIL requires adversarial training to recover the correct reward function, a noisy process during which all actions from the learned policy, whether optimal or not, are considered ``wrong'' by the discriminator.
In fact, when the discriminator is overparametrized, it gradually overfits to the training data and generates arbitrary rewards for some regions of the state-action space (Figure \ref{fig:GAIL_toy_b}). The results are consistent with observations from \cite{brock2018large} that the discriminator is overfitting, and that early stopping may be necessary to prevent training collapse. For GMMIL, we observe that the method intermittently generates wrong reward functions (Figure \ref{fig:GMMIL_toy_b}), causing the performance to oscillate and converge to a lower mean reward, especially when the expert data is sparse. This is likely due to the noise in estimating distribution moments from limited samples. \begin{table*}[t] \vskip 0.15in \caption{Episodic reward (as provided by the environment) on Mujoco tasks by different methods evaluated over 50 runs. HalfCheetah and Ant use BC initialization in RED and AE. We were unsuccessful with GMMIL on Ant.} \begin{center} \begin{scriptsize} \begin{sc} \begin{tabular}{cccccc} \toprule & Hopper & HalfCheetah & Reacher & Walker2d & Ant\\ \midrule GAIL & 3614.22 $\pm$ 7.17 & 4515.70 $\pm$ 549.49 & -32.37 $\pm$ 39.81 & 4877.98 $\pm$ 2848.37 & 3186.80 $\pm$ 903.57\\ GMMIL & 3309.30 $\pm$ 26.28 & 3464.18 $\pm$ 476.50 & -11.89 $\pm$ 5.27 & 2967.10 $\pm$ 702.03 & -\\ AE & 3478.31 $\pm$ 3.09 & 3380.74 $\pm$ 101.94 & -10.91 $\pm$ 5.62 & 4097.61 $\pm$ 118.06 & 3778.61 $\pm$ 422.63\\ RED & 3625.96 $\pm$ 4.32 & 3072.04 $\pm$ 84.71 & -10.43 $\pm$ 5.20 & 4481.37 $\pm$ 20.94 & 3552.77 $\pm$ 348.67\\ \bottomrule \end{tabular} \label{tab:mujoco} \end{sc} \end{scriptsize} \end{center} \vskip -0.1in \end{table*} \subsection{Mujoco Tasks} We further evaluate RED on five continuous control tasks from the Mujoco environment: Hopper, Reacher, HalfCheetah, Walker2d and Ant. Similarly, we compare RED against AE, GAIL and GMMIL, using Trust Region Policy Optimization (TRPO) \cite{schulman2015trust} in Table \ref{tab:mujoco}.
Similar to the experiments in \cite{ho2016generative}, we consider learning with 4 trajectories of expert demonstration generated by an expert policy trained with RL. All RL algorithms terminate within 5M environment steps. We note that we were unsuccessful in the Ant task with GMMIL after an extensive search for the appropriate hyperparameters. \begin{figure}[t] \centering \includegraphics[width=.50\textwidth, trim={0 0 0 .0cm}, clip]{images/trpo_mean.pdf} \caption{True mean episodic reward of GMMIL, GAIL, AE and RED on Hopper during training. Agents trained with RED improve faster compared to other methods.} \label{fig:trpo} \end{figure} The results suggest that support estimation of expert policies is a viable strategy for imitation learning, as AE and RED are both able to achieve good results with the fixed reward functions constructed from support estimation of the expert policy. While RED and AE underperform on the HalfCheetah, both methods are comparable or better than GAIL and GMMIL in all other tasks. In particular, RED and AE achieve much smaller variance in performance, likely due to the fixed reward function. Consistent with the observations from the simple domain in Sec.~\ref{sec:toy}, RED appears to achieve faster performance improvements during early training (Figure \ref{fig:trpo}). We note that though the best performance AE can achieve is similar to that of RED, AE is much more sensitive to the random initialization of parameters, as seen from the large standard deviation in Figure \ref{fig:trpo}. A limitation of our method is that HalfCheetah and Ant require a policy initialized with BC to achieve good results while GAIL and GMMIL could start with a random policy in the two tasks. We hypothesize that the evolving reward functions of GAIL and GMMIL may provide better exploration incentives to the RL agent.
As GAIL is orthogonal to our method and the two may be used together, we leave it to future work to combine the benefits of both RED and GAIL into more robust algorithms. \begin{figure}[t] \centering \begin{subfigure}{.4\textwidth} \caption{Episodic reward (distance travelled without collision) on the driving task with different methods. The expert performance of 7485 corresponds to track completion without collision. Both AE and RED use the terminal reward heuristic.} \label{tab:car} \vskip 0.15in \begin{center} \begin{footnotesize} \begin{sc} \begin{tabular}{cccc} \toprule & Average & Std & Best\\ \midrule GAIL & 795 & 395 & 1576\\ GMMIL & 2024 & 981 & 3624\\ BC & 1033 & 474 & 1956\\ AE & 4378 & 1726 & 7485\\ RED & 4825 & 1552 & 7485\\ Expert & 7485 & 0 & 7485\\ \bottomrule \end{tabular} \end{sc} \end{footnotesize} \end{center} \vskip -0.1in \end{subfigure} \qquad \begin{subfigure}{.4\textwidth} \centering \includegraphics[width=0.9\textwidth, trim={0 0 0 5cm}, clip]{./images/CarSim_Setup_ICML_2} \caption{The expert driver providing demonstration on the driving scenario. The scenario consists of driving through a straight track while avoiding obstacles.} \label{fig:setup} \end{subfigure} \end{figure} \subsection{Autonomous Driving Task} Lastly, we evaluate RED, AE, GAIL, GMMIL and BC on an autonomous driving task, using a single demonstration provided by a human driver. The environment consists of a straight track with obstacles placed randomly in one of the three lanes of the track (Figure \ref{fig:setup}). A human demonstrator was instructed to drive through the track and avoid any obstacles, while keeping the speed around 100 km/h. We sampled the expert driving actions at 20 Hz.
For the environment, we use a vector of size 24 to represent the state (20 dimensions for the LIDAR reading, 3 dimensions for the relative position and orientation of the track with respect to the car, and 1 dimension for the vehicle speed); the policy outputs the steering and accelerator commands for the vehicle. We include the terminal reward heuristic defined in Eq. \ref{eq:term_rew}. For evaluation, we measure the distance over which a learned policy is able to drive the vehicle along the track without any collision with the obstacles. The task is challenging as the learner must generalize from the limited amount of training data, without explicit knowledge about the track width or the presence of obstacles. The driving task also allows us to qualitatively observe the behaviors of learned policies. In this task, we initialize all policies with BC and use the stochastic value gradient method with experience replay \cite{heess2015learning} as the reinforcement learning algorithm. The algorithm is referred to as SVG(1)-ER in the original paper. \begin{figure*}[t] \centering \begin{subfigure}[t]{.24\textwidth} \includegraphics[width=\textwidth]{./images/scene_close} \caption{Near Collision} \end{subfigure} \begin{subfigure}[t]{.24\textwidth} \includegraphics[width=\textwidth]{./images/scene_crash} \caption{Collision} \end{subfigure} \begin{subfigure}[t]{.24\textwidth} \includegraphics[width=\textwidth]{./images/scene_corner} \caption{Off-road Collision} \end{subfigure} \begin{subfigure}[t]{.24\textwidth} \includegraphics[width=\textwidth]{./images/scene_out} \caption{Off-road} \end{subfigure} \caption{Representative scenarios where the reward functions of AE and RED assign near zero rewards. The scenarios correspond well with various dangerous states as they are dissimilar to those from expert demonstrations.}\label{fig:car_danger} \end{figure*} Table \ref{tab:car} shows the average and best performance of each method evaluated over 5 runs through the same track.
We note that the expert performance of 7485 corresponds to track completion without collision. The results suggest that RED achieves the best performance. For both RED and AE, the reward functions correctly identify dangerous situations, such as collision or near-collision, by assigning zero rewards to those states. Figure \ref{fig:car_danger} shows a few representative states where the reward function outputs zero reward. In contrast, we observe that GAIL tends to overfit to the expert trajectory. During training, GAIL often ``forgets'' how to avoid the same obstacle after the discriminator updates the reward function. Such behaviors prevent the learned policy from avoiding obstacles consistently. For GMMIL, we observe behaviors similar to those found in Sec.~\ref{sec:toy}: the reward function intermittently produces problematic incentives, such as assigning positive rewards for states immediately preceding collisions. \section{Conclusion} We propose a new general framework of imitation learning via expert policy support estimation. We connect techniques such as Random Network Distillation and AutoEncoders to approximate support estimation, and introduce a method for efficiently learning a reward function suitable for imitation learning from the expert demonstrations. We have shown empirically in multiple tasks that support estimation of expert policies is a viable strategy for imitation learning, and achieves comparable or better performance compared to the state of the art. For future work, combining different approaches to recovering the expert's reward function appears to be a promising direction.
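To make the support-estimation idea concrete, here is a deliberately simplified sketch (ours, not the paper's implementation): the trained predictor/random target pair of RED is replaced by an explicit nearest-neighbour distance to the expert data, passed through the same exponential reward shape $r=\exp(-\sigma L)$, so the reward is maximal on the expert pairs and decays off their support:

```python
import numpy as np

def support_reward(expert_sa, sigma=5.0):
    """Reward that is high on (and near) the expert state-action pairs, low elsewhere.

    The squared distance to the expert data stands in for RED's predictor/target
    error; in RED this error comes from distilling a fixed random network."""
    data = np.asarray(expert_sa, dtype=float)

    def r(query):
        d2 = np.min(np.sum((data - np.asarray(query, dtype=float)) ** 2, axis=1))
        return float(np.exp(-sigma * d2))

    return r

# expert pairs (s, pi_E(s)) from the simple domain above
s = np.linspace(-1.0, 1.0, 201)
expert_pairs = np.stack([s, np.sign(s)], axis=1)
reward_fn = support_reward(expert_pairs)
```

On an expert pair such as $(0.5, 1)$ the reward is $1$, while the off-support pair $(0.5, -1)$ receives a markedly lower reward, mirroring the behavior shown for RED in Figure \ref{fig:RED_toy}.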
\section{Introduction} A connected dominating set in a graph $G=(V,E)$ is a set of vertices whose closed neighborhood is $V$ that induces a connected subgraph. A connected dominating set is inclusion minimal if it does not contain another connected dominating set as a proper subset. Enumerating all minimal connected dominating sets in a given graph can be trivially performed in $O(2^n)$. Whether a better enumeration algorithm exists was one of the most important open problems posed in the first workshop on enumeration (Lorentz Center, Netherlands, 2015) \cite{lorentz}. The problem has been subsequently addressed in \cite{belowAllsubsets} where an algorithm that runs in $O((2-10^{-50})^n)$ was presented. This slightly improves the upper bound on the number of minimal connected dominating sets in a (general) graph. On the other hand, the maximum number of minimal connected dominating sets in a graph was shown to be in $\Omega(3^{\frac{n}{3}})$ \cite{GHK16}. This lower bound is obviously very low compared to the upper bound and to the running time of the current asymptotically-fastest exact algorithm, which is in $O(1.862^n)$ \cite{abu2011}. The gap between upper and lower bounds is narrower when it comes to special graph classes. On chordal graphs, for example, the upper bound has been recently improved to $O(1.4736^n)$ \cite{Petr2020}. Other improved lower/upper bounds have been obtained for AT-free, strongly chordal, distance-hereditary graphs, and cographs in \cite{GHK16}. Further improved bounds for split graphs and cobipartite graphs have been obtained in \cite{skjorten2017faster}. In this note we report an improved lower bound on the maximum number of minimal connected dominating sets in a graph. This is related to the enumeration of all the minimal connected dominating sets since it also gives a lower bound on the asymptotic performance of any input-sensitive enumeration algorithm. 
\section{Graphs with Large Minimal Connected Dominating Sets} Given arbitrary positive integers $k,t$, we construct a graph $G_t^k$ of order $n = k(2t+1) + 1$ as follows. The main building blocks of $G_t^k$ consist of $k$ copies of a base-graph $G_{t}$, of order $2t+1$, joined by an additional hub-vertex $s$ that is adjacent to every vertex of the first layer $X$ (defined below) in each copy. The vertex set of $G_{t}$ consists of three layers. The first layer is a set $X=\{x_{1}\ldots x_{t}\}$ that induces a clique. The second is an independent set $Y=\{y_{1},\ldots y_{t}\}$, while the third layer consists of a singleton $\{z\}$. Each vertex $x_{i}\in X$ has exactly $t-1$ neighbors in $Y$: $N(x_{i}) = \{y_{j}\in Y: i\neq j\}$. In other words, the base-graph $G_{t}$ has a maximum anti-matching\footnote{An anti-matching in $G$ is a collection of disjoint non-adjacent pairs of its vertices. } $\{\{x_{j},y_{j}\}: 1\leq j\leq t\}$. In fact, $X\cup Y$ induces a copy of $K_{t,t}$ minus a perfect matching. Finally, the vertex $z$ is adjacent to all the $t$ vertices in $Y$. Figure \ref{basic-block} below shows the graph $G_t$ for $t=4$. \begin{figure}[htb!] 
\centering \begin{tikzpicture}[->, scale=0.33, every node/.style={anchor=center, scale=0.7},node distance=1cm,main node/.style={circle,fill=white!20,draw,font=\sffamily\Large\bfseries}] \node [main node] (1) at (-2,2) {\(~z~\)}; \node[main node] (2) at (-12,-4) {\(y_1\)}; \node [main node] (3) at (-6,-4) {\(y_2\)}; \node [main node] (4) at (2,-4) {\(y_3\)}; \node [main node] (5) at (8,-4) {\(y_4\)}; \node[main node] (6) at (-12,-11) {\(x_1\)}; \node [main node] (7) at (-6,-11) {\(x_2\)}; \node [main node] (8) at (2,-11) {\(x_3\)}; \node [main node] (9) at (8,-11) {\(x_4\)}; \begin{scope}[-] \draw [thin] (1) -- (2); \draw [thin] (1) -- (3); \draw [thin] (1) -- (4); \draw [thin] (1) -- (5); \draw [thin] (2) -- (7); \draw [thin] (2) -- (8); \draw [thin] (2) -- (9); \draw [thin] (3) -- (6); \draw [thin] (3) -- (8); \draw [thin] (3) -- (9); \draw [thin] (4) -- (6); \draw [thin] (4) -- (7); \draw [thin] (4) -- (9); \draw [thin] (5) -- (6); \draw [thin] (5) -- (7); \draw [thin] (5) -- (8); \draw [thin] (6) -- (7); \draw [thin] (6) to[bend right] (8); \draw [thin] (6) to[bend right] (9); \draw [thin] (7) -- (8); \draw [thin] (8) -- (9); \draw [thin] (7) to[bend right] (9); \end{scope} \end{tikzpicture} \caption{The graph $G_4$ } \label{basic-block} \end{figure} \begin{lemma} \label{Gi} For each $t>0$, the graph $G_t$ has exactly $\frac{t^3+t^2}{2}-t$ minimal connected dominating sets that have non-empty intersection with the set $X$. \end{lemma} \begin{proof} The set $X$ cannot have more than two vertices in common with any minimal connected dominating set, since any two elements of $X$ dominate $X\cup Y$. Any minimal connected dominating set that contains exactly one vertex $x_i$ of $X$ must contain the vertex $z$, to dominate $y_i$, and one of the $t-1$ neighbors of $x_i$ (to be connected). There are $t(t-1)$ sets of this type. Moreover, each pair of elements of $X$ dominates $Y$. 
So a minimal connected dominating set can be formed by (any) two elements of $X$ and any of the elements of $Y$ (to dominate $z$). There are $t\cdot\frac{t(t-1)}{2}$ such sets. \end{proof} The hub-vertex $s$ in $G_t^k$ must be in any connected dominating set, being a cut-vertex. Therefore, there is no need for the set $X$ in $G_t$ to induce a clique (in $G_t^k$), being always dominated by $s$. In other words, the counting used in the above proof still holds if each copy of $G_t$ is replaced by $G_{t}-E(X)$ in $G_t^k$. Here $E(X)$ denotes the set of edges connecting pairs of vertices in $X$. The figure below shows $G_3^3$ without the edges between pairs of elements of $X$ in each copy of $G_3$. \vspace{5pt} \begin{figure}[htb!] \centering \begin{tikzpicture}[->, scale=0.29, every node/.style={anchor=center, scale=0.6},node distance=1cm,main node/.style={circle,fill=white!20,draw,font=\sffamily\Large\bfseries}] \node [main node] (1) at (-17,0) {\(~z_1~\)}; \node[main node] (2) at (-22,-4) {\(y_{11}\)}; \node [main node] (3) at (-17,-4) {\(y_{12}\)}; \node [main node] (4) at (-12,-4) {\(y_{13}\)}; \node[main node] (5) at (-22,-8) {\(x_{11}\)}; \node [main node] (6) at (-17,-8) {\(x_{12}\)}; \node [main node] (7) at (-12,-8) {\(x_{13}\)}; \node [main node] (8) at (-2,0) {\(~z_2~\)}; \node[main node] (9) at (-7,-4) {\(y_{21}\)}; \node [main node] (10) at (-2,-4) {\(y_{22}\)}; \node [main node] (11) at (3,-4) {\(y_{23}\)}; \node[main node] (12) at (-7,-8) {\(x_{21}\)}; \node [main node] (13) at (-2,-8) {\(x_{22}\)}; \node [main node] (14) at (3,-8) {\(x_{23}\)}; \node [main node] (15) at (13,0) {\(~z_3~\)}; \node[main node] (16) at (8,-4) {\(y_{31}\)}; \node [main node] (17) at (13,-4) {\(y_{32}\)}; \node [main node] (18) at (18,-4) {\(y_{33}\)}; \node[main node] (19) at (8,-8) {\(x_{31}\)}; \node [main node] (20) at (13,-8) {\(x_{32}\)}; \node [main node] (21) at (18,-8) {\(x_{33}\)}; \node [main node] (22) at (-2,-18) {\(~s~\)}; \begin{scope}[-] \draw [thin] (8) -- (9); 
\draw [thin] (8) -- (10); \draw [thin] (8) -- (11); \draw [thin] (9) -- (13); \draw [thin] (9) -- (14); \draw [thin] (10) -- (12); \draw [thin] (10) -- (14); \draw [thin] (11) -- (12); \draw [thin] (11) -- (13); \draw [thin] (22) -- (5); \draw [thin] (22) -- (6); \draw [thin] (22) -- (7); \draw [thin] (22) -- (12); \draw [thin] (22) -- (13); \draw [thin] (22) -- (14); \draw [thin] (22) -- (19); \draw [thin] (22) -- (20); \draw [thin] (22) -- (21); \draw [thin] (1) -- (2); \draw [thin] (1) -- (3); \draw [thin] (1) -- (4); \draw [thin] (2) -- (6); \draw [thin] (2) -- (7); \draw [thin] (3) -- (5); \draw [thin] (3) -- (7); \draw [thin] (4) -- (5); \draw [thin] (4) -- (6); \draw [thin] (15) -- (16); \draw [thin] (15) -- (17); \draw [thin] (15) -- (18); \draw [thin] (16) -- (20); \draw [thin] (16) -- (21); \draw [thin] (17) -- (19); \draw [thin] (17) -- (21); \draw [thin] (18) -- (19); \draw [thin] (18) -- (20); \end{scope} \end{tikzpicture} \caption{The graph $G_3^3$ } \label{G3-3} \end{figure} \begin{theorem} \label{lowerbound} The maximum number of minimal connected dominating sets in a simple undirected connected graph of order $n$ is in $\Omega(1.489^n)$. \end{theorem} \begin{proof} By Lemma \ref{Gi}, each copy of the graph $G_{t}$ has $\frac{t^3+t^2}{2}-t$ minimal connected dominating sets that intersect the set $X$. There are $k$ such graphs in $G_t^k$, in addition to the vertex $s$ that connects them all. Every minimal connected dominating set must contain $s$ and at least one element from $N(s)$ in each $G_{t}$. Therefore, the total number of minimal connected dominating sets in $G_t^k$ is $(\frac{t^3+t^2}{2}-t)^k = (\frac{t^3+t^2}{2}-t)^{\frac{n-1}{2t+1}}$. The claimed lower bound is achieved when $t=4$, which gives a total of $36^{\frac{n-1}{9}} \in \Omega(1.489^n)$. \end{proof} We note that $G_t^k$ is a $t$-degenerate graph that is also bipartite (since the set $X$ in each copy of $G_{t}$ can be an independent set). Furthermore, we observe that $G_3^k$ is planar. 
To see this, simply re-order the elements of $Y$ in each copy of $G_3$ as shown in Figure \ref{planar} below. \vspace{-20pt} \begin{figure}[htb!] \centering \begin{tikzpicture}[->, scale=0.28, every node/.style={anchor=center, scale=0.6},node distance=1cm,main node/.style={circle,fill=white!20,draw,font=\sffamily\Large\bfseries}] \node [main node] (1) at (-17,0) {\(~z_1~\)}; \node[main node] (2) at (-22,-4) {\(y_{12}\)}; \node [main node] (3) at (-17,-4) {\(y_{13}\)}; \node [main node] (4) at (-12,-4) {\(y_{11}\)}; \node[main node] (5) at (-22,-8) {\(x_{11}\)}; \node [main node] (6) at (-17,-8) {\(x_{12}\)}; \node [main node] (7) at (-12,-8) {\(x_{13}\)}; \node [main node] (8) at (-2,0) {\(~z_2~\)}; \node[main node] (9) at (-7,-4) {\(y_{22}\)}; \node [main node] (10) at (-2,-4) {\(y_{23}\)}; \node [main node] (11) at (3,-4) {\(y_{21}\)}; \node[main node] (12) at (-7,-8) {\(x_{21}\)}; \node [main node] (13) at (-2,-8) {\(x_{22}\)}; \node [main node] (14) at (3,-8) {\(x_{23}\)}; \node [main node] (15) at (13,0) {\(~z_3~\)}; \node[main node] (16) at (8,-4) {\(y_{32}\)}; \node [main node] (17) at (13,-4) {\(y_{33}\)}; \node [main node] (18) at (18,-4) {\(y_{31}\)}; \node[main node] (19) at (8,-8) {\(x_{31}\)}; \node [main node] (20) at (13,-8) {\(x_{32}\)}; \node [main node] (21) at (18,-8) {\(x_{33}\)}; \node [main node] (22) at (-2,-18) {\(~s~\)}; \node[] (50) at (-17,2) {}; \begin{scope}[-] \draw [thin] (8) -- (9); \draw [thin] (8) -- (10); \draw [thin] (8) -- (11); \draw [thin] (12) -- (9); \draw [thin] (12) -- (10); \draw [thin] (13) -- (10); \draw [thin] (13) -- (11); \draw [thin] (14) -- (11); \draw [thin] (22) -- (5); \draw [thin] (22) -- (6); \draw [thin] (22) -- (7); \draw [thin] (22) -- (12); \draw [thin] (22) -- (13); \draw [thin] (22) -- (14); \draw [thin] (22) -- (19); \draw [thin] (22) -- (20); \draw [thin] (22) -- (21); \draw [thin] (1) -- (2); \draw [thin] (1) -- (3); \draw [thin] (1) -- (4); \draw [thin] (5) -- (2); \draw [thin] (5) -- (3); \draw 
[thin] (6) -- (3); \draw [thin] (6) -- (4); \draw [thin] (7) -- (4); \draw (2) .. controls (-17,8) and (-7,-2) .. (7); \draw (9) .. controls (-2,8) and (8,-2) .. (14); \draw (16) .. controls (13,8) and (23,-2) .. (21); \draw [thin] (15) -- (16); \draw [thin] (15) -- (17); \draw [thin] (15) -- (18); \draw [thin] (19) -- (16); \draw [thin] (19) -- (17); \draw [thin] (20) -- (17); \draw [thin] (20) -- (18); \draw [thin] (21) -- (18); \end{scope} \end{tikzpicture} \caption{A plane drawing of $G_3^3$ } \label{planar} \end{figure} Therefore, we can obtain an improved lower bound for 3-degenerate, bipartite and planar graphs. We conclude with the following corollary. \begin{corollary} The maximum number of minimal connected dominating sets in a 3-degenerate bipartite planar graph of order $n$ is in $\Omega(1.472^n)$. \end{corollary} \section{Conclusion} The method we adopted for constructing asymptotic worst-case examples for enumerating minimal connected dominating sets consists of combining copies of a certain base-graph having a particular subset of vertices that meets every minimal connected dominating set and is linked to a main hub-vertex. For example, the graph $G_{4}$ has 36 minimal connected dominating sets that contain elements of the set $X$, which in turn is linked to the hub-vertex $s$ in $G_4^k$. The main question at this stage is: can we do better? We believe it is very difficult to find a base-graph of order eight or less that can be used to achieve a higher lower bound since it would have to have at least 25 minimal connected dominating sets. Moreover, any better example that contains more than 9 vertices must have a much larger number of minimal connected dominating sets. For example, to achieve a better lower bound with a base-graph of order 10 (or 11), such a graph must have at least 54 (respectively 80) minimal connected dominating sets. 
It would be challenging to obtain such a construction, which is hereby posed as an open problem.
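The count in Lemma \ref{Gi} can be confirmed by exhaustive enumeration on small instances. The following sketch (ours, for verification only) builds $G_t$, lists all connected dominating sets, filters the inclusion-minimal ones, and counts those meeting $X$; it returns $15$ for $t=3$ and $36$ for $t=4$, matching $\frac{t^3+t^2}{2}-t$:

```python
from itertools import combinations

def base_graph(t):
    """Adjacency of G_t: vertices 0..t-1 form X (a clique), t..2t-1 form Y
    (independent), vertex 2t is z; x_i ~ y_j iff i != j, and z ~ every y_j."""
    n = 2 * t + 1
    adj = {v: set() for v in range(n)}
    def link(u, v):
        adj[u].add(v)
        adj[v].add(u)
    for i in range(t):
        for j in range(i + 1, t):
            link(i, j)                    # X induces a clique
    for i in range(t):
        for j in range(t):
            if i != j:
                link(i, t + j)            # x_i ~ y_j for i != j
    for j in range(t):
        link(2 * t, t + j)                # z ~ every y_j
    return adj, n

def is_cds(S, adj, n):
    """Is S a connected dominating set?"""
    S = set(S)
    if not S:
        return False
    dominated = set(S)
    for v in S:
        dominated |= adj[v]               # closed neighborhood of S
    if len(dominated) < n:
        return False
    seen, stack = set(), [next(iter(S))]  # DFS in the induced subgraph
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend((adj[v] & S) - seen)
    return seen == S

def count_minimal_cds_meeting_X(t):
    adj, n = base_graph(t)
    all_cds = [frozenset(S) for k in range(1, n + 1)
               for S in combinations(range(n), k) if is_cds(S, adj, n)]
    # inclusion-minimal: no proper subset is itself a connected dominating set
    minimal = [S for S in all_cds if not any(C < S for C in all_cds)]
    return sum(1 for S in minimal if any(v < t for v in S))
```

The brute force also reflects the two families from the proof: triples $\{x_i, y_j, z\}$ with $j \neq i$, and triples consisting of two vertices of $X$ and one vertex of $Y$.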
\section{Introduction} In this work, we investigate the thin film equation with linear mobility in arbitrary space dimensions, that is, the partial differential equation \begin{equation}\label{TFE} \partial_\tau u+\nabla \cdot \left(u\nabla\Delta u\right)=0 \end{equation} in the whole space $\mathbb{R}^N$. This equation models the flow of an $N+1$ dimensional viscous fluid with high surface tension over a flat substrate, and thus, the real physical three-dimensional setting corresponds to the case $N=2$. The evolving scalar variable $u=u(\tau ,y)$ in \eqref{TFE} represents the height of the liquid film, and is assumed to be nonnegative \cite{RevModPhys.69.931,MR1642807}. In the $1+1$ dimensional case, equation \eqref{TFE} can also be seen as the lubrication approximation in a two-dimensional Hele--Shaw cell \cite{GiacomelliOtto03}. The thin film equation is degenerate parabolic in the sense that the diffusion flux decreases to zero where $u$ vanishes. It follows that the speed of propagation is finite and thus droplet configurations stay compactly supported for all times. On a mathematical level, we are thus concerned with a free boundary problem. We will be focusing on a setting in which droplet solutions are slowly spreading over the full space, a regime that is commonly referred to as \emph{complete wetting.} This is obtained mathematically by prescribing the contact angle at the droplet boundary $\partial\{u>0\}$ to be zero, that is, $\nabla u=0$. A reference spreading droplet configuration is given by Smyth and Hill's self-similar solution \cite{SmythHill,MR1148286,MR1479525} \begin{equation} \label{smythhillsolution} u_*(\tau ,y) = \frac{1}{\tau^{\frac{N}{N+4}}}\alpha_N\left(\sigma_M-\frac{\vert y\vert^2}{\tau^{\frac{2}{N+4}}}\right)^2_+, \end{equation} where $\alpha_N=\frac{1}{8(N+4)(N+2)}$ and $\sigma_M$ is a positive constant only depending on the mass constraint \[ \int\limits_{\mathbb{R}^N} u_*dy=M. 
\] Moreover, we write $(s)_+$ for the positive part $ \max\!\left\{0,s\right\}\!$ of a quantity $s$. These source-type solutions \eqref{smythhillsolution} play a distinguished role in the theory since they are, similar to related parabolic problems, believed to describe the large time asymptotic behavior of \emph{any} solution of mass $M$ to the thin film equation, i.e., \begin{equation} \label{119} u(\tau ,y)\approx u_*(\tau ,y)\quad \mbox{for any }\tau \gg1. \end{equation} This convergence has been proved for strong solutions in the one-dimensional setting ($N=1$) via entropy methods by Carrillo and Toscani \cite{CarrilloToscani02} and for minimizing movement solutions in arbitrary dimensions via gradient flow techniques by Matthes, McCann and Savar\'e \cite{MatthesMcCannSavare09}. Both contributions provide sharp rates of convergence and exploit the intimate relation between the thin film equation \eqref{TFE} and the porous medium equation \begin{equation} \label{118} \partial_{\tau} u - \Delta u^{m}=0 \end{equation} in the case $m=3/2$. In fact, up to a suitable rescaling, the Smyth--Hill solutions \eqref{smythhillsolution} coincide with the self-similar Barenblatt solutions \cite{Zelcprimedovic1950,Barenblatt1952,Pattle1959} of the porous medium equation \eqref{118}, and the surface energy, which is dissipated by the thin film equation \eqref{TFE}, coincides with the rate of dissipation of the Tsallis entropy under the porous medium flow \eqref{118}. See \cite{MatthesMcCannSavare09,McCannSeis15} for a clean formulation of this entropy-information relation from a gradient flow perspective. 
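The self-similar form \eqref{smythhillsolution} can also be verified symbolically. The following sketch (ours, not part of the paper) checks, in one space dimension, that the Smyth--Hill profile solves \eqref{TFE} exactly inside its support, where the positive part is inactive:

```python
import sympy as sp

tau, y, sigma = sp.symbols('tau y sigma', positive=True)

N = 1                                            # one space dimension
alpha = sp.Rational(1, 8 * (N + 4) * (N + 2))    # alpha_1 = 1/120
# Smyth--Hill solution without the positive-part clipping
u = tau**sp.Rational(-N, N + 4) * alpha \
    * (sigma - y**2 * tau**sp.Rational(-2, N + 4))**2

# thin film equation in 1D, inside the support: u_tau + (u * u_yyy)_y = 0
residual = sp.simplify(sp.diff(u, tau) + sp.diff(u * sp.diff(u, y, 3), y))
```

In similarity variables $\eta = y\,\tau^{-1/5}$ this reduces to the profile equation $f''' = \eta/5$ for $f(\eta) = \alpha_1(\sigma - \eta^2)^2$, which fixes $\alpha_1 = 1/120$.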
The link between the two equations can be further exploited in order to get deeper insights into the large time behavior of solutions to \eqref{TFE}: When linearizing both equations about the self-similar solutions, it turns out that the linear porous medium operator $\L$ translates into the linear thin film operator in a simple algebraic way, namely $\L^2 +N\L$ \cite{McCannSeis15}. It immediately follows that the eigenfunctions of both operators agree, while the transformation of the eigenvalues from the porous medium setting to the thin film setting obeys the same algebraic formula. The operator $\L$ was diagonalized in \cite{ZeldovicBarenblatt58,Seis14}, and thus, the full spectral information is also available for the thin film equation \cite{McCannSeis15}, see Theorem \ref{spektrum} below. The spectrum of the one-dimensional operator was computed earlier in \cite{BernoffWitelski02}. The knowledge of the complete spectrum not only gives information on the sharp rate of convergence (for which information about the spectral gap would be sufficient), but also on the geometry of all modes through the knowledge of all eigenfunctions. One may thus analyze in detail the role played by affine symmetries such as dilations, rotations or shears, and we will in this paper obtain improved rates of convergence for the thin film equation \eqref{TFE} by quotienting out such symmetries. Further details on the large time asymptotics can be formulated after a suitable change of variables. Higher order large time asymptotics for the porous medium equation \eqref{118} with $m>1$ were obtained in one dimension by Angenent \cite{Angenent1988}, building up on the spectral information in \cite{ZeldovicBarenblatt58} and, more recently, in any dimension by the first author \cite{Seis15}, building up on \cite{Seis14}. 
While Angenent derived fine series expansions around the limiting solution, the later multidimensional contribution takes a geometric point of view by constructing finite-dimensional invariant manifolds that the solutions approximate to any given order. In the present work, we will derive a parallel theory for the thin film equation. Invariant manifold studies can be found in numerous applications in the field of nonlinear partial differential equations, for instance, \cite{HirschPughShub77,Carr83,FoiasSaut84,ChowLu88,FoiasSellTemam88, ConstantinFoiasNicolaenko89,EckmannWayne91,ChowLinLu91, VanderbauwhedeIooss92,Wayne97,GallayWayne02}. What is particularly challenging in \cite{Seis15} and the present paper is the moving free boundary at which solutions cease to be smooth. What is needed for linking the spectrum of the linear operators to the nonlinear dynamics \eqref{TFE} or \eqref{118} is a regularity framework in which solutions depend differentiably on the initial configuration. This is necessary since the precise rate of convergence in the limit \eqref{119} is dictated by the particular choice of the initial datum. Identifying such a framework is far from being trivial. A crucial first step is a nonlinear change of variables that transforms the free boundary problem into an evolution equation on a fixed domain, which can be chosen as the unit ball. The linear leading order part of the equation can then be seen as a degenerate parabolic equation, whose degeneracy can be cured by interpreting the dynamics as a fourth-order heat flow on a weighted Riemannian manifold. For the porous medium equation, such a setting was proposed by Koch in his habilitation thesis \cite{KochHabilitation}, further refined in the work of Kienzler \cite{Kienzler16} and then adapted in \cite{Seis15}. An analogous theory for the thin film equation was derived by John in \cite{John15} and later adapted by the first author in \cite{SeisTFE}. 
After some necessary refinements, the latter will be the starting point for the present study. We would also like to mention the related studies by Denzler, Koch and McCann \cite{DenzlerMcCann05,DenzlerKochMcCann15} and Choi, McCann and the first author \cite{ChoiMcCannSeis22}, who derived some improved large time asymptotics for the fast diffusion equation, i.e., \eqref{118} with $m<1$, in the full space and a bounded domain, respectively. The full space setting is particularly challenging due to the occurrence of continuous spectrum, which arises from the fact that the associated Barenblatt profile possesses a finite number of moments, while in a bounded domain, in which solutions become extinct in finite time, negative (unstable) eigenvalues challenge the leading order asymptotics \cite{BonforteFigalli21}. Before giving in the next section a specific description of our setting and of our main results, we want to finish this introductory section with a brief discussion about the state of the art in the mathematical theory for the thin film equation. Existence of nonnegative weak solutions was established with the help of compactness arguments and estimates on the free surface energy by Bernis and Friedman \cite{MR1031383}. This approach is not adequate to prove a general uniqueness result even though the regularity of these solutions could be improved, see \cite{MR1328475,MR1371925,MR1616558}. In a neighborhood of stationary solutions (of infinite mass), well-posedness and regularity of one-dimensional solutions could be established in a weighted Sobolev setting \cite{GiacomelliKnupferOtto08} and in H\"older spaces \cite{GiacomelliKnupfer10}. Moreover, the aforementioned work \cite{John15} deals with the multidimensional case and lowers the regularity requirements to Lipschitz norms and Carleson-type measures. The latter approach was adapted to neighborhoods of the Smyth--Hill self-similar solution in \cite{SeisTFE}. 
The one-dimensional setting was also considered in \cite{Gnann15} using weighted Hilbert spaces. We finally remark that for nonlinear mobilities, solutions are in general not smooth, see \cite{GGKH14,Gnann16}. \medskip \textbf{Organization of the paper.} In the following section, we state and discuss our results on the large time asymptotics in self-similar variables. In Section \ref{newvariables}, we rewrite the thin film equation as a perturbation equation around the self-similar Smyth--Hill solution and present our main theorems of this paper, including the Invariant Manifold Theorem. We will describe in Section \ref{S4} how these results for the perturbation equation translate into the large-time asymptotics for the thin film equation. Section \ref{theoryforperturbationequation} collects information on the well-posedness of the perturbation equation and improves on known regularity estimates. The subsequent Section \ref{S5} deals with a truncated version of the perturbation equation. Well-posedness and regularity estimates are provided. Moreover, we introduce and discuss the time-one mapping that will be our main object of consideration in our construction of invariant manifolds in Section \ref{dynamicalsystemarguments}. The final Section \ref{applicationinvariantmanifolds} exploits the invariant manifold theory to prove the large-time asymptotic expansions for the perturbation equation. We conclude with two appendices, one with a derivation of the perturbation equation, one with inequalities for weighted Sobolev spaces. \section{Higher order asymptotics for the thin film equation}\label{section2} In order to study the convergence towards self-similar solutions, it is customary to perform a self-similar change of variables. 
In view of the particular form of the Smyth--Hill solution \eqref{smythhillsolution}, we choose \begin{align}\label{121} x = \frac{1}{\sqrt{\sigma_M}}\frac{1}{\tau^{\frac{1}{N+4}}}y, \quad t= \gamma^{-1} \log\left(\tau^{\frac{1}{N+4}}\right) \quad \text{ and } \quad v = \frac{(N+4)\gamma}{\sigma_M^2}\tau^{\frac{N}{N+4}}u, \end{align} where $\gamma = 2(N+2)$, which transforms equation \eqref{TFE} into the \emph{confined} thin film equation \begin{align}\label{FPE} \partial_t v + \nabla \cdot \left(v\nabla\Delta v\right)- \gamma\nabla\cdot \left(xv\right)=0, \end{align} and turns the self-similar solution \eqref{smythhillsolution} into a stationary one, \begin{align}\label{123} v_*(x) = \frac{1}{4}\left(1-\vert x \vert^2\right)_+^2. \end{align} We remark that under this change of variables, the initial time is shifted from $\tau =0$ to $t=-\infty$. As we are interested in the solutions' large time behavior only, we will hereafter treat $0$ as the initial time for the transformed equation. Moreover, the rescaling incorporates the total mass $M$ through $\sigma_M$ in such a way that the stationary $v_*$ is the limiting solution only if $v$ and $v_*$ have the same total mass. In what follows, we will assume that this is always the case by requiring that \begin{equation} \label{120} \int\limits_{\mathbb{R}^N} v_0\, dx = \int\limits_{\mathbb{R}^N} v_*\, dx, \end{equation} if $v_0$ is the initial configuration for the evolution in \eqref{FPE}. The theory in \cite{SeisTFE} guarantees that the confined thin film equation \eqref{FPE} has a unique regular solution provided that $v_0$ and $v_*$ are sufficiently close in the sense \begin{equation} \label{o1} \|\sqrt{v_0} - V_* \|_{W^{1,\infty}(\supp v_0)} \ll1, \end{equation} where $V_*( x) = \frac12(1-| x|^2)$ is the (unsigned) extension of $\sqrt{ v_*}$ to $\mathbb{R}^N$.
This condition actually yields strong estimates between $v_0$ and the exact stationary solution $v_*$, as will be explained in the following remark. \begin{remark}\label{R10} Choosing the globally decaying $V_*$ over $\sqrt{v_*}$ in \eqref{o1} has the advantage that we can infer from it simultaneously information on the support of $v_0$, a global estimate on the difference of $v_0$ and $v_*$, and a bound on the slope of $v_0$. Indeed, regarding the first, restricting to the boundary of the support, where $v_0$ vanishes, and noticing that $V_*(x)\sim \dist(x,\partial B_1(0))$, we directly deduce \begin{align}\label{301} \sup \limits_{x\in \partial \supp v_0}\dist(x, \partial B_1(0)) \ll1. \end{align} Next, we observe that $V_*=\sqrt{v_*}$ inside the ball $B_1(0)$. Outside of $B_1(0)$ it holds that $0\le \sqrt{v_0} - \sqrt{v_*} = \sqrt{v_0} \le \sqrt{v_0}-V_*$, and thus, we find \begin{equation}\label{303} \| \sqrt{v_0} - \sqrt{v_*}\|_{L^{\infty}(A)} \ll1 \end{equation} on the set $A = \supp v_0$ as a consequence of \eqref{o1}. Moreover, since $V_*- \sqrt{v_0}= V_* = \sqrt{v_*}$ on $B_1(0)\cap \partial \supp v_0$ and since $v_*$ is decaying towards the boundary, the estimate \eqref{303} holds true also on $A = B_1(0) \setminus \supp v_0 $. It remains to notice that $v_0 = v_*=0$ on the remaining set $A=B_1(0)^c\cap (\supp v_0)^c$, and thus \eqref{303} is proved to be true with $A=\mathbb{R}^N$. We immediately deduce that \begin{align}\label{304} \left\|v_0-v_*\right\|_{L^\infty\left(\mathbb{R}^N\right)}\ll1, \end{align} because $|v_0-v_*| = |\sqrt{v_0}-\sqrt{v_*}|(\sqrt{v_0}+\sqrt{v_*}) \lesssim|\sqrt{v_0}-\sqrt{v_*}|$, where the last estimate holds because $v_0$ and $v_*$ are bounded. Finally, we can also extract a condition on the slope of $v_0$, namely \begin{align}\label{305} \left\|\nabla v_0 + 2x \sqrt{v_0}\right\|_{L^\infty\left(\mathbb{R}^N\right)} \ll1.
\end{align} To establish \eqref{305}, we first note that the left-hand side vanishes provided that $x$ does not lie in the support of $v_0$. Inside $ \supp v_0$, we have $|\nabla v_0+ 2x\sqrt{v_0}| = 2\sqrt{v_0}|\nabla \sqrt{v_0}- \nabla V_*|\lesssim |\nabla \sqrt{v_0}- \nabla V_*|$, because $v_0$ is bounded. Condition \eqref{o1} then yields the claim. Notice that the left-hand side in \eqref{305} vanishes precisely for $v_0=v_*$ (under the mass constraint \eqref{120}). \end{remark} The main results of the cited work \cite{SeisTFE} are recalled in more detail later in Section \ref{theoryforperturbationequation}, which also contains the main results of the present work. At this stage, we present some consequences of that general theory for the confined thin film equation \eqref{FPE}, which provide exemplary improved convergence rates towards equilibrium by modding out symmetries. Since the rates of relaxation are intimately related to the spectrum of the linear operator $\L^2+N\L$ associated with the confined equation \eqref{FPE}, see Section \ref{newvariables} below, we recall, for a better understanding of the results presented in the sequel, the findings of the spectral analysis from the literature. \begin{theorem}[\cite{BernoffWitelski02,McCannSeis15}]\label{spektrum} The operator $\mathcal{L}^2+N\mathcal{L}$ has a purely discrete spectrum consisting of the eigenvalues \begin{align} \mu_{l,k}= \lambda_{l,k}^2+N\lambda_{l,k}, \end{align} where the $\lambda_{l,k}$ are the eigenvalues of $\mathcal{L}$. They are given by \begin{align} \lambda_{l,k}= 2\left(l+2k\right)+2k\left(k+l+\frac{N}{2}-1\right), \end{align} for $(l,k)\in \mathbb{N}_0\times \mathbb{N}_0$ if $N\geq2$ and $(l,k) \in \{0,1\}\times \mathbb{N}_0$ if $N=1$.
The corresponding eigenfunctions are polynomials of degree $l+2k$, namely \begin{align} \psi_{l,n,k}(x) = {}_2F_1\left(-k,1+l+\frac{N}{2}+k;l+\frac{N}{2};|x|^2\right)Y_{l,n}\left(\frac{x}{|x|}\right)|x|^{l}, \end{align} where $n\in \{1,\dots,N_l\}$ with $N_0=1$, $N_1=N$ and $N_l=\frac{\left(N+l-3\right)!\left(N+2l-2\right)}{l!\left(N-2\right)!}$ if $l\geq 2$. Besides, ${}_2F_1(a,b;c;d)$ is a hypergeometric function and $Y_{l,n}$ is a spherical harmonic (of degree $l$) if $N\geq 2$, corresponding to the eigenvalue $l(l+N-2)$ of $-\Delta_{\S^{N-1}}$ with multiplicity $N_l$. If $N=1$, we set $Y_{l,n}\left(\pm1\right)= \left(\pm1\right)^l$. \end{theorem} The computation of the linear operator in \cite{McCannSeis15} was rather formal and was derived from the gradient flow interpretation of \eqref{FPE} with respect to the Wasserstein metric tensor \cite{Otto98,MR1865003,MatthesMcCannSavare09}. It occurs naturally after suitable rescaling in the perturbation equation \eqref{perturbationequation}. In the statement of the theorem, the linear operator is analyzed with respect to the Hilbert space introduced in \eqref{124} below, and the eigenfunctions $\psi_{l,n,k}$ give rise to an orthogonal basis of that Hilbert space. We recall that hypergeometric functions can be written as power series of the form \begin{align} {}_2F_1(a,b;c;z)=\sum\limits_{j=0}^{\infty} \frac{(a)_j(b)_j}{(c)_jj!}z^j, \end{align} where $a,b,c,z \in \mathbb{R}$ and $c$ is not a non-positive integer, see, e.g.~\cite{Rainville71}. The definition uses extended factorials, also known as Pochhammer symbols, \begin{align} (s)_j=s(s+1)\cdots (s+j-1), \quad \text{ for } j\geq 1 \text{ and } (s)_0=1. \end{align} The hypergeometric functions with $z = |x|^2$ reduce to polynomials of degree $2k$ in $x$ if we plug in $-k$ for $a$. In this case, they can be expressed as Jacobi polynomials.
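The eigenvalue formulas above invite a quick numerical sanity check. The following lines are our own illustration (all helper names are ours, not from the literature); they encode $\lambda_{l,k}$, $\mu_{l,k}$ and the truncated hypergeometric series, and can be used to confirm, for instance, the two-dimensional eigenvalue intersections mentioned below.

```python
from math import factorial, prod

def lam(l, k, N):
    # eigenvalues of L: lambda_{l,k} = 2(l + 2k) + 2k(k + l + N/2 - 1)
    return 2 * (l + 2 * k) + 2 * k * (k + l + N / 2 - 1)

def mu(l, k, N):
    # eigenvalues of L^2 + N L
    return lam(l, k, N) ** 2 + N * lam(l, k, N)

def poch(s, j):
    # Pochhammer symbol (s)_j = s (s+1) ... (s+j-1), with (s)_0 = 1
    return prod(s + i for i in range(j))

def hyp2F1(a, b, c, z, terms):
    # truncated Gauss hypergeometric series; it terminates if a = -k is a
    # negative integer, so the radial factor of the eigenfunctions is a
    # polynomial of degree k in z = |x|^2
    return sum(poch(a, j) * poch(b, j) / (poch(c, j) * factorial(j)) * z**j
               for j in range(terms))

val = hyp2F1(-1, 3, 1, 0.25, 2)  # k = 1, l = 0, N = 2, evaluated at |x|^2 = 1/4
print(mu(1, 0, 2), mu(0, 1, 1), val)  # 8.0 30.0 0.25
```

In particular, one recovers $\mu_{1,0}=4+2N$, $\mu_{0,1}=30$ for $N=1$, $\mu_{2,0}=16+4N$, and the planar coincidences $\mu_{l,k}=\mu_{l(k+1)+k(k+2),0}$.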
\begin{figure}[t]\begin{center} \includegraphics[width=1\textwidth]{Eigenwerte.pdf} \end{center}\caption{The spectrum of the linear operator (with multiplicities) in the range $[0,400]$ in the $2+1$ dimensional setting ($N=2$). }\label{fig1} \end{figure} In the one-dimensional setting, all eigenvalues have multiplicity one. In higher dimensions, all eigenvalues with $l\ge 2$ have a dimension dependent multiplicity that stems from the multiplicity of the eigenvalue $l(l+N-2)$ associated with the spherical harmonics, i.e., the eigenfunctions of the Laplace--Beltrami operator $\Delta_{\S^{N-1}}$. In addition, there are certain intersections between the eigenvalues $\mu_{\cdot, k}$ and $\mu_{\cdot, k+n}$. For instance, in two dimensions, it holds $\mu_{l,k} = \mu_{l(k+1) + k(k+2),0}$ for any $k,l$, see Figure \ref{fig1}. \subsection{Leading order asymptotics} Clearly, $\mu_{0,0}=0$ is the smallest eigenvalue. It corresponds to a situation in which the convergence in \eqref{119} fails, which is precisely the case if the equal mass condition \eqref{120} is not satisfied. Conversely, by requiring that \eqref{120} holds, this eigenvalue is automatically eliminated. The exact leading order asymptotics are then governed by the second smallest eigenvalue $\mu_{1,0}=4+2N$, which is the content of our first result for solutions to the confined thin film equation. We will derive it from a more general statement in Theorem \ref{Whoeheremoden} in Section \ref{newvariables} and thus present it here as a corollary. \begin{corollary}[Exact leading order asymptotics]\label{exactleadingorder} Let $v$ be the solution to \eqref{FPE} with initial data $v_0$ satisfying the mass constraint \eqref{120} and being sufficiently close to $v_*$ in the sense of \eqref{o1}. Then it holds that \begin{align} \left\|\sqrt{v(t)}-V_*\right\|_{W^{1,\infty}\left(\supp v(t)\right)} &\lesssim e^{-(4+2N) t} \quad \text{ for all } t\geq 0.
\end{align} \end{corollary} The result entails the convergence of $v(t)$ towards $v_*$ as outlined in Remark \ref{R10}. The same rate of convergence was established earlier in terms of the relative Tsallis entropy and the $L^1$ norm by Carrillo and Toscani \cite{CarrilloToscani02} in the one-dimensional setting and by Matthes, McCann and Savar\'e \cite{MatthesMcCannSavare09} in any dimension (if one takes into account the difference in the time scaling that we introduced in \eqref{FPE} through the $\gamma^{-1}$ factor). It corresponds to an $O(\tau^{-(N+1)/(N+4)})$ convergence in the limit \eqref{119} for the original thin-film equation \eqref{TFE}. The convergence rate in this corollary is sharp and is saturated by spatial translations of the stationary solution $v_*$. Indeed, for every vector $b\in \mathbb{R}^N$, the function $v(t,x)=v_*(x-e^{-\gamma t}b)$ solves the confined thin film equation exactly and approaches $v_*(x)$ with exponential rate $\gamma =4+2N$, as can be readily checked via Taylor expansion. However, because the original equation \eqref{TFE} is invariant under spatial translations, the convergence in \eqref{119} with rate $O(\tau^{-(N+1)/(N+4)})$ remains true for any shifted version of the Smyth--Hill solution, i.e., $u(\tau,y)\approx u_*(\tau,y-b)$, and the significance of this rate is thus an artifact of this symmetry. Indeed, the above reasoning shows that the convergence in Corollary \ref{exactleadingorder} is sharp only if we are not willing to pick the ``correctly'' centered Smyth--Hill solution. We may equivalently adjust the initial datum by a suitable translation in $\mathbb{R}^N$.
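The exactness of the translated solution can also be confirmed symbolically. Here is a one-dimensional sketch of ours ($N=1$, so $\gamma=2(N+2)=6$), checking that the residual of the confined equation vanishes inside the support:

```python
import sympy as sp

t, x, b = sp.symbols('t x b')
gamma = 6                                      # gamma = 2(N+2) with N = 1
c = b * sp.exp(-gamma * t)                     # exponentially decaying translation
v = sp.Rational(1, 4) * (1 - (x - c)**2)**2    # v_*(x - c) inside the support

# confined thin film equation: v_t + (v v_xxx)_x - gamma (x v)_x = 0
residual = sp.diff(v, t) + sp.diff(v * sp.diff(v, x, 3), x) - gamma * sp.diff(x * v, x)
print(sp.simplify(residual))  # 0
```

Setting $b=0$ recovers, in particular, the stationarity of $v_*$ itself.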
As we will see, the ``correct'' choice for $b$ is the center of mass, which is preserved under the original evolution \eqref{TFE} and pushed towards the origin by the confined equation, \begin{equation}\label{motioncenterofmass} \int xv(t,x)dx = e^{- \gamma t}b_0,\quad b_0=\int x v_0(x)dx, \end{equation} for all $t\geq 0$, because our rescaling \eqref{121} has eliminated the translation invariance. Supposing that $v_0$ is centered at the origin, $b_0=0$, the eigenvalue $\mu_{1,0}$ drops out of the spectrum and we obtain a better rate of convergence, namely one given by the next smallest eigenvalue, which is $\mu_{0,1}=30$ if $N=1$, and $\mu_{2,0} = 16+4N$ if $N\ge 2$. \begin{corollary}\label{firstordercorrection} Let $v$ be as in Corollary \ref{exactleadingorder} and assume in addition that $v_0$ is centered at the origin, i.e., $b_0=0$ in \eqref{motioncenterofmass}. Then, it holds that \begin{align} \left\|\sqrt{v(t)}-V_*\right\|_{W^{1,\infty}\left(\supp v(t)\right)}&\lesssim e^{- 30t} \quad \text{ for all } t\geq 0 \end{align} if $N=1$, and \begin{align} \left\|\sqrt{v(t)}-V_*\right\|_{W^{1,\infty}\left(\supp v(t)\right)}&\lesssim e^{-(16 +4N)t} \quad \text{ for all }t\geq 0 \end{align} if $N\ge 2$. \end{corollary} This rate of convergence is again sharp for solutions that start, if $N\ge 2$, from affine transformations of the stationary solution, and if $N=1$, from dilated stationary solutions. Because we will discuss dilated stationary solutions later also in the multi-dimensional case, we will restrict ourselves here to the setting $N\ge 2$. Solutions starting from affine transformations of $v_*$ are then to leading order (modulo rescaling to fit the mass constraint) described by $v(t,x)\approx v_*(x - e^{-\mu_{2,0}t} Ax)$ for a symmetric and trace-free matrix $A$. The validity of these asymptotics is best understood in terms of the perturbation equation, which we will introduce in the subsequent section.
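For the reader's convenience, we record the short formal computation behind \eqref{motioncenterofmass}: testing the equation with $x_i$ and integrating by parts (boundary terms vanish by the compact support of $v(t)$, and summation over the repeated index $j$ is implied),

```latex
\begin{align*}
\frac{d}{dt}\int_{\mathbb{R}^N} x_i\, v\, dx
 &= \int_{\mathbb{R}^N} x_i\,\nabla\cdot\left(\gamma x v - v\nabla\Delta v\right) dx
  = \int_{\mathbb{R}^N} \left(v\,\partial_i\Delta v - \gamma x_i v\right) dx,\\
\int_{\mathbb{R}^N} v\,\partial_i\Delta v\, dx
 &= -\int_{\mathbb{R}^N} \partial_i v\,\Delta v\, dx
  = \int_{\mathbb{R}^N} \partial_j\partial_i v\,\partial_j v\, dx
  = \frac12\int_{\mathbb{R}^N} \partial_i\left(|\nabla v|^2\right) dx = 0,
\end{align*}
```

so that $\frac{d}{dt}\int xv\,dx = -\gamma\int xv\,dx$, which integrates to \eqref{motioncenterofmass}.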
The occurrence of such affine transformations can be explained on the level of the eigenfunctions computed in Theorem \ref{spektrum}: The finite displacements $v_s$ generated by an eigenfunction $\psi$ are described by \begin{equation} \label{122} v_s( x+s\nabla\psi(x)) \det(I+s\nabla^2 \psi(x)) = v_*(x), \end{equation} provided that $|s|\ll 1$. For $k=0$, the eigenfunctions are homogeneous harmonic polynomials of degree $l$, namely $\psi_{l,n,0}(x) = Y_{l,n}(x/|x|) |x|^l$. If $l=2$, the generating polynomials are quadratic, and thus of the form $\psi(x) = x\cdot Ax$ for a symmetric and trace-free matrix $A$. In this case, \eqref{122} defines affine transformations. For further improvements on the rate of convergence, we have to mod out affine transformations. \subsection{Higher order corrections and the role of symmetries} In order to improve on the convergence rates even further, we exploit symmetry invariances of the thin film equation in conjunction with symmetry properties of spherical harmonics, which determine the angular modulations of our eigenfunctions, see Theorem \ref{spektrum}. More precisely, we will obtain higher order convergence rates by assuming that the initial datum $v_0$ is invariant under certain orthogonal transformations. Because such transformations leave the thin film equation invariant and thanks to the uniqueness of solutions near self-similarity \cite{SeisTFE}, the invariance under those orthogonal transformations is inherited by the solution for all times. We will show that the symmetry condition leads to a selection among the eigenfunctions forcing a large class of eigenmodes to remain \emph{inactive} during the evolution. The slowest active mode will then govern the large-time asymptotics.
\begin{figure} \begin{picture}(95,90) \put(0,0){\includegraphics[scale=0.25]{correctionl2.pdf}} \end{picture} \begin{picture}(95,90) \put(0,0){\includegraphics[scale=0.25]{correctionl3.pdf}} \end{picture} \begin{picture}(95,90) \put(0,0){\includegraphics[scale=0.25]{correctionl4.pdf}} \end{picture} \begin{picture}(95,90) \put(0,0){\includegraphics[scale=0.25]{correctionl5.pdf}} \end{picture} \caption{The finite displacements of $v_*$ generated by the eigenfunctions $\psi_{l,n,0}$ in the physical case $N=2$.}\label{fig0} \end{figure} To motivate our approach for modding out certain modes, it is enlightening to briefly study the situation in two space dimensions, $N=2$. In Figure \ref{fig0}, we have plotted some finite displacements, cf.~\eqref{122}, generated by eigenfunctions $\psi_{l,n,0}$ with $n\in\{1,\dots, N_l\}$. Evidently, displacements generated by $\psi_{l,n,0}$ (and then also by any polynomial of the form $p(|x|)Y_{l,n}(x/|x|)$ including $\psi_{l,n,k}$) share precisely the symmetry properties of a regular $l$-polygon. Under the assumption that the solution has the symmetry properties of such a regular $l$-polygon, all eigenmodes generated by $\psi_{m,n,k}$ with $m<l$ are necessarily inactive. In Remark \ref{R2} below, we will discuss the short elementary argument that rigorously supports this observation. In higher space dimensions, the situation gets more involved and the structure of the spherical harmonics is more complex. In order to mod out eigenmodes, it is advisable to take a more abstract approach. We choose a group theoretical one, noticing that the symmetry group of a regular $l$-polygon is a finite subgroup of the group of orthogonal transformations $O(N)$. Our goal is to determine geometric conditions on an arbitrary function, more precisely, invariances under the action of a given finite subgroup of $O(N)$, which guarantee that the $L^2$-projections of that function onto all spherical harmonics of a given degree $l$ vanish.
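Before turning to abstract tools, the planar mechanism can be observed directly with a discrete Fourier transform (an illustration of ours, not taken from the literature): sampling a function on $\S^1$ that is invariant under rotation by $2\pi/n$, its projections onto the degree-$l$ circular harmonics vanish unless $n$ divides $l$.

```python
import numpy as np

M, n = 360, 5
theta = 2 * np.pi * np.arange(M) / M
# a (sampled) function on the circle, invariant under rotation by 2*pi/n
f = 1.3 + np.cos(n * theta) + 0.7 * np.sin(2 * n * theta) + np.cos(3 * n * theta) ** 2
coeff = np.fft.rfft(f) / M
active = [l for l in range(1, 40) if abs(coeff[l]) > 1e-10]
print(active)  # [5, 10, 30]: only multiples of n survive
```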
To achieve this goal, we will eventually apply tools originating from the field of representation theory of groups, see, e.g., \cite{Mukai2003} or \cite{tomDieck1995} for elementary considerations. The space of square integrable functions on the unit sphere $L^2\left(\mathbb{S}^{N-1}\right)$ can be decomposed into a direct Hilbert sum over the eigenspaces of $\Delta_{\S^{N-1}}$, \[ L^2\left(\mathbb{S}^{N-1}\right) = \bigoplus\limits_{l\in \mathbb{N}_0}H_l, \] where the eigenspace $H_l$ is spanned by the spherical harmonics of degree $l$ and its dimension is given by $N_l$, see Theorem \ref{spektrum}. We remark that every eigenspace $H_l$ is invariant under the action of orthogonal transformations. More precisely, given an orthogonal matrix $g\in O(N)$, for every $f\in H_l $ we have that $f\circ g^{-1} \in H_l$. If $E$ is a finite subgroup of $O(N)$, we denote by $H_l^E$ the subspace of $H_l$ consisting of all functions that are invariant under the action of all elements of $E$, i.e., $f\circ g^{-1}=f$ for any $f\in H_l^E$ and $g\in E$. The eigenmodes corresponding to an eigenvalue $\mu_{l,k}$ are all modded out by the action of elements in $E$ if that subspace is trivial, $\dim(H_l^E)=0$. We present and discuss our final convergence result under such an abstract condition and will thereafter consider some specific choices of $E$, for which we will need some deeper insights from the representation theory of finite groups. \begin{corollary}\label{grouptheorycorrections} Let $N\geq2$ and $v$ be given as in Corollary \ref{firstordercorrection} satisfying \begin{equation}\label{401} \left\|\sqrt{v(t)}-V_*\right\|_{L^\infty(\supp v(t))}\lesssim e^{-\mu_{l,k}t} \quad \text{ for all }t\geq 0 \end{equation} for some $l\in\mathbb{N}$ and $k\in \mathbb{N}_0$, such that the multiplicity of $\mu_{l,k}$ is given by $N_l$. Assume in addition that $v_0$ is invariant under the action of a finite subgroup $E$ of $O(N)$ such that \begin{align} \label{400} \dim\left(H_l^E\right)=0.
\end{align} Then it holds that \begin{align}\label{h7} \left\|\sqrt{v(t)}-V_*\right\|_{W^{1,\infty}(\supp v(t))}&\lesssim e^{-\mu_+t} \quad \text{ for all }t\geq 0, \end{align} where $\mu_+$ is the next largest eigenvalue following $\mu_{l,k}$. \end{corollary} We shall briefly comment on the assumptions on $v(t)$ in the latter corollary. \begin{remark}\label{R3} It may be surprising that it suffices to demand the decay of $\sqrt{v(t)}-V_*$ in $L^\infty$ instead of $W^{1,\infty}$, which would be the expected setting in view of the previous results. Due to the regularizing properties of the equation and the Lipschitz bound \eqref{o1} at the initial time, we will eventually see that both assumptions are in fact equivalent in the given situation. We will discuss this phenomenon briefly in the proof of Corollary \ref{grouptheorycorrections}. \end{remark} Not every eigenfunction corresponds to an orthogonal transformation and thus, a symmetry condition like \eqref{400} is in general not sufficient to jump from one eigenvalue to another. Indeed, all eigenfunctions $\psi_{0,1,k}$ are radially symmetric polynomials, and the slowest of the corresponding modes is generated by delayed Smyth--Hill solutions $u_*(\tau + \tau_0,y)$ of \eqref{TFE}, which turn into the dilations $\lambda(t)^{-N}v_*(\lambda(t)^{-1}x)$ with $\lambda(t)\approx 1 + \frac{1}{N+4}\tau_0 e^{-\mu_{0,1}t} $ solving the confined equation \eqref{FPE}, and converging towards the stationary $v_*$ with exponential rate $\mu_{0,1}$. We do not know if these modes can be eliminated by a reasonable assumption on the initial configuration nor do we see how they can be suitably controlled during the evolution. Therefore, in order to raise the convergence rates beyond eigenvalues $\mu_{0,k}$, the decay hypothesis \eqref{401} seems necessary to ensure that the respective radial modes are inactive.
We have to demand that the multiplicity of the eigenvalue $\mu_{l,k}$ in \eqref{401} is precisely $N_l$, in order to exclude possible resonances with \emph{any} spherical harmonics of different order (such that $\mu_{l,k} = \mu_{\tilde l,\tilde k}$). To conclude the discussion about higher order asymptotics on the level of the confined thin film equation, we remark that the number of eigenvalues we are able to remove from the spectrum before reaching $\mu_{0,1}$ (provided we find a suitable subgroup of $O(N)$) depends on the space dimension: If the dimension is odd, $ N=2m-1$, then $\mu_{0,1}$ is the $(m+2)$th eigenvalue and has multiplicity one. In even dimensions, $N=2m$, it coincides with $\mu_{m+2,0} $. We finally recall from the introduction that further and, in fact, much stronger statements on the large time asymptotics can be derived after a customary change of variables. These will be presented and discussed in the following section. It remains to identify finite subgroups $E$ of $O(N)$ which mod out spherical harmonics of a given order $l$ in the sense of \eqref{400}. We will do that by applying a surprisingly helpful tool, the Molien series, which originates from the field of representation theory of groups. It was suggested to us by our colleague Linus Kramer. The subspace $H_l\subseteq L^2\left(\S^{N-1}\right)$ of spherical harmonics of degree $l$ can be identified with the space of symmetric, trace-free tensors of rank $l$, which we will further denote by $H_l$ as well. The generating function $h_E(s)$ for the dimensions $\dim\left(H_l^E\right)$ of the invariant subspaces can be formally expressed as the power series \begin{align}\label{generatingfunction} h_E(s)=\sum \limits_{l=0}^{\infty}\dim\left(H_l^E\right)s^l, \end{align} which is called Molien series or Hilbert series in the literature, cf.\ \cite[p.~11]{Mukai2003} or \cite[p.~479]{Stanley1979}.
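Molien's formula quoted below lends itself to direct numerical evaluation, which we use as a sanity check for the series expansions that follow; the truncated-power-series helper is our own sketch, written for rotation subgroups of $O(2)$.

```python
import numpy as np

def molien_coeffs(mats, deg):
    """Truncated coefficients of h_E(s) = (1/|E|) * sum_g (1 - s^2) / det(I - s*g)."""
    total = np.zeros(deg + 1)
    for g in mats:
        # for a 2x2 matrix g: det(I - s*g) = 1 - tr(g) s + det(g) s^2
        p = np.zeros(deg + 3)
        p[0], p[1], p[2] = 1.0, -np.trace(g), np.linalg.det(g)
        inv = np.zeros(deg + 1)          # power series of 1/det(I - s*g)
        inv[0] = 1.0
        for m in range(1, deg + 1):
            inv[m] = -sum(p[j] * inv[m - j] for j in range(1, m + 1))
        total += inv
    h = total / len(mats)
    return h - np.concatenate(([0.0, 0.0], h[:-2]))   # multiply by (1 - s^2)

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# cyclic group S_n in O(2): rotations by multiples of 2*pi/n
n = 3
group = [rotation(2 * np.pi * j / n) for j in range(n)]
print(np.round(molien_coeffs(group, 10)))  # 1, 0, 0, 2, 0, 0, 2, 0, 0, 2, 0
```

The output reproduces the expansion of $(1+s^3)/(1-s^3)$, in agreement with the formula for $h_{\mathfrak{S}_n}$ below.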
A beautiful and practical way, often used to compute this series explicitly, is given by Molien's formula \begin{align} h_E(s)=\frac{1}{|E|}\sum\limits_{g\in E}\frac{1-s^2}{\det\left(I-sg\right)}, \end{align} see \cite{Mendes1975}, \cite{BurnettMeyer1954}. In the physical case $N=2$, the Molien series is known for all finite subgroups of $O(2)$, as will be discussed in the following. \begin{itemize} \item \emph{Cyclic groups.} The first class of subgroups, $\mathfrak{S}_n$ for $n\in \mathbb{N}$, is generated by rotations by an angle of $2\pi/n$. The corresponding Molien series is given by \begin{align} h_{\mathfrak{S}_n}(s)=\frac{1+s^n}{1-s^n}=(1+s^n)\sum\limits_{l=0}^\infty s^{ln} = 1 +2s^{n}+2s^{2n}+2s^{3n}+\dots, \end{align} see \cite[p.~143]{BurnettMeyer1954}. In view of the Hilbert series representation \eqref{generatingfunction} of $h_{\mathfrak{S}_n}(s)$, this formula proves that the corresponding invariant subspaces must be trivial in the sense of \eqref{400} precisely if $l$ is not divisible by $n$. In other words, the projection of a function that is invariant under rotations by an angle of $2\pi/n$ onto the subspaces spanned by spherical harmonics of degree $l$ has to vanish if $l$ is not divisible by $n$. Moreover, if non-trivial, $H_l^{\mathfrak{S}_n}$ has dimension $2$, and thus, recalling that for $N=2$ each of the tensor spaces $H_l$ with $l\geq 1$ is two-dimensional, $N_l=2$, we find $H_l^{\mathfrak{S}_n} = H_l$. \item \emph{Dihedral groups.} The second class of finite subgroups, $\mathfrak{D}_n$ for $n\in \mathbb{N}$, is generated by two elements: again a rotation by the angle $2\pi/n$ and, additionally, a reflection. In this case the Molien series reads \begin{align} h_{\mathfrak{D}_n}(s)=\frac{1}{1-s^n}=\sum_{l=0}^\infty s^{ln}=1+s^{n}+s^{2n}+s^{3n}+\dots, \end{align} see \cite[p.~59]{Humphreys1990}. If a function is invariant under the action of $\mathfrak{D}_n$ instead of $\mathfrak{S}_n$, the projection onto $H_l$ vanishes for the same $l$ as before.
This time, however, the nontrivial subspaces are one-dimensional. \end{itemize} We remark that the zeroth order term $s^0=1$ in the Molien series does not affect the convergence rates since the mass of the initial datum $v_0$ is already fixed. In higher dimensions, classifying the finite subgroups of $O(N)$ becomes more complicated. For $N=3$, we discuss the subgroups of $O(3)$ that consist only of rotations in more detail. The following results, together with more far-reaching ones, can be found in \cite[p.~143]{BurnettMeyer1954}. \begin{itemize} \item \emph{Cyclic groups.} The class $\mathfrak{S}_n$ for $n\in \mathbb{N}$ is generated by rotations by an angle of $2\pi/n$ around a fixed axis. The corresponding Molien series is given by \begin{align} h_{\mathfrak{S}_n}(s)= \frac{1}{1-s}\frac{1+s^n}{1-s^n}=\left(1+s+s^2+s^3+\dots\right)\left(1 +2s^{n}+2s^{2n}+2s^{3n}+\dots\right). \end{align} This formula shows that no invariant subspace is ensured to be trivial in this case. \item \emph{Dihedral groups.} In three dimensions, the dihedral group $\mathfrak{D}_n$ is generated by two rotations: a rotation by an angle of $2\pi/n$ around a fixed axis and a rotation by an angle of $\pi$ around an axis perpendicular to the first one. The corresponding Molien series is given by \begin{align} h_{\mathfrak{D}_n}(s) = \frac{1}{1-s^2}\frac{1+s^{n+1}}{1-s^n}= \left(1+s^{n+1}\right)\left(1+s^2+s^4+\dots\right)\left(1+s^n+s^{2n}+\dots\right). \end{align} In this case, the invariant subspace $H_l^E$ becomes trivial if and only if $l \neq (n+1)k+nm_1+2m_2$ for all $k\in \left\{0,1\right\}$ and $m_1,m_2\in\mathbb{N}_0$. \item \emph{Platonic solids.} The last class is given by the three rotation groups of the Platonic solids. The tetrahedral group $\mathfrak{T}$ (the rotation group of the tetrahedron) has the Molien series \begin{align} h_{\mathfrak{T}}(s) = \frac{1}{1-s^4}\frac{1+s^{6}}{1-s^3}=\left(1+s^6\right)\left(1+s^4+s^8+\dots\right)\left(1+s^3+s^6+\dots\right).
\end{align} In this case, the invariant subspace $H_l^E$ becomes trivial if and only if $l \neq 4m_1+3m_2$ for all $m_1,m_2\in \mathbb{N}_0$. The octahedral group $\mathfrak{O}$ (the rotation group of the cube or the octahedron) has the Molien series \begin{align} h_{\mathfrak{O}}(s) = \frac{1}{1-s^4}\frac{1+s^{9}}{1-s^6}=\left(1+s^{9}\right)\left(1+s^4+s^8+\dots\right)\left(1+s^6+s^{12}+\dots\right). \end{align} In this case, the invariant subspace $H_l^E$ becomes trivial if and only if $l \neq 9k+4m_1+6m_2$ for all $k\in \left\{0,1\right\}$ and $m_1,m_2\in\mathbb{N}_0$. The icosahedral group $\mathfrak{I}$ (the rotation group of the dodecahedron or the icosahedron) has the Molien series \begin{align} h_{\mathfrak{I}}(s) = \frac{1}{1-s^{10}}\frac{1+s^{15}}{1-s^6}=\left(1+s^{15}\right)\left(1+s^{10}+s^{20}+\dots\right)\left(1+s^6+s^{12}+\dots\right). \end{align} In this case, the invariant subspace $H_l^E$ becomes trivial if and only if $l \neq 15k+10m_1+6m_2$ for all $k\in \left\{0,1\right\}$ and $m_1,m_2\in\mathbb{N}_0$. \end{itemize} Regarding the four-dimensional case $O(4)$, extensive results can be found in \cite{Mendes1975}. Besides, various results regarding the Molien series in general dimensions are available, see for example \cite{Humphreys1990}. \begin{remark} We remark that some of the references given above do not work in exactly the same setting that we consider here. In fact, it is not necessary to decompose the space $L^2\left(\S^{N-1}\right)$ into eigenspaces of $\Delta_{\S^{N-1}}$. Instead, one could also decompose it into spaces of homogeneous polynomials of fixed degree, \begin{align} L^2\left(\S^{N-1}\right) = \bigoplus \limits_{l\in \mathbb{N}_0}P_l, \end{align} where $P_l$ is the space of homogeneous polynomials of degree $l$. Given a finite subgroup $E$ of $O(N)$, we similarly denote by $P_l^E$ the subspace of $P_l$ consisting of all functions that are invariant under the action of all elements of $E$.
Let \begin{align} p_E(s) = \sum_{l=0}^\infty \dim \left(P_l^E\right)s^l \end{align} be the corresponding generating function. In this situation Molien's formula has to be adapted, namely \begin{align} p_E(s)= \frac{1}{|E|}\sum\limits_{g\in E}\frac{1}{\det\left(I-sg\right)}, \end{align} see for example \cite[p.~13]{Mukai2003}. We obtain $h_E(s)=(1-s^2)p_E(s)$, which enables us to transfer results to the present setting. \end{remark} \section{New variables and main results}\label{newvariables} As announced earlier, one of the main analytical challenges in deriving fine large time asymptotics for the (confined) thin film equation is the free moving boundary. Following Koch \cite{KochHabilitation}, we perform a von Mises-type change of dependent and independent variables, which brings the equation into a setting in which solutions depend differentiably on the initial datum \cite{SeisTFE}. The transformation applies when the solution is Lipschitz close to the stationary solution in the sense of \eqref{o1}, cf.~\cite{Seis15,SeisTFE}. The underlying geometric procedure is the following, which is also illustrated in Figure \ref{fig2}: \begin{figure}[t] \begin{center} \includegraphics[width=0.9\textwidth]{transformation.pdf} \end{center}\caption{The change of variables from $(x,v(x))$ to $(z,w(z))$.}\label{fig2} \end{figure} The stationary profile $(4 v_*)^{1/4}$ describes a hemisphere over the $N$-dimensional unit ball $B=B_1(0)$. We orthogonally project each point $(x,(4 v(x))^{1/4})$ of the graph of $(4 v)^{1/4}$ onto the closest point $(z,(4 v_*(z))^{1/4})$ on the hemisphere and denote by $w(z)$ the (minimal) distance. Analytically this amounts to the choice \begin{align}\label{transformationz} z=\frac{x}{\sqrt{2( v(t,x))^{1/2}+|x|^2}} \end{align} for the new independent variable, and we see that $x=z$ precisely if $v$ is the stationary solution \eqref{123}.
The formula for the dependent variables reads \begin{align}\label{transformationw} 1+w(t,z)= \sqrt{2( v(t,x))^{1/2}+|x|^2}, \end{align} and thus $w$ vanishes if $v$ is $v_*$. We will accordingly refer to $w$ as the \emph{perturbation}. The transformation is applicable also in situations in which $v$ and $v_*$ do not have the same mass. This observation is reflected by the fact that $\mu_{0,0}=0$ occurs in the spectrum of the linear operator, see Theorem \ref{spektrum}. We will not eliminate this eigenvalue on the level of the perturbation, but only for the original variables through the mass constraint \eqref{120}. For the general theory that we perform in terms of the perturbation, any constant solution $w\equiv\const$ is admissible and corresponds to a Smyth--Hill solution \eqref{smythhillsolution} of arbitrary mass $M$. The derivation of an evolution equation for the new variable $w$ is lengthy and tedious. It has been described in detail already in \cite{SeisTFE}, using the sloppy $\star$ notation, see \eqref{nonlinearityperturbationequation} below. For our purposes it is necessary to rederive the transformed equation in a way that carries more structure than the formulation chosen in \cite{SeisTFE}. We postpone these computations to the appendix and state here our findings only.
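As an elementary consistency check of \eqref{transformationz} and \eqref{transformationw} (our own illustration, not part of the analysis): since $2(v_*)^{1/2}+|x|^2=1$ inside the unit ball, the stationary solution indeed corresponds to $z=x$ and $w=0$.

```python
import numpy as np

def v_star(x):
    # stationary solution v_*(x) = (1/4)(1 - |x|^2)_+^2
    return 0.25 * max(1.0 - np.dot(x, x), 0.0) ** 2

def transform(x, v):
    # change of variables (x, v) -> (z, w) of the von Mises-type transformation
    r = np.sqrt(2.0 * np.sqrt(v) + np.dot(x, x))
    return x / r, r - 1.0

x = np.array([0.3, -0.4])          # a point in the open unit ball (N = 2)
z, w = transform(x, v_star(x))
print(np.allclose(z, x), abs(w) < 1e-12)  # True True
```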
The \emph{perturbation equation} for the $w$ variables is \begin{align}\label{perturbationequation} \partial_tw+\mathcal{L}^2w+N\mathcal{L}w = \frac{1}{\rho} \nabla \cdot \left(\rho^2F[w]\right)+\rho F[w ] \quad \text{ on } \left(0,\infty \right)\times B_1(0), \end{align} where $\rho(z) = \frac12(1-|z|^2)$ is a weight function degenerating at the boundary, $\mathcal{L}w=-\rho^{-1}\nabla\cdot\left( \rho^2\nabla w\right)= -\rho \Delta w +2z\cdot \nabla w$ is the building block of the thin film linear operator and \begin{align}\label{nonlinearityperturbationequation} F[w] = p\star R[w]\star\left(\rho\nabla^3w\star\nabla w+ \rho(\nabla^2w)^{2\star} + \nabla^2w\star\nabla w+ (\nabla w)^{2\star}\right), \end{align} is the nonlinearity. The star product $a\star b$ denotes an \emph{arbitrary} linear combination of entries of the tensors $a$ and $b$, and thus, in particular, the above $F[w]$ defines a class of nonlinearities and both representatives in \eqref{nonlinearityperturbationequation} may be different from each other. We write $a^{k\star} = a \star \cdots \star a$, where the $\star$-product has $k$ factors. Moreover, $p$ is a polynomial tensor in $z$, which might have zero entries. The rational factors $R[w]$ are tensors of the form \[ R[w] = \frac{(\nabla w)^{k\star}}{(1+w+z\cdot \nabla w)^l}, \] for some $k\in \mathbb{N}_0$ and $l\in\mathbb{N}$. Finally, the distributive property respects only the tensor class, e.g.~$p\star(a+b) = p\star a +\tilde p\star b$ with two possibly different polynomial tensors $p$ and $\tilde p$. This shortened $\star$ notation is suitable in the present work because the exact form of the nonlinearity is not important for our analysis. We finally recall from our introduction that the linear operator $\L$ also occurs in the context of the porous medium equation \eqref{118} with $m=\frac{3}{2}$, and was analyzed, for instance, in \cite{Seis14,Seis15}. 
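The equivalence of the divergence form and the explicit form of $\mathcal{L}$ stated after \eqref{perturbationequation} follows from $\nabla\rho=-z$; it can also be verified symbolically, as in the following two-dimensional sketch of ours:

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
w = sp.Function('w')(z1, z2)
rho = (1 - z1**2 - z2**2) / 2    # weight rho(z) = (1 - |z|^2)/2

# divergence form: -rho^{-1} div(rho^2 grad w)
div_form = -(sp.diff(rho**2 * sp.diff(w, z1), z1)
             + sp.diff(rho**2 * sp.diff(w, z2), z2)) / rho
# explicit form: -rho Laplacian(w) + 2 z . grad w
explicit = -rho * (sp.diff(w, z1, 2) + sp.diff(w, z2, 2)) \
           + 2 * (z1 * sp.diff(w, z1) + z2 * sp.diff(w, z2))

print(sp.simplify(div_form - explicit))  # 0
```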
It is readily checked that $\L$ is symmetric (and, in fact, self-adjoint \cite{Seis14}) with respect to the inner product \begin{equation} \label{124} \langle w,\tilde w\rangle = \int_{B_1(0)} w\tilde w\, \rho dz, \end{equation} which induces a Hilbert space with norm $\|\cdot\|$ in the obvious way. The perturbation equation \eqref{perturbationequation} is well-posed for small Lipschitz initial data $w_0$, \begin{equation}\label{126} \|w_0\|_{W^{1,\infty}} \ll1, \end{equation} as was proved in \cite{SeisTFE}. We will recall the precise statement in Theorem \ref{maintheoremseis} below. The above smallness condition is equivalent to \eqref{o1} under the change of variables. It follows from the statement of Theorem \ref{spektrum} that the order of the eigenvalues $\mu_{l,k}$ depends on the space dimension $N$. For us, it only plays a role when we want to determine conditions on the initial datum $v_0$ that lead to improvements in the convergence rates for the confined thin film equation, see Corollaries \ref{exactleadingorder}, \ref{firstordercorrection}, and \ref{grouptheorycorrections} presented above. On the level of the perturbation equation, it is more convenient to relabel the eigenvalues as $\left\{\mu_k\right\}_{k\in \mathbb{N}_0}$ and order them in a strictly increasing way, that is, $\mu_k<\mu_{k+1}$. Correspondingly, we denote by $\psi_{k,n}$ all eigenfunctions corresponding to $\mu_k$ for $n\in \left\{1,\dots,\tilde{N}_k\right\}$. We note that the multiplicity of $\mu_k$ may change due to intersections between the eigenvalues, see Figure \ref{fig1}. We mostly stick to this notation for the remainder of this work. All announced asymptotic results for solutions $v$ to the confined thin film equation will be derived from the following theorem that fully describes the higher order asymptotics of the perturbation equation. It is one of the two main results of the present work. Its proof can be found in Section \ref{applicationinvariantmanifolds}. 
\begin{theorem}\label{Whoeheremoden} For any fixed $K\in \mathbb{N}_0$, there exists an $\varepsilon_0>0$ with the following properties: Let $w$ be a solution to \eqref{perturbationequation} with initial datum $w_0$ satisfying $\|w_0\|_{W^{1,\infty}}\leq \varepsilon_0$. Then, under the assumption \begin{align}\label{h2} \lim \limits_{t\rightarrow \infty}e^{\mu_kt}\langle \psi_{k,n},w(t)\rangle = 0 \quad \text{ for all } k\in\left\{0,\dots,K\right\} \text{ and } n\in\left\{1,\dots,\tilde{N}_k\right\}, \end{align} it holds that \begin{align}\label{403} \left\|w(t)\right\|_{W^{1,\infty} }\lesssim e^{-\mu_{K+1}t} \text{ for all } t\geq 0. \end{align} \end{theorem} To clarify the meaning of this theorem, we first consider the case $K=0$. The smallest eigenvalue, $\mu_K=\mu_0= 0$, corresponds to the constant eigenfunction $1$, and thus condition \eqref{h2} turns into the requirement \begin{align}\label{h1} \lim\limits_{t\rightarrow \infty} \int_{B_1(0)} w(t,z)\rho(z)dz=0. \end{align} As we will see in the proof of Corollary \ref{exactleadingorder}, the latter is equivalent to the mass constraint \eqref{120} for the $v$ variable. By imposing a condition on the solution's mass, we rule out $\mu_0=0$ as a relevant eigenvalue for the evolution, or, in other words, the corresponding mode is inactive. It follows that the leading order asymptotics are dominated by the next eigenvalue in order, $\mu_1$, in the sense that it determines the rate of convergence and governs the evolution towards the stationary solution $v_*$. The theorem states that this procedure can be iterated. Because the mappings $\langle \psi_{k,n},\cdot\rangle$ act as projections onto the respective eigenspaces, condition \eqref{h2} ensures that the first $K$ modes (with their multiplicities) are inactive during the evolution, that is, the modes do not affect the long-time behavior anymore. 
We can thus improve the rate of convergence and the theorem shows that the leading order asymptotics is then governed by the smallest active mode. In the proofs of Corollaries \ref{firstordercorrection} and \ref{grouptheorycorrections} we identify symmetry conditions for solutions to the thin film equation which ensure the decay \eqref{h2} for the perturbation equation. The proof for the higher-order asymptotics of the perturbation variable $w$ in Theorem \ref{Whoeheremoden} is based on the construction of invariant manifolds, which are localized around the stationary solution $w\equiv 0$. This is our second main result, which is of independent interest. To state it properly, we have to introduce some further notation. First, we denote by $S^t(g)$ the flow generated by the perturbation equation, that is $S^t(g)=w(t,\cdot)$ where $w(t,z)$ solves the perturbation equation with initial datum $g$. We consider the Hilbert space $H$ that is induced by the inner product \begin{align} \langle v,w\rangle_H = \langle v,w\rangle + \langle \mathcal{L}v,w\rangle =\langle v,w\rangle+ \langle v,\mathcal{L}w\rangle = \langle v,w\rangle +\langle \sqrt{\rho}\nabla v,\sqrt{\rho} \nabla w \rangle \end{align} and the norm \begin{align} \left\|w\right\|^2_H = \left\|w\right\|^2+ \|\mathcal{L}^{1/2}w\|^2= \left\|w\right\|^2+\|\sqrt{\rho}\nabla w\|^2, \end{align} where $\|\cdot\|$ was defined via \eqref{124}. It is equivalent to a scale invariant Hilbert space norm, \begin{equation} \label{103} \|w\|_H^2 \sim \left\|w\right\|^2_{L^2}+\left\|\rho\nabla w\right\|^2_{L^2}, \end{equation} as can be seen with the help of Hardy's inequality, cf.~Lemma \ref{hardytypeinequality} in the appendix. Furthermore, $E_c$ is the eigenspace spanned by the eigenfunctions $\psi_{k,n}$ for $k\leq K$ and $n\in \left\{1,\dots,\tilde{N}_k\right\}$ with $K\in\mathbb{N}_0$ fixed and $E_s$ denotes its orthogonal complement in $H$, such that $H=E_c \oplus E_s$. 
In the following theorem, $E_c$ and $E_s$ are the center and stable eigenspaces, respectively. We finally have to refine the analysis from \cite{SeisTFE} by considering \begin{equation} \label{125} \|w\|_{W} = \|w\|_{L^{\infty}} + \|\nabla w\|_{L^{\infty}} + \|\rho \nabla^2w \|_{L^{\infty}} + \|\rho^2\nabla^3 w\|_{L^{\infty}}, \end{equation} instead of the Lipschitz norm only. The necessity of considering (scale-invariant) higher-order norms is a crucial observation in our definition and analysis of the truncated equation \eqref{c3}. We will comment on it further in Section \ref{S5}. \begin{theorem}\label{localmanifolds} For any fixed $K\in\mathbb{N}_0$ and $\mu \in \left( \mu_K,\mu_{K+1}\right)$, there exist two constants $\varepsilon >\varepsilon_0>0$ (with $\varepsilon_0$ possibly smaller than in Theorem \ref{Whoeheremoden}), and a Lipschitz continuous mapping $\theta_\varepsilon:E_c\rightarrow E_s$ that is differentiable at zero with $\theta_\varepsilon(0)=0$ and $D\theta_\varepsilon(0)=0$ such that $W_{loc}^c $ given by \[ W_{loc}^c = \left\{g\in H: g=g_c+\theta_\varepsilon\left(g_c\right), g_c\in E_c, \|g\|_{H}\leq \varepsilon \right\}\] has the following properties: \begin{enumerate} \item For every $g\in W_{loc}^c$ with $\|g\|_{H}\leq \varepsilon_0$ it holds that $S^t(g)\in W_{loc}^c$ for all $t\geq 0$. \item For every $g\in H$ with $\|g\|_{W}\leq \varepsilon_0$ there exists a unique $\tilde{g} \in W_{loc}^c$ such that \begin{align} \left\|S^t\left(g\right)-S^t\left(\tilde{g}\right)\right\|_{W} \lesssim e^{-\mu t} \end{align} for every $t\geq 1$. \end{enumerate} \end{theorem} The first property simply states that the \emph{local center manifold} $W^c_{loc}$ is locally invariant under the nonlinear evolution \eqref{perturbationequation}. From the properties of $\theta_{\varepsilon}$ we infer that this manifold touches the center eigenspace $E_c$ tangentially at the origin. 
The second property provides, for any solution with sufficiently small initial datum, a finite-dimensional approximation at a given rate by solutions in $W_{loc}^c$. It is this feature that we exploit in order to derive fine large time asymptotics for the thin film equation. The invariant manifold theorem is interesting on its own as it provides a nonlinear finite-dimensional object which solutions approximate at a given rate in the large time limit. In other words, once a rate of convergence is determined, any sufficiently small solution belonging to an infinite-dimensional function space can be approximated with the prescribed rate by a solution on a finite-dimensional manifold. As outlined in the introduction, similar results have been derived earlier. What is particularly challenging here is the delicate degenerate parabolicity of the fourth-order equation \eqref{perturbationequation} modeling a free boundary problem whose mathematical understanding is still poor. The construction of the invariant manifolds will be done in Section \ref{dynamicalsystemarguments}, and will be carried out for a truncated version of the perturbation equation first. In fact, our analysis provides even more information, which we omit here because it is not relevant for the large time asymptotics. For instance, we will show that the finite-dimensional approximation emerges from a foliation of the Hilbert space $H$ over a global invariant manifold. \section{From invariant manifolds to higher order asymptotics}\label{S4} The goal in this section is the derivation of the main results for the thin film equation stated in Corollaries \ref{exactleadingorder}, \ref{firstordercorrection} and \ref{grouptheorycorrections} from Theorem \ref{Whoeheremoden} on the mode-by-mode asymptotics for the perturbation equation. 
We start by noting that the transformations \eqref{transformationz} and \eqref{transformationw} yield that \begin{equation} \label{131} v(x) = \rho(\Phi(x))^2\left(1+w(\Phi(x))\right)^4, \end{equation} where $\Phi(x)=z$ is the diffeomorphism introduced in \eqref{transformationz}. In our proof of the leading order asymptotics, we apply Theorems \ref{Whoeheremoden} and \ref{localmanifolds} with $K=0$. \begin{proof}[Proof of Corollary \ref{exactleadingorder}] In a first step, we have to ensure that the mass constraint \eqref{120} implies the vanishing mean condition \eqref{h1}, which is the $K=0$ version of \eqref{h2}. We start by rewriting \eqref{120} with the help of the change of variables formula \eqref{131} and the expression for the Jacobian determinant \eqref{130} in the appendix, \begin{equation}\label{302} \begin{aligned} \int_{\mathbb{R}^N} v_*(x)\, dx &= \int_{\mathbb{R}^N} v(t,x)\, dx\\ & = \int\limits_{B_1(0)} \rho(z)^2 (1+w(t,z))^{N+3}\left(1+w(t,z)+z\cdot\nabla w(t,z)\right)\, dz. \end{aligned} \end{equation} The term on the right-hand side can be simplified via an integration by parts, \begin{align}\MoveEqLeft \int\limits_{B_1(0)} \rho^2 (1+w)^{N+3}\left(1+w+z\cdot\nabla w\right)dz\\ &= \int\limits_{B_1(0)} \rho^2 (1+w)^{N+4}dz + \frac{1}{N+4}\int\limits_{B_1(0)} \rho^2 z\cdot \nabla \left(1+w\right)^{N+4}dz\\ & =\frac{1}{N+4}\int\limits_{B_1(0)}\rho \left(1+w\right)^{N+4}\left(\left(N+4\right)\rho-N\rho+2|z|^2\right)dz\\ & = \frac{2}{N+4}\int\limits_{B_1(0)} \left(1+w\right)^{N+4}\rho\, dz, \end{align} where we have used that $4\rho(z) +2|z|^2=2$ in the last identity. In particular, as $v_*$ is mapped onto $w_*=0$ under the change of variables, the latter identity entails that \begin{align} \int_{\mathbb{R}^N} v_*\, dx = \frac{2}{N+4}\int\limits_{B_1(0)} \rho\, dz\quad \left(= \frac{2|B_1(0)|}{(N+2)(N+4)}\right), \end{align} which can also be verified via an elementary computation. 
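For the reader's convenience, here is the elementary computation of the latter constant: in polar coordinates, using $|\partial B_1(0)| = N|B_1(0)|$,
\begin{align*}
	\int_{B_1(0)} \rho\, dz = \frac{N|B_1(0)|}{2}\int_0^1 \left(1-r^2\right)r^{N-1}\, dr = \frac{N|B_1(0)|}{2}\left(\frac{1}{N}-\frac{1}{N+2}\right) = \frac{|B_1(0)|}{N+2},
\end{align*}
and multiplication by $\frac{2}{N+4}$ gives $\frac{2|B_1(0)|}{(N+2)(N+4)}$ as stated.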
Hence, we may cancel this term on both sides of \eqref{302} to obtain \begin{align} \frac{2}{N+4}\int\limits_{B_1(0)}\left(\left(1+w(t,z)\right)^{N+4}-1\right) \rho(z) dz=0 \quad \text{ for all } t\geq 0. \end{align} Now we notice that any solution to the perturbation equation $w(t)$ converges to leading order to a constant $a\in\mathbb{R}$. Indeed, if $K=0$, the local center manifold constructed in Theorem \ref{localmanifolds} is simply a ball $B_{\varepsilon}(0)$ in $\mathbb{R}$. (We comment on this simple fact briefly in the proof of Theorem \ref{stabilityw}.) Hence, passing to the large time limit, the previous identity translates into $ (1+a)^{N+4}-1=0$, where $|a|\leq \varepsilon$ and thus $a=0$. This proves \eqref{h1}. Applying now Theorem \ref{Whoeheremoden} gives the decay estimate \begin{align} \left\|w(t)\right\|_{W^{1,\infty}}\lesssim e^{-\mu_{1,0}t}. \end{align} Using the transformation formulas \eqref{transformationz} and \eqref{transformationw}, we see that \begin{equation}\label{407} w(t,\Phi(x)) + \frac12w(t,\Phi(x))^2 = \sqrt{v(t,x)} - V_*(x), \end{equation} for any $x\in \supp v(t)$, and the quadratic term on the left-hand side is of higher order because $w(t)$ is small. Therefore, the decay estimate for $w(t)$ implies the first part of the statement \[ \|\sqrt{v(t)}-V_*\|_{L^{\infty}(\supp v(t))} \lesssim e^{-\mu_{1,0}t}. \] Lastly, we turn to the decay of the first derivatives. With the help of \eqref{407} we derive $ \partial_i \left(\sqrt{v(t,x)}-V_*(x)\right) = \left(1+w\right) \nabla w \cdot \partial_i\Phi $. Recalling the transformation formulas \eqref{transformationz} and \eqref{transformationw}, we compute \begin{align}\partial_i\Phi(x) = \frac{e_i}{1+w} - \Phi(x)\frac{\partial_i\left(\sqrt{v(t)}-V_*\right)}{(1+w)^2}\end{align} and thus obtain \begin{align}\label{409} \nabla \left(\sqrt{v(t,x)}-V_*(x)\right) = \frac{1+w(t,\Phi(x))}{1+w(t,\Phi(x))+\Phi(x)\cdot \nabla w(t,\Phi(x))}\nabla w(t,\Phi(x)) \end{align} for all $x\in \supp v(t)$. 
Having this identity at hand, the decay estimate for $w(t)$ in $W^{1,\infty}$ directly yields the second part of the statement, \begin{align} \left\|\nabla \left(\sqrt{v(t)}-V_*\right)\right\|_{L^\infty(\supp v(t))}\lesssim e^{-\mu_{1,0}t}. \end{align} \end{proof} The proofs of Corollaries \ref{firstordercorrection} and \ref{grouptheorycorrections} will build on the fact that the $L^2$-projections of solutions $v(t)$ of the confined thin film equation onto eigenspaces generated by certain eigenfunctions $\psi_{l,n,k}$ vanish for all times if they vanish initially. The following lemma illustrates how exactly this condition can be translated to the perturbation equation. \begin{lemma}\label{4} Let $\psi_{l,n,k}$ be an eigenfunction of the linear operator $\mathcal{L}^2+N\mathcal{L}$ as given in Theorem \ref{spektrum}. Then it holds that \begin{align} \int_{\mathbb{R}^N} \left(v(x)-v_*(x)\right)\psi_{l,n,k}(x)\, dx = 2\int_{B_1(0)} w(z)\psi_{l,n,k}(z)\rho(z)\, dz + \mathcal{O}\left(\|w\|^2_{L^{\infty}}\right), \end{align} provided that $w$ is small in the sense of \eqref{126}. \end{lemma} The lemma, in particular, entails that \[ \left|\langle w, \psi_{l,n,k}\rangle \right| \lesssim \left\| w\right\|_{L^\infty}^2, \] provided that $\int _{\mathbb{R}^N} v \psi_{l,n,k}dx =\int _{\mathbb{R}^N} v_* \psi_{l,n,k}dx$. We will exploit this observation in the sequel. \begin{proof} Theorem \ref{spektrum} shows that every eigenfunction $\psi_{l,n,k}(x)$ is given as a product of a polynomial in $|x|^2$ and a homogeneous harmonic polynomial of degree $l$, that is, \begin{align} \psi_{l,n,k} = \sum \limits_{j=0}^{k} c(l,k,j)|x|^{2j}\psi_{l}(x), \end{align} where $\psi_{l}$ denotes an arbitrary homogeneous harmonic polynomial of degree $l$ and $c(l,k,j)$ a real-valued coefficient. 
Due to this structure of the eigenfunctions, the problem boils down to proving \begin{align}\label{225} \int_{\mathbb{R}^N} \left(v(x)-v_*(x)\right)\psi_l(x)|x|^{2j}dx = 2\int_{B_1(0)} w(z)\psi_l(z)|z|^{2j}\rho(z)dz + \mathcal{O}\left(\|w\|^2_{L^{\infty}}\right), \end{align} for any integer $j\le k$. To address \eqref{225}, we first notice that by our choice of the perturbation variables \eqref{transformationz} and \eqref{transformationw}, it holds that $\psi_l(x) = (1+w(z))^l\psi_l(z)$ and $|x| = (1+w(z))|z|$. Therefore, we find with the help of the transformation identities \eqref{131} and \eqref{130} that \begin{align} \int_{\mathbb{R}^N} v\psi_l |x|^{2j}dx &= \int_{B_1(0)} \rho^2(1+w)^{3+l+2j+N} \psi_l(z) |z|^{2j}\left(1+w+z\cdot \nabla w \right)\,dz\\ &=\int_{B_1(0)} \rho^2(1+w)^{4+l+2j+N}\psi_l(z)|z|^{2j}dz\\ &\quad + \int_{B_1(0)} \rho^2(1+w)^{3+l+2j+N} \psi_l(z)|z|^{2j}z\cdot\nabla w\, dz. \end{align} In the last term on the right-hand side, we integrate by parts and find after a short computation that \begin{align*} \MoveEqLeft \int_{B_1(0)} \rho^2 (1+w)^{3+l+2j+N} \psi_l(z)|z|^{2j}z\cdot\nabla w\, dz\\ & = -\int_{B_1(0)} \rho^2 (1+w)^{4+l+2j+N} \psi_l(z)|z|^{2j}\, dz \\ &\quad + \frac2{4+l+2j+N}\int_{B_1(0)} \rho (1+w)^{4+l+2j+N} \psi_l(z)|z|^{2j}\, dz \end{align*} where we have used the identities $z\cdot \nabla \psi_l = l\psi_l$, which holds true because $\psi_{l}$ is a homogeneous polynomial of degree $l$, and $2\rho+|z|^2=1$. It follows that \[ \int_{\mathbb{R}^N} v\psi_l|x|^{2j}\, dx = \frac2{4+l+2j+N}\int_{B_1(0)} \rho (1+w)^{4+l+2j+N} \psi_l(z)|z|^{2j}\, dz. 
\] Next, we take into account the identity $(1+w)^m=1+mw+\mathcal{O}\left(\|w\|^2_{L^\infty}\right)$, which holds for $m\in \mathbb{N}$ and $\|w\|_{L^{\infty}}$ small by Taylor expansion, and derive \begin{align}\MoveEqLeft \int_{\mathbb{R}^N} v\psi_l|x|^{2j}dx=2 \int_{B_1(0)} \rho w \psi_l|z|^{2j}dz+\frac{2}{4+l+2j+N}\int_{B_1(0)} \rho\psi_l|z|^{2j} dz + \mathcal{O}\left(\|w\|_{L^{\infty}}^2\right). \end{align} It remains to show that \[ \frac{2}{4+l+2j+N}\int_{B_1(0)} \rho \psi_l|z|^{2j}dz = \int_{\mathbb{R}^N} v_*\psi_l|x|^{2j}dx= \frac{1}{2}\int_{B_1(0)} \psi_l\left(1-|x|^2\right)|x|^{2j}\rho\, dx. \] In the case $l\geq1$ both terms vanish thanks to the orthogonality of the eigenfunctions with respect to the inner product introduced in \eqref{124}. Indeed, the harmonic polynomial $\psi_l$ can be written as a linear combination of the eigenfunctions $\psi_{l,n,0}$ with $n\in \{1,\dots,N_l\}$, while the radial weights $|z|^{2j}$ and $ \left(1-|x|^2\right)|x|^{2j}$ lie in the spaces $\huelle\left\{\psi_{0,0,i}:\:i\leq j\right\}$ and $\huelle\left\{\psi_{0,0,i}:\:i\leq j+1\right\}$, respectively. For $l=0$, it holds that $\psi_0=1$ and the claim follows via an elementary computation. This establishes \eqref{225} and thus the proof is finished. \end{proof} With the help of the previous lemma, the proof of Corollary \ref{firstordercorrection} reduces to an easy combination of the already established results. \begin{proof}[Proof of Corollary \ref{firstordercorrection}] As the solution of the confined thin film equation remains centered at the origin provided its initial datum is, cf.~\eqref{motioncenterofmass}, we can make use of Lemma \ref{4} with $\psi_{l,n,k} = \psi_{1,n,0}$ to obtain \begin{align} 0 = \int_{\mathbb{R}^N} x_iv(t,x)dx =2 \int_{B_1(0)} z_i w(t,z)\rho dz +\mathcal{O}\left(\|w\|^2_{L^{\infty}}\right), \end{align} for every $i \in \left\{1,\dots,N\right\}$. 
In the proof of Corollary \ref{exactleadingorder} we already established convergence rates for $w$, namely $\|w\|_{L^{\infty}}\lesssim e^{-\mu_{1,0}t}$. This directly yields \begin{align} \lim \limits_{t\rightarrow \infty}e^{\mu_{1,0}t} \int_{B_1(0)} zw(t,z)\rho dz =0, \end{align} which makes Theorem \ref{Whoeheremoden} applicable. We therefore obtain \begin{align} \|w(t)\|_{W^{1,\infty}}\lesssim e^{-\mu t}, \end{align} where $\mu$ is the next eigenvalue in line, which is $\mu = \mu_{0,1} =30$ if $N=1$ and $\mu = \mu_{2,0} = 16+4N$ if $N\ge 2$. It remains to translate the convergence result for the perturbation equation into a convergence result for the confined thin film equation. The argument proceeds in exactly the same way as the proof of Corollary \ref{exactleadingorder}. We drop the details. \end{proof} The last proof of this section is based on similar ideas and exploits Lemma \ref{4} in more generality. \begin{proof}[Proof of Corollary \ref{grouptheorycorrections}] In a first step we establish the uniform decay estimate $\left\|w(t)\right\|_{L^\infty}\lesssim e^{-\mu_{l,k}t}$, which directly implies $\lim \limits_{t\rightarrow \infty}e^{\mu t}\langle \psi,w(t)\rangle = 0 $ for all $\mu < \mu_{l,k}$ and all corresponding eigenfunctions $\psi$. Towards this uniform estimate, we notice that on the one hand it holds that $|w(t,z)| \lesssim \left|w(t,z)+\frac{1}{2}w(t,z)^2\right|$, because $w(t)$ is small as a consequence of the leading order asymptotics in Corollary \ref{exactleadingorder}. On the other hand, we deduce from the transformation formulas \eqref{transformationz} and \eqref{transformationw} that $\left|w(t,z)+\frac{1}{2}w(t,z)^2\right| = \left| \sqrt{v(t,x)}-V_*(x)\right|$. A combination of both and \eqref{401} gives the estimate on $w(t)$. Before we continue with the proof, we insert a short discussion about the assumptions on the decay of $\sqrt{v(t)}-V_*$, cf.~Remark \ref{R3}. 
Since all eigenmodes corresponding to eigenvalues $\mu$ smaller than $\mu_{l,k}$ decay fast enough, Theorem \ref{Whoeheremoden} provides a decay estimate for $w(t)$ in $W^{1,\infty}$, namely $\left\|w(t)\right\|_{W^{1,\infty}}\lesssim e^{-\mu_{l,k}t}$. Proceeding in the same way as in the proof of Corollary \ref{exactleadingorder}, we obtain $\left\|\sqrt{v(t)}-V_*\right\|_{W^{1,\infty}\left(\supp v(t)\right)}\lesssim e^{-\mu_{l,k}t}$. This shows that extending the norm in the decay assumption in Corollary \ref{grouptheorycorrections} from $L^\infty$ to $W^{1,\infty}$ eventually provides an equivalent condition. Let us now turn back to the actual proof. To deduce a better convergence rate for $w(t)$ from Theorem \ref{Whoeheremoden}, we also have to show that the eigenmodes corresponding to $\mu_{l,k}$ are inactive, that is \begin{align}\label{224} \lim \limits_{t\rightarrow \infty}e^{\mu_{l,k} t}\langle \psi_{l,n,k},w(t)\rangle = 0 \quad \text{ for all } n\in \left\{1,\dots,N_l\right\}. \end{align} Once this is proved, we obtain with the help of Theorem \ref{Whoeheremoden} that $\|w\|_W \lesssim e^{-\mu_+ t}$, where $\mu_+$ is the smallest eigenvalue larger than $\mu_{l,k}$. From this point on, the proof proceeds in the same way as before. Let us now turn to the proof of \eqref{224}. Recalling that $\left\|w\right\|_{L^\infty}\lesssim e^{-\mu_{l,k}t}$ and Lemma \ref{4}, it suffices to prove \begin{align}\label{226} \int_{\mathbb{R}^N} \left(v(t,x)-v_*(x)\right)\psi_{l,n,k}(x)\, dx =0 \quad \text{ for all }n\in \left\{1,\dots,N_l\right\}, \end{align} for all $t\geq 0$. The argument for this identity is based on the invariance of $v(t)$ under orthogonal transformations contained in $E$. Since the confined thin film equation is invariant under orthogonal transformations, uniqueness of solutions to this equation guarantees that the solution $v(t)$ inherits this property from its initial datum $v_0$ for every time $t$. 
By the right choice of $E$, this geometric invariance ensures that the projection of $v(t)$ onto every homogeneous harmonic polynomial of degree $l$ vanishes. The same trivially holds true for $v_*$. In order to exploit this fact, we have a closer look at the structure of the eigenfunctions $\psi_{l,n,k}$ appearing in \eqref{226}. Due to the condition that $\mu_{l,k}$ has multiplicity $N_l$, we know from Theorem \ref{spektrum} that every $\psi_{l,n,k}$ has the form \begin{align} \psi_{l,n,k} = \sum \limits_{j=0}^{k} c(l,k,j)|x|^{2j}\psi_{l}(x), \end{align} where $\psi_l$ denotes a homogeneous harmonic polynomial of degree $l$. Note that the product $v(t) \sum c(l,k,j)|x|^{2j}$ satisfies the same geometrical properties as $v(t)$ and thus its projection onto every homogeneous harmonic polynomial vanishes as well, i.e., \begin{align} 0 = \int_{\mathbb{R}^N} v(t,x)\sum \limits_{j=0}^{k} c(l,k,j)|x|^{2j}\psi_{l}(x)\,dx = \int_{\mathbb{R}^N}v(t,x)\psi_{l,n,k} \,dx. \end{align} Again, the same holds true for $v_*$ and thus the proof of \eqref{226} is completed. \end{proof} \begin{remark}\label{R2} In the two-dimensional case $N=2$, Corollary \ref{grouptheorycorrections} can also be easily proved in a more direct way thanks to the fact that both the spherical harmonics and the orthogonal transformations have handy explicit forms in two dimensions. The spherical harmonics of degree $l$ are given in polar coordinates by $\cos(l\varphi)$ and $\sin(l\varphi)$. Recalling the form of a rotation or reflection matrix, a straightforward computation yields the same results as Corollary \ref{grouptheorycorrections}. However, this strategy becomes impracticable in higher dimensions, particularly because there is no longer such a convenient representation for general orthogonal transformations. 
\end{remark} \section{Theory for the perturbation equation}\label{theoryforperturbationequation} In this section, we will recall the main aspects of the theory for the perturbation equation \eqref{perturbationequation} derived earlier in \cite{SeisTFE}, and we will provide higher order regularity estimates. Such estimates will be an important tool in our invariant manifold theory, which we will develop in the subsequent sections. We start by recalling that the operator $\mathcal{L}$ is symmetric in $L^2(\rho)$ and satisfies the maximal regularity estimate \begin{align}\label{b2} \|\nabla w\|+\|\rho\nabla^2w\| \lesssim \|\mathcal{L}w\|. \end{align} Indeed, such an estimate holds true for the more general class of degenerate elliptic operators \begin{equation}\label{504} \mathcal{L}_\sigma w \coloneqq -\rho^{-\sigma}\nabla \cdot \left(\rho^{\sigma+1}\nabla w\right), \end{equation} that naturally occur in the context of the porous medium equation, see \cite{KochHabilitation,Kienzler16,Seis14,Seis15}. In this case, the underlying Hilbert space is $L^2\left(\rho^\sigma\right)$. We state the corresponding maximal regularity estimate for the fourth order linear problem associated to the perturbation equation \eqref{perturbationequation}, that is, \begin{align}\label{b1} \begin{cases} \partial_tw+\mathcal{L}^2w+ N \mathcal{L}w &= f \quad \text{ in } (0,\infty)\times B_1(0)\\ w(0,\cdot)&=w_0 \quad \text{ in } B_1(0). \end{cases} \end{align} This problem is well-posed for $L^2(\rho)$ initial data and $L^2((0,T);L^2(\rho))$ inhomogeneities, see Lemma 7 in \cite{SeisTFE}. 
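We remark in passing that expanding the divergence in \eqref{504} with the help of $\nabla\rho = -z$ gives the pointwise representation
\begin{align*}
	\mathcal{L}_\sigma w = -\rho\Delta w + (\sigma+1)\, z\cdot \nabla w,
\end{align*}
which for $\sigma=1$ recovers the operator $\mathcal{L}w = -\rho\Delta w + 2z\cdot\nabla w$ from the perturbation equation \eqref{perturbationequation}, so that $\mathcal{L}=\mathcal{L}_1$.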
In the case with zero initial data, $w_0=0$, there is the maximal regularity estimate \begin{equation}\label{112} \begin{aligned} \MoveEqLeft \|\partial_tw\|_{L^p\left(\left(0,T\right);L^p(\rho^\sigma)\right)}+\|\nabla^2w\|_{L^p\left((0,T);L^p(\rho^\sigma)\right)}+\|\rho\nabla^3w\|_{L^p\left((0,T);L^p(\rho^\sigma)\right)}+\|\rho^2\nabla^4w\|_{L^p\left((0,T);L^p(\rho^\sigma)\right)} \\ &\lesssim \|f\|_{L^p\left((0,T);L^p(\rho^\sigma)\right)}, \end{aligned} \end{equation} which holds true for any $p\in \left(1,\infty\right)$, $\sigma >0$ and $T>0$, see Lemma 8 and Proposition 19 (and its proof) in \cite{SeisTFE}. In order to motivate the results that are collected and derived in the following, we have a closer look at the nonlinearity occurring in \eqref{nonlinearityperturbationequation}. The natural framework to prove well-posedness of the nonlinear problem \eqref{perturbationequation} is the class $C^{0,1}(B_1(0))$, in which the singular terms $R[w]$ can be suitably controlled, at least if $w$ is small in that class. Moreover, in such a situation, the nonlinearity is of the same regularity order as the linear elliptic operator $\L^2 $, and the inhomogeneity can thus be treated as a quadratic perturbation term. We will carry this out in a simple Hilbert space setting later in Section \ref{S5} (after a necessary truncation). A complete theory for the nonlinear equation \eqref{perturbationequation} forces us to construct higher order norms that match the scaling of the (homogeneous) Lipschitz norm. 
This naturally leads to considering Carleson or Whitney measures, more precisely \begin{align} \|w\|_{X(p)}&=\sum\limits_{(l,k,|\beta|)\in \mathcal{E}} \sup\limits_{\substack{z\in\overline{B_1(0)}\\0<r\leq1}} \frac{r^{4k+|\beta|-1}}{\theta(r,z)^{2l-|\beta|+1}}\left|Q_r^d(z)\right|^{-\frac{1}{p}}\|\rho^l\partial_t^k\partial_z^\beta w\|_{L^p\left({Q_r^d(z)}\right)}\\ &\quad +\sum\limits_{(l,k,|\beta|)\in \mathcal{E}} \sup\limits_{T\geq1} \|\rho^l\partial_t^k\partial_z^\beta w\|_{L^p\left(Q(T)\right)},\\ \|f\|_{Y(p)}&= \sup\limits_{\substack{z\in\overline{B_1(0)}\\0<r\leq1}} \frac{r^3}{\theta(r,z)}\left|Q_r^d(z)\right|^{-\frac{1}{p}}\|f\|_{L^p\left(Q_r^d(z)\right)} +\sup\limits_{T\geq1} \|f\|_{L^p\left(Q(T)\right)}, \end{align} where $ \mathcal{E}=\left\{ (0,1,0),(0,0,2),(1,0,3),(2,0,4) \right\}$ and $\theta(r,z) = \max\{r,\sqrt{\rho(z)}\}$. Moreover, $Q_r^d(z)$ is the Whitney cube $(r^4/2,r^4)\times B_r^d(z)$ and $Q(T) = (T,T+1)\times B_1(0)$. We remark that the balls $B_r^d(z)=\left\{z'\in \overline{B_1(0)}: d(z,z')<r\right\}$ are not defined with respect to the Euclidean metric on $B_1(0)$ but with respect to the semi-distance \begin{align}\label{500} d(z,z')\coloneqq \frac{|z-z'|}{\sqrt{\rho(z)}+\sqrt{\rho(z')}+\sqrt{|z-z'|}}. \end{align} The occurrence of this semi-distance can be motivated by interpreting the parabolic problem \eqref{b1} as a (fourth order) heat flow on a weighted Riemannian manifold $(\mathcal{M},\textbf{g},\omega \textbf{vol})$, cf.~\cite{Grigoryan06}. Indeed, considering $\textbf{g}=\rho^{-1}(dx)^2$ as the Riemannian metric on the ball $B_1(0)$ and choosing a suitable weight $\omega$ on the volume form, the elliptic operator $\L$ turns out to be the Laplace--Beltrami operator on $(\mathcal{M},\textbf{g},\omega \textbf{vol})$. On this manifold, the induced geodesic distance is equivalent to $d(z,z')$ in \eqref{500}. Considering this \emph{intrinsic metric} is helpful as the theories for heat flows are often also available on weighted manifolds \cite{Grigoryan06}. 
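To build some intuition for the semi-distance \eqref{500}, it may help to separate the two regimes that its denominator interpolates; the following heuristics are made precise by the ball equivalences recalled next. In the interior, where $\rho(z)\sim\rho(z')\gg |z-z'|$, one has
\begin{align*}
	d(z,z')\sim \frac{|z-z'|}{\sqrt{\rho(z)}},
\end{align*}
so that $d$ is a locally rescaled Euclidean distance, while close to the boundary, where $\rho(z),\rho(z')\lesssim |z-z'|$, one has
\begin{align*}
	d(z,z')\sim \sqrt{|z-z'|},
\end{align*}
so that intrinsic balls $B_r^d(z)$ centered near the boundary have Euclidean radius of order $r^2$, in accordance with $\theta(r,z)\sim r$ there.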
For the subsequent computations, we recall some properties of the intrinsic distance from \cite{Seis15}: The intrinsic balls are equivalent to Euclidean balls, more precisely, there exists a positive constant $C$ such that \begin{align}\label{equivalenceballs} B_{C^{-1}r\theta(r,z)}(z)\subseteq B_r^d(z)\subseteq B_{Cr\theta(r,z)}(z) \end{align} for every $z$ in $\overline{B_1(0)}$ and any $r$. Furthermore, it holds for any $r$ that \begin{align} \sqrt{\rho(z')}\lesssim r \quad \Rightarrow\quad \sqrt{\rho(z)}\lesssim r \quad \text{ for all } z\in B_r^d(z') \end{align} and \begin{align} \sqrt{\rho(z')}\gg r \quad \Rightarrow \quad \rho(z)\sim \rho(z') \quad \text{ for all } z\in B_r^d(z') \end{align} which, in particular, implies that \begin{equation}\label{502} \theta(r,\cdot)\sim \theta(r,z')\quad\mbox{in }B_r^d(z'). \end{equation} Variants of these norms were considered earlier in the treatment of the Navier--Stokes equations, a class of geometric flows, the porous medium equation and the thin film equation \cite{KOCH200122,Koch2012,Kienzler16,John15,Seis15}, see also the review in \cite{KochLamm15}. The choice of the large time contributions is rather arbitrary, see also Remark \ref{R1}. Still on the level of the linear equation \eqref{b1}, it is proved in \cite{SeisTFE} that for any $p>N+4$, the solution to \eqref{b1} satisfies the estimate \begin{align}\label{113} \|w\|_{W^{1,\infty}} +\|w\|_{X(p)}\lesssim \|f\|_{Y(p)} + \|w_0\|_{W^{1,\infty}}, \end{align} provided that the right-hand side is finite. The well-posedness theory for the perturbation equation \eqref{perturbationequation} and our higher-order regularity estimate below rely heavily on this bound. For further reference, we recall the main results for \eqref{perturbationequation} from the literature. \begin{theorem}[\cite{SeisTFE}]\label{maintheoremseis} Let $p>N+4$ be given. 
There exists $\varepsilon_0 >0$ such that for every $w_0\in W^{1,\infty}$ with $ \|w_0\|_{W^{1,\infty}} \leq \varepsilon_0$ there exists a solution $w$ to the nonlinear equation \eqref{perturbationequation} with initial datum $w_0$ and $w$ is unique among all solutions with $\|w\|_{L^\infty(W^{1,\infty})} + \|w\|_{X(p)} \lesssim \varepsilon_0$. Moreover, this solution $w$ satisfies the estimate \begin{align}\label{estimateagainstinitialdata} \|w\|_{L^\infty(W^{1,\infty})} + \|w\|_{X(p)}\lesssim \|w_0\|_{W^{1,\infty}} \end{align} and is smooth, and analytic in time and in the angular direction. \end{theorem} Strictly speaking, the results described here slightly differ from those in \cite{SeisTFE}. \begin{remark}\label{R1} For accuracy, we remark that in \cite{SeisTFE}, the linear bound \eqref{113} and the nonlinear theory in Theorem \ref{maintheoremseis} were derived for slightly different $X(p)$ and $Y(p)$ norms. Indeed, in this earlier work the large time contributions $ \|\rho^l\partial_t^k\partial_z^\beta w\|_{L^p\left(Q(T)\right)}$ and $ \|f\|_{L^p\left(Q(T)\right)}$ both came with an additional factor $T$. With regard to the theory developed in the present paper, dropping this factor is more convenient. \end{remark} In the present paper, we have to extend the theory from $C^{0,1}$ data to a higher regularity setting. Indeed, it turns out that the truncation that we introduce on the level of the nonlinearity in Section \ref{S5} needs to cut off derivatives up to third order. In order to subsequently relate the truncated equation to the original one \eqref{perturbationequation}, these derivatives need to be controlled by the initial data. We will choose the uniform higher-order norms whose homogeneous parts have the same scaling as the homogeneous Lipschitz norm at the boundary, $\|\cdot\|_W$, which we introduced in \eqref{125}. Our main contribution in the present section is the following higher order regularity result. 
\begin{theorem}\label{dritteableitung} There exists $\varepsilon_0>0$, possibly smaller than in Theorem \ref{maintheoremseis}, such that for every $w_0\in W^{1,\infty}$ with $ \|w_0\|_{W}\leq \varepsilon_0$, the unique solution $w$ from Theorem \ref{maintheoremseis} satisfies \begin{align} \|w\|_W \lesssim \|w_0\|_W. \end{align} \end{theorem} \begin{proof} \emph{Step 1. Second order derivatives.} We will prove the slightly stronger bound \begin{align}\label{114} \|\rho \nabla^2 w\|_{L^{\infty}} + \|\rho \nabla w\|_{X(p)} \lesssim \|w_0\|_{W^{1,\infty}}+\|\rho \nabla^2w_0\|_{L^\infty}. \end{align} For this purpose, for every $i=1,\dots,N$, we consider the dynamics of $\rho \partial_i w$ under the nonlinear equation \eqref{perturbationequation}, that is, \[ \partial_t(\rho \partial_iw)+\mathcal{L}^2(\rho \partial_iw)+N\mathcal{L}(\rho \partial_iw)=\rho \partial_if[w] + NE[w]+\mathcal{L}E[w] + E[\mathcal{L}w], \] where $E[v] = -\rho z_i \Delta v - 2\rho \partial_iv + (N\rho -2|z|)\partial_iv+ 2\rho z \cdot \nabla\partial_iv$ is the commutator of the operators $\rho\partial_i $ and $ \L$, and this equation is equipped with the initial datum $\rho \partial_iw_0.$ From the a priori bound in \eqref{113}, we know that \[ \|\rho \partial_iw\|_{W^{1,\infty}} +\|\rho \partial_iw\|_{X(p)}\lesssim \|\rho \partial_if[w]\|_{Y(p)} + \|NE[w]+\mathcal{L}E[w] + E[\mathcal{L}w]\|_{Y(p)}+ \|\rho \partial_iw_0\|_{W^{1,\infty}}. \] In view of the bound from Theorem \ref{maintheoremseis}, in order to prove \eqref{114} it suffices thus to prove that \begin{equation}\label{a3} \begin{aligned} \MoveEqLeft \|\rho \partial_if[w]\|_{Y(p)} + \|NE[w]+\mathcal{L}E[w] + E[\mathcal{L}w]\|_{Y(p)}\\ & \lesssim \|w\|_{W^{1,\infty}} + \|w\|_{X(p)} + \varepsilon_0 \left( \|\rho \nabla w\|_{W^{1,\infty}} + \|\rho \nabla w\|_{X(p)} \right), \end{aligned} \end{equation} and to choose $\varepsilon_0$ sufficiently small.
From \cite{SeisTFE} we are aware of another form of the nonlinearity $f[w]$ of the perturbation equation \eqref{perturbationequation}, namely $f[w]=f^1[w]+f^2[w]+f^3[w],$ where \begin{align} f^1[w] &= p \star R[w]\star\left(\left(\nabla w\right)^{2\star}+\nabla w \star \nabla^2w \right),\\ f^2[w] & = p\star R[w]\star \rho \left(\left(\nabla^2w\right)^{2\star}+ \nabla^3w\star \nabla w\right),\\ f^3[w]& =p \star R[w]\star \rho^2 \left(\left( \nabla^2w\right)^{3\star} +\nabla^2w\star \nabla^3w+ \nabla w\star \nabla^4w \right), \end{align} and \[ R[w]= \frac{\left(\nabla w\right)^{k\star}}{\left( 1+w +z\cdot \nabla w\right)^{l}} \] for some $k \in \mathbb{N}_0$, $l\in\mathbb{N}$, whose values may differ in each occurrence of $R[w]$. (Of course, the reader may derive this presentation also directly from \eqref{perturbationequation} and \eqref{nonlinearityperturbationequation}.) The computation of derivatives of these expressions is tedious but straightforward. As an auxiliary result we notice that $\nabla R[w] = p\star R[w] + p\star R[w]\star \nabla^2 w$. Here are the final formulas: \begin{align*} \partial_i f^1[w]& =p\star R[w]\star\left( (\nabla w)^{2\star} +\nabla w\star \nabla^2 w + (\nabla^2 w)^{2\star} +\nabla w\star\nabla^3 w\right),\\ \partial_i f^2[w] &=p\star R[w]\star\left( (\nabla^2 w)^{2\star} +\nabla w\star \nabla^3 w + \rho (\nabla^2 w)^{3\star} +\rho \nabla^2 w\star\nabla^3 w + \rho\nabla w\star\nabla^4w\right),\\ \partial_i f^3[w] & = p\star R[w]\star\left(\rho(\nabla^2 w)^{3\star} + \rho\nabla^2 w\star\nabla^3 w+\rho\nabla w\star\nabla^4w + \rho^2 (\nabla^2 w)^{4\star} \right.\\ &\quad +\left. \rho^2 \nabla w\star \nabla^5 w+ \rho^2 (\nabla^2 w)^{2\star}\star \nabla^3w +\rho^2 (\nabla^3 w)^{2\star} +\rho^2 \nabla^2 w\star\nabla^4 w\right).
\end{align*} Combining them, and multiplying by $\rho$, we thus find that \[ \rho\partial_i f[w] = p\star R[w]\star\left( I + J \right), \] where \begin{align*} I & = (\nabla w)^{2\star} +\rho \nabla w\star\nabla^2 w +\rho (\nabla^2 w)^{2\star} +\rho \nabla w\star\nabla^3 w +\rho^2 \nabla^2 w \star\nabla^3 w\\ &\quad + \rho^2 \nabla w\star \nabla^4 w +\rho^2 \nabla^2 w\star\nabla^3 w + \rho^3 \nabla^2 w\star\nabla^4 w + \rho^3 \nabla w\star\nabla^5 w,\\ J & = \rho^3 (\nabla^3 w)^{2\star} +\rho^2 (\nabla^2 w)^{3\star} +\rho^3 (\nabla^2 w)^{2\star} \star\nabla^3 w + \rho^3 (\nabla^2 w)^{4\star} . \end{align*} Because $|p\star R[w]|\lesssim 1$ thanks to the control of $w$ and $\nabla w$ during the evolution, in our estimate of $\rho \partial_i f[w]$ it is enough to control $I$ and $J$. Here, the first term is much easier to handle. Indeed, using the fact that $\|\nabla w\|_{Y(p)} \lesssim \|\nabla w\|_{L^{\infty}}$ and $\|\nabla^2 w\|_{Y(p)} + \|\rho \nabla^3w\|_{Y(p)}+\|\rho^2\nabla^4w\|_{Y(p)} \lesssim \|w\|_{X(p)} $, which comes directly out of the definition of the $Y(p)$ norm, and invoking the a priori estimate in Theorem \ref{maintheoremseis}, we readily find that \begin{align*} \|I\|_{Y(p)} &\lesssim \left(\|\nabla w\|_{L^{\infty}} + \|w\|_{X(p)}\right)\left(\|\nabla w\|_{L^{\infty}} + \|\rho \nabla^2 w\|_{L^{\infty}} + \|\rho^3 \nabla^5 w\|_{Y(p)}\right)\\ & \lesssim \|w_0\|_{W^{1,\infty}} + \varepsilon_0 \left( \|\rho \nabla^2 w\|_{L^{\infty}} + \|\rho^3 \nabla^5 w\|_{Y(p)}\right). \end{align*} The estimates of the terms appearing in $J$ are more involved as we have to make use of suitable interpolations. Some were already discussed in \cite{SeisTFE}, but we present the ideas here for the convenience of the reader. Let $\eta$ be a smooth cut-off function satisfying $\eta = 1$ in $B_r^d(z_0)$ and $\eta =0$ outside $B_{2r}^d(z_0)$ for $r\leq 1$. 
Inside the ball $B_r^d(z_0)$, we then have that \[ \rho^3 |\nabla^3 w|^2 \lesssim \rho |\nabla \zeta|^2 + \rho |\nabla^2 w|^2, \] where $\zeta =\eta \rho \nabla^2w$. It follows that \begin{align*} \|\rho^3 |\nabla^3 w|^2\|_{L^p\left(B_r^d(z_0)\right)} & \lesssim \|\rho |\nabla \zeta|^2\|_{L^p} + \|\rho |\nabla^2 w|^2\|_{L^p\left(B_r^d(z_0)\right)}. \end{align*} To estimate the first term on the right hand side, we make use of the interpolation inequality \eqref{408} with $m=2$ and $i=1$ in Lemma \ref{interpolationintequality} of the appendix and find \begin{align*} \|\rho |\nabla\zeta|^2\|_{L^p} & = \|\nabla \zeta\|_{L^{2p}\left(\rho^p\right)}^2 \lesssim \|\zeta\|_{L^{\infty}} \|\nabla^2\zeta\|_{L^p\left(\rho^p\right)}. \end{align*} We then deduce from the definition of $\zeta$, by using Leibniz' rule and the fact that $|\nabla^k \eta|\lesssim r^{-k}\theta(r,z_0)^{-k}$, which follows from the behavior of the intrinsic balls in \eqref{equivalenceballs}, that \begin{align} \|\rho |\nabla\zeta|^2\|_{L^p} & \lesssim \|\rho \nabla^2 w\|_{L^{\infty}} \left( \|\rho^2 \nabla^4 w\|_{L^p\left(B_{2r}^d(z_0)\right)} + \|\rho \nabla^3 w\|_{L^p(B_{2r}^d(z_0))} \right.\\ &\quad +\frac1{r\theta(r,z_0)}\|\rho \nabla^2 w\|_{L^p\left(B_{2r}^d(z_0)\right)}+\frac1{r\theta(r,z_0)}\|\rho^2 \nabla^3 w\|_{L^p\left(B_{2r}^d(z_0)\right)}\\ &\quad \left.+\frac1{r^2\theta(r,z_0)^2}\|\rho^2 \nabla^2 w\|_{L^p\left(B_{2r}^d(z_0)\right)}\right). \end{align} The $\rho$'s can always be pulled out of the norms by estimating against $\theta(r,z_0)^2$, because $\theta(r,z_0)\sim \theta(r,z)=\max\left\{r,\sqrt{\rho(z)}\right\}$ by \eqref{502}.
In view of the definitions of the $Y(p)$ and $X(p)$ norms, we then deduce that \[ \|\rho^3 |\nabla^3 w|^2 \|_{Y(p)} \lesssim \|w\|_{X(p)} \|\rho \nabla^2 w\|_{L^{\infty}} + \|\rho |\nabla^2 w|^2\|_{Y(p)}, \] and the second term can be estimated as in our bound for $I$, so that we find \begin{equation} \label{117} \|\rho^3 |\nabla^3 w|^2 \|_{Y(p)} \lesssim \varepsilon_0 \|\rho \nabla^2 w\|_{L^{\infty}} \end{equation} thanks to the estimates from Theorem \ref{maintheoremseis}. The second term in $J$ can be estimated very similarly. This time we choose $\zeta = \eta \nabla w$ and eventually arrive at \begin{equation} \label{115} \|\rho^2 |\nabla^2 w|^3\|_{Y(p)} \lesssim \|\nabla w\|_{L^{\infty}}^2 \left(\|\nabla w\|_{L^{\infty}} + \|w\|_{X(p)}\right) \lesssim \|w_0\|_{W^{1,\infty}}, \end{equation} thanks to the a priori estimates in Theorem \ref{maintheoremseis}. (Notice that details for this estimate can be found in \cite{SeisTFE}.) The latter bound also entails an estimate for the fourth term in $J$. Indeed, we have \begin{equation} \label{116} \|\rho^3 |\nabla^2 w|^4\|_{Y(p)} \le \|\rho \nabla^2 w\|_{L^{\infty}} \|\rho^2 |\nabla^2 w|^3\|_{Y(p)}\lesssim \|w_0\|_{W^{1,\infty}}\|\rho \nabla^2 w\|_{L^{\infty}} \le \varepsilon_0 \|\rho \nabla^2 w\|_{L^{\infty}}. \end{equation} Finally, in order to bound the third term in $J$, we interpolate between \eqref{117} and \eqref{116}. Altogether, we find the estimate \[ \|J\|_{Y(p)} \lesssim \|w_0\|_{W^{1,\infty}} + \varepsilon_0 \|\rho \nabla^2 w\|_{L^{\infty}}. \] Our estimates on $I$ and $J$ yield the desired control on $\rho \partial_i f[w]$.
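For the convenience of the reader, the interpolation used for the third term in $J$ can be spelled out; it is simply Young's inequality applied pointwise, \[ \rho^3 |\nabla^2 w|^{2}|\nabla^3 w| \le \tfrac12\, \rho^3|\nabla^3 w|^2 + \tfrac12\, \rho^3|\nabla^2 w|^4, \] so that \[ \|\rho^3 (\nabla^2 w)^{2\star}\star\nabla^3 w\|_{Y(p)} \lesssim \|\rho^3 |\nabla^3 w|^2\|_{Y(p)} + \|\rho^3 |\nabla^2 w|^4\|_{Y(p)}, \] and both terms on the right-hand side are controlled by \eqref{117} and \eqref{116}.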
To prove the full statement in \eqref{a3}, it remains only to choose $\varepsilon_0$ small enough and to notice that \begin{align} |NE[w]+\mathcal{L}E[w] + E[\mathcal{L}w]| \lesssim |\rho^2\nabla^4w|+|\rho \nabla^3w|+|\nabla^2w|+|\nabla w|, \end{align} which provides \[ \|NE[w]+\mathcal{L}E[w] + E[\mathcal{L}w]\|_{Y(p)} \lesssim \|w\|_{W^{1,\infty}} + \|w\|_{X(p)} \lesssim \|w_0\|_{W^{1,\infty}} \] in a similar manner as before. This finishes the proof. \medskip \emph{Step 2. Third order derivatives.} The proof of the estimates proceeds analogously to the first step, only this time, many more terms have to be considered. For every $i,j=1,\dots,N$ we consider the dynamics of $\rho \partial_j(\rho \partial_i w)$, that is, \begin{align}\MoveEqLeft \partial_t\left(\rho \partial_j(\rho \partial_iw)\right) + \mathcal{L}^2\left(\rho \partial_j(\rho \partial_iw)\right)+N\mathcal{L}\left(\rho \partial_j(\rho \partial_iw)\right) \\ &=\rho \partial_j\left(\rho \partial_if[w]\right) +\rho \partial_j \left(NE[w]+\mathcal{L}E[w] + E[\mathcal{L}w]\right) \\ &\quad + NE[\rho \partial_iw]+\mathcal{L}E[\rho \partial_iw] + E[\mathcal{L}(\rho \partial_iw)], \end{align} which is equipped with the initial datum $\rho \partial_j(\rho \partial_iw_0)$.
Again, thanks to the a priori bound \eqref{113}, we know that \begin{align}\MoveEqLeft \|\rho\partial_j\left(\rho \partial_iw\right)\|_{W^{1,\infty}}+ \|\rho\partial_j\left(\rho \partial_iw\right)\|_{X(p)}\\ &\lesssim \|\rho \partial_j\left(\rho \partial_if[w]\right)\|_{Y(p)}+\|\rho \partial_j \left(NE[w]+\mathcal{L}E[w] + E[\mathcal{L}w]\right)\|_{Y(p)}\\ &\quad +\|NE[\rho \partial_iw]+\mathcal{L}E[\rho \partial_iw] + E[\mathcal{L}(\rho \partial_iw)]\|_{Y(p)}+\|\rho \partial_j\left(\rho \partial_iw_0\right)\|_{W^{1,\infty}}, \end{align} which can be rewritten as \begin{align}\MoveEqLeft \|\rho^2 \partial_{ij}^2 w \|_{W^{1,\infty}}+ \|\rho^2 \partial_{ij}^2 w \|_{X(p)}\\ &\lesssim \|\rho^2 \partial_{ij}^2 f[w] \|_{Y(p)}+\|\rho \partial_j \left(NE[w]+\mathcal{L}E[w] + E[\mathcal{L}w]\right)\|_{Y(p)}\\ &\quad +\|NE[\rho \partial_iw]+\mathcal{L}E[\rho \partial_iw] + E[\mathcal{L}(\rho \partial_iw)]\|_{Y(p)}+\| w_0\|_{W }, \end{align} by virtue of the second order derivative estimate \eqref{114}. The linear terms are, again, relatively easy to bound, as we have \begin{align*} \MoveEqLeft |NE[\rho\partial_i w] +\L E[\rho\partial_i w] + E[\L(\rho\partial_i w)]| + |\rho\partial_j \left(NE[w] + \L E[w] +E[\L w]\right)|\\ & \lesssim \rho^3 |\nabla^5 w| +\rho^2 |\nabla^4 w| + \rho |\nabla^3 w| + |\nabla^2 w| + |\nabla w|, \end{align*} and thus, the $Y(p)$ norm of the linear terms is controlled by the $X(p)$ and $L^{\infty}$ norms of $w$ and $\rho\nabla w$, which are in turn bounded by $\|w_0\|_W$ by virtue of Theorem \ref{maintheoremseis} and the second order estimates in \eqref{114}. Let us thus focus on the nonlinear terms.
They take the form \begin{align*} \rho^2 \partial_{ij}^2 f[w] & = p\star R[w]\star K , \end{align*} where \begin{align*} K & = \rho (\nabla w)^{2\star} + \rho \nabla w\star \nabla^2 w + \rho (\nabla^2 w)^{2\star} + \rho \nabla w \star \nabla^3 w + \rho^2 (\nabla^2 w)^{3\star} +\rho^2 \nabla^2 w\star\nabla^3 w \\ &\quad+ \rho^2 \nabla w\star\nabla^4 w+ \rho^3 (\nabla^2 w)^{2\star} \star\nabla^3 w+\rho^3 (\nabla^2 w)^{4\star} + \rho^3 \nabla^2 w\star \nabla^4 w +\rho^3 \nabla w\star\nabla^5 w \\ &\quad + \rho^4 \nabla w \star\nabla^6 w+ \rho^4 \nabla^2 w \star \nabla^5 w + \rho^4 (\nabla^2 w)^{2\star} \star \nabla^4 w+ \rho^4 (\nabla^2 w)^{3\star}\star\nabla^3 w\\ &\quad +\rho^4 (\nabla^2 w)^{3\star}\star \nabla^3 w+ \rho^4 (\nabla^2 w)^{5\star} + \rho^3 (\nabla^3 w)^{2\star} + \rho^4 \nabla^2 w \star (\nabla^3 w)^{2\star} + \rho^4 \nabla^3 w\star\nabla^4 w , \end{align*} as the reader may check in a lengthy but straightforward exercise. The bound of $K$ is surprisingly simple as, thanks to the second order estimates \eqref{114}, no interpolations have to be performed. We simply have \begin{align*} \MoveEqLeft \|K\|_{Y(p)}\\ & \lesssim \left(\|\nabla w\|_{L^{\infty}} + \sum_{k=1}^4\|\rho \nabla^2 w\|_{L^{\infty}}^k\right) \left(\|\nabla w\|_{L^{\infty}} + \|w\|_{X(p)} + \|\rho \nabla w\|_{X(p)}\right) \\ &\quad + \|\nabla w\|_{L^{\infty}} \|\rho^4 \nabla^6 w\|_{Y(p)} + \|w\|_{X(p)} \|\rho^2 \nabla^3 w\|_{L^{\infty}} + \|\rho \nabla^2 w\|_{L^{\infty}} \|w\|_{X(p)} \|\rho^2 \nabla^3 w\|_{L^{\infty}}\\ &\lesssim \|w_0\|_{W^{1,\infty}} + \|\rho\nabla^2 w_0 \|_{L^{\infty}} + \varepsilon_0\left( \|\rho^2 \nabla^2 w\|_{L^{\infty}} + \|\rho^2 \nabla^2 w\|_{X(p)}\right), \end{align*} where we invoked the second order estimates \eqref{114} and the a priori estimates from Theorem \ref{maintheoremseis} in the second inequality. We derive the statement of the theorem by choosing $\varepsilon_0$ sufficiently small. 
\end{proof} \section{The truncated problem}\label{S5} The particular form of the nonlinearity limits the well-posedness theory for the Cauchy problem for \eqref{perturbationequation} to a small neighborhood of the trivial solution $w\equiv 0$. It follows that the resulting semi-flow is necessarily {\em local}. In order to construct a \emph{global} semi-flow, whose existence simplifies the construction of invariant manifolds significantly, it is customary to consider a truncated version of the perturbation equation. We thus introduce a cut-off function that eliminates the nonlinear terms (locally) near points where the solution $w$, or one of its (suitably weighted) derivatives, is too large. This way, the equation becomes linear at these points. The cut-off remains inactive as long as the solution is globally small with respect to $\|\cdot \|_W$, which is the case for solutions of the perturbation equation for sufficiently small initial datum due to Theorem \ref{dritteableitung}. To make this truncation more precise we recall that the perturbation equation reads \begin{align}\label{100} \partial_tw+\mathcal{L}^2w+N\mathcal{L}w=\rho^{-1}\nabla \cdot\left( \rho^2 F[w]\right)+\rho F[w], \end{align} where the nonlinear terms are schematically given by \begin{align}\label{101} F[w] = p\star R_l[w]\star\left(\rho\nabla^3w\star\nabla w+ \rho(\nabla^2w)^{2\star} + \nabla^2w\star\nabla w+ (\nabla w)^{2\star}\right), \end{align} cf.~\eqref{perturbationequation} and \eqref{nonlinearityperturbationequation}. Let $\hat{\eta}:[0,\infty) \rightarrow [0,1]$ be a smooth cut-off function that is supported on $[0,2)$ with $\hat{\eta}(x) =1$ if $0 \leq x\leq1$.
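Such a profile can be written down explicitly; one standard (by no means unique) choice is \[ \hat{\eta}(x)=\frac{g(2-x)}{g(2-x)+g(x-1)},\qquad g(s)=\begin{cases} e^{-1/s} & \text{for } s>0,\\ 0 & \text{for } s\le 0,\end{cases} \] which is smooth because the denominator never vanishes on $[0,\infty)$, equals $1$ on $[0,1]$ and vanishes for $x\ge 2$.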
For $\varepsilon \in (0,1)$, we define \begin{align} \eta_\varepsilon = \eta_\varepsilon\left[w,\nabla w,\rho\nabla^2w,\rho^2\nabla^3w\right] \coloneqq \hat{\eta}\left(\frac{w^2}{\varepsilon^2}\right)\hat{\eta}\left(\frac{|\nabla w|^2}{\varepsilon^2}\right)\hat{\eta}\left(\frac{\left|\rho\nabla^2w\right|^2}{\varepsilon^2}\right)\hat{\eta}\left(\frac{\left|\rho^2\nabla^3w\right|^2}{\varepsilon^2}\right). \end{align} The truncated problem we consider now is the following: \begin{align}\label{c3} \partial_tw+\mathcal{L}^2w+N\mathcal{L}w=\rho^{-1}\nabla \cdot\left( \rho^2 F_{\varepsilon}[w]\right)+\rho F_{\varepsilon}[w],\quad F_{\varepsilon} = \eta_{\varepsilon}F. \end{align} It is clear that this equation coincides with \eqref{perturbationequation} as long as all terms $\left|w\right|$, $\left|\nabla w\right|$, $\left|\rho \nabla^2w\right|$ and $\left|\rho^2 \nabla^3w\right|$ are globally bounded from above by $\varepsilon$. Since we already know that for solutions $w(t)$ of the full perturbation equation \eqref{perturbationequation} the norm $\left\|w(t)\right\|_{W}$ is controlled by $\left\|w_0\right\|_{W}$, provided that the initial datum $w_0$ is sufficiently small, the solutions of both equations coincide if $\left\|w_0\right\|_{W}\ll \varepsilon$. Thus, in this situation the truncation does not change the dynamics, even though it has the advantage that we end up with a globally well-posed equation, see Theorem \ref{wellposednessH1}. We remark that the choice of a pointwise truncation is necessary in order to ensure the differentiability of the nonlinearity in $w$. It has, however, the drawback that the regularity estimates from \cite{SeisTFE} seem not to carry over to the truncated problem. The technical difficulties arise from the fact that derivatives are falling onto the cut-off functions and the resulting terms fail to be controlled in a way analogous to the nonlinear terms in the original problem.
Moreover, it is crucial that derivatives up to third order are suitably truncated. This looks at first glance surprising because the original theory \cite{SeisTFE} for the perturbation equation \eqref{100} requires only the control of Lipschitz norms. However, it turns out that the well-posedness theory for a truncated equation becomes unexpectedly subtle if the truncation is performed only up to first order. We will prove well-posedness of \eqref{c3} in the Hilbert space $H$, which, as we will see, appears very naturally in the treatment of the truncated equation. Even though it is in general not necessary to work in a Hilbert space setting to construct invariant manifolds, see, e.g., \cite{ChenHaleTan97}, this choice will be extremely convenient. Moreover, we can take advantage of the spectral analysis developed in \cite{McCannSeis15} in a nearly identical setting. In order to prove well-posedness of the truncated problem in $H$, we need to extend the maximal regularity result \eqref{b2} for the operator $\mathcal{L}$ to the Hilbert space $H$. \begin{lemma}\label{maxregH} The operator $\mathcal{L}$ satisfies the maximal regularity estimate \begin{align} \left\|\nabla w\right\|_H+\|\rho\nabla^2 w\|_H \lesssim \left\|\mathcal{L}w\right\|_H. \end{align} \end{lemma} For the proof we refer to the theory for the operator $\mathcal{L}_\sigma $ in \eqref{504} and its derivatives developed in \cite{SeisTFE}, more precisely Lemmas 1, 2 and 4 and their proofs. The proof of Lemma \ref{maxregH} can be done analogously. It mainly relies on the observation that the operator $\mathcal{L}_\sigma$ commutes with tangential derivatives and its radial derivative $\partial_r\mathcal{L}_\sigma w$ can be rewritten in terms of $\mathcal{L}_{\sigma+1}\partial_r w$ and lower order terms. This makes the maximal regularity estimate for $\mathcal{L}_\sigma$, equation \eqref{b2}, applicable. The proof of well-posedness of the truncated problem exploits a fixed point argument.
For this it is necessary to control the Lipschitz constants of the nonlinear terms $F_{\varepsilon}$ in a suitable way. \begin{lemma}\label{DifferenzF} It holds that \begin{align} \MoveEqLeft \left\|\sqrt{\rho} F_{\varepsilon} \left[w_1\right]-\sqrt{\rho}F_{\varepsilon} \left[w_2\right]\right\| \\ &\lesssim \varepsilon \left( \left\|\rho \nabla^2 w_1-\rho \nabla^2 w_2 \right\|_H+\left\|\nabla w_1-\nabla w_2\right\|_H + \left\|w_1-w_2\right\|_H \right). \end{align} \end{lemma} \begin{proof} This is a straightforward computation embarking from the pointwise estimate \begin{align} \MoveEqLeft \left|\rho F_{\varepsilon} [w_1]-\rho F_{\varepsilon} [w_2]\right|\\ & \lesssim \varepsilon \left(\left|\rho^2 \nabla^3w_1-\rho^2\nabla^3w_2\right|+\left|\rho\nabla^2w_1-\rho\nabla^2w_2\right| +\left|\nabla w_1-\nabla w_2\right|+\left|w_1-w_2\right|\right), \end{align} which in turn can be readily checked. Indeed, the latter implies that \begin{align*} \MoveEqLeft \left\|\sqrt{\rho} F_{\varepsilon} [w_1] - \sqrt{\rho} F_{\varepsilon} [w_2]\right\| \\ & =\left\|\rho F_{\varepsilon} [w_1] - \rho F_{\varepsilon} [w_2]\right\|_{L^2}\\ & \lesssim \varepsilon\left( \left\|\rho^2 \nabla^3w_1-\rho^2\nabla^3w_2\right\|_{L^2}+\left\|\rho\nabla^2w_1-\rho\nabla^2w_2\right\|_{L^2}+\left\|\nabla w_1-\nabla w_2\right\|_{L^2}+\left\|w_1-w_2\right\|_{L^2}\right) \\ &\lesssim \varepsilon\left( \left\|\rho \nabla^2w_1-\rho \nabla^2w_2\right\|_{H}+\left\|\nabla w_1-\nabla w_2\right\|_{H}+ \left\|w_1-w_2\right\|_{H}\right), \end{align*} where we have used \eqref{103} in the last inequality. \end{proof} With this preparation, we are in a position to derive well-posedness. \begin{theorem}[Global well-posedness in $H$]\label{wellposednessH1} There exists $\varepsilon^*>0$ such that for every $\varepsilon\leq \varepsilon^*$ and every initial datum $w_0\in H$ the truncated problem \eqref{c3} has a unique global solution $w$.
Moreover, the solution $w$ satisfies \begin{align} \left\|w\right\|_{L^\infty\left((0,\infty);H\right)}+\left\|\nabla w\right\|_{L^2\left((0,\infty);H\right)}+\|\rho\nabla^2w\|_{L^2\left((0,\infty);H\right)}\lesssim \left\|w_0\right\|_{H}. \end{align} \end{theorem} \begin{proof} We commence by considering the linear initial value problem \begin{align}\label{a6} \begin{cases} \partial_t\tilde{w}+\mathcal{L}^2\tilde{w}+N\mathcal{L}\tilde w &= \rho^{-1} \nabla \cdot\left( \rho^2 \tilde{F}\right)+\rho \tilde{F}\\ \tilde{w}(0,\cdot)&=w_0 \end{cases} \end{align} for fixed $\tilde{F} \in L^2((0,\infty);L^2(\rho^2 ))$. The problem \eqref{a6} has a unique weak solution $\tilde{w}$ on the time interval $(0,T)$, see Lemma 7 in \cite{SeisTFE}. It satisfies the estimate \begin{equation} \label{a7} \begin{aligned} \MoveEqLeft \left\|\tilde{w}\right\|_{L^\infty\left((0,T);H\right)}+\|\nabla \tilde{w}\|_{L^2\left((0,T);H\right)}+\left\|\rho\nabla^2\tilde{w} \right\|_{L^2\left((0,T);H\right)}\\ &\le C_T\left(\left\|\rho\tilde{F}\right\|_{L^2\left((0,T);L^2\right)} +\left\|w_0\right\|_{H} \right). \end{aligned}\end{equation} To derive \eqref{a7}, we test the equation with $\tilde w$ in the inner product $\langle \cdot,\cdot \rangle_H$ and obtain after multiple integrations by parts \begin{align}\label{104} \frac{1}{2}\frac{d}{dt}\left\|\tilde w\right\|^2_H + \left\|\mathcal{L}\tilde w\right\|^2_H+N\left\|\mathcal{L}^{1/2}\tilde w\right\|^2_H= - \langle \rho \tilde{F},\nabla \tilde w\rangle_H+\langle \rho \tilde{F},\tilde w\rangle_H.
\end{align} Using the Cauchy--Schwarz inequality in the energy space $L^2(\rho)$, we furthermore notice that \begin{align} \left|\langle \rho \tilde{F},\nabla \tilde w\rangle_H\right|&\leq\left|\langle \rho\tilde F,\nabla \tilde w\rangle\right| +\left| \langle \rho\tilde F,\nabla\L \tilde w\rangle\right| \\& \le \|\sqrt{\rho} \tilde F\|\left( \|\sqrt{\rho} \nabla \tilde w\| + \|\sqrt{\rho} \nabla \L \tilde w\|\right) \le \|\sqrt{\rho} \tilde F \|\left(\|\tilde w\|_H + \|\L\tilde w\|_{H}\right) \end{align} and \begin{align} \left| \langle \rho \tilde{F},\tilde w \rangle_H \right| & \le \left| \langle \rho \tilde{F}, \tilde w\rangle \right| + \left| \langle \rho \tilde{F},\mathcal{L}\tilde w\rangle\right| \\ & \leq \left\|\rho \tilde{F}\right\|\left(\left\|\tilde w\right\|+\left\|\mathcal{L}\tilde w\right\|\right)\leq \left\|\rho \tilde{F}\right\|\left(\left\| \tilde w\right\|_H+\left\|\mathcal{L}^{1/2}\tilde w\right\|_H\right). \end{align} We now invoke Young's inequality and the fact that $\rho\le 1$ and we drop the non-negative lower-order term on the left-hand side to derive the differential inequality \[ \frac{d}{dt}\left\|\tilde w\right\|^2_H + \left\|\mathcal{L}\tilde w\right\|^2_H \lesssim \left\|\sqrt{\rho}\tilde{F}\right\|^2 + \left\| \tilde w\right\|^2_H. \] We deduce \eqref{a7} with the help of the maximal regularity result of Lemma \ref{maxregH} and a Gronwall type argument. To show well-posedness for the nonlinear problem, we apply a fixed point argument. The estimate in Lemma \ref{DifferenzF} shows that the nonlinearity $F_{\varepsilon}[w]$ belongs to $ L^2((0,T);L^2(\rho^2))$ whenever $w\in L^\infty((0,T);H)$ is given such that $\nabla w, \rho \nabla^2w \in L^2((0,T);H)$.
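To spell out this membership (a short verification using only Lemma \ref{DifferenzF} and the stated regularity of $w$): applying the lemma with $w_1=w$ and $w_2=0$, and noting that $F_{\varepsilon}[0]=0$ because every term in \eqref{101} contains at least one derivative factor of $w$, we obtain for almost every $t\in(0,T)$ \[ \left\|\sqrt{\rho}\,F_{\varepsilon}[w](t)\right\| \lesssim \varepsilon\left( \left\|\rho\nabla^2 w(t)\right\|_H + \left\|\nabla w(t)\right\|_H + \left\|w(t)\right\|_H \right). \] The first two terms on the right-hand side are square integrable in time by assumption, and the last one because $L^\infty((0,T);H)\subseteq L^2((0,T);H)$ for finite $T$; since $\rho\le 1$, this indeed yields $F_{\varepsilon}[w]\in L^2((0,T);L^2(\rho^2))$.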
By the linear theory, there exists thus a solution $\tilde{w}=\tilde{w}(w,w_0)$ to the Cauchy problem \eqref{a6} with $\tilde{F} = F_{\varepsilon}[w]$, and the estimate \eqref{a7} and Lemma \ref{DifferenzF} (applied to $w_1=\tilde{w}$ and $w_2=0$) yield that \begin{align}\MoveEqLeft \left\|\tilde{w}\right\|_{L^\infty\left((0,T);H\right)}+\left\|\nabla \tilde{w}\right\|_{L^2\left((0,T);H\right)}+\left\|\rho \nabla^2\tilde{w} \right\|_{L^2\left((0,T);H\right)}\\&\le C_T\varepsilon\left( \left\|\rho \nabla^2w\right\|_{L^2\left((0,T);H\right)}+\left\|\nabla w\right\|_{L^2\left((0,T);H\right)} +\left\|w\right\|_{L^\infty\left(\left(0,T\right);H\right)}\right)+C_T\left\|w_0\right\|_{H}. \end{align} Similarly, given $w_1$ and $w_2$ in the same class of functions, the difference of the corresponding solutions $\tilde{w}_1\left(w_1,w_0\right)$ and $\tilde{w}_2\left(w_2,w_0\right)$ to the associated linear problems is bounded by \begin{align}\MoveEqLeft \left\|\rho\nabla^2 \tilde{w}_1-\rho\nabla^2 \tilde{w}_2\right\|_{L^2\left(H\right)}+\left\|\nabla \tilde{w}_1-\nabla \tilde{w}_2\right\|_{L^2\left(H\right)}+\left\|\tilde{w}_1-\tilde{w}_2\right\|_{L^\infty(H)} \\ &\le C_T\varepsilon\left(\left\|\rho\nabla^2 w_1-\rho\nabla^2 w_2\right\|_{L^2\left(H\right)}+\left\|\nabla w_1-\nabla w_2\right\|_{L^2\left(H\right)}+\left\|w_1-w_2\right\|_{L^\infty(H)}\right). \end{align} We conclude that, for $\varepsilon$ sufficiently small, the mapping $w\mapsto \tilde{w}(w,w_0)$ is a contraction on the space $\left\{w\in L^\infty\left((0,T); H \right)\text{ with }\nabla w \in L^2\left((0,T); H\right) \text{ and } \rho \nabla^2w \in L^2\left((0,T); H\right)\right\}$. An application of Banach's fixed point theorem shows that there exists a unique solution $w$ to the truncated problem \eqref{c3} with initial datum $w_0\in H$. We stress that the constructed solution is defined locally in time and that the size of the admissible $\varepsilon$ depends on $T$.
In the following, we choose $\varepsilon$ for $T=1$ and show that the constructed solution can be extended globally in time. Our starting point is the estimate for the linear problem \eqref{104}, in which we choose $\tilde w=w$ and $\tilde F=F_{\varepsilon}[w]$. In order to avoid a time-dependency in the estimate for $w$, we estimate the nonlinearities slightly differently than above. We notice that the nonlinearity obeys the pointwise estimate \begin{equation}\label{106} |F_{\varepsilon}[w]| \lesssim \rho |\nabla w||\nabla^3 w| + \rho |\nabla^2 w|^2 + |\nabla w| |\nabla^2 w| + |\nabla w|^2 , \end{equation} which implies that \[ \|\rho F_{\varepsilon}[w]\|_{L^1} \lesssim \|\nabla w\|_{L^2} \|\rho^2 \nabla^3 w\|_{L^2} + \|\rho \nabla^2 w\|_{L^2}^2 + \|\nabla w\|_{L^2} \|\rho \nabla^2 w\|_{L^2} + \|\nabla w\|_{L^2}^2 \] via the Cauchy--Schwarz inequality. In view of the norm characterization in \eqref{103}, the latter can be rewritten as \[ \|\rho F_{\varepsilon}[w]\|_{L^1} \lesssim \|\nabla w\|_H\left(\|\nabla w\|_H + \|\rho\nabla^2 w\|_H\right). \] We also notice that \[ |w| + |\nabla w| + |\L w|+ \rho |\nabla \L w| \lesssim |w| + |\nabla w| + \rho |\nabla^2 w| + \rho^2 |\nabla^3 w| \lesssim \varepsilon \] in the support of the nonlinearity $F_{\varepsilon}$ by our choice of the cut-off. Thanks to the previous two bounds, the nonlinear terms on the right-hand side of \eqref{104} are estimated as follows: \begin{align*} \MoveEqLeft \left| \langle \rho {F}_{\varepsilon}[w],\nabla w\rangle_H\right|+\left|\langle \rho {F}_{\varepsilon}[w], w\rangle_H\right| \\ & = \left| \langle \rho {F}_{\varepsilon}[w], w\rangle\right|+\left| \langle \rho {F}_{\varepsilon}[w],\nabla w\rangle\right|+\left| \langle \rho {F}_{\varepsilon}[w],\L w\rangle\right|+ \left| \langle \rho {F}_{\varepsilon}[w],\nabla \L w\rangle\right|\\ & \lesssim \varepsilon \|\rho F_{\varepsilon}[w]\|_{L^1}\\ & \lesssim \varepsilon \|\nabla w\|_H\left(\|\nabla w\|_H + \|\rho\nabla^2 w\|_H\right).
\end{align*} Substitution into \eqref{104} thus yields \[ \frac{d}{dt} \|w\|_H^2 + \|\L w\|_{H}^2 \lesssim \varepsilon \|\nabla w\|_H\left(\|\nabla w\|_H + \|\rho \nabla^2 w\|_H\right), \] where we have again dropped the lower order term on the left-hand side. In view of the maximal regularity estimate from Lemma \ref{maxregH}, the right-hand side can be absorbed into the left-hand side provided that $\varepsilon$ is chosen sufficiently small. This gives \[ \frac{d}{dt} \|w\|_H^2 + \frac1C\|\L w\|_{H}^2 \le 0, \] for some $C>1$, and the local solution can thus be extended globally for all times. The estimate in the assertion of the theorem follows. \end{proof} It will be crucial for our analysis to have some smoothing properties established for the truncated equation \eqref{c3}. This will be achieved in the following two lemmas. \begin{lemma}\label{L1} There exists $\varepsilon^*$, possibly smaller than in Theorem \ref{wellposednessH1}, such that for any $0< \varepsilon\le \varepsilon^*$ the following holds: If $w$ is the solution to the truncated equation \eqref{c3} with initial datum $w_0\in H$ then it holds that \begin{align}\label{e8}\MoveEqLeft \left\|\partial_tw\right\|_{L^q\left((1/4,2);L^q(\rho)\right)} +\left\|w\right\|_{L^q\left((1/4,2);L^q(\rho)\right)}+\left\|\nabla w\right\|_{L^q\left((1/4,2);L^q(\rho)\right)}\\&+ \left\|\nabla^2w\right\|_{L^q\left((1/4,2);L^q(\rho)\right)} +\left\|\rho\nabla^3w\right\|_{L^q\left((1/4,2);L^q(\rho)\right)}+\left\|\rho^2\nabla^4w\right\|_{L^q\left((1/4,2);L^q(\rho)\right)} \lesssim \left\|w_0\right\|_{H} \end{align} for any $q\in (1,\infty)$. \end{lemma} \begin{proof} We will perform an iterative argument for which it is convenient to localize time on an arbitrary scale. For this purpose, we fix $T\in(0,2)$ and introduce a smooth cut-off function $\phi_1:\mathbb{R}^+_0\rightarrow [0,1]$, satisfying $\phi_1(t)=0$ if $t\leq T$ and $\phi_1(t)=1$ if $t\ge 2T$.
Of course, its growth rate is inversely proportional to the cut-off scale $T$, but since this quantity remains uniformly bounded throughout the proof, we will simply write $|\phi'_1|\lesssim 1$ for convenience. Smuggling $\phi_1$ into the truncated equation \eqref{c3} gives \begin{align} \partial_t(w \phi_1) + \mathcal{L}^2(w\phi_1) + N\mathcal{L}(w\phi_1) = \rho^{-1}\nabla \cdot \left(\rho^2F_{\varepsilon}[w]\right)\phi_1 +\rho F_{\varepsilon}[w]\phi_1 + w\phi_1'. \end{align} We note that $w\phi_1$ has zero initial datum, which makes the maximal regularity theory for $\mathcal{L}^2+N\mathcal{L}$ applicable: From \eqref{112} and elementary computations we infer the maximal regularity estimate \begin{equation} \label{c4} \begin{aligned} \MoveEqLeft \left\|\partial_t (w\, \phi_1)\right\|_{L^2\left(L^2(\rho)\right)}+\left\|\nabla^2w\,\phi_1\right\|_{L^2\left(L^2(\rho)\right)}+\left\|\rho\nabla^3w\,\phi_1\right\|_{L^2\left(L^2(\rho)\right)} + \left\|\rho^2\nabla^4w\, \phi_1\right\|_{L^2\left(L^2(\rho)\right)} \\ & \lesssim \|\eta_{\varepsilon} F\, \phi_1\|_{L^2(L^2(\rho))} + \|\rho \nabla \eta_{\varepsilon}\, F \phi_1\|_{L^2(L^2(\rho))} + \|\rho \eta_{\varepsilon} \nabla F\, \phi_1\|_{L^2(L^2(\rho))} + \left\|w\phi_1'\right\|_{L^2\left( L^2(\rho)\right)}, \end{aligned} \end{equation} where we have set $F=F_{\varepsilon}[w]$ for brevity. We have also dropped the time interval $(0,2)$ in the norms. The final term on the right-hand side is easily controlled via the a priori estimates from Theorem \ref{wellposednessH1} and the defining properties of the temporal cut-off $\phi_1$: It holds that \[ \left\|w\phi_1'\right\|_{L^2\left(L^2(\rho)\right)} \lesssim \left\|w\right\|_{L^\infty\left(L^2(\rho)\right)} \le \|w\|_{L^{\infty}(H)} \lesssim \left\|w_0\right\|_{H}.
\] For the first and the second term, we use the pointwise bound on the nonlinearity on the support of $\eta_{\varepsilon}$, \begin{equation} \label{107} |F[w]| \lesssim \varepsilon \left(|\nabla w| + |\nabla^2 w| + \rho|\nabla^3 w|\right)\lesssim \rho^{-1}\varepsilon^2, \end{equation} cf.~\eqref{106}. More precisely, plugging the first of the two estimates into the first term on the right-hand side of \eqref{c4}, we find that \begin{align*} \|\eta_{\varepsilon} F\, \phi_1\|_{L^2(L^2(\rho))} &\lesssim \varepsilon\left(\|\nabla w \phi_1\|_{L^2(L^2(\rho))}+ \|\nabla^2 w \phi_1\|_{L^2(L^2(\rho))} + \|\rho \nabla^3 w \phi_1\|_{L^2(L^2(\rho))}\right). \end{align*} We interpolate the first term with the help of Lemma \ref{interpolationintequality} in the appendix, so that \begin{align*} \|\eta_{\varepsilon} F\, \phi_1\|_{L^2(L^2(\rho))} &\lesssim \varepsilon\left(\| w \phi_1\|_{L^2(L^2(\rho))}+ \|\nabla^2 w \phi_1\|_{L^2(L^2(\rho))} + \|\rho \nabla^3 w \phi_1\|_{L^2(L^2(\rho))}\right). \end{align*} The two last terms on the right-hand side can be absorbed into the left-hand side of \eqref{c4} if $\varepsilon$ is chosen sufficiently small, while the first term is controlled by the initial datum through the energy estimate of Theorem \ref{wellposednessH1}. To estimate the second term on the right-hand side of \eqref{c4}, we notice that \[ |\nabla\eta_{\varepsilon}| \lesssim 1 + \frac1{\varepsilon}\left(|\nabla^2 w| + \rho |\nabla^3 w| +\rho^2 |\nabla^4 w|\right), \] and thus, using that $\rho\le 1$ and the second estimate in \eqref{107}, we find \begin{align*} \MoveEqLeft \|\rho \nabla \eta_{\varepsilon}\, F \phi_1\|_{L^2(L^2(\rho))}\\ & \lesssim \|\chi_{\supp \eta_{\varepsilon} } \, F \phi_1\|_{L^2(L^2(\rho))}\\ &\quad + \varepsilon\left(\| \nabla w \phi_1\|_{L^2(L^2(\rho))} +\| \nabla^2 w \phi_1\|_{L^2(L^2(\rho))} +\|\rho \nabla^3 w \phi_1\|_{L^2(L^2(\rho))} \right). 
\end{align*} The first term can be estimated as before and the second one can be absorbed into the left-hand side of \eqref{c4} if $\varepsilon$ is sufficiently small. It remains to study the third term on the right-hand side of \eqref{c4}. Here, we find after a small computation that \[ \rho|\nabla F| \lesssim |F| +\varepsilon\left(|\nabla w|+ |\nabla^2 w| + \rho|\nabla^3w | +\rho^2|\nabla^4w|\right). \] Hence, in view of the bound in \eqref{107}, the only new term we have to deal with is the fourth-order term. This one, however, can be controlled as the second- and third-order term before by absorption into the left-hand side of \eqref{c4}. Combining all the estimates that we discussed, adding the lower order term from the energy inequality in Theorem \ref{wellposednessH1} to the left-hand side, making use of the interpolation inequality in Lemma \ref{interpolationintequality} in the appendix to include the first order spatial gradient and finally dropping all higher order terms, we arrive at \begin{equation} \label{108} \left\| w\, \phi_1 \right\|_{L^2\left(L^2(\rho)\right)}+ \left\|\partial_t (w\, \phi_1)\right\|_{L^2\left(L^2(\rho)\right)}+\left\|\nabla( w\,\phi_1)\right\|_{L^2\left(L^2(\rho)\right)} \lesssim \|w_0\|_H. \end{equation} We are now in the position to invoke the Sobolev inequality Lemma \ref{sobolevinequality} in the appendix, namely \begin{align} \left\|w\right\|_{L^q\left(L^q(\rho)\right)}\lesssim\left\|\partial_tw\right\|_{L^p\left(L^{p}(\rho)\right)} + \left\|w\right\|_{L^p\left(L^{p}(\rho)\right)} + \left\|\nabla w\right\|_{L^p\left(L^{p}(\rho)\right)}, \end{align} where the integrability exponents $1\leq p\leq q<\infty$ are such that \begin{align} 1-\frac{N+2}{p} = -\frac{N+2}{q}. 
\end{align} In our situation, that is $p=p_1=2$, we deduce from \eqref{108} the inequality \begin{equation} \label{109} \left\|w\phi_1\right\|_{L^{q_1}\left(L^{q_1}(\rho)\right)} \lesssim \left\|w_0\right\|_{H}, \end{equation} where now $ q= q_1 = \frac{2(N+2)}{N}$. In order to further increase the order of integrability, we have to use the maximal regularity estimate in $L^q$, see \eqref{112}. We introduce a new smooth cut-off function $\phi_2:\mathbb{R}^+_0\rightarrow [0,1]$, such that $\phi_2(t)=0$ if $t\leq 2T$ and $\phi_2(t)=1$ if $t\ge 3T$. Using the maximal regularity estimate for $w\phi_2$ and $q_1$, we get \begin{align*} \MoveEqLeft \left\|\partial_t(w\phi_2)\right\|_{L^{q_1}\left(L^{q_1}(\rho)\right)} + \left\|\nabla^2(w\phi_2)\right\|_{L^{q_1}\left(L^{q_1}(\rho)\right)}+\left\|\rho\nabla^3(w\phi_2)\right\|_{L^{q_1}\left(L^{q_1}(\rho)\right)}+\left\|\rho^2\nabla^4(w\phi_2)\right\|_{L^{q_1}\left(L^{q_1}(\rho)\right)} \\ &\lesssim \|\eta_{\varepsilon} F\, \phi_2\|_{L^{q_1}(L^{q_1}(\rho))} + \|\rho \nabla \eta_{\varepsilon}\, F \phi_2\|_{L^{q_1}(L^{q_1}(\rho))} + \|\rho \eta_{\varepsilon} \nabla F\, \phi_2\|_{L^{q_1}(L^{q_1}(\rho))}+ \left\|w\phi_2'\right\|_{L^{q_1}\left(L^{q_1}(\rho)\right)}. \end{align*} The treatment of the right-hand side is almost identical to the $p=2$ case, only that now equation \eqref{109} is invoked where before the energy equation was used. We eventually arrive at \[ \left\| w\, \phi_2 \right\|_{L^{q_1}\left(L^{q_1}(\rho)\right)}+ \left\|\partial_t (w\, \phi_2)\right\|_{L^{q_1}\left(L^{q_1}(\rho)\right)}+\left\|\nabla( w\,\phi_2)\right\|_{L^{q_1}\left(L^{q_1}(\rho)\right)} \lesssim \|w_0\|_H, \] and we may use the Sobolev inequality once more with $p_2 \le \min\{q_1,N+2\}$. By iterating this procedure, the order of integrability can be further increased. After finitely many steps, depending only on the space dimension, and by choosing $T$ carefully, the statement follows. 
\end{proof} Theorem \ref{wellposednessH1} shows that the truncated equation generates a global semiflow in the Hilbert space setting. We define $S^t_\varepsilon : H \rightarrow H$ as the corresponding flow map, \[ S_{\varepsilon}^t(w_0) = w(t,\cdot) \] where $w$ is the unique solution to the truncated nonlinear problem \eqref{c3} with initial datum $w_0$. Our invariant manifold construction is based on that flow. More accurately, we choose to consider a discrete time setting by working with the time-one map rather than with the continuous flow. Compared to constructing the manifolds for the semiflow directly, this has the advantage, that the differentiability of the time-one map is a weaker property than its counterpart for flows, the variation of constants formula. We write $S_\varepsilon \coloneqq S_\varepsilon^1$. The main regularity results for the perturbation variable $w$ are stated uniformly in time and space, while our invariant manifold theory will rely on Hilbert spaces. The connection of both necessitates to establish suitable smoothing estimates. We will do so in the following lemma which we improve after one time step. As we are interested in the long-time behavior, such a delayed smoothing statement does not cause any problems. \begin{lemma}\label{smoothingestimate} Let $\varepsilon^*$ be as in lemma \ref{L1} and $\varepsilon\leq \varepsilon^*$. For any $w_0\in H$ the following holds: If $w(t)= S_\varepsilon^t(w_0)$ is the solution to the truncated equation, then \begin{align} \left\|w(t)\right\|_{L^\infty} +\left\|\nabla w(t)\right\|_{L^\infty} + \left\|\rho \nabla^2w\right\|_{L^\infty}+\left\|\rho^2\nabla^3w\right\|_{L^\infty} \lesssim \left\|w_0\right\|_{H} \end{align} for all $t\geq 1/2$. In particular, this yields $\|S_\varepsilon(w_0)\|_{W}\lesssim \left\|w_0\right\|_{H}$. 
Moreover, there exists $\varepsilon^0\leq \min \left\{\varepsilon,\varepsilon_0\right\}$ such that $S^t_\varepsilon\left(S_\varepsilon(w_0)\right)=S^t\left(S_\varepsilon(w_0)\right)$ for $t>0$, provided that $\left\|w_0\right\|_H\leq \varepsilon^0$. \end{lemma} \begin{proof} Due to the Morrey-type embedding inequality \ref{morreyinequality} in the appendix, we have that $\left\|w\right\|_{L^\infty}\lesssim \left\|w\right\|_{L^q(\rho)}+\left\|\nabla w\right\|_{L^q(\rho)}$, provided that $q$ is sufficiently large. We can extend this estimate to higher order derivatives and find \begin{equation}\label{505} \|w\|_{W} \lesssim \left\|w\right\|_{L^q(\rho)}+\left\|\nabla w\right\|_{L^q(\rho)} + \|\nabla^2 w\|_{L^{q}(\rho)} + \|\rho\nabla^3 w\|_{L^{q}(\rho)}+ \|\rho^2\nabla^4 w\|_{L^{q}(\rho)}. \end{equation} Thus, in order to establish the asserted estimate, we have to improve the estimate in Lemma \ref{L1} to a pointwise-in-time statement. For this, we invoke a simple construction. For an arbitrarily given function $f\in L^q(1/4,1/2)$, we consider the set \[ J_f=\left\{t\in (1/4,1/2) : \left|f(t)\right|>8\left\|f\right\|_{L^q(1/4,1/2)}\right\}. \] By Chebyshev's inequality, it holds that \[ \left\|f\right\|_{L^q(1/4,1/2)}\geq \left\|f\right\|_{L^q(J_f)} \geq 8\left\|f\right\|_{L^q(1/4,1/2)} |J_f|^{1/q}, \] where $\left|\cdot\right|$ denotes the Lebesgue measure, and thus, $|J_f|\leq \left(1/8\right)^q$. Moreover, since $q\ge 1$, we have also an estimate on the complementary set in $(1/4,1/2)$, namely $|J_f^c|\geq1/4-\left(1/8\right)^q \geq 1/8$. 
Applying this estimate to the function $f(t)=\left\|w(t)\right\|_{L^q(\rho)}+\left\|\nabla w(t)\right\|_{L^q(\rho)}+\left\|\nabla^2 w(t)\right\|_{L^q(\rho)}+\left\|\rho\nabla^3w(t)\right\|_{L^q(\rho)}+\left\|\rho^2\nabla^4w(t)\right\|_{L^q(\rho)}$ and using the above estimate \eqref{505}, we find that \begin{align} \left\|w\right\|_{L^\infty(J_f^c;W)}&\lesssim \|f \|_{L^{\infty}(J_f^c)} \lesssim \|f\|_{L^{q}(1/4,1/2)}\\ & \lesssim \left\|w\right\|_{L^q\left((1/4,1/2);L^q(\rho)\right)}+\left\|\nabla w\right\|_{L^q\left((1/4,1/2);L^q(\rho)\right)}+ \left\|\nabla^2w\right\|_{L^q\left((1/4,1/2);L^q(\rho)\right)}\\ &\quad +\left\|\rho\nabla^3w\right\|_{L^q\left((1/4,1/2);L^q(\rho)\right)}+\left\|\rho^2\nabla^4w\right\|_{L^q\left((1/4,1/2);L^q(\rho)\right)} . \end{align} By the virtue of Lemma \ref{L1}, the right-hand side is bounded by $\|w_0\|_H$. This shows that there exists a time $\hat{t}\in (1/4,1/2)$ such that \begin{align}\label{p4} \|w(\hat{t})\|_{W} = \|S^{\hat{t}}_\varepsilon(w_0)\|_{W} \leq C \|w_0\|_{H}. \end{align} Now suppose that $\left\|w_0\right\|_H\leq \varepsilon^0$ From Theorems \ref{maintheoremseis} and \ref{dritteableitung} we know, that the nonlinear flow $S^t(w_0)$ can be controlled in $W$ by its initial data $g$ in the $W$-norm, i.e., $\|S^t(w_0)\|_{W} \leq \tilde{C}\|w_0\|_{W}$ for every $t\geq0$, provided that $\|w_0\|_{W}$ is sufficiently small. If we now choose $\varepsilon^0$ in a way such that $\tilde{C}C\varepsilon^0\leq \varepsilon$, we obtain that $S^t_\varepsilon\left(S^{\hat{t}}_\varepsilon(w_0)\right) = S^t\left(S_\varepsilon^{\hat{t}}(w_0)\right)$ for every $t\geq 0 $ and thus \begin{align} \|S^{\hat{t}+t}_\varepsilon(w_0)\|_{W} = \|S^t\left(S_\varepsilon^{\hat{t}}(w_0)\right)\|_{W}\leq \tilde{C}\|S^{\hat{t}}_\varepsilon(w_0)\|_{W}\leq \tilde{C}C\|w_0\|_{H} \end{align} for every $t\geq0$. Since $\hat{t}\in (1/4,1/2)$, this gives the result. 
\end{proof} By construction of the solution in Theorem \ref{wellposednessH1}, we know that $S_\varepsilon$ is Lipschitz-continuous. We decompose the global flow $S_\varepsilon$ into a linear and nonlinear part \begin{align} S_\varepsilon = L + R_\varepsilon, \quad \text{ where } L\coloneqq e^{-\left(\mathcal{L}^2+N\mathcal{L}\right)}. \end{align} As a difference of Lipschitz continuous functions, $R_\varepsilon$ is Lipschitz continuous as well. Actually, its Lipschitz constant can be estimated in terms of $\varepsilon$ and becomes thus a contraction if $\varepsilon$ is sufficiently small. \begin{lemma}\label{lipschitzR} Let $\varepsilon^*>0$ as in Lemma \ref{L1} and $0<\varepsilon\leq \varepsilon^*$. Then, for any $g$, $\tilde{g} \in H$ it holds that \begin{align} \left\|R_\varepsilon(g)-R_\varepsilon(\tilde{g})\right\|_{H}\lesssim \varepsilon\left\|g-\tilde{g}\right\|_{H}. \end{align} \end{lemma} \begin{proof} Let $g$, $\tilde{g}\in H$ be given. Then $w(t,x)=S_\varepsilon^t(g)$ and $\tilde{w}(t,x)=S_\varepsilon(\tilde{g})$ solve the truncated problem \eqref{c3} with initial data $g$ or $\tilde{g}$, respectively. We set $v(t)=w(t)-L^tg$, where $L^tg$ is the solution to the linear problem with initial datum $g$, so that, in particular $v(1,x)=R_\varepsilon(g)$. Analogously we define $\tilde{v}$. Then $v-\tilde{v}$ solves the equation \begin{align*} \MoveEqLeft \partial_t(v-\tilde{v}) + \mathcal{L}^2(v-\tilde{v})+N\mathcal{L}(v-\tilde{v}) \\ &= \frac{1}{\rho}\nabla \cdot \left( \rho^2\left(F_{\varepsilon}[w]- F_{\varepsilon}[\tilde{w}] \right)\right) + \rho \left(F_{\varepsilon}[w]-F_{\varepsilon}[\tilde{w}]\right), \end{align*} with zero initial datum. 
With the help of estimate \eqref{a7} from the proof of Theorem \ref{wellposednessH1} we deduce that \begin{align*}\MoveEqLeft \left\|v(1)-\tilde{v}(1)\right\|_{H} + \left\|\nabla v-\nabla \tilde{v}\right\|_{L^2\left((0,1);H\right)} + \left\|\rho\nabla^2v-\rho\nabla^2\tilde{v}\right\|_{L^2\left((0,1);H\right)}\\ &\lesssim \left\|\rho F_{\varepsilon}[w] -\rho F_{\varepsilon}[\tilde{w}]\right\|_{L^2\left((0,1);L^2\right)} \\ &\lesssim \varepsilon\left( \left\|w-\tilde{w}\right\|_{L^\infty\left((0,1);H\right)} + \left\|\nabla w-\nabla \tilde{w}\right\|_{L^2\left((0,1);H\right)} + \left\|\rho\nabla^2w-\rho\nabla^2\tilde{w}\right\|_{L^2\left((0,1);H\right)} \right), \end{align*} where we used Lemma \ref{DifferenzF} in the last step. Since $S_\varepsilon$ is Lipschitz continuous, the right-hand side is controlled by $\varepsilon \|g-\tilde{g}\|_{H}$. This finishes the proof. \end{proof} Additionally we would like to know that $R_\varepsilon$ is quadratic near the origin. The superlinear behavior entails the differentiability of $R_\varepsilon$ in the origin, with derivative zero. Neither this information nor the regularity will be necessary for our construction of the invariant manifolds. However, as we will see, it provides the additional geometric insight that the center manifold $W_\varepsilon^c$ touches the stable Eigenspace $E_c$ tangentially, see Theorem \ref{centermanifold}. The proof of the quadratic estimate is rather technical and exploits smoothing properties of the nonlinear flow. We are able to show the quadratic behavior after a regularizing time step, in a similar way as in Lemma \ref{smoothingestimate}, what still is sufficient for our purpose. \begin{lemma}\label{quadratischeabschaetzung} Let $\varepsilon^*$ be as in Lemma \ref{L1}. For all $0\leq \varepsilon\leq \varepsilon_*$ and every $g\in H$ it holds that \begin{align} \left\|R_\varepsilon\left(S_\varepsilon(g)\right)\right\|_{H} \lesssim \left\|g\right\|^2_{H}. 
\end{align} \end{lemma} \begin{proof} Let $w(t,x)=S^t_\varepsilon(g)$ and set $W(t,x) = w(t+1,x)$, which yields $W(0,\cdot)=S_\varepsilon(g)$. Let $v$ solve the initial value problem \begin{align} \begin{cases} \partial_tv+\mathcal{L}^2v+N\mathcal{L}v &= \frac{1}{\rho} \nabla\cdot \left(\rho^2F_{\varepsilon}[W]\right) +\rho F_{\varepsilon}[W]\quad \text{ in } (0,\infty)\times B_1(0)\\ v(0,\cdot)&=0 \quad \text{ in } B_1(0), \end{cases} \end{align} so that $ v(1,\cdot) = R_\varepsilon \left(W(0,\cdot)\right)= R_\varepsilon\left(S_\varepsilon(g)\right)$. Thanks to the proof of Theorem \ref{wellposednessH1}, more precisely estimate \eqref{a7}, we know that \begin{align*} \left\|v(1)\right\|^2_{H} &\lesssim \int \limits_0^1 \left\|\sqrt{\rho}F_{\varepsilon}[W]\right\|^2\, dt = \int \limits_1^2 \left\|\sqrt{\rho}F_{\varepsilon}[w]\right\|^2\, dt, \end{align*} and by the virtue of the pointwise estimate \eqref{106} and Young's inequality, we deduce \begin{align*}\left\|v(1)\right\|_{H} &\lesssim \left\|\rho\nabla^3w\right\|_{L^4\left((1,2);L^4(\rho^2)\right)}^2 +\left\|\nabla^2w\right\|_{L^4\left((1,2);L^4(\rho^2)\right)}^2+\left\|\nabla w\right\|_{L^4\left((1,2);L^4(\rho^2)\right)}^2. \end{align*} It thus remains to invoke the smoothing property from Lemma \ref{L1} with $q=4$ and the bound $\rho\le 1$ in order to prove the lemma. \end{proof} \begin{lemma}\label{Lipschitzglaettung} Let $\varepsilon^*$ be as in Lemma \ref{L1} and $\varepsilon\leq \varepsilon^*$. Let $\varepsilon^0 \leq \min \left\{\varepsilon, \varepsilon^*\right\}$ be as in Lemma \ref{smoothingestimate}. Then, for any $g, \tilde{g}\in H^1_{1,2}$ with $\left\|g\right\|_{H},\left\|\tilde{g}\right\|_{H}\leq \varepsilon^0$ it holds that \begin{align} \left\| S^{\hat{t}}_\varepsilon(g)-S_\varepsilon^{\hat{t}}(\tilde{g}) \right\|_{W} \lesssim \left\|g-\tilde{g}\right\|_{H} \end{align} for some $\hat t\in(\frac45,1)$. 
\end{lemma} \begin{proof} Similar to the previous proof, we will make use of a maximal regularity estimate for the linear equation. However, this proof will be less technical, because the previous lemma, combined with a result of \cite{SeisTFE}, will allow us to consider the flow without the cut-off function $\eta_\varepsilon$. Let $w(t)$ denote $S^t_\varepsilon(g)$ and $\tilde w(t)=S^t_\varepsilon(\tilde{g})$ respectively. Then, by Lemma \ref{smoothingestimate} we know that $\left\|w(t)\right\|_{W} \lesssim \left\|g\right\|_{H}$ for every $t\geq 1/2$. At this point we invoke Theorem 2 of \cite{SeisTFE} to also achieve even better control (in terms of $\rho$) on the higher derivatives: It guarantees that the unique solution $w$ of the full nonlinear perturbation equation \eqref{perturbationequation} with (of course small) initial data $g$ satisfies $\left|\nabla^2w(x,t)\right|+\left|\rho\nabla^3w(x,t)\right|+\left|\rho^2\nabla^4w(t,x)\right| \lesssim t^{-\kappa} \left\|g\right\|_{W^{1,\infty}}$ for some positive $\kappa>0$. If we apply this result with $w(1/2)$ as the initial data, we obtain the estimate \begin{align}\MoveEqLeft\label{330} \left\|w(t)\right\|_{L^\infty}+\left\|\nabla w(t)\right\|_{L^\infty}+\left\|\nabla^2 w(t)\right\|_{L^\infty}+\left\|\rho\nabla^3w(t)\right\|_{L^\infty}+\left\|\rho^2\nabla^4w(t)\right\|_{L^\infty}&\\&\lesssim \left\|g\right\|_{H}\leq \varepsilon^0 \end{align} uniformly in time for every $t\geq 3/4$. The same holds true for $\tilde w(t)$ and $\tilde{g}$. That is, for $t\geq3/4$ both $w(t)$ and $\tilde w(t)$ solve the full nonlinear equation. We now introduce $v=w-\tilde w$, which solves the initial value problem \begin{align} \begin{cases} \partial_t v+\mathcal{L}^2v+N\mathcal{L}v &= \rho^{-1} \nabla \cdot\left( \rho^2 \left(F_1[w]-F_1[\tilde w]\right)\right) +\rho \left(F_2[w]-F_2[\tilde w]\right) ,\\ v(0,\cdot)&=0. 
\end{cases} \end{align} Arguing very similarly as in the proof of Lemma \ref{L1}, but using \eqref{330} instead of the truncation, we arrive at \begin{align}\MoveEqLeft \left\|\partial_t v\right\|_{L^q\left((4/5,1);L^q(\rho)\right)} +\left\|v\right\|_{L^q\left((4/5,1);L^q(\rho)\right)}+\left\|\nabla v\right\|_{L^q\left((4/5,1);L^q(\rho)\right)}\\&+ \left\|\nabla^2v\right\|_{L^q\left((4/5,1);L^q(\rho)\right)} +\left\|\rho\nabla^3 v \right\|_{L^q\left((4/5,1);L^q(\rho)\right)}+\left\|\rho^2\nabla^4v\right\|_{L^q\left((4/5,1);L^q(\rho)\right)} \lesssim \left\|g-\tilde{g}\right\|_{H} \end{align} for any $q\in (1,\infty)$. Lastly, we proceed as in the proof of Lemma \ref{smoothingestimate} to prove the existence of a $\hat{t}\in (4/5,1)$, such that \[ \|w(\hat t) - \tilde w(\hat t)\|_{W} = \|v(\hat t)\|_{W} \lesssim \left\|g-\tilde{g}\right\|_{H}. \] \end{proof} \section{Dynamical System Arguments}\label{dynamicalsystemarguments} In this chapter we will construct invariant manifolds and prove Theorem \ref{localmanifolds}. We want to draw a heuristic picture of the concept, see als Figure \ref{fig3} \begin{figure} \begin{center} \includegraphics[width=\textwidth]{BildManifolds.pdf} \end{center} \caption{The flow $S^t_\varepsilon(g)$ starting from an arbitrary $g\in H$ approaches the flow starting from the unique intersection point $\tilde{g}$ of $W_\varepsilon^c$ and $M_g^\varepsilon$, $S^t\varepsilon(\tilde{g})$, that stays on the center manifold and whose longtime behavior dominates the asymptotics of $S^t_\varepsilon(g)$.}\label{fig3} \end{figure} for a geometric illustration. The center manifold, see Theorem \ref{centermanifold}, can be represented as the graph of a Lipschitz continuous function over the finite-dimensional center eigenspace, and it touches the center eigenspace tangentially at the origin. 
Here, the center eigenspace is the subspace of $H$ spanned by the eigenfunctions of the first $K+1$ eigenvalues of $\mathcal{L}^2+N\mathcal{L}$, where $K$ is an arbitrarily fixed nonnegative integer. Solutions to the truncated flow that lie on the center manifold remain on it for all subsequent times. The stable manifolds, see Theorem \ref{stablemanifold}, intersect with the center manifold in exactly one point, and they form thus a foliation of the underlying Hilbert space $H$ over the center manifold. This foliation is invariant under the flow. The stable manifolds can be described as (displaced) graphs over the stable eigenspace, that is, the orthogonal complement of the center eigenspace. Given an arbitrary solution to the truncated perturbation equation, our construction provides a solution that approximates the given one with an exponential rate of at least $\mu_{K}$. Throughout this section, we fix $\varepsilon^*$ as in Lemma \ref{L1} and choose some $ \varepsilon^0 \leq \min \left\{\varepsilon, \varepsilon_0\right\}$ as in Lemma \ref{smoothingestimate}. With these choices, all results from the previous two sections are admissible. The linear operator $\mathcal{L}^2+N\mathcal{L}$ and the associated semi-flow operator $L = e^{-\mathcal{L}^2-N\mathcal{L}}$ share the same eigenfunctions and an eigenvalue $\mu$ of $\mathcal{L}^2+N\mathcal{L}$ turns into the eigenvalue $e^{-\mu}$ of $L$. We recall that all spectrum information is contained in Theorem \ref{spektrum}. The fact that the spectrum is discrete will facilitate our analysis substantially. In our construction of the invariant manifolds, we follow an approach by Koch, see \cite{Kochoncentermanifolds}, and mainly stick to his notation. From now on we keep $K\in \mathbb{N}_0$ fixed, and we denote by $E_c$ the finite-dimensional subspace of $H$ spanned by the eigenfunctions corresponding to the eigenvalues $\{\mu_0,\dots,\mu_K\}$, that we call the \emph{center eigenspace}. 
The projection of $H$ onto the space $E_c$ is given by $P_c$. The \emph{stable eigenspace} $E_s$ is defined as the orthogonal complement of the center eigenspace, that is $E_s \coloneqq E_c^\perp$, such that $H=E_c \oplus E_s$, and $P_s=1-P_c$. We denote the restriction of $L$ to $E_s$ by $L_s$; it can be estimated via $\left\|L_s\right\|_H\leq e^{-\mu_{K+1}}$. Indeed, for $w \in H$ it holds \begin{align} \left\|L_sw\right\|^2_{H} = \sum \limits_{k>K}\sum_{l} \langle L w, \psi_{k,l}\rangle^2_H = \sum \limits_{k>K}\sum_{l} e^{-2\mu_k} \langle w, \psi_{k,l} \rangle^2_H \leq e^{-2\mu_{K+1}} \left\|w\right\|^2_{H}, \end{align} if the $\psi_{k,l}$'s are the eigenfunctions corresponding to $\mu_k$. For $L_c$, the restriction of $L$ onto $E_c$, we similarly obtain $\left\|L_c^{-1}\right\| \leq e^{\mu_K}$. Indeed, we have \begin{align} \left\|L_c^{-1}w\right\|^2_{H} = \sum \limits_{k\leq K} \sum_{l}\langle L^{-1} w,\psi_k \rangle^2_H = \sum \limits_{k \leq K}\sum_l e^{2\mu_k}\langle w, \psi_k\rangle^2_H \leq e^{2\mu_k}\left\|w\right\|^2_{H}. \end{align} We define \begin{align} \Lambda_c = e^{-\mu_K}, \quad \Lambda_s = e^{-\mu_{K+1}} \quad \text{ and } \Lambda_{max}=1 \end{align} and conclude \begin{align}\label{e2} \begin{split} \left\|L_c^{-1}\right\|&\leq \Lambda_c^{-1}\quad \text{ or }\quad \Lambda_c\left\|w\right\|_{H} \leq \left\|Lw\right\|_{H} \text{ for all } w\in E_c, \\ \left\|L_s\right\| &\leq \Lambda_s \quad\ \ \text{ or }\ \ \quad \left\|Lw\right\|_{H} \leq \Lambda_s\left\|w\right\|_{H} \text{ for all } w \in E_s,\\ \text{ and }\quad \left\|L\right\| &\leq \Lambda_{max} \ \ \text{ or }\quad \ \ \left\|Lw\right\|_{H} \leq \left\|w\right\|_{H} \text{ for all }w\in H. 
\end{split} \end{align} We arbitrarily choose $\Lambda_s<\Lambda_- = e^{-\mu_-} <\Lambda_c$ with $\mu_- < \mu_{K+1}<2\mu_-$ and $\Lambda_{max} < \Lambda_+$ and introduce the following norms, that will be used for the construction of the manifolds: \begin{itemize} \item For $w \in H$ we define $\vertiii{w} \coloneqq \max \left\{ \left\|P_cw\right\|_{H}, \left\|P_sw\right\|_{H} \right\}$.\\ \item For $\left\{w_k\right\}_{k\in \mathbb{Z}} \subseteq H$ we set $\left\|\left\{w_k\right\}_{k\in \mathbb{Z}}\right\|_{\Lambda_-,\Lambda_+}\coloneqq\sup \limits_{k\in \mathbb{N}_0} \max \left\{ \Lambda_+^{-k}\vertiii{w_k}, \Lambda_-^k\vertiii{w_{-k}}\right\}.$\\ \item For $\left\{w_k\right\}_{k\in \mathbb{N}_0} \subseteq H$ we set $\left\|\left\{w_k\right\}_{k\in \mathbb{N}_0}\right\|_{\Lambda_-,+}\coloneqq\sup \limits_{k\in \mathbb{N}_0} \Lambda_-^{-k}\vertiii{w_k}.$ \end{itemize} The corresponding Banach spaces of sequences are denoted by $\ell_{\Lambda_-,\Lambda_+}$ and $\ell_{\Lambda_-,+}$ respectively. Our first result it the construction of the center manifold. \begin{proposition}[Center Manifold]\label{centermanifold} Fix $\Lambda_- = e^{-\mu_-}$ in $\left( \Lambda_s,\Lambda_c \right)$. Let $\varepsilon_{gap}>0$ such that \begin{align}\label{e4} \Lambda_s+\varepsilon_{gap} <\Lambda_-<\Lambda_c-\varepsilon_{gap}\quad \text{ and } \quad\Lambda_{max} +\varepsilon_{gap} < \Lambda_+. \end{align} Choose $\varepsilon\leq \varepsilon^*$ sufficiently small, such that \begin{align}\label{e1} \Lip\left(R_\varepsilon\right) \leq \varepsilon_{gap}. \end{align} (If necessary, choose $\varepsilon^0 \leq \min \left\{\varepsilon, \varepsilon_0\right\}$ even smaller according to Lemma \ref{smoothingestimate}.) 
Then there exists a function $\theta_\varepsilon : E_c \rightarrow E_s$ with $\theta_\varepsilon(0)=0$, that is differentiable at zero with $D\theta_\varepsilon(0)=0$, and the submanifold \begin{align} W_\varepsilon^c \coloneqq \left\{ w_c + \theta_\varepsilon\left(w_c\right) : w_c \in E_c \right\} \end{align} satisfies the following conditions. \begin{enumerate} \item The function $\theta_\varepsilon$ is a contraction with $\Lip\left(\theta_\varepsilon\right)\lesssim \varepsilon_{gap}$ and $\left\|\theta_\varepsilon \left(g_c\right)\right\|_{H} \lesssim \left\|g_c\right\|_{H}^{\alpha}$ for all $g_c \in E_c$ for some $1<\alpha <\frac{\mu_{K+1}}{\mu_-}$. Moreover, it holds that $\left\|\theta_{\varepsilon}(g_c)\right\|_{W}\lesssim \vertiii{g_c}$. \item If the semiflow $\left\{ S_\varepsilon^t\right\}_{t\geq0}$ gets restricted to $W_\varepsilon^c$, it can be extended to an eternal Lipschitz flow on $W_\varepsilon^c$. More precisely, it holds that $S_\varepsilon^t\left(W_\varepsilon^c\right)= W_\varepsilon^c$ for all $t\geq 0$ and for any $g \in W_\varepsilon^c$ there exists a semiflow $\left\{w(t)\right\}_{t\leq0}$ in $W_\varepsilon^c$ with $w(0)=g$. \item The manifold $W_\varepsilon^c$ is characterized as follows: The point $g$ belongs to $W_\varepsilon^c$ if and only if there exists a flow $\left\{w(t)\right\}_{t\in \mathbb{R}}$ with $w(0)=g$ and \begin{align} \left\|w(t)\right\|_{H} \leq \begin{cases} \Lambda_+^t\vertiii{g}\quad \text{ for all } t\geq 0 \\ \Lambda_-^t\vertiii{g}\quad \text{ for all }t \leq 0. \end{cases} \end{align} \end{enumerate} \end{proposition} The Lipschitz constants here and in the following are to be understood for a mappings from $H$ to $H$, if both are equipped with the $\vertiii{\cdot}$ norm. \begin{proof} Our proof relies on the construction in \cite{Kochoncentermanifolds} in many parts. However, with regard to the subtle regularity issues we have to modify the argument and need to establish additional properties. 
For this reason, we give here a self-contained presentation. First, we note that thanks to Lemma \ref{lipschitzR} by choosing $\varepsilon$ sufficiently small, the Lipschitz condition \eqref{e1} on $R_{\varepsilon}$ is realizable. We define $J:E_c\times \ell_{\Lambda_-,\Lambda_+} \rightarrow \ell_{\Lambda_-,\Lambda_+} $ by \begin{align} J_k\left(g_c, \left\{w_l\right\}_{l\in \mathbb{Z}}\right) = \begin{cases} S_\varepsilon\left(w_{k-1}\right)&\text{ if } k\geq 1\\ P_sS_\varepsilon\left(w_{-1}\right)+g_c& \text{ if }k=0\\ P_sS_\varepsilon\left(w_{k-1}\right)+L_c^{-1}P_c \left(w_{k+1}-R_\varepsilon\left(w_k\right) \right)&\text{ if } k\leq -1. \end{cases} \end{align} This mapping is well defined, as we will show \begin{align}\label{e3} \left\|J\left(g_c, \left\{w_l\right\}_{l\in \mathbb{Z}}\right)\right\|_{\Lambda_-,\Lambda_+} \leq \max\left\{ \vertiii{g_c}, \kappa \left\|\left\{w_l\right\}_{l\in\mathbb{Z}}\right\|_{\Lambda_-,\Lambda_+}\right\}, \end{align} with $\kappa \coloneqq \max \left\{ \frac{\Lambda_-+\varepsilon_{gap}}{\Lambda_c}, \frac{\Lambda_{max}+\varepsilon_{gap}}{\Lambda_+}, \frac{\Lambda_s+\varepsilon_{gap}}{\Lambda_-} \right\}.$ This quantity $\kappa$ is strictly smaller than one due to \eqref{e4}. To prove \eqref{e3} for positive times steps, $k\ge 1$, we compute with help of the triangle inequality and properties \eqref{e2} and \eqref{e1} of $L$ and $R_\varepsilon$ \begin{align*} \Lambda_+^{-k}\vertiii{P_sS_\varepsilon\left(w_{k-1}\right)}& \leq \left(\Lambda_+^{-k} \vertiii{L_sP_sw_{k-1}}+\vertiii{P_sR_\varepsilon(w_{k-1})}\right)\\ &\leq \Lambda_+^{-k}\left( \Lambda_s\vertiii{w_{k-1}}+\varepsilon_{gap} \vertiii{w_{k-1}} \right) \leq \frac{\Lambda_s+\varepsilon_{gap}}{\Lambda_+}\left\|\left\{w_l\right\}_{l\in \mathbb{Z}}\right\|_{\Lambda_-,\Lambda_+}. 
\end{align*} We have a similar bound on the projection onto the center manifold, \begin{align} \Lambda_+^{-k}\vertiii{P_cS_\varepsilon(w_{k-1})}\leq \frac{\Lambda_{max}+\varepsilon_{gap}}{\Lambda_+}\left\|\left\{w_l\right\}_{l\in\mathbb{Z}}\right\|_{\Lambda_-,\Lambda_+}. \end{align} The bound for negative time steps, $k\le -1$, is verified in the same manner, namely \begin{align*} \MoveEqLeft \Lambda_-^k\vertiii{P_sS_\varepsilon\left(w_{k-1}\right)+L_c^{-1}P_c \left(w_{k+1}-R_\varepsilon\left(w_k\right) \right)}\\ &\leq \max\left\{ \frac{\Lambda_s+\varepsilon_{gap}}{\Lambda_-},\frac{\Lambda-_+\varepsilon_{gap}}{\Lambda_c}\right\}\left\|\left\{{w_l}\right\}_{l\in \mathbb{Z}}\right\|_{\Lambda_-,\Lambda_+}. \end{align*} And finally, for $k=0$ the same strategy yields \[ \vertiii{P_sS_\varepsilon(w_{-1})+g_c} \leq \max\left\{ \frac{\Lambda_s+\varepsilon_{gap}}{\Lambda_-}\left\|\left\{w_l\right\}_{l\in\mathbb{Z}}\right\|_{\Lambda_-,\Lambda_+}, \vertiii{g_c}\right\}, \] which completes the proof of \eqref{e3}. Making use of the inequalities \eqref{e2} and \eqref{e1} again, we derive similarly that $J(g_c,\cdot)$, for fixed $g_c \in E_c$, is a contraction on $\ell_{\Lambda_-,\Lambda_+}$, that is \begin{align} \left\|J_k\left(g_c, \left\{w_l\right\}_{l\in \mathbb{Z}}\right) - J_k\left(g_c, \left\{\tilde{w}_l\right\}_{l\in \mathbb{Z}}\right)\right\|_{\Lambda_-,\Lambda_+}\leq \kappa \left\|\left\{w_l\right\}_{l\in \mathbb{Z}} - \left\{\tilde{w}_l\right\}_{l\in \mathbb{Z}}\right\|_{\Lambda_-,\Lambda_+}, \end{align} for every $\left\{w_l\right\}_{l\in \mathbb{Z}}$, $\left\{\tilde{w}_l\right\}_{l\in \mathbb{Z}}$ in $\ell_{\Lambda_-,\Lambda_+}$. Hence, by Banach's fixed point theorem, for every element $g_c \in E_c$ there exists a unique sequence $\left\{w_k\right\}_{k\in \mathbb{Z}} \in \ell_{\Lambda_-,\Lambda_+}$ with $J\left(g_c,\left\{w_k\right\}_{k\in \mathbb{Z}}\right) = \left\{w_k\right\}_{k\in \mathbb{Z}}$. 
By construction this fixed point sequence is a solution to the discrete semiflow with $P_cw_0 = g_c$. By the virtue of \eqref{e3}, we also know that $\left\|\left\{w_k\right\}_{k\in \mathbb{Z}}\right\|_{\Lambda_-,\Lambda_+}\leq \vertiii{g_c}$. Now, we define the solution mapping $\hat{\theta}_\varepsilon : E_c \rightarrow \ell_{\Lambda_-,\Lambda_+}$ by $\hat{\theta}_\varepsilon\left(g_c\right) = \left\{w_k\right\}_{k\in \mathbb{Z}}$ and consider $\theta_\varepsilon:E_c\rightarrow E_s$ given by $\theta_\varepsilon\left(g_c\right)= P_sw_0$. In other words, the initial datum of the solution sequence decomposes into $w_0=g_c+\theta_{\varepsilon}\left(g_c\right)$. Since $J(0,0)=0$, we obtain, by the uniqueness of the fixed point, that $\hat{\theta}_\varepsilon(0)=0$ and thus $\theta_\varepsilon(0)=0$. The contraction property, in particular, entails that the solution mapping $\hat{\theta}_\varepsilon$ is Lipschitz continuous with bound $\Lip\left(\hat{\theta}_\varepsilon\right)\leq \frac{1}{1-\kappa}$. Thus, also its ``coordinate'' $\theta_\varepsilon$ is Lipschitz continuous with the same bound. We will need to a stronger bound, in fact, a contraction estimate. For any $g_c$ and $\tilde{g}_c \in E_c$ we have \begin{align} \vertiii{\theta_\varepsilon\left(g_c\right)-\theta_\varepsilon\left(\tilde{g}_c\right)} =\vertiii{P_s\left( S_\varepsilon\left(w_{-1}\right)-S_\varepsilon\left(\tilde{w}_{-1}\right) \right)}, \end{align} where $\left\{w_k\right\}_{k\in \mathbb{Z}} = \hat{\theta}_\varepsilon\left(g_c\right)$ and $\left\{\tilde{w}_k\right\}_{k\in \mathbb{Z}} = \hat{\theta}_\varepsilon\left(\tilde{g}_c\right)$. 
Using the triangle inequality and the properties of $L$ and $R_\varepsilon$, we get for any $k\ge 0$ that \begin{align*} \MoveEqLeft \Lambda_-^k\vertiii{P_s\left(w_{-k}-\tilde{w}_{-k}\right)}\\ & \leq \frac{\Lambda_s}{\Lambda_-}\Lambda_-^{k+1}\vertiii{P_s\left(w_{-(k+1)}-\tilde{w}_{-(k+1)}\right)} + \frac{\varepsilon_{gap}}{\Lambda_-}\Lambda_-^{k+1}\vertiii{w_{-(k+1)}-\tilde{w}_{-(k+1)}}. \end{align*} Applying this inequality iteratively, we obtain \begin{align*} \vertiii{\theta_\varepsilon\left(g_c\right)-\theta_\varepsilon\left(\tilde{g}_c\right)} & = \vertiii{P_s(w_0-\tilde w_0)}\\ &\leq \left(\frac{\Lambda_s}{\Lambda_-}\right)^m\left\|\left\{w_k\right\}_{k\in \mathbb{Z}}- \left\{\tilde{w}_k\right\}_{k\in \mathbb{Z}}\right\|_{\Lambda_-,\Lambda_+}\\ &\quad + \frac{\varepsilon_{gap}}{\Lambda_-}\sum \limits_{l=0}^{m-1}\left(\frac{\Lambda_s}{\Lambda_-}\right)^l\left\|\left\{w_k\right\}_{k\in \mathbb{Z}}- \left\{\tilde{w}_k\right\}_{k\in \mathbb{Z}}\right\|_{\Lambda_-,\Lambda_+} \end{align*} for every $m\in \mathbb{N}$. Sending $m$ to infinity and using the Lipschitz bound for $\hat \theta_{\varepsilon}$ yields \begin{align} \vertiii{\theta_\varepsilon\left(g_c\right)-\theta_\varepsilon\left(\tilde{g}_c\right)} \leq \frac{\varepsilon_{gap}}{\Lambda_--\Lambda_s} \left\|\hat\theta_{\varepsilon}(g_c)-\hat \theta_{\varepsilon}(\tilde g_c)\right\|_{\Lambda_-,\Lambda_+}\le \frac{\varepsilon_{gap}}{\Lambda_--\Lambda_s}\frac{1}{1-\kappa} \vertiii{g_c-\tilde{g}_c}. \end{align} This proves that $\theta_{\varepsilon}$ is Lipschitz with constant $\Lip (\theta_{\varepsilon})\lesssim \varepsilon_{gap}$. We continue by deriving the superlinear behavior of $\theta_\varepsilon$ near zero, which eventually implies the differentiability properties stated in the proposition.
We compute, using the quadratic bound on $R_{\varepsilon}$ in Lemma \ref{quadratischeabschaetzung}, \[ \vertiii{\theta_\varepsilon\left(g_c\right)} = \vertiii{P_sw_0} \leq \vertiii{P_s R_\varepsilon\left(S_\varepsilon\left(w_{-2}\right)\right)} + \vertiii{P_sLw_{-1}} \leq C \vertiii{w_{-2}}^2 + \Lambda_s\vertiii{P_sw_{-1}}. \] Similarly, we get $\vertiii{P_sw_{-k}} \leq C \vertiii{w_{-(k+2)}}^2 + \Lambda_s \vertiii{P_sw_{-(k+1)}}$ for any $k\in \mathbb{N}_0$ and thus, for any $m \in \mathbb{N}$, \begin{align} \vertiii{\theta_\varepsilon\left(g_c\right)} \le \Lambda_s^m \vertiii{w_{-m}} + C\sum \limits_{l=1}^{m}\Lambda_s^{l-1}\vertiii{w_{-(l+1)}}^2. \end{align} Recalling the definition of $\left\|\cdot\right\|_{\Lambda_-,\Lambda_+}$ and the fact that the solution sequence is bounded via \eqref{e3}, $\left\|\left\{w_k\right\}_{k\in\mathbb{Z}}\right\|_{\Lambda_-,\Lambda_+}\leq \vertiii{g_c}$, we obtain \begin{align}\MoveEqLeft \Lambda_s^m \vertiii{w_{-m}} + C\sum \limits_{l=1}^{m}\Lambda_s^{l-1}\vertiii{w_{-(l+1)}}^2 \\ & \leq \left( \frac{\Lambda_s}{\Lambda_-} \right)^m \vertiii{g_c} + \frac{C}{\Lambda_s\Lambda_-^2}\sum \limits_{l=1}^m \frac{\Lambda_s^l}{\Lambda_-^{2l}} \vertiii{g_c}^2 \\ &= \left( \frac{\Lambda_s}{\Lambda_-} \right)^m \vertiii{g_c} + \frac{C}{\Lambda_s\Lambda_-^2}\sum \limits_{l=1}^m \left(\frac{\Lambda_-}{\Lambda_s}\right)^{lk}\left(\frac{\Lambda_s^{k+1}}{\Lambda_-^{k+2}}\right)^l\vertiii{g_c}^2 \end{align} for any $k\in \mathbb{N}$. We recall that $\Lambda_->\Lambda_s$.
Hence, if there exists a $k \in \mathbb{N}$ such that $\Lambda_-^{k+2}> \Lambda_s^{k+1}$, it holds that \begin{align} \vertiii{\theta_\varepsilon\left(g_c\right)} \le \left( \frac{\Lambda_s}{\Lambda_-} \right)^m \vertiii{g_c} + \frac{C}{\Lambda_s\Lambda_-^2}\left(\frac{\Lambda_-}{\Lambda_s}\right)^{km}\frac{\Lambda_s^{k+1}}{\Lambda_-^{k+2} - \Lambda_s^{k+1}}\vertiii{g_c}^2, \end{align} and after optimizing in $m$, that is, choosing $m$ such that both terms on the right-hand side are of the same order, $\left(\Lambda_s/\Lambda_-\right)^{(k+1)m}\sim \vertiii{g_c}$, this becomes \begin{align} \vertiii{\theta_\varepsilon\left(g_c\right)} \lesssim \vertiii{g_c}^{1+\frac{1}{k+1}}, \end{align} provided that the right-hand side is sufficiently small. (For larger $g_c$, this bound follows trivially from the linear estimate.) It remains to verify the existence of a suitable $k$. This, however, follows easily from our choice of $\Lambda_-$, more precisely, from the assumption $\mu_- < \mu_{K+1}<2\mu_-$. Indeed, since $\Lambda_-=e^{-\mu_-}$ and $\Lambda_s=e^{-\mu_{K+1}}$, the condition $\Lambda_-^{k+2}> \Lambda_s^{k+1}$ is equivalent to $(k+2)\mu_-<(k+1)\mu_{K+1}$, and the assumption enables us to pick $k > \frac{2\mu_--\mu_{K+1}}{\mu_{K+1}-\mu_-}$, for which this inequality holds. This proves the first statement with $\alpha = 1+ \frac{1}{k+1}< \frac{\mu_{K+1}}{\mu_-}$. We turn to the last inequality of the first statement. By the definition of $\theta_{\varepsilon}$, the construction of the fixed point and the smoothing estimate from Lemma \ref{smoothingestimate}, we have that \begin{align} \left\|\theta_{\varepsilon}\left(g_c\right)\right\|_{W} \leq \left\|S_\varepsilon\left(w_{-1}\right)\right\|_{W} \lesssim \left\|w_{-1}\right\|_{H}. \end{align} It remains to notice that $\left\|w_{-1}\right\|_{H}\lesssim \vertiii{w_{-1}}\leq \Lambda_-^{-1}\vertiii{g_c}\lesssim \vertiii{g_c}$ by the equivalence of the norms and the bound \eqref{e3} applied to the solution sequence. The second part of the proof covers the properties of the center manifold $W_\varepsilon^c$, which is defined as the graph of $\theta_\varepsilon$. We commence with the invariance of $W_\varepsilon^c$.
For this we consider an arbitrary point on that manifold $g=g_c+\theta_\varepsilon\left(g_c\right)$ and consider the evolution $\left\{w_k\right\}_{k\in \mathbb{Z}} = \left\{S^k_\varepsilon(g)\right\}_{k\in \mathbb{Z}} = \hat{\theta}_{\varepsilon}(g_c)$ starting at that point. We have to show that for every time step $k\in \mathbb{Z}$, the solution $w_k$ lies in $W_\varepsilon^c$, or, equivalently, that $P_sw_k=\theta_{\varepsilon}\left(P_cw_k\right)$. By iteration, it suffices to show this only for $k=1$ and $k=-1$. We set $\tilde{g}_c=P_cw_1$. Then $\left\{\tilde{w}_k\right\}_{k\in \mathbb{Z}} = \hat{\theta}_\varepsilon\left(\tilde{g}_c\right)$ is the unique flow in $\ell_{\Lambda_-,\Lambda_+}$ that satisfies $P_c\tilde{w}_0=\tilde{g}_c$. Since $P_cw_1 = \tilde{g}_c$, we have by uniqueness that $w_{k+1}=\tilde{w}_k$ for every $k\in \mathbb{Z}$. This yields $P_sw_1 = P_s\tilde{w}_0= \theta_{\varepsilon}\left(\tilde{g}_c\right)=\theta_{\varepsilon}\left(P_cw_1\right)$. The same procedure backwards in time yields the statement for $k=-1$. It remains to prove the characterization of the center manifold. First, for a point $w_0$ on that manifold, i.e., $w_0=g_c+\theta_{\varepsilon}\left(g_c\right)$ for some $g_c\in E_c$, we already know that $\left\|\left\{S^k_\varepsilon\left(w_0\right)\right\}_{k\in \mathbb{Z}}\right\|_{\Lambda_-,\Lambda_+} = \left\|\hat{\theta}_\varepsilon\left(g_c\right)\right\|_{\Lambda_-,\Lambda_+} \leq \vertiii{g_c}\leq\vertiii{g}$ by virtue of \eqref{e3}. Otherwise, if a flow $\left\{w_k\right\}_{k\in \mathbb{Z}} = \left\{S^k_\varepsilon\left(w_0\right)\right\}_{k\in\mathbb{Z}}$ satisfies this bound, it must be a fixed point of $J\left(P_cw_0,\cdot\right)$. Since this fixed point is unique, we have $\hat{\theta}_\varepsilon\left(P_cw_0\right)=\left\{w_k\right\}_{k\in \mathbb{Z}}$ and thus $\theta_{\varepsilon}\left(P_cw_0\right) = P_sw_0$. This yields $w_0\in W_\varepsilon^c$.
\end{proof} The regularity of $\theta_{\varepsilon}$ allows us to deduce the equivalence of the Hilbert space norm $\vertiii{\cdot}$ and the higher-order norm $\|\cdot \|_W$ on the finite-dimensional manifold $W_{\varepsilon}^c$. \begin{corollary}\label{equivalenceofnorms} The norms $\vertiii{g}$ and $\|g\|_{W}$ are equivalent for any $g\in W_\varepsilon^c$. \end{corollary} \begin{proof} Trivially, the embedding $W\hookrightarrow H$ is continuous on a bounded domain, that is, $\vertiii{g}\lesssim \|g\|_{W}$ for every $g \in W$. To show the reverse inequality, we take an element $g = g_c +\theta_{\varepsilon}(g_c)$ in $W_\varepsilon^c$. Now, we notice that on the one hand, thanks to the regularity of $\theta_{\varepsilon}$ established in Proposition \ref{centermanifold}, we have $\left\|\theta_{\varepsilon}\left(g_c\right)\right\|_{W} \lesssim \vertiii{g_c}$. On the other hand, because $E_c$ is a finite-dimensional space, all norms on $E_c$ are equivalent, so that $\|g_c\|_W\lesssim \vertiii{g_c}$. We combine both insights and find \[ \|g\|_W \le \|g_c\|_W + \|\theta_{\varepsilon}(g_c)\|_W \lesssim \vertiii{g_c} \le \vertiii{g}, \] as desired. \end{proof} We will now construct the stable manifolds. \begin{proposition}[Stable Manifold]\label{stablemanifold} Let $\varepsilon_{gap}>0$ and $\varepsilon$ be as in Proposition \ref{centermanifold} such that \eqref{e4} and \eqref{e1} hold. Then for every $g\in H$, there exists a map $\nu^\varepsilon_g:E_s\rightarrow E_c$ such that the submanifold \begin{align} M_g^\varepsilon \coloneqq g + \left\{ \nu_g^\varepsilon\left(g_s\right)+g_s : g_s\in E_s\right\} \end{align} satisfies the following conditions. \begin{enumerate} \item For every $g\in H$, the map $\nu_g^\varepsilon :E_s\rightarrow E_c$ is Lipschitz continuous with $\Lip\left(\nu_g^\varepsilon\right)\lesssim \varepsilon_{gap}$.
\item For every $t\geq 0$ it holds that $S^t_\varepsilon\left(M_g^\varepsilon\right) \subseteq M_{S^t_\varepsilon(g)}^\varepsilon$ and $M_g^\varepsilon$ can be characterized as follows: \begin{align} M_g^\varepsilon = \left\{ \tilde{g}\in H : \sup \limits_{k\in \mathbb{N}_0} \Lambda_-^{-k}\vertiii{S^k_\varepsilon\left(g\right)- S^k_\varepsilon\left(\tilde{g}\right)} \leq \vertiii{P_s\left(g-\tilde{g}\right)} \right\}. \end{align} \item If $\varepsilon_{gap}$ is sufficiently small (and $\varepsilon^0$ chosen accordingly), the following holds true: For every $g\in H$ the intersection $M_g^\varepsilon\cap W_\varepsilon^c$ consists of a single point $\tilde{g}$. In particular, this yields that $\left\{M_g^\varepsilon\right\}_{g\in H}$ is a foliation of $H$ over $W_\varepsilon^c$. Moreover, it holds that \begin{align} \left\|\tilde{g}\right\|_{W} \lesssim \vertiii{g}. \end{align} \end{enumerate} \end{proposition} \begin{proof} The existence follows again by a fixed point argument, which is similar to the one of Proposition \ref{centermanifold}. We will thus only sketch it. We fix a function $g\in H$ and a positive constant $r$ and define \[ \ell_{\Lambda_-,+}^{g,r}\coloneqq \left\{ \left\{w_l\right\}_{l\in \mathbb{N}_0}\in \left(L^2(\rho)\right)^{\mathbb{N}_0}: \left\|\left\{w_l\right\}_{l\in\mathbb{N}_0}-\left\{S_\varepsilon^l(g)\right\}_{l\in \mathbb{N}_0}\right\|_{\Lambda_{-},+}\le r\right\}. \] Note that $\ell_{\Lambda_-,+}^{g,r}$ equipped with the metric \[ d_g\left(\left\{w_l\right\}_{l\in \mathbb{N}_0},\left\{\tilde{w}_l\right\}_{l\in \mathbb{N}_0}\right) = \left\|\left\{w_l\right\}_{l\in \mathbb{N}_0}-\left\{\tilde{w}_l\right\}_{l\in \mathbb{N}_0}\right\|_{\Lambda_{-},+} \] is a closed subset of $\left(L^2(\rho)\right)^{\mathbb{N}_0}$.
We consider the map $I^g:E_s \times \ell_{\Lambda_-,+}^{g,r} \rightarrow \ell_{\Lambda_-,+}^{g,r}$ defined by \begin{align} I^g_k\left( g_s,\left\{w_l\right\}_{l\in \mathbb{N}_0} \right) = \begin{cases} g_s+P_sg+L_c^{-1}P_c\left( w_1-R_\varepsilon\left(w_0\right) \right)& \text{ if } k=0\\ P_s S_\varepsilon\left(w_{k-1}\right)+L_c^{-1}P_c\left(w_{k+1}-R_\varepsilon\left(w_k\right) \right)& \text{ if } k\geq 1, \end{cases} \end{align} which has the following useful property \begin{align}\label{e6} I^g_k\left( g_s, \left\{S_\varepsilon^l(g)\right\}_{l\in \mathbb{N}_0}\right) - S_\varepsilon^k(g) = g_s\delta_{0k}. \end{align} Moreover, by arguments similar to those for the operator $J$ in the proof of Proposition \ref{centermanifold}, relying on \eqref{e2} and \eqref{e1}, we compute for a fixed element $g_s\in E_s$ that \[ \left\|I^g\left(g_s, \left\{ w_l \right\}_{l\in\mathbb{N}_0}\right)-I^g\left(g_s, \left\{\tilde{w}_l\right\}_{l\in \mathbb{N}_0} \right)\right\|_{\Lambda_-,+} \leq \kappa \left\|\left\{ w_l \right\}_{l\in\mathbb{N}_0}-\left\{\tilde{w}_l\right\}_{l\in \mathbb{N}_0}\right\|_{\Lambda_-,+} \] and \begin{align*} \MoveEqLeft \left\|I^g\left(g_s,\left\{w_l\right\}_{l\in\mathbb{N}_0}\right)-\left\{S^l_\varepsilon(g)\right\}\right\|_{\Lambda_-,+} \\ & \leq \max\left\{\vertiii{g_s},\kappa\left\|\left\{w_l\right\}_{l\in\mathbb{N}_0}-\left\{S^l_\varepsilon(g)\right\}\right\|_{\Lambda_-,+}\right\}, \end{align*} where $\kappa = \max \left\{ \frac{\Lambda_- + \varepsilon_{gap}}{\Lambda_c}, \frac{\Lambda_s+\varepsilon_{gap}}{\Lambda_-}\right\}<1$. Notice that in the latter estimate, we made use of the formula \eqref{e6}. Both estimates imply that $I^g(g_s,\cdot)$ is a contraction and a self-mapping on the set $\ell_{\Lambda_-,+}^{g,r}$, provided we choose $r=\vertiii{g_s}$.
Hence, by Banach's fixed point theorem there exists a unique sequence $\left\{w_k\right\}_{k\in \mathbb{N}_0}$ satisfying \begin{align} I^g\left(g_s, \left\{ w_k \right\}_{k\in\mathbb{N}_0}\right)= \left\{w_k\right\}_{k\in \mathbb{N}_0} \quad \text{ and } \quad \left\|\left\{w_k\right\}_{k\in \mathbb{N}_0} - \left\{S_\varepsilon^k(g)\right\}_{k\in \mathbb{N}_0}\right\|_{\Lambda_-,+}\leq r. \end{align} By construction, this sequence $\left\{w_k\right\}_{k\in \mathbb{N}_0}$ is a semiflow to the truncated equation with $P_sw_0= g_s+P_sg$. We may now introduce a solution mapping $\hat{\nu}^\varepsilon_g:E_s \rightarrow \left(L^2(\rho)\right)^{\mathbb{N}_0}$ by $\hat{\nu}_g^\varepsilon\left(g_s\right) \coloneqq \left\{w_k\right\}_{k\in \mathbb{N}_0}$, and we define $\nu_g^\varepsilon\left(g_s\right)=P_c\left(w_0-g\right)$. Due to the construction via a fixed point argument, we deduce that $\hat{\nu}_g^\varepsilon$ is Lipschitz continuous with $\Lip\left(\hat{\nu}^\varepsilon_g\right)\leq \frac{1}{1-\kappa}$. We will improve the Lipschitz constant in a similar way as in the previous proof. For this, let $\hat{\nu}_g^\varepsilon\left(g_s\right) = \left\{w_k\right\}_{k\in\mathbb{N}_0}$ and $\hat{\nu}_g^\varepsilon\left(\tilde{g}_s\right) = \left\{ \tilde{w}_k\right\}_{k\in\mathbb{N}_0}$ be two fixed point solution sequences. It holds that $ \nu_g^\varepsilon\left(g_s\right)-\nu_g^\varepsilon\left(\tilde{g}_s\right) = P_c\left(w_0- \tilde{w}_0\right) $, and we compute \[ \vertiii{P_c\left(w_k-\tilde{w}_k\right)} \leq\frac{1}{\Lambda_c}\vertiii{P_c\left(w_{k+1}-\tilde{w}_{k+1}\right)} + \frac{\varepsilon_{gap}}{\Lambda_c} \vertiii{w_k-\tilde{w}_k} \] with the help of the definition of the map $I^g$.
Therefore, for every $m\in \mathbb{N}$ it holds \begin{align} \vertiii{\nu_g^\varepsilon\left(g_s\right)-\nu_g^\varepsilon\left(\tilde{g}_s\right)} &\leq \left(\frac{\Lambda_-}{\Lambda_c}\right)^m \left\|\left\{w_k\right\}_{k\in\mathbb{N}_0}-\left\{\tilde{w}_k\right\}_{k\in\mathbb{N}_0}\right\|_{\Lambda_-,+} \\ &\quad + \frac{\varepsilon_{gap}}{\Lambda_c}\sum \limits_{l=0}^{m-1} \left(\frac{\Lambda_-}{\Lambda_c}\right)^l \left\|\left\{w_k\right\}_{k\in\mathbb{N}_0}-\left\{\tilde{w}_k\right\}_{k\in\mathbb{N}_0}\right\|_{\Lambda_-,+}. \end{align} Since $\frac{\Lambda_-}{\Lambda_c}<1$, letting $m\rightarrow \infty$, this yields \begin{align} \vertiii{\nu_g^\varepsilon\left(g_s\right)-\nu_g^\varepsilon\left(\tilde{g}_s\right)} \leq \frac{\varepsilon_{gap}}{\left(\Lambda_c-\Lambda_-\right)\left(1-\kappa\right)}\vertiii{g_s-\tilde{g}_s}. \end{align} The stable manifold $M_g^\varepsilon$ is defined as the graph of $\nu_g^\varepsilon$ shifted by $g$. We first prove its characterization as stated in the second part of the proposition. Let $\tilde{g}$ be in $M_g^\varepsilon$, i.e., $\tilde{g}= g+\nu_g^\varepsilon\left(g_s\right)+g_s$ for some $g_s \in E_s$. We define $\left\{w_k\right\}_{k\in \mathbb{N}_0}=\hat{\nu}_g^\varepsilon\left(g_s\right)$ as the unique semiflow with $\Lambda_-^{-k}\vertiii{w_k-S^k_\varepsilon(g)}\leq \vertiii{g_s}=\vertiii{P_s\left(g-\tilde{g}\right)}$ and $P_sw_0=g_s+P_sg$. By definition of $\nu_g^\varepsilon$, we have \begin{align} \tilde{g}=g+\nu_g^\varepsilon\left(g_s\right)+g_s = g+P_c\left(w_0-g\right)+P_sw_0-P_sg =w_0, \end{align} and thus, $w_k= S^k_\varepsilon\left(\tilde{g}\right) $ satisfies the desired bound. Let us now assume that $S^k_\varepsilon\left(\tilde{g}\right)$ satisfies this bound. We define $g_s=P_s\left( \tilde{g}-g\right)$.
Then $S^k_\varepsilon\left(\tilde{g}\right)$ is the unique fixed point of $I^g\left(g_s,\cdot\right)$ with $\left\|S^k_\varepsilon\left(\tilde{g}\right)-S^k_\varepsilon(g)\right\|_{\Lambda_-,+}\leq \vertiii{g_s}$. By definition, this yields $S^k_\varepsilon\left(\tilde{g}\right)= \hat{\nu}_g^\varepsilon\left(g_s\right)$ and $\nu_g^\varepsilon\left(g_s\right)=P_c\left(\tilde{g}-g\right)$ and thus \begin{align} g+\nu_g^\varepsilon\left(g_s\right)+g_s= g + P_c\left(\tilde{g}-g\right) + P_s\left(\tilde{g}-g\right)=\tilde{g}. \end{align} Next, we have to verify that $M_g^\varepsilon$ is positively invariant. For this, we take an arbitrary point $w_0$ in $M_g^\varepsilon$ and define $\tilde{w}_0=S_\varepsilon(w_0)$. We straightforwardly compute that $S^k_\varepsilon\left(\tilde{w}_0\right)$ is a fixed point of $I^{S_\varepsilon(w_0)}\left(0, \cdot\right)$, which implies the desired property. To prove that there exists a single intersection point with the center manifold $W_{\varepsilon}^c$, we consider the mapping $\chi(g_s) = \theta_{\varepsilon}(\nu^{\varepsilon}_g(g_s-P_sg)+P_cg)$ on $E_s$. Since $\theta_{\varepsilon}$ and $\nu_g^{\varepsilon}$ are both Lipschitz continuous with constant of order $\varepsilon_{gap}$, the mapping $\chi$ itself is Lipschitz with a constant of the order $\varepsilon_{gap}^2$, and thus, it is a contraction if $\varepsilon_{gap}$ is sufficiently small. We denote by $\tilde g_s$ the unique fixed point and set $\tilde g_c = \nu_g^{\varepsilon}(\tilde g_s - P_sg)+P_cg$. By definition, $\tilde g = \tilde g_c+\tilde g_s$ lies in the intersection of $W^c_{\varepsilon} $ and $M_g^{\varepsilon}$. As the stable component of every point in this intersection is a fixed point of $\chi$, the uniqueness follows. To estimate the intersection point $\tilde g$ against $g$, we argue similarly.
Indeed, by construction, by the Lipschitz property for $\nu_g^{\varepsilon}$, and by the fact that both $\theta_{\varepsilon}(0)=0$ and $\nu_g^{\varepsilon}(0)=0$, it holds that \begin{align*} \vertiii{\tilde g} & = \vertiii{\nu_g^{\varepsilon} (\tilde g_s -P_s g) +P_c g + \theta_{\varepsilon}(\nu_g^{\varepsilon} (\tilde g_s -P_s g) +P_c g)}\\ &\lesssim (1+\varepsilon_{gap}) \vertiii{\nu_g^{\varepsilon} (\tilde g_s -P_s g) +P_c g}\\ &\lesssim \varepsilon_{gap} \vertiii{\tilde g_s -P_sg} + \vertiii{g}\\ &\lesssim \varepsilon_{gap} \vertiii{\tilde g} + \vertiii{g}, \end{align*} where we have used that $\varepsilon_{gap}\le 1$ in the third inequality. We arrive at \[ \vertiii{\tilde g} \lesssim \vertiii{g}, \] provided that $\varepsilon_{gap}$ is sufficiently small. Because $\tilde{g}$ lies on the manifold $W_\varepsilon^c$, we can make use of Corollary \ref{equivalenceofnorms} to obtain $\left\|\tilde{g}\right\|_{W} \lesssim\vertiii{g}$. \end{proof} Finally, we are able to show the existence of a localized invariant manifold as claimed in Theorem \ref{localmanifolds} by combining the two preceding constructions with the previously established regularity properties of the flow map $S$. \begin{proof}[Proof of Theorem \ref{localmanifolds}] We choose $0<\varepsilon_{gap}<\min \left\{e^{-\mu}-e^{-\mu_{K+1}}, e^{-\mu_K}-e^{-\mu}\right\}$, such that the third statement in Proposition \ref{stablemanifold} applies, and we define $\Lambda_-=e^{-\mu}$. We furthermore pick $\varepsilon\leq \varepsilon^*$ and $\varepsilon^0 \leq \min \left\{\varepsilon,\varepsilon_0\right\}$ as in the hypotheses of Propositions \ref{centermanifold} and \ref{stablemanifold}. The construction of $W_{loc}^c$ then follows directly from Proposition \ref{centermanifold}.
To prove the first property in the theorem, we consider $g\in W_{loc}^c$ with $\|g\|_H\le\varepsilon_0$ and we notice that by the semi-flow property from Theorem \ref{wellposednessH1}, it holds that $\left\|S^t_\varepsilon(g)\right\|_{L^\infty\left(\left(0,\infty\right);H\right)} \leq \tilde{C} \varepsilon_0$ for some $\tilde C\geq 1$. Moreover, $S^t_\varepsilon(g)\in W_{\varepsilon}^c$ by construction, and thus, by the equivalence of norms in Corollary \ref{equivalenceofnorms}, it holds that $\|S^t_\varepsilon(g)\|_{W}\leq C \|S^t_\varepsilon(g)\|_{H} \le C\tilde C \varepsilon_0$ for some $C\geq 1$. Thus, for $\varepsilon_0\leq \frac{1}{C\tilde{C}}\varepsilon$, we find $S^t(g) = S^t_{\varepsilon}(g)$ by the definition of the truncation and, in particular, $\|S^t(g)\|_H\le \varepsilon$, for any $t\ge0$. We turn to the proof of the second property. We know that there exists a unique point $\tilde{g}$ in $W_\varepsilon^c\cap M_g^\varepsilon$ that satisfies $\left\|\tilde{g}\right\|_{H}\lesssim \left\|\tilde{g}\right\|_{W} \lesssim \left\|g\right\|_{H}\lesssim \|g\|_{W}\leq \varepsilon_0$, see Proposition \ref{stablemanifold}. In particular, choosing $\varepsilon_0 \leq \varepsilon$ even smaller, if necessary, it holds that $S^k_{\varepsilon}(g) = S^k(g)$ and $S^k_{\varepsilon}(\tilde g) = S^k(\tilde g)$. Moreover, the estimate shows that $\tilde{g}$ actually lies in $W_{loc}^c$. Now, the characterization of the stable manifold yields \begin{align} \left\|S^k(g)-S^k\left(\tilde{g}\right)\right\|_{H} \lesssim \Lambda_-^k.
\end{align} Since we are allowed to drop the $\varepsilon$ in $S^t_\varepsilon(g)$ and $S^t_\varepsilon\left(\tilde{g}\right)$, and since the solution to the (truncated) equation depends continuously on the initial datum with respect to the Hilbert space topology, in the sense that $\left\|S^t_\varepsilon(g)-S^t_\varepsilon\left(\tilde{g}\right)\right\|_H \lesssim \left\|g-\tilde{g}\right\|_H$ for all $t\in [0,1]$ (see the fixed point construction of solutions in Theorem \ref{wellposednessH1}), we obtain \begin{align} \left\|S^t(g)-S^t\left(\tilde{g}\right)\right\|_H\lesssim e^{-\mu t} \end{align} for any $t\geq 0$. Next, we make use of Lemma \ref{Lipschitzglaettung} and obtain \[ \left\|S^{t}\left(g\right)-S^{t}\left(\tilde{g}\right)\right\|_{W} \lesssim e^{-\mu t}, \] for any $t\geq \hat{t}$ and some $\hat t\in(4/5,1)$. The statement follows. \end{proof} \section{Mode-by-mode asymptotics for the perturbation equation}\label{applicationinvariantmanifolds} In this final section, we exploit our invariant manifold theorem, Theorem \ref{localmanifolds}, to prove the mode-by-mode asymptotics in Theorem \ref{Whoeheremoden}. We start with a brief comment on the projection of a function $w \in H$ onto the subspaces spanned by the eigenfunctions of $\mathcal{L}^2+N\mathcal{L}$. Let $\psi$ be such an eigenfunction for the eigenvalue $\lambda^2 + N\lambda$, or, equivalently, $\L\psi =\lambda\psi$. We consider the $H$-projection of $w$ and find via an integration by parts \begin{align} \langle \psi,w\rangle_{H}&= \int \psi w \rho dz + \int \nabla\psi \cdot \nabla w \rho^2dz \\ &= \int \psi w \rho dz + \int w\mathcal{L}\psi \rho dz = \left(1+\lambda\right)\int \psi w \rho dz = (1+\lambda)\langle \psi,w\rangle. \end{align} This shows that the $H$-projection coincides, up to a constant, with the $L^2(\rho)$-projection, due to the right choice of the weights. Thus, it is enough to consider the projection with respect to $\langle \cdot,\cdot\rangle$ in the following.
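In fact, the constant can be made explicit. Applying the identity above to $w=\psi$ gives $\left\|\psi\right\|^2_{H} = (1+\lambda)\langle \psi,\psi\rangle$, so that the $H$-orthogonal projection onto the span of a single eigenfunction $\psi$ reads \begin{align*} \frac{\langle \psi,w\rangle_{H}}{\left\|\psi\right\|^2_{H}}\,\psi = \frac{(1+\lambda)\langle \psi,w\rangle}{(1+\lambda)\langle \psi,\psi\rangle}\,\psi = \frac{\langle \psi,w\rangle}{\langle \psi,\psi\rangle}\,\psi, \end{align*} which is precisely the $L^2(\rho)$-orthogonal projection onto the same span.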
We notice that the projection of $w$ onto the space spanned by the constant eigenfunction corresponding to the eigenvalue $\mu_0=0$ is given by \begin{align} P_0w= c_{0,N}\int_{B_1(0)} w\rho dz \end{align} and the projection of $w$ onto the eigenspace spanned by the eigenfunctions corresponding to the next eigenvalue $\mu_1$ is given by \begin{align} P_1w = c_{1,N}\int_{B_1(0)} zw\rho dz, \end{align} where $c_{0,N}$ and $c_{1,N}$ are two positive constants. Eventually, we will prove Theorem \ref{Whoeheremoden} by induction and thus commence by proving the case $K=0$ in the following theorem. We remark that thanks to smoothing effects, see Equation (54) in \cite{SeisTFE}, it holds that \[ \|w(t)\|_{W}\le \|w_0\|_{W^{1,\infty}}, \] for some $t\gtrsim 1$, and thus, instead of considering Lipschitz initial data, we may impose slightly stronger assumptions. \begin{theorem}\label{stabilityw} There exists $\varepsilon_0>0$ such that the following holds. Let $w$ be a solution to \eqref{perturbationequation} with initial datum $w_0$. We further assume that $\|w_0\|_{W} \leq \varepsilon_0$ and \begin{align}\label{h11} \lim\limits_{t\rightarrow \infty} \int w(t,z)\rho(z)dz=0. \end{align} Then we have \begin{align} \left\|w(t)\right\|_{W} \lesssim e^{-\mu_{1}t} \quad \text{ for all } t\geq 0. \end{align} \end{theorem} \begin{proof} We will make use of the invariant manifolds we just constructed in the case $K=0$. In this case, $E_c$ is one-dimensional and spanned by the constant eigenfunction $\psi_{0,1}$ corresponding to the eigenvalue $\mu_0=0$. Thus, we obtain $E_c \cong \mathbb{R}$. We fix $\mu \in (0,\mu_{1})$ and accordingly $\varepsilon$ and $\varepsilon_0$ as in Theorem \ref{localmanifolds} and claim the equality \begin{align}\label{claim} W_{\varepsilon}^c= E_c. \end{align} To see this, we first pick a function $g \in E_c$, i.e., $g(x)= \alpha \in \mathbb{R}$.
The constant function $w(t,x)\equiv \alpha$ solves equation \eqref{c3} with initial datum $g$ and satisfies the bounds \begin{align} \|w(t)\|\sim |\alpha| \lesssim \begin{cases} \Lambda_+^t |\alpha|, \quad \text{ for } t\geq 0,\\ \Lambda_-^t |\alpha|, \quad \text{ for } t\leq 0. \end{cases} \end{align} By the characterization of the center manifold, we deduce $g \in W_{\varepsilon}^c$. Now let $g=g_c+\theta_{\varepsilon}(g_c)$ be a function in $ W_{\varepsilon}^c$. From above we know $E_c\subset W_{\varepsilon}^c$, and thus $g_c \in W_{\varepsilon}^c$. This forces $\theta_{\varepsilon}(g_c)=0$, which proves the claim \eqref{claim}. Let us now consider an initial datum $w_0$ with $\|w_0\|_{W} \le \varepsilon_0$ and let $w(t)=S^t(w_0)$ be the corresponding solution to the perturbation equation. The Invariant Manifold Theorem \ref{localmanifolds} combined with the characterization \eqref{claim} yields the existence of a constant $a$ with \begin{align}\label{g1} \|w(t)-a\|_H\lesssim \left\|w(t)-a\right\|_{W}\lesssim e^{-\mu t},\quad \text{ for } t\geq 1. \end{align} In particular, if $a(t)$ denotes the average of $w(t)$ or, in other words, the projection onto $E_c$, $a(t)=P_c w(t)= \fint w(t)\rho dz$, it holds that $|a(t)-a| \lesssim e^{-\mu t}$. Invoking the hypothesis \eqref{h11}, this estimate entails that $a=0$. We want to improve on the decay rate of $a(t)$. We note that $a(t)$ solves the equation \begin{align} \frac{d}{dt}a(t)= \fint \left(\nabla \cdot \left(\rho^2F[w]\right)+\rho^2F[w]\right)dz = \fint \rho^2F[w]dz. \end{align} The nonlinear term $\rho F[w]$ consists of linear combinations of products of two factors of the form $\nabla w$, $\rho \nabla^2w$ or $\rho^2 \nabla^3w$, cf.~\eqref{106}. Thus, we obtain the estimate $|\rho F[w]| \lesssim \|w\|^2_{\dot{W}}$, where we consider only the homogeneous part of the norm.
From \eqref{g1} we already know that $\|w(t)\|_{\dot{W}}\lesssim e^{-\mu t}$ for $t\geq 1.$ We conclude \begin{align} \left|\frac{d}{dt}a(t)\right|\lesssim e^{-2\mu t} \end{align} for $t\geq 1$. We integrate over the time interval $(t,\infty)$ and recall the assumption \eqref{h11} to obtain \begin{align}\label{110} |a(t)| \lesssim e^{-2\mu t} \quad \text{ for } t\geq 1. \end{align} As we may choose $\mu$ larger than $\frac{1}{2}\mu_{1}$, it remains to gain suitable control over the projection of $w(t)$ onto $E_s\cong H/\mathbb{R}$, namely $P_sw(t) = w(t)-a(t)$. We note that $P_sw$ solves the equation \begin{align} \partial_tP_sw +\left( \mathcal{L}^2+N\mathcal{L} \right)P_sw = P_s\left( \frac{1}{\rho}\nabla \cdot\left(\rho^2F[w]\right)+\rho F[w] \right). \end{align} Since the eigenfunctions $\left\{\psi_i\right\}_{i\in \mathbb{N}_0}$ form an orthogonal basis of $H$, it holds that $\langle P_sw,\left(\mathcal{L}^2+N\mathcal{L}\right)P_sw \rangle_H \geq \mu_{1} \|P_sw\|^2_{H}$ and thus, arguing similarly as in the proof of Theorem \ref{wellposednessH1}, we find that \begin{align} \MoveEqLeft \frac{1}{2}\frac{d}{dt}\left\|P_sw\right\|^2_{H} +\mu_{1}\left\|P_sw\right\|^2_{H}\\ &\leq -\langle \nabla P_sw,\rho F[w]\rangle - \langle \nabla\L P_sw,\rho F[w]\rangle + \langle P_sw,\rho F[w]\rangle +\langle \L P_s w,\rho F[w]\rangle\\ & \lesssim \|P_sw \|_{W} \|\rho F[w]\|_{L^\infty}\\ &\lesssim\left( \|w\|_W + |a(t)| \right) \|\rho F[w]\|_{L^\infty}. \end{align} Thanks to the uniform estimates on the nonlinearities that we quoted above and the bound in \eqref{g1}, we observe that the right-hand side decays at least like $e^{-3\mu t}$. Therefore, the latter estimate translates into \begin{align} \frac{d}{dt}\left(e^{2\mu_{1} t} \left\|P_sw\right\|^2_{H}\right)\lesssim e^{\left(2\mu_{1}- 3\mu\right)t}, \end{align} for any $t\ge 1$.
The right-hand side is integrable, provided that we choose $\mu$ sufficiently close to $\mu_{1}$, so that $3\mu >2\mu_1$. Integration in time yields \begin{align} \left\|P_sw(t)\right\|^2_{H} \lesssim e^{-2\mu_1 t} \quad \text{ for } t\geq 1. \end{align} In combination with our estimate on the average, \eqref{110}, this bound gives \begin{align} \left\|w(t)\right\|_{H} \leq \left\|P_sw(t)\right\|_{H}+\left|a(t)\right| \lesssim e^{-\mu_1 t}\quad \text{ for } t\geq 1. \end{align} We take into account Lemma \ref{smoothingestimate} to finally obtain the statement of the theorem, noting that the result is trivial for $t\lesssim 1$. \end{proof} \begin{remark}\label{g2} Using the final result of Theorem \ref{stabilityw}, we are able to improve the convergence rate of $a(t)$ to $|a(t)|\lesssim e^{-2\mu_1t}$ for all $t\geq 0$. \end{remark} Having already proved the part of Theorem \ref{Whoeheremoden} concerning the smallest eigenvalue, we are now able to deduce the full statement with an analogous approach. \begin{proof}[Proof of Theorem \ref{Whoeheremoden}] We prove this theorem by induction. The base case $K=0$ is proved in the preceding theorem. Now, we may assume that \eqref{h2} holds true and additionally \begin{align}\label{h3} \left\|w(t)\right\|_{W}\lesssim e^{-\mu_{K}t} \text{ for all } t\geq 0. \end{align} This directly implies $|\rho F[w]|\lesssim e^{-2\mu_Kt}$. We will again exploit the invariant manifolds in a similar way as in the base case. The center eigenspace takes the form $E_c = \Span\big \{\psi_{k,n}\, |\, k\in\left\{0,\dots,K\right\} \text{ and } n\in \left\{1,\dots,N_k\right\} \big\}.$ We fix $\mu \in \left(\mu_{K},\mu_{K+1}\right)$ and accordingly $\varepsilon$ and $\varepsilon_0$ as in Theorem \ref{localmanifolds}.
We deduce the existence of $\tilde w_0 \in W_{loc}^c$ such that $\tilde{w}(t)=S^t(\tilde w_0 )\in W_{loc}^c$ satisfies \begin{align}\label{g3} \left\|w(t)-\tilde{w}(t)\right\|_{W}\lesssim e^{-\mu t}\quad \text{ for all } t\geq 1, \end{align} where $\tilde{w}(t) = P_c\tilde{w}(t)+ \theta_\varepsilon\left(P_c\tilde{w}(t)\right)$ with $P_c\tilde{w}(t) = \sum \limits_{n,k} \langle \tilde{w}(t),\psi_{k,n}\rangle \psi_{k,n}$. Now, we fix an arbitrary $k\in \left\{0,\dots,K\right\}$ and consider the projection of $w$ onto one of the eigenfunctions $\psi_{k,n}$. We obtain the ordinary differential equation \begin{align} \frac{d}{dt}\langle \psi_{k,n},w(t)\rangle +\mu_k\langle \psi_{k,n},w(t)\rangle =-\langle \nabla\psi_{k,n},\rho F[w(t)]\rangle +\langle \psi_{k,n},\rho F[w(t)]\rangle \quad \text{ for all }t\geq 0, \end{align} which implies $\left|\frac{d}{dt}e^{\mu_kt}\langle \psi_{k,n},w(t)\rangle \right| \lesssim e^{-\left( 2\mu_K-\mu_k \right)t}$ due to the bound on $|\rho F[w]|$. We notice that $\lim \limits_{t\rightarrow \infty} e^{\mu_kt}\langle \psi_{k,n},w(t)\rangle $ exists and vanishes by virtue of assumption \eqref{h2}. We conclude that \begin{align}\label{h4} |\langle\psi_{k,n},w(t)\rangle |\lesssim e^{-2\mu_Kt}\quad \text{ for all }t\geq 0. \end{align} This yields $\|P_c w(t)\|_W\lesssim e^{-2\mu_Kt} $ and enables us to estimate the center part of $\tilde{w}(t)$ with the help of \eqref{g3} and the triangle inequality, namely \begin{align} \left\|P_c\tilde{w}(t)\right\|_W \leq \left\|P_c \left(w(t)-\tilde{w}(t)\right)\right\|_W + \left\|P_cw(t)\right\|_W\lesssim e^{-\min\left\{2\mu_K, \mu \right\}t} \end{align} for all $t\geq 1$. Thanks to the regularity property of $\theta_\varepsilon$ derived in the first part of Proposition \ref{centermanifold} we deduce \begin{align} \left\|\theta_\varepsilon\left(P_c\tilde{w}(t)\right)\right\|_W\lesssim e^{-\min\left\{2\mu_K, \mu \right\}t} \quad \text{ for all }t\geq 1.
\end{align} Combining the previous estimates, we have \begin{align}\label{p1} \left\|w(t)\right\|_{W}& \leq \left\|w(t)-\tilde{w}(t)\right\|_{W} + \left\|P_c\tilde{w}(t)\right\|_W + \left\|\theta_\varepsilon\left(P_c\tilde{w}(t)\right)\right\|_W\\ &\lesssim e^{-\min\left\{2\mu_K, \mu \right\}t} \quad \text{ for all }t\geq 1. \end{align} We note that \eqref{p1} gives a better rate than \eqref{h3}. Due to the structure of the eigenvalues it may happen, depending on $K$ and the space dimension $N$, that $2\mu_K <\mu$. In this case, inequality \eqref{p1} reduces to $\left\|w(t)\right\|_W\lesssim e^{-2\mu_K t}$. Similarly, in this case the estimate for the center part of $w(t)$, that is $P_cw(t)$, is also not good enough, as we want to prove $|\langle \psi_{k,n},w(t)\rangle | \lesssim e^{-\mu_{K+1}t}$. We overcome this problem by repeating the first step of this proof, now from the starting point \eqref{p1} instead of \eqref{h2}, which directly yields $|\rho F[w]|\lesssim e^{-4\mu_Kt}$. If $2\mu_K \le \mu_{K+1}$, we deduce via iteration that \begin{align}\label{p2} \left\|P_cw(t)\right\|_W \lesssim e^{-2^m\mu_K t} \quad \text{ for all } t\geq 1 \end{align} and \begin{align}\label{p3} \|w(t)\|_W\lesssim e^{-\mu t} \quad \text{ for all }t\geq 1, \end{align} where $m$ is the smallest natural number that satisfies $\mu_K \leq 2^{m-1}\mu_K<\mu <\mu_{K+1} \leq2^m\mu_K$. We remark that we are allowed to choose $\mu$ sufficiently close to $\mu_{K+1}$. In the case $2\mu_K \geq \mu_{K+1}$, we may directly continue from estimate \eqref{p1}, which corresponds to $m=1$. To achieve the rate $\mu_{K+1}$, we investigate the projection of $w(t)$ onto $E_s$.
Similarly to the previous proof, testing the equation solved by $P_sw$ with $\rho P_sw$ yields \begin{align}\MoveEqLeft \frac{1}{2}\frac{d}{dt}\left\|P_sw(t)\right\|^2_{H}+\mu_{K+1}\left\|P_sw(t)\right\|^2_{H}\\ &\le- \langle \nabla P_sw,\rho F[w]\rangle - \langle \nabla\L P_sw,\rho F[w]\rangle + \langle P_sw,\rho F[w]\rangle +\langle \L P_s w,\rho F[w]\rangle\\ & \lesssim \|P_sw \|_{W} \|\rho F[w]\|_{L^\infty}\lesssim e^{-3\mu t} \quad \text{ for all }t\geq 1, \end{align} where we used \eqref{p3} and the quadratic behavior of $\rho F[w]$. Just like in the previous proofs, choosing $\mu$ large enough such that $3\mu >2\mu_{K+1}$, we obtain \begin{align} \left\|P_sw(t)\right\|^2_{H}\lesssim e^{-2\mu_{K+1}t} \quad \text{ for all }t\geq 1 \end{align} and in total \begin{align} \left\|w(t)\right\|_{H}\leq \left\|P_cw(t)\right\|_{H}+\left\|P_sw(t)\right\|_{H}\lesssim e^{-2^m\mu_Kt}+e^{-\mu_{K+1}t}\lesssim e^{-\mu_{K+1}t}\quad \text{ for all }t\geq 1. \end{align} To carry this result over to the $W$-norm it remains to make use of the smoothing estimate in Lemma \ref{smoothingestimate}, noting again that the result is trivial for $t\lesssim 1$. \end{proof}
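As a small arithmetic illustration of the bootstrap above (with hypothetical eigenvalues, not values taken from the theorem), the number of iterations $m$ is the smallest natural number with $\mu_{K+1}\leq 2^m\mu_K$; a minimal sketch:

```python
def bootstrap_steps(mu_K, mu_K1):
    """Smallest natural number m with 2**m * mu_K >= mu_K1, i.e. the
    number of passes through the bootstrap argument needed before the
    doubled decay rate of the center part overtakes mu_{K+1}.
    Illustration only; mu_K and mu_K1 are placeholder values."""
    assert 0 < mu_K < mu_K1
    m, rate = 0, mu_K
    while rate < mu_K1:
        rate *= 2  # each pass doubles the exponential decay rate
        m += 1
    return m
```

For instance, $\mu_K=1$ and $\mu_{K+1}=3.5$ require two passes, while the case $2\mu_K\geq\mu_{K+1}$ corresponds to $m=1$.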
\section{Introduction} A number of methods have been developed to find clusters, or densely linked communities, in networks. To mention a few, there are clustering algorithms based on link betweenness, number of in-cluster links, random walks, the spectrum of the connectivity matrix (see a review \cite{newman} and \cite{newmanq, newmang}), and ordering of spin models \cite{blatt, spirin, stefan}. Yet often a need arises to go beyond the existing clustering algorithms as new kinds of communities and networks are analyzed. Our initial goal was to find protein complexes and functional modules in protein-protein binding networks. Proteins in a complex link together to simultaneously perform a certain function, while members of a functional module sequentially participate in the same cellular process \cite{spirin}. Both types of clusters usually consist of 10-40 proteins that are more strongly linked with each other than with the rest of the network. Since certain proteins are known to perform functions ubiquitous to several modules, network communities may overlap. We consider protein-protein binding networks of baker's yeast and fruit fly, each consisting of $\sim 10^{3}$ vertices and $\sim 10^{3} - 10^{4}$ links \cite{ito,uetz,fly}. These networks are assembled from the data obtained in Yeast 2-Hybrid (Y2H) high-throughput experiments. Such networks are known to be rather noisy and incomplete, that is, to contain a number of links that do not occur naturally and to miss a noticeable fraction of existing links. Thus it is hard to estimate the precise number of links and nodes that comprise a given protein cluster in such a dataset. Protein binding networks are sparse, so that the probability for an arbitrary pair of nodes to be linked is $\sim 10^{-3}$. While it is assumed that the link density inside a cluster is higher than the average, the precise magnitude of the link density contrast is unknown.
Overall, the link density contrast in these networks is relatively low: The largest completely connected subgraph, or clique, contains only four and five vertices in the yeast and fly networks, respectively. In addition, since many proteins function on their own, there are parts of the network that do not belong to any cluster at all. To summarize, we looked for an {\it a priori} unknown number of possibly overlapping mesoscopic clusters in a sparse network with a low link density contrast. Unfortunately, we were unable to detect a sufficient number of such clusters using any of the existing algorithms. Crucial limitations of many of the available network clustering methods are discussed in \cite{stefan}. For example, for our purposes we ruled out the Q-optimization algorithm by Newman \cite{newmanq}, as in its earliest steps it connects all vertices with a single neighbor (leaves) to their neighbors, thus making it impossible to select only densely linked clusters such as cliques. Similarly, the clustering algorithm based on consecutively cutting the links with the highest betweenness \cite{newmang} produces leafy branches, as the links leading to leaves have the lowest betweenness and are the last to be cut. A finite-temperature ordering of the Potts model used in \cite{spirin} to detect protein communities yields in our case only a single very large ($\approx 500$ vertices) cluster. The main reason for this failure of the finite-temperature Potts model clustering is a difference in the networks: In addition to the links from Y2H experiments, the network analyzed in \cite{spirin} contained data obtained using other methods such as mass spectroscopy, where protein complexes are often recorded as cliques. A clustering method based on annealing of a ferromagnetic Potts model with a global antiferromagnetic term \cite{stefan} performs somewhat better; yet it still did not allow us to find the expected number of mesoscopic communities.
However, a generalization of the last two approaches enabled us to detect a large number of candidates for protein complexes and modules of the desired size. In the following section we discuss the methods developed in \cite{blatt, spirin, stefan} in more detail and introduce our clustering algorithm. In Section III we discuss the implementation of the algorithm and the averaging used to check the robustness of the found complexes, and present examples. A discussion and a brief summary conclude the paper. \section{Ordering of Potts model on a network} First, consider a $q$-state ferromagnetic Potts model on a network. Each vertex is assigned a state $\sigma$ (often called a spin) that may have any integer value between one and $q$. The energy of the system is equal to the number of links that connect pairs of vertices in the same state, so that the Hamiltonian reads \begin{equation} \label{h} H= - \sum_{\{i,j\}\in E} \delta_{\sigma_i, \sigma_j}, \end{equation} where the sum runs over all edges and the coupling constant is set equal to one. Evidently, in the ground state all connected vertices are in the same Potts state. Equilibration at a low but finite temperature $T$ results in a mosaic of sets of same-state vertices, interpreted as network communities \cite{blatt, spirin}. Usually performed in the canonical ensemble, such finite-temperature equilibration minimizes the free energy $F=H-TS$. The entropy $S$ can be qualitatively approximated by its mean-field form (see, for example, \cite{wu}), \begin{equation} \label{s} S_{MF}= N\ln N - \sum_{s=1}^q n_s \ln n_s, \qquad N=\sum_{s=1}^q n_s. \end{equation} Here $n_s$ is the number of vertices in state $s$ and $N$ is the total number of vertices in the network. This approximation sets an upper limit on the actual entropy of the network Potts model.
Yet it illustrates the process of equilibration as a competition between the energy term $H$, which favors condensation of all spins into a single state ($n_i=N,\; n_{j}=0,\; j\neq i$), and an entropic term $T\sum_{s=1}^q n_s \ln n_s$, which favors a completely disordered configuration ($n_s=N/q,\; s=1,\ldots,q$). A similar competition between ordering and disordering trends defines the structure of the ground state of the Potts model with a global antiferromagnetic term suggested in Ref.~\cite{stefan}, \begin{equation} \label{hs} H'= - \sum_{\{i,j\}\in E} \delta_{\sigma_i, \sigma_j}+ \gamma \sum_{s=1}^q \frac{n_s(n_s-1)}{2}, \end{equation} where $\gamma$ is an antiferromagnetic coupling constant. To generalize, the ordering in both the finite-temperature Potts model and the zero-temperature model (\ref{hs}) corresponds to minimization of the expression \begin{equation} \label{mh} \tilde H = - \sum_{\{i,j\}\in E} \delta_{\sigma_i, \sigma_j}+ \sum_{s=1}^q n_s f(n_s), \end{equation} with \begin{equation} \label{f} f(x)= \cases{T \ln(x) & {\rm in\; the\; finite-T \; Potts \; model,} \\ \gamma x /2 & {\rm in \;the \;model \;of\; Ref.~\cite{stefan}.}\\} \end{equation} Terms that depend only on $N$ are left out. The role of the temperature in these two cases is somewhat different: In the former case the temperature is used as an effective disordering (antiferromagnetic) parameter, while in the latter case it is a means to anneal the system into a sufficiently low-energy configuration. It seems natural to interpret the two forms of $f(x)$ in (\ref{f}) as two particular cases of some general antiferromagnetic penalty function with more than one parameter. Furthermore, the existence of only a single adjustable parameter in both cases (\ref{f}) often does not allow one to control properties such as the size and link density of the clusters.
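To make the competition in (\ref{mh}) concrete, the following minimal sketch (a hypothetical toy graph, not data from the paper) evaluates $\tilde H$ for the two penalty functions in (\ref{f}):

```python
import math
from collections import Counter

def potts_energy(edges, spins, f):
    """Generalized Potts energy of Eq. (mh): ferromagnetic link term
    plus the global penalty sum_s n_s * f(n_s) over state populations."""
    ferro = -sum(1 for i, j in edges if spins[i] == spins[j])
    return ferro + sum(n * f(n) for n in Counter(spins.values()).values())

# Toy graph: a triangle {0,1,2} in one state, pendant vertex 3 in another.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
spins = {0: 1, 1: 1, 2: 1, 3: 2}

T, gamma = 0.5, 0.1  # placeholder parameter values
e_finite_T = potts_energy(edges, spins, lambda n: T * math.log(n))  # finite-T Potts
e_linear = potts_energy(edges, spins, lambda n: gamma * n / 2)      # linear penalty
```

Only the relative weight of the penalty term matters for which configuration minimizes $\tilde H$; the two choices of $f$ differ in how fast that penalty grows with the state population.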
As we observed, the Potts model \cite{blatt, spirin} on Y2H protein networks at a certain temperature exhibits a sharp transition from a completely disordered state to a state consisting of a single large ordered component (containing $\sim 10\%$ or more of all vertices), with the rest of the system remaining disordered. A possible interpretation of such large-scale ordering is that the dependence of the disordering (entropy) term on the number of spins in each state is weak (logarithmic), so a large increase in cluster size does not carry a sufficient free energy penalty. Indeed, the modified Potts model (\ref{hs}), where the dependence of the anti-clustering term on the number of spins in each state is stronger (linear), yielded several smaller clusters. Evidently, the form of $f(x)$ defines the sizes of ordered clusters: The faster $f(x)$ increases with $x$, the more strongly large clusters are suppressed. In order to overcome the limitations of the existing Potts model clustering methods, it appears natural to go beyond the two particular forms of $f$ (\ref{f}). We consider the generalized Potts Hamiltonian (\ref{mh}) with a global antiferromagnetic term that has two adjustable parameters, \begin{equation} \label{us} f(x)=\gamma x^{\alpha},\; \alpha > 0. \end{equation} The clustering methods of \cite{blatt,spirin} and \cite{stefan} correspond to the $\alpha\rightarrow 0^+$ and $\alpha=1$ cases, respectively. A smaller $\alpha$ produces larger communities, while a larger $\alpha$ results in a higher number of smaller clusters. In either case $\gamma$ should be sufficiently small to observe any ordering at all. To illustrate the clustering, consider the evolution of a single ordered mesoscopic community of $n_1$ vertices. We assume that the number of Potts states $q$ is much larger than the number of communities and that the bulk of the network remains disordered, so that $n_i=(N-n_1)/(q-1)$, $i=2,\ldots,q$.
The antiferromagnetic term for this configuration reads \begin{equation} \label{eaf} H_{AF} = \gamma \sum_{s=1}^q n_s^{\alpha+1} = \gamma \left[ n_1^{\alpha + 1} + (q-1) \left(\frac{N-n_1}{q-1}\right)^{\alpha+1}\right]. \end{equation} The community continues to grow while the number of links $\Delta L$ brought into the community by $\Delta n_1$ added vertices (usually $\Delta n_1=1$) exceeds the antiferromagnetic cost of such a vertex addition, that is, \begin{equation} \label{dl} \frac{\Delta L}{\Delta n_1} \ge \gamma(\alpha+1) \left[ n_1^{\alpha} - \left(\frac{N-n_1}{q-1}\right)^{\alpha}\right]. \end{equation} By adjusting $\gamma$ and $\alpha$ it is possible to detect communities of a desired size and link density. Evidently, any finite-temperature system has a certain degree of disorder and, consequently, non-zero entropy. However, the contribution of the entropy term to the free energy can be made arbitrarily small by annealing the system to a sufficiently low temperature. Comparing the entropy and the antiferromagnetic terms, we estimate the threshold temperature $T^*\approx \gamma n^{\alpha}/\ln n$. Below $T^*$ the equilibrium ordering of clusters of size $n$ and larger is controlled by the competition between only the ferromagnetic and antiferromagnetic couplings, leaving the entropic term irrelevant. This is of course only a qualitative estimate, as it is based on a mean-field approximation for the entropy (\ref{s}). \section{Implementation and Averaging} To make the antiferromagnetic term work, the disordered equilibrium population of a state, $N/q$, should be significantly less than the size of the smallest cluster we need to detect. We set $N/5\le q \le N/3$, and experimentally determine the optimal values of $\gamma$ and $\alpha$. For the Y2H networks \cite{ito, uetz, fly} these numbers are $0.002 \le \gamma \le 0.02$ and $1 \le \alpha \le 2$.
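With parameters in these ranges, the growth condition (\ref{dl}) can be checked numerically. A minimal sketch (the specific numbers below are illustrative assumptions, not fitted values):

```python
def af_cost_per_vertex(n1, N, q, gamma, alpha):
    """Right-hand side of Eq. (dl): the marginal antiferromagnetic cost
    of adding one vertex to an ordered community of n1 vertices, with
    the rest of the network spread evenly over the other q-1 states."""
    return gamma * (alpha + 1) * (n1**alpha - ((N - n1) / (q - 1))**alpha)

# Illustrative values roughly in the quoted ranges for a Y2H-sized network:
N, q, gamma, alpha = 3689, 1000, 0.01, 1.5
cost_small = af_cost_per_vertex(10, N, q, gamma, alpha)
cost_large = af_cost_per_vertex(40, N, q, gamma, alpha)
# The cost grows with community size: a vertex bringing two links into a
# 10-vertex community is accepted, while the same vertex is rejected by
# a 40-vertex community.
```

This is precisely the mechanism that caps the community size at a scale set by $\gamma$ and $\alpha$.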
In general, we observed that for a typically sparse protein binding network where $2L/N^2 \sim 10^{-3}$, it is convenient to start with $\alpha=1$ as in \cite{stefan}, adjust $\gamma\sim 10^{-2}$ to produce a reasonable number of clusters, and then fine-tune both $\alpha$ and $\gamma$ to focus on the desired cluster size and link density. To illustrate this process, cluster abundance vs cluster size plots are presented in Fig.~\ref{fig1} for three sets of $(\alpha, \gamma)$. \begin{figure} \includegraphics[width=.5\textwidth]{fig1.eps} \caption{\label{fig1} Cluster size histogram for the fly network. The cluster abundance for $\alpha=1.5$ and $\gamma=10^{-2}$ (red dotted line) strongly peaks around the desired community size, $n\approx 15$, while the histogram for the same $\alpha$ and smaller $\gamma=10^{-3}$ (dashed black line) consists of a smaller and broader peak at much larger clusters, $n\approx 50$. While clustering with a smaller antiferromagnetic exponent, $\alpha=0.5$ and $\gamma=0.2$ (green solid line), also produces a cluster size distribution with a maximum at the desired cluster size, $n \approx 15$, the number of such clusters is noticeably smaller than in the $\alpha=1.5$ and $\gamma=10^{-2}$ case, and very large (up to $n=200$) biologically-irrelevant clusters are produced. Only clusters consisting of $n>8$ vertices and $L>2n$ links are counted; the results are averaged over 50 equilibration runs.} \end{figure} Similarly to \cite{stefan}, the network is initialized with randomly assigned spins and then gradually annealed to $T \ll T^*$. At each annealing step the state of a randomly picked spin is updated according to the Metropolis rules; each spin is visited on average $Cq$ times, with $C\sim 10$. After such isothermal equilibration the temperature is reduced by a small fraction (usually 1--2\%). The algorithm is fairly fast; its performance scales as $Nq$. Naturally, each run produces a distinct set of clusters.
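The annealing procedure just described can be sketched as follows (a minimal, unoptimized illustration; the schedule constants are assumptions and the bookkeeping is simplified relative to the actual implementation):

```python
import math
import random
from collections import Counter

def anneal_potts(adj, q, gamma, alpha, T0=1.0, cooling=0.98, stages=100, seed=0):
    """Metropolis annealing of the generalized Potts model (Eqs. mh, us).
    adj maps each vertex to the set of its neighbors; returns the final
    spin assignment."""
    rng = random.Random(seed)
    nodes = list(adj)
    spins = {v: rng.randrange(q) for v in nodes}
    counts = Counter(spins.values())
    T = T0
    for _ in range(stages):
        for _ in range(len(nodes)):  # one isothermal sweep
            v = rng.choice(nodes)
            old, new = spins[v], rng.randrange(q)
            if new == old:
                continue
            # change in the ferromagnetic link term
            d_link = sum(1 for u in adj[v] if spins[u] == old) \
                   - sum(1 for u in adj[v] if spins[u] == new)
            # change in the global term gamma * sum_s n_s^(alpha+1)
            n_old, n_new = counts[old], counts[new]
            d_af = gamma * ((n_new + 1)**(alpha + 1) - n_new**(alpha + 1)
                            + (n_old - 1)**(alpha + 1) - n_old**(alpha + 1))
            dE = d_link + d_af
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                spins[v] = new
                counts[old] -= 1
                counts[new] += 1
        T *= cooling  # reduce the temperature by a small fraction

    return spins

# Toy usage: two triangles joined by a single link.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
labels = anneal_potts(adj, q=4, gamma=0.01, alpha=1.0)
```

Each Metropolis proposal only needs the local link count and the two affected state populations, which is consistent with the mild $Nq$ scaling of the algorithm.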
In some sense, all clusters of the expected size that contain a sufficient number of links are good as is, since their high link density makes them equally good candidates for protein complexes. However, certain communities are reproduced in practically all runs, while others are not so robust. Such lack of robustness often has the following explanation: There may exist a set of vertices that contribute similar numbers of links if brought into a cluster. However, in each run only a fraction of these vertices is included into the cluster due to the rapidly growing antiferromagnetic cost (\ref{dl}). Alternating membership of such vertices in a cluster results in its poor reproducibility. To study the robustness of clusters more systematically, we average the results of many annealing runs. Along with the averaging methods and cluster merging algorithms used in \cite{blatt, stefan}, we utilize the following procedure. In each run the ``ordered'' links that connect same-state vertices are marked. As a result, each link carries an order parameter $\psi \le 1$ equal to the fraction of runs in which this link was ordered. For a community obtained in a particular annealing run, the order parameter $\bar\psi$, averaged over all in-community links, characterizes the reproducibility of the community. It was not uncommon to see communities with $\bar\psi= 0.5$ and higher. In each run we were able to detect 5 -- 15 (in the baker's yeast network) and 15 -- 30 (in the fruit fly network) communities of $n>10$ vertices and $L\geq 2n$ in-community links. Examples of candidates for protein complexes revealed by this algorithm are shown in Fig.~\ref{fig2}. \begin{figure} \includegraphics[width=.5\textwidth]{fig2.eps} \includegraphics[width=.5\textwidth]{fig2a.eps} \includegraphics[width=.5\textwidth]{fig2b.eps} \caption{\label{fig2} Top to bottom: An mRNA splicing complex in the yeast network. Two examples of densely linked clusters in the fly network.
On top is the mini-chromosome maintenance complex. Clustered vertices, marked with blue haloes, are shown together with their nearest neighbors. Note that there is only a single link between the neighbors. On the bottom is a cluster formed around five recently duplicated and thus highly similar (paralogous) heat-shock proteins. The large link density is produced by the duplicated (paralogous) links from the heat-shock proteins to their partners. The yeast network is a union of the data from Refs.~\cite{ito,uetz} and consists of $N=3689$ vertices and $L=5551$ links. The fly network is taken from Ref.~\cite{fly} and spans $N=6954$ vertices with $L=20435$ links.} \end{figure} \section{Discussion and conclusions} Revealing the intrinsic connection between the finite-$T$ Potts ordering and the zero-$T$ Potts clustering with an additional antiferromagnetic coupling \cite{stefan}, we developed a fast method for detecting mesoscopic-size communities in sparse networks. Our method is a natural generalization of the algorithms introduced in \cite{blatt, stefan} and is based on the Potts model with a two-parameter global antiferromagnetic term (\ref{mh},\ref{us}). Applying the method to the protein binding networks of the fruit fly and baker's yeast, we were able to detect more than a hundred densely interlinked communities that included strong candidates for not yet annotated protein complexes and functional modules. The form of the antiferromagnetic term that allows one to achieve the desired mesoscopic clustering is by no means limited to the power law suggested here. In principle, any monotonically increasing function with a tunable rate of growth will suffice. Yet the form (\ref{us}) has the strong advantage of probably being the simplest one, with explicit growth control in the form of the exponent $\alpha$. \section*{Acknowledgment} This work was supported by grant 1 R01 GM068954-01 from NIGMS. \section*{References}
\section{Introduction} \label{sec:introduction} \IEEEPARstart{M}{uch} progress has been made in both understanding and replicating animal locomotion through robotics. Successful implementations include biologically-inspired controllers such as Central Pattern Generators (CPGs)~\cite{ijspeert2008,Fukuoka2003,owaki2013simple,owaki2017quadruped}, model-based control~\cite{dicarlo2018mpc,kim2019highly,sombolestan2021adaptive,bellicoso2018dynamic}, and learning-based approaches~\cite{tan2018minitaur,hwangbo2019anymal,kumar2021rma,lee2020anymal}. However, despite these successes, the exact processes animals use to learn and produce motion remain unclear, especially related to how higher parts of the brain interact with CPGs in the spinal cord. The agility of robots also does not yet match that of animals. In this work, we draw inspiration from several of these areas to achieve legged locomotion by combining biology-inspired oscillators with the strength of deep learning. \subsection{Related Work} \subsubsection{Biology-Inspired Control} Central Pattern Generators are neural circuits located in the spinal cords of vertebrate animals that are capable of producing coordinated patterns of high-dimensional rhythmic output signals; their existence is supported by evidence from nature, and they have been experimentally validated on various robot hardware~\cite{ijspeert2008}. Among quadrupeds, the combination of CPGs, sensory input, reflexes, and mechanical design has led to adaptive dynamic walking on irregular terrain~\cite{Fukuoka2003}. Other works have focused on mechanical design inspired by biology for dynamic trot gaits~\cite{sprowitz2013cheetah}. Owaki et al.~show evidence that gait generation and transitions can occur through simple force feedback, without any explicit coupling between oscillators~\cite{owaki2013simple,owaki2017quadruped}. Righetti and Ijspeert added feedback from touch sensors to stop or accelerate transitions between swing/stance phases~\cite{righetti08}.
The oscillators can also be formed in task space and mapped to joint commands with inverse kinematics, with additional control for push recovery and attitude control~\cite{barasuol2013reactive}. Ajallooeian et al.~augmented CPGs with both reflexes and Virtual Model Control~\cite{mos2013cat,mos2013oncilla}, and Hyun et al. presented a hierarchical controller leveraging proprioceptive impedance control for highly dynamic trot running~\cite{hyun2014trot}. Sensory feedback has not just been limited to proprioceptive information, for example Gay et al. used both a camera and gyroscope as feedback to a CPG to learn to walk over varying terrains~\cite{gay2013learncpg}. \begin{figure}[!t] \centering \includegraphics[width=0.5\linewidth]{figs/rough_snapshot.png}\includegraphics[width=0.5\linewidth]{figs/mass_snapshot.png} \\ \vspace{.008in} \includegraphics[width=0.25\linewidth]{figs/intro_snap_1.png}\includegraphics[width=0.25\linewidth]{figs/intro_snap_2.png}\includegraphics[width=0.25\linewidth]{figs/intro_snap_3.png}\includegraphics[width=0.25\linewidth]{figs/intro_snap_4.png} \\ \vspace{.008in} \includegraphics[width=\linewidth]{figs/intro_isaac_snapshot.png}\\ \vspace{-0.5em} \caption{Quadruped locomotion with CPG-RL. \textbf{Top:} crossing uneven terrain (left), and walking at 0.8 \texttt{m/s} with a 13.75 \texttt{kg} load representing 115\% of the nominal quadruped mass (right). \textbf{Bottom:} user-specified body height modulation to crawl underneath a ledge, both experiment and simulation. } \label{fig:intro} \vspace{-2.0em} \end{figure} \subsubsection{Model-Based Control} Conventional control approaches have shown impressive performance for real world robot locomotion \cite{dicarlo2018mpc,kim2019highly,sombolestan2021adaptive,bellicoso2018dynamic}. 
Such methods typically rely on solving online optimization problems (MPC) using simplified dynamics models, which require significant domain knowledge, and may not generalize to new environments not explicitly considered during development (e.g. uneven slippery terrain). There is also considerable bias in the resulting motions due to, for example, pre-determined foot swing trajectories and the use of Raibert heuristics~\cite{raibert1986legged}, which are based on a spring-loaded inverted pendulum model, for foot placement planning. \subsubsection{Learning-Based Control} Deep reinforcement learning (DRL) has emerged as an attractive solution to train control policies that are robust to external disturbances and unseen environments in sim-to-real. Similarly to model-based control, recent works have shown successful results in training ``blind'' control policies, with only proprioceptive sensing being mapped to joint commands. For quadrupedal robots, such approaches have resulted in successful locomotion controllers on both flat~\cite{tan2018minitaur,hwangbo2019anymal} and rough terrain~\cite{kumar2021rma,lee2020anymal}. Other works have shown the possibility of directly imitating animal motions~\cite{peng2020laikagoimitation}, and the emergence of different gaits through minimizing energy consumption with DRL~\cite{fu2022energy}. To improve on the reactive controllers trained only with proprioceptive sensing as input, recent works are integrating vision into the deep reinforcement learning framework. This has allowed for obstacle avoidance~\cite{yang2022learning} as well as more dynamic crossing of rough terrain through the use of sampling from height maps~\cite{miki2022learning,rudin2022anymalisaac}. Additional works have shown gap crossing~\cite{yu2022visual}, also with full flight phases learned from pixels and leveraging MPC~\cite{margolis2022pixels}.
\subsubsection{Action Space and Phase in DRL} Directly mapping from proprioceptive sensing to joint space commands remains a challenging problem, even for deep networks and millions of training samples. In addition to the complexity of specifying desired motions like foot swing height, there are difficulties with the sim-to-real transfer arising from unmodeled dynamics such as the actuators. To better structure the learning problem and avoid undesired local optima, recent works are exploring different action spaces (for example directly learning torques~\cite{chen2022learning}, or desired task space positions~\cite{bellegardaIROS19TaskSpaceRL,bellegarda2020robust,bellegarda2021robust}), and in particular encoding a time component (phase) into the framework, such as Policies Modulating Trajectory Generators (PMTG) for Minitaur~\cite{iscen2018policies}, or for ANYmal~\cite{lee2020anymal,miki2022learning}. Additionally, incorporating phase encodings facilitates learning different gaits~\cite{shao2022gait}, and can also be used together with MPC~\cite{yang2022fast}. Learning with more biologically inspired approaches that make explicit use of CPGs has also been shown in simulation for bipeds~\cite{li2013,kasaei2021cpg}. Li et al.~use reinforcement learning to directly learn the feedback terms of a CPG for a biped to tackle different slopes~\cite{li2013}. Kasaei et al.~use DRL to update the parameters of a CPG-ZMP walk engine and output joint target residuals to adapt to commands and different terrains~\cite{kasaei2021cpg}. For quadrupeds, Shi et al.~learn both the parameters of a foot trajectory generator as well as joint target residuals to locomote in a variety of environments including stairs and slopes~\cite{shi2021reinforcement}.
\subsection{Contribution} In contrast to previous work, in this paper we propose to use deep reinforcement learning to directly learn the time-varying intrinsic amplitude and frequency of each oscillator, which together form the Central Pattern Generator. We implement the CPG network with one oscillator per limb, but without explicit couplings between oscillators, similarly to the work of Owaki et al.~\cite{owaki2017quadruped}. Couplings between oscillators are known to exist in biological CPGs, but recent work has shown that they might be weaker than previously thought~\cite{owaki2017quadruped,thandiackal2021emergence}, and that sensory feedback and descending modulation might play an important role in inter-oscillator synchronization. This biology-inspired approach to locomotion additionally mitigates several drawbacks usually encountered with both CPGs and deep reinforcement learning. On the CPG side, parameter tuning remains difficult, often requiring hand-tuning by an expert or the use of genetic algorithms. The tuning of these parameters usually results in only one gait, whereas animals exhibit a continuum of motions at different speeds and directions. Specified (strong) couplings also result in rigid and not completely natural gaits, and sensory feedback and reflexes must then also be tuned and added on top of the CPG. On the other hand, when applying deep reinforcement learning algorithms for control tasks, mapping from sensory information to joint commands often results in non-intuitive motions, with great care being needed to properly tune reward functions (e.g., how to specify a desired swing foot height). Additionally, the sim-to-real transfer presents added difficulties from unmodeled dynamics and possible overfitting of simulator physics engine parameters. In this work we address all of the above difficulties, and train and deploy our CPG-RL controller on the Unitree A1 quadruped, shown in Figure~\ref{fig:intro}.
Some highlights of our method include: \begin{itemize} \item fast training and ease of sim-to-real transfer (i.e. without any domain randomization or added noise in simulation) \item a minimal amount of sensory information needed in the observation space (i.e. feedback only from foot contact booleans) \item on-the-fly parameter selection to achieve locomotion at user-specified body heights and foot swing heights, without any needed specification in the DRL framework \item robustness to disturbances not seen in training, i.e. uneven terrain, and a dynamically added 13.75 \texttt{kg} load representing 115\% of the nominal quadruped mass \end{itemize} The rest of this paper is organized as follows. Section~\ref{sec:background} provides background details on deep reinforcement learning and Central Pattern Generators. Section~\ref{sec:method} describes our design choices and integration of Central Pattern Generators into the deep reinforcement learning framework (CPG-RL). Section~\ref{sec:result} shows results and analysis from learning our controller and sim-to-sim and sim-to-real transfers for several scenarios including minimal observation space, online modulation of body height and swing foot ground clearance, and significant disturbances in terrain and added load. Finally, a brief conclusion is given in Section~\ref{sec:conclusion}. \section{Background} \label{sec:background} \subsection{Reinforcement Learning} In the reinforcement learning framework~\cite{sutton1998rl}, an agent interacts with an environment modeled as a Markov Decision Process (MDP).
An MDP is given by a 4-tuple $(\mathcal{S,A,P,R})$, where $\mathcal{S}$ is the set of states, $\mathcal{A}$ is the set of actions available to the agent, $\mathcal{P}: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \rightarrow \mathbb{R}$ is the transition function, where $\mathcal{P}(s_{t+1} | s_t, a_t)$ gives the probability of being in state $s_t$, taking action $a_t$, and ending up in state $s_{t+1}$, and $\mathcal{R}: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \rightarrow \mathbb{R}$ is the reward function, where $\mathcal{R}(s_t,a_t,s_{t+1})$ gives the expected reward for being in state $s_t$, taking action $a_t$, and ending in state $s_{t+1}$. The agent's goal is to learn a policy $\pi$ that maximizes its expected return for a given task. \subsection{Central Pattern Generators} \label{sec:cpg_intro} While a number of neural oscillators have been proposed to implement CPG circuits, here we focus on the amplitude-controlled phase oscillators from~\cite{ijspeert2007salamander} used to produce both swimming and walking behaviors on a salamander robot: \begin{align} \ddot{r}_i &= a \left(\frac{a}{4} \left(\mu_i - r_i \right) - \dot{r}_i \right) \label{eq:salamander_r} \\ \dot{\theta}_i &= \omega_i +\sum_{j}^{} r_j w_{ij} \sin(\theta_j - \theta_i - \phi_{ij}) \label{eq:salamander_theta} \end{align} where $r_i$ is the current amplitude of the oscillator, $\theta_i$ is the phase of the oscillator, $\mu_i$ and $\omega_i$ are the intrinsic amplitude and frequency, and $a$ is a positive constant representing the convergence factor. Couplings between oscillators are defined by the weights $w_{ij}$ and phase biases $\phi_{ij}$. Regardless of the particular oscillator selection, several choices exist for the number of oscillators in the network, as well as how to map the output back to joint commands.
For example, one oscillator can be used for each joint to directly produce motion in joint space~\cite{ijspeert2007salamander,sprowitz2013cheetah}, or the oscillator can be formed in task space and mapped to joint commands with inverse kinematics~\cite{righetti08}. Thus, this formulation results in numerous design decisions and parameters that must be tuned, commonly by hand or through evolutionary algorithms such as CMA-ES~\cite{hansen2016cma}. Notably, this tuning generally results in specific fixed gaits which may not be robust to external disturbances. \section{Learning Central Pattern Generators} \label{sec:method} In this section we describe our CPG-integrated reinforcement learning framework and design decisions for learning locomotion controllers for quadruped robots. At a high level, the agent learns to modulate the CPG parameters for each leg with deep reinforcement learning. This allows for adaptation based on sensory feedback along with the knowledge of the current CPG state. This type of learning represents an approximation of motor learning in animals, namely how higher brain centers such as the motor cortex and the cerebellum learn to send modulation signals to CPG circuits in the spinal cord. The control diagram for our method is shown in Figure~\ref{fig:control_diagram}, and we discuss all components below. \subsection{Action Space} \label{sec:action_space} \begin{figure*}[!t] \centering \includegraphics[width=0.77\linewidth]{figs/cpgrl_control_diagram.pdf} \vspace{-0.7em} \caption{The CPG-RL control architecture. Given an observation consisting of velocity commands, a subset of proprioceptive measurements, and the current CPG states, the policy network selects CPG parameters $\mu$, $\omega$, and $\psi$ for each leg $i$ (Front Left (FL), Front Right (FR), Hind Left (HL), Hind Right (HR)). 
The CPG states are mapped to desired foot positions $\bm{p}_d$, which are then converted to desired joint angles with inverse kinematics, and finally tracked with joint PD control to produce torques $\bm{\tau}$. The control policy selects actions at 100 Hz, and all other blocks operate at 1 kHz.} \label{fig:control_diagram} \vspace{-1.7em} \end{figure*} \begin{figure}[!tpb] \centering \includegraphics[width=0.85\linewidth]{figs/cpg_leg_traj.pdf} \\ \vspace{-0.6em} \caption{Mapping the CPG states to Cartesian foot positions. At left in the XZ-plane: ground clearance ($g_c$), ground penetration ($g_p$), max step length ($d_{step}$) are design parameters, whereas CPG states $r$ and $\theta$ control amplitude and frequency. Two trajectories are shown representing the mapping for two converged amplitude set points, $\mu$, which the agent can directly modulate to vary the step length. At right, CPG state $\phi$ controls the orientation of the trajectory with respect to the body, shown for the Front Right (FR) leg.} \label{fig:cpg_traj} \vspace{-2.2em} \end{figure} We first consider the coupled oscillators in Equations~\ref{eq:salamander_r} and~\ref{eq:salamander_theta} (one for each leg, or $i \in \{1,2,3,4\}$), whose output we wish to map to foot trajectories in Cartesian space similar to~\cite{righetti08}. Such a strategy poses two issues: 1) the coupling severely biases the rhythm into potentially unnatural motions, and 2) with only two variables the motions are restricted to a single plane. Inspired by animals, which frequently deviate and transition between various gaits and produce omnidirectional motions, we solve both issues by removing any explicit coupling ($w_{ij}=0$), with the intuition that the agent should learn to coordinate its limbs on its own (as opposed to being fixed by the CPG coupling topology), and we add another state variable, $\phi$, to orient motion in any direction (see right of Figure~\ref{fig:cpg_traj}). 
Thus, for each limb we define the following oscillator: \begin{align} \ddot{r}_i &= a \left(\frac{a}{4} \left(\mu_i - r_i \right) - \dot{r}_i \right) \label{eq:rl_r}\\ \dot{\theta}_i &= \omega_i \label{eq:rl_theta}\\ \dot{\phi}_i &= \psi_i \label{eq:rl_phi} \end{align} In contrast to recent works making use of phase in reinforcement learning, we propose an action space to directly modulate the intrinsic oscillator amplitudes and frequencies, by learning to modulate $\mu_i$, $\omega_i$, and $\psi_i$ for each leg. This allows the agent to adapt each of these states online in real-time (for instance increasing the amplitude to step farther or accelerating a leg movement during swing), compared with the more traditional CPG approach of optimizing for only a single set of fixed parameters. Additionally, we hypothesize that this framework will lead to more interpretable results, as we can directly observe how the agent modulates the oscillators depending on feedback from both the environment as well as from the current oscillator states (for example in contrast to directly learning joint commands). Thus, for the omnidirectional locomotion task, our action space can be summarized as $\vec{a} = [\bm{\mu}, \bm{\omega}, \bm{\psi}] \in \mathbb{R}^{12}$. The agent selects these parameters at 100 Hz, and we use the following limits for each input during training: $\mu \in [1, 2]$, $\omega \in [0, 4.5]$ Hz, and $\psi \in [-1.5, 1.5]$ Hz; the convergence factor is $a=150$. To map from the oscillator states to joint commands, we first compute corresponding desired foot positions, and then calculate the desired joint positions with inverse kinematics.
The desired foot position coordinates are given as follows: \begin{align} x_{i,\text{foot}} &= -d_{step} (r_i-1) \cos(\theta_i) \cos(\phi_i) \\ y_{i,\text{foot}} &= -d_{step} (r_i-1) \cos(\theta_i) \sin(\phi_i) \\ z_{i,\text{foot}} &= \begin{cases} -h + g_c\sin(\theta_i) & \text{if } \sin(\theta_i) > 0 \\ -h + g_p\sin(\theta_i) & \text{otherwise} \end{cases} \end{align} \noindent where $d_{step}$ is the maximum step length, $h$ is the robot height, $g_c$ is the max ground clearance during swing, and $g_p$ is the max ground penetration during stance. A sample visualization of the foot trajectory for a set of these parameters is shown at left of Figure~\ref{fig:cpg_traj}. These parameters make it possible to specify behaviors that are in general difficult to learn when directly outputting joint commands. For example, specifying a foot swing height would usually necessitate keeping track of a history of states and exacerbate the temporal credit assignment problem of reinforcement learning. With our framework, we randomly sample $h$, $g_c$, and $g_p$ during training (i.e. the agent has no explicit observation of these parameters) to learn continuous behavior, which then allows the user to specify both a robot height and swing foot ground clearance during deployment. \subsection{Observation Space} \label{sec:obs_space} We consider three observation spaces in this work with varying amounts of proprioceptive sensing: full ($obs_{full}$), medium ($obs_{med}$) and minimal ($obs_{min}$). Our purpose is to investigate how much locomotor performance depends on the richness of the observation space, and to identify which information is sufficient to reach reasonable performance. This investigation is also of interest for neuroscience, to explore which sensory modalities are most important for generating stable gaits.
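To make the action-space mapping of the previous subsection concrete, the per-limb oscillator (Eqs.~\ref{eq:rl_r}--\ref{eq:rl_phi}) and the foot-position equations can be sketched as follows. This is a minimal illustration rather than the training code: the explicit-Euler integration at 1 kHz and $a=150$ follow the text, while the example values of $d_{step}$, $h$, $g_c$, $g_p$ and the interpretation of $\omega$, $\psi$ as frequencies in Hz (hence the $2\pi$ factors) are our assumptions.

```python
import numpy as np

A = 150.0   # convergence factor a (from the text)
DT = 1e-3   # oscillator integration step, 1 kHz (from the text)

def step_oscillators(r, r_dot, theta, phi, mu, omega, psi, dt=DT, a=A):
    """One explicit-Euler step of the four uncoupled oscillators.

    All arguments are arrays of shape (4,), one entry per leg.
    omega and psi are taken in Hz, hence the 2*pi factors (an assumption;
    the paper only states the training limits in Hz).
    """
    r_ddot = a * (a / 4.0 * (mu - r) - r_dot)                   # amplitude dynamics
    r = r + dt * r_dot
    r_dot = r_dot + dt * r_ddot
    theta = (theta + dt * 2.0 * np.pi * omega) % (2.0 * np.pi)  # phase dynamics
    phi = phi + dt * 2.0 * np.pi * psi                          # trajectory orientation
    return r, r_dot, theta, phi

def foot_positions(r, theta, phi, d_step=0.10, h=0.30, g_c=0.05, g_p=0.01):
    """Map CPG states to desired foot positions (example parameter values)."""
    x = -d_step * (r - 1.0) * np.cos(theta) * np.cos(phi)
    y = -d_step * (r - 1.0) * np.cos(theta) * np.sin(phi)
    z = np.where(np.sin(theta) > 0.0,
                 -h + g_c * np.sin(theta),   # swing: ground clearance
                 -h + g_p * np.sin(theta))   # stance: ground penetration
    return np.stack([x, y, z], axis=-1)      # shape (4, 3)
```

In a full controller, the resulting foot positions would be passed through inverse kinematics and tracked with joint PD control, as in Figure~\ref{fig:control_diagram}.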
The CPG states are always in the observation space; unlike the proprioceptive measurements, which are subject to noise from onboard sensors, the CPG states are always known exactly. This provides a source of stability to the method and, we believe, eases the sim-to-real transfer. \noindent \textbf{Full observation:} The full observation consists of velocity commands and measurements reasonably available with proprioceptive sensing, which are becoming standard in DRL approaches. These include the body state (orientation, linear and angular velocities), joint state (positions, velocities), and foot contact booleans. The last action chosen by the policy network and CPG states $\{\bm{r,\dot{r},\theta,\dot{\theta}, (\phi, \dot{\phi})}\}$ are concatenated to the proprioceptive measurements. \noindent \textbf{Medium observation:} The medium observation removes the joint state and last action from the full observation. This observation space is chosen to show that joint information is actually not necessary for omnidirectional locomotion through our method. Other states remain the same (i.e.~velocity commands, body state, foot contact booleans, and CPG states). \noindent \textbf{Minimal observation:} The minimal observation space consists only of foot contact booleans and the CPG states $\{\bm{r,\dot{r},\theta,\dot{\theta}}\}$. This observation space shows that coordination between limbs can be accomplished with very little sensing at all, with the only environmental feedback coming from foot contact booleans. The idea for this space is inspired by the force feedback term in traditional CPGs shown to coordinate transitions between gaits~\cite{owaki2013simple,owaki2017quadruped}, also known as \textit{Tegotae}-based control~\cite{owaki2017minimal}. The importance of contacts and limb loading has also been shown by Ekeberg and Pearson in a simulation of cat locomotion~\cite{ekeberg2005computer}.
For this observation space, the task is only to move forward at a particular desired velocity $v_{b,x}^{*}$. \subsection{Reward Function} \label{sec:reward_function} We design our reward function to track desired body velocity commands in the body frame $x$ and $y$ directions as well as a desired yaw rate $\omega_{b,z}^{*}$. We include terms to minimize other undesired body velocities as well as to penalize work (aiming to keep the body stable and minimize energy consumption). More precisely, the reward function is a summation of the following terms: \begin{itemize} \item linear velocity tracking, body $x$ direction: $f(v_{b,x}^{*} - v_{b,x})$ \item linear velocity tracking, body $y$ direction: $f(v_{b,y}^{*} - v_{b,y})$ \item angular velocity tracking (body yaw rate): $f(\omega_{b,z}^{*} - \omega_{b,z})$ \item linear velocity penalty in body $z$ direction: $-v_{b,z}^2$ \item angular velocity penalty (body roll and pitch rates): $-||\bm{\omega}_{b,xy}||^2$ \item work: $-|\bm{\tau} \cdot (\dot{\bm{q}}^t-\dot{\bm{q}}^{t-1})|$ \end{itemize} \noindent where $(\cdot)^{*}$ represents a desired command, and $f(x) := \exp{(-\frac{||x||^2}{0.25}) } $. These terms are weighted with $0.75 dt$, $0.75 dt$, $0.5dt$, $2dt$, $0.05dt$, $0.001dt$, respectively, where $dt=0.01$ is the control policy time step. Notably, as discussed in Section~\ref{sec:action_space}, we do not need any additional terms on foot swing time. Compared with other learning-based approaches for omnidirectional locomotion, this is a simple set of terms to properly specify the desired behavior.
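The weighted terms above translate directly into code. The sketch below is our own transcription (variable names and array shapes are assumptions), not the authors' implementation:

```python
import numpy as np

DT = 0.01  # control policy time step (from the text)

def tracking(err, scale=0.25):
    """f(x) = exp(-||x||^2 / 0.25), the tracking kernel from the text."""
    return np.exp(-np.sum(np.square(err)) / scale)

def reward(v_b, omega_b, v_cmd, yaw_rate_cmd, tau, dq, dq_prev, dt=DT):
    """Sum of the weighted reward terms listed above.

    v_b: body-frame linear velocity (3,); omega_b: angular velocity (3,)
    v_cmd: desired (vx*, vy*); tau, dq, dq_prev: joint torques/velocities (12,)
    """
    r = 0.0
    r += 0.75 * dt * tracking(v_cmd[0] - v_b[0])          # vx tracking
    r += 0.75 * dt * tracking(v_cmd[1] - v_b[1])          # vy tracking
    r += 0.5  * dt * tracking(yaw_rate_cmd - omega_b[2])  # yaw-rate tracking
    r -= 2.0  * dt * v_b[2] ** 2                          # vz penalty
    r -= 0.05 * dt * np.sum(omega_b[:2] ** 2)             # roll/pitch-rate penalty
    r -= 0.001 * dt * abs(np.dot(tau, dq - dq_prev))      # work penalty
    return r
```

With perfect tracking of zero commands and no motion, only the three tracking terms contribute, giving $(0.75 + 0.75 + 0.5)\,dt = 0.02$ per step.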
\begin{table}[tpb] \centering \footnotesize \caption{PPO Hyperparameters and neural network size.} \vspace{-0.6em} \resizebox{0.71\linewidth}{!}{% \begin{tabular}{ c c } Parameter & Value \\ \hline Batch size & 98304 (4096x24) \\ Mini-batch size & 24576 (4096x6)\\ Number of epochs & 5\\ Clip range & 0.2\\ Entropy coefficient & 0.01\\ Discount factor & 0.99\\ GAE discount factor & 0.95\\ Desired KL-divergence $kl^*$ & 0.01\\ Learning rate $\alpha$ & adaptive\\ Number of hidden layers (all networks) & 3 \\ Number of hidden units per layer & [512, 256, 128] \\ Activation & elu \\ \hline \end{tabular} }\\ \label{table:ppo} \vspace{-2.5em} \end{table} \subsection{Training Details} \label{sec:training_details} We use Isaac Gym~\cite{isaacgym,rudin2022anymalisaac} with PhysX as our physics engine and training environment, and the Unitree A1 quadruped~\cite{unitreeA1}. This framework allows for high throughput, where we simulate 4096 A1s in parallel on a single NVIDIA RTX 3070 GPU. We use PPO~\cite{ppo} to train the policy, and the relevant hyperparameters and neural network architecture details (multilayer perceptron with 3 hidden layers) are listed in Table~\ref{table:ppo}. With this framework, as in~\cite{rudin2022anymalisaac}, we can learn control policies within minutes. The maximum episode length is 20 seconds, and the environment resets for an agent if the base or a thigh comes in contact with the ground. The terrain is always flat during training. With each reset, we sample new parameters $h$ and $g_c$ for the mapping from oscillator states to joint commands so the agent can learn to locomote at varying body heights and step heights. New velocity commands $\{v_{b,x}^{*},v_{b,y}^{*},\omega_{b,z}^{*}\}$ are sampled every 5 seconds.
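The per-reset and periodic resampling described above could be sketched as follows. The sampling ranges for $h$ and $g_c$ are illustrative assumptions (they span the values later demonstrated in hardware; the actual training ranges are not stated), and the command range follows the omnidirectional experiments in Section~\ref{sec:result}:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative ranges (assumptions): h and g_c span the values demonstrated
# in the hardware experiments; the training ranges are not stated explicitly.
H_RANGE = (0.19, 0.30)    # body height [m]
GC_RANGE = (0.03, 0.12)   # swing foot ground clearance [m]

def on_reset():
    """Sample new mapping parameters h, g_c at each environment reset."""
    return rng.uniform(*H_RANGE), rng.uniform(*GC_RANGE)

def sample_commands():
    """Sample new velocity commands (vx*, vy*, wz*), done every 5 s."""
    return rng.uniform(-1.0, 1.0, size=3)
```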
Although we find that domain randomization is not strictly needed to perform a sim-to-real transfer, unless otherwise specified, we randomize the following parameters during training (kept constant during an episode): \begin{itemize} \item \textbf{ground coefficient of friction} varied in [0.3, 1] \item \textbf{limb mass} varied within 20\% of nominal values \item \textbf{added base mass} up to 5 \texttt{kg} \item \textbf{external push} of up to 0.5 \texttt{m/s} applied in a random direction to the base every 15 seconds \end{itemize} No noise is added to the observations. The control frequency of the policy is 100 Hz, and the torques computed from the desired joint positions are updated at 1 kHz. The equations for each of the oscillators (Eqns.~\ref{eq:rl_r}-\ref{eq:rl_phi}) are thus also integrated at 1 kHz. The joint PD controller gains are $K_p=100,\ K_d=2$. For the sim-to-real transfer, all proprioceptive information is measured from the Unitree sensors. \section{Experimental Results and Discussion} \label{sec:result} In this section we report results from learning locomotion controllers with CPG-RL. We seek to evaluate the necessity of sensory information as defined by the three observation spaces, the difficulty of the sim-to-real transfer, the interpretability of the resulting policy, and the robustness to various disturbances not seen during training. Snapshots of some of our results are shown in Figure~\ref{fig:intro}, and the supplementary video shows clear visualizations of the discussed experiments. \input{tex/exp_results} \input{tex/sim2sim} \section{Conclusion} \label{sec:conclusion} In this work we have presented CPG-RL, a framework for learning and modulating intrinsic oscillator amplitudes and frequencies to coordinate rhythmic behavior among limbs to achieve quadruped locomotion.
Our results have shown that this method provides fast training and ease of sim-to-real transfer, where we demonstrate successful transfers with no domain randomization and only minimal sensory feedback. Additionally, the framework allows the user to easily specify (on the fly and/or in training) desired legged robot quantities like body height and swing foot ground clearance, the latter of which can be challenging to specify through reward shaping in end-to-end learning frameworks. The framework also proved robust to disturbances not seen in training: for example, A1 was able to walk over uneven terrain and with an added mass 2.75x greater than in training (115\% of nominal robot mass) in hardware, and 250\% of the nominal robot mass in sim-to-sim. To the best of our knowledge, this represents the highest robustness against loads so far achieved on the Unitree A1 quadruped. In terms of neuroscience, the use of DRL will allow us in the future to explore questions that are still open in animal motor control, namely the exact roles and interactions of descending pathways, interoscillator couplings within CPG networks, and sensory feedback in gait generation. The results presented here suggest (i) that descending pathways are more effective at modulating locomotion by acting on the CPG circuits rather than directly on muscles (CPG-RL performs better than $Joint$ $PD$), (ii) that stable locomotion can be obtained with non-existent (or weak) interoscillator couplings, and (iii) that sensing contact (or loading) in the limb is one of the most important sources of sensory information. This last point is in agreement with the conclusion of a neuromechanical simulation of cat locomotion~\cite{ekeberg2005computer}. \section*{Acknowledgements} We would like to thank Milad Shafiee and Alessandro Crespi for assisting with hardware setup and experiments.
\bibliographystyle{IEEEtran} \subsection{Sim-to-Real Experimental Results} \subsubsection{CPG State Modulation} \label{sec:result_cpg_state} In the video, we examine how the agent trained with CPG-RL coordinates and modulates the CPG states to produce locomotion. We compare the resulting gait with fixed open-loop CPG gaits (trot, walk, pace) as generated by Equations~\ref{eq:salamander_r} and~\ref{eq:salamander_theta}, tuned for locomotion at particular frequencies. We command the agent to walk forward at 0.8 \texttt{m/s} while adding variable mass up to 13.75 \texttt{kg}. One second of the CPG amplitude and phase plots is shown in Figure~\ref{fig:cpg_r_theta}, where we observe an approximate trot gait with a cycle of approximately 0.5 seconds. The swing duration (when $0 \leq \theta \leq \pi $) can be observed to be shorter than the stance duration, as is typical of quadruped animals. It is additionally apparent that there is coordination between phase and amplitude, the latter of which is not quite as periodic or constant, implying the agent uses it more to adapt to sensory feedback. In general, however, the amplitude appears higher during the stance phase, showing that the agent uses this time to push backward and propel the quadruped forward. This result is also apparent when examining the leg frame foot $XZ$ trajectories, shown in Figure~\ref{fig:cpg_xz}. Notably, the trajectories for both front feet are significantly modulated from the nominal task space trajectory used in the open-loop trot gait in the video, where now the foot is primarily underneath and behind the hip in the leg frame. Compared with the fixed open-loop CPG gaits, CPG-RL produces a continuum of more natural gaits that are less rigid, more efficient, more robust, and lower in frequency.
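The gait statistics discussed here (swing for $0 \leq \theta \leq \pi$, duty factor, cycle duration) can be extracted from a logged phase trace. A minimal sketch of such an analysis, under the stated swing/stance convention:

```python
import numpy as np

def gait_stats(theta_trace, dt=0.01):
    """Duty factor and mean cycle period from one leg's phase trace.

    Swing is taken as sin(theta) > 0 (i.e. 0 < theta < pi, as in the text);
    stance is the remainder. The period is estimated from phase wrap-arounds.
    """
    swing = np.sin(theta_trace) > 0.0
    duty_factor = 1.0 - swing.mean()                     # stance fraction
    wraps = int(np.sum(np.diff(theta_trace) < -np.pi))   # cycle completions
    period = len(theta_trace) * dt / max(wraps, 1)
    return duty_factor, period
```

For a synthetic constant-frequency 2 Hz phase trace, this recovers a duty factor near 0.5 and a period near the 0.5 s cycle observed in Figure~\ref{fig:cpg_r_theta}.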
\begin{figure}[!t] \centering \includegraphics[width=0.7\linewidth]{figs/cpg_states_amplitude.png} \\ \includegraphics[width=0.7\linewidth]{figs/cpg_states_phase.png} \\ \vspace{-0.5em} \caption{CPG states for 1 second of a deployed policy locomoting at 0.8 \texttt{m/s}. An approximate trot gait can be observed, with faster swing time ($0 \leq \theta \leq \pi $) than stance time. Coordination between the amplitude and phase variables can be seen, noticeably increasing amplitude in stance phase to push backward and propel the quadruped forward.} \label{fig:cpg_r_theta} \vspace{-1.2em} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=\imgHalf\linewidth]{figs/xz/xz_FL.png}\hspace{\imgHalfSpace\linewidth}\includegraphics[width=\imgHalf\linewidth]{figs/xz/xz_FR.png} \\ \includegraphics[width=\imgHalf\linewidth]{figs/xz/xz_HL.png}\hspace{\imgHalfSpace\linewidth}\includegraphics[width=\imgHalf\linewidth]{figs/xz/xz_HR.png} \\ \vspace{-0.5em} \caption{Leg frame foot XZ trajectories for a deployed policy locomoting at 0.8 \texttt{m/s} with dynamically added mass. Significant modulations can be seen to the sample task space trajectory represented by the dotted line. Notably, the amplitude for the front feet shifts the trajectory to lie mostly behind the hip. } \label{fig:cpg_xz} \vspace{-2.3em} \end{figure} \subsubsection{Observation Space Effects} \label{sec:result_obs} As can be seen in Figure~\ref{fig:intro} and the video, we achieve robust sim-to-real transfer with and without joint state information (i.e.~using either the full or medium observation spaces, $obs_{full}$ and $obs_{med}$). This result holds for omnidirectional motions at varying velocities for scenarios including uneven terrain, push recovery, and significant added loads encompassing 115\% of the nominal robot mass (see Section~\ref{sec:result_mass}). 
This result is attributable to the presence of the oscillator states in the observation space, and its mapping back to the task space trajectories. We are also able to transfer the CPG-RL $obs_{min}$ policy, for which the only feedback from the environment that the agent observes is through the contact booleans, and is trained without any simulator noise or randomization. While the agent has no direct observation of its body velocity, it can relate the frequencies of the neural oscillator states (which it has direct control over) to the reward it receives at each time step, to track 0.5 \texttt{m/s} in the body $x$ direction while keeping all other velocities 0. This result supports that coordination between limbs is possible through very little sensory feedback \cite{owaki2013simple,owaki2017quadruped,owaki2017minimal}. We also tested training policies without the foot contact feedback, i.e. with only CPG states in the observation space, but the agent was unable to learn to locomote at any fixed velocity. This shows that some sort of feedback is necessary to coordinate locomotion. \subsubsection{Body Height and Swing Foot Height Online Modulation} As discussed in Section~\ref{sec:action_space}, our framework naturally allows the user to specify the body height and foot ground clearance through setting parameters $h$ and $g_c$ for the mapping from oscillator states to desired foot positions. The agent has no explicit knowledge of these parameters, but in the full observation space it can find a relationship from observing the direct effects on the joint positions. Once training is complete a user can change either of these parameters in real time, as the agent continues to locomote at its velocity commands. In Figure~\ref{fig:intro} and in the video we show an example of lowering the body height $h$ from its nominal height of 0.30 \texttt{m} to 0.19 \texttt{m} in order to crawl underneath a ledge, and then increasing it back to 0.30 \texttt{m} on the other side. 
We also test varying the foot ground clearance $g_c$ online, changing it from 0.03 \texttt{m} to 0.12 \texttt{m} (almost half of the nominal robot height), and then back down to 0.03 \texttt{m}. \subsubsection{Uneven Terrain} \label{sec:result_terrain} As discussed in Section~\ref{sec:training_details}, we train our policies only on flat terrain, though the coefficient of friction is varied. Notably, the rigid body dynamics used in simulation are not exactly representative of the hardware, as the A1 feet undergo significant deformations on contact, in addition to other sim-to-real considerations such as the lack of motor modeling. We tested adding debris such as soft foam and hard styrofoam (see Figure~\ref{fig:intro} and video) and found the policy to be robust to such terrain. These materials are very light and easy for A1 to kick and crumple, and even when the material got stuck or rolled up, causing swing feet to catch and impeding progress, the robot did not fall. \subsubsection{Added Mass} \label{sec:result_mass} We also tested adding varying mass to the robot while it was trotting at various velocities (0.3-0.8 \texttt{m/s}). In all cases there was no noticeable drop in performance, which can be attributed to both the robustness of the method as well as the high joint gains we are able to use. We achieved trotting at 0.8 \texttt{m/s} with 13.75 \texttt{kg} (115\% of nominal robot mass) dynamically added to the robot, which was all of the available mass we had in the lab. To the best of our knowledge, this represents the highest robustness against loads achieved on A1. Other comparisons include RMA~\cite{kumar2021rma} achieving up to a 12 \texttt{kg} load at lower velocity, a model-based method achieving 11 \texttt{kg} while standing~\cite{sombolestan2021adaptive}, and the default Unitree model-based controller, which has a maximum rating of 5 \texttt{kg}~\cite{unitreeA1}.
Remarkably, we used the domain randomization explained in Section~\ref{sec:training_details}, which only added up to 5 \texttt{kg} in simulation, again showing significant robustness to out of distribution disturbances. Additionally, as we achieve the same results with both the medium and full observation spaces, we show that joint state information is actually not necessary for dynamic locomotion with our method. \subsection{Sim-to-Sim Comparison with Learning in Joint Space} \label{sec:result_comparison} To evaluate the benefits of our approach, we compare CPG-RL with a standard joint space training pipeline, which takes as input proprioceptive measurements, and outputs joint space offsets from a nominal resting position using the same method from~\cite{rudin2022anymalisaac}, which we call $Joint$ $PD$. The observation space contains all proprioceptive sensing used in the full observation space by CPG-RL, without the oscillator states. All training hyperparameters, environment details, and neural network architectures remain the same. To train the $Joint$ $PD$ baseline, we compare using both (a) the same reward function as described in Section~\ref{sec:reward_function}, as well as (b) the more complex \textit{special} reward function from~\cite{rudin2022anymalisaac}, which additionally includes terms for orientation, joint acceleration, joint velocity, joint torque, action rate, collisions, feet air time, and base height. The video shows training curves for learning locomotion policies with CPG-RL as well as with the $Joint$ $PD$ baseline trained with both reward functions (a) and (b). While all three methods provide similar returns as training progresses, the policies trained with CPG-RL produce the most natural looking gaits, are easiest to interpret, and allow the user to set swing foot ground clearance. 
In contrast, $Joint$ $PD$ policies trained with the same reward function (a) result in unnatural gaits that overfit the simulator dynamics (shown in the video), and are unlikely to transfer well in sim-to-real. $Joint$ $PD$ policies trained with the more complex \textit{special} reward function (b) can achieve more natural gaits, however tracking accuracy is lower (Tables~\ref{table:vx_comp} and~\ref{table:omni_comp_v2}), and it is not possible to directly control the swing foot ground clearance. \input{tex/sim2sim_tables} Quantitatively, we evaluate and compare the performance of CPG-RL as well as $Joint$ $PD$ policies~\cite{rudin2022anymalisaac} in a sim-to-sim transfer from Isaac Gym with the PhysX physics engine to Gazebo with the ODE physics engine. While a successful sim-to-sim transfer does not necessarily guarantee a successful sim-to-real transfer, sim-to-sim allows safely testing many policies without risking damaging the hardware, as used in recent work to verify the agent has not overfit the training simulator's dynamics before sim-to-real transfers~\cite{bellegarda2021robust,chen2022learning}. While there is no noise added to the observation space in Isaac Gym which uses ground truth data, in Gazebo we simulate the state estimation (i.e. Kalman Filter) as used on the real hardware. For each method (CPG-RL or $Joint$ $PD$~\cite{rudin2022anymalisaac}) and observation space ($obs_{full}$, $obs_{med}$, $obs_{min}$), we also compare performance when training with and without randomization as described in Section~\ref{sec:training_details}. \subsubsection{Sim-to-sim Forward Locomotion} We first train policies (at least 10) for each method and observation space with the goal of tracking only forward velocities in the body $x$ direction, namely $v_{b,x}^{*} \in [0.2, 1]$ (\texttt{m/s}). 
Table~\ref{table:vx_comp} shows sim-to-sim tracking performance of desired command $v_{b,x}^{*} = 0.5$ (\texttt{m/s}), where we present the mean velocity, duty factor, gait period, as well as body height and foot ground clearance. The best tracking performance is for CPG-RL with both full and medium observation spaces. The mean duty factor and gait period are much higher for policies trained with CPG-RL, suggesting greater stability. Compared with $Joint$ $PD$ policies which learn to locomote at a fixed body height and ground clearance, all CPG-RL policies can vary height and foot ground clearance online. \subsubsection{Sim-to-sim with Added Loads} We take the same policies and repeat the transfers while adding loads to the robot and still commanding $v_{b,x}^{*} = 0.5$ (\texttt{m/s}). Figure~\ref{fig:mass_comp} shows the mean velocity tracking performance for added masses from 0 to 30 \texttt{kg}, in increments of 3 \texttt{kg}. All policies, whether trained with noise (i.e. up to 5 \texttt{kg} in Isaac Gym) or without, are able to make at least some forward progress with loads up to 9 \texttt{kg}. The points labeled with $*$s indicate that performance is not guaranteed: the robot either stops or falls down, with increasing probability for higher loads. The dashed lines show the performance of policies trained without any noise in Isaac Gym, which notably still allows all policies trained with CPG-RL to locomote with 15 \texttt{kg}, even for the minimal observation space $obs_{min}$. Interestingly, the policies trained with CPG-RL and $obs_{med}$ perform better than policies trained with $obs_{full}$. Under higher disturbances, the joint states are an additional source of noise and may take value combinations unseen in training, which may shift the expected distribution the agent has learned to map between observations and actions. 
The results show that CPG-RL allows sim-to-sim transfer with loads representing 250\% of the nominal robot mass, while trained with noise of only up to 42\% of the nominal robot mass. \subsubsection{Sim-to-sim Omnidirectional Commands} We next train policies (at least 10) for each method and observation space to track omnidirectional commands in the following ranges: $v_{b,x}^{*} \in [-1, 1]$ (\texttt{m/s}), $v_{b,y}^{*} \in [-1, 1]$ (\texttt{m/s}), $\omega_{b,z}^{*} \in [-1, 1]$ (\texttt{rad/s}). Table~\ref{table:omni_comp_v2} shows sim-to-sim tracking performance of various commanded omnidirectional velocities within these ranges. The data shows that CPG-RL policies can closely track desired forward, lateral, and angular velocities, as well as combinations of these, even when trained without noise. In contrast, we note that in addition to not tracking the commands as accurately, transferring the omnidirectional $Joint$ $PD$ policies comes with several added difficulties compared to CPG-RL. As the training curve converges, there is a very small window (can be fewer than 50 iterations) for which the $Joint$ $PD$ policies are able to transfer sim-to-sim, which is also not consistent across different random seeds. If training continues, while the average return does not increase, the resulting policies become more and more unnatural as the agent exploits the simulator dynamics (we observe tiny steps with the front limbs, and both rear limbs in the air), and are unable to transfer sim-to-sim. We also observe that training $Joint$ $PD$ policies requires much more tuning and design decisions, including reward function tuning, dynamics randomization parameter tuning, possible motor modeling, etc. which overall results in longer training times compared with CPG-RL. 
\subsubsection{Joint PD Observation Spaces} While it is possible to learn omnidirectional locomotion with $Joint$ $PD$ and $obs_{med}$, such policies are unable to transfer: lacking feedback from the joint states, they learn high-frequency small-step gaits that exploit the simulator dynamics. Training in joint space with $obs_{min}$ fails to learn any locomotion policy.
\section{Introduction} The uncertainty relation is an important feature in quantum physics and its comprehension requires continual refinement \cite{heisenberg,ozawa,ozawa2,ozawa3,ozawa4}. A similar relation was recently proposed in classical viscous hydrodynamics \cite{koide18,koide_review20}. The derived relations describe the uncertainty associated with position and momentum of a fluid element. Unlike in the quantum-mechanical relations, the finite minimum uncertainty is induced by thermal fluctuations. Nevertheless, the hydrodynamic relations have the same structure as the quantum-mechanical ones. When we apply the hydrodynamic uncertainty relations to Madelung's hydrodynamic representation of the Schr\"{o}dinger equation, the well-known Kennard and Robertson-Schr\"{o}dinger inequalities in quantum mechanics are reproduced. However, the minimum uncertainty state for these viscous uncertainty relations has not been derived. In this paper, we consider a general fluid described by the following differential equation, \begin{eqnarray} && (\partial_t + {\bf v} \cdot \nabla ) v^{i} = -\frac{1}{\uM} \partial_i V + 2\kappa \partial_i \frac{\nabla^2 \sqrt{\rho}}{\sqrt{\rho}} \nonumber \\ && - \frac{1}{\uM \rho } \partial_i \left\{ P -\left( \mu + \frac{\eta}{D} \right) \left( \nabla \cdot {\bf v} \right) \right\} + \frac{1}{\uM \rho}\sum_{j=1}^D \partial_j \left( \eta E^{ij} \right) \, , \label{eqn:nsk} \end{eqnarray} where ${\bf v}$, $V$, $P$, $\eta$ and $\mu$ are the velocity field, the external potential, the pressure, the shear viscosity and the second coefficient of viscosity, respectively. The number of spatial dimensions is denoted by $D$. The traceless symmetric stress tensor is defined by \begin{equation} E^{ij} = \frac{1}{2} \left( \partial_i v^j + \partial_j v^i \right) - \frac{1}{D} \left( \nabla \cdot {\bf v} \right) \delta_{ij} \, . \nonumber \end{equation} Normally, hydrodynamics is described using the mass distribution.
For the sake of comparison with quantum mechanics, however, we use the distribution of constituent particles of the fluid $\rho$, which is normalized by the number of constituent particles $N$. The mass distribution is given by $\uM \rho$ with $\uM$ being the mass of constituent particles of a simple fluid. This equation is reduced to the Navier-Stokes-Fourier (NSF) equation when the last term on the first line is dropped. We call this additional term the $\kappa$ term. Equation (\ref{eqn:nsk}) appears in at least three applications of hydrodynamics. Korteweg considered that the behavior of liquid-vapor fluids near phase transitions is described by a generalized fluid equation. This is called the Navier-Stokes-Korteweg (NSK) equation and Eq.\ (\ref{eqn:nsk}) is a special case of the NSK equation. Then the $\kappa$ term describes the capillary action \cite{korteweg}. Brenner pointed out that, since the velocity of a tracer particle of a fluid is not necessarily parallel to the mass velocity, the existence of these two velocities should be taken into account in the formulation of hydrodynamics. This theory is called bivelocity hydrodynamics \cite{koide18,koide_review20,brenner,gustavo,dadzie} and the NSK equation is understood to be one of its variants. Lastly, the $\kappa$ term is equivalent to the gradient of the so-called quantum potential. Indeed the NSK equation becomes Madelung's hydrodynamics when we choose $\kappa = \hbar^2/(4\uM^2)$ in the vanishing viscosity limit. In addition, Eq.\ (\ref{eqn:nsk}) is sometimes used as a model of a quantum viscous fluid \cite{brull2010,bresch19}. The purpose of this paper is to derive the minimum uncertainty state of the fluid described by the NSK equation. As will be seen later, Eq.\ (\ref{eqn:nsk}) is formulated in the framework of a generalized variational principle, the stochastic variational method (SVM) \cite{yasue,zambrini,koide18,koide_review20,koide12,koide-review1,koide-review2,koide19,koide20-1}. 
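As a quick consistency check (our own illustration, not part of the original text), the tracelessness of the tensor $E^{ij}$ defined above can be verified symbolically for an arbitrary smooth velocity field; the two-dimensional sample field below is chosen arbitrarily.

```python
import sympy as sp

# Illustrative check (assumed setup, D = 2): the tensor
# E^{ij} = (d_i v^j + d_j v^i)/2 - (div v / D) delta_ij
# is traceless by construction for any smooth velocity field.
x, y = sp.symbols('x y')
coords = [x, y]
v = [sp.sin(x) * sp.cosh(y), sp.exp(x) * y**2]  # arbitrary smooth field
D = len(coords)
div_v = sum(sp.diff(v[k], coords[k]) for k in range(D))
E = [[sp.Rational(1, 2) * (sp.diff(v[j], coords[i]) + sp.diff(v[i], coords[j]))
      - (div_v / D if i == j else 0)
      for j in range(D)] for i in range(D)]
trace_E = sp.simplify(sum(E[i][i] for i in range(D)))
print(trace_E)  # 0
```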
Then the uncertainty relation of the NSK fluid is derived by applying the method developed in Refs.\ \cite{koide18,koide20-1,koide_review20}. We show that the minimum uncertainty state of the derived uncertainty relation is given by a generalized coherent state. We further find that this minimum uncertainty is controlled by the shear viscosity and can be smaller than the inviscid minimum value for sufficiently weak viscosity. This uncertainty reflects the information of the fluctuating microscopic degrees of freedom in the fluid and will modify the standard hydrodynamic scenario, for example, in heavy-ion collisions. This paper is organized as follows. In Sec.\ \ref{sec:svm}, the NSK equation is formulated in the framework of the stochastic variational method \cite{yasue,zambrini,koide18,koide_review20,koide12,koide-review1,koide-review2,koide19,koide20-1}. In Sec.\ \ref{sec:ucr}, the uncertainty relation is derived by applying the method in Refs.\ \cite{koide18,koide_review20}. In Sec.\ \ref{sec:mus}, we derive the minimum uncertainty state of the NSK equation and study its properties. Concluding remarks and the possible influence on heavy-ion collision physics are discussed in Sec.\ \ref{sec:conclusion}. \section{Stochastic variational method} \label{sec:svm} To define the uncertainty relation in fluids, we formulate Eq.\ (\ref{eqn:nsk}) in SVM \cite{yasue,zambrini,koide18,koide_review20,koide12,koide-review1,koide-review2,koide19,koide20-1}. For a similar but different approach, see Ref.\ \cite{kuipers}. As is well-known, the behavior of a fluid can be described by the ensemble of fluid elements. We thus consider the variation of the trajectory of a fluid element in SVM. A fluid element is an abstract volume element with a fixed mass and the constituent particles inside of it are assumed to be thermally equilibrated. For the sake of simplicity, however, we identify a fluid element with a constituent particle in the following discussion. 
See Ref.\ \cite{koide_review20} for more details on the uncertainty relation for fluid elements. In SVM, the viscous and $\kappa$ terms are induced through the fluctuations of constituent particles (fluid elements). Then the trajectory of a constituent particle is supposed to be given by the forward stochastic differential equation (SDE), \begin{eqnarray} \ud \widehat{\bf r}(t) = {\bf u}_+ (\widehat{\bf r}(t), t) \ud t + \sqrt{2\nu} \ud\widehat{\bf W}(t) \, \, \, (\ud t > 0) \, . \nonumber \end{eqnarray} The second term on the right-hand side represents the noise of Brownian motion. We used $(\widehat{\,\,\,\,})$ to denote stochastic variables and $\ud \widehat{A}(t) = \widehat{A}(t+\ud t) - \widehat{A}(t)$ for an arbitrary $\widehat{A}(t)$. The standard Wiener process is described by $\widehat{\bf W}(t)$, which satisfies \begin{equation} \begin{split} \uE[ \ud\widehat{\bf W}(t) ] = 0\, , \, \, \, \, \uE [ \ud\widehat{W}^{i} (t) \ud\widehat{W}^{j} (t^\prime) ] = |\ud t| \, \delta_{t \,t^\prime} \, \delta_{ij} \, , \end{split} \label{eqn:wiener} \end{equation} where $\uE[\, \, \, \,]$ denotes the ensemble average for the Wiener process. Note that ${\bf u}_+ (\widehat{\bf r}(t),t)$ is stochastic because of $\widehat{\bf r}(t)$, but ${\bf u}_+ ({\bf x},t)$ is a smooth function. The field ${\bf u}_+ ({\bf x},t)$ is associated with the velocity of constituent particles. The purpose of SVM is to determine its form by applying the variational principle. The noise intensity $\nu$ controls the stochasticity of the trajectory. In the derivation of Madelung's hydrodynamics, $\nu$ is given by a function of the Planck constant \cite{yasue}. In the derivation of the NSF equation, however, $\nu$ characterizes the intensity of thermal fluctuations and thus is a function of temperature \cite{koide12}. In this work, we consider that $\nu$ is a general function of the Planck constant and temperature. 
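The statistics of the Wiener increments in Eq.\ (\ref{eqn:wiener}) can be illustrated with a minimal Monte-Carlo sketch (our own, with arbitrary parameter values, not the paper's code): for ${\bf u}_+ = 0$ the forward SDE describes free Brownian motion in one dimension, whose variance grows as $2\nu t$.

```python
import numpy as np

# Monte-Carlo sketch (illustrative parameters) of the forward SDE
#   dr = u_+ dt + sqrt(2 nu) dW,  with u_+ = 0 (free Brownian particle).
rng = np.random.default_rng(0)
nu, dt, n_paths, n_steps = 0.5, 1e-3, 20000, 100
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))  # E[dW]=0, E[dW^2]=dt
r = np.sqrt(2 * nu) * dW.sum(axis=1)  # displacement r(t), starting from r(0) = 0

t = n_steps * dt
print(r.mean())             # close to 0 within Monte-Carlo error
print(r.var(), 2 * nu * t)  # sample variance vs. 2*nu*t = 0.1
```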
The standard definition of velocity is not applicable to stochastic trajectories because the left- and right-hand limits of the slope of a stochastic trajectory do not agree. To distinguish this difference, we consider the backward time evolution of the trajectory described by the backward SDE, \begin{eqnarray} \ud \widehat{\bf r}(t) = {\bf u}_- (\widehat{\bf r}(t), t) \ud t + \sqrt{2\nu} \ud \underline{\widehat{\bf W}}(t) \, \, \, (\ud t < 0) \, , \nonumber \end{eqnarray} where $\ud \underline{\widehat{\bf W}}(t)$ satisfies the same correlation properties as Eq.\ (\ref{eqn:wiener}) using $|\ud t | = - \ud t$. The field ${\bf u}_- ({\bf x},t)$ is associated with the velocity backward in time. Because of this ambiguity of velocity, Nelson introduced two different time derivatives \cite{nelson}: one is the mean forward derivative $\uD_+$ and the other the mean backward derivative $\uD_-$, which are defined by \begin{equation} \uD_\pm \widehat{\bf r}(t) = \lim_{\ud t \rightarrow0\pm} \uE \left[ \frac{ \widehat{\bf r}({t + \ud t}) - \widehat{\bf r}(t)}{\ud t} \Big| \widehat{\bf r}(t) \right] = {\bf u}_\pm (\widehat{\bf r}(t), t) \, . \label{eqn:mfd} \end{equation} Here the expectation value is the conditional average for fixed $\widehat{\bf r}(t)$ and we used that $\widehat{\bf r}(t)$ is Markovian. When these act on a function of $\widehat{\bf r}(t)$, we find \begin{eqnarray} \uD_\pm f(\widehat{\bf r}(t),t) \Big|_{\widehat{\bf r}(t) = {\bf x}} \hspace{-0.1cm}= \left\{ \partial_t + {\bf u}_\pm ({\bf x},t) \cdot \nabla \pm \nu \nabla^2 \right\} f({\bf x},t) \, , \label{eqn:ito} \end{eqnarray} where $f({\bf x},t)$ is an arbitrary smooth function and we used Ito's lemma \cite{koide_review20}. That is, $\uD_+$ and $\uD_-$ correspond to material derivatives along the stochastic trajectories described by the forward and backward SDE's, respectively. 
These derivatives satisfy the following relation, \begin{eqnarray} \lefteqn{\int^{t_b}_{t_a} \ud t \, \uE \left[ \widehat{B}(t) \uD_+ \widehat{A}(t) + \widehat{A}(t) \uD_- \widehat{B}(t) \right]} && \nonumber \\ &&= \uE \left[ \widehat{A}(t_b) \widehat{B}(t_b) - \widehat{A}(t_a) \widehat{B}(t_a) \right] \, . \nonumber \end{eqnarray} This corresponds to the stochastic generalization of integration by parts \cite{koide_review20}. The particle distribution is defined by \begin{eqnarray} \rho ({\bf x},t) = \int \ud^D {\bf R} \, \rho_0 ({\bf R}) \uE[\delta({\bf x} - \widehat{\bf r}(t))] \, , \nonumber \end{eqnarray} where ${\bf R} $ denotes the initial position of the constituent particles and its distribution is characterized by $\rho_0 ({\bf R})$. Applying the forward and backward SDE's to this definition, two Fokker-Planck equations are obtained, \begin{eqnarray} \partial_t \rho({\bf x},t) &=& -\nabla \cdot \left\{ {\bf u}_+ ({\bf x},t) \rho({\bf x},t) \right\}+ \nu \nabla^2 \rho({\bf x},t) \, , \label{eqn:ffp}\\ \partial_t \rho({\bf x},t) &=& -\nabla \cdot \left\{ {\bf u}_- ({\bf x},t) \rho({\bf x},t) \right\}- \nu \nabla^2 \rho({\bf x},t) \, . \label{eqn:bfp} \end{eqnarray} The first and second equations are obtained using the forward and backward SDE's, respectively. The different sign in the second terms on the right-hand sides is due to $|\ud t|$ in the correlation function of the Wiener process (\ref{eqn:wiener}). That is, the second term of Eq.\ (\ref{eqn:ffp}) represents the diffusion effect induced by the noise term, but the corresponding term in Eq.\ (\ref{eqn:bfp}) gives the accumulation effect. These equations should be equivalent. To conform Eq.\ (\ref{eqn:bfp}) to Eq.\ (\ref{eqn:ffp}), ${\bf u}_- ({\bf x},t)$ should be chosen to satisfy the consistency condition, \begin{eqnarray} {\bf u}_+ ({\bf x},t) = {\bf u}_- ({\bf x},t) + 2\nu \nabla \ln \rho ({\bf x},t) \label{eqn:cc} \, . \end{eqnarray} See Ref.\ \cite{koide_review20} for details. 
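The consistency condition (\ref{eqn:cc}) can be verified symbolically in one dimension (an illustrative sketch of ours, not the paper's code): substituting ${\bf u}_- = {\bf u}_+ - 2\nu \nabla \ln \rho$ makes the right-hand sides of the two Fokker-Planck equations identical.

```python
import sympy as sp

# 1-D symbolic check: with u_- = u_+ - 2 nu d_x ln(rho), the forward and
# backward Fokker-Planck equations have identical right-hand sides.
x = sp.Symbol('x')
nu = sp.Symbol('nu', positive=True)
rho = sp.Function('rho', positive=True)(x)
u_plus = sp.Function('u')(x)                         # arbitrary u_+(x)
u_minus = u_plus - 2 * nu * sp.diff(sp.log(rho), x)  # consistency condition
rhs_forward = -sp.diff(u_plus * rho, x) + nu * sp.diff(rho, x, 2)
rhs_backward = -sp.diff(u_minus * rho, x) - nu * sp.diff(rho, x, 2)
print(sp.simplify(rhs_forward - rhs_backward))  # 0
```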
It is also noteworthy that a similar condition plays an important role in bivelocity hydrodynamics \cite{koide18,koide_review20,brenner,gustavo,dadzie}. Let us consider the classical Lagrangian, \begin{eqnarray} L_{cla} ({\bf r}, \ud {\bf r}/\ud t ) =\frac{\uM}{2} \left( \frac{\ud {\bf r}}{\ud t} \right)^2 - V - \frac{\varepsilon}{\rho} \, , \label{eqn:cla-lag} \end{eqnarray} where $\varepsilon$ is an internal energy density given by a function of the particle distribution and the entropy density. Applying the classical variation, this Lagrangian gives ideal-fluid dynamics (Euler equation) \cite{koide_review20}. As mentioned before, the viscous and $\kappa$ terms are induced through the fluctuating trajectory in SVM and hence the NSK equation (\ref{eqn:nsk}) is obtained by applying SVM to this Lagrangian. To find the corresponding stochastic Lagrangian, we have to replace $\ud/\ud t$ with $\uD_+$ and $\uD_-$ in Eq.\ (\ref{eqn:cla-lag}). Due to this ambiguity in the replacement, we introduce two real parameters $\alpha_A$ and $\alpha_B$. Then the stochastic Lagrangian is defined by \begin{eqnarray} \lefteqn{L_{sto} (\widehat{\bf r},\uD_+ \widehat{\bf r}, \uD_- \widehat{\bf r}) } && \nonumber \\ && = \frac{\uM}{2} (\uD_+ \widehat{\bf r}, \uD_- \widehat{\bf r}) {\cal M} \left( \begin{array}{c} \uD_+ \widehat{\bf r} \\ \uD_- \widehat{\bf r} \end{array} \right) - V - \frac{\varepsilon}{\rho} \, , \label{eqn:sto-lag} \end{eqnarray} with \begin{eqnarray} {\cal M} = \left( \begin{array}{cc} \left( \frac{1}{2} + \alpha_A \right)\left( \frac{1}{2} + \alpha_B \right) & \frac{1}{4} - \frac{\alpha_B}{2} \\ \frac{1}{4} - \frac{\alpha_B}{2} & \left( \frac{1}{2} - \alpha_A \right)\left( \frac{1}{2} + \alpha_B \right) \end{array} \right) \, . \nonumber \end{eqnarray} See the discussion in Sec.\ 4.1 in Ref.\ \cite{koide_review20} for details. 
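As a small symbolic check (our own illustration), the entries of ${\cal M}$ sum to unity for any $\alpha_A$ and $\alpha_B$, which is what guarantees that the kinetic term reduces to the classical $(\uM/2)(\ud {\bf r}/\ud t)^2$ when $\uD_+ \widehat{\bf r}$ and $\uD_- \widehat{\bf r}$ coincide.

```python
import sympy as sp

# The entries of the matrix M in the stochastic Lagrangian sum to 1 for
# any alpha_A, alpha_B, so the classical kinetic term is recovered when
# the forward and backward derivatives of r coincide (nu -> 0).
aA, aB = sp.symbols('alpha_A alpha_B', real=True)
half, quarter = sp.Rational(1, 2), sp.Rational(1, 4)
M = sp.Matrix([
    [(half + aA) * (half + aB), quarter - aB / 2],
    [quarter - aB / 2, (half - aA) * (half + aB)],
])
total = sp.simplify(sum(M))  # sum of all four entries
print(total)  # 1
```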
In the vanishing limit of $\nu$, $\uD_\pm$ coincide with $\ud/\ud t$ and then the stochastic Lagrangian (\ref{eqn:sto-lag}) is reduced to the corresponding classical one (\ref{eqn:cla-lag}) independently of $\alpha_A$ and $\alpha_B$. The parameters $\alpha_A$ and $\alpha_B$ are absorbed into the definitions of $\kappa$ and $\eta$ as shown later in Eq.\ (\ref{eqn:coeff}). In the classical variation, a trajectory is entirely determined for a given velocity. This is however not the case with SVM due to the noise terms in the two SDE's. Therefore only the averaged behavior of the stochastic Lagrangian is optimized by variation. The action is then defined by the expectation value, \begin{eqnarray} I_{sto} [\widehat{\bf r}] = \int^{t_f}_{t_i} \ud t \, \uE[L_{sto}(\widehat{\bf r},\uD_+ \widehat{\bf r}, \uD_- \widehat{\bf r})]\, , \label{eqn:sto_act} \end{eqnarray} with an initial time $t_i$ and a final time $t_f$. Here, the initial distribution of constituent particles $\rho_0 ({\bf R})$ is omitted but it does not affect the result of the stochastic variation. See, for example, Eq.\ (116) in Ref.\ \cite{koide_review20}. The variation of the stochastic trajectory is defined by $\widehat{\bf r}(t) \longrightarrow \widehat{\bf r}^\prime (t) = \widehat{\bf r}(t) + \delta {\bf f} (\widehat{\bf r}(t),t)$, where an infinitesimal smooth function $\delta {\bf f} ({\bf x},t)$ satisfies $\delta {\bf f}({\bf x},t_i) = \delta {\bf f}({\bf x},t_f) = 0$. We further define the fluid velocity field by \begin{eqnarray} {\bf v} ({\bf x},t) = \frac{{\bf u}_+ ({\bf x},t) + {\bf u}_- ({\bf x},t)}{2} \, . \label{eqn:def_v} \end{eqnarray} Then the stochastic variation of Eq.\ (\ref{eqn:sto_act}) leads to \begin{eqnarray} \left[ \frac{\uD_- {\bf p}_+ + \uD_+ {\bf p}_-}{2} = - \nabla V - \frac{1}{ \rho } \nabla \left\{ P - \mu (\nabla \cdot {\bf v} ) \right\} \right]_{\widehat{\bf r}(t) = {\bf x}} \, . 
\label{eqn:qvh_nn} \end{eqnarray} Here, $\mu$ is obtained through the variation of the entropy density. See Sec. 5.1 in Ref.\ \cite{koide12} for details. To obtain $P$, $\varepsilon$ is assumed to satisfy the local thermal equilibrium in the variation of Eq.\ (\ref{eqn:sto_act}). See the discussion around Eq.\ (106) in Ref.\ \cite{koide_review20}. The $\varepsilon$ and potential terms, however, do not affect the definitions of the two momenta, which are introduced through the Legendre transformation of the stochastic Lagrangian, \begin{eqnarray} {\bf p}_\pm ({\bf x},t ) = \left. 2\frac{\partial L_{sto}}{\partial (\uD_\pm \widehat{\bf r})} \right|_{\widehat{\bf r} = {\bf x}} \, . \label{eqn:twomom} \end{eqnarray} Here the factors $2$ in the definitions of ${\bf p}_\pm ({\bf x},t )$ are introduced for a convention to reproduce the classical result in the vanishing limit of $\nu$ \cite{koide18}. Note that the operations of $\uD_\pm$ to ${\bf p}_\mp ({\bf x},t )$ are calculated using Eq.\ (\ref{eqn:ito}). Then it is straightforward to show that Eq.\ (\ref{eqn:qvh_nn}) reproduces the NSK equation (\ref{eqn:nsk}) with the identification, \begin{eqnarray} \begin{split} \kappa = 2 \alpha_B \nu^2 \, , \, \, \, \, \eta = 2\alpha_A (1 + 2 \alpha_B) \nu \uM \rho \, . \end{split} \label{eqn:coeff} \end{eqnarray} Using ${\bf v} ({\bf x},t)$ defined by Eq.\ (\ref{eqn:def_v}), the two Fokker-Planck equations (\ref{eqn:ffp}) and (\ref{eqn:bfp}) are simplified and the equation of continuity is obtained, \begin{eqnarray} \partial_t \rho ({\bf x},t) + \nabla \cdot ( \rho({\bf x},t) {\bf v}({\bf x},t) ) =0 \, . \nonumber \end{eqnarray} It is important to note that the NSK equation (\ref{eqn:nsk}) reproduces not only the Schr\"{o}dinger equation but also the Gross-Pitaevskii equation when we choose the internal energy density $\varepsilon$ and the parameters in the stochastic Lagrangian $(\alpha_A,\alpha_B)$ appropriately. 
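The identification (\ref{eqn:coeff}) can be checked against the Madelung limit quoted in the introduction (a sketch of ours): the choice $\alpha_A = 0$, $\alpha_B = 1/2$, $\nu = \hbar/(2\uM)$ yields $\kappa = \hbar^2/(4\uM^2)$ and vanishing shear viscosity.

```python
import sympy as sp

# Check of the identification kappa = 2 alpha_B nu^2 and
# eta = 2 alpha_A (1 + 2 alpha_B) nu M rho in the Madelung limit.
aA, aB, nu, hbar, M, rho = sp.symbols('alpha_A alpha_B nu hbar M rho', positive=True)
kappa = 2 * aB * nu**2
eta = 2 * aA * (1 + 2 * aB) * nu * M * rho
madelung = {aA: 0, aB: sp.Rational(1, 2), nu: hbar / (2 * M)}
print(sp.simplify(kappa.subs(madelung) - hbar**2 / (4 * M**2)))  # 0
print(eta.subs(madelung))                                        # 0
```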
See the discussion in Refs.\ \cite{koide18,koide_review20,koide12} for details. \section{Uncertainty relations} \label{sec:ucr} The emergence of the two momenta is attributed to the non-differentiability of the stochastic trajectory. As seen in Eq.\ (\ref{eqn:qvh_nn}), ${\bf p}_\pm ({\bf x},t )$ contribute to our equation of motion on an equal footing. Therefore it is natural to define the standard deviation of momentum by the average of the two contributions, ${\bf p}_+ ({\bf x},t )$ and ${\bf p}_- ({\bf x},t )$. We define the standard deviations of position and momentum. The former is given by \begin{eqnarray} \sigma^{(2)}_{x^{i}} = \lceil (\delta {x}^{i} )^2 \rfloor \, , \nonumber \end{eqnarray} where $\delta {f} = f ({\bf x},t) - \lceil {f} \, \rfloor$ and we introduced the following expectation value, \begin{eqnarray} \lceil f \, \rfloor = \frac{1}{N}\int \ud^D {\bf x} \, \rho ({\bf x},t) f ({\bf x},t) \, , \label{eqn:exp} \end{eqnarray} with $N$ being the number of constituent particles. As discussed above, the latter is given by the average, \begin{eqnarray} \sigma^{(2)}_{p^{i}} &=& \frac{\lceil (\delta {p}^{i}_+ )^2 \rfloor + \lceil (\delta {p}^{i}_- )^2 \rfloor }{2} \nonumber \\ &=& \left\lceil \left( \frac{\delta p^i_- + \delta p^i_+ }{2} \right)^2 \right\rfloor + \left\lceil \left( \frac{\delta p^i_- - \delta p^i_+ }{2} \right)^2 \right\rfloor \, , \nonumber \end{eqnarray} where \begin{eqnarray} \left( \begin{array}{c} {\bf p}_-({\bf x},t) - {\bf p}_+ ({\bf x},t) \\ {\bf p}_-({\bf x},t) + {\bf p}_+ ({\bf x},t) \end{array} \right) = 2\uM G \left( \begin{array}{c} -\nu \nabla \ln \rho ({\bf x},t) \\ {\bf v} ({\bf x},t) \end{array} \right) . \label{eqn:p-pp+p} \end{eqnarray} The symmetric matrix $G$ is defined by \begin{eqnarray} G = \left( \begin{array}{cc} \frac{\kappa}{\nu^2} & - \frac{\xi}{\nu} \\ - \frac{\xi}{\nu} & 1 \end{array} \right) \, , \nonumber \end{eqnarray} with the kinematic viscosity, \begin{eqnarray} \xi = \frac{\eta}{ 2\uM \rho} \, . 
\nonumber \end{eqnarray} The consistency condition (\ref{eqn:cc}) is used in Eq.\ (\ref{eqn:p-pp+p}). The above definitions of the standard deviations reproduce the corresponding quantum-mechanical quantities as shown later in Eq.\ (\ref{eqn:op-sto}). Using these definitions and the Cauchy-Schwarz inequality $\lceil A^2 \rfloor \lceil B^2 \rfloor \ge (\lceil AB \rfloor)^2$, the product of $\sigma^{(2)}_{x^i}$ and $\sigma^{(2)}_{p^j}$ is shown to satisfy the inequality, \begin{eqnarray} \lefteqn{\sigma^{(2)}_{x^i} \sigma^{(2)}_{p^j} } && \nonumber \\ &\ge& \uM^2 \left[ \frac{\nu^2 \lambda^2_+ \lambda^2_-}{\lambda_+ + \lambda_- - \lambda_+ \lambda_- } \delta_{ij} + \left( \lambda_+ + \lambda_- - \lambda_+ \lambda_- \right) \right. \nonumber \\ && \left. \times \left\{ \lceil \delta x^i \delta v^j \rfloor + \frac{\nu^2}{\xi} \frac{(\lambda_+ + \lambda_-) (1-\lambda_+)(1-\lambda_-)}{\lambda_+ + \lambda_- - \lambda_+ \lambda_-} \delta_{ij} \right\}^2 \right] \nonumber \\ &=& \uM^2 \frac{(\xi^2 - \kappa)^2}{\nu^2 + \xi^2} \delta_{i j} \nonumber \\ && + \uM^2 \left( 1+ \frac{\xi^2}{\nu^2}\right) \left( \lceil \delta x^i \delta v^j \rfloor - \frac{\xi (\nu^2+\kappa)}{\nu^2 + \xi^2} \delta_{i j} \right)^2 \, , \label{eqn:ucr} \end{eqnarray} where $\lambda_\pm = \{ 1+ \kappa/\nu^2 \pm \sqrt{(1-\kappa/\nu^2)^2 + 4 \xi^2/\nu^2} \}/2$ are the eigenvalues of $G$. This inequality was derived in Ref. \cite{koide18} for the first time. The right-hand side becomes minimum when $\lceil \delta x^i \delta v^j \rfloor = \delta_{i j} \xi (\nu^2+\kappa)/(\nu^2 + \xi^2) $. The inequality reproduces the well-known result in quantum mechanics by choosing \begin{eqnarray} (\alpha_A,\alpha_B,\nu) = \left( 0, \frac{1}{2}, \frac{\hbar}{2\uM} \right) \, . 
\label{eqn:para_qm} \end{eqnarray} Then Eq.\ (\ref{eqn:nsk}) (or equivalently Eq.\ (\ref{eqn:qvh_nn})) coincides with Madelung's hydrodynamics, and our uncertainty relation (\ref{eqn:ucr}) leads to the Robertson-Schr\"{o}dinger inequality, \begin{eqnarray} \sigma^{(2)}_{x^i} \sigma^{(2)}_{p^j} \ge \frac{\hbar^2}{4} \delta_{ij} + \left\{ {\rm Re} [ \langle (x^i_{op} - \langle x^i_{op} \rangle) (p^j_{op} - \langle p^j_{op} \rangle) \rangle ] \right\}^2 \, . \label{eqn:rs_qm} \end{eqnarray} In this derivation, we used that the quantum-mechanical expectation values are expressed as \begin{eqnarray} \begin{array}{lcl} \langle {x}^i_{op} \rangle = \lceil x^i \rfloor \, , & & \langle ({x}^i_{op} - \langle x^i_{op} \rangle)^2 \rangle = \sigma^{(2)}_{x^i} \, ,\\ \langle {p}^i_{op} \rangle = \frac{\lceil p^i_+ \rfloor + \lceil p^i_- \rfloor}{2} \, , & & \langle ({p}^i_{op} -\langle p^i_{op} \rangle)^2 \rangle = \sigma^{(2)}_{p^i} \, , \end{array} \label{eqn:op-sto} \end{eqnarray} where ${\bf x}_{op}$ and ${\bf p}_{op}$ are the position and momentum operators, respectively, and $\langle \, \, \, \, \rangle$ denotes the expectation value with a wave function. See Refs.\ \cite{koide18,koide_review20} for details. The second term on the right-hand side of Eq.\ (\ref{eqn:rs_qm}) is always non-negative. The Kennard inequality is reproduced when this term is ignored. Note that the famous paradox for the angular uncertainty relation is resolved in the present approach \cite{koide20-1}. For a quantum-mechanical uncertainty relation in different stochastic approaches, see Refs.\ \cite{illuminati,lindgren}. The advantage of the present approach compared to the standard operator formalism is discussed in Sec.\ \ref{sec:conclusion}. \subsection{Zero uncertainty} \label{sec:ideal_limit} The NSK equation is reduced to the Euler equation in the vanishing noise limit $\nu \rightarrow 0$. Then our inequality (\ref{eqn:ucr}) becomes \begin{eqnarray} \sigma^{(2)}_{x^i} \sigma^{(2)}_{p^j} \ge 0 \, . 
\nonumber \end{eqnarray} Here we dropped the second term on the right-hand side of Eq.\ (\ref{eqn:ucr}) to find the Kennard-type inequality. One might think that zero uncertainty can be realized even for fluctuating dynamics by setting $\kappa = \xi^2$. This choice of the parameters is however not permitted. This is because the right-hand side of Eq.\ (\ref{eqn:ucr}) can be reexpressed as \begin{eqnarray} \sigma^{(2)}_{x^i} \sigma^{(2)}_{p^j} &\ge& \frac{(4 \uM \nu)^2}{1+(\xi/\nu)^2} |{\rm det} ({\cal M}) |^2 \delta_{ij} \, . \nonumber \end{eqnarray} Here, again, the irrelevant second term on the right-hand side of Eq.\ (\ref{eqn:ucr}) is ignored. The matrix ${\cal M}$ is introduced in Eq.\ (\ref{eqn:sto-lag}). That is, the condition $\kappa = \xi^2$ is equivalent to ${\rm det} ({\cal M}) =0$. However, ${\rm det} ({\cal M})$ must not vanish for our momenta to be defined through the Legendre transformation of the stochastic Lagrangian. Therefore the uncertainty for stochastic dynamics always has a finite value. \section{Viscous minimum uncertainty state} \label{sec:mus} We discuss the minimum uncertainty state of the inequality (\ref{eqn:ucr}) in a one-dimensional system. Such a state should reproduce the well-known result in quantum mechanics when we choose Eq.\ (\ref{eqn:para_qm}). To guess the viscous minimum uncertainty state, we consult the numerical results for the NSF equation. The time evolution of the uncertainty of the NSF fluid is numerically calculated in Ref.\ \cite{koide_review20} using an initially Gaussian mass distribution at rest. As shown in Fig.\ 7, the uncertainty of viscous fluids with low Reynolds numbers takes a value close to the theoretically predicted minimum soon after the initial time. The profiles of the mass distribution and the velocity field are shown in Fig.\ 4. The mass distribution is then approximately given by a Gaussian function. 
Ignoring the behavior in the irrelevant low density region, the velocity field seems to have a linear position dependence. We thus assume that the viscous minimum uncertainty state is given by \begin{eqnarray} \begin{split} \rho(x) = \sqrt{\frac{A}{\pi}} e^{-A (x-x_0)^2} \, , \, \,\,\, v(x) = v_0 + B x \, , \end{split} \label{eqn:mu-state} \end{eqnarray} where $x_0$, $v_0$, $A$ and $B$ are real constants. More properly, the second equation should be $v(x) = v_0 + B (x - x_0)$ but the difference is absorbed into the definition of $v_0$. The standard deviations, $\sigma^{(2)}_x$ and $\sigma^{(2)}_p $, are easily calculated using this state and then we find \begin{eqnarray} \sigma^{(2)}_x \sigma^{(2)}_p &=& \uM^2 \frac{(\kappa - \xi^2)^2 }{\nu^2 + \xi^2} \nonumber \\ && + \uM^2 \left( 1 + \frac{\xi^2}{\nu^2} \right) \left( \frac{B}{2A} - \frac{\xi(\nu^2 + \kappa)}{\nu^2 + \xi^2} \right)^2 \, . \label{eqn:minim_product} \end{eqnarray} Therefore, by choosing \begin{eqnarray} \frac{B}{2A} = \lceil \delta x \delta v \rfloor = \frac{\xi(\nu^2 + \kappa)}{\nu^2 + \xi^2} \, , \label{eqn:B} \end{eqnarray} we find that Eq.\ (\ref{eqn:minim_product}) gives the minimum of the inequality (\ref{eqn:ucr}). That is, the viscous minimum uncertainty state is defined by Eqs.\ (\ref{eqn:mu-state}) and (\ref{eqn:B}). Because of the position-dependent velocity, the viscous minimum uncertainty state expands to homogenize the particle distribution. Therefore the lifetime of the viscous minimum uncertainty state will be short in general. To see the influence of the viscous uncertainty, we should observe short time evolutions in small inhomogeneous systems as is realized in heavy-ion collisions. See also the discussion in Sec.\ \ref{sec:conclusion}. 
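The moments entering Eqs.\ (\ref{eqn:minim_product}) and (\ref{eqn:B}) can be reproduced symbolically for the trial state (\ref{eqn:mu-state}) (our own sketch, taking a single constituent particle, $N=1$): the variance of $x$ is $1/(2A)$ and the covariance $\lceil \delta x \, \delta v \rfloor$ equals $B/(2A)$.

```python
import sympy as sp

# For rho(x) = sqrt(A/pi) exp(-A (x - x0)^2) and v(x) = v0 + B x with N = 1,
# compute the variance of x and the covariance <dx dv> used in Eq. (B).
x, x0, v0, B = sp.symbols('x x_0 v_0 B', real=True)
A = sp.Symbol('A', positive=True)
rho = sp.sqrt(A / sp.pi) * sp.exp(-A * (x - x0)**2)
v = v0 + B * x
avg = lambda f: sp.integrate(rho * f, (x, -sp.oo, sp.oo))
var_x = sp.simplify(avg((x - avg(x))**2))
cov_xv = sp.simplify(avg((x - avg(x)) * (v - avg(v))))
print(var_x)   # 1/(2*A)
print(cov_xv)  # B/(2*A)
```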
\subsection{NSF equation} \label{sec:nsf} The above result describes the minimum uncertainty state for the NSF equation by setting $\kappa=0$, \begin{eqnarray} \rho (x) = \sqrt{\frac{A}{\pi}} e^{-A (x-x_0)^2} \, , \, \,\,\, v(x) = v_0 + 2A \frac{\xi \nu^2}{\nu^2 + \xi^2} x \, . \nonumber \end{eqnarray} As pointed out, the linear position dependence of the velocity field is observed in the expansion process of a localized fluid which is shown in Fig.\ 4 of Ref.\ \cite{koide_review20}. As the initial condition, we used the stationary fluid given by the Gaussian mass distribution. The isentropic ideal gas is considered for the equation of state. The corresponding minimum value is \begin{eqnarray} \sigma^{(2)}_x \sigma^{(2)}_p &=& \uM^2 \frac{ \xi^4 }{\nu^2 + \xi^2} \, . \nonumber \end{eqnarray} The minimum value is characterized by two different parameters and thus, even if we fix $\xi$, the minimum value is affected by $\nu$. Because the diffusion coefficient of the Fokker-Planck equations obtained from the SDE's is given by $\nu$, it is natural to consider that the noise intensity $\nu$ is determined by the diffusion coefficient of a fluid. For example, let us consider water \cite{koide_review20}. The mass of a water molecule is $\uM = 3 \times 10^{-26}$ kg. At room temperature, the kinematic viscosity is $\xi \sim 10^{-6}$ $\um^2/s$ and the diffusion coefficient in the liquid phase is $\nu \sim 10^{-9}$ $\um^2/s$. Thus, the contribution of $\nu$ is negligibly small compared to $\xi$ and the square root of the minimum value is estimated as \begin{eqnarray} \sqrt{\uM^2 \frac{ \xi^4 }{\nu^2 + \xi^2}} \sim \uM \xi \sim \, 600 \times \frac{\hbar}{2} \, . \nonumber \end{eqnarray} For water vapor, $\xi \sim 0.3 \times 10^{-6}$ $\um^2/s$ and $\nu \sim 10^{-4}$ $\um^2/s$. 
Contrary to the above case of liquid, the effect of diffusion is larger than that of viscosity in the gas phase and the square root of the minimum value becomes \begin{eqnarray} \sqrt{\uM^2 \frac{ \xi^4 }{\nu^2 + \xi^2}} \sim \uM \frac{\xi^2}{\nu} \sim \, 60 \times \frac{\hbar}{2} \, . \nonumber \end{eqnarray} One can see that these minimum values are much larger than the corresponding quantity in quantum mechanics. These are, however, still much smaller than the coarse-graining scale in the standard applications of hydrodynamics and thus this effect will be irrelevant to most applications. See also the discussion in Sec.\ \ref{sec:conclusion}. The above results suggest that the difference between liquid and gas can be characterized by the different behaviors of the uncertainty. See the discussion in Ref.\ \cite{koide_review20} for details. \subsection{Quantum-mechanical limit} \label{sec:qm_limit} The viscous minimum uncertainty state is a generalization of the (standard) coherent state. In Madelung's hydrodynamics \cite{madelung}, $\rho (x)$ and $v(x)$ for a given wave function $\Psi(x)$ are defined by \begin{eqnarray} \begin{split} \rho (x) = |\Psi(x)|^2 \, , \, \, \, \, v (x) = \frac{\hbar}{\uM} {\rm Im}[ \partial_x \ln \Psi(x)] \, . \end{split} \label{eqn:decomp} \end{eqnarray} At the same time, the coordinate representation of the coherent state is given by \begin{eqnarray} \langle x | \alpha \rangle = \left( \frac{C}{\pi}\right)^{1/4} e^{-\frac{1}{2}(\sqrt{C}x- \alpha_R)^2 } e^{\ii \alpha_I (\sqrt{C} x - \alpha_R/2)} \, , \nonumber \end{eqnarray} where $\alpha_R$, $\alpha_I$ and $C$ are real constants and $\alpha = (\alpha_R + \ii \alpha_I)/\sqrt{2}$ is the eigenvalue of the lowering operator in a quantum harmonic oscillator \cite{book:jpg}. 
Substituting this into Eq.\ (\ref{eqn:decomp}), we find that $\rho(x)$ and $v(x)$ for the coherent state are reproduced from Eq.\ (\ref{eqn:mu-state}) by choosing \begin{eqnarray} \begin{array}{lcl} A = C \, , & & x_0 = \frac{\alpha_R}{\sqrt{C}} \, ,\\ B = 0\, ,& & v_0 = \frac{\hbar}{\uM} \sqrt{C} \alpha_I \, . \end{array} \nonumber \end{eqnarray} Here $B=0$ corresponds to vanishing kinematic viscosity in Eq.\ (\ref{eqn:B}). Note that this state becomes stationary and gives the ground state of the harmonic oscillator in quantum mechanics when the parameters are chosen as \begin{eqnarray} C = \frac{\uM \omega}{\hbar} \, , \, \, \, \, \, \, \alpha_R = \alpha_I = 0\, , \end{eqnarray} where $\omega$ is the angular frequency of the harmonic potential. Indeed, by using the definition of $v(x)$ (\ref{eqn:def_v}) and the consistency condition (\ref{eqn:cc}), we can show that this state gives the stationary point of the Fokker-Planck equations (\ref{eqn:ffp}) and (\ref{eqn:bfp}). See also the discussion in Sec.\ \ref{sec:smv}. In the viscous case where $B \neq 0$, the minimum uncertainty state is not stationary. In other words, there exist other states which have smaller uncertainties in the vicinity of the 1-D Gaussian fluid at rest ($v=0$). See, for example, Fig.\ 7 of Ref.\ \cite{koide_review20}, where the uncertainty always decreases in the early stage of time evolution when a stationary fluid is used as the initial condition. \subsection{Inviscid minimum uncertainty} \label{sec:smv} Let us consider the inviscid limit ($\eta=\mu=0$) in the NSK equation, \begin{eqnarray} (\partial_t + {\bf v} \cdot \nabla ) v^{i} = -\frac{1}{\uM} \partial_i V + 2\kappa \partial_i \frac{\nabla^2 \sqrt{\rho}}{\sqrt{\rho}} -\frac{1}{\uM\rho} \partial_i P \, . \label{eqn:madelung} \end{eqnarray} This is called the Euler-Korteweg equation. 
As was pointed out, the $\kappa$ term represents the capillary action in liquid-vapor fluids and the gradient of the quantum potential in the Schr\"{o}dinger and Gross-Pitaevskii equations. Using Eqs.\ (\ref{eqn:mu-state}) and (\ref{eqn:B}), the product of the square roots of $\sigma^{(2)}_{x^i}$ and $\sigma^{(2)}_{p^j}$ in this case is given by \begin{eqnarray} \sqrt{\sigma^{(2)}_{x^i}} \sqrt{\sigma^{(2)}_{p^j}} = \uM \frac{\kappa}{\nu} \delta_{i j} \, . \label{eqn:ucr_nonvis} \end{eqnarray} The constant on the right-hand side represents the inviscid minimum uncertainty. Suppose that the smallest inviscid uncertainty is given by the quantum-mechanical one. Then the coefficient $\kappa$ has a lower bound and should satisfy the following inequality, \begin{eqnarray} \kappa \ge \frac{\hbar}{2\uM} \nu \, . \label{eqn:lb_kappa} \end{eqnarray} When $\kappa$ is fixed, the upper bound of the noise intensity (the diffusion coefficient) is characterized by this inequality. This interpretation may sound strange because the uncertainty in our approach comes from the non-differentiability of the stochastic trajectory, which seems to be enhanced as $\nu$ increases, as seen from the consistency condition (\ref{eqn:cc}). Note however that $\kappa$ is proportional to $\nu^2$, as shown by the first equation of Eq.\ (\ref{eqn:coeff}). Using this, the above inequality is reexpressed as \begin{eqnarray} \nu \ge \frac{\hbar}{4\uM \alpha_B} \, , \end{eqnarray} where $\alpha_B$ is a finite real constant. As we expected, the uncertainty in Eq.\ (\ref{eqn:ucr_nonvis}) increases as $\nu$ is enhanced. In the inviscid case, the minimum uncertainty state can be stationary and thus satisfies the Fokker-Planck equations (\ref{eqn:ffp}) and (\ref{eqn:bfp}), as was discussed in Sec.\ \ref{sec:qm_limit}. 
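As a cross-check (our own sketch), setting $\xi = 0$ in Eq.\ (\ref{eqn:minim_product}) with the choice (\ref{eqn:B}) reproduces the square of the inviscid minimum $\uM\kappa/\nu$ of Eq.\ (\ref{eqn:ucr_nonvis}).

```python
import sympy as sp

# Inviscid limit (xi -> 0) of the minimum-uncertainty product of Eq.
# (minim_product), with B/(2A) fixed as in Eq. (B): the product reduces to
# (M kappa / nu)^2, whose square root is the inviscid minimum M kappa / nu.
M, nu, kappa, xi = sp.symbols('M nu kappa xi', positive=True)
B_over_2A = xi * (nu**2 + kappa) / (nu**2 + xi**2)  # Eq. (B)
product = (M**2 * (kappa - xi**2)**2 / (nu**2 + xi**2)
           + M**2 * (1 + xi**2 / nu**2)
           * (B_over_2A - xi * (nu**2 + kappa) / (nu**2 + xi**2))**2)
inviscid = sp.simplify(product.subs(xi, 0))
print(inviscid)  # M**2*kappa**2/nu**2
```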
Moreover, this state satisfies even the Euler-Korteweg equation when we choose \begin{eqnarray} V = \frac{1}{2}\uM \omega^2 x^2 \, , \, \, \, \, \, \, P = C_{pre} \, \rho \, , \end{eqnarray} where $C_{pre} $ is a proportionality constant. Then the stationary solution of the Euler-Korteweg equation is given by \begin{eqnarray} \begin{split} \rho(x) = \sqrt{\frac{A}{\pi}} e^{-A x^2} \, , \, \,\,\, v(x) = 0\, , \end{split} \end{eqnarray} where \begin{eqnarray} A = \frac{1}{2\kappa} \left\{ \sqrt{\frac{C^2_{pre}}{\uM^2} + 4 \kappa \omega^2} - \frac{C_{pre}}{\uM} \right\} \, . \end{eqnarray} Comparing this solution with Eqs.\ (\ref{eqn:mu-state}) and (\ref{eqn:B}), it is easy to see that this stationary state gives the inviscid minimum uncertainty. When we use $C_{pre} = 0$ and Eq.\ (\ref{eqn:para_qm}), this state agrees with the ground state of the harmonic oscillator in quantum mechanics, which was discussed in Sec.\ \ref{sec:qm_limit}. \subsection{Viscous control of minimum uncertainty} We investigate the effect of viscosity on the inviscid minimum uncertainty defined in Sec.\ \ref{sec:smv}. The minimum uncertainty obtained from Eqs.\ (\ref{eqn:mu-state}) and (\ref{eqn:B}) is given by \begin{eqnarray} \sqrt{ \sigma^{(2)}_x } \sqrt{ \sigma^{(2)}_p } = \uM \nu \frac{| (\kappa/\nu^2) - (\xi/\nu)^2 | }{\sqrt{1 + (\xi/\nu)^2}} \, . \label{eqn:mu_mus} \end{eqnarray} For the right-hand side to be smaller than the inviscid minimum, $\uM\kappa/\nu$, the kinematic viscosity should satisfy \begin{eqnarray} \xi^*_{min} < \xi < \xi^*_{max} \, , \label{eqn:xi_ineq} \end{eqnarray} where \begin{eqnarray} \xi^*_{min} &=& \left\{ \begin{array}{cl} 0 & {\rm for}\, \, 0 \le \kappa <\nu^2 \\ & \\ \sqrt{\frac{3}{2} \frac{\kappa}{\nu^2} - \sqrt{\frac{\kappa}{\nu^2} \left( 1 + \frac{5}{4} \frac{\kappa}{\nu^2} \right)}} & {\rm for}\, \, \nu^2 \le \kappa \end{array} \right. 
\, , \nonumber \\ \xi^*_{max} &=& \sqrt{\frac{3}{2} \frac{\kappa}{\nu^2} + \sqrt{\frac{\kappa}{\nu^2} \left( 1 + \frac{5}{4} \frac{\kappa}{\nu^2} \right)}} \, . \nonumber \end{eqnarray} The parameters satisfying these inequalities are shown in Fig.\ \ref{fig:xi}. The shaded area of the diagram corresponds to the domain where the viscous minimum uncertainty is smaller than the inviscid minimum value $\uM \kappa /\nu$. The uncertainty becomes extremely small around $\kappa = \xi^2$, denoted by the dashed line. The point $(\kappa/\nu^2,\xi^2/\nu^2) = (1,0)$ on the diagram corresponds to the case of quantum mechanics. The NSF equation corresponds to the vertical line of $\kappa/\nu^2 = 0$. \begin{figure}[t] \begin{center} \includegraphics[scale=0.3]{xi} \end{center} \caption{ The parameters satisfying the inequality (\ref{eqn:xi_ineq}). In the shaded area, the viscous minimum uncertainty is smaller than the inviscid minimum value. The dashed line represents $\kappa = \xi^2$.} \label{fig:xi} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[scale=0.3]{minimum_uc_2} \end{center} \caption{The minimum uncertainty for $\kappa=\nu^2$ is plotted as a function of $\xi/\nu$. The dashed line represents the inviscid minimum value $\uM \kappa/\nu = \uM \nu = \hbar/2$ for $\nu = \hbar/(2\uM)$. } \label{fig:muc} \end{figure} As an extreme case, let us consider a weakly interacting quantum many-body system at low temperature. Then the coefficient $\kappa$ will be given by $\hbar^2/(4\uM^2)$. In fact, the Bose-Einstein condensate is approximately described by the Gross-Pitaevskii equation, where $\kappa = \nu^2 = \hbar^2/(4\uM^2)$. Suppose that thermal fluctuations induce viscosity and the time evolution of the quantum many-body system is described by the NSK equation. The coefficient $\kappa$ is then a function of temperature and in general deviates from $\hbar^2/(4\uM^2)$. 
We further assume that the temperature dependence of $\kappa$ is weak and $\kappa$ is given by $\hbar^2/(4\uM^2)$ at sufficiently low temperature. In this case, we can treat $\xi$ as a free parameter, fixing $\kappa/\nu^2=1$, to investigate the behavior of the minimum uncertainty, which is shown in Fig.\ \ref{fig:muc}. For the sake of comparison, the dashed line represents the inviscid minimum value, which agrees with the quantum-mechanical minimum value $\uM \kappa/\nu = \uM \nu = \hbar/2$ in the present parameter set. The effects induced by the $\kappa$ term and the viscous term cancel each other out, and hence we find that the product $(\sigma^{(2)}_x)^{1/2} ( \sigma^{(2)}_p)^{1/2}/(\uM \nu)$ can be smaller than $1$ for a sufficiently weak kinematic viscosity satisfying \begin{eqnarray} \xi^*_{min} = 0 < \xi < \xi^{*}_{max} = \frac{\sqrt{3}}{2} \frac{\hbar}{\uM} \, . \label{eqn:ineq_qm} \end{eqnarray} The viscous minimum uncertainty vanishes when $\xi/\nu = 1$, which corresponds to $\kappa = \xi^2$, but this choice is forbidden for the reason discussed in Sec.\ \ref{sec:ideal_limit}. For a larger $\xi$, the effect of the viscous term becomes dominant and the number of collisions among particles (fluid elements) increases. Since the non-differentiability of trajectories is enhanced by the collisions, the viscous minimum uncertainty behaves as an increasing function of $\xi$. \subsection{Lower bound and critical value of viscosity} For the coefficient $\kappa$, we discussed the possible lower bound given by Eq.\ (\ref{eqn:lb_kappa}) for a given $\nu$. A similar constraint can exist for $\xi$ as well. In relativistic heavy-ion collision physics, the behavior of quantum many-body systems is approximately given by a viscous fluid \cite{hydro_review}. 
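Before comparing with such bounds, it is convenient to have the relevant numbers at hand: the $\kappa=\nu^2$ profile of Fig.\ \ref{fig:muc} and the critical value in Eq.\ (\ref{eqn:ineq_qm}) follow directly from Eq.\ (\ref{eqn:mu_mus}). A minimal sketch (in units $\uM = \nu = 1$, so that $\uM\nu = \hbar/2$; the function name is ours):

```python
import math

def uncertainty(s, r=1.0):
    """Viscous minimum uncertainty of Eq. (mu_mus) in units of M*nu,
    with r = kappa/nu**2 and s = xi/nu; r = 1 is the quantum case."""
    return abs(r - s**2) / math.sqrt(1.0 + s**2)

# the product vanishes at xi/nu = 1, i.e. at kappa = xi**2 ...
assert uncertainty(1.0) == 0.0
# ... stays below the inviscid value (1 in these units) for 0 < s < sqrt(3) ...
assert all(uncertainty(s) < 1.0 for s in (0.2, 0.5, 1.5, 1.7))
# ... and crosses it at the critical value s = sqrt(3):
assert abs(uncertainty(math.sqrt(3.0)) - 1.0) < 1e-12
# beyond the critical value it increases with s:
assert uncertainty(2.0) < uncertainty(3.0) < uncertainty(5.0)
print(math.sqrt(3.0) / 2.0)  # critical xi in units of hbar/M, ~0.866
```

The printed value $\xi^*_{max} \approx 0.87\,\hbar/\uM$ is the scale against which any lower bound on $\xi$ is to be compared.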
The viscous effect is indeed considered to be indispensable because it is believed that the shear viscosity cannot be smaller than the Kovtun-Son-Starinets (KSS) bound \cite{kss}, \begin{eqnarray} \frac{\eta}{s} \ge \frac{\hbar}{4\pi k_B} \, , \label{eqn:kss} \end{eqnarray} where $s$ is the entropy density. This bound is based on the ansatz of the AdS/CFT correspondence. Similar lower bounds for the shear viscosity are considered in Ref.\ \cite{gyu}. Assuming $s \sim k_B \rho$, Eq.\ (\ref{eqn:kss}) yields the following inequality for the kinematic viscosity, \begin{eqnarray} \xi \ge \frac{1}{8\pi}\frac{\hbar}{ \uM} \, . \label{eqn:kss2} \end{eqnarray} Let us consider the relation between the viscous minimum uncertainty and the KSS bound in the system considered in Fig.\ \ref{fig:muc}. To have a minimum value smaller than the inviscid one $\hbar/2$, the kinematic viscosity should be smaller than the critical value $\xi^*_{max}$ defined in Eq.\ (\ref{eqn:ineq_qm}). Comparing Eq.\ (\ref{eqn:kss2}) with Eq.\ (\ref{eqn:ineq_qm}), we find that $\xi^*_{max}$ is larger than the KSS bound, and thus the viscous effect can, at least in principle, induce a minimum uncertainty smaller than $\hbar/2$. However, the critical value and the KSS bound are of comparable order, and we cannot decide the precise ordering of these quantities within the present rough estimation. The KSS bound may indicate that there exists a fundamental mechanism in quantum physics which does not permit the improvement of uncertainty beyond the quantum-mechanical minimum value $\hbar/2$ by viscosity. \section{Concluding remarks} \label{sec:conclusion} The viscous minimum uncertainty state of the fluid described by the Navier-Stokes-Korteweg equation was derived. This state has a Gaussian particle distribution and is thus regarded as a generalization of the coherent state of quantum mechanics. 
The velocity field of this state exhibits a linear position dependence, and its slope is characterized by the shear viscosity. Such a linear dependence of the velocity field is often observed in the expanding fluid described by the Navier-Stokes-Fourier equation. The corresponding uncertainty is controlled by the shear viscosity and can be smaller than the inviscid minimum value when the shear viscosity is smaller than a critical value. The parameter sets satisfying this condition occupy a zonal region in the diagram of the transport coefficients $\kappa$ and $\xi$, as shown in Fig.\ \ref{fig:xi}. The existence of such a parameter set requires special attention because the shear viscosity can have a minimum value. If this minimum is given by the Kovtun-Son-Starinets bound, we found that the order of the KSS bound is similar to that of the critical value of the shear viscosity. This may suggest that the lower bound of viscosity exists so that viscosity cannot improve the uncertainty beyond the quantum-mechanical minimum value $\hbar/2$. The shear and bulk viscosities in Eq.\ (\ref{eqn:nsk}) are given by the same formulae as those of the NSF equation. As is well-known, these coefficients are determined by the Green-Kubo-Nakano (GKN) formula \cite{zwanzig,koide_tra1,koide_tra2,koide_tra3}. See, for example, the discussion around Eq.\ (26) of Ref.\ \cite{koide_tra1}. The shear viscosity $\eta$ is given by Eq.\ (27). To obtain this expression, we linearize the NSF equation and take the low wave number limit (${\bf k} \rightarrow 0$). Applying this procedure to Eq.\ (\ref{eqn:nsk}), the contribution from the $\kappa$ term disappears, and thus Eq.\ (\ref{eqn:nsk}) coincides with the NSF equation. Therefore the GKN formula of the NSF equation is applicable to determine the coefficients in Eq.\ (\ref{eqn:nsk}). The corresponding formula for the coefficient $\kappa$ is, however, not yet known. 
The formulation developed in Refs.\ \cite{koide_tra1,koide_tra2,koide_tra3} is applicable not only to classical many-body systems but also to quantum many-body systems. Thus, if such a formula is found, the coefficient $\kappa$ can be calculated from quantum mechanics. Unlike the coefficients of irreversible currents, the $\kappa$ term does not violate the time-reversal symmetry in the NSK equation and thus will be characterized by the real part of the retarded Green's function of microscopic currents. In the standard formulation of quantum mechanics, the non-commutativity of operators leads to the uncertainty relation, while the same property is reproduced from the non-differentiability of particle trajectories in the present approach. The operator formalism is established in various applications of quantum mechanics, and thus one may wonder about the significance of the alternative interpretation of the uncertainty relation. The advantage of the present approach is its applicability to generalized coordinate systems. For example, the angle variable and the angular momentum form a pair of canonical variables in polar coordinates, but the corresponding operator representations are not established because there is no self-adjoint multiplicative operator which satisfies the periodicity and the canonical commutation relation simultaneously. See Ref.\ \cite{koide20-1} and references therein. Therefore, in the discussion of the angular uncertainty relation, the angle operator is introduced only by altering one of those conditions. By contrast, the present approach applies to the quantization of generalized coordinate systems without introducing additional conditions \cite{koide19}, and the uncertainty relation in generalized coordinates is obtained without any difficulty \cite{koide20-1}. It is known that thermal fluctuations are enhanced in low dimensions $(D <3)$ and such strong fluctuations can trigger a modification of Eq.\ (\ref{eqn:nsk}) \cite{ernst,kovtun}. 
In our approach, this difference of fluctuations will be taken into account through Brownian motions. In one and two dimensions, Brownian motion is recurrent: a Brownian particle returns to its initial position at some time. In higher dimensions ($D \ge 3$), however, the trajectory is not recurrent. See, for example, Ref.\ \cite{ezawa} and references therein. Thus the difference of fluctuations in low and high dimensions can be investigated by comparing the uncertainty relations. For example, if the uncertainty in low dimensions is not qualitatively different from the one in high dimensions, it may signal the inapplicability of Eq.\ (\ref{eqn:nsk}) in low dimensions. The viscous uncertainty characterizes the motion of fluid elements. The fluid element is an abstract volume element, and thus its direct observation will be difficult. However, descriptions based on hydrodynamic models sometimes depend on the motions of fluid elements, and thus the existence of the viscous uncertainty calls for a modification of such descriptions. Physics in relativistic heavy-ion collisions is one example \cite{hydro_review}. The vacuum is excited by high-energy nuclear collisions, and the behavior of the excited vacuum is approximately described by viscous hydrodynamics. The experimentally observed particles, called hadrons, are assumed to be produced by thermal radiation from each fluid element of the viscous fluid. It is known that this hydrodynamic model explains experimental data very well. In this model, we assume that the fluid elements pass along the streamlines of the viscous fluid, but such a view is too simple. Our result shows that the currents of the fluid elements fluctuate around the streamlines of the viscous fluid, and this fluctuation is characterized by $\sigma^{(2)}_p$. Moreover, the behavior of $\sigma^{(2)}_p$ is restricted by that of $\sigma^{(2)}_x$, which can reflect the inhomogeneity of the matter distribution. 
See also Fig.\ 5 in Ref.\ \cite{koide_review20}. Because it lacks the above-mentioned effect, the standard hydrodynamic model may underestimate the effect of the spatial inhomogeneity of the excited vacuum, and hence the anisotropy of the hadron production. A more quantitative analysis is left as future work. \vspace*{1cm} The author thanks J.-P.\ Gazeau and T.\ Kodama for fruitful discussions and comments, and acknowledges the financial support by CNPq (303468/2018-1). A part of the work was developed under the project INCT-FNA Proc.\ No.\ 464898/2014-5.
\section{Introduction} The present formulation of the theory of actions and representations of Lie supergroups does not appropriately address all relevant phenomena: Consider the basic example of the additive Lie supergroup $G$ of an odd-super vector space $\mathfrak g$. The coadjoint action is trivial, so the orbit through the unique point $0\in\mathfrak g^*$ is again a point. Similarly, $G$ has only the trivial irreducible unitary representation. Although this confirms the idea of the orbit method in a narrow sense, there is no hope of decomposing the regular representation of $G$ on $\SheafTypeface O_G=\bigwedge\mathfrak g^*$ by these means, nor can one reasonably expect thereby to construct representations of $G$ in any generality. This suggests that it is crucial to broaden the notion of points. Following A.~Grothendieck, a \Define{$T$-valued point} of a space $X$ is a map $x:T\longrightarrow X$. This idea is based on considering an ordinary point as a map $*\longrightarrow X$ where $*$ is a singleton, allowing the parameter space to acquire additional degrees of freedom. The $G$-isotropy (or stabiliser) through $x$ should then be a `group bundle' $G_x\longrightarrow T$, and the orbit a `bundle' $G\cdot x\longrightarrow T$ with a fibrewise $G$-action. For any Lie supergroup $G$ with Lie superalgebra $\mathfrak g$ acting on a supermanifold $X$ and any $x:T\longrightarrow X$, we obtain the following. \begin{ThA} The isotropy supergroup $G_x$ exists as a Lie supergroup over $T$ if and only if the orbit morphism is of locally constant rank, which is the case if and only if the $\SheafTypeface O_T$-module $x^*(\SheafTypeface A_\mathfrak g)$ is a locally direct summand of $x^*(\SheafTypeface T_X)$. Here, $\SheafTypeface A_\mathfrak g$ is the \Define{fundamental distribution} generated by the fundamental vector fields. Moreover, in this case, the orbit $G\cdot x\longrightarrow T\times X$ through $x$ exists as an equivariant local embedding of supermanifolds over $T$. 
\end{ThA} For the special case of orbits through ordinary points, the Superorbit Theorem was first proved by B.~Kostant \cite{kostant} in the setting of Lie--Hopf algebras, by C.P.~Boyer and O.A.~S\'anchez-Valenzuela \cite{bsv} for differentiable Lie supergroups, and by L.~Balduzzi, C.~Carmeli, and G.~Cassinelli \cite{bcc} using a functorial framework and super Harish-Chandra pairs. We recover the case of usual orbits through ordinary points as a special case. In the case of the coadjoint action of $G$ on $\mathfrak g^*$ and of a $T$-valued point $f$ of $\mathfrak g^*$, we prove the following result. \begin{ThB} If $G_f$ exists as a Lie supergroup, then the coadjoint orbit $G\cdot f$ admits a canonical supersymplectic structure over $T$. \end{ThB} We stress that our point of view allows us to stay within the realm of \emph{even} supersymplectic forms, whereas in previous work \cites{tuyn10a,tuyn10b}, it was necessary to work with inhomogeneous symplectic forms. Furthermore, we introduce a general framework of supergroup representations over $T$ to extend Kirillov's method \cite{kirillov} to orbits through $T$-valued points. As a proof of concept, we apply this to derive a Plancherel formula for the odd Abelian supergroup $\mathfrak g$, presenting its regular representation as an `odd direct integral' of `unitary' characters. In a similar vein, we construct representations for two super versions of the three-dimensional Heisenberg group which arise by assigning suitable parities to the generators in the commutation relation $[x,y]=z$. In this case, we find `universal' parameter spaces $T$ and `universal' representations over $T$. Not surprisingly, these bear a striking similarity to the Schr\"odinger representation. \medskip\noindent The idea that irreducible representations should be constructed from orbits on some universal $G$-space is suggested by the general philosophy of geometric quantisation. 
The case where this works best is that of nilpotent Lie groups, where it was established by A.A.~Kirillov in the form of his orbit method. The goal of extending this method to Lie supergroups was first addressed by B.~Kostant, in his seminal paper \cite{kostant}. In fact, as he remarks in his note \cite{kostant-harm}: Lie supergroups are ``likely to be [\ldots] useful [objects] only insofar as one can develop a corresponding theory of harmonic analysis''. Similarly, V.~Kac \cite{k77}*{5.5.4} poses the problem of constructing Lie supergroup representations \via the orbit method, in particular infinite-dimensional ones. For nilpotent Lie supergroups, it was shown by H.~Salmasian \cite{salmasian} (and further investigated by Neeb--Salmasian \cite{ns11}) that there is indeed a one-to-one correspondence of coadjoint orbits through \emph{ordinary} points, \ie through elements of $\mathfrak g_\ev^*$, with irreducible unitary representations in the sense of Varadarajan \emph{et al.} \cites{cctv,cctv-err}. As remarked at the beginning of this introduction, this does not yet attain the goal of a theory of harmonic analysis for Lie supergroups, even in the Abelian case. These limitations are overcome by considering orbits through $T$-valued points. A framework for the study of orbits through $T$-valued points was formulated in the category of schemes by D.~Mumford in his influential monograph \cite{mfk}, based on foundational work by A.~Grothendieck and P.~Gabriel. Although these ideas remain fruitful, the algebraic theory cannot be simply transferred to the differentiable category, and indeed the technical obstructions are formidable. At the same time, the differentiable setting is necessary for the envisaged applications: While all Lie groups are real analytic, any non-analytic (complete) vector field gives rise to an action which is not analytic (much less algebraic). 
Such situations are ubiquitous, particularly in the context of solvable Lie groups and their super generalisations. A careful study of coadjoint orbits (through regular semi-simple elements) of the orthosymplectic and special linear supergroups in the algebraic category was conducted by R.~Fioresi and M.A.~Lled\'o in Ref.~\cite{fl}. The first to consider coadjoint orbits through non-even functionals was G.~Tuynman \cites{tuyn10a,tuyn10b} in the form of a case study. His considerations are geared toward a specific example and formulated for DeWitt type supermanifolds. It is not clear whether this can be built into a general procedure and translated to Berezin--Kostant--Leites supermanifolds. Moreover, in his approach, he has to consider inhomogeneous ``symplectic'' forms. \medskip\noindent We conclude the introduction by summarising the paper's contents. We present general categorical notions for the study of actions in Section \ref{sec:cats}. We emphasize the technique of base change known from algebraic geometry. This allows us, among other things, to give a general definition of isotropy (or stabiliser) groups at $T$-valued points. In Section \ref{sec:groupoid-quots}, we review categorical quotients in the setting of differentiable and analytic superspaces and suggest a weak notion of geometric quotients. In order to treat quotients by group actions and equivalence relations on an equal footing, and with a view toward future applications, we introduce and employ the language of groupoids and their quotients. In Section \ref{sec:super-quot}, we specialise the discussion to supermanifolds. We prepare our discussion of isotropy supergroups at $T$-valued points by generalising the notion of morphisms of constant rank to relative supermanifolds (over a possibly singular base). 
We prove a rank theorem in this context (\thmref{Prop}{constant-rank-fibprod}); this is based on a family version of the inverse function theorem presented in Appendix \ref{app:invfun} (\thmref{Th}{invfun-loc}), also valid over a singular base. We investigate when the orbit morphism through a general point has constant rank (\thmref{Th}{action-locconst}) and, as an application, show the representability of isotropy supergroups under general conditions (\thmref{Th}{trans-iso}). This gives the existence of orbits under the same assumptions (\thmref{Th}{orbit}) and also implies that the isotropy supergroups exist only if the orbit morphism has constant rank. This relies on a family version of the closed subgroup theorem that we prove in Appendix \ref{app:closed-subgrp} (\thmref{Th}{imm-lie}). In Section \ref{sec:coad}, we construct the relative Kirillov--Kostant--Souriau form for coadjoint orbits through general points (\thmref{Th}{coadj-sympl}). Finally, in Section \ref{sec:quant}, we define the concept of representations over $T$. We then decompose the left-regular representation of ${\mathbb A}^{0|n}$ as a direct integral of characters and construct representations over appropriate parameter superspaces $T$ for super variants of the Heisenberg group. \medskip\noindent \emph{Acknowledgements.} We gratefully acknowledge the hospitality of the Max-Planck Institute for Mathematics in Bonn, where much of the work on this article was done. We wish to thank Torsten Wedhorn for helpful discussions on module sheaves. \section{A categorical framework for group actions}\label{sec:cats} \subsection{Categorical groups and actions}\label{subs:catgrp} Groups and actions can be defined quite generally for categories with finite products. In this subsection, we recall the relevant notions and give a number of examples from different contexts, which will serve to illustrate our further elaborations. In what follows, let $\CategoryTypeface C$ be a category with a terminal object $*$. 
For any $S,T\in\Ob\CategoryTypeface C$, let $\CategoryTypeface C_T^S$ be the category of objects in $\CategoryTypeface C$, which are under $S$ and over $T$. That is, objects and morphisms are given by the commutative diagrams depicted below: \[ \begin{tikzcd}[row sep=small] S\dar{}&S\dar{}\rar[mathdouble]{}&S\dar{}\\ X\dar{}&X\rar{}\dar{}&Y\dar{}\\ T&T\rar[mathdouble]{}&T. \end{tikzcd} \] Similarly, we define the categories $\CategoryTypeface C_T$ of objects over $T$ and $\CategoryTypeface C^S$ of objects under $S$. We recall the definition of group objects and actions. These concepts are well-known, see \eg Ref.~\cite{maclane}. If $X,S\in\Ob\CategoryTypeface C$, then we write $x\in_SX$ for the statement `$x:S\longrightarrow X$ is a morphism in $\CategoryTypeface C$'. We also say `$x$ is an $S$-valued point of $X$' and denote the set of all these by $X(S)$. This defines the object map of the \Define{point functor} $X(-)$ of $X$. For a morphism $f:X\longrightarrow Y$ in $\CategoryTypeface C$ and $x\in_SX$, we define $f(x)\defi f\circ x$. Applying this procedure to $S$-valued points of $X$ for various $S$ defines the point functor on morphisms. \begin{Def}[grp-act][groups and actions] A \Define[group!in $\CategoryTypeface C$]{$\CategoryTypeface C$-group} is the data of $G\in\Ob\CategoryTypeface C$, such that all non-empty finite products $G\times\dotsm\times G$ exist in $\CategoryTypeface C$, together with morphisms \[ 1=1_G:*\longrightarrow G,\quad i:G\longrightarrow G,\quad m:G\times G\longrightarrow G \] called, respectively, the \Define{unit}, the \Define{inverse}, and the \Define{multiplication} of $G$, which are assumed to satisfy, for any $S\in\Ob\CategoryTypeface C$ and any $r,s,t\in_SG$, the group laws \[ 1r=r1=r,\quad rr^{-1}=1=r^{-1}r,\quad (rs)t=r(st), \] where we denote $st\defi m(s,t)$ and $s^{-1}\defi i(s)$. In particular, $*$ is in a unique fashion a $\CategoryTypeface C$-group, called the \Define[group!trivial]{trivial $\CategoryTypeface C$-group}. 
Given a $\CategoryTypeface C$-group $G$ with structural morphisms $1$, $i$, and $m$, we define the \Define[group!opposite]{opposite $\CategoryTypeface C$-group} $G^\circ$ to $G$, together with the morphisms $1$ and $i$, and the multiplication $m^\circ:G\times G\longrightarrow G$, where the latter is defined by $m^\circ(s,t)\defi m(t,s)$ for all $T\in\Ob\CategoryTypeface C$ and $s,t\in_TG$. Let $X\in\Ob\CategoryTypeface C$ and assume that the non-empty finite products $Y_1\times\dotsm\times Y_n$ exist in $\CategoryTypeface C$, where $Y_j=G$ or $Y_j=X$ for any $j$. A (left) \Define[group!action (left)]{action} of a $\CategoryTypeface C$-group $G$ in $\CategoryTypeface C$, interchangeably called a (left) \Define[gspace@$G$-space (left)]{$G$-space}, consists of the data of $X$ and a morphism \[ a:G\times X\longrightarrow X, \] written $g\cdot x=a(g,x)$, for which we have \[ 1\cdot x=x,\quad (rs)\cdot x=r\cdot(s\cdot x) \] for any $S\in\Ob\CategoryTypeface C$, $x\in_SX$, and $r,s\in_SG$. Slightly abusing terminology, it is sometimes the morphism $a$ that is called an action and the space $X$ that is called a $G$-space. A $G^\circ$-space is called a \Define[gspace@$G$-space!right]{right $G$-space}. An action of $G^\circ$ is called a \Define[action!right]{right action} of $G$. \end{Def} \begin{Rem}[grp-dep] The data in the definition of a $\CategoryTypeface C$-group are not independent. Given $m$ and $1$ satisfying all above equations not involving $i$, there is at most one morphism $i$ with the above conditions verified. Similarly, $1$ is determined uniquely by $m$. Since the Yoneda embedding preserves limits, a $\CategoryTypeface C$-group is the same thing as an object $G$ of $\CategoryTypeface C$ whose point-functor $G(-)=\Hom[_\CategoryTypeface C]0{-,G}$ is group-valued. Actions can be characterised similarly. \end{Rem} \begin{Ex}[sgrp-ex] Group objects and their actions are ubiquitous in mathematics. 
Since our main interest lies in supergeometry, we begin with two examples from this realm. \begin{enumerate}[wide] \item\label{item:grpob-ex-vi} The general linear supergroup $\GL(m|n)$ is a complex Lie supergroup (\ie a group object in the category of complex-analytic supermanifolds). Its functor of points is given on objects $T$ by \[ \GL(m|n)(T)\defi\Set3{ \begin{Matrix}1 A&B\\ C&D \end{Matrix} }{ \begin{Matrix}[0]1 A\in\GL(m,\SheafTypeface O_\ev(T)),B\in\SheafTypeface O_\odd(T)^{m\times n}\\ C\in\SheafTypeface O_\odd(T)^{n\times m},D\in\GL(n,\SheafTypeface O_\ev(T)) \end{Matrix} }. \] Here, we let $\SheafTypeface O_k(T)\defi\Gamma(\SheafTypeface O_{T,k})$, $k=\ev,\odd$, $\Gamma$ denoting global sections and $\SheafTypeface O_T$ the structure sheaf of $T$, with graded parts $\SheafTypeface O_{T,\ev}$ and $\SheafTypeface O_{T,\odd}$. The group structure is defined by the matrix unit, matrix inversion and multiplication at the level of the point functor. For $X={\mathbb A}^{m|n}$, we have \[ X(T)=\Set3{ \begin{Matrix}1 a\\ b \end{Matrix} }{ a\in\SheafTypeface O_\ev(T)^{m\times 1},b\in\SheafTypeface O_\odd(T)^{n\times 1} }. \] Hence, an action of $\GL(m|n)$ on $X$ is given at the level of the functor of points by the multiplication of matrices with column vectors. As another example, consider $X=\mathrm{Gr}_{p|q,m|n}$, the super-Grassmannian of $p|q$-planes in $m|n$-space (where $p\sle m$ and $q\sle n$). For affine $T$, the point functor takes on the form \[ X(T)=\Set1{ Z }{ Z\text{ rank $p|q$ direct summand of }\SheafTypeface O(T)^{m|n} }. \] Again, $\GL(m|n)$ acts by left multiplication of matrices on column vectors. For general $T$ (which need not be affine), the functor of points can be computed in terms of locally direct subsheaves, compare Ref.~\cite{manin}. 
\item\label{item:grpob-ex-vii} In the category $\CategoryTypeface C$ of $(\knums,\Bbbk)$-supermanifolds \cite{ahw-sing}, where $\Bbbk\subseteq\knums$ and both are $\reals$ or $\cplxs$, consider the affine superspace $G\defi{\mathbb A}^{0|1}$ with the odd coordinate $\tau$. Then $G(T)=\SheafTypeface O_\odd(T)$, and the addition of odd superfunctions gives $G$ the structure of a supergroup. Let $X$ be a manifold. The total space $\Pi TX$ of the parity reversed tangent bundle of $X$ has the underlying manifold $X$ and the structure sheaf $\SheafTypeface O_{\Pi TX}=\Omega^\bullet_X$, the sheaf of $\knums$-valued differential forms, with the $\ints/2\ints$ grading induced by the $\ints$-grading. The supermanifold $\Pi TX$ has the point functor \[ \Pi TX(T)\cong\Hom[_\CategoryTypeface C]0{T\times{\mathbb A}^{0|1},X}. \] We denote elements on the left-hand side by $f$ and the corresponding elements on the right-hand side by $\tilde f$. We may let $x\in_TG$ act on $f\in_T\Pi TX$ by defining $x\cdot f$ \via \[ (x\cdot f)^\sim:T\times{\mathbb A}^{0|1}\longrightarrow X:(t,y)\in_R(T\times{\mathbb A}^{0|1})\longmapsto \tilde f(t,y+x(t))\in_RX. \] If $X$ has local coordinates $(x^a)$, then $\Pi TX$ has local coordinates $(x^a,dx^a)$. If $f\in_T\Pi TX$, then in terms of the point functor above, we have \[ f^\sharp(x^a)=j^\sharp(\tilde f^\sharp(x^a)),\quad f^\sharp(dx^a)=j^\sharp\Parens1{\tfrac\partial{\partial\tau}\tilde f^\sharp(x^a)}. \] Here, $j:T\longrightarrow T\times{\mathbb A}^{0|1}$ is the unique morphism over $T$ defined by $j^\sharp(\tau)\defi0$, $\tau$ denoting the standard odd coordinate function on ${\mathbb A}^{0|1}$. From this description, we find that the action of $G$ on $\Pi TX$ is the morphism \[ a:G\times\Pi TX\longrightarrow\Pi TX,\quad a^\sharp(\omega)=\omega+\tau d\omega. \] Expanding on this example a little, one may consider the action $\alpha$ of $({\mathbb A}^1,+)$ on ${\mathbb A}^{0|1}$ given by dilation, \ie $\alpha^\sharp(\tau)=e^t\tau$. 
This defines a semi-direct product supergroup $G'\defi{\mathbb A}^1\ltimes{\mathbb A}^{0|1}$, and the action $a$ considered above may be extended to $G'$ by dilating and translating in the ${\mathbb A}^{0|1}$ argument. In terms of local coordinates, the thus extended action is given by \[ a^\sharp(\omega)=e^{nt}(\omega+\tau d\omega), \] for $\omega$ of degree $n$, compare \cite{hkst}*{Lemma 3.4, Proposition 3.9}. \item\label{item:grpob-ex-runninggag} Let $G\defi{\mathbb A}^{0|1}$ with its standard additive structure and $X\defi{\mathbb A}^{1|1}$. Then $G$ acts on $X$ \via $a:G\times X\longrightarrow X$, defined by \[ a\Parens0{\gamma,(y,\eta)}\defi(y+\gamma\eta,\eta) \] for all $R$ and $\gamma\in_RG$, $(y,\eta)\in_RX$. In terms of the standard coordinates $\gamma$ on $G$ and $(y,\eta)$ on $X$, we have \[ a^\sharp(y)=y+\gamma\eta,\quad a^\sharp(\eta)=\eta. \] \end{enumerate} \end{Ex} \begin{Ex}[grpob-ex] Complementing our examples from supergeometry, we give a list of examples for categorical groups and actions from different contexts. \begin{enumerate}[wide] \item\label{item:grpob-ex-i} Let $G$ be a $\CategoryTypeface C$-group. Any $X\in\Ob\CategoryTypeface C$ can be endowed with a natural $G$-action, given by taking $a:G\times X\longrightarrow X$ to be the second projection. That is, $g\cdot x\defi x$ for all $T\in\Ob\CategoryTypeface C$, $g\in_TG$, and $x\in_TX$. This action is called \Define[action!trivial]{trivial}.\index{gspace@$G$-space!trivial} \item\label{item:grpob-ex-ii} Any $\CategoryTypeface C$-group $G$ is both a left and a right $G$-space, by the assignments \[ g\cdot x\defi gx\mathtxt{\OR} x\cdot g\defi xg, \] respectively, for all $T\in\Ob\CategoryTypeface C$, $g\in_TG$, and $x\in_TX$. \item\label{item:grpob-ex-iii} Topological groups and Lie groups, and their actions on topological spaces and smooth manifolds, respectively, are examples of categorical groups and actions. 
\item Group schemes and their actions on schemes are examples of categorical groups and actions as well, see \citelist{\cite{mfk}*{Definitions 0.2--3} \cite{demazure-gabriel}*{Chapitre II, \S 1.1}}. \item\label{item:grpob-ex-iv} A pointed (compactly generated) topological space $(W,w_0)$ is called an \textit{$H$-group} if it is equipped with based continuous maps $\mu:W\times W \longrightarrow W$, $e:W\longrightarrow W$ with $e(W)=w_0$, and $j:W\longrightarrow W$ such that the following holds: \begin{gather*} \mu \circ (e,{\id_W})\simeq \mu \circ ({\id}_W,e)\simeq \id_W,\\ \mu \circ (\mu\times{\id}_W)\simeq \mu \circ ({\id}_W \times \mu),\quad \mu \circ ({\id}_W,j)\simeq\mu \circ (j,{\id}_W)\simeq e, \end{gather*} where $\simeq$ denotes based homotopy equivalence, \cf \cite{agp}*{Section 2.7}. Given a pointed, compactly generated topological space $(X,x_0)$, its based loop space $\Omega X$ is a prime example of an $H$-group. In the category $\CategoryTypeface C$ of pointed, compactly generated topological spaces with based homotopy classes of continuous maps as morphisms, an $H$-group together with the homotopy classes of $e$, $j$, and $\mu$ is simply a $\CategoryTypeface C$-group. The basic theorem that the set $[X,W]_*=\Hom[_\CategoryTypeface C]0{X,W}$ of based homotopy classes has a group structure that is natural in the variable $X$ if and only if $W$ is an $H$-group \cite{agp}*{Theorem 2.7.6} is an instance of \thmref{Rem}{grp-dep}. If now $(G,1_G)=(W,w_0)$ is an $H$-group and $(X,x_0)$ a pointed topological space, then a pointed continuous map $a:G\times X\longrightarrow X$ is a group action in $\CategoryTypeface C$ if and only if $a(1_G,\cdot)$ is pointed homotopy equivalent to ${\id}_X$ and the diagram \[ \begin{tikzcd}[column sep=large] G\times G\times X\dar[swap]{\mu\times{\id}_X}\rar{{\id}_G\times a}&G\times X\dar{a}\\ G\times X\rar{a}&X \end{tikzcd} \] commutes up to a pointed homotopy. 
\item\label{item:grpob-ex-viii} In the theory of integrable systems one encounters the following situation: $(M,\omega)$ is a symplectic manifold of dimension $2n$ and $\rho:M\longrightarrow B$ is a fibration whose fibres are compact, connected Lagrangian submanifolds. Then there is a smooth fibrewise action of $T^*B$ on $M$. In the above language, $T^*B\longrightarrow B$ is a group in the category of smooth manifolds over $B$, and it acts on $X=(M\longrightarrow B)$. To see this latter fact, let $m\in M$, $b=\rho(m)$, and $M_b:=\rho^{-1}(b)$. The dual of the differential of $\rho$ is an injective linear map $(T_m\rho)^*:T^*_bB\longrightarrow T_m^*M$ whose image is the annihilator of $T_m(M_b)$. Since $M_b$ is Lagrangian, the musical isomorphism $\omega_m^\flat :T_m^*M\longrightarrow T_mM$ identifies this annihilator space with $T_m(M_b)$. We thus have canonical linear isomorphisms $T_b^*B\longrightarrow T_m(M_b)$ depending smoothly on $m$. Given $v\in T_b^*B$, we obtain a smooth vector field $\hat{v}$ on $M_b$. It is easy to see that these vector fields extend to a commuting family of Hamiltonian vector fields on $M$, and that a linearly independent set of elements of $T_b^*B$ yields vector fields on the fibre $M_b$ that are everywhere independent. Since $M_b$ is compact, we obtain an action of the additive group of $T_b^*B$ whose isotropy is a cocompact lattice $\Lambda_b$ \cite{gs}*{Theorem 44.1}. \end{enumerate} \end{Ex} \subsection{Isotropies at generalised points} For many applications of group actions, the notion of isotropy (or stabiliser) groups is essential. In the categorical framework, we can consider isotropy groups through $T$-valued points, by following the general philosophy of base change and specialisation: As we shall see, this allows us to consider $T$-valued points as ordinary points in the category of objects over $T$, leading to a general definition of isotropy groups. 
\begin{Cons}[grp-act-spec][base change of groups and actions] Let $G$ be a $\CategoryTypeface C$-group, $X$ a $G$-space and $T\in\Ob\CategoryTypeface C$. We assume that the finite products $T\times Y_1\times\dotsm\times Y_n$ exist in $\CategoryTypeface C$ for any choice of $Y_j=X$ or $Y_j=G$. Consider the category $\CategoryTypeface C_T$. The morphism $\id_T:T\longrightarrow T$ is a terminal object in $\CategoryTypeface C_T$. Non-empty finite products in $\CategoryTypeface C_T$, provided they exist, are fibre products $\times_T$ over $T$ in $\CategoryTypeface C$. Thus, if we denote \[ G_T\defi T\times G,\quad X_T\defi T\times X, \] then \[ (Y_1)_T\times_T\dotsm\times_T(Y_n)_T=T\times Y_1\times\dotsm\times Y_n=(Y_1\times\dotsm\times Y_n)_T \] exist as finite products in $\CategoryTypeface C_T$. So it makes sense to define on $G_T$ and $X_T$ the structure of a $\CategoryTypeface C_T$-group and a $G_T$-space, respectively. The $\CategoryTypeface C_T$-group structure \[ 1=1_{G_T}:T\longrightarrow G_T,\quad i=i_{G_T}:G_T\longrightarrow G_T,\quad m=m_{G_T}:G_T\times_TG_T\longrightarrow G_T \] on $G_T$ is defined by the equations \[ 1(t)\defi(t,1),\quad (t,g)^{-1}\defi(t,g^{-1}),\quad (t,g)(t,h)\defi(t,gh) \] for all $g,h\in_RG$ and $t\in_RT$, where we have written all morphisms in $\CategoryTypeface C$ and used the notational conventions from \thmref{Def}{grp-act}. Similarly, $X_T$ is a $G_T$-space \via \[ G_T\times_TX_T\longrightarrow X_T:(t,g)\cdot(t,x)\defi(t,g\cdot x) \] for all $g\in_RG$, $x\in_RX$, and $t\in_RT$. \end{Cons} As we have seen, groups and actions are easily defined in the full generality of categories with terminal objects. The definition of isotropy groups at the level of functors likewise presents no difficulty. We will define isotropy groups at ordinary points first, passing to the general case of $T$-valued points by base change and specialisation.
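As a consistency check for \thmref{Cons}{grp-act-spec}, note that the $\CategoryTypeface C_T$-group axioms for $G_T$ reduce pointwise to those of $G$; we spell this out for associativity. For all $g,h,k\in_RG$ and $t\in_RT$, we have
\[
\Parens1{(t,g)(t,h)}(t,k)=(t,gh)(t,k)=\Parens1{t,(gh)k}=\Parens1{t,g(hk)}=(t,g)\Parens1{(t,h)(t,k)},
\]
and similarly $1(t)(t,g)=(t,1g)=(t,g)$ and $(t,g)(t,g)^{-1}=(t,gg^{-1})=1(t)$. Analogous computations verify the axioms for the $G_T$-action on $X_T$.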
This definition will be equivalent to the one given in Ref.~\cite{mfk}*{Definition 0.4} in the case of schemes over some base scheme. \begin{Def}[isotropy][isotropy group] Let $G$ be a $\CategoryTypeface C$-group and $X$ a $G$-space. We write $X_0\defi X(*)$ and call the elements of this set the \Define{ordinary points} of $X$. Let $x\in X_0$. The \Define{isotropy at $x$} (a.k.a.~the \Define{stabiliser at $x$}) is the functor $G_x:\CategoryTypeface C\longrightarrow\Sets$ whose object map is defined by \[ G_x(R)\defi\Set1{g\in_RG}{g\cdot x=x}, \] for any $R\in\Ob\CategoryTypeface C$. In other words, $G_x$ is the fibre product defined by the following diagram in the category of set-valued functors on $\CategoryTypeface C$: \begin{center} \begin{tikzcd} G_x\dar{}\rar{}&G\dar{a_x}\\ *\rar{x}&X. \end{tikzcd} \end{center} Here, $a_x:G\longrightarrow X$ is the \Define{orbit morphism} defined by \begin{equation}\label{eq:orbit-mor-def} a_x(g)\defi g\cdot x \end{equation} for all $R\in\Ob\CategoryTypeface C$ and $g\in_RG$. The functor $G_x$ is group-valued. Indeed, let $R\in\Ob\CategoryTypeface C$. By construction, an $R$-valued point $g\in G_x(R)$ is just $g\in_RG$ such that $g\cdot x=x$. If $g,h\in G_x(R)$, then \[ (gh)\cdot x=g\cdot(h\cdot x)=g\cdot x=x, \] so $gh\in G_x(R)$. Taking this as the definition of the group law on $G_x$, we see that the canonical morphism $G_x\longrightarrow G$ preserves this operation. Since $G(R)$ is a group, so is $G_x(R)$, and this proves the assertion. In particular, if $G_x$ is representable and the finite direct products $G_x\times\dotsm\times G_x$ exist, then $G_x$ is a $\CategoryTypeface C$-group. \end{Def} Although the above definition defines the isotropy group only for ordinary points, we may use the procedure of base change from \thmref{Cons}{grp-act-spec} to give a satisfactory definition of the isotropy of an action at a $T$-valued point, as we now proceed to explain in detail. 
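Before doing so, we record two extreme instances of \thmref{Def}{isotropy} at an ordinary point $x\in X_0$: for the trivial action, every $g\in_RG$ fixes $x$, so $G_x=G$ as functors; for the action of $G$ on itself by left translation and $x=1$, we have $g\cdot 1=g$, so
\[
G_1(R)=\Set1{g\in_RG}{g\cdot 1=1}=\{1\},
\]
and $G_1$ is the trivial group, represented by the terminal object $*$. Both computations are carried out for general $T$-valued points in \thmref{Ex}{orb-ex} below.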
\begin{Cons}[iso-tpt][\protect{$T$-valued points as ordinary points}] Recall the natural bijection \begin{equation}\label{eq:comma-iso} \Hom[_\CategoryTypeface C]0{A,B}\longrightarrow\Hom[_{\CategoryTypeface C_T}]0{A,B_T}:f\longmapsto(p_A,f), \end{equation} valid for any $(p_A:A\longrightarrow T)\in\Ob\CategoryTypeface C_T$ and any $B\in\Ob\CategoryTypeface C$. This allows us to consider any morphism in $\CategoryTypeface C$ from an object over $T$ as a morphism over $T$. Applying this to $A=T=*_T$, we obtain in the notation of \thmref{Def}{isotropy} \[ (X_T)_0=\Hom[_{\CategoryTypeface C_T}]0{*_T,X_T}=\Hom[_\CategoryTypeface C]0{T,X}=X(T). \] Thus, \emph{we may consider any $T$-valued point $x$ of $X$ as an ordinary point of the base change $X_T\in\Ob\CategoryTypeface C_T$ of $X$}. This is one of the main distinguishing traits of our general point of view. Let now $G$ be a $\CategoryTypeface C$-group, $X$ a $G$-space, and $x\in_TX$. By \thmref{Cons}{grp-act-spec}, $G_T$ is a $\CategoryTypeface C_T$-group and $X_T$ is a $G_T$-space. In particular, we obtain an \Define{orbit morphism} $a_x:G_T\longrightarrow X_T$ in $\CategoryTypeface C_T$, from Equation \eqref{eq:orbit-mor-def}. It is the composite \begin{center} \begin{tikzcd}[column sep=huge] T\times G\rar{({\id}_T,x)\times{\id}_G}&T\times X\times G\rar{{\id}_T\times(a\circ\sigma)}&T\times X, \end{tikzcd} \end{center} denoting the action of $G$ on $X$ by $a$, and by $\sigma$ the exchange of factors, \ie \begin{equation}\label{eq:orbmap-def} a_x(t,g)=\Parens1{t,g\cdot x(t)},\quad\forall t\in_RT,g\in_RG. \end{equation} \end{Cons} The objects $T=*_T$, $G_T$, and $X_T$ in the category $\CategoryTypeface C_T$ are promoted to contravariant functors on $\CategoryTypeface C_T$. Similarly, $x$ and $a_x:G_T\longrightarrow X_T$ are promoted to morphisms of functors. We now pose the following definition. 
\begin{Def}[isotrop-def][isotropy functor] The \Define{isotropy functor} (a.k.a.~\Define{stabiliser functor}) $G_x\defi(G_T)_x:\CategoryTypeface C_T\longrightarrow\Sets$ is the fibre product defined by the diagram \begin{center} \begin{tikzcd} G_x\dar{}\rar{}&G_T\dar{a_x}\\ T=*_T\rar{x}&X_T \end{tikzcd} \end{center} in the category of (contravariant) set-valued functors on $\CategoryTypeface C_T$. \end{Def} \begin{Rem} This coincides with Mumford's definition \cite{mfk}*{Definition 0.4} in the case of $\CategoryTypeface C=\mathop{\mathrm{Sch}}_S$. \end{Rem} Consider now the following diagram in the category $\CategoryTypeface C$: \begin{center} \begin{tikzcd} {}&T\times G\dar{a_x}\\ T\rar{(\id_T,x)}&T\times X. \end{tikzcd} \end{center} Its limit in the functor category is the fibre product functor given on $R\in\Ob\CategoryTypeface C$ by \begin{align*} \Parens1{T\times_{T\times X}(T\times G)}(R) &=\Set3{(t_1,t_2,g)\in_R(T\times T\times G)}{\begin{aligned}t_1&=t_2\\x(t_1)&=g\cdot x(t_2)\end{aligned}}\\ &=\Set1{(t,g)\in_R(T\times G)}{g\cdot x(t)=x(t)}. \end{align*} If $R$ comes with morphisms $R\longrightarrow T$ and $R\longrightarrow T\times G$ in $\CategoryTypeface C$ completing the fibre product diagram above, then we may consider $R\in\Ob\CategoryTypeface C_T$ \via either of the $T$-projections thus obtained. The above computation then gives \[ G_x(R)=\Parens1{T\times_{T\times X}(T\times G)}(R). \] Hence, the representability of the functor $G_x=(G_T)_x$ in $\CategoryTypeface C_T$ is equivalent to the existence of this fibre product in $\CategoryTypeface C$. \begin{Ex}[isotropy-runninggag] Recall the notation from \thmref{Ex}{sgrp-ex} \eqref{item:grpob-ex-runninggag}. We will investigate the representability of the isotropy functor for different choices of points. To that end, recall the category $\ssplfg{\knums}=\ssplfg[\varpi]{\knums,\Bbbk}$ of locally finitely generated superspaces from Section \ref{sec:groupoid-quots} below and/or Ref.~\cite{ahw-sing}. 
This category is finitely complete and contains $\SMan_\knums=\SMan_{\knums,\Bbbk}$ as a full subcategory. Finite limits in $\SMan_\knums$, when they exist, are finite limits in $\ssplfg{\knums}$. Any point $p\in X_0=X(*)$ gives rise to $p_R\in X(R)$ and we obviously have $\gamma\cdot p_R=p_R$ for all $\gamma\in_RG$ and all $R\in\Ob\ssplfg{\knums}$. Thus, we find $G_p=G$ as functors, so $G_p$ is represented by the Lie supergroup $G$. By contrast, take $T={\mathbb A}^{0|1}$ with the odd coordinate $\theta$ and define $x\in_TX$ by \[ x^\sharp(y)\defi0,\quad x^\sharp(\eta)\defi\theta, \] where we might as well take any other value for $x^\sharp(y)$. That is, for any $R\in\ssplfg{\knums}$, we have \[ x(\theta)=(0,\theta),\quad\forall\theta\in_RT. \] In this case, the isotropy functor $G_x$ evaluates on any $R\in\ssplfg{T}$ as \[ G_x(R)=\Set1{(\theta,\gamma)\in_R(T\times G)}{\gamma\theta=0}. \] Therefore, $G_x$ is represented by the superspace \[ \Spec\knums[\theta,\gamma]/(\theta\gamma)=\Parens1{*,\knums[\theta,\gamma]/(\theta\gamma)}, \] where $\theta,\gamma$ are odd indeterminates. It lies over $T$ \via the morphism \[ p:G_x\longrightarrow T,\quad p^\sharp(\theta)\defi\theta. \] The group multiplication works out to be \[ m:G_x\times_T G_x\longrightarrow G_x,\quad m^\sharp(\gamma)\defi\gamma_1+\gamma_2, \] where $\gamma_i\defi p_i^\sharp(\gamma)$. Thus, $G_x$ is a group object in $\ssplfg{T}$ but not given by a Lie supergroup over $T$. \end{Ex} \begin{Def}[spec-pt][specialisation of a point] Let $\CategoryTypeface C$ be a category, $T_1,T_2,X$ be objects in $\CategoryTypeface C$. Given two points $x_1\in_{T_1}X$ and $x_2\in_{T_2}X$, we say that $x_2$ is a \Define{specialisation} of $x_1$ if for some morphism $\vphi:T_2\longrightarrow T_1$ in $\CategoryTypeface C$, the following diagram commutes: \begin{center} \begin{tikzcd} T_2\arrow{rr}{\vphi}\arrow[swap]{dr}{x_2}&&T_1\arrow{dl}{x_1}\\ &X.
\end{tikzcd} \end{center} \end{Def} \begin{Prop}[isotropy-basechange] Let $G$ be a $\CategoryTypeface C$-group and $X$ a $G$-space. Let $x_1\in_{T_1}X$ and $x_2\in_{T_2}X$ such that $x_2$ is a specialisation of $x_1$. Then there is a natural isomorphism \[ T_2\times_{T_1}G_{x_1}=G_{x_2} \] of $\Sets$-valued contravariant functors on $\CategoryTypeface C_{T_2}$. In particular, if $G_{x_1}$ is representable in $\CategoryTypeface C_{T_1}$, then $G_{x_2}$ is representable in $\CategoryTypeface C_{T_2}$ if and only if the fibre product $T_2\times_{T_1}G_{x_1}$ exists in $\CategoryTypeface C$. \end{Prop} \begin{proof} By assumption, we have $x_2=x_1\circ\vphi$ for some morphism $\vphi:T_2\longrightarrow T_1$ in $\CategoryTypeface C$. We compute for each $R\in\Ob\CategoryTypeface C$ and $(t,g)\in_RG_{T_2}$ that \[ g\cdot x_2(t)=g\cdot x_1(\vphi(t)), \] so that the map $(t,g)\longmapsto(t,\vphi(t),g)$ on $R$-valued points defines a natural bijection \[ G_{x_2}(R)\longrightarrow\Parens1{T_2\times_{T_1}G_{x_1}}(R). \] This proves the assertion. \end{proof} \begin{Def}[free-act][free $G$-spaces] Let $G$ be a $\CategoryTypeface C$-group and $X$ a $G$-space. Given a $T$-valued point $x\in_TX$, the $G$-space $X$ is called \Define[action!free at $x$]{free at $x$}\index{gspace@{$G$-space}!free at $x$} if $(G_T)_x$ is the trivial group in the category of $\Sets$-valued contravariant functors on $\CategoryTypeface C_T$. It is simply called \Define[action!free]{free}\index{gspace@{$G$-space}!free} if it is free at any $x\in_TX$, for any $T\in\Ob\CategoryTypeface C$. \end{Def} As the following corollary to \thmref{Prop}{isotropy-basechange} shows, it is equivalent to require that $X$ be free at the generic point $x=\id_X\in_XX$. \begin{Cor} Let $G$ be a $\CategoryTypeface C$-group and $X$ a $G$-space. Assume that $X$ is free at the generic point $x=\id_X\in_XX$. Then $X$ is free. 
\end{Cor} \subsection{Quotients and orbits}\label{subs:quot-orb} In this subsection, we introduce basic facts and terminology relating to quotients and orbits. Although we are mainly interested in quotients by group actions, we shall need a general statement on quotients by equivalence relations for our applications (see \thmref{Prop}{eq-quot}, which is applied in the proof of \thmref{Prop}{orb-quot}). In order to be able to treat quotients by group actions and equivalence relations on the same footing, the language of groupoids, introduced to this context by P.~Gabriel \cite{gabriel}*{\S~1}, has proved to be convenient. Moreover, applications in forthcoming work actually rely on this generality. We briefly recall the main definitions and give a number of motivating examples before going into the applications. In what follows, we let $\CategoryTypeface C$ be a category with all finite products. \begin{Def}[groupoid][groupoids] Let $X\in\Ob\CategoryTypeface C$. A \Define{$\CategoryTypeface C$-groupoid on $X$} is a $\Gamma\in\Ob\CategoryTypeface C$, together with morphisms $s,t:\Gamma\longrightarrow X$---called \Define{source} and \Define{target}---such that all finite fibre products \[ \Gamma^{(n)}\defi\Gamma\times_X\Gamma\times_X\dotsm\times_X\Gamma=\Gamma\times_{s,X,t}\Gamma\times_{s,X,t}\dotsm\times_{s,X,t}\Gamma \] exist, and morphisms \[ 1:X\longrightarrow\Gamma,\quad i:\Gamma\longrightarrow\Gamma,\quad m:\Gamma^{(2)}\longrightarrow\Gamma \] ---where the first and third are over $X\times X$ (where we consider $X$ as lying over $X\times X$ \via $\Delta_X$ and $\Gamma$ as lying over $X\times X$ \via $(t,s)$) and the second is over the flip $\sigma:X\times X\longrightarrow X\times X$---such that the following diagrams commute: \[ \begin{tikzcd} \Gamma^{(3)}\rar{m\times_X{\id}}\dar[swap]{{\id}\times_X m}&\Gamma^{(2)}\dar{m}\\ \Gamma^{(2)}\rar[swap]{m}&\Gamma \end{tikzcd} \begin{tikzcd} \Gamma\arrow[mathdouble]{rd}\rar{(1\circ t)\times_X{\id}}\dar[swap]{{\id}\times_X(1\circ
s)}&\Gamma^{(2)}\dar{m}\\ \Gamma^{(2)}\rar[swap]{m}&\Gamma \end{tikzcd} \begin{tikzcd} \Gamma\rar{s}\dar[swap]{(i,{\id})}&X\dar{1}\\ \Gamma^{(2)}\rar[swap]{m}&\Gamma \end{tikzcd} \begin{tikzcd} \Gamma\rar{({\id},i)}\dar[swap]{t}&\Gamma^{(2)}\dar{m}\\ X\rar[swap]{1}&\Gamma. \end{tikzcd} \] A morphism $\vphi:X\longrightarrow Y$ in $\CategoryTypeface C$ that coequalises $s$ and $t$, \ie \[ \vphi\circ s=\vphi\circ t:\Gamma\longrightarrow Y \] will be called \Define{$\Gamma$-invariant}. A \Define{subgroupoid} of $\Gamma$ is a monomorphism $j:\Gamma'\longrightarrow\Gamma$ with the induced source and target morphisms, such that $1$, $i\circ j$, and $m\circ(j\times_Xj)$ factor through $j$. \end{Def} \begin{Ex}[ex-groupoid] We will need the following three simple examples of groupoids. \begin{enumerate}[wide] \item Let $G$ be a $\CategoryTypeface C$-group and $X$ be a $G$-space with action morphism $a$. Then $\Gamma\defi G\times X$ is a $\CategoryTypeface C$-groupoid over $X$, called the \Define{action groupoid} of $a$. Its structural morphisms are \[ s\defi p_2:\Gamma\longrightarrow X,\quad t\defi a:\Gamma\longrightarrow X,\quad 1\defi(1_G,{\id}_X):X\longrightarrow\Gamma, \] as well as the inversion $i$ and multiplication $m$ defined by \[ i(g,x)\defi(g^{-1},g\cdot x),\quad m(g_1,x,g_2)\defi(g_1g_2,x),\quad\forall g_1,g_2\in_TG,x\in_TX, \] respectively. Here, we identify $\Gamma^{(2)}=G\times X\times G$ \via the morphism induced by ${\id}_\Gamma\times p_1:\Gamma\times\Gamma\longrightarrow G\times X\times G$. \item Let $X\in\Ob\CategoryTypeface C$. Then $\Gamma\defi X\times X$ is a $\CategoryTypeface C$-groupoid over $X$, called the \Define{pair groupoid} of $X$. Its structural morphisms are \[ s\defi p_1,t\defi p_2:\Gamma\longrightarrow X,\quad 1\defi\Delta_X:X\longrightarrow\Gamma, \] as well as the inversion $i$ and multiplication $m$ defined by \[ i(x,y)\defi(y,x),\quad m(x,y,z)\defi(x,z),\quad\forall x,y,z\in_TX, \] respectively.
Here, we identify $\Gamma^{(2)}=X\times X\times X$ \via the morphism induced by ${\id}_\Gamma\times p_2:\Gamma\times\Gamma\longrightarrow X\times X\times X$. \item\label{item:ex-groupoid-iii} Let $X\in\Ob\CategoryTypeface C$. By definition, an \Define{equivalence relation} on $X$ is a subgroupoid $R$ of the pair groupoid of $X$. This definition, in the categorical context, seems to be due to P.~Gabriel \cite{gabriel}*{\S 3 e)}. Almorox \cite{almorox}*{Definition 2.1} was the first to adapt this definition to the case of supermanifolds. \end{enumerate} \end{Ex} We now recall the notion of categorical quotients \cite{mfk}*{Definition 0.5}. Although Mumford does not use the language of groupoids introduced above, the definition immediately extends to this case. \begin{Def}[quot-def][categorical quotients] Let $X\in\Ob\CategoryTypeface C$ and $\Gamma$ be a $\CategoryTypeface C$-groupoid on $X$. A morphism $\pi:X\longrightarrow Q$ is called a \Define{categorical quotient} of $X$ by $\Gamma$ if it is universal among $\Gamma$-invariant morphisms. That is, the morphism $\pi$ is $\Gamma$-invariant, and for any $\Gamma$-invariant morphism $f:X\longrightarrow Y$, where $Y\in\Ob\CategoryTypeface C$, there is a unique morphism $\tilde f:Q\longrightarrow Y$ such that the following diagram commutes: \[ \begin{tikzcd} X\arrow{rd}[swap]{f}\rar{\pi}&Q\arrow[dashed]{d}{\tilde f}\\ &Y. \end{tikzcd} \] By abuse of notation, we also say that $Q$ is a categorical quotient (of $X$ by $\Gamma$). We say that $\pi:X\longrightarrow Q$ is a \Define{universal categorical quotient} if for all morphisms $Q'\longrightarrow Q$, the fibre products $X'\defi Q'\times_QX$ and $\Gamma'\defi (Q'\times Q')\times_{Q\times Q}\Gamma$ exist, and $\pi'\defi Q'\times_Q\pi:X'\longrightarrow Q'$ is a categorical quotient of $X'$ by $\Gamma'$. We use the notation $X/\Gamma$ for categorical quotients.
In case $\Gamma$ is the action groupoid for the left (respectively, right) action of a $\CategoryTypeface C$-group $G$, we write $G\backslash X$ (respectively, $X/G$) for the categorical quotient (if it exists). \end{Def} We now apply these notions to pointed spaces, to arrive at a definition of orbits. At this point, we have to depart from Mumford's definitions \cite{mfk}*{Definition 0.4}, since the notion of scheme-theoretic image does not apply to the setting of $\SheafTypeface C^\infty$ differentiable supermanifolds that we are primarily interested in. For any category $\CategoryTypeface C$ with a terminal object $*$, we define the category $\CategoryTypeface C^*$ of \Define{pointed spaces} to be the category of objects and morphisms under $*$. We denote the objects $*\longrightarrow X$ in this category by $(X,x)$. \begin{Def}[orb-def][categorical orbits] Let $G$ be a $\CategoryTypeface C$-group and $X$ be a $G$-space. Let $x\in_TX$, where $T\in\Ob\CategoryTypeface C$ is arbitrary. Assume that $G_x$ is representable in $\CategoryTypeface C_T$. Being a group object in that category, it is naturally pointed by the unit. Since the unit acts trivially, we have a right $G_x$-action on $G_T$ in $(\CategoryTypeface C_T)^*$. If it exists, the categorical quotient $\pi_x:G_T\longrightarrow G_T/G_x$ in $(\CategoryTypeface C_T)^*$ is called the \Define{categorical orbit} of $G$ through $x$, and denoted by $\pi_x:G_T\longrightarrow G\cdot x$. If the quotient is universal categorical, then we say that the orbit is \Define{universal categorical}. The space $X_T$ is pointed by \[ x_T\defi({\id}_T,x):T\longrightarrow X_T, \] and by definition, $G_x$ acts trivially on $x_T$, so if the categorical orbit exists, there is a unique pointed morphism $\tilde a_x:G\cdot x\longrightarrow X_T$ over $T$ such that $\tilde a_x\circ\pi_x=a_x$. In order to avoid cluttering our terminology, we also refer to $\tilde a_x$ as the \Define{orbit morphism} of $x$. 
Also, by definition, the categorical orbit $G\cdot x$ is pointed in $\CategoryTypeface C_T$, so that it comes with a section $T\longrightarrow G\cdot x$ whose composite with $\tilde a_x$ is $x$. We call this section \Define{canonical} and will usually also denote it by $x$. \end{Def} We now spell out in detail what the above definition of an orbit through a $T$-valued point amounts to. Let $G$ be a $\CategoryTypeface C$-group, $X$ a $G$-space in $\CategoryTypeface C$, $T\in\Ob\CategoryTypeface C$, and $x\in_TX$. Assume that $G_x$ is representable in $\CategoryTypeface C_T$. As we have seen above, this means that the fibre product \[ G_x=T\times_{T\times X}(T\times G) \] exists in $\CategoryTypeface C$. So we have in $\CategoryTypeface C$ a fibre product diagram \[ \begin{tikzcd} G_x\dar{}\rar{}&T\times G\dar{a_x}\\ T\rar{(\id_T,x)}&T\times X. \end{tikzcd} \] Recall that we are working under the assumption that finite products exist in $\CategoryTypeface C$. Then $G\cdot x$, provided it exists in $(\CategoryTypeface C_T)^*$, is characterised as follows: For every $G_x$-invariant morphism $f$, which fits into a commutative diagram as depicted on the left-hand side of the display below, there is a unique morphism $\tilde f$ completing the right-hand diagram commutatively: \[ \begin{tikzcd}[column sep=scriptsize,row sep=tiny] &T\arrow[mathdouble]{dd}\arrow[swap]{dl}{({\id}_T,1)}\arrow{dr}{y}&&&T\arrow[mathdouble]{dd}\arrow[swap]{dl}\arrow{dr}{y}\\ T\times G\arrow[crossing over,near start]{rr}{f}\arrow[swap]{dr}&&Y\arrow{dl}{p_Y} &G\cdot x\arrow[dotted,crossing over,near start]{rr}{\exists!\tilde f}\arrow{dr}&&Y\arrow{dl}{p_Y}\\ &T&&&T\\ \end{tikzcd} \] In other words, for any such $Y$, the set of pointed morphisms $G\cdot x\longrightarrow Y$ in $\CategoryTypeface C_T$ is in natural bijection to the set of morphisms $f:G_T\longrightarrow Y$, which satisfy the conditions: \[ \left\{ \begin{aligned} f(t,1)&=y(t),\\ p_Y(f(t,g))&=t,\\ h\cdot x(t)&=x(t)\ \Longrightarrow\ 
f(t,gh)=f(t,g) \end{aligned}\right. \] for all $R\in\Ob\CategoryTypeface C$, $g,h\in_RG$, and $t\in_RT$. Here, we recall that the equation $h\cdot x(t)=x(t)$ characterises the $R$-valued points $(t,h)$ of $G_x$. \medskip Universal categorical orbits carry a natural action. \begin{Prop}[orbit-action] Let $G$ be a $\CategoryTypeface C$-group, and $(X,x)$ a pointed $G$-space in $\CategoryTypeface C$. If the $G$-orbit $G\cdot x$ exists and is universal categorical, then the morphism \[ \pi_x\circ m:G\times G\longrightarrow G\cdot x \] induces an action of $G$ on $G\cdot x$. It is the unique action of $G$ on $G\cdot x$ for which $\pi_x:G\longrightarrow G\cdot x$ is $G$-equivariant. Moreover, the canonical point $x:*\longrightarrow G\cdot x$ of $G\cdot x$ is invariant under the action of $G_x$. \end{Prop} \begin{proof} By assumption, $G\cdot x$ is universal categorical, so the base change \[ {\id}\times\pi_x:G\times G\longrightarrow G\times(G\cdot x) \] along the projection $G\times(G\cdot x)\longrightarrow G\cdot x$ is a categorical quotient in $\CategoryTypeface C$, for the groupoid \[ \Gamma'\defi G\times\Gamma=G\times G\times G_x \] derived from $\Gamma=G\times G_x$. In particular, ${\id}\times\pi_x$ is an epimorphism. Applying the base change for a further copy of $G$, we see that so is ${\id}\times{\id}\times\pi_x$. Consider the multiplication $m$ of $G$. We have \[ \pi_x(m(g_1,g_2h))=\pi_x(g_1g_2h)=\pi_x(g_1g_2)=\pi_x(m(g_1,g_2)) \] for all $R\in\Ob\CategoryTypeface C$, $g_1,g_2\in_R G$, and $h\in_R G_x$. It follows that \[ (p_1,\pi_x\circ m):G\times G\longrightarrow G\times(G\cdot x) \] is $\Gamma'$-invariant, and hence, there is a unique morphism \[ a_{G\cdot x}:G\times(G\cdot x)\longrightarrow G\cdot x \] such that $a_{G\cdot x}\circ({\id}\times\pi_x)=\pi_x\circ m$. In particular, $\pi_x$ will be $G$-equivariant and $a_{G\cdot x}$ uniquely determined by this requirement as soon as we have established that it indeed is an action.
To do so, we compute \begin{align*} a_{G\cdot x}\circ({\id}\times a_{G\cdot x})\circ({\id}\times{\id}\times\pi_x)&=a_{G\cdot x}\circ({\id}\times(\pi_x\circ m))\\ &=\pi_x\circ m\circ({\id}\times m)\\ &=\pi_x\circ m\circ(m\times{\id})\\ &=a_{G\cdot x}\circ(m\times\pi_x)\\ &=a_{G\cdot x}\circ(m\times{\id})\circ({\id}\times{\id}\times\pi_x), \end{align*} which shows that \[ a_{G\cdot x}\circ({\id}\times a_{G\cdot x})=a_{G\cdot x}\circ(m\times{\id}), \] since ${\id}\times{\id}\times\pi_x$ is an epimorphism. Similarly, one has \[ a_{G\cdot x}\circ(1\times{\id})={\id}_{G\cdot x}. \] Hence, $a_{G\cdot x}$ is an action for which $\pi_x$ is $G$-equivariant. We will denote it by $\cdot$, as for any action. Finally, we verify the claim that $x$ is $G_x$-fixed. By construction, $\pi_x$ is pointed, so that $\pi_x(1)=x$. For $h\in_RG_x$, we compute, by use of the left $G$-equivariance and right $G_x$-invariance of $\pi_x$, that \[ h\cdot x=h\cdot\pi_x(1)=\pi_x(h\cdot 1)=\pi_x(h)=\pi_x(1)=x. \] This completes the proof of the proposition. \end{proof} \begin{Ex}[orb-ex][examples of orbits] Returning to the groups and actions from \thmref{Ex}{grpob-ex}, we explain the notion of isotropy groups and orbits in these cases. In items \eqref{item:orb-ex-i} and \eqref{item:orb-ex-ii} below, let $\CategoryTypeface C$ denote a category such that all finite products exist. \begin{enumerate}[wide] \item\label{item:orb-ex-i} Let $G$ be a $\CategoryTypeface C$-group acting trivially on $X\in \Ob\CategoryTypeface C$. Then for all $x:T\longrightarrow X$ and $R\in\Ob\CategoryTypeface C$, we have $G_x(R)=G_T(R)$. Thus, the isotropy functor $G_x$ is represented by $G_T=T\times G$. Here, the morphism $\pi_x=p_1:G_T\to T$ is a universal categorical orbit, as can be seen as follows: $\pi_x$ is invariant with respect to the action groupoid $\Gamma$ coming from the right $G_T$-action on $G_T$. 
Given any $\Gamma$-invariant morphism $f:G_T\to Y$ with $Y$ over $T$, it uniquely factors over $\pi_x$ to $\tilde{f}=f\circ({\id}_T\times 1_G)$. Furthermore, given $Q\to T$, the fibre product $Q\times_TG_T=G_Q$ exists. Moreover, $Q\times_T\pi_x={\id}_Q\times p_1:G_Q\to Q=Q\times_TT$ is a categorical quotient by the above, since $(Q\times Q)\times_{T\times T}\Gamma$ is the action groupoid for the right $G_Q$-action on $G_Q$. \item\label{item:orb-ex-ii} Assume given a $\CategoryTypeface C$-group $G$, viewed as a left $G$-space \via left multiplication. For $T\in\Ob\CategoryTypeface C$ and $x\in_TG$, we have \[ G_x(R)=\Set1{(t,g)\in_RG_T}{g\cdot x(t)=x(t)} =\Set1{(t,1_G(t))}{t\in_RT}\cong T(R). \] Thus, $G_x$ is represented by $T$. Defining $\pi_x$ by ${\id}_{G_T}:G_T \longrightarrow Q\defi G_T$, we obtain for any $Y$ and any $G_x$-invariant $f:G_T\to Y$ a unique factorisation $\tilde{f}\defi f$. Thus, $\pi_x:G_T\longrightarrow Q$ is the categorical quotient of $G_T$ with respect to the $G_x$-action. In other words, it is the categorical orbit of $G$ through $x$. Furthermore, given $Q'\in\Ob\CategoryTypeface C$ and $Q'\longrightarrow Q$, we have $Q'\times_QG_T=Q'$ and $Q'\times_Q\Gamma=Q'\times_QG_T=Q'$. The projections ${\id}_{Q'}\times_Qs$ and ${\id}_{Q'}\times_Qt$ are the identity of $Q'$, so that $Q'\times_Q\pi_x={\id}_{Q'}$ is the categorical quotient of $Q'$ (the space) by $Q'$ (the groupoid). It follows that $\pi_x:G_T\longrightarrow G_T$ is a universal categorical orbit. \item Let a continuous or smooth action $a:G\times X\longrightarrow X$, respectively, of a topological group or Lie group on a topological space or a smooth manifold be given. The isotropies at $x\in X_0=X(*)=X$ are represented by the obvious set-theoretic isotropy groups, endowed with the subspace topology coming from the inclusion into $G$. Since these isotropies are closed, they are notably Lie subgroups in the smooth case.
Both in the topological and the smooth case, a categorical orbit through such an $x$ is represented by the set of right cosets with respect to the isotropy group $G_x$, with its canonical structure of topological space or smooth manifold, respectively. For the rest of this example, let us focus on the topological case. Then we can consider arbitrary continuous maps $x:T\longrightarrow X$, defined on some topological space $T$ and observe that \[ G_x=\Set1{(t,g)\in G_T}{g\in G_{x(t)}} \] with the subspace topology from $T\times G$. We may define an equivalence relation $\sim$ on $G_T$ by \[ (t,g)\sim(t',g')\ :\Longleftrightarrow\ t=t',g\cdot x(t)=g'\cdot x(t). \] The quotient space $Q\defi G_T/\sim$ with the canonical map $\pi_x:G_T\longrightarrow Q$ satisfies the universal property of the categorical orbit of $G$ through $x$. If $\pi_x$ is an open map, then it is already a universal categorical orbit. Indeed, in this case, for any $Q'\longrightarrow Q$, the projection $p_1:Q'\times_QG_T\longrightarrow Q'$ is open and in particular a quotient map. The map $\pi_x$ is open in case $T=*$, which is the situation studied classically. In general, however, this fails to be true, as one may see in the following example: Let $G\defi(\reals,+)$, $T\defi\reals$, and $X\defi\reals^2$. Define the action by \[ g\cdot(t,s)\defi(t,tg+s) \] and set $x:T\longrightarrow X,x(t)\defi(t,0)$. Then $G_x=(0\times\reals)\cup(\reals^\times\times0)$ and the projection $G_T\longrightarrow G_T/G_x$ is not open: the saturation of an open set $U\subseteq G_T$ containing $(0,0)$ is $(0\times\reals)\cup U$, which fails to be open whenever $U$ is bounded. The smooth case is more subtle, since in general, the isotropy $G_x$ might not exist as a smooth manifold over $T$. In Section \ref{sec:super-quot}, we study these questions for the category of supermanifolds. \emph{A fortiori}, these apply to ordinary manifolds.
\item The existence question for isotropies and orbits in the homotopy category of pointed topological spaces leads immediately to subtle questions concerning homotopy pullbacks and homotopy orbits. We do not dwell on these matters here. \item\label{item:orb-ex-viii} From the description of the action of $T^*B$ on $M$ in \thmref{Ex}{grpob-ex} \eqref{item:grpob-ex-viii}, it follows immediately that for any $b\in B$, the action of $T^*_bB$ on the fibre $M_b$ is transitive and the orbits are $n$-dimensional real tori. Furthermore, the isotropy is a cocompact lattice $\Lambda_b$ in $T^*_bB$, depending smoothly on $b$, \cf \cite{gs}*{Theorem 44.1}. The union of the $\Lambda_b$ is the total space of a smooth $\ints^n$-principal subbundle $\Lambda$ of $T^*B\longrightarrow B$. The traditional description underlines the ensuing action-angle coordinates: Action for the base directions of $B$, angle for the fibre directions (compare the detailed analysis of Duistermaat \cite{d-glo}). In the terminology introduced above, we find that the isotropy of the generic point $x={\id}_X:T=X\to X$ is the subgroup $G_x=M\times_B\Lambda$ of $G_T=M\times_BT^*B$. By our results below (\thmref{Th}{trans-iso} and \thmref{Th}{orbit}), the orbit \[ G\cdot x=G_T/G_x=(M\times_BT^*B)/(M\times_B\Lambda) \] exists as a universal categorical quotient in the category of manifolds over $M$. Moreover (\loccit), it coincides with the image of the orbit morphism $a_x$, which is a surjective submersion. Hence, we have $G\cdot x\cong M\times_BM$ as manifolds over $M$. \end{enumerate} \end{Ex} \section{Groupoid quotients of superspaces}\label{sec:groupoid-quots} We now apply the general setup of Section \ref{sec:cats} to the categories of locally finitely generated superspaces and of relative supermanifolds constructed in Ref.~\cite{ahw-sing}. We will start by recalling some basic definitions, referring to this paper for more details. We fix a field $\knums\in\{\reals,\cplxs\}$.
The category $\SSp_\knums$ has as objects pairs $X=(X_0,\SheafTypeface O_X)$ where $X_0$ is a topological space and $\SheafTypeface O_X$ is a sheaf of $\knums$-superalgebras with local stalks. Such objects are called \Define{$\knums$-superspaces}. Morphisms $\vphi:X\longrightarrow Y$ are again pairs $(\vphi_0,\vphi^\sharp)$ where this time, $\vphi_0:X_0\longrightarrow Y_0$ is a continuous map and $\vphi^\sharp:\SheafTypeface O_Y\longrightarrow(\vphi_0)_*\SheafTypeface O_X$ is a local morphism of $\knums$-superalgebra sheaves. If $S$ is a fixed $\knums$-superspace, the category of objects and morphisms in $\SSp_\knums$ over $S$ will be denoted by $\SSp_S$. Objects are denoted by $X/S$ and morphisms by $\vphi:X/S\longrightarrow Y/S$. Now we fix a subfield $\Bbbk$ of $\knums$ containing $\reals$ and a `differentiability' class $\varpi\in\{\infty,\omega\}$. Here, $\infty$ means `smooth' and $\omega$ means `analytic' (over $\Bbbk$). We consider model spaces adapted to these data. Namely, let a finite-dimensional super-vector space $V=V_\ev\oplus V_\odd$ over $\Bbbk$ be given, together with a compatible $\knums$-structure on $V_\odd$. Then we may consider on the topological space $V_\ev$ the sheaf $\SheafTypeface C_{V_\ev}^\varpi$ of $\knums$-valued functions of differentiability class $\varpi$. We set \[ {\mathbb A}(V)\defi\Parens1{V_\ev,\SheafTypeface C^\varpi_{V_\ev}\otimes_\knums\textstyle\bigwedge(V_\odd)^*} \] and call this the \Define{affine superspace} associated with $V$. It depends on the data of $(\knums,\Bbbk,\varpi)$, but we will usually omit them from the notation. By definition, a \Define{supermanifold over $(\knums,\Bbbk)$ of class $\SheafTypeface C^\varpi$} is a Hausdorff $\knums$-superspace $X$ which admits a cover by open subspaces which are isomorphic to open subspaces of affine superspaces. We will usually just say that $X$ is a \Define{supermanifold}. The full subcategory of $\SSp_\knums$ consisting of these objects will be denoted by $\SMan_\knums$.
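To fix ideas, the smallest non-trivial instance of this construction can be spelled out explicitly. The following display is a supplementary illustration; the choice $\dim V_\ev=\dim V_\odd=1$ and the generator name $\theta$ are ours, not taken from the text above.

```latex
% Assume dim V_ev = dim V_odd = 1, and let theta denote the odd
% generator of (V_odd)^*, so that the exterior algebra is
% two-dimensional over k:
\[
  \textstyle\bigwedge(V_\odd)^*=\knums\oplus\knums\theta,\qquad
  \SheafTypeface O_{{\mathbb A}(V)}(U)
  =\Set1{f_0+f_1\theta}{f_0,f_1\in\SheafTypeface C^\varpi_{V_\ev}(U)}
\]
% for U open in V_ev. Here f_0 + f_1*theta is even iff f_1 = 0 and odd
% iff f_0 = 0, and theta^2 = 0; in particular, each stalk is a local
% superalgebra, as required of a k-superspace.
```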
In the literature, the case $\knums=\Bbbk=\reals$ corresponds to (smooth or real-analytic) real supermanifolds \citelist{\cite{leites}*{3.1.2} \cite{ccf}*{Definitions 3.2.1 and 4.2.1}}, and the case $\knums=\Bbbk=\cplxs$ corresponds to (holomorphic) supermanifolds \citelist{\cite{manin}*{Chapter 4, \S 1, Definition 1} \cite{ccf}*{Definition 4.8.1}}. In the case of $\knums=\cplxs$ and $\Bbbk=\reals$, supermanifolds are also known as `\emph{cs} manifolds' \cite{deligne-morgan}*{\S 4.8}. We take this opportunity to replace this unfortunate terminology with a hopefully less confusing one. In Ref.~\cite{ahw-sing}, we construct a full subcategory $\ssplfg{\knums}=\ssplfg[\varpi]{\knums,\Bbbk}$ of $\SSp_\knums$ that admits finite fibre products and contains $\SMan_\knums$ as a subcategory closed under finite products. Here, `lfg' stands for `locally finitely generated'. For any $S\in\Ob\ssplfg{\knums}$, the category of objects and morphisms over $S$ in $\ssplfg{\knums}$ will be denoted by $\ssplfg{S}$. Given any super-vector space $V$ as above, we define ${\mathbb A}_S(V)\defi S\times{\mathbb A}(V)$ (where the product is taken in $\ssplfg{\knums}$). Using these as model spaces, we arrive at a definition of \Define{supermanifolds over $S$}, compare \opcit{} We denote the corresponding full subcategory of $\ssplfg{S}$ by $\SMan_S$. Note that this now makes sense for a wide class of \Define{singular} base spaces $S$ and, moreover, that, contrary to the setting of schemes, it would not be appropriate to instead consider products in $\SSp_S$, as already the embedding $\SMan_\knums\longrightarrow\SSp_\knums$ does not preserve products. For this reason, the use of the intermediate category $\ssplfg{\knums}$ is essential. \subsection{Geometric versus categorical quotients} In what follows, fix $S\in\ssplfg{\knums}$, and let $\CategoryTypeface C$ be a full subcategory of $\ssplfg{S}$ admitting finite products. 
Particular cases are $\CategoryTypeface C=\ssplfg{S}$ and $\CategoryTypeface C=\SMan_S$, by \cite{ahw-sing}*{Corollaries 5.27, 5.42}. Furthermore, let $X\in\Ob\CategoryTypeface C$ and $\Gamma$ be a groupoid over $X$ in $\CategoryTypeface C$. \begin{Prop}[wgeom-cat] The coequaliser $\pi:X\longrightarrow Q$ of $s,t:\Gamma\longrightarrow X$ exists in $\SSp_S$ and is regular in the sense of \cite{ahw-sing}*{Definition 4.12}. If $Q\in\Ob\CategoryTypeface C$, then $Q$ is the categorical quotient of $X$ by $\Gamma$. \end{Prop} \begin{proof} The existence and regularity of $Q$ are immediate from \cite{ahw-sing}*{Propositions 2.17, 5.5}. By definition, the morphism $\pi:X\longrightarrow Q$ is a coequaliser in $\SSp_S$. But $\CategoryTypeface C$ is a full subcategory of $\SSp_S$, since $\ssplfg{S}$ is full in the latter; hence, if $Q\in\Ob\CategoryTypeface C$, then $Q$ is also the coequaliser of $s,t$ in $\CategoryTypeface C$, and thus has the properties required by \thmref{Def}{quot-def}. \end{proof} \begin{Rem}[colimit-explicit] We can describe the colimit $Q$ of $s,t:\Gamma\longrightarrow X$ explicitly. Indeed, by \cite{ahw-sing}*{Remark 2.18}, $\SheafTypeface O_Q$ is the equaliser in the category $\Sh0{Q_0}$ of sheaves on $Q_0$, defined by the diagram \[ \begin{tikzcd} \SheafTypeface O_Q\rar{\pi^\sharp}& \pi_{0*}\SheafTypeface O_X\arrow[shift up=0.5ex]{r}{s^\sharp}\arrow[shift up=-0.5ex]{r}[swap]{t^\sharp}& (\pi_0\circ s_0)_*\SheafTypeface O_\Gamma. \end{tikzcd} \] Moreover, since the embedding of $\SSp_S$ in $\SSp$ preserves colimits, one may see easily that $Q_0$ is the coequaliser of $s_0,t_0:\Gamma_0\longrightarrow X_0$, \ie the topological quotient space of $X_0$ by the equivalence relation generated by $s_0(\gamma)\sim t_0(\gamma)$. \end{Rem} \begin{Ex}[orbit-runninggag] Recall the action from \thmref{Ex}{sgrp-ex} \eqref{item:grpob-ex-runninggag} and the $T$-valued point $x$ from \thmref{Ex}{isotropy-runninggag}.
Recall that the isotropy supergroup $G_x$ is in this case representable by the group object \[ G_x=\Spec\knums[\theta,\gamma]/(\theta\gamma),\quad p^\sharp(\theta)=\theta,\quad m^\sharp(\gamma)=\gamma_1+\gamma_2,\quad 1^\sharp(\gamma)=0 \] in $\ssplfg{T}$, where $\theta,\gamma$ are odd indeterminates. In particular, it lies in $(\ssplfg{T})^*$. Let $\eps$ be an even indeterminate and define \[ Q\defi\Spec\knums[\eps|\theta]/(\eps^2,\theta\eps). \] We then have morphisms \[ p_Q:Q\longrightarrow T,\quad p_Q^\sharp(\theta)\defi\theta,\quad q:T\longrightarrow Q,\quad q^\sharp(\eps)\defi0,\quad q^\sharp(\theta)\defi\theta. \] The morphism \[ \pi_x:G_T\longrightarrow Q,\quad\pi_x^\sharp(\theta)\defi\theta,\quad\pi_x^\sharp(\eps)\defi\theta\gamma \] is in the category $(\ssplfg{T})^*$. We claim that $\pi_x:G_T\longrightarrow Q$ is the categorical orbit of $G$ through $x$. To establish this claim, let $b:G_T\times_TG_x\longrightarrow G_T$ denote the action by right multiplication of the isotropy, \ie $b^\sharp(\gamma)=\gamma_1+\gamma_2$. We compute \[ (\pi_x\circ b)^\sharp(\eps)=b^\sharp(\theta\gamma)=\theta(\gamma_1+\gamma_2)=\theta\gamma_1=p_1^\sharp(\theta\gamma)=(\pi_x\circ p_1)^\sharp(\eps) \] so $\pi_x$ is indeed $G_x$-invariant. If $f$ is a function on $G_T$, then \[ f=f_0+f_\theta\theta+f_\gamma\gamma+f_{\theta\gamma}\theta\gamma \] where $f_\alpha\in\knums$ for $\alpha=0,\theta,\gamma,\theta\gamma$. Then \[ b^\sharp(f)-p_1^\sharp(f)=f_\gamma\gamma_2, \] so $f$ is $G_x$-invariant if and only if $f_\gamma=0$. In this case, \[ f=\pi_x^\sharp(\tilde f),\quad\tilde f=f_0+f_\theta\theta+f_{\theta\gamma}\eps, \] and $\tilde f$ is unique with this property. It is easy to conclude that $\pi_x:G_T\longrightarrow Q$ is the categorical quotient of $G_T$ by $G_x$, and thus the claim follows. Notice that $G\cdot x=Q$ is not a supermanifold over $T$.
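This last point can be checked directly. The computation below is a supplementary sketch; it assumes the standard convention that dividing the structure algebra of a supermanifold by the ideal generated by its odd elements leaves a reduced algebra of $\SheafTypeface C^\varpi$-functions.

```latex
% Divide the structure algebra of Q by the ideal generated by the odd
% coordinate theta:
\[
  \knums[\eps|\theta]/(\eps^2,\theta\eps,\theta)\;\cong\;\knums[\eps]/(\eps^2),
\]
% which still contains the even nilpotent eps. A supermanifold over T
% is locally of the form A_T(V), for which the analogous quotient is an
% algebra of C^varpi-functions and hence reduced; so Q cannot lie in
% SMan_T.
```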
\end{Ex} \begin{Def}[wgeom-quot][weakly geometric quotients] The coequaliser $\pi:X\longrightarrow Q$ of $s,t:\Gamma\longrightarrow X$ is called a \Define{weakly geometric quotient} of $X$ by $\Gamma$ if $Q\in\Ob\CategoryTypeface C$. We say that $\pi:X\longrightarrow Q$ is a \Define{universal weakly geometric quotient} if for all morphisms $Q'\longrightarrow Q$, the fibre products $X'\defi Q'\times_QX$ and $\Gamma'\defi (Q'\times Q')\times_{Q\times Q}\Gamma$ exist in $\CategoryTypeface C$, and $\pi'\defi Q'\times_Q\pi:X'\longrightarrow Q'$ is the weakly geometric quotient of $X'$ by $\Gamma'$. \end{Def} \begin{Rem} The terminology is justified as follows: If $G$ is a group scheme acting on a scheme $X$, then a morphism $\pi:X\longrightarrow Q$ is called a \Define{geometric quotient} of $X$ by $G$ if it is the coequaliser of $p_2,a:G\times X\longrightarrow X$ in the category of locally ringed spaces, and in addition, the scheme-theoretic image of $(p_2,a):G\times X\longrightarrow X\times X$ is $X\times_QX$, see \cite{mfk}*{Definition 0.6}. \end{Rem} In terms of the above terminology, we may rephrase \thmref{Prop}{wgeom-cat} as follows. The result is a generalisation of \cite{mfk}*{Proposition 0.1}. \begin{Cor}[wgeom-quot-is-cat] Let the (universal) weakly geometric quotient $Q$ of $X$ by $\Gamma$ exist in $\CategoryTypeface C$. Then $Q$ is the (universal) categorical quotient of $X$ by $\Gamma$ in $\CategoryTypeface C$. \end{Cor} \section{Existence of superorbits}\label{sec:super-quot} In this section, we will derive general sufficient conditions for the existence of isotropies at and orbits through generalised points in the category $\SMan_S$ of supermanifolds over $S$, where in what follows, $S$ will denote some object of $\ssplfg{\knums}$. The material is organised as follows: In Subsection \ref{subs:const-rank}, we discuss at length the notion of morphisms of constant rank basic for our considerations. 
In particular, we characterise precisely when the orbit morphism of a generalised point is locally of constant rank. Subsequently, in Subsection \ref{subs:isotropy}, we study the isotropy of a supergroup action at a generalised point. This leads, in Subsection \ref{subs:orbits}, to a characterisation of the existence of orbits through generalised points. \subsection{\texorpdfstring{Tangent sheaves of supermanifolds over $S$}{Tangent sheaves of supermanifolds over S}} We briefly collect some definitions and facts concerning tangent sheaves. These are entirely classical if $S$ is itself a supermanifold. \begin{Def}[tan-sh][tangent sheaf] Let $p_X:X\longrightarrow S$ and $p_Y:Y\longrightarrow S$ be superspaces over $S$ and $\vphi:X/S\longrightarrow Y/S$ a morphism over $S$. Let $U\subseteq X_0$ be open. A $\smash{p_{X,0}^{-1}\SheafTypeface O_S}$-linear sheaf map \[ v:\vphi_0^{-1}\SheafTypeface O_Y|_U\longrightarrow\SheafTypeface O_X|_U \] will be called a \Define{vector field along $\vphi$ over $S$} (defined on $U$) if $v=v_\ev+v_\odd$ where \[ v_i(fg)=v_i(f)\vphi^\sharp(g)+(-1)^{i\Abs0f}\vphi^\sharp(f)v_i(g) \] for all $i=\ev,\odd$ and all homogeneous local sections $f,g$ of $\vphi_0^{-1}\SheafTypeface O_Y|_U$. The sheaf on $X_0$ whose local sections over $U$ are the vector fields along $\vphi$ over $S$ defined on $U$ will be denoted by $\SheafTypeface T_{X/S\to Y/S}$ or $\SheafTypeface T_{\vphi:X/S\to Y/S}$ if we wish to emphasize $\vphi$. It is an $\SheafTypeface O_X$-module, and will be called the \Define{tangent sheaf along $\vphi$ over $S$}. In particular, we define $\SheafTypeface T_{X/S}\defi\SheafTypeface T_{{\id}_X:X/S\to X/S}$ and $\SheafTypeface T_X\defi\SheafTypeface T_{X/*}$, the \Define{tangent sheaf of $X$ over $S$} and the \Define{tangent sheaf of $X$}, respectively. \end{Def} Let $\tau$ be an even and $\theta$ an odd indeterminate.
Whenever $X$ is a $\knums$-superspace, we define \[ X[\tau|\theta]\defi\Parens1{X_0,\SheafTypeface O_X[\tau|\theta]/(\tau^2,\tau\theta)}. \] There is a natural morphism $(\cdot)|_{\tau=\theta=0}:X\longrightarrow X[\tau|\theta]$ whose underlying map is the identity and whose pullback map sends $\tau$ and $\theta$ to zero. \begin{Lem}[der-mor][superderivations and super-dual numbers] Let $X/S$ and $Y/S$ be superspaces over $S$ and $\vphi:X/S\longrightarrow Y/S$ be a morphism over $S$. There is a natural bijection \[ \Set1{\phi\in\Hom[_S]1{X[\tau|\theta],Y}}{\phi|_{\tau=\theta=0}=\vphi}\longrightarrow\Gamma(\SheafTypeface T_{X/S\longrightarrow Y/S}):\phi\longmapsto v \] given by the equation \begin{equation}\label{eq:der-mor} \phi^\sharp(f)\equiv\vphi^\sharp(f)+\tau v_\ev(f)+\theta v_\odd(f)\pod{\tau^2,\tau\theta} \end{equation} for all local sections $f$ of $\SheafTypeface O_Y$. Symbolically, we write \[ v_\ev(f)=\frac{\partial\phi^\sharp(f)}{\partial\tau}\mathtxt{\AND} v_\odd(f)=\frac{\partial\phi^\sharp(f)}{\partial\theta}. \] \end{Lem} \begin{proof} Since $X[\tau|\theta]$ is a thickening of $X$ \cite{ahw-sing}*{Definition 2.10}, the underlying map of $\phi$ is fixed by $\phi_0=\vphi_0$. The assertion follows easily. \end{proof} \begin{Def}[inf-flow][infinitesimal flow] Let $v\in\Gamma(\SheafTypeface T_{X/S\longrightarrow Y/S})$. The unique morphism $\phi^v\in\Hom[_S]0{X[\tau|\theta],Y}$, such that $\smash{\phi^v|_{\tau=\theta=0}=\vphi}$, associated with $v$ \via \thmref{Lem}{der-mor}, is called the \Define{infinitesimal flow} of $v$. \end{Def} The infinitesimal flow construction allows us to introduce for each fibre coordinate system a family of fibre coordinate vector fields. \begin{Cons}[fib-der][fibre coordinate vector fields] Let $S\in\ssplfg{\knums}$ and $X/S$ be in $\SMan_S$ with a global fibre coordinate system $x=(x^a)$. 
By \cite{ahw-sing}*{Propositions 5.18, 4.19, Corollary 5.22}, we have $X[\tau|\theta]\in\Ob\ssplfg{\knums}$, and there are unique morphisms $\phi^a\in\Hom[_S]0{X[\tau|\theta],X}$ such that \[ \phi^{a\sharp}(x^b)=\begin{cases}x^b+\tau\delta_{ab}&\text{for }\Abs0{x^a}=\ev,\\ x^b+\theta\delta_{ab}&\text{for }\Abs0{x^a}=\odd.\end{cases} \] Evidently, we have $(\phi^a|_{\tau=\theta=0})^\sharp(x^b)=x^b$, and hence $\phi^a|_{\tau=\theta=0}={\id}_X$. On account of \thmref{Lem}{der-mor}, the morphisms $\phi^a$ are the infinitesimal flows of unique vector fields over $S$, denoted by $\smash{\frac\partial{\partial x^a}}\in\Gamma(\SheafTypeface T_{X/S})$. We call these \Define{fibre coordinate vector fields} and simply \Define{coordinate vector fields} in case $S=*$. Observe that the meaning of each individual $\smash{\frac\partial{\partial x^a}}$ depends on the entire fibre coordinate system $(x^b)$, and not only on the coordinate $x^a$. \end{Cons} As we shall presently see, the coordinate vector fields give systems of generators for the relative tangent sheaf. \begin{Prop}[tan-coord][coordinate expression of vector fields] Let $S$ be in $\ssplfg{\knums}$, $X/S$ be in $\ssplfg{S}$, $Y/S$ be in $\SMan_S$, and $\vphi:X/S\longrightarrow Y/S$ be a morphism over $S$. Let $(y^a)$ be a local fibre coordinate system on an open subset $V\subseteq Y_0$. Let $U\subseteq X_0$ be an open subset, such that $\vphi_0(U)\subseteq V$, and $v\in\SheafTypeface T_{X/S\longrightarrow Y/S}(U)$. Then \begin{equation}\label{eq:tan-coord} v=\sum\nolimits_av(y^a)\,\vphi^\sharp\circ\frac\partial{\partial y^a}. \end{equation} In particular, we have \[ \SheafTypeface T_{X/S\to Y/S}=\vphi^*(\SheafTypeface T_{Y/S})\defi\SheafTypeface O_X\otimes_{\vphi_0^{-1}\SheafTypeface O_Y}\vphi_0^{-1}\SheafTypeface T_{Y/S}, \] and this $\SheafTypeface O_X$-module is locally free, of rank $\rk_x\SheafTypeface T_{X/S\to Y/S}=\dim_{S,\vphi_0(x)}Y$ for $x\in X_0$.
\end{Prop} \begin{proof} We may assume that $(y^a)$ is a globally defined fibre coordinate system. Define the vector field $v'\in\vphi^*(\SheafTypeface T_{Y/S})(U)\subseteq\SheafTypeface T_{X/S\to Y/S}(U)$ by \[ v'\defi\sum\nolimits_av(y^a)\,\vphi^\sharp\circ\frac\partial{\partial y^a}. \] Let $\phi$ and $\phi'$ be the infinitesimal flows of $v$ and $v'$, respectively. For any index $a$, we have $v'(y^a)=v(y^a)$, and hence $\phi^\sharp(y^a)=\phi'^\sharp(y^a)$. This implies that $\phi=\phi'$, by reason of \cite{ahw-sing}*{Propositions 5.18, 4.19, Corollary 5.22}. Hence, we have $v'=v$. In particular, the vector fields $\smash{\vphi^\sharp\circ\frac\partial{\partial y^a}}$ form a local basis of sections of $\SheafTypeface T_{X/S\longrightarrow Y/S}$, and this readily implies the remaining assertions. \end{proof} \begin{Cor}[tan-free][local freeness of $\SheafTypeface T_{X/S}$] Let $S\in\ssplfg{\knums}$ and $X/S\in\SMan_S$. Then $\SheafTypeface T_{X/S}$ is locally free, with $\rk_x\SheafTypeface T_{X/S}=\dim_{S,x}X$, for $x\in X_0.$ \end{Cor} A special case of the above concerns the relative tangent spaces. \begin{Def}[tangent-def][tangent space] Let $p=p_X:X\longrightarrow S$ be a superspace over $S$. For any point $x\in X_0$ we let $\mathfrak m_{X,x}$ be the maximal ideal of $\SheafTypeface O_{X,x}$ and $\vkappa(x)\defi\SheafTypeface O_{X,x}/\mathfrak m_{X,x}$. Setting $s\defi p_{X,0}(x)$, we define \[ T_x(X/S)\defi\GDer[_{\SheafTypeface O_{S,s}}]0{\SheafTypeface O_{X,x},\vkappa(x)}, \] the $\ints$-span of all homogeneous $v\in\GHom[_{\SheafTypeface O_{S,s}}]0{\SheafTypeface O_{X,x},\vkappa(x)}$ such that \begin{equation}\label{eq:tanvector-def} v(fg)=v(f)g(x)+(-1)^{\Abs0f\Abs0v}f(x)v(g). \end{equation} This is naturally a super-vector space over $\vkappa(x)$, called the \Define{tangent space at $x$ over $S$}. For $S=*$, we also write $T_xX$. The elements are called \Define{tangent vectors} (over $S$). 
As is immediate from the definitions, the tangent space coincides with the tangent sheaf over $S$ along the morphism $(*,\vkappa(x))\longrightarrow X$. \end{Def} \begin{Cor}[tansp-dim][dimension of $T_x(X/S)$] Let $S\in\ssplfg{\knums}$, $X/S$ be a supermanifold over $S$, and $x\in X_0$. Then $\dim_\knums T_x(X/S)=\dim_{S,x}X$. \end{Cor} \begin{Def}[tan-mor][tangent morphism] Let $\vphi:X/S\longrightarrow Y/S$ be a morphism of superspaces over $S$. We define the \Define{tangent morphism} \[ \SheafTypeface T_{\vphi/S}:\SheafTypeface T_{X/S}\longrightarrow\SheafTypeface T_{X/S\to Y/S} \] by setting \[ \SheafTypeface T_{\vphi/S}(v)\defi v\circ\vphi^\sharp \] for any locally defined vector field $v$ over $S$. In view of \thmref{Prop}{tan-coord}, if $Y$ is in $\SMan_S$, then the range of $\smash{\SheafTypeface T_{\vphi/S}}$ is in $\smash{\vphi^*(\SheafTypeface T_{Y/S})}$. Similarly, we obtain for any $x\in X_0$ a \Define{tangent map} \[ T_x(\vphi/S):T_x(X/S)\longrightarrow T_{\vphi_0(x)}(Y/S) \] by setting \[ T_x(\vphi/S)(v)\defi v\circ\vphi^\sharp_x \] for any tangent vector $v$ over $S$. \end{Def} \subsection{Morphisms of constant rank}\label{subs:const-rank} In order to handle supergroup orbits through $T$-valued points, we will need to understand morphisms of locally constant rank in the setting of relative supermanifolds. Already for ordinary supermanifolds, the notion is somewhat different from the standard one used for manifolds. The correct definition was first given in \cite{leites}*{2.3.8}. For our present purposes, it is useful to state this in a more invariant form. We need the following definitions and facts, which are more or less standard. \begin{Def}[qcoh][Conditions on module sheaves] Let $\SheafTypeface E$ be a sheaf (of $\ints$-modules) and $I=(I_\ev,I_\odd)$ a graded set, \ie~a pair of sets.
We write $\SheafTypeface E^{(I)}$ for the direct sum $\bigoplus_{I_\ev}\SheafTypeface E\oplus\bigoplus_{I_\odd}\SheafTypeface E$ with its natural $\ints/2\ints$-grading. Now let $X$ be a superspace and $\SheafTypeface E$ an $\SheafTypeface O_X$-module (understood to be graded). We say that $\SheafTypeface E$ is \Define{locally generated by sections} if any $x\in X_0$ admits an open neighbourhood $U\subseteq X_0$ and a surjective morphism of $\SheafTypeface O_X|_U$-modules $(\SheafTypeface O_X|_U)^{(I)}\longrightarrow\SheafTypeface E|_U$ for some $I$. If $I$ can be chosen to be finite for any $x$, we say that $\SheafTypeface E$ is of \Define{finite type}. \end{Def} \begin{Prop}[free-vals] Let $X$ be a superspace and $\vphi:\SheafTypeface E\longrightarrow\SheafTypeface F$ a morphism of $\SheafTypeface O_X$-modules, with $\SheafTypeface E$ of finite type and $\SheafTypeface F$ finite locally free. For $x\in X_0$, we define \[ \SheafTypeface E(x)\defi\SheafTypeface E_x/\mathfrak m_{X,x}\SheafTypeface E_x. \] For every $x\in X_0$, the following are equivalent: \begin{enumerate}[wide] \item\label{item:free-vals-i} The $\vkappa(x)$-linear map defined below is injective: \[ \vphi(x)\defi\vphi_x\otimes_{\SheafTypeface O_{X,x}}{\id}_{\vkappa(x)}:\SheafTypeface E(x)\longrightarrow\SheafTypeface F(x). \] \item\label{item:free-vals-ii} For some open neighbourhood $U\subseteq X_0$ of $x$, the morphism $\vphi|_U$ is injective and the $\SheafTypeface O_X|_U$-module $(\SheafTypeface F/\im\vphi)|_U$ is locally free. \item\label{item:free-vals-iii} For some open neighbourhood $U\subseteq X_0$ of $x$, $\vphi|_U$ admits a left inverse.
\item\label{item:free-vals-iv} There exist an open neighbourhood $U\subseteq X_0$ of $x$ and homogeneous bases of sections for $\SheafTypeface E|_U$ and $\SheafTypeface F|_U$, such that the matrix $M_\vphi$ of $\vphi$ is \[ M_\vphi= \begin{Matrix}1 A&0\\ 0&D \end{Matrix},\quad A= \begin{Matrix}1 1&0\\ 0&0 \end{Matrix},\quad D= \begin{Matrix}1 1&0\\ 0&0 \end{Matrix}. \] \end{enumerate} The set of all those $x\in X_0$ where this holds is open. Moreover, in this case, $\SheafTypeface E$ is locally free on an open neighbourhood of $x$. \end{Prop} \begin{proof} The equivalence of \eqref{item:free-vals-i}--\eqref{item:free-vals-iii} follows from \cite{gro-dieu-ega1new}*{Chapitre 0, Proposition 5.5.4}, and the equivalence with \eqref{item:free-vals-iv} can be seen from its proof. \end{proof} \thmref{Prop}{free-vals} suggests the following definitions. \begin{Def}[const-rank-def][morphisms of constant rank] Let $X$ be a superspace and $\vphi$ a morphism $\SheafTypeface E\longrightarrow\SheafTypeface F$ of $\SheafTypeface O_X$-modules. We say that $\vphi$ is \Define{split} if $\SheafTypeface F/\im\vphi$ is locally free. Let $f:X/S\longrightarrow Y/S$ be a morphism of superspaces over $S$ and $x\in X_0$. We say that $f$ is of \Define{locally constant rank over $S$ at $x$} if for some open neighbourhood $U$ of $x$, the tangent map \[ \SheafTypeface T_{f/S}:\SheafTypeface T_{X/S}|_U\longrightarrow(f^*\SheafTypeface T_{Y/S})|_U \] is a split morphism of $\SheafTypeface O_X|_U$-modules. We say $f$ is of \Define{locally constant rank over $S$} if it is of locally constant rank over $S$ at $x$ for any $x\in X_0$. \end{Def} \begin{Cor}[locconst-char] Let $f:X/S\longrightarrow Y/S$ be a morphism over $S$ and $x\in X_0$, where $X/S\in\ssplfg{S}$ and $Y/S$ is a supermanifold over $S$. Then the following are equivalent: \begin{enumerate} \item\label{item:locconst-char-i} The morphism $f$ has locally constant rank over $S$ at $x$. 
\item\label{item:locconst-char-ii} For every $x'$ in a neighbourhood of $x$, the map \[ (\im\SheafTypeface T_{f/S})(x')\longrightarrow(f^*\SheafTypeface T_{Y/S})(x')=T_{f_0(x')}(Y/S) \] induced by the inclusion $\im\SheafTypeface T_{f/S}\longrightarrow f^*(\SheafTypeface T_{Y/S})$ is injective. \item\label{item:locconst-char-iii} There exist an open neighbourhood $U\subseteq X_0$ of $x$ and homogeneous bases of $\SheafTypeface T_{X/S}|_U$ and $f^*\SheafTypeface T_{Y/S}|_U$ such that the matrix $M$ of $\SheafTypeface T_{f/S}|_U$ has the form \[ M=\begin{Matrix}1 A&0\\ 0&D \end{Matrix},\quad A= \begin{Matrix}1 1&0\\ 0&0 \end{Matrix},\quad D= \begin{Matrix}1 1&0\\ 0&0 \end{Matrix}. \] \end{enumerate} \end{Cor} \begin{proof} Locally, $X$ admits an embedding into a supermanifold $Z/S$ over $S$, so that locally, $\SheafTypeface T_{X/S}$ injects into $\SheafTypeface T_{X/S\to Z/S}$, which is finite locally free by \thmref{Prop}{tan-coord}. Hence, $\SheafTypeface T_{X/S}$ is of finite type. By the same proposition, $f^*(\SheafTypeface T_{Y/S})$ is finite locally free. Therefore, the assumptions of \thmref{Prop}{free-vals} are verified, which proves the assertion. \end{proof} With the above definition and corollary, we generalise the rank theorem \cite{leites}*{Theorem 2.3.9, Proposition 3.2.9} in two respects: First, one may consider supermanifolds and morphisms over a general base superspace $S$. Secondly, we show the regularity not only of fibres, but also of the inverse images of subsupermanifolds of the image. \begin{Prop}[constant-rank-fibprod][rank theorem] Let $X/S$ and $Y/S$ be in $\SMan_S$, and let $f:X/S\longrightarrow Y/S$ be a morphism of locally constant rank over $S$. Then the following statements hold true: \begin{enumerate}[wide] \item\label{item:constant-rank-fibprod-i} For any $x\in X_0$, there is an open subset $U\subseteq X_0$, so that the morphism $f|_U$ factors as $f|_U=j\circ p$. 
Here, $j:Y'/S\longrightarrow Y/S$ is an injective local embedding of supermanifolds over $S$ and $p:X|_U/S\longrightarrow Y'/S$ is a surjective submersion over $S$. Moreover, we may take $Y'=(Y_0',\SheafTypeface O_{Y'})$, where $Y_0'\defi f_0(U)$, endowed with the quotient topology with respect to $f_0$, and $\SheafTypeface O_{Y'}\defi(\SheafTypeface O_Y/\SheafTypeface J)|_{Y_0'}$, $\SheafTypeface J\defi\ker f^\sharp$. The morphism $j$ is given by taking $j_0$ equal to the embedding of $Y_0'$ into $Y_0$, and $j^\sharp$ the quotient map with respect to the ideal $\SheafTypeface J$. \item\label{item:constant-rank-fibprod-ii} If $f':X'/S\longrightarrow Y/S$ is an embedding of supermanifolds over $S$ with $f'_0(X'_0)\subseteq f_0(X_0)$ and ideal $\SheafTypeface J'\supseteq\SheafTypeface J$, then the fibre product $X'\times_YX$ exists as a supermanifold over $S$, and the projection $p_2:X'\times_YX\longrightarrow X$ is an embedding over $S$. We have \begin{equation}\label{eq:constrk-pullback-dim} \dim_S(X'\times_YX)=\dim_SX'+\dim_SX-\dim_SY'. \end{equation} \end{enumerate} The supermanifold $Y'/S$ over $S$ constructed in item \eqref{item:constant-rank-fibprod-i} is called the \Define{image of $f|_U$}. For the assertion of item \eqref{item:constant-rank-fibprod-ii} to hold, it is sufficient to assume that $f$ has locally constant rank over $S$ at any $x\in f_0^{-1}(f'_0(x'))$, for any $x'\in X'_0$. \end{Prop} \begin{proof} The statement of \eqref{item:constant-rank-fibprod-i} is well-known in case $S=*$ \cite{leites}*{Theorem 2.3.9}, in view of \thmref{Cor}{locconst-char}. By \thmref{Th}{invfun-loc}, the inverse function theorem holds over a general base. Thus, by \thmref{Cor}{locconst-char}, the proof of the rank theorem carries over with only incremental changes to the general case. As for \eqref{item:constant-rank-fibprod-ii}, the assumption clearly implies that $f'$ factors through $j$ to an embedding $p':X'/S\longrightarrow Y'/S$ over $S$. 
Since $p$ is a submersion over $S$, the fibre product $X'\times_{Y'}X$ exists, and has the fibre dimension stated on the right-hand side of \eqref{eq:constrk-pullback-dim}. (See \cite{leites}*{Lemma 3.2.8} for the case of $S=*$, the proof of which applies in general, appealing again to \thmref{Th}{invfun-loc} and its usual corollaries.) Since $j$ is an injective local embedding, it is a monomorphism, and it follows that $X'\times_{Y'}X$ is actually the fibre product of $f'$ and $f$. We have a commutative diagram \[ \begin{tikzcd} X'\times_{Y'}X\dar[swap]{p_1}\rar{p_2}&X\dar[swap]{p}\arrow{rdd}{f}\\ X'\arrow[swap]{rrd}{f'}\rar{p'}&Y'\arrow{rd}[description]{j}\\ &&Y \end{tikzcd} \] of morphisms over $S$ such that the left upper square is a pullback whose lower row is an embedding. In particular, $p_{2,0}$ is injective. The image of $p_{2,0}$ is the locally closed subset $f_0^{-1}(f'_0(X_0'))$ of $X_0$. To show that this map is closed, we shall show that it is proper. Let $K\subseteq X_0$ be a compact subset and $L\defi p_0^{\prime-1}(p_0(K))$, which is a compact subset of $X_0'$. Then $p_{2,0}^{-1}(K)$ is a closed subset of $\smash{(X'\times_YX)_0=X'_0\times_{Y'_0}X_0}$ whose image in $X_0'\times X_0$ is contained in $L\times K$. Thus, $\smash{p_{2,0}^{-1}(K)}$ is compact and $p_{2,0}$ is proper, hence closed by \cite{bourbaki-gt1}*{Chapter I, \S 10, Propositions 1 and 7}. Moreover, $\smash{p_2^\sharp}$ is a surjective sheaf map. Hence, $p_2$ is an embedding. \end{proof} \begin{Rem}[rk-converse] From the relative inverse function theorem (\thmref{Th}{invfun-loc}), it is clear that the usual normal form theorems hold for submersions and immersions over $S$. Therefore, the converse of \thmref{Prop}{constant-rank-fibprod} holds: Any morphism $f:X/S\longrightarrow Y/S$ which factors as $f=j\circ p$ where $p$ is a submersion over $S$ and $j$ is an immersion over $S$ has locally constant rank over $S$. 
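The converse can be made concrete in adapted coordinates. The block matrices below are a supplementary sketch under the normal-form statements just cited; they are not part of the original argument.

```latex
% In adapted fibre coordinates, the submersion p: X -> Y' is the
% projection (u,w) -> u and the immersion j: Y' -> Y is the inclusion
% u -> (u,0). The corresponding tangent matrices compose as
\[
  M_p=\begin{Matrix}1 1&0\end{Matrix},\qquad
  M_j=\begin{Matrix}1 1\\0\end{Matrix},\qquad
  M_{j\circ p}=M_jM_p=\begin{Matrix}1 1&0\\0&0\end{Matrix},
\]
% which is the split normal form from the characterisation of locally
% constant rank, so f = j o p has locally constant rank over S.
```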
\end{Rem} \subsection{Isotropies at generalised points}\label{subs:isotropy} In what follows, fix a Lie supergroup $G$ (\ie a group object in $\SMan_\knums$) and an action $a:G\times X\longrightarrow X$ of $G$ on a supermanifold $X$. Let $T\in\ssplfg{\knums}$ and $x\in_TX$ be a $T$-valued point. We recall from Equation \eqref{eq:orbmap-def} the definition of the orbit morphism through $x$, \[ a_x:G_T/T=(T\times G)/T\longrightarrow X_T/T=(T\times X)/T, \] by \[ a_x(t,g)=\Parens1{t,a(g,x(t))}=\Parens1{t,g\cdot x(t)},\quad\forall (t,g)\in_RG_T, \] for arbitrary $R\in\ssplfg{\knums}$. When $T=*$ is the singleton space, \ie $x\in X_0$ is an ordinary point, then $a_x:G\longrightarrow X$ is the usual orbit morphism \cite{ccf}*{Definition 8.1.4}. Let $\mathfrak g$ be the Lie superalgebra of $G$, \ie the set of left-invariant vector fields on $G$. This is a Lie superalgebra over $\knums$. For $v\in\mathfrak g$, let $a_v\in\Gamma(\SheafTypeface T_X)$ denote the \Define{fundamental vector field} induced by the action. It is characterised by the equality \begin{equation}\label{eq:fundvf-def} (v\otimes1)\circ a^\sharp=(1\otimes a_v)\circ a^\sharp. \end{equation} Let $x\in_TX$ with $T\in\ssplfg{\knums}$. The equation above specialises to \begin{equation}\label{eq:fundvf-def-pt} \begin{split} (1\otimes v)\circ a_x^\sharp&=(p_1,\sigma)^\sharp\circ(1\otimes 1\otimes(x^\sharp\circ a_v))\circ({\id}_T\times a)^\sharp\\ &=(\Delta_T\times{\id}_G)^\sharp\circ(1\otimes(x^\sharp\circ a_v)\otimes 1)\circ({\id}_T\times(a\circ\sigma))^\sharp \end{split} \end{equation} where we denote the flip by $\sigma$ and the diagonal morphism by $\Delta_T$. Let $\SheafTypeface A_\mathfrak g$ be the \Define{fundamental distribution}, \ie the submodule \[ \SheafTypeface A_\mathfrak g\defi\SheafTypeface O_X\cdot a_\mathfrak g\subseteq\SheafTypeface T_X,\quad a_\mathfrak g\defi \Set1{a_v}{v\in\mathfrak g}.
\] We shall need to understand when the orbit morphism $a_x$ for an arbitrary $x\in_TX$ is a morphism of locally constant rank over $T$. The following is a full characterisation. \begin{Th}[action-locconst] Let $x\in_TX$. The following statements are equivalent: \begin{enumerate}[wide] \item\label{item:action-locconst-i} The morphism $a_x:G_T\longrightarrow X_T$ has locally constant rank over $T$. \item\label{item:action-locconst-ii} The pullback $x^*(\SheafTypeface A_\mathfrak g)$ is a locally direct summand of the $\SheafTypeface O_T$-module $x^*(\SheafTypeface T_X)$. \item\label{item:action-locconst-iii} For every $t\in T_0$, the canonical map $\SheafTypeface A_\mathfrak g(x_0(t))\longrightarrow T_{x_0(t)}X$ is injective. \item\label{item:action-locconst-iv} For any $t\in T_0$, there are homogeneous $v_j\in\mathfrak g$ such that $a_{v_j}(x_0(t))\in T_{x_0(t)}X$ are linearly independent and the $x^\sharp\circ a_{v_j}$ span $x^*(\SheafTypeface A_\mathfrak g)$ in a neighbourhood of $t$ in $T_0$. \end{enumerate} \end{Th} In the \emph{proof}, we use the following two lemmas. \begin{Lem}[pb-vals] Let $f:Y\longrightarrow Z$ be a morphism of superspaces and $\SheafTypeface E$ an $\SheafTypeface O_Z$-module. Fix $y\in Y_0$. Then the map $e\longmapsto 1\otimes e:\SheafTypeface E_{f_0(y)}\longrightarrow(f^*\SheafTypeface E)_y$ induces an isomorphism \[ \vkappa_Y(y)\otimes_{\vkappa_Z(f_0(y))}\SheafTypeface E(f_0(y))\longrightarrow(f^*\SheafTypeface E)(y) \] of $\vkappa_Y(y)$-super-vector spaces. \end{Lem} \begin{proof} Let $z\defi f_0(y)$. Now simply note that $\vkappa_Y(y)$ is an $\SheafTypeface O_{Z,z}$-module \via the map $f^\sharp_y:\SheafTypeface O_{Z,z}\longrightarrow\SheafTypeface O_{Y,y}$.
In particular, we have \[ (f^*\SheafTypeface E)(y)=\vkappa_Y(y)\otimes_{\SheafTypeface O_{Y,y}}\SheafTypeface O_{Y,y}\otimes_{\SheafTypeface O_{Z,z}}\SheafTypeface E_z=\vkappa_Y(y)\otimes_{\SheafTypeface O_{Z,z}}\SheafTypeface E_z=\vkappa_Y(y)\otimes_{\vkappa_Z(z)}\SheafTypeface E(z). \] This proves our claim. \end{proof} \begin{Lem}[im-funddist] The map \[ x^*(\SheafTypeface A_\mathfrak g)\longrightarrow ({\id}_T,1_G)^*\Parens1{\im\SheafTypeface T_{a_x/T}}:w\longmapsto\Delta_T^\sharp\circ(1\otimes w) \] is an isomorphism. \end{Lem} \begin{proof} First, we define a map $\vphi:\SheafTypeface T_{x:T\to X}\longrightarrow({\id}_T,1_G)^*\Parens0{\SheafTypeface T_{({\id}_T,x):T\to X_T/T}}$ by \[ \vphi(w)\defi\Delta_T^\sharp\circ(1\otimes w). \] It admits a left inverse $\psi$, defined by \[ \psi(u)\defi u\circ p_2^\sharp \] where $p_2:X_T\longrightarrow X$ is the second projection. Indeed, \[ \psi(\vphi(w))=\Delta_T^\sharp\circ (1\otimes w)\circ p_2^\sharp=w. \] Moreover, we have \[ \begin{split} \vphi(x^\sharp\circ a_v)&=\Delta_T^\sharp\circ(1\otimes (x^\sharp\circ a_v))\circ({\id}_T\times a(1_G,\cdot))^\sharp\\ &=({\id}_T,1_G)^\sharp\circ (\Delta_T\times{\id}_G)^\sharp\circ (1\otimes(x^\sharp\circ a_v)\otimes 1)\circ({\id_T}\times(a\circ\sigma))^\sharp\\ &=({\id}_T,1_G)^\sharp\circ v\circ a_x^\sharp=({\id}_T,1_G)^\sharp\circ\SheafTypeface T_{a_x/T}(v) \end{split} \] by \eqref{eq:fundvf-def-pt}, so $\vphi$ descends to a map $x^*(\SheafTypeface A_\mathfrak g)\longrightarrow ({\id}_T,1_G)^*\Parens1{\im\SheafTypeface T_{a_x/T}}$, as claimed. The above computation also shows that \[ \psi(({\id}_T,1_G)^\sharp\circ\SheafTypeface T_{a_x/T}(v))=\psi(\vphi(x^\sharp\circ a_v))=x^\sharp\circ a_v, \] so this map admits a left inverse. But the $({\id}_T,1_G)^\sharp\circ\SheafTypeface T_{a_x/T}(v)$ for $v\in\mathfrak g$ generate $({\id}_T,1_G)^*(\im\SheafTypeface T_{a_x/T})$, so $\vphi$ is surjective, and is an isomorphism. 
\end{proof} \begin{proof}[\prfof{Th}{action-locconst}] For every $(t,g)\in T_0\times G_0$, we consider the canonical map \begin{equation}\label{eq:orb-canmap} \iota_{(t,g)}\defi\iota_{\SheafTypeface T_{a_x/T,(t,g)}}:\Parens1{\im\SheafTypeface T_{a_x/T}}(t,g)\longrightarrow T_{g\cdot x_0(t)}(X_T/T). \end{equation} By \thmref{Cor}{locconst-char}, the morphism $a_x$ is of locally constant rank over $T$ if and only if for all $(t,g)$, the map $\iota_{(t,g)}$ is injective. Since $a_x$ is $G$-equivariant, it is equivalent that it be injective for all points of the form $(t,1)$, where $t\in T_0$ is arbitrary. By \thmref{Lem}{im-funddist}, we have $x^*(\SheafTypeface A_\mathfrak g)\cong({\id}_T,1_G)^*(\im\SheafTypeface T_{a_x/T})$. Because all residue fields in question are equal to $\knums$, \thmref{Lem}{pb-vals} shows that \[ \begin{split} \Parens1{\im\SheafTypeface T_{a_x/T}}(t,1)&=\Parens1{x^*(\SheafTypeface A_\mathfrak g)}(t)=\SheafTypeface A_\mathfrak g(x_0(t)),\\ \Parens1{a_x^*\Parens1{\SheafTypeface T_{X_T/T}}}(t,1)&=(\SheafTypeface T_{X_T/T})(t,x_0(t))=T_{x_0(t)}X=(x^*(\SheafTypeface T_X))(t). \end{split} \] Thus, by \thmref{Prop}{free-vals}, conditions \eqref{item:action-locconst-i}--\eqref{item:action-locconst-iii} are equivalent. If \eqref{item:action-locconst-iv} holds, then $\iota_{(t,1)}$ maps a generating set of $(x^*\SheafTypeface A_\mathfrak g)(t)=(\im\SheafTypeface T_{a_x/T})(t,1)$ to a basis of $T_{x_0(t)}X$, so it is injective and \eqref{item:action-locconst-i} holds. Conversely, assume \eqref{item:action-locconst-ii} and \eqref{item:action-locconst-iii}. Thus, we may choose homogeneous $v_j\in\mathfrak g$ such that $a_{v_j}(x_0(t))\in T_{x_0(t)}X$ are linearly independent and span the image of $(x^*(\SheafTypeface A_\mathfrak g))(t)$. 
By assumption, the canonical images of the $x^\sharp\circ a_{v_j}$ in $(x^*(\SheafTypeface A_\mathfrak g))(t)$ are linearly independent, so that $(x^\sharp\circ a_{v_j})_t$ form a minimal generating set of $(x^*(\SheafTypeface A_\mathfrak g))_t$ by the Nakayama Lemma. Since this module is free, they form a basis. Since $x^*(\SheafTypeface A_\mathfrak g)$ is finite locally free, \cite{gro-to}*{4.1.1} shows that the $x^\sharp\circ a_{v_j}$ form a local basis of sections, proving \eqref{item:action-locconst-iv}, and thus, the theorem. \end{proof} \begin{Cor}[ordpt] Let $T=*$ and $x\in X_0$. Then the orbit morphism $a_x:G\longrightarrow X$ has locally constant rank. \end{Cor} \begin{proof} In this case, $x^*\SheafTypeface E=\SheafTypeface E(x_0(*))$ for any $\SheafTypeface O_X$-module $\SheafTypeface E$, so the condition \eqref{item:action-locconst-ii} of \thmref{Th}{action-locconst} becomes void. \end{proof} We now apply these general results to the problem of the representability of the isotropy supergroup functor. To that end, we define for $t\in T_0$: \[ \mathfrak g_x(t)\defi\Set1{v\in\mathfrak g}{a_v(x_0(t))=0}. \] \begin{Th}[trans-iso] Let $x\in_TX$ with $T\in\ssplfg{\knums}$. Assume that $a_x$ has locally constant rank over $T$. Then the functor $G_x:\ssplfg{T}\longrightarrow\Sets$ from \thmref{Def}{isotrop-def} is representable by a supermanifold over $T$ of fibre dimension \begin{equation}\label{eq:trans-iso-dim} \dim_{T,(t,g)}G_x=\dim_\knums\mathfrak g_x(t). \end{equation} The canonical morphism $G_x\longrightarrow G_T$ is a closed embedding. Conversely, assume that $G_x$ is representable in $\ssplfg{T}$. Then the canonical morphism $j:G_x\longrightarrow G_T$ is an injective immersion with closed image. If $G_x$ is representable in $\SMan_T$, then $j$ is a closed embedding. 
\end{Th} \begin{proof} By \thmref{Prop}{constant-rank-fibprod}, locally in the domain, the image of $a_x$ exists as a supermanifold over $T$, and has fibre dimension \[ \dim_{T,x_0(t)}\im a_x=\rk T_{(t,g)}(a_x/T)=\dim\mathfrak g-\dim\mathfrak g_x(t). \] In view of \thmref{Prop}{constant-rank-fibprod}, it will be sufficient to prove for any superfunction $f$ defined on an open subspace of $X_T$: \[ a_x^\sharp(f)=0\ \Longrightarrow\ x_T^\sharp(f)=0. \] But for any supermanifold $R$ and any $t\in_RT$, we have \[ a_x^\sharp(f)(t,1_{G_T})=f(t,1_{G_T}\cdot x(t))=f(t,x(t))=x_T^\sharp(f)(t), \] so this statement is manifestly verified. Hence, $G_x$ is representable and the canonical morphism is a closed embedding. The expression for the fibre dimension of $G_x$ follows from Equation \eqref{eq:constrk-pullback-dim}, since $\dim_TT=0$. Conversely, assume the functor $G_x$ is representable in $\ssplfg{T}$. Then $j$ is manifestly a monomorphism, \ie $G_x(R)\longrightarrow G_T(R)$ is injective for any $R\in\ssplfg{\knums}$. Inserting $R=*$, we see that the underlying map is injective with image \[ \Set1{(t,g)\in T_0\times G_0}{g\cdot x_0(t)=x_0(t)}, \] which is closed. Inserting $R=*[\tau|\theta]$, we see that the tangent map $T_{(t,g)}(j/T)$ is injective for every $(t,g)$, by \thmref{Lem}{der-mor}. If $G_x$ is a Lie supergroup, then $j_0$ is a closed topological embedding by \thmref{Th}{imm-lie}, and hence, $j$ is an embedding (as follows from \thmref{Th}{invfun-loc}). \end{proof} Specialising \thmref{Th}{trans-iso} by the use of \thmref{Cor}{ordpt}, we recover the case of orbits through an ordinary point first treated by B.~Kostant \cite{kostant} in the setting of Lie--Hopf algebras, by C.P.~Boyer and O.A.~S\'anchez-Valenzuela \cite{bsv} for differentiable Lie supergroups, and by L.~Balduzzi, C.~Carmeli, and G.~Cassinelli \cite{bcc} using a functorial framework and super Harish-Chandra pairs. \begin{Cor} Let $T=*$ and $x\in X_0$. 
Then $G_x$ is representable by a supermanifold and the canonical morphism $G_x\longrightarrow G$ is a closed embedding. \end{Cor} \subsection{Orbits through generalised points}\label{subs:orbits} Having discussed the representability of the isotropy supergroup functor, we pass now to the existence of orbits. In what follows, to avoid heavy notation, we will largely eschew writing $/S$ for morphisms over $S$, instead mostly stating the property of being `over $S$' in words. We have the following generalisation of Godement's theorem \cite{ah-berezin}*{Theorem 2.6}, with an essentially unchanged proof. We have added the detail that in this situation, the quotients are universal categorical. \begin{Prop}[eq-quot] Let $R/S$ be an equivalence relation on $X/S$ in $\SMan_S$, as defined in \thmref{Ex}{ex-groupoid} \eqref{item:ex-groupoid-iii}. Then the following assertions are equivalent: \begin{enumerate}[wide] \item\label{item:eq-quot-i} The weakly geometric quotient $\pi:X\longrightarrow X/R$ exists in $\SMan_S$ and, as a morphism, is a submersion over $S$. \item\label{item:eq-quot-ii} The subsupermanifold $R$ of $X\times_SX$ is closed, and (one of, and hence both of) $s,t:R\longrightarrow X$ are submersions over $S$. \end{enumerate} If this is the case, then $\pi:X\longrightarrow X/R$ is a universal weakly geometric quotient. The quotient is \Define{effective}, that is, the morphism $(t,s):R\longrightarrow X\times_{X/R}X$ is an isomorphism. Moreover, its fibre dimension is \begin{equation}\label{eq:quot-rel-dim} \dim_S(X/R)=2\dim_S X-\dim_SR. \end{equation} \end{Prop} \begin{proof} Apart from the statement about universal weakly geometric quotients, all statements are proved for $S=*$ in Refs.~\cites{almorox,ah-berezin}. In general, the proof carries over unchanged. Let us prove the claim of universality for the weakly geometric quotient. So, let the assumption of item \eqref{item:eq-quot-i} be fulfilled and set $Q\defi X/R$.
Then $\pi$ is a submersion over $S$, and hence, $X'\defi Q'\times_QX$ exists in $\SMan_S$ for any $\psi:Q'\longrightarrow Q$, by \cite{ahw-sing}*{Proposition 5.41} and the normal form theorem for submersions over $S$ (which follows from \thmref{Th}{invfun-loc}). By item \eqref{item:eq-quot-ii}, $s$ is also a submersion over $S$. Then so is $\pi\circ s$, and $R'\defi (Q'\times Q')\times_{Q\times Q}R$ exists in $\SMan_S$, where $R$ lies over $Q\times Q$ \via $(\pi\times\pi)\circ (t,s):R\longrightarrow Q\times Q$. First, we claim that condition \eqref{item:eq-quot-ii} holds for the equivalence relation $R'/S$ on $X'/S$ in $\SMan_S$. Note that we have a pullback diagram \[ \begin{tikzcd} R'=Q'\times_QR\dar[swap]{s'}\rar{}&R\dar{\pi\circ s}\\ Q'\rar{\psi}&Q \end{tikzcd} \] Since $\pi\circ s$ is a submersion over $S$, so is $s'$. Next, consider the morphism \[ R'=(Q'\times Q')\times_{Q\times Q}R\longrightarrow X'\times_SX'=(Q'\times Q')\times_{Q\times Q}(X\times_SX). \] It is an embedding by \cite{ahw-sing}*{Corollary 5.29}. Thus, the assumption \eqref{item:eq-quot-ii} is verified for $R'$ and $X'$, the weakly geometric quotient $\pi':X'\longrightarrow X'/R'$ exists in $\SMan_S$, and it is a submersion over $S$. It is categorical by \thmref{Cor}{wgeom-quot-is-cat}. The morphism $p_1={\id}_{Q'}\times_Q\pi:X'\longrightarrow Q'$ is manifestly $R'$-invariant, so that there is a unique morphism \[ \vphi:X'/R'\longrightarrow Q',\quad\vphi\circ\pi'={\id}_{Q'}\times_Q\pi. \] Since $p_1$ is a surjective submersion, so is $\vphi$. To see that it is a local isomorphism, we compute the dimensions of the supermanifolds over $S$ in question.
On one hand, we have \[ \dim_SQ=2\dim_SX-\dim_SR, \] and on the other, we have \begin{align*} \dim_SX'/R'&=2\dim_SX'-\dim_S R'\\ &=2(\dim_{Q'}X'+\dim_SQ')-(\dim_{Q'\times Q'}R'+2\dim_SQ')\\ &=2\dim_{Q'}X'-\dim_{Q'\times Q'}R'=2\dim_QX-\dim_{Q\times Q}R\\ &=2(\dim_QX+\dim_SQ)-(\dim_{Q\times Q}R+2\dim_SQ)\\ &=2\dim_SX-\dim_SR. \end{align*} Upon invoking the inverse function theorem (\thmref{Th}{invfun-loc}), this proves that $\vphi$ is a local isomorphism over $S$. Finally, we need to show that $\vphi_0$ is injective. To that end, let $q_j'\in Q_0'$, $x_j\in X_0$, such that $\psi_0(q_j')=\pi_0(x_j)$. Assume that $\vphi_0(\pi_0'(q_1',x_1))=\vphi_0(\pi_0'(q_2',x_2))$, so that $q_1'=q_2'$, because \[ \vphi_0\circ\pi_0'=p_{1,0}:X'_0=Q_0'\times_{Q_0}X_0\longrightarrow Q_0'. \] It follows that $\pi_0(x_1)=\psi_0(q_1')=\psi_0(q_2')=\pi_0(x_2)$, so that $(x_1,x_2)\in R_0$, since $\pi$ is an effective quotient. Then $(q_1',q_2',x_1,x_2)\in R'_0$, so that $\pi_0'(q_1',x_1)=\pi_0'(q_2',x_2)$, proving the injectivity. \end{proof} We now wish to apply this proposition to supergroup actions. Thus, fix a Lie supergroup $G$ and a $G$-supermanifold $X$. Let $x\in_TX$, where $T$ is some supermanifold. We assume that $G_x$ is representable in $\SMan_T$ and that the canonical morphism $G_x\longrightarrow G_T$ is an embedding over $T$ (automatically closed). We define an equivalence relation $R_x$ on $G_T$ by \[ R_x\defi G_T\times_TG_x,\quad i:R_x\longrightarrow G_T\times_TG_T, \] where $i$ is given by \[ i(g,g')\defi(g,gg'),\quad\forall (g,g')\in_{T'/T}\Parens1{G_T\times_TG_x}/T, \] and for any supermanifold $T'/T$ over $T$. It is straightforward to check that $i$ is an embedding and indeed, that $R_x$ is an equivalence relation. \begin{Prop}[orb-quot] Let $G_x$ be representable in $\SMan_T$. Then the universal weakly geometric quotient $\pi_x:G_T\longrightarrow G\cdot x$ of $G_T$ by $G_x$ exists in $\SMan_T$. It is an effective quotient and a submersion over $T$.
Its fibre dimension is \begin{equation}\label{eq:quot-dim} \dim_T G\cdot x=\dim G-\dim_TG_x. \end{equation} \end{Prop} \begin{proof} The underlying map of $G_x\longrightarrow G_T$ is injective and a homeomorphism onto its closed image, so it is proper. Therefore, the map underlying the morphism $i:R_x\longrightarrow G_T\times_TG_T$ is closed. The first projection $s$ of $R_x$ is obviously a submersion over $T$. Then \thmref{Prop}{eq-quot} applies, and we reach our conclusion. Equation \eqref{eq:quot-dim} follows from Equation \eqref{eq:quot-rel-dim}, since $\dim_TR_x=\dim G+\dim_TG_x$. \end{proof} \begin{Not} By abuse of language, the morphism $\tilde a_x:G\cdot x\longrightarrow X_T$ over $T$ induced by $a_x$ will also be called the \Define{orbit morphism}. \end{Not} Combining this fact with our previous results, we get the following theorem. \begin{Th}[orbit] Let $x:T\longrightarrow X$. The following are equivalent: \begin{enumerate}[wide] \item\label{item:orbit-i} The morphism $a_x$ has locally constant rank over $T$. \item\label{item:orbit-ii} The isotropy functor $G_x$ is representable in $\SMan_T$. \end{enumerate} In this case, the canonical morphism $j:G_x\longrightarrow G_T$ is a closed embedding, the weakly geometric and universal categorical quotient $G\cdot x$ exists, $\pi_x:G_T\longrightarrow G\cdot x$ is a surjective submersion over $T$, the fibre dimension of $G\cdot x$ is \begin{equation}\label{eq:trans-orb-dim} \dim_{T,(t,g\cdot x_0(t))}G\cdot x=\dim G-\dim\mathfrak g_x(t),\quad\forall (t,g)\in (G_T)_0=T_0\times G_0, \end{equation} and $\tilde a_x$ is an immersion over $T$. Moreover, if $U\subseteq (G_T)_0$ is open such that $a_x|_U$ admits an image in the sense of \thmref{Prop}{constant-rank-fibprod}, then so does $\tilde a_x|_{\pi_{x,0}(U)}$, and these images coincide. \end{Th} \begin{proof} The implication \eqref{item:orbit-i} $\Rightarrow$ \eqref{item:orbit-ii} is the content of \thmref{Th}{trans-iso}.
Conversely, let $G_x$ be representable in $\SMan_T$. Then $j$ is a closed embedding, by \thmref{Th}{trans-iso}. From \thmref{Prop}{orb-quot}, we conclude that $G\cdot x$ exists and $\pi_x:G_T\longrightarrow G\cdot x$ is a surjective submersion over $T$. Because \[ \ker T_{(t,g)}(\pi_x/T)=\mathfrak g_x(t)=\ker T_{(t,g)}(a_x/T) \] and $\tilde a_x\circ\pi_x=a_x$, it follows that $\tilde a_x$ is an immersion over $T$. By \thmref{Rem}{rk-converse}, $a_x$ is of locally constant rank over $T$. This shows that \eqref{item:orbit-i} holds. Equation \eqref{eq:trans-orb-dim} follows from Equation \eqref{eq:quot-dim} and Equation \eqref{eq:trans-iso-dim}. Moreover, since $\pi_{x,0}$ is surjective and $\pi_x^\sharp$ is injective, it follows that the images of $a_x|_U$ and $\tilde a_x|_{\pi_{x,0}(U)}$ are equal whenever one of the two is defined, proving the asserted coincidence of images. The remaining statements follow from \thmref{Prop}{orb-quot}. \end{proof} \section{Coadjoint superorbits and their super-symplectic forms}\label{sec:coad} In this section, we construct the Kirillov--Kostant--Souriau form in the setting of coadjoint superorbits through $T$-valued points. For the case of ordinary points, where $T=*$, coadjoint orbits of supergroups were studied by B.~Kostant \cite{kostant}, R.~Fioresi and M.A.~Lled\'o \cite{fl}, and by H.~Salmasian \cite{salmasian}. By the introduction of the parameter space $T$, it is always possible to work with \emph{even} supersymplectic forms, provided they are considered over $T$. Compare with the work of Tuynman \cites{tuyn10a,tuyn10b}, who is obliged to work with inhomogeneous forms. We will follow the notation and conventions of Sections \ref{sec:groupoid-quots}--\ref{sec:super-quot} and Ref.~\cite{ap-sphasym}, only briefly recalling the basic ingredients. Let $G$ be a Lie supergroup---\ie a group object in $\SMan_\knums=\SMan_{\knums,\Bbbk}^\varpi$---with Lie superalgebra $\mathfrak g$.
We set $\mathfrak g_\Bbbk\defi\mathfrak g_{\Bbbk,\ev}\oplus\mathfrak g_\odd$, where $\mathfrak g_{\Bbbk,\ev}$ is the Lie algebra of $G_0$. (Note that the latter is a $\Bbbk$-form of $\mathfrak g_\ev$.) The dual $\knums$-super vector space of $\mathfrak g$ will be denoted by $\mathfrak g^*$. Let $\mathfrak g_\Bbbk^*$ be the set of $\knums$-linear functionals $f=f_\ev\oplus f_\odd\in\mathfrak g^*$ such that $f_\ev(\mathfrak g_{\Bbbk,\ev})\subseteq\Bbbk$. We denote the adjoint action of $G$ on ${\mathbb A}(\mathfrak g_\Bbbk)$ by $\Ad$. The \Define{coadjoint action} is defined by \[ \Dual1{\Ad^*(g)(f)}x\defi\Dual1f{\Ad(g^{-1})(x)},\quad\forall g\in_TG,x\in_T{\mathbb A}(\mathfrak g_\Bbbk),f\in_T{\mathbb A}(\mathfrak g_\Bbbk^*), \] where $\Dual0\cdot\cdot$ denotes the canonical pairing of $\mathfrak g^*$ and $\mathfrak g$. \subsection{The super-symplectic Kirillov--Kostant--Souriau form} Let $T\in\ssplfg{\knums}$ and $f\in_T{\mathbb A}(\mathfrak g_\Bbbk^*)$ be a $T$-valued point of the dual of the Lie superalgebra $\mathfrak g$. We define an even super-skew symmetric tensor $\Omega_f$, \[ \Omega_f:\SheafTypeface T_{G_T/T}\otimes_{\SheafTypeface O_{G_T}}\SheafTypeface T_{G_T/T}\longrightarrow\SheafTypeface O_{G_T}, \] by the formula \[ \Omega_f\Parens0{v,w}\defi\Dual1{p_{G_T}^\sharp(f)}{[v,w]},\quad\forall v,w\in(\SheafTypeface O_{G_T}\otimes\mathfrak g)(U), \] where $U\subseteq T_0\times G_0$ is open, $p_{G_T}=p_1:G_T\longrightarrow T$, and we identify $f$ with a section of $\SheafTypeface O_T\otimes\mathfrak g^*$ \via the natural bijection \[ \Hom1{T,{\mathbb A}(\mathfrak g_\Bbbk^*)}\longrightarrow\Gamma\Parens1{(\SheafTypeface O_T\otimes\mathfrak g^*)_{\Bbbk,\ev}}, \] compare \cite{ahw-sing}*{Corollary 4.26, Proposition 5.18}. The identification is \via \[ f^\sharp(x)=\Dual0fx,\quad\forall\,x\in\mathfrak g\subseteq\Gamma(\SheafTypeface O_{{\mathbb A}(\mathfrak g_\Bbbk^*)}). 
\] From now on and until the end of this subsection, assume that $G_f$ is representable in $\SMan_T$, so that in particular, $G\cdot f$ exists in $\SMan_T$, by \thmref{Th}{orbit}. \begin{Lem} The $2$-form $\Omega_f$ descends to a well-defined even super-skew symmetric tensor $\tilde\omega_f$, \[ \tilde\omega_f:\SheafTypeface T_{G_T/T\to G\cdot f/T}\otimes_{\SheafTypeface O_{G_T}}\SheafTypeface T_{G_T/T\to G\cdot f/T}\longrightarrow\SheafTypeface O_{G_T}, \] by the formula \[ \tilde\omega_f\Parens1{\SheafTypeface T_{\pi_f/T}(v),\SheafTypeface T_{\pi_f/T}(w)}\defi\Dual1{p_{G_T}^\sharp(f)}{[v,w]},\quad\forall v,w\in(\SheafTypeface O_{G_T}\otimes\mathfrak g)(U), \] for every open $U\subseteq(G_T)_0$. The $2$-form $\tilde\omega_f$ is non-degenerate. \end{Lem} \begin{proof} Let $v\in\SheafTypeface T_{G_T/T}(U)$ be homogeneous and $x\in\mathfrak g\subseteq\Gamma(\SheafTypeface O_{{\mathbb A}(\mathfrak g_\Bbbk^*)})$. Let $(x_j)$ be a homogeneous basis of $\mathfrak g$ and expand $v=\sum_jv^jx_j$. Then we compute for all $R\in\ssplfg{\knums}$ and all $\mu\in_R{\mathbb A}(\mathfrak g_\Bbbk^*)$ that \begin{align*} a_{x_j}(\mu)(x)&=\frac d{d\tau}\Big|_{\tau=0}\Dual1{\Ad^*(\phi^{x_j})(\mu)}x=\frac d{d\tau}\Big|_{\tau=0}\Dual1\mu{\Ad(\phi^{-x_j})(x)}\\ &=-\Dual1\mu{[x_j,x]}=-\mu(\ad(x_j)(x))=-{\ad^*}(x_j)(\mu)(x). \end{align*} Here, we let $\Abs0\tau=\Abs0{x_j}$ and follow the conventions of \thmref{Def}{inf-flow}. Equation \eqref{eq:fundvf-def-pt} shows that \[ v\circ a_f^\sharp=\sum_jv^j\cdot(\Delta_T\times{\id}_G)^\sharp\circ(1\otimes(f^\sharp\circ a_{x_j})\otimes1)\circ({\id}_T\times(a\circ\sigma))^\sharp. \] Therefore, for all $R$ and all $(t,g)\in_RG_T$, we have \[ \begin{split} (v\circ a_f^\sharp)(x)(t,g)&=\sum_jv^j(t,g)\Dual1{\ad^*(x_j)(f(t))}{\Ad(g^{-1})(x)}\\ &=\sum_jv^j(t,g)\Dual1{\Ad^*(g)(\ad^*(x_j)(f(t)))}x \end{split} \] Vector fields are uniquely determined by their action on systems of local fibre coordinates, by \thmref{Prop}{tan-coord}. 
Moreover, any homogeneous basis of $\mathfrak g$ contained in $\mathfrak g_\Bbbk$ defines a global fibre coordinate system on ${\mathbb A}_T(\mathfrak g_\Bbbk^*)$. Thus, we have \[ \SheafTypeface T_{\pi_f/T}(v)=0\quad\Longleftrightarrow\quad\sum_jv^j(t,g)\Ad^*(g)(\ad^*(x_j)(f(t)))=0\quad\forall\,R,(t,g)\in_RG_T. \] On the other hand, we may express \[ \begin{split} \Dual1{p_{G_T}^\sharp(f)}{[v,w]}(t,g)&=\sum_jv^j(t,g)\Dual1{\ad^*(x_j)(f(t))}{(t,g)^\sharp\circ w}\\ &=\sum_jv^j(t,g)\Dual1{\Ad^*(g)(\ad^*(x_j)(f(t)))}{\Ad(g^{-1})((t,g)^\sharp\circ w)}. \end{split} \] This shows immediately that $\tilde\omega_f$ is well-defined. Setting $\check w\defi ({\id}_T\times{\Ad})^\sharp\circ w$, the above computation shows that \[ \Dual1{p_{G_T}^\sharp(f)}{[v,\check w]}(t,g)=\sum_jv^j(t,g)\Dual1{\Ad^*(g)(\ad^*(x_j)(f(t)))}w. \] Hence, if $\tilde\omega_f(\SheafTypeface T_{\pi_f}(v),\SheafTypeface T_{\pi_f}(\check x_j))=0$ for any $j$, then it follows that $v\circ\pi_f^\sharp=0$, so we see that $\tilde\omega_f$ is non-degenerate. \end{proof} Since $G\cdot f\in\SMan_T$, we have \[ \SheafTypeface T_{G_T/T\to G\cdot f/T}=\pi_f^*(\SheafTypeface T_{G\cdot f/T}), \] by \thmref{Prop}{tan-coord}, so we may ask whether $\tilde\omega_f$ is induced by some tensor $\omega_f$ on $G\cdot f$. Indeed, this is the case, as we presently show. The module inverse image and direct image functors $((\pi_f)^*,(\pi_f)_{0*})$ form an adjoint pair, so there is a natural bijection \[ \begin{tikzcd} \Hom[_{\SheafTypeface O_{G\cdot f}}]1{\textstyle\bigwedge^2\SheafTypeface T_{G\cdot f/T},(\pi_f)_{0*}\SheafTypeface O_{G_T}} \rar{\pi_f^*} &\Hom[_{\SheafTypeface O_{G_T}}]1{\textstyle\bigwedge^2\SheafTypeface T_{G_T/T\to G\cdot f/T},\SheafTypeface O_{G_T}}. 
\end{tikzcd} \] \begin{Prop}[pf-2form] There is a unique even super-skew symmetric tensor \[ \omega_f:\SheafTypeface T_{G\cdot f/T}\otimes_{\SheafTypeface O_{G\cdot f}}\SheafTypeface T_{G\cdot f/T}\longrightarrow\SheafTypeface O_{G\cdot f} \] such that $\pi_f^*(\omega_f)=\tilde\omega_f$. \end{Prop} \begin{proof} By the above, there is a unique even super-skew symmetric tensor \[ \omega_f:\SheafTypeface T_{G\cdot f/T}\otimes_{\SheafTypeface O_{G\cdot f}}\SheafTypeface T_{G\cdot f/T}\longrightarrow (\pi_f)_{0*}\SheafTypeface O_{G_T}, \] such that $\pi_f^*(\omega_f)=\tilde\omega_f$. We need to show that it takes values in the subsheaf $\SheafTypeface O_{G\cdot f}$. But $G\cdot f=G_T/G_f$ is a weakly geometric quotient by \thmref{Prop}{orb-quot}, so that by \thmref{Rem}{colimit-explicit}, we have \[ \SheafTypeface O_{G\cdot f}=\Parens0{(\pi_f)_{0*}\SheafTypeface O_{G_T}}^{G_f}. \] It thus remains to prove that $\omega_f$ takes values in the sheaf of invariants. To that end, fix a homogeneous basis $(x_j)$ of $\mathfrak g$ contained in $\mathfrak g_\Bbbk$. Take any $v,w\in\SheafTypeface T_{G\cdot f/T}(U)$, where $U\subseteq (G\cdot f)_0$ is open and define $V\defi(\pi_f)_0^{-1}(U)\subseteq T_0\times G_0$. We may write $\pi_f^\sharp\circ v=\sum_jv^j\,(1\otimes x_j)\circ\pi_f^\sharp$ for some $v^j\in\SheafTypeface O_{G_T}(V)$, $\Abs0{v^j}=\Abs0{x_j}+\Abs0v$, and similarly for $w$. Denote by $(t,g,h)$ the generic point of $G_T|_V\times_TG_f|_V$. We compute for any superfunction $k$ on $G\cdot f$, defined on an open subset of $U$, that \[ (\pi_f^\sharp\circ v)(k)(t,gh)=v(k)\Parens1{(t,gh)\cdot f(t)}=v(k)\Parens1{(t,g)\cdot f(t)}=(\pi_f^\sharp\circ v)(k)(t,g). \] Here, we are using the fact that $G\cdot f$ is a universal categorical quotient (\thmref{Th}{orbit}), so that, by \thmref{Prop}{orbit-action}, it admits a $G$-action for which $\pi_f$ is equivariant and $f$, considered as a $T$-valued point of $G\cdot f$, is fixed by $G_f$.
On the other hand, using results from Ref.~\cite{ap-sphasym}, we have \begin{align*} \sum_j\Parens1{v^j(x_j\circ\pi_f^\sharp)(k)}(t,gh) &=\sum_jv^j(t,gh)\frac d{d\tau}\Big|_{\tau=0}k\Parens1{(t,gh\exp(\tau x_j))\cdot f(t)}\\ &=\sum_jv^j(t,gh)\frac d{d\tau}\Big|_{\tau=0}k\Parens1{(t,g\exp(\tau\Ad(h)(x_j))h)\cdot f(t)}\\ &=\sum_jv^j(t,gh)\frac d{d\tau}\Big|_{\tau=0}k\Parens1{(t,g\exp(\tau\Ad(h)(x_j)))\cdot f(t)}\\ &=\sum_jv^j(t,gh)(\Ad(h)(x_j)\circ\pi_f^\sharp)(k)(t,g). \end{align*} Combining both computations, we arrive at the equality \begin{equation}\label{eq:ad-orb} \sum_jv^j(t,gh)(\Ad(h)(x_j)\circ\pi_f^\sharp)=\sum_jv^j(t,g)(x_j\circ\pi^\sharp_f) \end{equation} of vector fields over $T$ along the morphism \[ \pi_f\circ m_{G_T}=\pi_f\circ p_1:G_T\times_TG_f\longrightarrow G\cdot f. \] Using Equation \eqref{eq:ad-orb}, we may compute \begin{align*} \omega_f(v,w)(t,gh)&=\tilde\omega_f(\pi_f^\sharp\circ v,\pi_f^\sharp\circ w)(t,gh)\\ &=\sum_{jk}(-1)^{\Abs0{x_j}\Abs0{x_k}}(v^jw^k)(t,gh)\Dual1{f(t)}{[x_j,x_k]}\\ &=\sum_{jk}(-1)^{\Abs0{x_j}\Abs0{x_k}}(v^jw^k)(t,gh)\Dual1{f(t)}{[\Ad(h)(x_j),\Ad(h)(x_k)]}\\ &=\sum_{jk}(-1)^{\Abs0{x_j}\Abs0{x_k}}(v^jw^k)(t,g)\Dual1{f(t)}{[x_j,x_k]}\\ &=\tilde\omega_f(\pi_f^\sharp\circ v,\pi_f^\sharp\circ w)(t,g)=\omega_f(v,w)(t,g), \end{align*} which shows that indeed, $\omega_f(v,w)$ is right $G_f$-invariant, and hence, that $\omega_f$ takes values in the sheaf $\SheafTypeface O_{G\cdot f}$, as desired. \end{proof} We may consider $\omega_f$ as a global section of $\Omega^2_{G\cdot f/T}=\bigwedge^2\Omega^1_{G\cdot f/T}$, \ie a $2$-form over $T$. We show that it is closed. \begin{Prop} The $2$-form $\omega_f$ over $T$ is relatively closed. \end{Prop} \begin{proof} The element of $\Gamma(\SheafTypeface O_{G_T}\otimes\mathfrak g^*)$ corresponding to $f$ is a left $G$-invariant $1$-form (which is, moreover, even and real-valued). We show that it gives a potential for the pullback of $\omega_f$.
To that end, we follow ideas of Chevalley--Eilenberg \cite{ce}. Let $v,w\in\mathfrak g$. Denote by $d=d_{G_T/T}$ the relative differential. Then \[ \iota_wd+d\iota_w=\SheafTypeface L_w, \] where $\iota_v$, $\Abs0{\iota_v}=\Abs0v$, denotes relative contraction, and $\SheafTypeface L_v$, $\Abs0{\SheafTypeface L_v}=\Abs0v$, denotes the relative Lie derivative. We have \begin{align*} df(v,w)&=(-1)^{\Abs0v\Abs0w}\iota_w\iota_vdf=(-1)^{\Abs0v\Abs0w}\iota_w(\SheafTypeface L_vf)\\ &=-[\SheafTypeface L_v,\iota_w]f=-\iota_{[v,w]}f=-\Dual1f{[v,w]}=-\tilde\omega_f(\SheafTypeface T_{\pi_f/T}(v),\SheafTypeface T_{\pi_f/T}(w)), \end{align*} since $\iota_wf=\Dual0fw\in\Gamma(\SheafTypeface O_T)$, so that $d\iota_wf=0=\SheafTypeface L_v\iota_wf$. Since both sides of the equation are $\SheafTypeface O_{G_T}$-bilinear, the equation \[ \tilde\omega_f\Parens1{\SheafTypeface T_{\pi_f/T}(v),\SheafTypeface T_{\pi_f/T}(w)}=-df(v,w) \] holds for any vector fields $v,w$ on $G_T$ over $T$, defined on some open subset. But since $\tilde\omega_f=\pi_f^*(\omega_f)$ by \thmref{Prop}{pf-2form}, we have \[ \pi_f^\sharp(\omega_f)(v,w)=\tilde\omega_f\Parens1{\SheafTypeface T_{\pi_f/T}(v),\SheafTypeface T_{\pi_f/T}(w)}=-df(v,w) \] for any vector fields $v,w$ on $G_T$ over $T$. Thus, \[ \pi_f^\sharp(d\omega_f)=d\pi_f^\sharp(\omega_f)=-d^2f=0. \] Since $\pi_f^\sharp$ is an injective sheaf map, we conclude that $d\omega_f=0$. \end{proof} We summarise the above results in the following theorem. \begin{Th}[coadj-sympl] Let $G$ be a Lie supergroup with Lie superalgebra $\mathfrak g$. Let $T\in\ssplfg{\knums}$ and $f:T\longrightarrow{\mathbb A}(\mathfrak g_\Bbbk^*)$ be such that $G_f$ is representable and $G_f\longrightarrow G_T$ is an embedding. Then the coadjoint orbit $G\cdot f$ exists, is universal categorical, and with the Kirillov--Kostant--Souriau form $\omega_f$, $G\cdot f$ is a supersymplectic supermanifold over $T$. 
The assumption is verified if the equivalent conditions in \thmref{Th}{action-locconst} hold. \end{Th} \section{Application: Glimpses of the superorbit method}\label{sec:quant} This section offers an application of our general theory of coadjoint orbits to the geometric construction of representations. By way of example, we show how the formalism can be applied to produce `universal' $T$-families of representations for certain Lie supergroups, namely the Abelian supergroup ${\mathbb A}^{0|n}$ and certain graded variants of the $3$-dimensional Heisenberg group. At this point, we only partially address the question to what extent unitary structures exist on these families, and we do not yet make precise in which sense they are universal. We intend to treat these issues in forthcoming work, together with an extension to more general Lie supergroups. \subsection{Representations of Lie supergroups over some base} Fix $T\in\ssplfg{\knums}$. To set the stage, both for the general representation theory of supergroups over $T$ and, in particular, for the examples to be considered below, we give some very general definitions. The functor $\SheafTypeface O:\Parens1{\ssplfg{T}}^{\mathrm{op}}\longrightarrow\Sets$, defined on objects $U/T$ in $\ssplfg{T}$ by \[ \SheafTypeface O(U/T)\defi\Gamma(\SheafTypeface O_{U,\ev}) \] and on morphisms $f:U'/T\longrightarrow U/T$ in $\ssplfg{T}$ by \[ \SheafTypeface O(f):\SheafTypeface O(U/T)\longrightarrow\SheafTypeface O(U'/T):h\longmapsto f^\sharp(h), \] is a ring object in the category $\Bracks1{\Parens1{\ssplfg{T}}^{\mathrm{op}},\Sets}$. \begin{Def} Let $G$ be a supergroup over $T$.
A \Define{representation of $G$} is a pair $(\SheafTypeface H,\pi)$ consisting of: \begin{enumerate}[wide] \item a $\ints/2\ints$-graded $\SheafTypeface O$-module object $\SheafTypeface H:\Parens1{\ssplfg{T}}^{\mathrm{op}}\longrightarrow\Sets$ and \item a morphism $\pi:G\times\SheafTypeface H\longrightarrow\SheafTypeface H$, denoted by \[ \pi(g)\psi,\quad \forall\,U\in\ssplfg{\knums},t\in_UT,g\in_tG,\psi\in\SheafTypeface H(t), \] \end{enumerate} such that $\pi(g)$ leaves the homogeneous components of $\SheafTypeface H$ invariant and the following equations are satisfied: \begin{equation}\label{eq:linear-action} \begin{gathered} \pi(1_G(t))\psi=\psi,\quad\pi(g_1g_2)\psi=\pi(g_1)\Parens1{\pi(g_2)\psi},\\ \pi(g)(\lambda \psi_1+\psi_2)=\lambda\pi(g)\psi_1+\pi(g)\psi_2, \end{gathered} \end{equation} for all $t\in_UT$, $g,g_1,g_2\in_tG$, $\psi,\psi_1,\psi_2\in\SheafTypeface H(t)$, $\lambda\in\SheafTypeface O(t)$. A graded $\SheafTypeface O$-submodule $\SheafTypeface H'\subseteq\SheafTypeface H$ is a \Define{$G$-subrepresentation} if it is $G$-invariant, \ie if $\pi$ descends to a morphism $G\times\SheafTypeface H'\longrightarrow\SheafTypeface H'$. \end{Def} This concept generalises the existent notions of representations of Lie supergroups in several ways. To make contact with the literature, recall the following construction, which produces a graded $\SheafTypeface O$-module for any $\SheafTypeface O_T$-module: Let $\SheafTypeface H$ be a (graded) $\SheafTypeface O_T$-module. 
Define for $t\in_UT$: \[ \SheafTypeface H(t)\defi\Gamma\Parens1{(t^*\SheafTypeface H)_\ev},\quad\SheafTypeface H_i(t)\defi\Gamma\Parens1{(t^*\SheafTypeface H_i)_\ev} \] (where $(-)_\ev$ refers to the total grading) and for any commutative diagram \[ \begin{tikzcd} U'\arrow{rd}[swap]{t'}\arrow{rr}{f}&&U\arrow{ld}{t}\\ &T, \end{tikzcd} \] set \[ \SheafTypeface H(f)\defi\Gamma(f^\sharp\otimes1):\SheafTypeface H(t)\longrightarrow\SheafTypeface H(t'), \] where $\Gamma$ denotes the global sections functor, as usual. The $\SheafTypeface O$-module structure is given by \[ \SheafTypeface O(t)\times\SheafTypeface H(t)\longrightarrow\SheafTypeface H(t):(h,\psi)\longmapsto h\cdot\psi, \] where $\cdot$ is the module structure on global sections. In particular, for $T=*$, any super-vector space $V$ over $\knums$ defines such an $\SheafTypeface O$-module. Assume that $V$ is finite-dimensional and we are given a linear action $\pi:G\times\SheafTypeface H\longrightarrow\SheafTypeface H$ where $\SheafTypeface H={\mathbb A}^\knums(V)$ is the functor given on objects $U$ by $\SheafTypeface H(U)=\Gamma\Parens1{(\SheafTypeface O_U\otimes V)_\ev}$ and linear actions are defined by the identities in Equation \eqref{eq:linear-action}. Then we may define a representation $(\SheafTypeface H,\pi)$ by \[ \pi(g)\psi\defi\pi(g,\psi), \] for all $U\in\ssplfg{\knums}$, $g\in_UG$, and $\psi\in\SheafTypeface H(U)=\Gamma\Parens1{(\SheafTypeface O_U\otimes V)_\ev}$. If $G$ is a Lie supergroup, then a linear action is the same thing as a representation of the associated supergroup pair $(G_0,\mathfrak g)$, compare \cite{a-supercw}*{Proposition 1.5}. For the affine algebraic case, compare also \cite{ccf}*{Definition 11.7.2}. \begin{Ex}[reg-rep][the left-regular representation] Let $G$ be a supergroup over $T$. 
The \Define{left-regular representation} $(\SheafTypeface H_G,\lambda_G)$ of $G$ is defined by taking \[ \SheafTypeface H_G(t)\defi\Gamma\Parens1{\SheafTypeface O_{U\times_TG,\ev}} \] for all $t\in_UT$, \[ \SheafTypeface H_G(f)\defi (f\times_T{\id}_G)^\sharp:\SheafTypeface H_G(t)\longrightarrow\SheafTypeface H_G(t') \] for all $f:t'\longrightarrow t$, and \[ \lambda_G(g)\psi\defi\Parens1{({\id}_U\times_Tm_G)\circ(({\id}_U,g^{-1})\times_T{\id}_G)}^\sharp(\psi)=\psi(-,g^{-1}(-)) \] for all $t\in_UT$, $g\in_tG$, $\psi\in\SheafTypeface H_G(t)=\Gamma\Parens1{\SheafTypeface O_{U\times_TG,\ev}}$. Here, $g^{-1}$ is $i_G(g)$, as usual. \end{Ex} Let $G$ be a supergroup over $T$. By definition, the \Define{Lie superalgebra} of $G$ is the $\SheafTypeface O_T$-submodule $\mathfrak g$ of the direct image sheaf $p_{0*}(\SheafTypeface T_{G/T})$ of \Define{left-invariant vector fields} on $G$, defined by \[ \mathfrak g(U)\defi\Set1{v\in\SheafTypeface T_{G/T}(p_0^{-1}(U))}{m_G^\sharp\circ v=(1\otimes v)\circ m_G^\sharp} \] for any open set $U\subseteq T_0$, endowed with the usual bracket of vector fields. We may consider $\mathfrak g$ as a functor, as explained above. Then the \Define{derived representation} $L$ of $(\SheafTypeface H_G,\lambda)$ is the morphism $d\lambda_G:\mathfrak g\times\SheafTypeface H_G\longrightarrow\SheafTypeface H_G$ defined by $d\lambda_G(v)\defi -L_v$ and \[ L_v\psi\defi\Parens1{(1_{U\times_TG}\times_T{\id}_G)^\sharp\circ (v\otimes1)\circ ({\id}_U\times_Tm_G)^\sharp}(\psi) \] for all $U\in\ssplfg{\knums}$, $t\in_UT$, $v\in\mathfrak g(t)=\Gamma((t^*\mathfrak g)_\ev)$, and $\psi\in\SheafTypeface H_G(t)=\Gamma\Parens1{\SheafTypeface O_{U\times_TG,\ev}}$. Here, we have \[ 1_{U\times_TG}\defi ({\id}_U,1_G(t)):U\longrightarrow U\times_TG. 
\] Similarly, we define $R:\mathfrak g\times\SheafTypeface H_G\longrightarrow\SheafTypeface H_G$ by \[ R_v\psi\defi\Parens1{(1_{U\times_TG}\times_T{\id}_G)^\sharp\circ v_{13}\circ ({\id}_U\times_Tm_G)^\sharp}(\psi) \] for all $t\in_UT$, $v\in\mathfrak g(t)=\Gamma((t^*\mathfrak g)_\ev)$, and $\psi\in\SheafTypeface H_G(t)=\Gamma\Parens1{\SheafTypeface O_{U\times_TG,\ev}}$. Here, we define \[ v_{13}\defi((1\,2)^{-1}\times_T{\id}_G)^\sharp\circ (1\otimes v)\circ((1\,2)\times_T{\id}_G)^\sharp, \] where $(1\,2):G\times_TU\longrightarrow U\times_TG$ is the flip. \bigskip\noindent Let us now indicate how to apply these ideas to transplant the orbit method into the world of Lie supergroups. Let $G$ be a Lie supergroup and $f\in_T{\mathbb A}(\mathfrak g_\Bbbk^*)$ a $T$-valued point of the dual of the Lie superalgebra $\mathfrak g$ (see the introduction to Section \ref{sec:coad}). The Lie superalgebra of $G_T\defi T\times G$ will be denoted by $\mathfrak g_T$ and equals $\SheafTypeface O_T\otimes\mathfrak g$, as is easy to see. The representations that appear in the superorbit method are all instances of the following simple construction. \begin{Prop}[pol-sect] Let $\mathfrak h\subseteq\mathfrak g_T$ be an $\SheafTypeface O_T$-submodule. Then the graded $\SheafTypeface O$-submodule $\SheafTypeface H_f^\mathfrak h\subseteq\SheafTypeface H_{G_T}$ defined by \[ \SheafTypeface H_f^\mathfrak h(t)\defi\Set1{\psi\in\SheafTypeface H_{G_T}(t)}{\forall v\in\mathfrak h(t)=\Gamma((t^*\mathfrak h)_\ev):R_v\psi=-i\Dual0{f(t)}v\psi} \] for all $U\in\ssplfg{\knums}$, $t\in_UT$, is a $G_T$-subrepresentation of $(\SheafTypeface H_{G_T},\lambda_{G_T})$. \end{Prop} \begin{proof} The action $R$ is $\SheafTypeface O$-linear and commutes with $L$. \end{proof} In the generality they are defined here, the representations $(\smash{\SheafTypeface H_f^\mathfrak h,\lambda_{G_T}})$ are not interesting. 
The relevant case is when $\mathfrak h$ is an $\SheafTypeface O_T$-Lie subsuperalgebra of $\mathfrak g_T$ that is $\Omega_f$-isotropic, \ie $\Omega_f(v,v')=0$ for all local sections $v,v'$ of $\mathfrak h$. If $\mathfrak h$ is maximally isotropic, then one thinks of $\SheafTypeface H^\mathfrak h_f$ as the `space of $\mathfrak h$-polarised sections of the canonical line bundle on $G\cdot f$' and, following Kirillov's orbit philosophy, expects suitable `completions' thereof to be irreducible. In the classical case of a Lie group over $T=*$, this is indeed true, and under certain assumptions, \eg when $G$ is nilpotent, one thus obtains all irreducible unitary representations \cite{rais}. For nilpotent Lie supergroups, it is known by the work of H.~Salmasian and K.-H.~Neeb \cites{salmasian,ns11} that irreducible unitary representations (in the sense of Ref.~\cites{cctv,cctv-err}) are parametrised by coadjoint orbits through ordinary points of $\mathfrak g^*$. The known constructions of these representations are however somewhat roundabout. As we show below, by way of example, for the Clifford supergroup of dimension $1|2$, they are realised as certain $\SheafTypeface H_f^\mathfrak h$. Moreover, we show that our approach, for general $T$, also allows for a Plancherel decomposition of the regular representation, at least for the simplest case of the Abelian Lie supergroup ${\mathbb A}^{0|n}$, where coadjoint orbits through ordinary points are totally insufficient. We believe that these examples are mere inklings of a vastly more general picture covering the representation theory and harmonic analysis of nilpotent and possibly more general Lie supergroups. \subsection{\texorpdfstring{The Plancherel formula for ${\mathbb A}^{0|n}$}{The Plancherel formula for the superaffine space}} Let $G={\mathbb A}^{0|n}$ be the additive supergroup of the super-vector space $\mathfrak g=\knums^{0|n}$. The coadjoint action $\Ad^*$ of $G$ is trivial. 
If we let $T\defi{\mathbb A}(\mathfrak g^*)$ and consider the generic point $f={\id}_T\in_T{\mathbb A}(\mathfrak g^*)$, then the following diagram commutes: \[ \begin{tikzcd} G_T\arrow{rr}{a_f}\arrow{rd}[swap]{p_1}&&{\mathbb A}(\mathfrak g^*)_T\\ &T\arrow{ru}[swap]{\Delta_T} \end{tikzcd} \] Thus, $a_f$ factors as the composition of an embedding with a surjective submersion and thus has constant rank by \thmref{Rem}{rk-converse}. Alternatively, observe that the fundamental distribution $\SheafTypeface A_\mathfrak g=0$, so that the criterion \eqref{item:action-locconst-ii} of \thmref{Th}{action-locconst} is verified. Moreover, the above factorisation coincides with the standard one into $\pi_f$ and $\tilde a_f$, \ie $G_f=G_T$, $G\cdot f=T$, $\pi_f=p_1$, and $\tilde a_f=\Delta_T$. The Kirillov--Kostant--Souriau form $\omega_f$ is zero. The general philosophy of `geometric quantisation' or `Kirillov's orbit method' demands the choice of a \Define{polarising} (\ie maximally isotropic) subalgebra $\mathfrak h\subseteq\mathfrak g_T$. Since $\Omega_f=0$, we must have $\mathfrak h=\mathfrak g_T$. The corresponding $G_T$-subrepresentation $\SheafTypeface H=\SheafTypeface H_f^\mathfrak h$ of $(\SheafTypeface H_{G_T},\lambda_{G_T})$ is given for all $U\in\ssplfg{\knums}$, $t\in_UT$, by \[ \SheafTypeface H(t)\defi\Set1{\psi}{\psi\in\Gamma\Parens1{(\SheafTypeface O_{G_U})_\ev},\forall v\in\mathfrak g:R_{1\otimes v}\psi=-i\Dual0{f(t)}v\psi}. \] This is the functor of a free $\SheafTypeface O_T$-module of rank $1|0$, since it has the basis of sections \[ \psi_0\defi e^{-i\sum_j\theta_j\xi^j}\in\SheafTypeface H({\id}_T)=\Gamma(\SheafTypeface O_{G_T,\ev}). \] Here, $\theta_1,\dotsc,\theta_n$ is some arbitrary basis of $\mathfrak g$ and $\xi^1,\dotsc,\xi^n$ is the dual basis of $\mathfrak g^*$, considered as coordinate superfunctions on $T={\mathbb A}(\mathfrak g^*)$ and $G={\mathbb A}(\mathfrak g)$, respectively. 
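Note that the exponential defining $\psi_0$ is in fact polynomial: each $\theta_j\xi^j$ is even and satisfies $(\theta_j\xi^j)^2=-\theta_j^2(\xi^j)^2=0$, and these even elements commute with each other, so that
\[
\psi_0=e^{-i\sum_j\theta_j\xi^j}=\prod_{j=1}^n\Parens1{1-i\theta_j\xi^j}.
\]
In particular, $\psi_0$ is indeed a global section of $\SheafTypeface O_{G_T,\ev}$.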
The representation of $G_T$ on $\SheafTypeface H$ is determined by its action on the special vector \[ \psi_t\defi\SheafTypeface H_{G_T}(t)(\psi_0)=e^{-i\sum_jt_j\xi^j},\quad t_j\defi t^\sharp(\theta_j). \] With $\pi$ denoting the restriction of $\lambda_{G_T}$ to $\SheafTypeface H$ and $g^j\defi g^\sharp(\xi^j)$, it is given by \[ \begin{split} \pi(g)\psi_t&=(({\id}_U,g^{-1})\times_T{\id}_G)^\sharp({\id}_U\times_Tm_G)^\sharp(e^{-i\sum_jt_j\xi^j})\\ &=(({\id}_U,g^{-1})\times_T{\id}_G)^\sharp(e^{-i\sum_jt_j(\xi^j_1+\xi^j_2)})\\ &=e^{-i\sum_jt_j(-g^j+\xi^j)}=e^{i\Dual0tg}\psi_t, \end{split} \] that is, it is a character, as was to be expected. We have the following `abstract' Fourier inversion formula. \begin{Prop}[odd-fi] For any superfunction $f$ on $G$, we have \[ \int_TD(\theta)\,\str\pi(f)=(-1)^{n(n+1)/2}i^nf_0(0), \] where $\pi(f)$ is defined by \[ \pi(f)\defi\int_GD(\xi)\,f\pi, \] and the integrals are Berezin integrals. \end{Prop} For the Berezin integral, see \citelist{\cite{leites}*{Chapter 2, \S 4} \cite{manin}*{Chapter 4, \S 6} \cite{deligne-morgan}*{\S 3.9}}. \begin{proof} Since $\pi$ is a character, the operator $\pi(f)$ is a function: \[ \pi(f)=\int_G D(\xi)\,fe^{i\sum_j\theta_j\xi^j}\in\Gamma(\SheafTypeface O_T) \] Therefore, $\str\pi(f)$ is that same function. (Incidentally, this may be viewed as a baby version of Kirillov's character formula.) The assertion now follows from the Euclidean Fourier inversion formula \cite{as-sbos}*{Proposition C.17}. \end{proof} We obtain the following Plancherel formula. \begin{Cor} For all superfunctions $f$ and $g$ on $G$, we have \[ \int_TD(\theta)\,\str(\pi(f)^\dagger\pi(g))=(-1)^{n(n+1)/2}i^n\int_GD(\xi)\,\overline fg. \] Here, $(-)^\dagger$ is the super-adjoint with respect to the $\SheafTypeface O_T$-inner product on $\SheafTypeface H$ normalised by $\Sdual0{\psi_0}{\psi_0}=1$ and $\overline{(-)}$ is the antilinear antiautomorphism of $\SheafTypeface O_G$ defined by $\overline{\xi^j}=\xi^j$.
\end{Cor} \begin{proof} Using the methods of Ref.~\cite{ahl-cliff}, one sees that $\pi(f)^\dagger\pi(g)=\pi(f^**g)$, where $*$ is the convolution product on $G$ and $f^*=i^\sharp(\overline f)$ where $i$ is the inversion of $G$. Since $\delta=\xi^1\dotsc\xi^n$ is the Dirac delta on $G$, the formula follows from \thmref{Prop}{odd-fi}. \end{proof} \begin{Rem} Thus, by judiciously applying the orbit method to $T$-valued points, we have obtained a decomposition of the left regular representation of $G$ into an `odd direct integral' of `unitary' characters. By contrast, a direct sum decomposition of the function space $\Gamma(\SheafTypeface O_G)$ into irreducible unitary $G$-representations is impossible, since the only such representation is the trivial one! \end{Rem} \subsection{The orbit method for Heisenberg type supergroups} Let us consider the Lie superalgebra $\mathfrak g$ over $\knums$ spanned by homogeneous vectors $x,y,z$ satisfying the unique non-zero relation \[ [x,y]=z. \] When $x,y,z$ are even, $\mathfrak g$ is the classical Heisenberg algebra of dimension $3|0$. When $x,y$ are odd, $z$ must be even. The central element $z$ spans a copy of $\knums$, so $\mathfrak g$ is a unital Lie algebra in the sense of Ref.~\cite{andler-sahi}, and its unital enveloping algebra $\Uenv0{\mathfrak g}/(1-z)$ is the Clifford algebra $\mathop{\mathrm{Cliff}}(2,\knums)$. (NB: We will use a different normalisation below.) For this reason, $\mathfrak g$ is called the Clifford--Lie superalgebra, and its representation theory was studied \eg in Refs.~\cites{salmasian,ahl-cliff}. The construction of the representations used there is \emph{ad hoc}. Below, we show how they arise in a natural fashion. A third possibility, which does not seem to have been considered before, is that $x,y$ are of distinct parity (but see Ref.~\cite{frydryszak}). In this case, $z$ is odd. 
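Note that in this third case, the odd central element $z$ satisfies
\[
z^2=\frac12[z,z]=0
\]
in the enveloping algebra $\Uenv0{\mathfrak g}$, so that there is no interesting analogue of the quotient $\Uenv0{\mathfrak g}/(1-z)$ from the Clifford case.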
As we show below, besides characters, there exists a family of representations (which happen to be finite-dimensional) parametrised by $T={\mathbb A}^{0|1}$, which bear a striking resemblance to the Schr\"odinger representation of the Heisenberg group. \subsubsection{Parity-independent computations} A number of computations concerning the Lie superalgebra $\mathfrak g$ of Heisenberg type introduced above are somewhat independent of the parity of its elements. We begin with the coadjoint representation of $\mathfrak g$. Let $x^*,y^*,z^*$ be the basis dual to $x,y,z$. In terms of this basis, we have \[ \ad^*(x)= \begin{Matrix}1 0&0&0\\ 0&0&-(-1)^{\Abs0x\Abs0z}\\ 0&0&0 \end{Matrix},\quad \ad^*(y)= \begin{Matrix}1 0&0&(-1)^{\Abs0y}\\ 0&0&0\\ 0&0&0 \end{Matrix},\quad \ad^*(z)=0. \] Recall the definitions given at the beginning of Section \ref{sec:super-quot}. We will consider the field $\Bbbk=\reals$, since we are mainly interested in super versions of real Lie groups. A Lie supergroup $G$ (\ie a group object in the category of supermanifolds over $(\knums,\reals)$ of class $\SheafTypeface C^\varpi$) with Lie superalgebra $\mathfrak g$ is uniquely determined by the choice of a real Lie group $G_0$ whose Lie algebra is a real form $\mathfrak g_{\reals,\ev}$ of $\mathfrak g_\ev$, compare Ref.~\cite{ap-sphasym}. We fix $\mathfrak g_\reals\defi\mathfrak g_{\reals,\ev}\oplus\mathfrak g_\odd$ by setting $\mathfrak g_{\reals,\ev}\defi\mathfrak g_\ev\cap\Span0{x,y,z}_\reals$. Let $G$ be the connected and simply connected Lie supergroup whose Lie superalgebra is $\mathfrak g$ and whose Lie group has Lie algebra $\mathfrak g_{\reals,\ev}$. Unless $\mathfrak g$ is purely even, $G_0$ is the additive group of $\reals$. With these conventions, $\ad^*(v)$ is the fundamental vector field corresponding to $v\in\mathfrak g$ under the coadjoint action $\Ad^*$ of $G$. Let $T\in\ssplfg{\knums}$ be arbitrary and $f=\alpha x^*+\beta y^*+\gamma z^*\in_T{\mathbb A}(\mathfrak g_\reals^*)$.
Observe that \[ \ad^*(ax+by+cz)(f)=-a\gamma y^*+(-1)^{\Abs0y(1+\Abs0z)}b\gamma x^* \] for $v=ax+by+cz\in\mathfrak g$, where $a,b,c\in\knums$. Thus, if $u=a'x+b'y+c'z\in\mathfrak g\subseteq\Gamma(\SheafTypeface O_{{\mathbb A}(\mathfrak g_\Bbbk^*)})$ with arbitrary $a',b',c'\in\knums$, then \[ \Parens1{f^\sharp\circ\ad^*(v)}(u)=-ab'\gamma+(-1)^{\Abs0y(1+\Abs0z)}ba'\gamma. \] \thmref{Prop}{tan-coord} gives \begin{equation}\label{eq:heis-isotropy} f^\sharp\circ\ad^*(v)= \begin{cases} \displaystyle-\gamma\,f^\sharp\circ\frac\partial{\partial y}, &\text{ if }v=x,\\ \displaystyle(-1)^{\Abs0y(1+\Abs0z)}\gamma\, f^\sharp\circ\frac\partial{\partial x}, &\text{ if }v=y,\\ 0, &\text{ if }v=z, \end{cases} \end{equation} where we use $x,y,z$ as a coordinate system on ${\mathbb A}(\mathfrak g_\Bbbk^*)$. Let $t\in T_0$. The image of $f^\sharp\circ\ad^*(v)$ in $T_{f_0(t)}{\mathbb A}(\mathfrak g_\Bbbk^*)=(f^*\SheafTypeface T_{{\mathbb A}(\mathfrak g_\Bbbk^*)})(t)$ is \[ (f^\sharp\circ\ad^*(v))(t)=\ad^*(v)(f_0(t))= \begin{cases} \displaystyle-\gamma(t)\,\Parens2{\frac\partial{\partial y}}(f_0(t)), &\text{ if }v=x,\\ \displaystyle(-1)^{\Abs0y(1+\Abs0z)}\gamma(t)\,\Parens2{\frac\partial{\partial x}}(f_0(t)), &\text{ if }v=y. \end{cases} \] These are zero if $\gamma(t)=0$ and linearly independent otherwise. In the latter case, condition \eqref{item:action-locconst-iii} of \thmref{Th}{action-locconst} is verified. In the former case, the images of $(f^\sharp\circ\ad^*(v))$, $v=x,y$, in $(f^*\SheafTypeface A_\mathfrak g)(t)$ are zero if and only if $\gamma_t\in\gamma_t\mathfrak m_{T,t}$. For simplicity, let $T\in\SMan_\knums$ and $(\tau,\theta)$ be a local coordinate system at $t$ such that $\tau^j(t)=0$ for all $j$. Assume that $\gamma_t=\gamma_th_t$ for some $h_t\in\SheafTypeface O_{T,t}$, but $\gamma_t\neq0$. Then in the expansion $\gamma=\sum_J\gamma_J\theta^J$ there is some minimal $I$ such that $\gamma_I(t)\neq0$.
It follows that $\gamma_I(t)=\gamma_I(t)h_0(t)$, so that $h\notin\mathfrak m_{T,t}$. Thus, applying \thmref{Th}{action-locconst}, we have proved that for $T\in\SMan_\knums$, $a_f$ has locally constant rank over $T$ if and only if \[ \forall t\in T_0:\Parens1{\gamma(t)=0\ \Longrightarrow\ \gamma_t=0}. \] If $T_0$ is connected, then this condition is equivalent to: $\gamma\in\Gamma(\SheafTypeface O_T^\times)$ or $\gamma=0$. The orbit exists if the orbit map $a_f$ attached to $f$ has locally constant rank, by \thmref{Prop}{orb-quot}. To compute the coadjoint action, we realise $G$ in matrix form and $\mathfrak g$ as left-invariant vector fields on $G$. For any $R\in\ssplfg{\knums}$, consider $3\times 3$ matrices with entries in $\SheafTypeface O_R$. We fix the parity on the matrices by decreeing that the rows and columns of nos.~$1,2,3$ have parities depending on those of $x,y,z$ according to Table~\ref{tab:paritydist}. \begin{table}[h] \caption{Parity distribution for the supergroups of Heisenberg type\label{tab:paritydist}} \begin{tabular}{|c|c|c||c|c|c|} \hline $\Abs0x$&$\Abs0y$&$\Abs0z$&1&2&3\\ \hline \hline $\mathstrut^{\displaystyle\mathstrut}\ev$&$\ev$&$\ev$&$\ev$&$\ev$&$\ev$\\ $\odd$&$\odd$&$\ev$&$\odd$&$\ev$&$\odd$\\ $\ev$&$\odd$&$\odd$&$\ev$&$\ev$&$\odd$\\ $\odd$&$\ev$&$\odd$&$\odd$&$\ev$&$\ev$\\ \hline \end{tabular} \end{table} Then matrices of the form \[ \begin{Matrix}1 1&a'&c'\\ 0&1&b'\\ 0&0&1 \end{Matrix} \] are even if and only if $\Abs0{a'}=\Abs0x$, $\Abs0{b'}=\Abs0y$, and $\Abs0{c'}=\Abs0z$. Let $G'(R)$ be the set of these matrices where in addition $\{a',b',c'\}\subseteq\Gamma(\SheafTypeface O_{R,\reals})$. Clearly, by defining the group multiplication by the multiplication of matrices, $G'$ is the point functor of a Lie supergroup. As we shall show presently, it is isomorphic to $G$. 
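Explicitly, the multiplication of $G'$ is given on points by the matrix product
\[
\begin{Matrix}1
1&a_1'&c_1'\\
0&1&b_1'\\
0&0&1
\end{Matrix}
\begin{Matrix}1
1&a_2'&c_2'\\
0&1&b_2'\\
0&0&1
\end{Matrix}
=
\begin{Matrix}1
1&a_1'+a_2'&c_1'+c_2'+a_1'b_2'\\
0&1&b_1'+b_2'\\
0&0&1
\end{Matrix}\!,
\]
so that the only non-Abelian contribution is the term $a_1'b_2'$ in the upper right-hand entry.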
Since $G'_0=G_0$ is the additive group of $\reals$, unless $G$ is purely even, it will be sufficient to show that the Lie superalgebra of left-invariant vector fields on $G'$ is precisely $\mathfrak g$. Let $(a,b,c)$ be the coordinate system on $G'$ defined on points by \[ h% \begin{Matrix}1 1&a'&c'\\ 0&1&b'\\ 0&0&1 \end{Matrix} \defi \begin{cases} (-1)^{\Abs0x}a'&h=a,\\ (-1)^{\Abs0y}b'&h=b,\\ (-1)^{\Abs0z}c'&h=c. \end{cases} \] Note that this sign convention is natural in the following sense: Consider the supermanifold $G'$ as the affine superspace of strictly upper triangular matrices. Then $a,b,c$ are the linear superfunctions which constitute the dual basis to the standard basis $(E_{12},E_{13},E_{23})$ of elementary matrices. Let $\frac\partial{\partial a},\frac\partial{\partial b},\frac\partial{\partial c}$ be the coordinate vector fields given by the coordinate system $(a,b,c)$. Let $R_x,R_y,R_z$ be the left-invariant vector fields on $G'$ determined by \[ R_x(1_{G'})=\frac\partial{\partial a}(1_{G'}),\quad R_y(1_{G'})=\frac\partial{\partial b}(1_{G'}),\quad R_z\defi[R_x,R_y], \] where we write $R_x(1_{G'})$ for $1_{G'}^\sharp\circ R_x$, \emph{etc.} We now proceed to compute these explicitly. Let $\phi^x:*[\tau_x]\longrightarrow G'$ be the infinitesimal flow of $R_x(1_{G'})$, where $\Abs0{\tau_x}=\Abs0x$. (Compare \thmref{Def}{inf-flow}.) For any function $h$ on $G'$, we have \[ \frac\partial{\partial\tau_x}\Big|_{{\tau_x}=0} h% \begin{Matrix}1 1&(-1)^{\Abs0x}\tau_x&0\\ 0&1&0\\ 0&0&1 \end{Matrix} = \Parens2{\frac\partial{\partial a}h}(1_{G'}), \] as one sees by inserting the coordinates $h=a,b,c$. Thus, we have \[ (\phi^x)^\sharp(h)=h \begin{Matrix}1 1&(-1)^{\Abs0x}\tau_x&0\\ 0&1&0\\ 0&0&1 \end{Matrix}\!. \] Similarly, we obtain \[ (\phi^y)^\sharp(h)=h% \begin{Matrix}1 1&0&0\\ 0&1&(-1)^{\Abs0y}\tau_y\\ 0&0&1 \end{Matrix} \] for the infinitesimal flow $\phi^y$ of $R_y(1_{G'})$.
We compute \[ \begin{split} (R_xh)% \begin{Matrix}1 1&a'&c'\\ 0&1&b'\\ 0&0&1 \end{Matrix}&= \frac\partial{\partial\tau_x}\Big|_{\tau_x=0}h% \begin{Matrix}1 1&a'+(-1)^{\Abs0x}\tau_x&c'\\ 0&1&b'\\ 0&0&1 \end{Matrix}\!,\\ (R_yh)% \begin{Matrix}1 1&a'&c'\\ 0&1&b'\\ 0&0&1 \end{Matrix}&= \frac\partial{\partial\tau_y}\Big|_{\tau_y=0}h \begin{Matrix}1 1&a'&(-1)^{\Abs0y}a'\tau_y+c'\\ 0&1&(-1)^{\Abs0y}\tau_y+b'\\ 0&0&1 \end{Matrix}\!, \end{split} \] by again inserting the coordinates for $h$. We obtain \begin{equation}\label{eq:rx-ry} R_x=\frac\partial{\partial a},\quad R_y=\frac\partial{\partial b}+(-1)^{\Abs0x\Abs0y}a\frac\partial{\partial c}. \end{equation} Here, we have used the parity identity $\Abs0x+\Abs0y+\Abs0z=\ev$. From these expressions, we see immediately that \begin{equation}\label{eq:rz} R_z=[R_x,R_y]=(-1)^{\Abs0x\Abs0y}\Bracks3{\frac\partial{\partial a},a\frac\partial{\partial c}}=(-1)^{\Abs0x\Abs0y}\frac\partial{\partial c}, \end{equation} and that this is the only non-zero bracket between the vector fields $R_x,R_y,R_z$. The sign $(-1)^{\Abs0x\Abs0y}$ that appears in the case of $\Abs0x=\Abs0y=\odd$ is an artefact of the parity distribution which is non-standard in that case. Since $R_x,R_y,R_z$ are linearly independent, they span the Lie superalgebra of $G'$, and it follows that $G\cong G'$. In what follows, we will identify these two supergroups. Moreover, we will identify $x,y,z$ with $R_x,R_y,R_z$, respectively. For further use below, we note that the right-invariant vector fields $L_x,L_y,L_z$, defined by \[ L_v\defi -i_G^\sharp\circ R_v\circ i_G^\sharp,\quad v=x,y,z, \] take on the form \begin{equation}\label{eq:lx-ly-lz} L_x=\frac\partial{\partial a}+b\frac\partial{\partial c},\quad L_y=\frac\partial{\partial b},\quad L_z=(-1)^{\Abs0x\Abs0y}\frac\partial{\partial c}. \end{equation} One immediately checks the bracket relation $[L_x,L_y]=-L_z$. We now calculate the adjoint action of $G$ in terms of the matrix presentation. 
Let $U\in\ssplfg{\knums}$ and $(g,v)\in_UG\times{\mathbb A}^\knums(\mathfrak g)$ (\cf Ref.~\cite{ap-sphasym} for the notation), where we write \[ g= \begin{Matrix}1 1&a'&c'\\ 0&1&b'\\ 0&0&1 \end{Matrix},\quad v=\xi x(1_G)+\eta y(1_G)+\zeta z(1_G)\in\Gamma((1_G(g)^*\SheafTypeface T_G)_\ev). \] According to the definition of $a$, $b$, and $c$, the generic point ${\id}_G\in_GG$ is \[ {\id}_G= \begin{Matrix}1 1&(-1)^{\Abs0x}a&(-1)^{\Abs0z}c\\ 0&1&(-1)^{\Abs0y}b\\ 0&0&1 \end{Matrix}\!. \] Denote the diagonal morphism of $U$ by $\Delta_U$. We compute, for any function $h$ on $G$: \begin{align*} \Ad(g)(v)(h)&=\Delta_U^\sharp(1\otimes v\otimes 1)h(g\,{\id}_G\,g^{-1})\\ &= \Delta_U^\sharp(1\otimes v\otimes 1)\,h\!% \begin{Matrix}1 1&(-1)^{\Abs0x}a&(-1)^{\Abs0z}c+(-1)^{\Abs0y}a'b-(-1)^{\Abs0x}ab'\\ 0&1&(-1)^{\Abs0y}b\\ 0&0&1 \end{Matrix}\!. \end{align*} To evaluate this further, we insert $a,b,c$ for $h$. For $h=a,b$, Equation \eqref{eq:rx-ry} tells us that we get $\xi$ and $\eta$, respectively. For $h=c$, we get, upon applying Equation \eqref{eq:rz}: \[ (-1)^{\Abs0x\Abs0y}\zeta+(-1)^{\Abs0x(\Abs0y+\odd)}\eta a'-(-1)^{\Abs0y}\xi b'. \] Thus, identifying $x$ with $x(1_G)$, \emph{etc.}, and writing $v$ in columns, we find \[ \Ad\! \begin{Matrix}1 1&a'&c'\\ 0&1&b'\\ 0&0&1 \end{Matrix}\! \begin{Matrix}1 \xi x\\ \eta y\\ \zeta z \end{Matrix} = \begin{Matrix}1 \xi x\\ \eta y\\ (\zeta+(-1)^{\Abs0x}\eta a'-(-1)^{(\Abs0x+\odd)\Abs0y}\xi b')z \end{Matrix}\!. \] One may verify the correctness of this result by rederiving the bracket relation \begin{align*} [x,y]&=\frac\partial{\partial\tau_y}\Big|_{\tau_y=0}(-1)^{\Abs0x\Abs0y}[x,\tau_yy]\\ &=\frac{\partial^2}{\partial\tau_y\partial\tau_x}\Big|_{\tau_x=\tau_y=0}(-1)^{\Abs0x\Abs0y}\Ad \begin{Matrix}1 1&(-1)^{\Abs0x}\tau_x&0\\ 0&1&0\\ 0&0&1 \end{Matrix} (\tau_yy)\\ &=\frac{\partial^2}{\partial\tau_y\partial\tau_x}\Big|_{\tau_x=\tau_y=0}(-1)^{\Abs0x\Abs0y}(-1)^{\Abs0x\Abs0y+\Abs0x}\tau_y((-1)^{\Abs0x}\tau_x)=z. 
\end{align*} It is now straightforward if somewhat tedious to derive \begin{equation}\label{eq:Adstar-explicit} \Ad^*\!% \begin{Matrix}1 1&a'&c'\\ 0&1&b'\\ 0&0&1 \end{Matrix}\!% \begin{Matrix}1 \xi^*x^*\\ \eta^*y^*\\ \zeta^*z^* \end{Matrix} = \begin{Matrix}1 (\xi^*+(-1)^{\Abs0y(\Abs0x+\odd)}b'\zeta^*)x^*\\ (\eta^*-(-1)^{\Abs0x}a'\zeta^*)y^*\\ \zeta^*z^* \end{Matrix} \end{equation} for any \[ (g,v^*)\in_UG\times{\mathbb A}^\knums(\mathfrak g^*),\quad g= \begin{Matrix}1 1&a'&c'\\ 0&1&b'\\ 0&0&1 \end{Matrix}, \quad v^*= \begin{Matrix}1 \xi^*x^*\\ \eta^*y^*\\ \zeta^*z^* \end{Matrix}\!. \] As for the adjoint action, we make a sanity check: \begin{align*} \ad^*(x)(z^*)&=\frac{\partial^2}{\partial\tau_z\partial\tau_x}\Big|_{\tau_x=\tau_z=0}(-1)^{\Abs0x\Abs0z}\Ad^*% \begin{Matrix}1 1&(-1)^{\Abs0x}\tau_x&0\\ 0&1&0\\ 0&0&1 \end{Matrix}(\tau_zz^*)\\ &=\frac{\partial^2}{\partial\tau_z\partial\tau_x}\Big|_{\tau_x=\tau_z=0}(-1)^{\Abs0x\Abs0z+\Abs0x}(-(-1)^{\Abs0x}\tau_x)\tau_zz^*=-(-1)^{\Abs0x\Abs0z}z^*, \end{align*} which is in agreement with our previous computations. Let us return to our $T$-valued point $f$ in the case where $\alpha=\beta=0$, \ie we have $f=\gamma z^*\in_T{\mathbb A}(\mathfrak g^*_\reals)$. Then \begin{equation}\label{eq:heisen-isotropy-group} (t,g)\in_UG_f\ \Longleftrightarrow\ a't^\sharp(\gamma)=b't^\sharp(\gamma)=0,\quad g=\begin{Matrix}11&a'&c'\\ 0&1&b'\\ 0&0&1\end{Matrix}\!. \end{equation} Moreover, the orbit map $a_f:G_T\longrightarrow{\mathbb A}(\mathfrak g_\reals^*)$ takes the form \[ (a_f)^\sharp(x)=\gamma b,\quad (a_f)^\sharp(y)=-(-1)^{\Abs0x\Abs0z}\gamma a,\quad (a_f)^\sharp(z)=\gamma, \] in terms of coordinates $a,b,c$ on $G$ and the (linear) coordinates $x,y,z$ on ${\mathbb A}(\mathfrak g_\reals^*)$, given by \[ h% \begin{Matrix}1 \xi^*x^*\\ \eta^*y^*\\ \zeta^*z^* \end{Matrix} = \begin{cases} (-1)^{\Abs0x}\xi^*x(x^*)=\xi^*&h=x,\\ (-1)^{\Abs0y}\eta^*y(y^*)=\eta^*&h=y,\\ (-1)^{\Abs0z}\zeta^*z(z^*)=\zeta^*&h=z. 
\end{cases} \] We will now analyse this further, separately in the two cases in which $G$ is not a Lie group (\ie when at least one of $x,y,z$ is odd). \subsubsection{\texorpdfstring{The Clifford supergroup of dimension $1|2$}{The Clifford supergroup of dimension 1|2}} Assume that $\Abs0x=\Abs0y=\odd$. In this case, $G$ is called the \Define{Clifford supergroup}. This case has been given a definitive treatment by Neeb and Salmasian \cites{ns11,salmasian}, see also Ref.~\cite{ahl-cliff} for the related harmonic analysis. Our emphasis here will be to put it in the general context of the orbit method. Moreover, we shall obtain the full family of Clifford modules for any non-trivial central character in one sweep. We will take $T\defi{\mathbb A}^1\setminus0$ and $\gamma\defi u$, the standard coordinate function on ${\mathbb A}^1$, so $f=\gamma z^*:T\longrightarrow{\mathbb A}(\mathfrak g_\Bbbk^*)$. Since $\gamma$ is invertible, $a_f$ has locally constant rank over $T$, and in particular, $G_f$ is a Lie supergroup over $T$. It is completely determined by its underlying Lie group $(G_f)_0$ over $T_0$ and its Lie superalgebra $\mathfrak g_f$ (over $\SheafTypeface O_T$), defined by \[ \mathfrak g_f(U)\defi\Set2{v=\textstyle\sum_jv^je_j\in\SheafTypeface O_T(U)\otimes_\knums\mathfrak g}{\textstyle\sum_jv^j(1_G^\sharp\circ e_j\circ a_f^\sharp)=\sum_jv^jf^\sharp\circ a_{e_j}=0}, \] for any open set $U\subseteq T_0$. In view of Equation \eqref{eq:heis-isotropy}, we have $\mathfrak g_f=\SheafTypeface O_Tz$. For the superspace $U=*$, the condition in Equation \eqref{eq:heisen-isotropy-group} is void. We conclude that the point functor of $G_f$ is given by \[ G_f(U)=\Set2{\Parens2{t,\begin{Matrix}0 1&0&c'\\ 0&1&0\\ 0&0&1 \end{Matrix}}}{t,c'\in\Gamma(\SheafTypeface O_{U,\reals,\ev})}, \] for all $U\in\ssplfg{\knums}$, so that $G_f\cong{\mathbb A}^1_T$ with the standard addition of ${\mathbb A}^1$ as multiplication over $T$.
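Note that the fibre dimensions match: $G_T$ has fibre dimension $1|2$ over $T$, and $G_f\cong{\mathbb A}^1_T$ has fibre dimension $1|0$, so any orbit must have fibre dimension
\[
(1|2)-(1|0)=0|2
\]
over $T$.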
The orbit $G\cdot f=G_T/G_f$ is $T\times{\mathbb A}^{0|2}$ with fibre coordinates $a,b$. The local embedding $\tilde a_f:G\cdot f\longrightarrow{\mathbb A}(\mathfrak g_\Bbbk^*)_T$ over $T$ is given by \[ (\tilde a_f)^\sharp(x)=\gamma b,\quad (\tilde a_f)^\sharp(y)=-\gamma a,\quad (\tilde a_f)^\sharp(z)=\gamma. \] Again following the general philosophy of geometric quantisation or Kirillov's orbit method, we choose a polarising subalgebra. To avoid reality problems, we consider the case of $\knums=\cplxs$. In the real case, we would have to complexify anyway. A polarising subalgebra corresponds here to the preimage $\mathfrak h$ in $\mathfrak g_T=\SheafTypeface O_T\otimes\mathfrak g$ of a locally direct submodule of $\mathfrak g_T/\mathfrak g_f$ which is maximally totally isotropic with respect to the supersymplectic form induced by $\omega_f$. We will consider the case of \[ \mathfrak h\defi\Span0{x,z}_{\SheafTypeface O_T}. \] The image in $\mathfrak g_T/\mathfrak g_f$ is indeed maximally totally isotropic. The space of $\mathfrak h$-polarised sections of the canonical line bundle on $G\cdot f$ is the $\SheafTypeface O$-submodule $\SheafTypeface H_f^\mathfrak h$ of $\SheafTypeface H_{G_T}$ introduced in \thmref{Prop}{pol-sect}. It is given by \[ \SheafTypeface H(t)\defi\Set1{\psi}{\psi\in\Gamma(\SheafTypeface O_{G_U,\ev}),R_x\psi=0,R_z\psi=-it^\sharp(\gamma)\psi}, \] for $U\in\ssplfg{\cplxs}$, $t\in_UT$. By Equations \eqref{eq:rx-ry} and \eqref{eq:rz}, this amounts to \[ \psi=\vphi e^{it^\sharp(\gamma)c} \] where $\vphi\in\Gamma(\SheafTypeface O_{U\times{\mathbb A}^{0|1},\ev})$, and we consider $b$ as fibre coordinate on $(U\times{\mathbb A}^{0|1})/U$. Thus, $\psi$ admits an expansion in the powers $b^0,b^1$ of $b$, with coefficients in functions on $U$. Thus, $\SheafTypeface H$ is the functor of the free $\SheafTypeface O_T$-module of rank $1|1=\dim\Gamma(\SheafTypeface O_{{\mathbb A}^{0|1}})$. 
We denote the corresponding $\SheafTypeface O_T$-module by the same letter. Denoting the restriction of $\lambda_{G_T}$ to $\SheafTypeface H$ by $\pi$, we compute for $g=\begin{Matrix}01&a'&c'\\ 0&1&b'\\ 0&0&1\end{Matrix}\in_UG$: \[ \begin{split} \pi(g)\psi&=(({\id}_U,g^{-1})\times m_G)^\sharp\Parens1{\vphi(b_1+b_2)e^{it^\sharp(\gamma)(c_1+c_2+a_1b_2)}}\\ &=\vphi(b-b')e^{it^\sharp(\gamma)(-c'+a'b'+c-a'b)}=e^{it^\sharp(\gamma)(-a'b+a'b'-c')}\psi(b-b') \end{split} \] Formally differentiating this expression, we readily obtain the infinitesimal action \[ d\pi(x)=-ibt^\sharp(\gamma),\quad d\pi(y)=-\frac\partial{\partial b},\quad d\pi(z)=it^\sharp(\gamma). \] Since the supercommutator of $d\pi(x)$ and $d\pi(y)$ is an anticommutator, we recognise this as the `fermionic Fock space' or `spinor module' of the $\SheafTypeface O_T$-Clifford algebra $\mathop{\mathrm{Cliff}}(2,\SheafTypeface O_T)\defi(\SheafTypeface O_T\otimes\Uenv0{\mathfrak g})/(z-i\gamma\cdot 1)$. That is, we have a trivial bundle of `spinor' modules $\cplxs^{1|1}$ over the base space $\reals^\times$, where the central character on the fibre at $t\in\reals^\times$ is $i\gamma(t)=it$. (The fibres are unital algebra representations of $\mathop{\mathrm{Cliff}}(2,\cplxs)$.) \subsubsection{\texorpdfstring{The odd Heisenberg supergroup of dimension $1|2$}{The odd Heisenberg supergroup of dimension 1|2}} Assume now that $\Abs0x=\ev$, $\Abs0y=\Abs0z=\odd$. In this case, we call $G$ the \Define{odd Heisenberg supergroup}, since it is a central extension of the Abelian Lie supergroup ${\mathbb A}^{1|1}$ with respect to a $2$-cocycle corresponding to an odd supersymplectic form. We will take $T\defi{\mathbb A}^{0|1}$ and $\gamma\defi\theta$, the standard coordinate function on ${\mathbb A}^{0|1}$, so $f=\theta z^*:T\longrightarrow{\mathbb A}(\mathfrak g_\Bbbk^*)$.
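Observe that here, in contrast with the Clifford case, $a_f$ does not have locally constant rank over $T$: by the criterion derived above, this would require that $\gamma_t=0$ whenever $\gamma(t)=0$, whereas for $\gamma=\theta$ we have, at the unique point $t\in T_0$,
\[
\gamma(t)=0,\quad\text{but}\quad\gamma_t=\theta\neq0.
\]
Thus, the existence of an orbit supermanifold is not guaranteed in this case.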
In this case, Equation \eqref{eq:heisen-isotropy-group} gives \[ G_f=(\reals,\SheafTypeface O_{G_f}),\quad\SheafTypeface O_{G_f}\defi\SheafTypeface O_{{\mathbb A}^1}[b,c,\theta]/(a\theta,b\theta), \] where $b,c$ are odd, $a$ is the standard coordinate function on ${\mathbb A}^1$, and the embedding $j:G_f\longrightarrow G_T$ is the obvious one. Clearly, $G_f$ is not a supermanifold over $T$. To determine the orbit, let $h$ be a function on $G_T$ and expand \[ h=h_0+h_bb+h_cc+h_\theta\theta+h_{bc}bc+h_{b\theta}b\theta+h_{c\theta}c\theta+h_{bc\theta}bc\theta \] where $h_I$ are functions on ${\mathbb A}^1$. The multiplication $m$ of $G_T$ is given by \[ m^\sharp(a)=a_1+a_2,\quad m^\sharp(b)=b_1+b_2,\quad m^\sharp(c)=c_1+c_2+a_1b_2, \] where we write $a_i\defi p_i^\sharp(a)$, \etc{} Thus, writing $m'\defi m\circ({\id}_{G_T}\times_Tj)$, we find that \[ \begin{split} m^{\prime\sharp}(h)=&\;m^{\prime\sharp}(h_0)+m^{\prime\sharp}(h_b)(b_1+b_2)+m^{\prime\sharp}(h_c)(c_1+c_2+a_1b_2)+m^{\prime\sharp}(h_\theta)\theta\\ &+m^{\prime\sharp}(h_{bc})(b_1c_1+b_1c_2+a_1b_1b_2-c_1b_2+b_2c_2)+m^{\prime\sharp}(h_{b\theta})b_1\theta\\ &+m^{\prime\sharp}(h_{c\theta})(c_1\theta+c_2\theta)+m^{\prime\sharp}(h_{bc\theta})(b_1c_1\theta+b_1c_2\theta). \end{split} \] Since $p_1^\sharp(h)$ contains only $b_1,c_1$, if $h$ is invariant, then all summands involving $b_2$ or $c_2$ have to vanish. Moreover, on $G_T\times_TG_f$, we have \[ m^{\prime\sharp}(h_I)\theta=h_I(a_1+a_2)\theta=h_I(a_1)\theta+h_I'(a_1)a_2\theta=h_I(a_1)\theta=p_1^\sharp(h_I)\theta, \] so the invariance condition is verified automatically for the $\theta$ and $b\theta$ components. Therefore, $h$ is left $G_f$-invariant if and only if \[ \begin{cases} m^{\prime\sharp}(h_I)=p_1^\sharp(h_I), &\text{ for }I=0,\\ h_I=0,&\text{ for }I=b,c,bc,c\theta,bc\theta. \end{cases} \] In other words, $h$ is of the form \[ h=h_0+h_\theta\theta+h_{b\theta}b\theta \] where $h_0$ is constant and $h_\theta$, $h_{b\theta}$ are arbitrary. 
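Note the relations satisfied by the invariant generators: writing $\eps\defi b\theta$, we compute
\[
\eps^2=b\theta b\theta=-b^2\theta^2=0,\qquad\eps\theta=b\theta^2=0,
\]
since $b$ and $\theta$ are odd.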
It follows that the colimit in $\SSp_T$ of $m,p_1:G_T\times_TG_f\longrightarrow G_T$ is given by \[ Q\defi\Parens0{*,\SheafTypeface O_Q},\quad\SheafTypeface O_Q=\Set1{f\in\Gamma(\SheafTypeface O_{{\mathbb A}^1})[\eps|\theta]/(\eps^2,\eps\theta)}{f_0\in\knums}, \] together with the morphism $\pi_f:G_T\longrightarrow Q$ determined by \[ \pi_f^\sharp(a)=a,\quad\pi_f^\sharp(\eps)=b\theta,\quad\pi_f^\sharp(\theta)=\theta, \] see \thmref{Rem}{colimit-explicit}. By \thmref{Prop}{wgeom-cat}, $Q$ is a regular superspace in the sense of \cite{ahw-sing}*{Definition 4.12}, but it is not locally finitely generated, because it is not a subspace of $Y_q\defi(*,\knums[\kern-.35ex[ a]\kern-.35ex][\theta^1,\dotsc,\theta^q])$ for any $q$. (If $Q$ were locally finitely generated, then it would have to be a subspace of some $Y_q$ \cite{ahw-sing}*{Example 3.50}.) Nonetheless, we have the subrepresentation $\SheafTypeface H_f^\mathfrak h$ of $\SheafTypeface H_{G_T}$ from \thmref{Prop}{pol-sect} for polarising subalgebras $\mathfrak h\subseteq\mathfrak g_T$. We choose \[ \mathfrak h\defi\Span0{x,z}_{\SheafTypeface O_T}. \] Once again, $\SheafTypeface H=\SheafTypeface H_f^\mathfrak h$ is given by \[ \SheafTypeface H(t)\defi\Set1{\psi}{\psi\in\Gamma(\SheafTypeface O_{G_U,\ev}),R_x\psi=0,R_z\psi=-it^\sharp(\gamma)\psi} \] for $U\in\ssplfg{\knums}$, $t\in_UT$. We see that the condition on $\psi$ amounts to \[ \psi=\vphi e^{it^\sharp(\gamma)c}=\vphi(1+it^\sharp(\gamma)c), \] where $\vphi\in\Gamma(\SheafTypeface O_{U\times{\mathbb A}^{0|1},\ev})$ admits a finite expansion in $b$ with coefficients in functions on $U$. Again, this corresponds to the $\SheafTypeface O_T$-module $\SheafTypeface O_T\otimes\smash{\cplxs^{1|1}}$. The restriction $\pi$ of $\lambda_{G_T}$ to $\SheafTypeface H$ is given by the same formula as before: \[ \pi(g)\psi=e^{it^\sharp(\gamma)(-a'b+a'b'-c')}\psi(b-b'),\quad\forall g=\begin{Matrix}1&a'&c'\\ 0&1&b'\\ 0&0&1\end{Matrix}.
\] Formally differentiating this expression, one obtains the following infinitesimal action: \[ d\pi(x)=-it^\sharp(\gamma)b,\quad d\pi(y)=-\frac\partial{\partial b},\quad d\pi(z)=it^\sharp(\gamma). \] Since the supercommutator of $d\pi(x)$ and $d\pi(y)$ is an ordinary commutator, this is a parity-reversed Schr\"odinger representation, parametrised by $T={\mathbb A}^{0|1}$. If instead we consider the polarising subalgebra $\mathfrak h=\Span0{y,z}_{\SheafTypeface O_T}$, then the dimension of the representation $\SheafTypeface H_f^\mathfrak h$ changes drastically, although the action is formally very similar. (Essentially, $a$ and $b$ exchange their roles.) This has also been observed by Tuynman \cite{tuyn10b} in his setting and seems to reflect the fact that in this case, the orbit is not a supermanifold.
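For completeness, here is a routine check, not spelled out above, that the bracket relation $[x,y]=z$ is preserved by $d\pi$. Since $\frac\partial{\partial b}$ and $t^\sharp(\gamma)$ are both odd, they anticommute, so $\frac\partial{\partial b}\Parens1{t^\sharp(\gamma)b\psi}=-t^\sharp(\gamma)\Parens1{\psi-b\frac{\partial\psi}{\partial b}}$, and therefore

```latex
\[
[d\pi(x),d\pi(y)]\psi
 = it^\sharp(\gamma)\,b\,\frac{\partial\psi}{\partial b}
 + it^\sharp(\gamma)\Parens1{\psi-b\,\frac{\partial\psi}{\partial b}}
 = it^\sharp(\gamma)\,\psi
 = d\pi(z)\psi,
\]
```

in accordance with the fact that $d\pi(x)$ is even and $d\pi(y)$ is odd, so that their supercommutator is indeed the ordinary commutator.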
\section{Introduction} \label{sec:intro} The Sun has a complicated magnetic field structure; many features of the Sun and proxies for the solar activity are related to the evolution of the Sun's magnetic field. The solar mean magnetic field (SMMF) is a surprising, non-zero measurement of the imbalance of opposite polarities of magnetic flux observed on the full visible disc of the Sun \citep{svalgaard_suns_1975}, and is defined as the mean line-of-sight (LOS) magnetic field when observing the Sun-as-a-star \citep{scherrer_mean_1977, scherrer_mean_1977-1, garcia_integrated_1999}. In the literature the SMMF is also sometimes referred to as the general magnetic field (GMF) \citep{severny_time_1971} or the mean magnetic field (MMF) \citep{kotov_mean_2008} of the Sun. Observations of the SMMF have typically been made by measuring the Zeeman splitting of spectral lines using a ground-based Babcock-type magnetograph \citep{scherrer_mean_1977}, although more recently the SMMF has been calculated from full-disc LOS magnetograms taken from space-borne telescopes such as the Solar Dynamics Observatory Helioseismic and Magnetic Imager (SDO/HMI) \citep{kutsenko_contribution_2017, bose_variability_2018}. It is understood that the strength of the SMMF may vary depending on the spectral line used to measure it \citep{kotov_mean_2008, kotov_enigmas_2012}; however, the SMMF varies slowly with the solar activity cycle, with an amplitude on the order of a Gauss during solar maximum and a tenth of a Gauss during solar minimum \citep{plachinda_general_2011}. In addition, the SMMF displays a strong, quasi-coherent rotational signal which must arise from inhomogeneities on the solar disc with lifetimes of several rotations \citep{chaplin_studies_2003, xie_temporal_2017}. Despite existing literature on SMMF observations spanning several decades, the SMMF origin remains an open debate in solar physics. 
The principal component of the SMMF is commonly assumed to arise from weak, large-scale magnetic flux, distributed over the entire solar disc, rather than from magnetic flux concentrations (MFCs), active regions (ARs), or sunspots \citep{severny_time_1971, scherrer_mean_1977, xiang_ensemble_2016}. Conversely, \citet{scherrer_mean_1972} found that the SMMF was most highly correlated with only the inner-most one quarter, by area, of the solar disc, which is more sensitive to active latitudes. In recent literature, \citet{bose_variability_2018} provided a novel approach to understanding the SMMF whereby they decomposed the SMMF through feature identification and pixel-by-pixel analysis of full-disc magnetograms. They concluded that: (i) the observed variability in the SMMF lies in the polarity imbalance of large-scale magnetic field structures on the visible surface of the Sun; (ii) the correlation between the flux from sunspots and the SMMF is statistically insignificant; and (iii) more critically that the background flux dominates the SMMF, accounting for around 89 per cent of the variation in the SMMF. However, there still remained a strong manifestation of the rotation signal in the background component presented by \citet{bose_variability_2018}. This signal is indicative of inhomogeneous magnetic features with lifetimes on the order of several solar rotations, rather than the short-lived, weaker fields usually associated with the large-scale background. This therefore raises the question of whether their technique assigned flux from MFCs or ARs to the background. It is possible that some of the strong flux may have been assigned to the background signal, which then contributed to this rotation signal. Despite these findings, it is known that the strength of the SMMF is weaker during solar minimum, when there are fewer ARs, and stronger during solar maximum, when there are more ARs \citep{plachinda_general_2011}.
This suggests that the evolution of ARs is relevant to the evolution of the SMMF. There is a contrasting view in the literature which claims AR flux dominates the SMMF. \citet{kutsenko_contribution_2017} state that a large component of the SMMF may be explained by strong and intermediate flux regions. These regions are associated with ARs, suggesting that between 65 and 95 per cent of the SMMF could be attributed to strong and intermediate flux from ARs, while the fraction of the occupied area varied between 2 and 6 per cent of the solar disc, depending on the chosen threshold for separating weak and strong flux. This finding suggests that strong, long-lived, inhomogeneous MFCs produce the strong rotation signal in the SMMF; however, \citet{kutsenko_contribution_2017} also discuss that there is an entanglement of strong flux (typically associated with ARs) and intermediate flux (typically associated with network fields and remains of decayed ARs). This means it is difficult to determine whether strong ARs or their remnants contribute to the SMMF. The Sun's dynamo and hence magnetic field is directly coupled to the solar rotation. The Sun exhibits latitude-dependent and depth-dependent differential rotation with a sidereal, equatorial period of around 25~days \citep{howe_solar_2009}. To Earth-based observers, the synodic rotation of the Sun is observed at around 27~days, and the SMMF displays a dominant signal at this period, and its harmonics \citep{chaplin_studies_2003, xie_temporal_2017, bose_variability_2018}. It was also reported by \citet{xie_temporal_2017} that the differential solar rotation was observed in the SMMF with measured synodic rotational periods of $28.28 \, \pm \, 0.67$~days and $27.32 \, \pm \, 0.64$~days for the rising and declining phases, respectively, of all of the solar cycles in their considered time-frame.
On the other hand, \citet{xiang_ensemble_2016} utilised ensemble empirical mode decomposition (EEMD) analysis to extract modes of the SMMF and found two rotation periods which are derived from different strengths of magnetic flux elements. They found that a rotation period of 26.6~days was related to weaker magnetic flux elements within the SMMF, while the measured period was slightly longer, at 28.5~days, for stronger magnetic flux elements. In this work, we use high-cadence (sub-minute) observations of the SMMF, made by the Birmingham Solar Oscillations Network (BiSON) \citep{chaplin_bison_1996, chaplin_noise_2005, hale_performance_2016}, to investigate its morphology. This work provides a frequency-domain analysis of the SMMF, in which a rotationally-modulated (RM) component with a period of around 27~days is clearly observed as several peaks in the power spectrum. The paper is organised as follows. In Section~\ref{sec:data}, we provide an overview of the BiSON data used in this work, describing how the observations are made and how the SMMF data are acquired. As this work provides an investigation of the SMMF in the frequency domain, in Section~\ref{sec:method} we discuss in detail how the power spectrum was modelled. In Section~\ref{sec:results} the results from modelling the power spectrum are presented. We outline the key findings and draw similarities between the properties of the RM component and ARs, suggesting that ARs may provide a strong contribution to the SMMF. Conclusions and discussions are presented in Section~\ref{sec:conc}. \section{Methodology} \label{sec:method} \subsection{Parametrization of the SMMF Power Spectrum} \label{sec:method_model_lifetimes} As we have 40-second cadence observations of the SMMF, we were able to investigate the power spectrum up to a Nyquist frequency of 12500~$\mu$Hz. There are a number of features within the full SMMF power spectrum, shown in Fig.~\ref{fig:SMMF_40s_PSD}.
\begin{figure} \includegraphics[width=\columnwidth]{Figures/BiSON_full_PSD.pdf} \caption{Power spectrum of 40-second cadence SMMF from the Sutherland BiSON station observed between 1992 -- 2012 on a logarithmic scale up to the full Nyquist frequency.} \label{fig:SMMF_40s_PSD} \end{figure} The peaks between 0.2 -- 2.0 $\mu\mathrm{Hz}$ in Fig.~\ref{fig:SMMF_FT} are a manifestation of rotation in the SMMF. The distinct set of peaks indicates the existence of a long-lived, inhomogeneous, rotationally-modulated (RM) source. Due to the quasi-coherent nature of the SMMF, and based on the comparatively short timescales for the emergence of magnetic features compared to their slow decay \citep{zwaan_solar_1981, harvey_properties_1993, hathaway_sunspot_2008}, we assume the evolution of individual features that contribute to the RM component may be modelled by a sudden appearance and a long, exponential decay. In the frequency-domain, each of the RM peaks may therefore be described by a Lorentzian distribution: \begin{equation} L_n(\nu; \Gamma, A_n, \nu_n) = \frac{2{A_n}^2}{\pi \Gamma} \left(1 + \left(\frac{(\nu - \nu_n)}{\Gamma/2} \right)^2\right)^{-1} \, , \label{eq:symm_lorentzian} \end{equation} where $\nu$ is frequency, $A_n$ is the root-mean-square amplitude of the peak, $\Gamma$ is the linewidth of the peak, $\nu_n$ is the frequency of the peak, and $n$ simply flags each peak. The mean-squared power in the time domain from the RM component of the SMMF is given by the sum of the ${A_n}^2$ of the individual harmonics in the power spectrum. Through this formulation we can measure the $e$-folding time ($T_e$) of the amplitude of the RM component, as it is related to the linewidth of the peak by: \begin{equation} \Gamma = (\pi \, T_e)^{-1} \, . 
\label{eq:mode_lifetime} \end{equation} The low-frequency power due to instrumental drifts, solar activity, and the window function can be incorporated into the model via the inclusion of a zero-frequency centred Lorentzian \citep{basu_asteroseismic_2017}, given by: \begin{equation} H(\nu; \sigma, \tau) = \frac{4{\sigma}^2\tau}{1 + (2\pi \nu\tau)^2} \, , \label{eq:harvey} \end{equation} where $\sigma$ is the characteristic amplitude of the low frequency signal, and $\tau$ describes the characteristic timescale of the excursions around zero in the time-domain. Finally, the high frequency power is accounted for by the inclusion of a constant offset due to shot-noise, $c$ \citep{basu_asteroseismic_2017}. In the absence of any gaps in the data, the model function used to describe the power spectrum is given by: \begin{equation} P(\nu, \,{\bf a}) = \sum_{n=1}^{N} L_n(\nu; \Gamma, A_n, \nu_n) \, + \, H(\nu; \sigma, \tau) \, + \, c \, ; \label{eq:PSD_fit} \end{equation} the subscript, $n$, describes a single peak in the power spectrum. In implementing the model we constrain the mode frequencies such that they must be integer multiples of $\nu_0$: $\nu_n \, = \, n \nu_0$. This means that we define a single rotation frequency only, $\nu_0$, and subsequent peaks are the harmonic frequencies. It is worth noting explicitly that this function assumes the line width of each Lorentzian peak is the same; only the amplitudes and central frequencies differ. The duty cycle of the Sutherland SMMF observations is very low, $\sim 15$ per cent; it was therefore important to take into consideration the effect that gaps in the data have on the power spectrum. Gaps in the data cause an aliasing of power from the signal frequencies to other frequencies in the spectrum, and the nature of the aliasing depends on the properties of the window function of the observations.
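The model above is straightforward to evaluate numerically. The following is a minimal sketch (not the code used in this work) of equation~(\ref{eq:PSD_fit}) together with the $e$-folding relation of equation~(\ref{eq:mode_lifetime}); all parameter values passed in are purely illustrative.

```python
import math

def lorentzian(nu, gamma, amp, nu_n):
    """Symmetric Lorentzian peak; its height at nu = nu_n is 2*amp**2 / (pi*gamma)."""
    return (2.0 * amp ** 2 / (math.pi * gamma)) / (1.0 + ((nu - nu_n) / (gamma / 2.0)) ** 2)

def harvey(nu, sigma, tau):
    """Zero-frequency-centred Lorentzian describing the low-frequency background."""
    return 4.0 * sigma ** 2 * tau / (1.0 + (2.0 * math.pi * nu * tau) ** 2)

def model_psd(nu, nu0, gamma, amps, sigma, tau, c):
    """Harmonics at nu_n = n*nu0 with a shared linewidth, plus background and shot noise."""
    peaks = sum(lorentzian(nu, gamma, a, n * nu0) for n, a in enumerate(amps, start=1))
    return peaks + harvey(nu, sigma, tau) + c

def e_folding_time(gamma):
    """T_e = 1/(pi*Gamma): the e-folding time of the RM amplitude."""
    return 1.0 / (math.pi * gamma)
```

For instance, a linewidth of $\Gamma = 0.0264\,\mu$Hz corresponds to an $e$-folding time of $1/(\pi \times 2.64\times10^{-8}\,\mathrm{Hz}) \approx 1.2\times10^{7}$~s, i.e. roughly 140~days.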
Periodic gaps in the data give rise to sidebands in the power spectrum and random gaps cause a more broadband shifting of power, meaning that some power from the low-frequency RM component is aliased to higher frequencies. The daily, periodic gaps in the BiSON data, due to single-site observations, produce sidebands around a frequency of 1/day, i.e. $\sim$~11.57 $\mu$Hz, and its harmonics. The aliased power is therefore located at frequencies: \begin{equation} \nu_{n, i} = \frac{i}{\mathrm{day}} \pm \nu_{n} \, , \label{eq:sidebands} \end{equation} where $i$ denotes the sideband number and $n$ denotes the harmonic of the mode. The sideband structure implied by equation~(\ref{eq:sidebands}) is shown clearly in Fig.~\ref{fig:sidebands}. \begin{figure} \centering \includegraphics[width=\columnwidth]{Figures/sideband.pdf} \caption{Locations of aliased power in sideband peaks. The orange dotted-lines show the locations of frequencies at multiples of 1/day. The green dashed-lines show the location of the sideband peaks -- harmonic frequencies reflected around multiples of 1/day. The inset shows a zoom of one set of sideband peaks around 1/day.} \label{fig:sidebands} \end{figure} The tails of the aliased peaks are long, therefore aliased power was re-distributed across the entire frequency range which produced a red-noise-like background component. To understand the broadband effects of the window function we generated an artificial time series from a single Lorentzian (representing the fundamental RM component). The artificial data were generated by calculating the inverse Fourier transform of the power spectrum which had the same Nyquist frequency and frequency resolution of the SMMF power spectrum. We then injected the gaps from the BiSON observations into this artificial time series, to ensure the window function was the same as the BiSON SMMF, and finally investigated the resultant power spectrum both with and without the window function.
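The sideband geometry is easy to tabulate: each harmonic $n\nu_0$ is reflected around the integer multiples of the 1/day alias frequency. A small illustrative sketch (frequencies in $\mu$Hz; not the code used in this work):

```python
SECONDS_PER_DAY = 86400.0
NU_DAY = 1e6 / SECONDS_PER_DAY  # the 1/day alias frequency in microhertz (~11.574 uHz)

def sideband_freqs(nu0, n_harmonics=4, n_sidebands=2):
    """Frequencies at which aliased power appears: harmonics n*nu0 reflected
    around multiples i of the 1/day alias frequency (all in microhertz)."""
    freqs = []
    for i in range(1, n_sidebands + 1):
        for n in range(1, n_harmonics + 1):
            freqs.append(i * NU_DAY - n * nu0)
            freqs.append(i * NU_DAY + n * nu0)
    return sorted(freqs)
```

With $\nu_0 = 0.427\,\mu$Hz, the innermost pair of first sidebands sits at $11.574 \mp 0.427 \approx 11.15$ and $12.00\,\mu$Hz, bracketing 1/day as in Fig.~\ref{fig:sidebands}.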
Fig.~\ref{fig:PSDs} shows the effect of the window function on the resultant power spectrum. The power spectrum generated from the time series without gaps produces a single Lorentzian peak (amber and green lines). The injection of gaps into the time series (orange line) produces both the red-noise-like background component and the sidebands, which bears a striking resemblance to the power spectrum of the BiSON SMMF observations (black line) and also the power spectrum of the window function (blue line). \begin{figure} \includegraphics[width=\columnwidth]{Figures/gap_test.pdf} \caption{The effects of the window function on the power spectrum are shown using an artificial data set and compared to the BiSON power spectrum. Black line: BiSON SMMF power spectrum; blue line: power spectrum of the window function; green and dark-orange lines: the power spectrum of the artificial data without and with gaps, respectively; amber line: the input peak used to generate the artificial data over-plotted. The power spectra of the BiSON SMMF and the window function have been shifted upwards by a factor of 6 and 30, respectively, for clarity.} \label{fig:PSDs} \end{figure} This shows that the BiSON SMMF spectrum has a red-noise-like background component that is not due to any ephemeral signal, but due to the re-distribution of power by the window function of the BiSON observations. In the time domain, the observed data, $y(t)$, includes the window function which, analytically, we can express as a multiplication of the uninterrupted, underlying signal, $f(t)$, with the window function, $g(t)$: \begin{equation} y(t) = f(t) \, g(t) \label{eq:timeseries} \, , \end{equation} where: \begin{equation} g(t) = \begin{cases} 1 & \text{for } |B(t)| > 0 \\ 0 & \text{for } |B(t)| = 0 \end{cases} \, . \label{eq:window} \end{equation} Multiplication in the time domain becomes a convolution in the frequency domain.
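This convolution can be sketched numerically. The following is a minimal, brute-force version (illustrative only; for $\sim 10^7$ bins an FFT-based convolution would be used in practice), with the kernel normalised to unit total weight so that total power is conserved, in keeping with Parseval's theorem:

```python
def convolve_conserving_power(model, window_power):
    """Convolve a model power spectrum with the power spectrum of the window
    function. The kernel is normalised to unit total weight so that, away from
    the edges of the spectrum, total power is conserved (Parseval's theorem)."""
    total_w = sum(window_power)
    w = [x / total_w for x in window_power]  # unit total weight
    half = len(w) // 2
    out = [0.0] * len(model)
    for i in range(len(model)):
        acc = 0.0
        for j, wj in enumerate(w):
            k = i - j + half  # centred kernel
            if 0 <= k < len(model):
                acc += model[k] * wj
        out[i] = acc
    return out
```

Edge truncation leaks a little power when significant model power lies within half a kernel width of the spectrum boundary, which is why an explicit renormalisation of the convolved spectrum is still advisable.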
To model the observed power spectrum in a robust manner, taking into account the intricacies caused by gaps in the data, we used a model which was formed of a model power spectrum, $P(\nu; \,{\bf a})$ (equation~(\ref{eq:PSD_fit})), convolved with the Fourier transform of the window function of the observations ($\left|\mathcal{F}\left[g(t)\right]\right|^2$), i.e., \begin{equation} P'(\nu, \,{\bf a}) \, = \, P(\nu, \,{\bf a}) * \left|\mathcal{F}\left[g(t)\right]\right|^2 \, . \label{eq:PSD_fit_conv} \end{equation} Care was taken during this operation to ensure Parseval's theorem was obeyed, that no power was lost or gained from the convolution: \begin{equation} \sum_{\nu} P'(\nu) \, = \,\sum_{\nu} P(\nu) \, = \, \frac{1}{N} \sum_{t} B(t)^2 \, , \label{eq:parseval} \end{equation} where $N$ here is the number of observed cadences. \subsection{Modelling the SMMF Power Spectrum} \label{sec:method_modelling} Parameter estimation using the model defined in the previous section, including all parameters, ${\bf a}$, was performed in a Bayesian manner using a Markov Chain Monte Carlo (MCMC) fitting routine. Following from Bayes' theorem we can state that the posterior probability distribution, $p({\bf a} | D, I)$, is proportional to the likelihood function, $L(D | {\bf a}, I)$, multiplied by a prior probability distribution, $p({\bf a} | I)$: \begin{equation} p({\bf a} | D, I) \propto L(D | {\bf a}, I) \, p({\bf a} | I) \, , \label{eq:bayes} \end{equation} where $D$ are the data, and $I$ is any prior information. To perform the MCMC integration over the parameter space we must define a likelihood function; however, in practice, it is more convenient to work with logarithmic probabilities. 
The noise in the power spectrum is distributed as $\chi^2$ with 2 degrees of freedom \citep{anderson_modeling_1990, handberg_bayesian_2011, davies_low-frequency_2014}, therefore the log likelihood function is: \begin{equation} \ln{(L)} = - \sum\limits_{i} \left\{ \ln{(M_{i}({\bf a}))} + \frac{O_i}{M_{i}({\bf a})} \right\} \, , \label{eq:likelihood_functino} \end{equation} for a model, $M_i$, with parameters, ${\bf a}$, and observed power, $O_i$, where $i$ describes the frequency bin. This likelihood function assumes that all the frequency bins are statistically independent, but the effect of the window function means that they are not. We handled this issue in the manner described below, which used simulations based on the artificial data discussed in Section~\ref{sec:method_model_lifetimes}. The priors on each of the parameters used during the MCMC sampling were uniform distributions (denoted by $\mathcal{U}(l, u)$ with $l$ and $u$ representing the lower and upper limits of the distribution, respectively): \begin{gather*} \nu_0 \, \sim \,\mathcal{U}(0.38, 0.50) \> \mu\mathrm{Hz} \\ \Gamma \, \sim \,\mathcal{U}(0.00, 0.11) \> \mu\mathrm{Hz} \\ A_1 \, \sim \,\mathcal{U}(100, 350) \> \mathrm{mG} \\ A_2 \, \sim \,\mathcal{U}(50, 200) \> \mathrm{mG} \\ A_3 \, \sim \,\mathcal{U}(20, 150) \> \mathrm{mG} \\ A_4 \, \sim \,\mathcal{U}(10, 100) \> \mathrm{mG} \\ \sigma \, \sim \,\mathcal{U}(0.01, 500) \> \mathrm{mG} \\ \tau \, \sim \,\mathcal{U}(0.10, 200) \> 10^6 \, \mathrm{s} \\ c \, \sim \,\mathcal{U}(10^{-3}, 10^{2}) \> \mathrm{G}^2 \, \mathrm{Hz}^{-1} \, . \\ \end{gather*} The limits on the priors were set to cover a sensible range in parameter space, whilst limiting non-physical results or frequency aliasing.
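The log-likelihood above is essentially a one-liner; a sketch (illustrative, not the fitting code used in this work):

```python
import math

def ln_likelihood(model_vals, obs_vals):
    """Log-likelihood for spectral bins distributed as chi^2 with 2 degrees of
    freedom about the limit (model) spectrum: -sum(ln M_i + O_i / M_i)."""
    return -sum(math.log(m) + o / m for m, o in zip(model_vals, obs_vals))
```

Each bin contributes its own term, reflecting the independence assumption discussed above; maximising this expression is the standard maximum-likelihood fit for a $\chi^2$ two-degrees-of-freedom spectrum.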
The power spectrum of the 40-second cadence SMMF was modelled using equation~(\ref{eq:PSD_fit_conv}) (with $N = 4$ Lorentzian peaks in $P(\nu, \,{\bf a})$), with the affine-invariant MCMC sampler \textsc{emcee} \citep{foreman-mackey_emcee_2013} used to explore the posterior parameter space. The chains are not independent when using \textsc{emcee}, therefore convergence was interrogated using the integrated autocorrelation time. We computed the autocorrelation time using \textsc{emcee} and found $\tau \sim 120$~steps. \cite{foreman-mackey_emcee_2013} suggests that chains of length $\geq 50\tau$ are often sufficient. After a burn-in of 6000 steps, we used 7000 iterations on 50 chains to explore the posterior parameter space, which was sufficient to ensure we had convergence on the posterior probability distribution. As a result of the convolution in the model, the widths of the posterior distributions for the model parameters were systematically underestimated. This effect arises because we do not account explicitly for the impact of the window function convolution on the covariance of the data; it is difficult to overcome computationally, especially with such a large data set ($\sim 10^7$ data points). To overcome this issue we performed the simulations using artificial data, described above, both with and without the effects of the window function and the use of the convolution in the model. This helped us to understand how the convolution affected our ability to measure the true posterior widths, which allowed us to account for the systematic underestimate of the credible regions of the posterior when modelling the power spectrum of the observed BiSON SMMF. We also analysed the data as daily, one-day-cadence averages; this gave a higher fill ($\sim 55$ per cent) but a lower Nyquist frequency ($\sim 5.787$~mHz).
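The convergence criterion can be sketched as follows: a crude windowed estimator of the integrated autocorrelation time (standing in for the estimator built into \textsc{emcee}, which should be preferred in practice), together with the $50\tau$ rule of thumb:

```python
def autocorr_time(chain, c=5.0):
    """Crude integrated autocorrelation time tau = 1 + 2*sum_k rho(k),
    truncating the sum with an automated window (stop once k >= c*tau)."""
    n = len(chain)
    mean = sum(chain) / n
    var = sum((x - mean) ** 2 for x in chain) / n
    tau = 1.0
    for k in range(1, n):
        rho = sum((chain[i] - mean) * (chain[i + k] - mean) for i in range(n - k)) / (n * var)
        tau += 2.0 * rho
        if k >= c * tau:
            break
    return tau

def chain_long_enough(n_steps, tau, factor=50):
    """Rule of thumb: chains of length >= 50*tau are often sufficient."""
    return n_steps >= factor * tau
```

With the measured $\tau \sim 120$~steps, the 7000 post-burn-in iterations used here comfortably exceed $50\tau = 6000$.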
Because of the much lower Nyquist frequency, modelling the background power spectral density was more challenging, but the duty cycle was approximately three times higher, resulting in a smaller effect from the window function. We note that we recovered results in our analysis of the daily averaged data that were consistent with those from the analysis of the data with a 40-second cadence. \section{Data} \label{sec:data} \subsection{Summary of the Data Set} \citet{chaplin_studies_2003} provided the first examination of the SMMF using data from the Birmingham Solar Oscillations Network (BiSON), and the work presented in this paper is a continuation of that study. BiSON is a six-station, ground-based, global network of telescopes continuously monitoring the Sun, which principally makes precise measurements of the line-of-sight (LOS) velocity of the photosphere due to solar $p$ mode oscillations \citep{hale_performance_2016}. Through the use of polarizing optics and additional electronics, the BiSON spectrometers can measure both the disc-averaged LOS velocity and magnetic field in the photosphere \citep{chaplin_studies_2003}; however, not all BiSON sites measure the SMMF. In this study we focus on the data collected by the Sutherland node in South Africa, which was also used by \cite{chaplin_studies_2003}. This is the only station that has had the capability to measure and collect data on the SMMF long-term. Data are sampled on a 40-second cadence, and the SMMF data collected by the Sutherland station span the epoch from 01/1992 -- 12/2012 (i.e. covering 7643~days). Over this period, the average duty cycle of the 40-second data is $\sim 15.6$ per cent. If instead we take a daily average of the BiSON SMMF, the average duty cycle is $\sim 55.2$ per cent. This gives a higher duty cycle but a lower Nyquist frequency.
Because of the much lower Nyquist frequency, modelling the background power spectral density is more challenging; therefore we use the 40-second cadence data in this work. However, both data sets return similar results; we discuss later in Section~\ref{sec:method_modelling} how we handled the impact of the low duty cycle of the 40-second data. A comparison of the daily-averaged SMMF observations made by BiSON to those made by the Wilcox Solar Observatory (WSO) is given in \citet{chaplin_studies_2003}. \subsection{Obtaining the SMMF from BiSON} To acquire the SMMF from BiSON data, the method described by \citet{chaplin_studies_2003} was adopted; here we discuss the key aspects. Each BiSON site employs a resonant scattering spectrometer (RSS) to measure the Doppler shift of the $^{2}\mathrm{S}_{1/2} \, \rightarrow \, ^{2}\mathrm{P}_{1/2}$ line of potassium, at 769.9~nm \citep{brookes_resonant-scattering_1978}. A potassium vapour cell placed within a longitudinal magnetic field Zeeman-splits the laboratory line into the two allowed D1 transitions \citep{lund_spatial_2017}. The intensity of the longer wavelength (red; $I_R$) and shorter wavelength (blue; $I_B$) components of the line may be measured by the RSS almost simultaneously, by using polarizing optics to switch between the red and blue wings of the line, to form the ratio given by equation~(\ref{eq:ratio}) which is used as a proxy for the Doppler shift from the LOS velocity of the photosphere (see \citet{brookes_observation_1976, brookes_resonant-scattering_1978, elsworth_performance_1995, chaplin_studies_2003, broomhall_new_2009, davies_bison_2014, lund_spatial_2017}): \begin{equation} \mathcal{R} = \frac{I_B - I_R}{I_B + I_R} \, . \label{eq:ratio} \end{equation} Photospheric magnetic fields Zeeman-split the Fraunhofer line, and the Zeeman-split components have opposite senses of circular polarization \citep{chaplin_studies_2003}.
Additional polarizing optics are used in the RSS to manipulate the sense of circular polarization (either + or -) that is passed through the instrument. The ratio $\mathcal{R}_{+}$ or $\mathcal{R}_{-}$ is formed, and the ratios $\mathcal{R}_{\pm}$ would be equal if there were no magnetic field present. The observed ratio ($\mathcal{R}_{\pm}$) may be decomposed as: \begin{equation} \mathcal{R}_{\pm} = \mathcal{R}_{\mathrm{orb}} + \mathcal{R}_{\mathrm{spin}} + \mathcal{R}_{\mathrm{grs}} + \delta {r}_{\mathrm{osc}}(t) \pm \delta {r}_{\mathrm{B}}(t) \, , \label{eq:vel_comp} \end{equation} where $\mathcal{R}_{\mathrm{orb}}$ is due to the radial component of the Earth's orbital velocity around the Sun, $\mathcal{R}_{\mathrm{spin}}$ is due to the component towards the Sun of the Earth's diurnal rotation about its spin axis as a function of latitude and time, $\mathcal{R}_{\mathrm{grs}}$ is from the gravitational red-shift of the solar line \citep{elsworth_techniques_1995, dumbill_observation_1999}, $\delta {r}_{\mathrm{osc}}(t)$ is due to the LOS velocity from $p$ mode oscillations, and $\delta {r}_B(t)$ is due to the magnetic field ($\pm$ denotes the polarity of the Zeeman-split line that is being observed) \citep{dumbill_observation_1999}. The effect of the magnetic field on the ratio is shown in Fig.~\ref{fig:ratio_split}, and the difference between the opposite magnetic field ratios is twice the magnetic ratio residual, i.e.: \begin{equation} \mathcal{R}_{+} - \mathcal{R}_{-} = 2 \, \delta {r}_{\mathrm{B}}(t) \, . \label{eq:R_diff} \end{equation} \begin{figure} \includegraphics[width=\columnwidth]{Figures/Fred_ratio_zoom.pdf} \caption{An example of the BiSON ratio data over a 30-minute period.
The separation between the 2 ratios is due to the solar mean magnetic field and oscillations are due to the 5-minute $p$ mode signal.} \label{fig:ratio_split} \end{figure} The BiSON RSS is measuring the velocity variation on the solar disc, and therefore a calibration from the ratio to a velocity is necessary. One method of calibration is achieved by first fitting a 2nd- or 3rd-order polynomial as a function of velocity to the observed ratio averaged over both magnetic polarities, as discussed by \citet{elsworth_techniques_1995}. Here we chose to fit the ratio in terms of velocity, $\mathcal{R}_{\mathrm{calc}}(u)$, i.e., \begin{equation} \mathcal{R}_{\mathrm{calc}}(u) = \sum_{n} \mathcal{R}_{n} u^n \, , \label{eq:calc_ratio} \end{equation} where: \begin{equation} u = v_{\mathrm{orb}} + v_{\mathrm{spin}} \, , \label{eq:stn_vel} \end{equation} and $v_{\mathrm{orb}}$ is the velocity component related to the ratio, $\mathcal{R}_{\mathrm{orb}}$; $v_{\mathrm{spin}}$ is related to the ratio, $\mathcal{R}_{\mathrm{spin}}$; $n$ is the polynomial order. It is possible to see that through the removal of $\mathcal{R}_{\mathrm{calc}}(u)$ (which we set up to also account for $\mathcal{R}_{\mathrm{grs}}$) from the observed ratios, one is left with the ratio residuals of the $p$ mode oscillations and the magnetic field, i.e., \begin{equation} \mathcal{R}_{\pm} - \mathcal{R}_{\mathrm{calc}}(u) = \delta {r}_{\mathrm{osc}}(t) \pm \delta {r}_{\mathrm{B}}(t) \, . \label{eq:ratio_resid} \end{equation} Furthermore, conversion from ratio residuals into velocity residuals uses the calibration given by equation~(\ref{eq:vel_resid}): \begin{equation} \delta v(t) = \left( \frac{d\mathcal{R}_{calc}}{dV} \right)^{-1} \, \delta {r}(t) \label{eq:vel_resid} \, . 
\end{equation} In order to finally obtain the SMMF in units of magnetic field, one must combine equation~(\ref{eq:R_diff}) and equation~(\ref{eq:vel_resid}) with the conversion factor in equation~(\ref{eq:K_B}) \citep{dumbill_observation_1999}, and the entire procedure can be simplified into: \begin{equation} B(t) = \frac{1}{2} \left( \frac{d\mathcal{R}_{calc}}{dV} \right)^{-1} \frac{(\mathcal{R}_{+} - \mathcal{R}_{-})}{K_B} \, , \label{eq:simplified_SMMF_cal} \end{equation} where, \begin{equation} K_B = \frac{8}{3} \, \frac{\mu_B}{h} \, \frac{c}{\nu} \approx 2.89 \, \mathrm{ms}^{-1} \, \mathrm{G}^{-1} \, , \label{eq:K_B} \end{equation} and $\mu_B$ is the Bohr magneton, $h$ is Planck's constant, $c$ is the speed of light, and $\nu$ is the frequency of the photons. Through the application of this methodology, one acquires the SMMF as shown in Fig.~\ref{fig:SMMF_TS}. The power spectrum of the full, 7643-day Sutherland data set is shown in Fig.~\ref{fig:SMMF_FT}, and it shows a strong rotational signal at a period of $\sim27$~days. The power spectrum of the SMMF is shown again in Fig.~\ref{fig:SMMF_40s_PSD} on a logarithmic scale covering the entire frequency range, which highlights the broadband background component of the power spectrum. \begin{figure} \subfloat[Time-series of BiSON 40-s cadence SMMF \label{fig:SMMF_TS}]{\includegraphics[width=0.98\columnwidth, right]{Figures/BiSON_full_TS.pdf}} \\ \subfloat[Power spectrum of BiSON 40-s cadence SMMF \label{fig:SMMF_FT}]{\includegraphics[width=\columnwidth]{Figures/BiSON_lin_mu_PSD.pdf}} \caption{(a) 40-second cadence observations of the SMMF from the Sutherland BiSON station between 1992 and 2012. The sense of the field was chosen to match both \citet{chaplin_studies_2003} and the WSO observations, where positive is for a field pointing outwards from the Sun.
(b) Power spectrum of the SMMF on a 40-second cadence, truncated to $10 \, \mu\mathrm{Hz}$; the full Nyquist frequency is 12500~$\mu$Hz.} \label{fig:BiSON_SMMF} \end{figure} \section{Results} \label{sec:results} \subsection{Rotation} \label{sec:rot_results} From the adjusted posterior distributions for each of the parameters, acquired through modelling the power spectrum, we were able to measure the fundamental rotational frequency and linewidth of the RM component. The latter was assumed to be the same for each peak. In Table~\ref{tab:full_fit_params} the median values of the marginalised posterior distributions for each of the model parameters of equation~(\ref{eq:PSD_fit_conv}) are displayed. The resultant posterior distributions were approximately normally distributed and there was no significant covariance between parameters; the reported uncertainties on the parameters therefore correspond to the $68$ per cent credible intervals either side of the median in the posterior distributions, adjusted for the systematic window function effects. In addition, we show the raw data with the model fit over-plotted in Fig.~\ref{fig:full_PSD_fit} and Fig.~\ref{fig:full_PSD_fit_linear}, on logarithmic and linear scales, respectively: the former highlights the fit over the full frequency range, the latter the RM peaks. \begin{table} \caption{Power spectrum model median results.
Numbers in brackets denote uncertainties on the last 2 digits, and all uncertainties correspond to the $68$ per cent credible intervals either side of the median for adjusted posterior widths.} \label{tab:full_fit_params} \begin{tabular}{l r r l r r } \hline {\bf $\theta$} & {Value} & {Unit} & {\bf $\theta$} & {Value} & {Unit} \\ \hline {$\nu_0$} & {0.4270$\left(_{-18}^{+18}\right)$} & {$\mu\mathrm{Hz} $} & {$A_4$} & {32.6$\pm2.1$} & {$\mathrm{mG}$} \\ {$\Gamma$} & {0.0264$\left(_{-35}^{+35}\right)$} & {$\mu\mathrm{Hz} $} & {$\tau$} & {51.8$\pm6.8$} & {$\mathrm{days}$} \\ {$A_1$} & {166.0$\pm10.7 $} & {$\mathrm{mG}$} & {$\sigma$} & {83.4$\pm5.4$} & {$\mathrm{mG}$} \\ {$A_2$} & {115.9$\pm7.4$} & {$\mathrm{mG}$} & {$c$} & {0.2103$\left(_{-03}^{+03}\right)$} & {$\mathrm{G}^2\mathrm{Hz}^{-1}$} \\ {$A_3$} & {83.2$\pm5.3$} & {$\mathrm{mG}$} & {} & {} & {} \\ \hline \end{tabular} \end{table} \begin{figure} \subfloat[Full power spectrum of the SMMF on logarithmic axes \label{fig:full_PSD_fit}]{\includegraphics[width=0.98\columnwidth, right]{Figures/BiSON_PSD_model_log.pdf}} \\ \subfloat[Power spectrum of the SMMF on linear axes, up to a frequency of $2.5 \, \mu\mathrm{Hz}$ \label{fig:full_PSD_fit_linear}]{\includegraphics[width=\columnwidth]{Figures/BiSON_PSD_model_lin.pdf}} \caption{Power spectrum and the best-fitting model for: (a) the full power spectrum of the SMMF on logarithmic axes. (b) Power spectrum of the SMMF on linear axes, up to a frequency of $2.5 \, \mu\mathrm{Hz}$ in order to show the fundamental signal peaks due to rotation-modulated ARs. The data is displayed in black and the model is shown in green.} \label{fig:PSD_fits} \end{figure} The central frequency of the model, $\nu_0$, implies a fundamental synodic rotation period of $27.11 \pm 0.11$~days, and hence a sidereal rotation period of $25.23\pm0.11$~days.
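These two numbers can be cross-checked with a few lines of Python. The sketch below is illustrative and not part of the original analysis: it converts the measured synodic period to a sidereal one using the standard relation $1/P_{\mathrm{sid}} = 1/P_{\mathrm{syn}} + 1/P_{\mathrm{orb}}$ (sidereal year taken as 365.26 days), and then inverts the magnetic-feature differential rotation law of equation~(\ref{eq:diff_rot_freq}) as a quadratic in $\mu^2$ to estimate the corresponding latitude; the inversion step is our own illustrative addition.

```python
import math

# Synodic-to-sidereal conversion: an Earth-based observer sees the Sun
# rotate more slowly because the Earth orbits in the same sense.
P_syn = 27.11          # measured synodic rotation period [days]
P_orb = 365.26         # sidereal year [days] (assumed value)
P_sid = 1.0 / (1.0 / P_syn + 1.0 / P_orb)   # ~25.24 days

# Invert the magnetic-feature rotation law,
# Omega_m / (2 pi) = 462 - 74 mu^2 - 53 mu^4 nHz, mu = cos(co-latitude),
# by solving 53 x^2 + 74 x - (462 - nu_sid) = 0 for x = mu^2.
nu_sid = 1e9 / (P_sid * 86400.0)            # sidereal frequency [nHz]
d = 462.0 - nu_sid
x = (-74.0 + math.sqrt(74.0**2 + 4.0 * 53.0 * d)) / (2.0 * 53.0)
# mu = cos(co-latitude) = sin(latitude), so:
latitude = math.degrees(math.asin(math.sqrt(x)))   # ~12 degrees
```

The recovered sidereal period ($\approx 25.24$~days) and latitude ($\approx 12^{\circ}$) are consistent with the values quoted in the text.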
The rotation period measured here is in agreement with other measurements of the rotation signal in the SMMF reported in the literature \citep{chaplin_studies_2003, xie_temporal_2017}. According to the model for differential rotation in equation~(\ref{eq:diff_rot_freq}), the measured rotation period suggests that the observed SMMF is sensitive to a time-averaged latitude of around $12^{\circ}$. This latitude is consistent with those spanned by sunspots and ARs over the solar activity cycle \citep{maunder_note_1904, mcintosh_deciphering_2014}, and particularly during the declining phase of the solar cycle \citep{thomas_asteroseismic_2019}. This suggests the origin of the RM component of the SMMF could be linked to active regions. \subsection{Lifetimes} From the measured linewidth of the Lorentzian peaks, we have calculated the lifetime of the RM component using equation~(\ref{eq:mode_lifetime}). The linewidth suggests an RM lifetime of $139.6 \pm 18.5$~days, equivalent to $\sim 20 \pm 3$~weeks. The effects of differential rotation and AR migration do not impact our ability to measure the linewidth, and thus lifetime, of the peaks (as explained in Appendix~\ref{sec:smearing}). The typical lifetime of active magnetic regions and sunspots is on the order of weeks to months \citep{zwaan_solar_1981, howard_sunspot_2001, hathaway_sunspot_2008}, therefore the observations of the SMMF by BiSON measure a lifetime of the RM component which is consistent with the lifetime of ARs and sunspots. This again suggests that the RM signal is linked to active regions of magnetic field, pointing to them as a possible source of the signal. When verifying these results by repeating the analysis with a daily averaged SMMF (see Section~\ref{sec:method_modelling}), the results for the linewidth were consistent. \section{Discussions and Conclusions} \label{sec:conc} We have presented, for the first time, a frequency-domain analysis of $\sim$20 years of high-cadence (40-second) BiSON observations of the SMMF.
The very high-cadence observations of the SMMF allowed the exploration of the power spectrum up to $12.5$~mHz, while the long duration of the observations provided near-nHz resolution in the power spectrum, enabling us to measure the parameters associated with the rotationally modulated (RM) component of the SMMF. We have measured the central frequency of the RM component, allowing us to infer the sidereal period of the RM to be $25.23~\pm~0.11$~days. This rotation period corresponds to an activity-cycle-averaged latitude of $\sim 12^{\circ}$, which is in the region of the typical latitudes for active magnetic regions averaged over the activity cycle \citep{maunder_note_1904, mcintosh_deciphering_2014, thomas_asteroseismic_2019}. For the first time, we have used the linewidth of the peaks to measure the lifetime of the RM component in the SMMF. The lifetime of the source of the RM component was inferred to be $139.6~\pm~18.5$~days. This lifetime is consistent with those of active magnetic regions and sunspots, in the region of weeks to months \citep{zwaan_solar_1981, hathaway_sunspot_2008}. There has been considerable debate in the literature concerning the origin of the SMMF. In this study, as the properties of the RM component are consistent with ARs, we have presented novel evidence suggesting them as the source of the SMMF. \section{Testing the Effects of Differential Rotation and Migration} \label{sec:smearing} As a result of solar differential rotation and the migration of ARs towards the solar equator during the activity cycle, it is understood that the rotation period of ARs will vary throughout the solar cycle. As we have inferred that the RM component of the SMMF is likely linked to ARs, we may therefore assume that the RM component is also sensitive to latitudinal migration. Here we analysed the effect of this migration and differential rotation on our ability to make inferences on the lifetime of the RM component.
Several studies have modelled the solar differential rotation, and its variation with latitude and radius of the Sun (see \citet{beck_comparison_2000} and \citet{howe_solar_2009} for in-depth reviews of the literature on solar differential rotation). Magnetic features have been shown to be sensitive to rotation deeper than the photosphere; therefore in general magnetic features can be seen to rotate with a shorter period than the surface plasma \citep{howe_solar_2009}. \citet{chaplin_distortion_2008} analysed the effects of differential rotation on the shape of asteroseismic $p$ modes of oscillation with a low angular degree (i.e. $l \leq 3$), and showed that the consequence of differential rotation is to broaden the observed linewidth of a mode peak. The authors provide a model of the resultant profile of a $p$ mode whose frequency is shifted in time to be a time-average of several instantaneous Lorentzian profiles with central frequency $\nu(t)$, given by: \begin{equation} \langle P(\nu) \rangle \, = \, \frac{1}{T} \int^T_0 H \left( 1 \, + \, \left( \frac{\nu - \nu(t)}{\Gamma /2} \right)^2 \right)^{-1} dt \, , \label{eq:stacked_lorentzians} \end{equation} where the angled brackets indicate an average over time, $H$ and $\Gamma$ are the mode height (maximum power spectral density) and linewidth, respectively, and the full period of observation is given by $T$.
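The time average of equation~(\ref{eq:stacked_lorentzians}) is straightforward to evaluate numerically. The sketch below (illustrative only; parameter values chosen close to those of Table~\ref{tab:full_fit_params}, midpoint rule for the time integral) averages instantaneous Lorentzians under a linear frequency drift and checks the result against the closed arctangent form of equation~(\ref{eq:atan_lorentzians}):

```python
import numpy as np

# Illustrative values close to the paper's RM peak (frequencies in microhertz):
H, G = 1.0, 0.0264          # mode height and linewidth Gamma
nu0, dnu = 0.427, 0.0128    # unperturbed frequency and total linear drift

nu = np.linspace(nu0 - 0.2, nu0 + 0.2, 801)
t = (np.arange(4000) + 0.5) / 4000.0    # midpoint rule on t/T in [0, 1]
nut = nu0 + dnu * t                     # nu(t) = nu0 + dnu * (t/T)

# Numerical time average of instantaneous Lorentzian profiles
prof_num = np.mean(H / (1.0 + ((nu[:, None] - nut[None, :]) / (G / 2.0))**2),
                   axis=1)

# Closed form: <P> = H/(2 eps) * arctan(2 eps / (1 - eps^2 + X^2))
eps = dnu / G
X = (nu - (nu0 + dnu / 2.0)) / (G / 2.0)
prof_ana = H / (2.0 * eps) * np.arctan(2.0 * eps / (1.0 - eps**2 + X**2))
```

The two profiles agree to better than $10^{-3}$ of the mode height across the window, and the broadened peak height falls below $H$, as expected for a smeared profile.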
\citet{chaplin_distortion_2008} also show that by assuming a simple, linear variation of the unperturbed frequency, $\nu_0$, from the start to the end of the time-series by a total frequency shift $\Delta\nu$: \begin{equation} \nu(t) \, = \, \nu_0 \, + \Delta\nu \frac{t}{T} \, , \label{eq:linear_variation} \end{equation} the resultant profile of a $p$ mode can analytically be modelled by equation~(\ref{eq:atan_lorentzians}): \begin{equation} \langle P(\nu) \rangle \, = \, \frac{H}{2\epsilon} \arctan \left( \frac{2 \epsilon}{1 - \epsilon^2 + X^2 } \right) \, , \label{eq:atan_lorentzians} \end{equation} where $\epsilon$ and $X$ are defined in equation~(\ref{eq:epsilon}) and equation~(\ref{eq:X}): \begin{equation} \epsilon \, = \, \frac{\Delta\nu}{\Gamma} \, ; \label{eq:epsilon} \end{equation} \begin{equation} X \, = \, \frac{\nu - [\nu_0 + (\Delta\nu/2)]}{\Gamma /2} \, . \label{eq:X} \end{equation} As the mode linewidths are broadened by this effect, we evaluated whether our ability to resolve the true linewidth of the RM, and hence the lifetime, was affected. In order to evaluate this we computed the broadened profiles given by both equation~(\ref{eq:stacked_lorentzians}) and equation~(\ref{eq:atan_lorentzians}), and fit the model for a single Lorentzian peak to determine whether there was a notable difference in the linewidth. In the first instance, we computed the broadened peak using equation~(\ref{eq:stacked_lorentzians}). Over the duration of the observations, we computed the daily instantaneous profile, $P(\nu(t))$. The time-averaged profile, $ \langle P(\nu) \rangle$, was a weighted average of each instantaneous profile, where the weights were given by the squared daily SMMF, in order to allow a larger broadening contribution at times when the SMMF amplitude is higher. In the second instance, we computed the broadened peak using equation~(\ref{eq:atan_lorentzians}). Over the duration of the observations the daily frequency shift, $\Delta\nu$, was computed.
The time-averaged shift, $\Delta\nu$, was a weighted average, where again the weightings were given by the squared daily SMMF. To determine the shift in the rotation rate as the active bands migrate to the solar equator, we used the model of the solar differential rotation as traced by magnetic features ($\Omega_m$) given by: \begin{equation} \frac{\Omega_m}{2 \pi} \, = \, 462 - 74 \mu^2 - 53 \mu^4 \, \mathrm{nHz} \, , \label{eq:diff_rot_freq} \end{equation} where $\mu \, = \, \cos \theta $ and $\theta$ is the co-latitude \citep{snodgrass_magnetic_1983, brown_inferring_1989}. The time dependence of the latitude of the active regions was taken from the best-fitting quadratic model of \citet{li_latitude_2001-1}. In both instances, the broadened peak was modelled as a single Lorentzian peak using equation~(\ref{eq:symm_lorentzian}). Again, we used \textsc{emcee} \citep{foreman-mackey_emcee_2013} to explore the posterior parameter space, with priors on the relevant parameters similar to those used in the full fit above. \subsection{Results: Time-Averaged Broadened Profile} Over the entire duration of the SMMF observations, the time-averaged profile was calculated, using equation~(\ref{eq:stacked_lorentzians}), and this is shown in Fig.~\ref{fig:weighted_shift}. The broadened mode used the input parameters outlined in Table~\ref{tab:full_fit_params}, but with the background parameter set to zero. By eye, the broadened profile does not appear to have a significantly larger linewidth. The input linewidth was $0.0264 \pm 0.0035 \, \mu\mathrm{Hz} $, and the fit to the time-averaged broadened peak produced a linewidth of $0.0262^{+0.0038}_{-0.0037} \, \mu\mathrm{Hz} $. The linewidth of the broadened peak under this method was essentially unchanged from that of the true peak, and both linewidths are within uncertainties of each other.
This result shows that, numerically, the mode broadening effect of differential rotation and migration does not affect our ability to resolve the linewidth of the peak, and hence the predicted lifetime of the RM component of the SMMF. \begin{figure} \centering \subfloat[Time-Averaged Broadened Profile \label{fig:weighted_shift}]{\includegraphics[width=0.9\columnwidth]{Figures/weighted_shifted_peak.pdf}} \\ \subfloat[Analytically Broadened Profile \label{fig:atan_shift}]{\includegraphics[width=0.9\columnwidth]{Figures/chaplin_shifted_peak.pdf}} \caption{(a) The peak distribution before and after the time-averaged broadening, together with the fit to the broadened peak. (b) The peak distribution before and after the analytical broadening, together with the fit to the broadened peak. In both plots the broadened peaks have been shifted by the relevant frequency to overlay them on top of the true $\nu_0$ for comparison.}\label{fig:shifted_peaks} \end{figure} \subsection{Results: Analytically Broadened Profile} The time-averaged frequency shift due to differential rotation was calculated, in much the same way as for equation~(\ref{eq:stacked_lorentzians}), to be $\Delta\nu \, = \,0.01285 \, \mu\mathrm{Hz}$. This shift was used to generate the broadened profile using equation~(\ref{eq:atan_lorentzians}). The broadened mode distribution also used the input parameters outlined in Table~\ref{tab:full_fit_params}, but with the background parameter set to zero. Similar to the numerically broadened peak, by eye, the analytically broadened profile does not appear to have a significantly larger linewidth (see Fig.~\ref{fig:atan_shift}). The input linewidth was $0.0264 \pm 0.0035 \, \mu\mathrm{Hz} $, and the linewidth of the analytically broadened peak from the fit was $0.0263^{+0.0038}_{-0.0037} \, \mu\mathrm{Hz} $, which was within the uncertainties of the linewidth of the input peak.
This result shows that, analytically, the mode broadening effect of differential rotation and migration does not affect our ability to resolve the linewidth of the peak, and hence the lifetime of the RM component of the SMMF. \subsection{Discussion} Both broadening methods applied were shown to have a negligible effect on the linewidth of the profile, and our ability to resolve the true linewidth of the peak remains unaffected. This result provides confidence that the measured linewidth in Table~\ref{tab:full_fit_params} was the true linewidth of the RM peaks, and hence the correct lifetime for the RM component, unaffected by migration and differential rotation. \section{Introduction} \label{sec:intro} The Sun has a complicated magnetic field structure; many features of the Sun and proxies for the solar activity are related to the evolution of the Sun's magnetic field. The solar mean magnetic field (SMMF) is a surprising, non-zero measurement of the imbalance of opposite polarities of magnetic flux observed on the full visible disc of the Sun \citep{svalgaard_suns_1975}, and is defined as the mean line-of-sight (LOS) magnetic field when observing the Sun-as-a-star \citep{scherrer_mean_1977, scherrer_mean_1977-1, garcia_integrated_1999}. In the literature the SMMF is also sometimes referred to as the general magnetic field (GMF) \citep{severny_time_1971} or the mean magnetic field (MMF) \citep{kotov_mean_2008} of the Sun. Observations of the SMMF have typically been made by measuring the Zeeman splitting of spectral lines using a ground-based Babcock-type magnetograph \citep{scherrer_mean_1977}, although more recently the SMMF has been calculated from full-disc LOS magnetograms taken from space-borne telescopes such as the Solar Dynamics Observatory Helioseismic and Magnetic Imager (SDO/HMI) \citep{kutsenko_contribution_2017, bose_variability_2018}.
It is understood that the strength of the SMMF may vary depending on the spectral line used to measure it \citep{kotov_mean_2008, kotov_enigmas_2012}; however, the SMMF varies slowly with the solar activity cycle, with an amplitude on the order of a Gauss during solar maximum and a tenth of a Gauss during solar minimum \citep{plachinda_general_2011}. In addition, the SMMF displays a strong, quasi-coherent rotational signal which must arise from inhomogeneities on the solar disc with lifetimes of several rotations \citep{chaplin_studies_2003, xie_temporal_2017}. Despite existing literature on SMMF observations spanning several decades, the origin of the SMMF remains an open debate in solar physics. The principal component of the SMMF is commonly assumed to arise from weak, large-scale magnetic flux, distributed over the entire solar disc, rather than from magnetic flux concentrations (MFCs), active regions (ARs), or sunspots \citep{severny_time_1971, scherrer_mean_1977, xiang_ensemble_2016}. Conversely, \citet{scherrer_mean_1972} found that the SMMF was most highly correlated with only the inner-most one quarter, by area, of the solar disc, which is more sensitive to active latitudes. In recent literature, \citet{bose_variability_2018} provided a novel approach to understanding the SMMF whereby they decomposed the SMMF through feature identification and pixel-by-pixel analysis of full-disc magnetograms. They concluded that: (i) the observed variability in the SMMF lies in the polarity imbalance of large-scale magnetic field structures on the visible surface of the Sun; (ii) the correlation between the flux from sunspots and the SMMF is statistically insignificant; and (iii) more critically that the background flux dominates the SMMF, accounting for around 89 per cent of the variation in the SMMF. However, there still remained a strong manifestation of the rotation signal in the background component presented by \citet{bose_variability_2018}.
This signal is indicative of inhomogeneous magnetic features with lifetimes on the order of several solar rotations, rather than the short-lived, weaker fields usually associated with the large-scale background. It therefore raises the question of whether their technique assigned flux from MFCs or ARs to the background. It is possible that some of the strong flux may have been assigned to the background signal, which then contributed to this rotation signal. Despite these findings, it is known that the strength of the SMMF is weaker during solar minimum, when there are fewer ARs, and stronger during solar maximum, when there are more ARs \citep{plachinda_general_2011}. This suggests that the evolution of ARs is relevant to the evolution of the SMMF. There is a contrasting view in the literature which claims that AR flux dominates the SMMF. \citet{kutsenko_contribution_2017} state that a large component of the SMMF may be explained by strong and intermediate flux regions. These regions are associated with ARs, suggesting that between 65 and 95 per cent of the SMMF could be attributed to strong and intermediate flux from ARs, and the fraction of the occupied area varied between 2 and 6 per cent of the solar disc, depending on the chosen threshold for separating weak and strong flux. This finding suggests that strong, long-lived, inhomogeneous MFCs produce the strong rotation signal in the SMMF; however, \citet{kutsenko_contribution_2017} also discuss that there is an entanglement of strong flux (typically associated with ARs) and intermediate flux (typically associated with network fields and remains of decayed ARs). This means it is difficult to determine whether strong ARs or their remnants contribute to the SMMF. The Sun's dynamo and hence magnetic field is directly coupled to the solar rotation. The Sun exhibits latitude-dependent and depth-dependent differential rotation with a sidereal, equatorial period of around 25~days \citep{howe_solar_2009}.
To Earth-based observers, the synodic rotation of the Sun is observed at around 27~days, and the SMMF displays a dominant signal at this period, and its harmonics \citep{chaplin_studies_2003, xie_temporal_2017, bose_variability_2018}. It was also reported by \citet{xie_temporal_2017} that the differential solar rotation was observed in the SMMF with measured synodic rotational periods of $28.28 \, \pm \, 0.67$~days and $27.32 \, \pm \, 0.64$~days for the rising and declining phases, respectively, of all of the solar cycles in their considered time-frame. On the other hand, \citet{xiang_ensemble_2016} utilised ensemble empirical mode decomposition (EEMD) analysis to extract modes of the SMMF and found two rotation periods which are derived from different strengths of magnetic flux elements. They found that a rotation period of 26.6~days was related to weaker magnetic flux elements within the SMMF, while the measured period was slightly longer, at 28.5~days, for stronger magnetic flux elements. In this work, we use high-cadence (sub-minute) observations of the SMMF, made by the Birmingham Solar Oscillations Network (BiSON) \citep{chaplin_bison_1996, chaplin_noise_2005, hale_performance_2016}, to investigate its morphology. This work provides a frequency domain analysis of the SMMF, and a rotationally-modulated (RM) component with a period of around 27~days is clearly observed as several peaks in the power spectrum. The breakdown of the paper is as follows. In Section~\ref{sec:data}, we provide an overview of the BiSON data used in this work; how the observations are made and the SMMF data are acquired. As this work provides an investigation of the SMMF in the frequency domain, in Section~\ref{sec:method} we discuss in detail how the power spectrum was modelled. In Section~\ref{sec:results} the results from modelling the power spectrum are presented. 
We outline the key findings and draw similarities between the properties of the RM component and ARs, suggesting that ARs may provide a strong contribution to the SMMF. Conclusions and discussions are presented in Section~\ref{sec:conc}. \section{Methodology} \label{sec:method} \subsection{Parametrization of the SMMF Power Spectrum} \label{sec:method_model_lifetimes} As we have 40-second cadence observations of the SMMF, we were able to investigate the power spectrum up to a Nyquist frequency of 12500~$\mu$Hz. There are a number of features within the full SMMF power spectrum, shown in Fig.~\ref{fig:SMMF_40s_PSD}. \begin{figure} \includegraphics[width=\columnwidth]{Figures/BiSON_full_PSD.pdf} \caption{Power spectrum of 40-second cadence SMMF from the Sutherland BiSON station observed between 1992 and 2012 on a logarithmic scale up to the full Nyquist frequency.} \label{fig:SMMF_40s_PSD} \end{figure} The peaks between 0.2 and 2.0 $\mu\mathrm{Hz}$ in Fig.~\ref{fig:SMMF_FT} are a manifestation of rotation in the SMMF. The distinct set of peaks indicates the existence of a long-lived, inhomogeneous, rotationally-modulated (RM) source. Due to the quasi-coherent nature of the SMMF, and based on the comparatively short timescales for the emergence of magnetic features compared to their slow decay \citep{zwaan_solar_1981, harvey_properties_1993, hathaway_sunspot_2008}, we assume the evolution of individual features that contribute to the RM component may be modelled by a sudden appearance and a long, exponential decay.
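This assumption is what motivates the Lorentzian description that follows: the power spectrum of a one-sided, exponentially decaying oscillation is a Lorentzian whose full width at half maximum equals $(\pi T_e)^{-1}$. A minimal numerical sketch (illustrative decay time and frequency of our own choosing, not the observed values):

```python
import numpy as np

# One-sided exponentially decaying oscillation (sudden appearance at t = 0).
# Illustrative values: decay time T_e = 100 s, frequency f0 = 0.5 Hz.
dt, n = 0.1, 1 << 16
T_e, f0 = 100.0, 0.5
t = np.arange(n) * dt
y = np.exp(-t / T_e) * np.cos(2.0 * np.pi * f0 * t)

power = np.abs(np.fft.rfft(y))**2
freq = np.fft.rfftfreq(n, dt)

# Measure the full width at half maximum of the resulting peak by
# counting frequency bins that exceed half the peak power.
half = power.max() / 2.0
fwhm = (power > half).sum() * (freq[1] - freq[0])

expected = 1.0 / (np.pi * T_e)   # Gamma = (pi * T_e)^{-1}
```

The measured width agrees with $(\pi T_e)^{-1}$ to within a few per cent; applying the same relation to the observed linewidth $\Gamma = 0.0264 \, \mu\mathrm{Hz}$ gives $T_e \approx 1.2 \times 10^7$~s, i.e. roughly 140 days.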
In the frequency-domain, each of the RM peaks may therefore be described by a Lorentzian distribution: \begin{equation} L_n(\nu; \Gamma, A_n, \nu_n) = \frac{2{A_n}^2}{\pi \Gamma} \left(1 + \left(\frac{(\nu - \nu_n)}{\Gamma/2} \right)^2\right)^{-1} \, , \label{eq:symm_lorentzian} \end{equation} where $\nu$ is frequency, $A_n$ is the root-mean-square amplitude of the peak, $\Gamma$ is the linewidth of the peak, $\nu_n$ is the frequency of the peak, and $n$ simply flags each peak. The mean-squared power in the time domain from the RM component of the SMMF is given by the sum of the ${A_n}^2$ of the individual harmonics in the power spectrum. Through this formulation we can measure the $e$-folding time ($T_e$) of the amplitude of the RM component, as it is related to the linewidth of the peak by: \begin{equation} \Gamma = (\pi \, T_e)^{-1} \, . \label{eq:mode_lifetime} \end{equation} The low-frequency power due to instrumental drifts, solar activity, and the window function can be incorporated into the model via the inclusion of a zero-frequency centred Lorentzian \citep{basu_asteroseismic_2017}, given by: \begin{equation} H(\nu; \sigma, \tau) = \frac{4{\sigma}^2\tau}{1 + (2\pi \nu\tau)^2} \, , \label{eq:harvey} \end{equation} where $\sigma$ is the characteristic amplitude of the low frequency signal, and $\tau$ describes the characteristic timescale of the excursions around zero in the time-domain. Finally, the high frequency power is accounted for by the inclusion of a constant offset due to shot-noise, $c$ \citep{basu_asteroseismic_2017}. In the absence of any gaps in the data, the model function used to describe the power spectrum is given by: \begin{equation} P(\nu, \,{\bf a}) = \sum_{n=1}^{N} L_n(\nu; \Gamma, A_n, \nu_n) \, + \, H(\nu; \sigma, \tau) \, + \, c \, ; \label{eq:PSD_fit} \end{equation} the subscript, $n$, describes a single peak in the power spectrum. 
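The gap-free model of equation~(\ref{eq:PSD_fit}) is only a few lines of code. The sketch below (ours, not the analysis code; harmonic peak frequencies assumed, a shared linewidth, SI units) evaluates it with the median parameters of Table~\ref{tab:full_fit_params} and checks that the fundamental RM peak stands far above the surrounding background:

```python
import numpy as np

def lorentzian(nu, gam, amp, nu_n):
    # Single RM peak; amp is the rms amplitude in G, gam the linewidth in Hz.
    return (2.0 * amp**2 / (np.pi * gam)) / (1.0 + ((nu - nu_n) / (gam / 2.0))**2)

def harvey(nu, sigma, tau):
    # Zero-frequency-centred Lorentzian for the low-frequency background.
    return 4.0 * sigma**2 * tau / (1.0 + (2.0 * np.pi * nu * tau)**2)

def model(nu):
    # Median parameters from Table 1, converted to SI (Hz, G, s).
    nu0 = 0.4270e-6
    gam = 0.0264e-6
    amps = [0.1660, 0.1159, 0.0832, 0.0326]     # A_1..A_4, mG -> G
    sigma, tau = 0.0834, 51.8 * 86400.0         # mG -> G, days -> s
    c = 0.2103                                  # shot noise, G^2 / Hz
    peaks = sum(lorentzian(nu, gam, a, (k + 1) * nu0) for k, a in enumerate(amps))
    return peaks + harvey(nu, sigma, tau) + c

nu = np.array([0.4270e-6, 0.6405e-6])  # on the fundamental, and between harmonics
p_on, p_off = model(nu)
```

With these parameters the model power on the fundamental peak exceeds the inter-harmonic level by roughly two orders of magnitude.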
In implementing the model we constrain the mode frequencies such that they must be integer multiples of $\nu_0$: $\nu_n \, = \, n \nu_0$. This means that we define a single rotation frequency only, $\nu_0$, and subsequent peaks are the harmonic frequencies. It is worth noting explicitly that this function assumes the linewidth of each Lorentzian peak is the same, only the amplitudes and central frequencies differ. The duty cycle of the Sutherland SMMF observations is very low, $\sim 15$ per cent, therefore it was important to take into consideration the effect that gaps in the data have on the power spectrum. Gaps in the data cause an aliasing of power from the signal frequencies to other frequencies in the spectrum, and the nature of the aliasing depends on the properties of the window function of the observations. Periodic gaps in the data give rise to sidebands in the power spectrum and random gaps cause a more broadband shifting of power, meaning that some power from the low-frequency RM component is aliased to higher frequencies. The daily, periodic gaps in the BiSON data, due to single-site observations, produce sidebands around a frequency of 1/day, i.e. $\sim$~11.57 $\mu$Hz, and its harmonics. The aliased power is therefore located at frequencies: \begin{equation} \nu_{n, i} = \frac{i}{\mathrm{day}} \pm \nu_{n} \, , \label{eq:sidebands} \end{equation} where $i$ denotes the sideband number and $n$ denotes the harmonic of the mode. The sideband structure implied by equation~(\ref{eq:sidebands}) is shown clearly in Fig.~\ref{fig:sidebands}. \begin{figure} \centering \includegraphics[width=\columnwidth]{Figures/sideband.pdf} \caption{Locations of aliased power in sideband peaks. The orange dotted lines show the locations of frequencies at multiples of 1/day. The green dashed lines show the location of the sideband peaks -- harmonic frequencies reflected around multiples of 1/day.
The inset shows a zoom of one set of sideband peaks around 1/day.} \label{fig:sidebands} \end{figure} The tails of the aliased peaks are long, therefore aliased power was re-distributed across the entire frequency range which produced a red-noise-like background component. To understand the broadband effects of the window function we generated an artificial time series from a single Lorentzian (representing the fundamental RM component). The artificial data were generated by calculating the inverse Fourier transform of the power spectrum which had the same Nyquist frequency and frequency resolution as the SMMF power spectrum. We then injected the gaps from the BiSON observations into this artificial time series, to ensure the window function was the same as that of the BiSON SMMF observations, and finally investigated the resultant power spectrum both with and without the window function. Fig.~\ref{fig:PSDs} shows the effect of the window function on the resultant power spectrum. The power spectrum generated from the time series without gaps produces a single Lorentzian peak (amber and green lines). The injection of gaps into the time series (orange line) produces both the red-noise-like background component and the sidebands, which bears a striking resemblance to the power spectrum of the BiSON SMMF observations (black line) and also the power spectrum of the window function (blue line). \begin{figure} \includegraphics[width=\columnwidth]{Figures/gap_test.pdf} \caption{The effects of the window function on the power spectrum are shown using an artificial data set and compared to the BiSON power spectrum. Black line: BiSON SMMF power spectrum; blue line: power spectrum of the window function; green and dark-orange lines: the power spectrum of the artificial data without and with gaps, respectively; amber line: the input peak used to generate the artificial data over-plotted.
The power spectra of the BiSON SMMF and the window function have been shifted upwards by factors of 6 and 30, respectively, for clarity.} \label{fig:PSDs} \end{figure} This shows that the BiSON SMMF spectrum has a red-noise-like background component that is not due to any ephemeral signal, but due to the re-distribution of power by the window function of the BiSON observations. In the time domain, the observed data, $y(t)$, includes the window function which, analytically, we can express as a multiplication of the uninterrupted, underlying signal, $f(t)$, with the window function, $g(t)$: \begin{equation} y(t) = f(t) \, g(t) \label{eq:timeseries} \, , \end{equation} where: \begin{equation} g(t) = \begin{cases} 1 & \text{for } |B(t)| > 0 \\ 0 & \text{for } |B(t)| = 0 \end{cases} \, . \label{eq:window} \end{equation} Multiplication in the time domain becomes a convolution in the frequency domain. To model the observed power spectrum in a robust manner, taking into account the intricacies caused by gaps in the data, we used a model which was formed of a model power spectrum, $P(\nu, \,{\bf a})$ (equation~(\ref{eq:PSD_fit})), convolved with the power spectrum of the window function of the observations ($\left|\mathcal{F}\left[g(t)\right]\right|^2$), i.e., \begin{equation} P'(\nu, \,{\bf a}) \, = \, P(\nu, \,{\bf a}) * \left|\mathcal{F}\left[g(t)\right]\right|^2 \, . \label{eq:PSD_fit_conv} \end{equation} Care was taken during this operation to ensure Parseval's theorem was obeyed, i.e. that no power was lost or gained in the convolution: \begin{equation} \sum_{\nu} P'(\nu) \, = \,\sum_{\nu} P(\nu) \, = \, \frac{1}{N} \sum_{t} B(t)^2 \, , \label{eq:parseval} \end{equation} where $N$ here is the number of observed cadences.
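The sideband mechanism described above is easy to reproduce: multiply a pure tone by a one-day-periodic window and power appears at $i/\mathrm{day} \pm \nu_n$. The sketch below is an idealized toy (a strictly periodic 8-hour observing window and a signal placed exactly on a frequency bin, not the real BiSON window function):

```python
import numpy as np

dt = 40.0                      # 40-second cadence
n_days = 100
n = n_days * 2160              # 2160 samples of 40 s per day
t = np.arange(n) * dt

f0 = 4.0 / (n * dt)            # signal placed exactly on bin 4 (a 25-day period)
g = ((t % 86400.0) < 8 * 3600.0).astype(float)   # idealized 8 h/day duty cycle
y = np.sin(2.0 * np.pi * f0 * t) * g

power = np.abs(np.fft.rfft(y))**2
# With this grid, 1/day falls exactly on bin 100, so the first sidebands of
# the bin-4 signal appear at bins 100 - 4 = 96 and 100 + 4 = 104, while
# bins such as 98 and 102 carry essentially no power.
```

Because the toy window is strictly periodic the aliased power is confined to the sidebands; the random component of real gaps is what smears power into the broadband, red-noise-like background discussed above.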
\subsection{Modelling the SMMF Power Spectrum} \label{sec:method_modelling} Parameter estimation using the model defined in the previous section, including all parameters, ${\bf a}$, was performed in a Bayesian manner using a Markov Chain Monte Carlo (MCMC) fitting routine. Following from Bayes' theorem we can state that the posterior probability distribution, $p({\bf a} | D, I)$, is proportional to the likelihood function, $L(D | {\bf a}, I)$, multiplied by a prior probability distribution, $p({\bf a} | I)$: \begin{equation} p({\bf a} | D, I) \propto L(D | {\bf a}, I) \, p({\bf a} | I) \, , \label{eq:bayes} \end{equation} where $D$ are the data, and $I$ is any prior information. To perform the MCMC integration over the parameter space we must define a likelihood function; however, in practice, it is more convenient to work with logarithmic probabilities. The noise in the power spectrum is distributed as $\chi^2$ with 2 degrees of freedom \citep{anderson_modeling_1990, handberg_bayesian_2011, davies_low-frequency_2014}, therefore the log likelihood function is: \begin{equation} \ln{(L)} = - \sum\limits_{i} \left\{ \ln{(M_{i}({\bf a}))} + \frac{O_i}{M_{i}({\bf a})} \right\} \, , \label{eq:likelihood_functino} \end{equation} for a model, $M_i$, with parameters, ${\bf a}$, and observed power, $O_i$, where $i$ describes the frequency bin. This likelihood function assumes that all the frequency bins are statistically independent but the effect of the window function means that they are not. We handled this issue in the manner described below, which used simulations based on the artificial data discussed in Section~\ref{sec:method_model_lifetimes}.
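This log likelihood is a one-liner in practice. The transcription below (function and variable names are ours) also checks the defining property of $\chi^2$ 2-d.o.f. statistics: for fixed observations, the likelihood is maximized when the model matches the data bin-by-bin.

```python
import numpy as np

def ln_likelihood(model, obs):
    # chi^2, 2 d.o.f. (exponential) statistics per frequency bin:
    # ln L = -sum( ln M_i + O_i / M_i )
    return -np.sum(np.log(model) + obs / model)

# Sanity check: for fixed data, ln L peaks where the model equals the data.
obs = np.array([1.0, 2.5, 4.0, 0.7])
best = ln_likelihood(obs, obs)
```

In the fit this is combined with the log of the uniform priors listed below to form the log posterior sampled by \textsc{emcee}.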
The priors on each of the parameters used during the MCMC sampling were uniform distributions (denoted by $\mathcal{U}(l, u)$ with $l$ and $u$ representing the lower and upper limits of the distribution, respectively): \begin{gather*} \nu_0 \, \sim \,\mathcal{U}(0.38, 0.50) \> \mu\mathrm{Hz} \\ \Gamma \, \sim \,\mathcal{U}(0.00, 0.11) \> \mu\mathrm{Hz} \\ A_1 \, \sim \,\mathcal{U}(100, 350) \> \mathrm{mG} \\ A_2 \, \sim \,\mathcal{U}(50, 200) \> \mathrm{mG} \\ A_3 \, \sim \,\mathcal{U}(20, 150) \> \mathrm{mG} \\ A_4 \, \sim \,\mathcal{U}(10, 100) \> \mathrm{mG} \\ \sigma \, \sim \,\mathcal{U}(0.01, 500) \> \mathrm{mG} \\ \tau \, \sim \,\mathcal{U}(0.10, 200) \> 10^6 \, \mathrm{s} \\ c \, \sim \,\mathcal{U}(10^{-3}, 10^{2}) \> \mathrm{G}^2 \, \mathrm{Hz}^{-1} \, . \\ \end{gather*} The limits on the priors were set to cover a sensible range in parameter space, whilst limiting non-physical results or frequency aliasing. The power spectrum of the 40-second cadence SMMF was modelled using equation~(\ref{eq:PSD_fit_conv}) (with $N = 4$ Lorentzian peaks in $P(\nu, \,{\bf a})$) using the affine-invariant MCMC sampler \textsc{emcee} \citep{foreman-mackey_emcee_2013} to explore the posterior parameter space. The chains are not independent when using \textsc{emcee}, therefore convergence was interrogated using the integrated autocorrelation time. We computed the autocorrelation time using \textsc{emcee} and found $\tau \sim 120$~steps. \cite{foreman-mackey_emcee_2013} suggest that chains of length $\geq 50\tau$ are often sufficient. After a burn-in of 6000 steps, we used 7000 iterations on 50 chains to explore the posterior parameter space, which was sufficient to ensure we had convergence on the posterior probability distribution. As a result of the convolution in the model, the widths of the posterior distributions for the model parameters were systematically underestimated.
This effect arises because we do not account explicitly for the impact of the window function convolution on the covariance of the data; it is difficult to overcome computationally, especially with such a large data set ($\sim 10^7$ data points). To overcome this issue we performed the simulations using artificial data, described above, both with and without the effects of the window function and the use of the convolution in the model. This helped us to understand how the convolution affected our ability to measure the true posterior widths, which allowed us to account for the systematic underestimate of the credible regions of the posterior when modelling the power spectrum of the observed BiSON SMMF. We also analysed the data as daily, one-day-cadence averages; this gave a higher fill ($\sim 55$ per cent) but a lower Nyquist frequency ($\sim 5.787$~mHz). Because of the much lower Nyquist frequency, modelling the background power spectral density was more challenging but the duty cycle was approximately three times higher, resulting in a smaller effect from the window function. We note that we recovered results in our analysis of the daily averaged data that were consistent with those from the analysis of the data with a 40-second cadence. \section{Data} \label{sec:data} \subsection{Summary of the Data Set} \citet{chaplin_studies_2003} provided the first examination of the SMMF using data from the Birmingham Solar Oscillations Network (BiSON), and the work presented in this paper is a continuation of that study. BiSON is a six-station, ground-based, global network of telescopes continuously monitoring the Sun, which principally makes precise measurements of the line-of-sight (LOS) velocity of the photosphere due to solar $p$ mode oscillations \citep{hale_performance_2016}.
Through the use of polarizing optics and additional electronics, the BiSON spectrometers can measure both the disc-averaged LOS velocity and magnetic field in the photosphere \citep{chaplin_studies_2003}; however, not all BiSON sites measure the SMMF. In this study we focus on the data collected by the Sutherland node in South Africa, which was also used by \cite{chaplin_studies_2003}. This is the only station that has had the capability to measure and collect data on the SMMF long-term. Data are sampled on a 40-second cadence, and the SMMF data collected by the Sutherland station span the epoch from 01/1992 -- 12/2012 (i.e. covering 7643~days). Over this period, the average duty cycle of the 40-second data is $\sim 15.6$ per cent. If instead we take a daily average of the BiSON SMMF, the average duty cycle is $\sim 55.2$ per cent. This gives a higher duty cycle but a lower Nyquist frequency. Because of the much lower Nyquist frequency, modelling the background power spectral density is more challenging; therefore we use the 40-second cadence data in this work. However, both data sets return similar results; we discuss later in Section~\ref{sec:method_modelling} how we handled the impact of the low duty cycle of the 40-second data. A comparison of the daily-averaged SMMF observations made by BiSON to those made by the Wilcox Solar Observatory (WSO) is given in \citet{chaplin_studies_2003}. \subsection{Obtaining the SMMF from BiSON} To acquire the SMMF from BiSON data, the method described by \citet{chaplin_studies_2003} was adopted; here we discuss the key aspects. Each BiSON site employs a resonant scattering spectrometer (RSS) to measure the Doppler shift of the Zeeman $^{2}\mathrm{S}_{1/2} \, \rightarrow \, ^{2}\mathrm{P}_{1/2}$ line of potassium, at 769.9~nm \citep{brookes_resonant-scattering_1978}. A potassium vapour cell placed within a longitudinal magnetic field Zeeman splits the laboratory line into the two allowed D1 transitions \citep{lund_spatial_2017}. 
The intensity of the longer wavelength (red; $I_R$) and shorter wavelength (blue; $I_B$) components of the line may be measured by the RSS almost simultaneously, by using polarizing optics to switch between the red and blue wings of the line, to form the ratio given by equation~(\ref{eq:ratio}) which is used as a proxy for the Doppler shift from the LOS velocity of the photosphere (see \citet{brookes_observation_1976, brookes_resonant-scattering_1978, elsworth_performance_1995, chaplin_studies_2003, broomhall_new_2009, davies_bison_2014, lund_spatial_2017}): \begin{equation} \mathcal{R} = \frac{I_B - I_R}{I_B + I_R} \, . \label{eq:ratio} \end{equation} Photospheric magnetic fields Zeeman split the Fraunhofer line and the Zeeman-split components have opposite senses of circular polarization \citep{chaplin_studies_2003}. Additional polarizing optics are used in the RSS to manipulate the sense of circular polarization (either + or -) that is passed through the instrument. The ratio $\mathcal{R}_{+}$ or $\mathcal{R}_{-}$ is formed, and the ratios $\mathcal{R}_{\pm}$ would be equal if there was no magnetic field present. 
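To make this concrete, a toy sketch (purely illustrative, not the BiSON instrument model: the symmetric line profile, passband positions, and displacement value are all assumptions) shows that equal and opposite Zeeman displacements of the line centre for the two senses of polarisation produce ratios of equal magnitude and opposite sign, so their difference isolates the magnetic contribution:

```python
def line_intensity(offset):
    # toy symmetric (Lorentzian) line profile as a function of the offset
    # of the measurement passband from the line centre (arbitrary units)
    return 1.0 / (1.0 + offset**2)

def ratio(shift):
    # R = (I_B - I_R) / (I_B + I_R), with blue/red passbands at -1 and +1
    # and the line centre displaced by `shift`
    i_b = line_intensity(-1.0 - shift)
    i_r = line_intensity(+1.0 - shift)
    return (i_b - i_r) / (i_b + i_r)

# equal and opposite Zeeman displacements for the two senses of polarisation
delta_b = 0.05
r_plus = ratio(+delta_b)
r_minus = ratio(-delta_b)
```

With no displacement the two intensities are equal and the ratio vanishes; with opposite displacements the two ratios differ, which is the signal exploited in the next paragraph.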
The observed ratio ($\mathcal{R}_{\pm}$) may be decomposed as: \begin{equation} \mathcal{R}_{\pm} = \mathcal{R}_{\mathrm{orb}} + \mathcal{R}_{\mathrm{spin}} + \mathcal{R}_{\mathrm{grs}} + \delta {r}_{\mathrm{osc}}(t) \pm \delta {r}_{\mathrm{B}}(t) \, , \label{eq:vel_comp} \end{equation} where $\mathcal{R}_{\mathrm{orb}}$ is due to the radial component of the Earth's orbital velocity around the Sun, $\mathcal{R}_{\mathrm{spin}}$ is due to the component towards the Sun of the Earth's diurnal rotation about its spin axis as a function of latitude and time, $\mathcal{R}_{\mathrm{grs}}$ is from the gravitational red-shift of the solar line \citep{elsworth_techniques_1995, dumbill_observation_1999}, $\delta {r}_{\mathrm{osc}}(t)$ is due to the LOS velocity due to $p$ mode oscillations, and $\delta {r}_B(t)$ is due to the magnetic field ($\pm$ denotes the polarity of the Zeeman-split line that is being observed) \citep{dumbill_observation_1999}. The effect of the magnetic field on the ratio is shown in Fig.~\ref{fig:ratio_split}, and the difference between the ratios of opposite magnetic polarity is twice the magnetic ratio residual, i.e.: \begin{equation} \mathcal{R}_{+} - \mathcal{R}_{-} = 2 \, \delta {r}_{\mathrm{B}}(t) \, . \label{eq:R_diff} \end{equation} \begin{figure} \includegraphics[width=\columnwidth]{Figures/Fred_ratio_zoom.pdf} \caption{An example of the BiSON ratio data over a 30-minute period. The separation between the two ratios is due to the solar mean magnetic field and the oscillations are due to the 5-minute $p$ mode signal.} \label{fig:ratio_split} \end{figure} The BiSON RSS measures the velocity variation on the solar disc, and therefore a calibration from the ratio to a velocity is necessary. One method of calibration is achieved by first fitting a 2nd- or 3rd-order polynomial as a function of velocity to the observed ratio averaged over both magnetic polarities, as discussed by \citet{elsworth_techniques_1995}.
Here we chose to fit the ratio in terms of velocity, $\mathcal{R}_{\mathrm{calc}}(u)$, i.e., \begin{equation} \mathcal{R}_{\mathrm{calc}}(u) = \sum_{n} \mathcal{R}_{n} u^n \, , \label{eq:calc_ratio} \end{equation} where: \begin{equation} u = v_{\mathrm{orb}} + v_{\mathrm{spin}} \, , \label{eq:stn_vel} \end{equation} and $v_{\mathrm{orb}}$ is the velocity component related to the ratio, $\mathcal{R}_{\mathrm{orb}}$; $v_{\mathrm{spin}}$ is related to the ratio, $\mathcal{R}_{\mathrm{spin}}$; $n$ is the polynomial order. It is possible to see that through the removal of $\mathcal{R}_{\mathrm{calc}}(u)$ (which we set up to also account for $\mathcal{R}_{\mathrm{grs}}$) from the observed ratios, one is left with the ratio residuals of the $p$ mode oscillations and the magnetic field, i.e., \begin{equation} \mathcal{R}_{\pm} - \mathcal{R}_{\mathrm{calc}}(u) = \delta {r}_{\mathrm{osc}}(t) \pm \delta {r}_{\mathrm{B}}(t) \, . \label{eq:ratio_resid} \end{equation} Furthermore, conversion from ratio residuals into velocity residuals uses the calibration given by equation~(\ref{eq:vel_resid}): \begin{equation} \delta v(t) = \left( \frac{d\mathcal{R}_{calc}}{dV} \right)^{-1} \, \delta {r}(t) \label{eq:vel_resid} \, . 
\end{equation} In order to finally obtain the SMMF in units of magnetic field, one must combine equation~(\ref{eq:R_diff}) and equation~(\ref{eq:vel_resid}) with the conversion factor in equation~(\ref{eq:K_B}) \citep{dumbill_observation_1999}, and the entire procedure can be simplified into: \begin{equation} B(t) = \frac{1}{2} \left( \frac{d\mathcal{R}_{calc}}{dV} \right)^{-1} \frac{(\mathcal{R}_{+} - \mathcal{R}_{-})}{K_B} \, , \label{eq:simplified_SMMF_cal} \end{equation} where \begin{equation} K_B = \frac{8}{3} \, \frac{\mu_B}{h} \, \frac{c}{\nu} \approx 2.89 \, \mathrm{ms}^{-1} \, \mathrm{G}^{-1} \, , \label{eq:K_B} \end{equation} and $\mu_B$ is the Bohr magneton, $h$ is Planck's constant, $c$ is the speed of light, and $\nu$ is the frequency of the photons. Through the application of this methodology, one acquires the SMMF as shown in Fig.~\ref{fig:SMMF_TS}. The power spectrum of the full, 7643-day Sutherland data set is shown in Fig.~\ref{fig:SMMF_FT}, and it shows a strong rotational signal at a period of $\sim27$~days. The power spectrum of the SMMF is shown again in Fig.~\ref{fig:SMMF_40s_PSD} on a logarithmic scale covering the entire frequency range, which highlights the broadband background component of the power spectrum. \begin{figure} \subfloat[Time-series of BiSON 40-s cadence SMMF \label{fig:SMMF_TS}]{\includegraphics[width=0.98\columnwidth, right]{Figures/BiSON_full_TS.pdf}} \\ \subfloat[Power spectrum of BiSON 40-s cadence SMMF \label{fig:SMMF_FT}]{\includegraphics[width=\columnwidth]{Figures/BiSON_lin_mu_PSD.pdf}} \caption{(a) 40-second cadence observations of the SMMF from the Sutherland BiSON station between 1992 and 2012. The sense of the field was chosen to match both \citet{chaplin_studies_2003} and the WSO observations, where positive is for a field pointing outwards from the Sun.
(b) Power spectrum of the SMMF on a 40-second cadence truncated to $10 \, \mu\mathrm{Hz}$; note that the Nyquist frequency is 12500~$\mu$Hz.} \label{fig:BiSON_SMMF} \end{figure} \section{Results} \label{sec:results} \subsection{Rotation} \label{sec:rot_results} From the adjusted posterior distributions for each of the parameters, acquired through modelling the power spectrum, we were able to measure the fundamental rotational frequency and linewidth of the RM component. The latter was assumed to be the same for each peak. In Table~\ref{tab:full_fit_params} the median values of the marginalised posterior distributions for each of the model parameters of equation~(\ref{eq:PSD_fit_conv}) are displayed. The resultant posterior distributions were approximately normally distributed and there was no significant covariance between parameters, therefore reported uncertainties on the parameters correspond to the $68$ per cent credible intervals either side of the median in the posterior distributions, adjusted for the systematic window function effects. In addition, we show the raw data with the model fit over-plotted in Fig.~\ref{fig:full_PSD_fit} and Fig.~\ref{fig:full_PSD_fit_linear}, on logarithmic and linear scales, to highlight the fit over the full frequency range and the RM peaks, respectively. \begin{table} \caption{Power spectrum model median results.
Numbers in brackets denote uncertainties on the last 2 digits, and all uncertainties correspond to the $68 \%$ credible intervals either side of the median for adjusted posterior widths.} \label{tab:full_fit_params} \begin{tabular}{l r r l r r } \hline {\bf $\theta$} & {Value} & {Unit} & {\bf $\theta$} & {Value} & {Unit} \\ \hline {$\nu_0$} & {0.4270$\left(_{-18}^{+18}\right)$} & {$\mu\mathrm{Hz} $} & {$A_4$} & {32.6$\pm2.1$} & {$\mathrm{mG}$} \\ {$\Gamma$} & {0.0264$\left(_{-35}^{+35}\right)$} & {$\mu\mathrm{Hz} $} & {$\tau$} & {51.8$\pm6.8$} & {$\mathrm{days}$} \\ {$A_1$} & {166.0$\pm10.7 $} & {$\mathrm{mG}$} & {$\sigma$} & {83.4$\pm5.4$} & {$\mathrm{mG}$} \\ {$A_2$} & {115.9$\pm7.4$} & {$\mathrm{mG}$} & {$c$} & {0.2103$\left(_{-03}^{+03}\right)$} & {$\mathrm{G}^2\mathrm{Hz}^{-1}$} \\ {$A_3$} & {83.2$\pm5.3$} & {$\mathrm{mG}$} & {} & {} & {} \\ \hline \end{tabular} \end{table} \begin{figure} \subfloat[Full power spectrum of the SMMF on logarithmic axes \label{fig:full_PSD_fit}]{\includegraphics[width=0.98\columnwidth, right]{Figures/BiSON_PSD_model_log.pdf}} \\ \subfloat[Power spectrum of the SMMF on linear axes, up to a frequency of $2.5 \, \mu\mathrm{Hz}$ \label{fig:full_PSD_fit_linear}]{\includegraphics[width=\columnwidth]{Figures/BiSON_PSD_model_lin.pdf}} \caption{Power spectrum and the best-fitting model for: (a) the full power spectrum of the SMMF on logarithmic axes. (b) Power spectrum of the SMMF on linear axes, up to a frequency of $2.5 \, \mu\mathrm{Hz}$ in order to show the fundamental signal peaks due to rotation-modulated ARs. The data is displayed in black and the model is shown in green.} \label{fig:PSD_fits} \end{figure} The central frequency of the model, $\nu_0$, implies a fundamental synodic rotation period of $27.11 \pm 0.11$~days, and hence a sidereal rotation period of $25.23\pm0.11$~days. 
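The derived quantities quoted here and in the following subsections can be cross-checked from the central values in Table~\ref{tab:full_fit_params} (a sketch using central values only; it assumes the conventional lifetime relation $\tau = (\pi \Gamma)^{-1}$ and the magnetic-feature rotation law quoted later in equation~(\ref{eq:diff_rot_freq})):

```python
import math

# central values from Table 1 (uncertainties omitted in this sketch)
NU0_MUHZ = 0.4270       # RM central frequency
GAMMA_MUHZ = 0.0264     # RM linewidth
P_ORBIT_DAYS = 365.25   # Earth's orbital period, for the synodic->sidereal conversion

# synodic period implied by nu_0, and the corresponding sidereal period
p_syn = 1.0 / (NU0_MUHZ * 1e-6) / 86400.0
p_sid = 1.0 / (1.0 / p_syn + 1.0 / P_ORBIT_DAYS)

# lifetime from the linewidth, assuming tau = 1 / (pi * Gamma)
tau_days = 1.0 / (math.pi * GAMMA_MUHZ * 1e-6) / 86400.0

def magnetic_rotation_nhz(lat_deg):
    # Snodgrass-type law for magnetic features; mu = cos(colatitude) = sin(latitude)
    mu = math.sin(math.radians(lat_deg))
    return 462.0 - 74.0 * mu**2 - 53.0 * mu**4

# latitude whose magnetic rotation rate best matches the measured sidereal rate
nu_sid_nhz = 1e9 / (p_sid * 86400.0)
lat_deg = min(range(0, 91), key=lambda l: abs(magnetic_rotation_nhz(l) - nu_sid_nhz))
```

These central values reproduce the quoted synodic period of $\sim 27.11$~days, sidereal period of $\sim 25.23$~days, lifetime of $\sim 139.6$~days, and time-averaged latitude of $\sim 12^{\circ}$.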
The rotation period measured here is in agreement with other literature for the rotation signal in the SMMF \citep{chaplin_studies_2003, xie_temporal_2017}. According to the model for differential rotation in equation~(\ref{eq:diff_rot_freq}), the measured rotation period suggests that the observed SMMF is sensitive to a time-averaged latitude of around $12^{\circ}$. This latitude is consistent with those spanned by sunspots and ARs over the solar activity cycle \citep{maunder_note_1904, mcintosh_deciphering_2014}, and particularly during the declining phase of the solar cycle \citep{thomas_asteroseismic_2019}. This suggests the origin of the RM component of the SMMF could be linked to active regions. \subsection{Lifetimes} From the measured linewidth of the Lorentzian peaks, we have calculated the lifetime of the RM component using equation~(\ref{eq:mode_lifetime}). The linewidth suggests a RM lifetime of $139.6 \pm 18.5$~days, which is in the region of $\sim 20 \pm 3$~weeks. The effects of differential rotation and AR migration do not impact our ability to measure the linewidth, and thus lifetime, of the peaks (as explained in Appendix~\ref{sec:smearing}). The typical lifetime of active magnetic regions and sunspots is on the order of weeks to months \citep{zwaan_solar_1981, howard_sunspot_2001, hathaway_sunspot_2008}, therefore the observations of the SMMF by BiSON measure a lifetime of the RM component which is consistent with the lifetime of ARs and sunspots. This again suggests that the RM signal is linked to active regions of magnetic field, suggesting them as a possible source of the signal. When verifying these results by repeating the analysis with a daily averaged SMMF (see Section~\ref{sec:method_modelling}), the results for the linewidth were consistent. \section{Discussions and Conclusions} \label{sec:conc} We have presented, for the first time, a frequency-domain analysis of $\sim$20 years of high-cadence (40-second) BiSON observations of the SMMF. 
The investigation of very high-cadence observations of the SMMF allowed the exploration of the power spectrum up to $12.5$~mHz and the long duration of observations provided near-nHz resolution in the power spectrum which allowed us to measure the parameters associated with the rotationally modulated (RM) component of the SMMF. We have measured the central frequency of the RM component, allowing us to infer the sidereal period of the RM to be $25.23~\pm~0.11$~days. This rotation period corresponds to an activity-cycle-averaged latitude of $\sim 12^{\circ}$, which is in the region of the typical latitudes for active magnetic regions averaged over the activity cycle \citep{maunder_note_1904, mcintosh_deciphering_2014, thomas_asteroseismic_2019}. For the first time, using the linewidth of the peaks we have measured the lifetime of the RM component in the SMMF. The lifetime of the source of the RM component was inferred to be $139.6~\pm~18.5$ days. This lifetime is consistent with those of active magnetic regions and sunspots, in the region of weeks to months \citep{zwaan_solar_1981, hathaway_sunspot_2008}. There has been considerable debate in the literature concerning the origin of the SMMF. In this study, as the properties of the RM component are consistent with ARs, we have presented novel evidence suggesting them as the source of the SMMF. \appendix \section{Testing the Effects of Differential Rotation and Migration} \label{sec:smearing} As a result of solar differential rotation and the migration of ARs towards the solar equator during the activity cycle, it is understood that the rotation period of ARs will vary throughout the solar cycle. As we have inferred that the RM component of the SMMF is likely linked to ARs, we may therefore assume that the RM component is also sensitive to latitudinal migration. Here we analysed the effect of this migration and differential rotation on our ability to make inferences on the lifetime of the RM component.
Several studies have modelled the solar differential rotation, and its variation with latitude and radius of the Sun (see \citealt{beck_comparison_2000} and \citealt{howe_solar_2009} for in-depth reviews of the literature on solar differential rotation). Magnetic features have been shown to be sensitive to rotation deeper than the photosphere; therefore in general magnetic features can be seen to rotate with a shorter period than the surface plasma \citep{howe_solar_2009}. \citet{chaplin_distortion_2008} analysed the effects of differential rotation on the shape of asteroseismic $p$ modes of oscillation with a low angular degree (i.e. $l \leq 3$), and showed that the consequence of differential rotation is to broaden the observed linewidth of a mode peak. The authors provide a model of the resultant profile of a $p$ mode whose frequency is shifted in time to be a time-average of several instantaneous Lorentzian profiles with central frequency $\nu(t)$, given by: \begin{equation} \langle P(\nu) \rangle \, = \, \frac{1}{T} \int^T_0 H \left( 1 \, + \, \left( \frac{\nu - \nu(t)}{\Gamma /2} \right)^2 \right)^{-1} dt \, , \label{eq:stacked_lorentzians} \end{equation} where the angled brackets indicate an average over time, $H$ and $\Gamma$ are the mode height (maximum power spectral density) and linewidth, respectively, and the full period of observation is given by $T$.
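Equation~(\ref{eq:stacked_lorentzians}) is simple to evaluate numerically; the sketch below (illustrative values, and uniform weighting rather than the squared-SMMF weighting used later) shows the expected behaviour: a frequency drift much smaller than the linewidth leaves the peak essentially unbroadened, while a drift much larger than the linewidth smears it out:

```python
GAMMA = 0.0264   # muHz, fitted linewidth
NU0 = 0.4270     # muHz, fitted central frequency

def lorentzian(nu, nu0, height, gamma):
    # instantaneous Lorentzian mode profile
    x = (nu - nu0) / (gamma / 2.0)
    return height / (1.0 + x**2)

def time_averaged_profile(nu, centres, height, gamma):
    # discrete version of the time average over drifting instantaneous profiles
    return sum(lorentzian(nu, c, height, gamma) for c in centres) / len(centres)

# drift much smaller than the linewidth: peak is essentially unbroadened
small_drift = [NU0 + 0.0001 * t / 99.0 for t in range(100)]
peak_small = time_averaged_profile(NU0 + 0.00005, small_drift, 1.0, GAMMA)

# drift much larger than the linewidth: peak is strongly smeared
large_drift = [NU0 + 0.2 * t / 99.0 for t in range(100)]
peak_large = time_averaged_profile(NU0 + 0.1, large_drift, 1.0, GAMMA)
```

The measured total shift ($\Delta\nu \approx 0.013 \, \mu\mathrm{Hz}$, comparable to $\Gamma$) sits between these regimes, which is why the detailed tests in this appendix are needed.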
\citet{chaplin_distortion_2008} also show that by assuming a simple, linear variation of the unperturbed frequency, $\nu_0$, from the start to the end of the time-series by a total frequency shift $\Delta\nu$: \begin{equation} \nu(t) \, = \, \nu_0 \, + \Delta\nu \frac{t}{T} \, , \label{eq:linear_variation} \end{equation} the resultant profile of a $p$ mode can analytically be modelled by equation~(\ref{eq:atan_lorentzians}): \begin{equation} \langle P(\nu) \rangle \, = \, \frac{H}{2\epsilon} \arctan \left( \frac{2 \epsilon}{1 - \epsilon^2 + X^2 } \right) \, , \label{eq:atan_lorentzians} \end{equation} where $\epsilon$ and $X$ are defined in equation~\ref{eq:epsilon} and equation~\ref{eq:X}: \begin{equation} \epsilon \, = \, \frac{\Delta\nu}{\Gamma} \, ; \label{eq:epsilon} \end{equation} \begin{equation} X \, = \, \frac{\nu - [\nu_0 + (\Delta\nu/2)]}{\Gamma /2} \, . \label{eq:X} \end{equation} As the mode linewidths are broadened by this effect, we evaluated whether our ability to resolve the true linewidth of the RM, and hence the lifetime, was affected. In order to evaluate this we computed the broadened profiles given by both equation~(\ref{eq:stacked_lorentzians}) and equation~(\ref{eq:atan_lorentzians}), and fit the model for a single Lorentzian peak, to determine whether there was a notable difference in the linewidth. In the first instance, we computed the broadened peak using equation~(\ref{eq:stacked_lorentzians}). Over the duration of the observations, we computed the daily instantaneous profile, $P(\nu(t))$. The time-averaged profile, $ \langle P(\nu) \rangle$, was a weighted average of each instantaneous profile, where the weights were given by the squared-daily-SMMF, in order to allow a larger broadening contribution at times when the SMMF amplitude is higher. In the second instance, we computed the broadened peak using equation~(\ref{eq:atan_lorentzians}). Over the duration of the observations the daily frequency shift, $\Delta\nu$, was computed. 
The time-averaged shift, $\Delta\nu$, was a weighted average, where again the weightings were given by the squared-daily-SMMF. To determine the shift in the rotation rate as the active bands migrate to the solar equator, we used the model of the solar differential rotation as traced by magnetic features ($\Omega_m$) given by: \begin{equation} \frac{\Omega_m}{2 \pi} \, = \, 462 - 74 \mu^2 - 53 \mu^4 \, \mathrm{nHz} \, , \label{eq:diff_rot_freq} \end{equation} where $\mu \, = \, \cos \theta $ and $\theta$ is the co-latitude \citep{snodgrass_magnetic_1983, brown_inferring_1989}. The time-dependence on the latitude of the active regions used the best-fitting quadratic model by \cite{li_latitude_2001-1}. In both instances, the broadened peak was modelled as a single Lorentzian peak using equation~(\ref{eq:symm_lorentzian}). Again, we use \textsc{emcee} \citep{foreman-mackey_emcee_2013} to explore the posterior parameter space, with priors similar to the above full-fit on the relevant parameters. \subsection{Results: Time-Averaged Broadened Profile} Over the entire duration of the SMMF observations, the time-averaged profile was calculated, using equation~(\ref{eq:stacked_lorentzians}), and this is shown in Fig.~\ref{fig:weighted_shift}. The broadened mode used the input parameters outlined in Table~\ref{tab:full_fit_params}, however with the background parameter set to zero. By eye, the broadened profile does not appear to have a significantly larger linewidth. The input linewidth was $0.0264 \pm 0.0035 \, \mu\mathrm{Hz} $, and the fit to the time-averaged broadened peak produced a linewidth of $0.0262^{+0.0038}_{-0.0037} \, \mu\mathrm{Hz} $. The linewidth of the broadened peak under this method was rather unchanged from that of the true peak, and both linewidths are within uncertainties of each other. 
This result shows that numerically, the mode broadening effect of differential rotation and migration does not affect our ability to resolve the linewidth of the peak, and hence the predicted lifetime of the RM component of the SMMF. \begin{figure} \centering \subfloat[Time-Averaged Broadened Profile \label{fig:weighted_shift}]{\includegraphics[width=0.9\columnwidth]{Figures/weighted_shifted_peak.pdf}} \\ \subfloat[Analytically Broadened Profile \label{fig:atan_shift}]{\includegraphics[width=0.9\columnwidth]{Figures/chaplin_shifted_peak.pdf}} \caption{(a) Shows the peak distribution before and after the time-averaged broadening, and the fit to the broadened peak. (b) Shows the peak distribution before and after the analytical broadening, and the fit to the broadened peak. In both plots the broadened peaks have been shifted by the relevant frequency to overlay them on top of the true $\nu_0$ for comparison.}\label{fig:shifted_peaks} \end{figure} \subsection{Results: Analytically Broadened Profile} The time-averaged frequency shift due to differential rotation was calculated, much in the same way as equation~(\ref{eq:stacked_lorentzians}), to be $\Delta\nu \, = \,0.01285 \, \mu\mathrm{Hz}$. This shift was used to generate the broadened profile using equation~(\ref{eq:atan_lorentzians}). The broadened mode distribution also used the input parameters outlined in Table~\ref{tab:full_fit_params}, however with the background parameter set to zero. Similar to the numerically broadened peak, by eye, the analytically broadened profile does not appear to have a significantly larger linewidth (see Fig.~\ref{fig:atan_shift}). The input linewidth was $0.0264 \pm 0.0035 \, \mu\mathrm{Hz} $, and the linewidth of the analytically broadened peak from the fit was $0.0263^{+0.0038}_{-0.0037} \, \mu\mathrm{Hz} $, which was within the uncertainties of the linewidth of the input peak. 
This result shows, analytically, that the mode broadening effect of differential rotation and migration does not affect our ability to resolve the linewidth of the peak, and hence the lifetime of the RM component of the SMMF. \subsection{Discussion} Both broadening methods applied were shown to have a negligible effect on the linewidth of the profile, and our ability to resolve the true linewidth of the peak remains unaffected. This result provides confidence that the measured linewidth in Table~\ref{tab:full_fit_params} was the true linewidth of the RM peaks, providing the correct lifetime for the RM component, unaffected by migration and differential rotation. \section*{Acknowledgements} We would like to thank all those who are, or have been, associated with BiSON. The authors would like to acknowledge the support of the UK Science and Technology Facilities Council (STFC). Funding for the Stellar Astrophysics Centre (SAC) is provided by The Danish National Research Foundation (Grant DNRF106). This research also made use of the open-source Python packages: \textsc{Astropy},\footnote{http://www.astropy.org} a community-developed core Python package for Astronomy \citep{robitaille_astropy_2013, the_astropy_collaboration_astropy_2018}, \textsc{corner} \citep{foreman-mackey_corner.py_2016}, \textsc{emcee} \citep{foreman-mackey_emcee_2013}, \textsc{Matplotlib} \citep{hunter_matplotlib_2007}, \textsc{Numpy} \citep{harris_array_2020}, \textsc{Pandas} \citep{mckinney_data_2010}, and \textsc{SciPy} \citep{jones_scipy_2001}. \section*{Data Availability} This work uses data from the Birmingham Solar-Oscillations Network (BiSON), which may be accessed via the \href{http://bison.ph.bham.ac.uk/opendata}{BiSON Open Data Portal}.\footnote{http://bison.ph.bham.ac.uk/opendata} \bibliographystyle{mnras}
\section{INTRODUCTION} In Reinforcement Learning (RL), an agent interacts with an unknown environment and seeks to learn a policy which maps states to distributions over actions to maximise a long-term numerical reward. Combined with deep neural networks as function approximators, policy gradient methods have enjoyed many empirical successes on RL problems such as video games~\citep{mnih2016asynchronous} and robotics~\citep{levine2016end}. Their recent success can be attributed to their ability to scale gracefully to high dimensional state-action spaces and complex dynamics. The main idea behind policy gradient methods is to parametrize the policy and perform stochastic gradient ascent on the discounted cumulative reward directly~\citep{sutton2000policy}. To estimate the gradient, we sample trajectories from the distribution induced by the policy. Due to the stochasticity of both policy and environment, the variance of the gradient estimate can be very large and can lead to significant policy degradation. Instead of directly optimizing the cumulative rewards, which can be challenging due to large variance, some approaches~\citep{kakade2002approximately, azar2012dynamic, pirotta2013safe, schulman2015trust} propose to optimize a surrogate objective that can provide local improvements to the current policy at each iteration. The idea is that the advantage function of a policy $\pi$ can produce a good estimate of the performance of another policy $\pi'$ when the two policies give rise to similar state visitation distributions. Therefore, these approaches explicitly control the state visitation distribution shift between successive policies. However, controlling the state visitation distribution shift requires measuring it, which is non-trivial. Direct methods are prohibitively expensive.
Therefore, in order to make the optimization tractable, the aforementioned methods rely on constraining action probabilities by mixing policies~\citep{kakade2002approximately, pirotta2013safe}, introducing trust regions~\citep{schulman2015trust, achiam2017constrained} or clipping the surrogate objective~\citep{schulman2017proximal, wang2019truly}. Our key motivation in this work is that constraining the probabilities of the immediate future actions might not be enough to ensure that the surrogate objective is still a valid estimate of the performance of the next policy and consequently might lead to instability and premature convergence. Instead, we argue that we should reason about the long-term effect of the policies on the distribution of the future states. In particular, we directly consider the divergence between state-action visitation distributions induced by successive policies and use it as a regularization term added to the surrogate objective. This regularization term is itself optimized in an adversarial and off-policy manner by leveraging recent advances in off-policy policy evaluation~\citep{nachum2019dualdice} and off-policy imitation learning~\citep{kostrikov2019imitation}. We incorporate these ideas in the PPO algorithm in order to ensure safer policy learning and better reuse of off-policy data. We call our proposed method PPO-DICE. The present paper is organized as follows: after reviewing conservative approaches for policy learning, we provide theoretical insights motivating our method. We explain how an off-policy adversarial formulation can be derived to optimize the regularization term. We then present the algorithmic details of our proposed method. Finally, we show empirical evidence of the benefits of PPO-DICE as well as ablation studies.
\section{PRELIMINARIES} \subsection{MARKOV DECISION PROCESSES AND VISITATION DISTRIBUTIONS} In reinforcement learning, an agent interacts with its environment, which we model as a discounted Markov Decision Process (MDP) $(\mathcal{S}, \mathcal{A}, \gamma, \P, r, \rho)$ with state space $\mathcal{S}$, action space $\mathcal{A}$, discount factor $\gamma \in [0, 1)$, transition model $\P$ where $\P(s' \mid s, a)$ is the probability of transitioning into state $s'$ upon taking action $a$ in state $s$, reward function $r : (\mathcal{S} \times \mathcal{A}) \rightarrow \mathbb{R}$ and initial distribution $\rho$ over $\mathcal{S}$. We denote by $\pi(a \mid s)$ the probability of choosing action $a$ in state $s$ under the policy $\pi$. The value function for policy $\pi$, denoted $V^{\pi}:\mathcal{S} \rightarrow \mathbb{R}$, represents the expected sum of discounted rewards along the trajectories induced by the policy in the MDP starting at state $s$: $V^{\pi}(s) \triangleq \mathbb{E} \left [\sum_{t=0}^{\infty} \gamma^t r_t \mid s_0 = s, \pi \right]$. Similarly, the action-value ($Q$-value) function $Q^{\pi}:\mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ and the \textit{advantage} function $A^\pi: \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ are defined as: $Q^{\pi}(s, a) \triangleq \mathbb{E} \left [\sum_{t=0}^{\infty} \gamma^t r_t \mid (s_0, a_0)=(s, a), \pi \right]$ and $A^\pi(s, a) \triangleq Q^\pi(s, a) - V^\pi(s)$. The goal of the agent is to find a policy $\pi$ that maximizes the expected value under the initial state distribution $\rho$: \begin{equation*} \max_{\pi} J(\pi) \triangleq (1-\gamma) \mathbb{E}_{s \sim \rho} [V^\pi(s)].
\end{equation*} We define the discounted state visitation distribution $d^{\pi}_\rho$ induced by a policy $\pi$: \begin{equation*} d^\pi_\rho(s) \triangleq (1 - \gamma) \sum_{t=0}^\infty \gamma^t \mathrm{Pr}^\pi(s_t =s \mid s_0 \sim \rho), \end{equation*} where $\mathrm{Pr}^\pi(s_t =s \mid s_0 \sim \rho)$ is the probability that $s_t = s$, after we execute $\pi$ for $t$ steps, starting from initial state $s_0$ distributed according to $\rho$. Similarly, we define the discounted state-action visitation distribution $\mu^\pi_\rho(s, a)$ of policy $\pi$ \begin{equation*} \mu_\rho^\pi(s, a) \triangleq (1 - \gamma) \sum_{t=0}^\infty \gamma^t \mathrm{Pr}^\pi(s_t =s, a_t = a \mid s_0 \sim \rho). \end{equation*} It is known \citep{puterman1990markov} that $\mu_\rho^\pi(s, a) = d_\rho^\pi(s) \cdot \pi(a \mid s)$ and that $\mu^\pi$ is characterized via: $\forall (s', a') \in \mathcal{S} \times \mathcal{A}$ \footnote{By abuse of notation, we confound probability distributions with their Radon–Nikodym derivative with respect to the Lebesgue measure (for continuous spaces) or counting measure (for discrete spaces).} \begin{align} \label{eq:visitaion_eq} \mu^\pi_\rho(s', a') & = (1 -\gamma) \rho(s') \pi(a' \mid s') \\ & \quad + \gamma \int \pi(a' \mid s') \P(s' \mid s, a) \mu^\pi_\rho(s, a) ds~da \nonumber, \end{align} \subsection{CONSERVATIVE UPDATE APPROACHES} Most policy training approaches in RL can be understood as updating a current policy $\pi$ to a new improved policy $\pi'$ based on the advantage function $A^\pi$ or an estimate $\hat{A} $ of it. We review here some popular approaches that implement conservative updates in order to stabilize policy training. First, let us state a key lemma from the seminal work of \citet{kakade2002approximately} that relates the performance difference between two policies to the advantage function. 
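The value, advantage, and visitation quantities defined above can be computed exactly in a small tabular MDP by solving the corresponding linear Bellman and flow equations. The following numpy sketch does so; all numbers are illustrative assumptions, not from the paper.

```python
import numpy as np

# Toy 2-state, 2-action MDP; every number here is an illustrative assumption.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])   # P[s, a, s']
r = np.array([[1.0, 0.0], [0.0, 0.5]])     # r(s, a)
rho = np.array([0.5, 0.5])                 # initial state distribution
gamma = 0.9

def solve(pi):
    """Exact V^pi, A^pi, d^pi_rho and mu^pi_rho for a tabular policy pi(a|s)."""
    P_pi = np.einsum('sa,sax->sx', pi, P)  # state-to-state kernel under pi
    r_pi = np.einsum('sa,sa->s', pi, r)
    # Bellman equation in matrix form: V = r_pi + gamma * P_pi V
    V = np.linalg.solve(np.eye(2) - gamma * P_pi, r_pi)
    Q = r + gamma * np.einsum('sax,x->sa', P, V)
    # flow equation (action-marginal of the mu fixed point):
    # d = (1 - gamma) * rho + gamma * P_pi^T d
    d = np.linalg.solve(np.eye(2) - gamma * P_pi.T, (1 - gamma) * rho)
    mu = d[:, None] * pi                   # mu(s, a) = d(s) * pi(a | s)
    return V, Q - V[:, None], d, mu

pi = np.array([[0.5, 0.5], [0.5, 0.5]])
V, A, d, mu = solve(pi)
```

Both $d^\pi_\rho$ and $\mu^\pi_\rho$ sum to one by construction, and the advantage averages to zero under the policy, $\mathbb{E}_{a \sim \pi(\cdot \mid s)}[A^\pi(s,a)] = 0$, which provides a quick sanity check on the linear solves.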
\begin{lemma}[The performance difference lemma \citep{kakade2002approximately}] \label{lemma:performance diff} For all policies $\pi$ and $\pi'$, \begin{equation}\label{eq:performance diff} J(\pi') = J(\pi) + \mathbb{E}_{s \sim d^{\pi'}_\rho} \mathbb{E}_{a \sim \pi'(. \mid s)} \left[ A^{\pi}(s, a) \right]. \end{equation} \end{lemma} This lemma implies that maximizing Equation~\eqref{eq:performance diff} will yield a new policy $\pi'$ with guaranteed performance improvement over a given policy $\pi$. Unfortunately, a naive direct application of this procedure would be prohibitively expensive since it requires estimating $d^{\pi'}_\rho$ for all $\pi'$ candidates. To address this issue, Conservative Policy Iteration (CPI) \citep{kakade2002approximately} optimizes a surrogate objective defined based on the current policy $\pi_i$ at each iteration $i$, \begin{equation} L_{\pi_i}(\pi') = J(\pi_i) + \mathbb{E}_{\textcolor{red}{s \sim d^{\pi_i}_\rho}} \mathbb{E}_{a \sim \pi'(. \mid s)} \left[ A^{\pi_i}(s, a) \right], \end{equation} by ignoring changes in state visitation distribution due to changes in the policy. Then, CPI returns the stochastic mixture $\pi_{i+1} = \alpha_i \pi_i^{+} + (1-\alpha_i) \pi_i$ where $\pi_i^{+} = \arg\max_{\pi'} L_{\pi_i}(\pi')$ is the greedy policy and $\alpha_i \in [0,1]$ is tuned to guarantee monotonic policy improvement. Inspired by CPI, the Trust Region Policy Optimization algorithm (TRPO)~\citep{schulman2015trust} extends the policy improvement step to any general stochastic policy rather than just mixture policies.
TRPO maximizes the same surrogate objective as CPI subject to a Kullback-Leibler (KL) divergence constraint that ensures the next policy $\pi_{i+1}$ stays within a $\delta$-neighborhood of the current policy $\pi_i$: \begin{align}\label{eq:trpo_update} \pi_{i+1} & = \arg\max_{\pi'} L_{\pi_i}(\pi') \\ \text{s.t.} & \quad \mathbb{E}_{s \sim d^{\pi_i}_\rho}\left[ D_{\mathrm{KL}}(\pi'( \cdot \mid s) \| \pi_i( \cdot \mid s))\right] \leq \delta, \nonumber \end{align} where $D_{\mathrm{KL}}$ is the Kullback–Leibler divergence. In practice, TRPO considers a differentiable parameterized policy $\{ \pi_\theta, \theta \in \Theta\}$ and solves the constrained problem~\eqref{eq:trpo_update} in parameter space $\Theta$. In particular, the step direction is estimated with conjugate gradients, which requires the computation of multiple Hessian-vector products. Therefore, this step can be computationally heavy. To address this computational bottleneck, Proximal Policy Optimization (PPO) \citep{schulman2017proximal} proposes replacing the KL-divergence-constrained objective~\eqref{eq:trpo_update} of TRPO by clipping the objective function directly as: \begin{align}\label{eq:ppo_update} L^{\mathrm{clip}}_{\pi_i}& (\pi') = \mathbb{E}_{ (s, a) \sim \mu^{\pi_i}_\rho}\Big[ \min\big\{ A^{\pi_i}(s, a) \cdot \kappa_{\pi'/\pi_i}(s, a), \nonumber \\ & A^{\pi_i}(s, a) \cdot \mathrm{clip}(\kappa_{\pi'/\pi_i}(s, a), 1-\epsilon, 1+\epsilon)\big\}\Big], \end{align} where $\epsilon >0$ and $\kappa_{\pi'/\pi_i}(s, a) = \frac{\pi'(a \mid s)}{\pi_i(a \mid s)}$ is the importance sampling ratio. \section{THEORETICAL INSIGHTS} In this section, we present the theoretical motivation of our proposed method. At a high level, the algorithms CPI, TRPO, and PPO follow similar policy update schemes. They optimize some surrogate performance objective ($L_{\pi_i}(\pi')$ for CPI and TRPO and $L^{\mathrm{clip}}_{\pi}(\pi')$ for PPO) while ensuring that the new policy $\pi_{i+1}$ stays in the vicinity of the current policy $\pi_i$.
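As a concrete illustration (a minimal sketch, not the authors' implementation), the clipped surrogate $L^{\mathrm{clip}}$ just defined amounts to a few lines of numpy once log-probabilities and advantage estimates are assumed to be available:

```python
import numpy as np

def ppo_clip_loss(new_logp, old_logp, adv, eps=0.2):
    """Clipped surrogate: ratios kappa = pi'(a|s) / pi_i(a|s) are formed from
    log-probabilities; the elementwise min yields a pessimistic objective."""
    ratio = np.exp(new_logp - old_logp)
    return np.mean(np.minimum(adv * ratio,
                              adv * np.clip(ratio, 1 - eps, 1 + eps)))

# illustrative batch of two actions: one became more likely (ratio 1.8),
# one less likely (ratio 0.2); advantages +1 and -1
loss = ppo_clip_loss(np.log([0.9, 0.1]), np.log([0.5, 0.5]),
                     np.array([1.0, -1.0]))  # -> 0.2
```

Note that clipping caps the positive-advantage term at ratio $1+\epsilon$ but, through the min, leaves the negative-advantage penalty unclipped from below, so the objective never overstates an improvement.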
The vicinity requirement is implemented in different ways: \begin{compactenum}[\hspace{0pt} 1.] \setlength{\itemsep}{2pt} \item CPI computes a sequence of stochastic policies that are mixtures between consecutive greedy policies. \item TRPO imposes a constraint on the KL divergence between the old policy and the new one ($ \mathbb{E}_{s \sim d^{\pi_i}_\rho}\left[ D_{\mathrm{KL}}(\pi'( \cdot \mid s) \| \pi_i( \cdot \mid s))\right] \leq \delta$). \item PPO directly clips the objective function based on the value of the importance sampling ratio $\kappa_{\pi'/\pi_i}$ between the old policy and the new one. \end{compactenum} Such conservative updates are critical for the stability of the policy optimization. In fact, the surrogate objective $L_{\pi_i}(\pi')$ (or its clipped version) is valid only in the neighbourhood of the current policy $\pi_i$, i.e., when $\pi'$ and $\pi_i$ visit all the states with similar probabilities. The following lemma formalizes this more precisely\footnote{The result is not novel; it can be found, for example, as an intermediate step in the proof of Theorem 1 in \citet{achiam2017constrained}.}: \begin{lemma} \label{lemma:lower_bound_perf} For all policies $\pi$ and $\pi'$, \begin{align} \label{eq:lower_bound_perf} J(\pi') & \geq L_\pi(\pi') - \epsilon^\pi D_{\mathrm{TV}}(d^{\pi'}_\rho \| d^{\pi}_\rho) \\ & \geq L^{\mathrm{clip}}_\pi(\pi') - \epsilon^\pi D_{\mathrm{TV}}(d^{\pi'}_\rho \| d^{\pi}_\rho), \nonumber \end{align} where $\epsilon^\pi = \max_{s \in \mathcal{S}} |\mathbb{E}_{a \sim \pi'(\cdot \mid s)} \left[ A^{\pi}(s, a) \right]|$ and $D_{\mathrm{TV}}$ is the total variation distance. \end{lemma} The proof is provided in the appendix for completeness. Lemma~\ref{lemma:lower_bound_perf} states that $L_\pi(\pi')$ (or $L^{\mathrm{clip}}_\pi(\pi')$) is a sensible lower bound to $J(\pi')$ as long as $\pi$ and $\pi'$ are close in terms of total variation distance between their corresponding state visitation distributions $d^{\pi'}_\rho$ and $d^\pi_\rho$.
However, the aforementioned approaches enforce closeness of $\pi'$ and $\pi$ in terms of their action probabilities rather than their state visitation distributions. This can be justified by the following inequality~\citep{achiam2017constrained}: \begin{equation} \label{eq:bound_tv_div} D_{\mathrm{TV}}(d^{\pi'}_\rho\| d^{\pi}_\rho) \leq \frac{2 \gamma}{1-\gamma} \mathbb{E}_{s \sim d^\pi_\rho} \left[ D_{\mathrm{TV}}( \pi'(. | s) \| \pi(. | s)) \right]. \end{equation} Plugging the last inequality~\eqref{eq:bound_tv_div} into~\eqref{eq:lower_bound_perf} leads to the following lower bound: \begin{equation}\label{eq:bound_pol} J(\pi') \geq L_\pi(\pi') - \frac{2\gamma\epsilon^\pi}{1-\gamma} \mathbb{E}_{s \sim d^\pi_\rho} \left[ D_{\mathrm{TV}}( \pi'(. | s) \| \pi(. | s)) \right]. \end{equation} The obtained lower bound~\eqref{eq:bound_pol} is, however, clearly looser than the one in inequality~\eqref{eq:lower_bound_perf}. Lower bound~\eqref{eq:bound_pol} suffers from an additional multiplicative factor $\frac{1}{1-\gamma}$, which is the effective planning horizon. It is essentially due to the fact that we are characterizing a long-horizon quantity, such as the state visitation distribution $d^\pi_\rho(s)$, by a one-step quantity, such as the action probabilities $\pi(\cdot \mid s)$. Therefore, algorithms that rely solely on action probabilities to define closeness between policies should be expected to suffer from instability and premature convergence in long-horizon problems.
Furthermore, in the exact case, if at iteration $i$ we take $\pi_{i+1} \leftarrow \arg \max_{\pi'} L_{\pi_i}(\pi') - \epsilon^{\pi_i} D_{\mathrm{TV}}(d^{\pi'}_\rho \| d^{\pi_i}_\rho)$, then \begin{align*} J(\pi_{i+1}) & \geq L_{\pi_i}(\pi_{i+1}) - \epsilon^{\pi_i} D_{\mathrm{TV}}(d^{\pi_{i+1}}_\rho \| d^{\pi_i}_\rho) \\ & \geq L_{\pi_i}(\pi_{i}) \tag{by optimality of $\pi_{i+1}$}\\ & = J(\pi_i). \end{align*} Therefore, this procedure provides a monotonic policy improvement, while TRPO suffers from a performance degradation that depends on the level of the trust region $\delta$ (see Proposition 1 in~\citet{achiam2017constrained}). It follows from our discussion that $D_{\mathrm{TV}}(d^{\pi'}_\rho \| d^{\pi}_\rho)$ is a more natural proximity term to ensure safer and more stable policy updates. Previous approaches excluded using this term because we don't have access to $d^{\pi'}_\rho$, which would require executing $\pi'$ in the environment. In the next section, we show how we can leverage recent advances in off-policy policy evaluation to address this issue. \section{OFF-POLICY FORMULATION OF DIVERGENCES} In this section, we explain how divergences between state-visitation distributions can be approximated. This is done by leveraging ideas from recent works on off-policy learning~\citep{nachum2019dualdice, kostrikov2019imitation}. Consider two different policies $\pi$ and $\pi'$. Suppose that we have access to state-action samples generated by executing the policy $\pi$ in the environment, i.e., $(s, a) \sim \mu_\rho^\pi$. As motivated by the last section, we aim to estimate $D_{\mathrm{TV}}(d^{\pi'}_\rho \| d^{\pi}_\rho)$ without requiring on-policy data from $\pi'$. Note that in order to avoid using importance sampling ratios, it is more convenient to estimate $D_{\mathrm{TV}}(\mu^{\pi'}_\rho \| \mu^{\pi}_\rho)$, i.e., the total variation distance between state-action visitation distributions rather than the divergence between state visitation distributions.
This is still a reasonable choice as $D_{\mathrm{TV}}(d^{\pi'}_\rho \| d^{\pi}_\rho)$ is upper bounded by $D_{\mathrm{TV}}(\mu^{\pi'}_\rho \| \mu^{\pi}_\rho)$ as shown below: \begin{align*} D_{\mathrm{TV}}(d^{\pi'}_\rho \| d^{\pi}_\rho) & = \int_s \Big|(d^{\pi'}_\rho - d^{\pi}_\rho)(s) \Big| ds \\ & = \int_s \Big | \int_a (\mu^{\pi'}_\rho - \mu^{\pi}_\rho)(s, a) da \Big | ds \\ & \leq \int_s \int_a \Big| (\mu^{\pi'}_\rho - \mu^{\pi}_\rho)(s, a)\Big| da~ds \\ & =D_{\mathrm{TV}}(\mu^{\pi'}_\rho \| \mu^{\pi}_\rho). \end{align*} The total variation distance belongs to a broad class of divergences known as $\phi$-divergences~\citep{sriperumbudur2009integral}. A $\phi$-divergence is defined as \begin{equation}\label{eq:phi_div} D_\phi(\mu^{\pi'}_\rho \| \mu^{\pi}_\rho) = \mathbb{E}_{(s, a) \sim \mu^{\pi'}_\rho} \left[ \phi\left (\frac{\mu^{\pi}_\rho(s, a)}{\mu^{\pi'}_\rho(s, a)}\right) \right], \end{equation} where $\phi: [0, \infty) \rightarrow \mathbb{R}$ is a convex, lower-semicontinuous function and $\phi(1)= 0$. Well-known divergences can be obtained by appropriately choosing $\phi$. These include the KL divergence ($\phi(t) = t \log(t)$), the total variation distance ($\phi(t) = |t-1|$), the $\chi^2$-divergence ($\phi(t) = (t-1)^2$), etc. Working with the form of $\phi$-divergence given in Equation~\eqref{eq:phi_div} requires access to analytic expressions of both $\mu^{\pi}_\rho$ and $\mu^{\pi'}_\rho$ as well as the ability to sample from $\mu^{\pi'}_\rho$. We have none of these in our problem of interest.
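As a quick numerical sanity check on these conventions (with made-up discrete distributions standing in for the visitation distributions), the $\phi$-divergence form above recovers the familiar closed-form expressions for each listed choice of $\phi$:

```python
import numpy as np

def phi_divergence(p, q, phi):
    """D_phi(p || q) = E_{x ~ p}[ phi( q(x) / p(x) ) ], matching the text's
    convention: the expectation is under the first argument, and phi is
    applied to the ratio second/first. Discrete distributions only."""
    return float(np.sum(p * phi(q / p)))

p = np.array([0.6, 0.3, 0.1])   # illustrative distributions (assumptions)
q = np.array([0.4, 0.4, 0.2])

kl  = phi_divergence(p, q, lambda t: t * np.log(t))   # = sum q log(q/p)
tv  = phi_divergence(p, q, lambda t: np.abs(t - 1))   # = sum |q - p|  (L1 form)
chi = phi_divergence(p, q, lambda t: (t - 1) ** 2)    # = sum (q - p)^2 / p
```

In particular, with $\phi(t) = t\log t$ this convention yields the KL divergence of the second argument with respect to the first, which is worth keeping in mind when matching the formulas to standard KL notation.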
To bypass these difficulties, we turn to the alternative variational representation of $\phi$-divergences~\citep{nguyen2009surrogate, huang2017parametric} as \begin{align}\label{eq:variational_div} D_\phi(\mu^{\pi'}_\rho \| \mu^{\pi}_\rho) = & \sup_{f: \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}} \Big[ \mathbb{E}_{(s, a) \sim \mu^{\pi'}_\rho}[ f(s, a)]- \nonumber \\ & \quad \mathbb{E}_{(s, a) \sim \mu^{\pi}_\rho}[ \phi^\star \circ f(s, a)] \Big], \end{align} where $\phi^\star(t) = \sup_{u \in \mathbb{R}}\{t u - \phi(u)\}$ is the convex conjugate of $\phi$. The variational form in Equation~\eqref{eq:variational_div} still requires sampling from $\mu^{\pi'}_\rho$, which we cannot do. To address this issue, we use a clever change of variable trick introduced by~\citet{nachum2019dualdice}. Define $g:\mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ as the fixed point of the following Bellman equation, \begin{equation} \label{eq:change_var} g(s, a) = f(s, a) + \gamma \P^{\pi'}g(s, a), \end{equation} where $\P^{\pi'}$ is the transition operator induced by $\pi'$, defined as $\P^{\pi'}g(s, a) = \int \pi'(a' \mid s') \P(s' \mid s, a) g(s', a')$. $g$ may be interpreted as the action-value function of the policy $\pi'$ in a modified MDP which shares the same transition model $\P$ as the original MDP, but has $f$ as the reward function instead of $r$. Applying the change of variable~\eqref{eq:change_var} to \eqref{eq:variational_div} and after some algebraic manipulation as done in \citet{nachum2019dualdice}, we obtain \begin{align}\label{eq:dice} D_\phi(\mu^{\pi'}_\rho \| & \mu^{\pi}_\rho) = \sup_{g: \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}} \Big[ (1-\gamma) \mathbb{E}_{s \sim \rho, a \sim \pi'}[ g(s, a)]- \nonumber \\ & \mathbb{E}_{(s, a) \sim \mu^{\pi}_\rho}\left[ \phi^\star \left((g - \gamma \P^{\pi'}g)(s,a)\right)\right] \Big]. 
\end{align} Thanks to the change of variable, the first expectation over $\mu^{\pi'}_\rho$ in \eqref{eq:variational_div} is converted to an expectation over the initial distribution and the policy, i.e., $s \sim \rho(\cdot), a \sim \pi'(\cdot \mid s)$. Therefore, this new form of the $\phi$-divergence in \eqref{eq:dice} is completely off-policy and can be estimated using only samples from the policy $\pi$. \paragraph{Other possible divergence representations:} Using the variational representation of $\phi$-divergences was a key step in the derivation of Equation~\eqref{eq:dice}. But in fact any representation that admits a linear term with respect to $\mu^{\pi'}_\rho$ (i.e., $\mathbb{E}_{(s, a) \sim \mu^{\pi'}_\rho}[ f(s, a)]$) would work as well. For example, one can use the Donsker-Varadhan representation~\citep{donsker1983asymptotic} to alternatively express the KL divergence as: \begin{align}\label{eq:donkser} D_{\mathrm{KL}}(\mu^{\pi'}_\rho \| \mu^{\pi}_\rho) = &\sup_{f: \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}} \Big[ \mathbb{E}_{(s, a) \sim \mu^{\pi'}_\rho}[ f(s, a)]- \\ & \quad \log \left( \mathbb{E}_{(s, a) \sim \mu^{\pi}_\rho}[ \exp( f(s, a))] \right) \Big]. \nonumber \end{align} The \textit{log-expected-exp} in this equation makes the Donsker-Varadhan representation~\eqref{eq:donkser} more numerically stable than the variational one~\eqref{eq:dice} when working with the KL divergence. Because of its genericity for $\phi$-divergences, we base the remainder of our exposition on~\eqref{eq:dice}. But it is straightforward to adapt the approach and algorithm to use~\eqref{eq:donkser} for better numerical stability when working with the KL divergence specifically. Thus, in practice we will use the latter in our experiments with KL-based regularization, but not in the ones with $\chi^2$-based regularization. \section{A PRACTICAL ALGORITHM USING ADVERSARIAL DIVERGENCE} \begin{algorithm*}[h!]
\caption{\textsc{PPO-DICE}}\label{alg:main} \begin{algorithmic}[1] \State \textbf{Initialisation}: randomly initialize parameters $\theta_1$ (policy), $\psi_1$ (discriminator) and $\omega_1$ (value function). \For{i=1, \ldots} \State Generate a batch of $M$ rollouts $\{s^{(j)}_1, a^{(j)}_1, r^{(j)}_1, s^{(j)}_{2}, \ldots, s^{(j)}_{T}, a^{(j)}_{T}, r^{(j)}_T, s^{(j)}_{T+1}\}_{j=1}^M$ by executing policy $\pi_{\theta_i}$ in the environment for $T$ steps. \State Estimate the advantage function: $\hat{A}(s^{(j)}_t, a^{(j)}_t) = \sum_{l=0}^{T-t} (\gamma \lambda)^{l} \left(r^{(j)}_{t+l} + \gamma V_{\omega_i}(s^{(j)}_{t+l+1}) - V_{\omega_i}(s^{(j)}_{t+l})\right)$ \State Compute target value $y^{(j)}_t = r^{(j)}_t + \gamma r^{(j)}_{t+1} + \ldots + \gamma^{T+1-t} V_{\omega_i}(s_{T+1})$ \State $\omega = \omega_i; \theta = \theta_i; \psi = \psi_i$ \For{epoch n=1, \ldots N} \For{iteration k=1, \ldots K} \State {\bf \textcolor{gray!70!blue}{// Compute discriminator loss:}} \State $ \hat{L}_D(\psi, \theta) = \frac{1}{MT}\sum_{j=1}^M\sum_{t=1}^{T} \phi^\star\left(g_\psi(s^{(j)}_t, a^{(j)}_t) - \gamma g_\psi(s^{(j)}_{t+1}, a'^{(j)}_{t+1})\right)- (1 - \gamma) g_\psi(s^{(j)}_1, a'^{(j)}_t) $ where $a'^{(j)}_t \sim \pi_\theta(\cdot \mid s^{(j)}_1), a'^{(j)}_{t+1} \sim \pi_\theta(\cdot \mid s^{(j)}_{t+1})$.
\State {\bf \textcolor{gray!70!blue}{// Update discriminator parameters:}} (using learning rate $c_\psi \eta$) \State $ \psi \leftarrow \psi - c_\psi\eta \nabla_{\psi} \hat{L}_D(\psi, \theta);$ \EndFor \State {\bf \textcolor{gray!70!blue}{// Compute value loss:}} \State $ \hat{L}_V(\omega) = \frac{1}{MT}\sum_{j=1}^M\sum_{t=1}^{T}\left(V_\omega(s_t^{(j)}) - y^{(j)}_t\right)^2 $ \State {\bf \textcolor{gray!70!blue}{// Compute PPO clipped loss:}} \State $ \hat{L}^{\mathrm{clip}}(\theta) = \frac{1}{MT}\sum_{j=1}^M\sum_{t=1}^{T} \min\big\{ \hat{A}(s^{(j)}_t, a^{(j)}_t) \kappa_{\pi_\theta /\pi_{\theta_i}}(s^{(j)}_t, a^{(j)}_t), \hat{A}(s^{(j)}_t, a^{(j)}_t) \mathrm{clip}(\kappa_{\pi_\theta /\pi_{\theta_i}}(s^{(j)}_t, a^{(j)}_t), 1-\epsilon, 1+\epsilon)\big\} $ \State {\bf \textcolor{gray!70!blue}{// Update parameters:}} (using learning rate $\eta$) \State{$ \omega \leftarrow \omega - \eta \nabla_{\omega} \hat{L}_V(\omega); $} \State{$ \theta \leftarrow \theta + \eta \nabla_{\theta} (\hat{L}^{\mathrm{clip}}(\theta) + \lambda \cdot \hat{L}_D(\psi,\theta)) $ (if reparametrization trick applicable, else gradient step on Eq.~\ref{eq:empirical-score-function-objective})} \EndFor \State $\omega_{i+1} = \omega; \theta_{i+1} = \theta; \psi_{i+1} = \psi$ \EndFor \end{algorithmic} \end{algorithm*} We now turn these insights into a practical algorithm. The lower bounds in Lemma~\ref{lemma:lower_bound_perf} suggest using a regularized PPO objective\footnote{\label{footnote:clip} Both the regularized $L^\mathrm{clip}_{\pi_i}$ and $L_{\pi_i}$ are lower bounds on policy performance in Lemma \ref{lemma:lower_bound_perf}. We use $L^\mathrm{clip}_{\pi_i}$ rather than $L_{\pi_i}$ because we expect it to work better, as the clipping already provides some constraint on action probabilities.
Also, this allows a more direct empirical assessment of what the regularization brings compared to vanilla PPO.}: $L^{\mathrm{clip}}_{\pi}(\pi') - \lambda D_{\mathrm{TV}}(d^{\pi'}_\rho \| d^{\pi}_\rho)$, where $\lambda$ is a regularization coefficient. If in place of the total variation we use the off-policy formulation of the $\phi$-divergence $D_\phi(\mu^{\pi'}_\rho \| \mu^{\pi}_\rho)$ as detailed in Equation \eqref{eq:dice}, our main optimization objective can be expressed as the following min-max problem: \begin{align} \max_{\pi'} & \min_{g: \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}} L^{\mathrm{clip}}_{\pi_i}(\pi') - \lambda \Big( (1-\gamma) \mathbb{E}_{s \sim \rho, a \sim \pi'}[ g(s, a)]- \nonumber \\ & \mathbb{E}_{(s, a) \sim \mu^{\pi_i}_\rho}\left[ \phi^\star \left((g - \gamma \P^{\pi'}g)(s,a)\right)\right] \Big). \label{eq:main_obj} \end{align} When the inner minimization over $g$ is fully optimized, it is straightforward to show -- using the score function estimator -- that the gradient of this objective with respect to $\pi'$ is (the proof is provided in the appendix): \begin{align} \label{eq:score function estimator} & \nabla_{\pi'} L^{\mathrm{clip}}_{\pi_i}({\pi'}) - \lambda \Big( (1-\gamma) \mathbb{E}_{\substack {s \sim \rho \\ a \sim \pi'}}[ g(s, a) \nabla_{\pi'} \log\pi'(a \mid s)] \nonumber \\ & + \gamma \mathbb{E}_{(s, a) \sim \mu^{\pi_i}_\rho}\big[ \frac{\partial \phi^{\star}}{\partial t} \left((g - \gamma \P^{\pi'}g)(s,a)\right) \\ & \mathbb{E}_{s' \sim \P(\cdot \mid s, a), a' \sim \pi'(\cdot \mid s')} \left[ g(s', a') \nabla_{\pi'} \log\pi'(a' \mid s')\right]\big] \Big). \nonumber \end{align} Furthermore, we can use the reparametrization trick if the policy $\pi'$ is parametrized by a Gaussian, which is usually the case in continuous control tasks.
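The difference between the two gradient estimators just mentioned can be illustrated on a toy 1-D Gaussian "policy" $a \sim \mathcal{N}(\theta, 1)$; the function $g$ below is an arbitrary smooth stand-in for the discriminator term, not anything from the paper. Both estimators target the same gradient of $\mathbb{E}_{a \sim \mathcal{N}(\theta, 1)}[g(a)]$:

```python
import numpy as np

rng = np.random.default_rng(0)
g = lambda a: np.sin(a)      # arbitrary smooth test function (assumption)
theta, n = 0.3, 400000

# Reparametrization trick: write a = theta + eps with eps ~ N(0, 1), so
# d/dtheta E[g(a)] = E[g'(theta + eps)]  (here g' = cos).
eps = rng.standard_normal(n)
grad_reparam = np.mean(np.cos(theta + eps))

# Score function estimator:
# d/dtheta E[g(a)] = E[g(a) * d/dtheta log N(a; theta, 1)] = E[g(a) * (a - theta)].
a = theta + rng.standard_normal(n)
grad_score = np.mean(g(a) * (a - theta))

# for this g the gradient has the closed form cos(theta) * exp(-1/2)
exact = np.cos(theta) * np.exp(-0.5)
```

Both estimators are unbiased, but the reparametrized one typically has noticeably lower variance, which is why it is preferred whenever the action distribution allows it.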
We call the resulting new algorithm PPO-DICE (detailed in Algorithm~\ref{alg:main}), as it uses the clipped loss of PPO and leverages the DIstribution Correction Estimation idea from \citet{nachum2019dualdice}. In the min-max objective~\eqref{eq:main_obj}, $g$ plays the role of a discriminator, as in Generative Adversarial Networks (GAN)~\citep{goodfellow2014generative}. The policy $\pi'$ plays the role of a generator, and it should balance increasing the likelihood of actions with large advantage against inducing a state-action distribution that is close to the one of $\pi_i$. As shown in Algorithm~\ref{alg:main}, both policy and discriminator are parametrized by neural networks $\pi_\theta$ and $g_\psi$, respectively. We estimate the objective~\eqref{eq:main_obj} with samples from $\pi_i = \pi_{\theta_i}$ as follows. At a given iteration $i$, we generate a batch of $M$ rollouts $\{s^{(j)}_1, a^{(j)}_1, r^{(j)}_1, s^{(j)}_{2}, \ldots, s^{(j)}_{T}, a^{(j)}_{T}, r^{(j)}_T, s^{(j)}_{T+1}\}_{j=1}^M$ by executing the policy $\pi_i$ in the environment for $T$ steps. Similarly to the PPO procedure, we learn a value function $V_\omega$ by updating its parameters $\omega$ with gradient descent steps, optimizing the following squared error loss: \begin{equation} \hat{L}_V(\omega) = \frac{1}{MT}\sum_{j=1}^M\sum_{t=1}^{T}\left(V_\omega(s_t^{(j)}) - y^{(j)}_t\right)^2, \end{equation} where $y^{(j)}_t = r^{(j)}_t + \gamma r^{(j)}_{t+1} + \ldots + \gamma^{T+1-t}V_\omega(s_{T+1})$. Then, to estimate the advantage, we use the truncated generalized advantage estimate \begin{equation} \hat{A}(s^{(j)}_t, a^{(j)}_t) = \sum_{l=0}^{T-t} (\gamma \lambda)^{l} \left(r^{(j)}_{t+l} + \gamma V_\omega(s^{(j)}_{t+l+1}) - V_\omega(s^{(j)}_{t+l})\right).
\end{equation} This advantage estimate is used to compute an estimate of $L^{\mathrm{clip}}_{\pi_i}$ given by: \begin{align} & \hat{L}^{\mathrm{clip}}(\theta) = \\ &\quad \frac{1}{MT}\sum_{j=1}^M\sum_{t=1}^{T} \min\Big\{ \hat{A}(s^{(j)}_t, a^{(j)}_t) \kappa_{\pi_\theta /\pi_{i}}(s^{(j)}_t, a^{(j)}_t), \nonumber \\ & \quad \hat{A}(s^{(j)}_t, a^{(j)}_t) \cdot \mathrm{clip}(\kappa_{\pi_\theta /\pi_{i}}(s^{(j)}_t, a^{(j)}_t), 1-\epsilon, 1+\epsilon)\Big\} \nonumber \end{align} The parameters $\psi$ of the discriminator are learned by gradient descent on the following empirical version of the regularization term in the min-max objective~\eqref{eq:main_obj}: \begin{align} \label{eq:empirical_reg} \hat{L}_D(\psi, \theta) & = \frac{-1}{MT}\sum_{j=1}^M\sum_{t=1}^{T} (1 - \gamma) g_\psi(s^{(j)}_1, a'^{(j)}_t) \\ & \quad - \phi^\star\left(g_\psi(s^{(j)}_t, a^{(j)}_t) - \gamma g_\psi(s^{(j)}_{t+1}, a'^{(j)}_{t+1})\right), \nonumber \end{align} where $a'^{(j)}_t \sim \pi_\theta(\cdot \mid s^{(j)}_1)$ and $ a'^{(j)}_{t+1} \sim \pi_\theta(\cdot \mid s^{(j)}_{t+1})$. If the reparametrization trick is applicable (which is almost always the case for continuous control tasks), the parameters $\theta$ of the policy are updated via gradient ascent on the objective $\hat{L}^{\mathrm{clip}}(\theta) + \lambda \hat{L}_D(\psi, \theta)$, as we can backpropagate the gradient through the action sampling while computing $\hat{L}_D(\psi, \theta)$ in Equation~\eqref{eq:empirical_reg}.
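A minimal numpy sketch of this empirical discriminator loss for the KL case, where $\phi^\star(t) = \exp(t-1)$, assuming the network outputs $g_\psi$ have already been evaluated on the relevant state-action pairs (the array arguments are placeholders for those evaluations):

```python
import numpy as np

gamma = 0.99

def phi_star(t):
    """Convex conjugate of phi(t) = t log t, i.e. the KL instantiation."""
    return np.exp(t - 1.0)

def discriminator_loss(g_sa, g_next, g_init):
    """Empirical L_D of the regularizer:
       g_sa   = g(s_t, a_t) on pairs sampled from pi_i,
       g_next = g(s_{t+1}, a'_{t+1}) with a' resampled from the candidate policy,
       g_init = g(s_1, a') with a' ~ pi'(. | s_1).
    The discriminator descends this loss; the policy ascends it (scaled by lambda)."""
    bellman_residual = g_sa - gamma * g_next
    return np.mean(phi_star(bellman_residual)) - (1 - gamma) * np.mean(g_init)
```

As a sanity check, a discriminator that outputs $g \equiv 0$ everywhere gives a loss of $\phi^\star(0) = e^{-1}$, independent of the batch, since the initial-state term vanishes.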
Otherwise, the parameters $\theta$ are updated via gradient ascent on the following objective: \begin{align} & \quad \hat{L}^{\mathrm{clip}}(\theta) - \nonumber \\ & \quad \frac{\lambda}{MT}\sum_{j=1}^M\sum_{t=1}^{T} (1 - \gamma) g_\psi(s^{(j)}_1, a'^{(j)}_t) \log \pi_\theta(a'^{(j)}_t \mid s^{(j)}_1) \nonumber \\ & \quad + \gamma \frac{\partial \phi^{\star}}{\partial t}\left(g_\psi(s^{(j)}_t, a^{(j)}_t) - \gamma g_\psi(s^{(j)}_{t+1}, a'^{(j)}_{t+1})\right) \nonumber \\ & \quad \cdot g_\psi(s^{(j)}_{t+1}, a'^{(j)}_{t+1}) \log \pi_\theta(a'^{(j)}_{t+1} \mid s^{(j)}_{t+1}) \label{eq:empirical-score-function-objective} \end{align} Note that the gradient of this equation with respect to $\theta$ corresponds to an empirical estimate of the score function estimator we provided in Equation~\ref{eq:score function estimator}. We train the value function, policy, and discriminator for $N$ epochs using $M$ rollouts of the policy $\pi_i$. We can either alternate between updating the policy and the discriminator, or update $g_\psi$ for a few steps $K$ before updating the policy. We found that the latter worked better in practice, likely due to the fact that the target distribution $\mu^{\pi_i}_\rho$ changes with every iteration $i$. We also found that setting the learning rate of the discriminator to a multiple $c_\psi$ of the learning rate $\eta$ used for the policy and value function improved performance. \paragraph{Choice of divergence:} The algorithmic approach we just described is valid with any choice of $\phi$-divergence for measuring the discrepancy between state-visitation distributions. It remains to choose an appropriate one. While Lemma~\ref{lemma:lower_bound_perf} advocates the use of the total variation distance ($\phi(t) = |t-1|$), it is notoriously hard to train high-dimensional distributions using this divergence (see~\citet{kodali2017convergence} for example).
Moreover, the convex conjugate of $\phi(t) = |t-1|$ is $\phi^\star(t) = t$ if $ |t| \leq \frac{1}{2}$ and $\phi^\star(t) = \infty$ otherwise. This would imply the need to introduce an extra constraint $\|g - \P^\pi g\|_\infty \leq \frac{1}{2}$ in the formulation~\eqref{eq:dice}, which may be hard to optimize. Therefore, we will instead use the KL divergence ($\phi(t) = t \log(t)$, $\phi^\star (t) = \exp(t-1)$). This is still a well-justified choice as we know that $D_{\mathrm{TV}}(\mu^{\pi'}_\rho \| \mu^\pi_\rho) \leq \sqrt{\frac{1}{2} D_{\mathrm{KL}}(\mu^{\pi'}_\rho \| \mu^\pi_\rho)}$ thanks to Pinsker's inequality. We will also try the $\chi^2$-divergence ($\phi(t)=(t-1)^2$), which yields a squared regularization term. \section{RELATED WORK} Constraining policy updates, in order to minimize the information loss due to policy improvement, has been an active area of investigation. \citet{kakade2002approximately} originally introduce CPI by maximizing a lower bound on the policy improvement and relaxing the greedification step through a mixture of successive policies. \citet{pirotta2013safe} build on~\citet{kakade2002approximately}, refining the lower bounds and introducing a new mixture scheme. Moreover, CPI inspired some popular Deep RL algorithms such as TRPO~\citep{schulman2015trust}, PPO~\citep{schulman2017proximal}, Deep CPI~\citep{vieillard2019deep} and MPO~\citep{AbdolmalekiSTMH18}. The latter uses similar updates to TRPO/PPO in the parametric version of its E-step, so our method could be incorporated into it. Our work is related to the regularized MDP literature~\citep{neu2017unified,geist2019theory}. Shannon entropy regularization is used in value iteration schemes~\citep{haarnoja2017reinforcement, dai2018sbeed} and in policy iteration schemes~\citep{haarnoja2018soft}. Note that all the mentioned works employ regularization on the action probabilities.
Recently,~\citet{wang2019divergence} introduce divergence-augmented policy optimization, where they penalize the policy update by a Bregman divergence on the state visitation distributions, motivated by the mirror descent method. While their framework seems general, it doesn't include the divergences we employ in our algorithm. In fact, their method enables the use of the \emph{conditional} KL divergence between state-action visitation distributions, defined by $\int \mu_\rho^\pi(s, a) \log \frac{\pi(a \mid s)}{\pi'(a \mid s)}\,ds~da$, and not the KL divergence $\int \mu_\rho^\pi(s, a) \log \frac{\mu_\rho^\pi(s, a)}{\mu_\rho^{\pi'}(s, a)}\,ds~da$. Note that the ratio of action probabilities inside the $\log$ in the conditional KL divergence allows them to use the policy gradient theorem, a key ingredient in their framework, which cannot be done for the KL divergence. Our work builds on recent off-policy approaches: DualDICE~\citep{nachum2019dualdice} for policy evaluation and ValueDICE~\citep{kostrikov2019imitation} for imitation learning. Both use the off-policy formulation of the KL divergence. The former uses the formulation to estimate the ratio of the state visitation distributions under the target and behavior policies, whereas the latter learns a policy by minimizing the divergence. The closest related work is the recently proposed AlgaeDICE~\citep{nachum2019algaedice} for off-policy policy optimization. They use the divergence between the state-action visitation distribution induced by $\pi$ and a behavior distribution, motivated by similar techniques in~\citet{nachum2019dualdice}. However, they incorporate the regularization into the dual form of the policy performance $J(\pi) = \mathbb{E}_{(s, a) \sim \mu^\pi_\rho}[r(s, a)]$, whereas we consider a surrogate objective (a lower bound on the policy performance). Moreover, our method is online off-policy in that we collect data with each policy found in the optimization procedure, but also use previous data to improve stability.
Their algorithm, in contrast, is designed to learn a policy from a fixed dataset collected by behaviour policies. Further comparison with AlgaeDICE is provided in the appendix. \section{EXPERIMENTS AND RESULTS} We use the PPO implementation by \cite{pytorchrl} as a baseline and modify it to implement our proposed PPO-DICE algorithm. We run experiments on a randomly selected subset of environments in the Atari suite \citep{ale} for high-dimensional observations and discrete action spaces, as well as on the OpenAI Gym \citep{openaigym} MuJoCo environments, which have continuous state-action spaces. All shared hyperparameters are set at the same values for both methods, and we use the hyperparameter values recommended by \cite{pytorchrl} for each set of environments, Atari and MuJoCo~\footnote{Code: https://github.com/facebookresearch/ppo-dice}. \subsection{IMPORTANT ASPECTS OF PPO-DICE} \subsubsection{Choice of Divergence} \begin{figure}[t] \centering \includegraphics[width=0.2\textwidth]{figures/Hopper-v2_ChiS.pdf} \hspace{10pt} \includegraphics[width=0.2\textwidth]{figures/KangarooNoFrameskip-v4_ChiS.pdf} \caption{Comparison of $\chi^2$ and KL divergences for PPO-DICE for two randomly selected environments in OpenAI Gym MuJoCo and Atari, respectively. We see that KL performs better than $\chi^2$ in both settings. Performance plotted across 10 seeds with 1 standard error shaded.} \label{fig:hopper_chis} \end{figure} We conducted an initial set of experiments to compare two different choices of divergences, KL and $\chi^2$, for the regularization term of PPO-DICE. \cref{fig:hopper_chis} shows training curves for one continuous-action and one discrete-action environment. There, as in the other environments in which we ran this comparison, KL consistently performed better than $\chi^2$. We thus opted to use the KL divergence in all subsequent experiments.
\subsubsection{Effect of Varying $\lambda$} \label{sec:lambda} \begin{figure}[h] \centering \includegraphics[width=0.27\textwidth]{figures/Hopper-v2_ablate_lambda.pdf} \caption{Varying $\lambda$ in \texttt{Hopper-v2}, 10 seeds, 1 standard error shaded. PPO-DICE is somewhat sensitive to the value of $\lambda$, but the theoretically-motivated adaptive version works well.} \label{fig:hopper_lambda} \end{figure} Next, we evaluate the sensitivity of our method to the parameter $\lambda$ that controls the strength of the regularization. We examine in \cref{fig:hopper_lambda} the performance of PPO-DICE when varying $\lambda$. For \texttt{Hopper-v2}, there is a fairly narrow band of values that performs well, between $0.01$ and $1$. Theory indicates that the proper value for $\lambda$ is the maximum of the absolute value of the advantages (see Lemma \ref{lemma:lower_bound_perf}). This prompted us to implement an adaptive approach, where we compute the 90th percentile of advantages within the batch (for stability), which we found performed well across environments. To avoid introducing an additional hyperparameter by tuning $\lambda$, we use this adaptive method in all subsequent experiments. \begin{figure}[h!] \centering \includegraphics[width=0.27\textwidth]{figures/Hopper-v2_noclip.pdf} \caption{Comparison of PPO-DICE with the clipped loss $L^\text{clip}$ and with the unclipped loss $L$. We see that clipping the action loss is crucial for good performance.} \label{fig:hopper_noclip} \end{figure} \begin{table*}[h!]
\vspace{5px} \centering \vspace{-10pt} \begin{tabular}{l|ll} \toprule Game & PPO & PPO-DICE \\ \midrule AirRaid & $ 4305.0 \pm 638.15 $ & $ \textcolor{blue}{\mathbf{5217.5 \pm 769.19 }}$ \\ Asterix & $ 4300.0 \pm 169.31 $ & $ \textcolor{blue}{\mathbf{6200.0 \pm 754.10 }}$ \\ Asteroids & $ 1511.0 \pm 125.03 $ & $\textcolor{blue}{\mathbf{ 1653.0 \pm 112.20 }}$ \\ Atlantis & $ 2120400.0 \pm 471609.93 $ & $\textcolor{blue}{\mathbf{ 3447433.33 \pm 100105.82}} $ \\ BankHeist & $ 1247.0 \pm 21.36 $ & $ \textcolor{blue}{\mathbf{1273.33 \pm 7.89 }}$ \\ BattleZone & $\textcolor{blue}{\mathbf{ 29000.0 \pm 2620.43}} $ & $ 19000.0 \pm 2463.06 $ \\ Carnival & $ 3243.33 \pm 369.51 $ & $ 3080.0 \pm 189.81 $ \\ ChopperCommand & $ 566.67 \pm 14.91 $ & $ \textcolor{blue}{\mathbf{900.0 \pm 77.46}} $ \\ DoubleDunk & $ -6.0 \pm 1.62 $ & $ \textcolor{blue}{\mathbf{-4.0 \pm 1.26}} $ \\ Enduro & $ 1129.9 \pm 73.18 $ & $ \textcolor{blue}{\mathbf{1308.33 \pm 120.09}} $ \\ Freeway & $ 32.33 \pm 0.15 $ & $ 32.0 \pm 0.00 $ \\ Frostbite & $ \textcolor{blue}{\mathbf{639.0 \pm 334.28}} $ & $ 296.67 \pm 5.96 $ \\ Gopher & $ 1388.0 \pm 387.65 $ & $ 1414.0 \pm 417.84 $ \\ Kangaroo & $ 4060.0 \pm 539.30 $ & $\textcolor{blue}{\mathbf{ 6650.0 \pm 1558.16 }}$ \\ Phoenix & $ \textcolor{blue}{\mathbf{12614.0 \pm 621.71}} $ & $ 11676.67 \pm 588.24 $ \\ Robotank & $ 7.8 \pm 1.33 $ & $\textcolor{blue}{\mathbf{ 12.1 \pm 2.91}} $ \\ Seaquest & $ 1198.0 \pm 128.82 $ & $ 1300.0 \pm 123.97 $ \\ TimePilot & $ 5070.0 \pm 580.53 $ & $ \textcolor{blue}{\mathbf{7000.0 \pm 562.32}} $ \\ Zaxxon & $ \textcolor{blue}{\mathbf{7110.0 \pm 841.60 }}$ & $ 6130.0 \pm 1112.48 $ \\ \bottomrule \end{tabular} \caption{Mean final reward and 1 standard error intervals across 10 seeds for Atari games evaluated at 10M steps.} \label{tab:atari} \end{table*} \begin{figure*}[h!] 
\centering \includegraphics[width=0.195\textwidth]{figures/HalfCheetah-v2.pdf} \includegraphics[width=0.195\textwidth]{figures/Hopper-v2.pdf} \includegraphics[width=0.195\textwidth]{figures/Humanoid-v2.pdf} \includegraphics[width=0.195\textwidth]{figures/HumanoidStandup-v2.pdf} \includegraphics[width=0.195\textwidth]{figures/InvertedDoublePendulum-v2.pdf} \caption{Results from the OpenAI Gym MuJoCo suite in more complex domains, with 10 seeds and 1 standard error shaded. Results on the full suite of environments can be found in \cref{app:mujoco_results}. \label{fig:mujoco_results}} \end{figure*} \subsubsection{Importance of Clipping the Action Loss} We mentioned earlier (see \cref{footnote:clip}) two possible forms of our regularized objective: one with the clipped action loss, $L^{\text{clip}}$, and one without clipping, $L$. Clipping the action loss was an extra regularizing measure proposed in PPO~\citep{schulman2017proximal}. We hypothesized that it would also be important for our algorithm, by providing additional constraints on the policy update to stay within the trust region. \cref{fig:hopper_noclip} confirms this empirically: it shows the effect on our method of clipping the action loss versus keeping it unclipped. Initially, the unclipped variant learns faster without the additional regularization, but it soon crashes, showing the need for clipping to reduce variance in the policy update. \subsection{RESULTS ON ATARI} Given the above observations, we settled on using a KL-regularized $L^\mathrm{clip}$, with the adaptive method for $\lambda$ explained in Section~\ref{sec:lambda}. We run PPO-DICE on randomly selected environments from Atari. We tuned two additional hyperparameters: the learning rate for the discriminator and the number of discriminator optimization steps per policy optimization step. We found that $K=5$ discriminator optimization steps per policy optimization step performed well.
Fewer steps showed worse performance because the discriminator was not updating quickly enough, while more optimization steps introduced instability from the discriminator overfitting to the current batch. We also found that increasing the discriminator learning rate to $c_\psi=10\times$ the policy learning rate helped in most environments. We used the same hyperparameters across all environments. Results are shown in \cref{tab:atari}. We see that PPO-DICE significantly outperforms PPO on a majority of Atari environments. See \cref{app:atari_results} for training curves and hyperparameters. \subsection{RESULTS ON OpenAI Gym MuJoCo} For the OpenAI Gym MuJoCo suite, we also used $K=5$ discriminator optimization steps per policy optimization step and a $c_\psi=10\times$ learning rate for the discriminator in all environments. We selected 5 of the more difficult environments to showcase in the main paper (\cref{fig:mujoco_results}); additional results on the full suite, along with all hyperparameters used, can be found in \cref{app:mujoco_results}. We again see improved performance with PPO-DICE compared to PPO and TRPO in the majority of environments. \section{CONCLUSION} In this work, we have argued that using action probabilities to constrain the policy update is a suboptimal approximation to controlling the state visitation distribution shift. We then demonstrated that, using the recently proposed DIstribution Correction Estimation idea~\citep{nachum2019dualdice}, we can directly compute the divergence between the state-action visitation distributions of successive policies and use it to regularize the policy optimization objective instead. Through carefully designed experiments, we have shown that our method beats PPO in most environments of the Atari~\citep{ale} and OpenAI Gym MuJoCo~\citep{openaigym} benchmarks.
\section{Acknowledgements} We would like to thank Ofir Nachum and Ilya Kostrikov for their helpful feedback and advice during discussions at the early stage of the project. \bibliographystyle{apalike} \section{INTRODUCTION} In Reinforcement Learning (RL), an agent interacts with an unknown environment and seeks to learn a policy, mapping states to distributions over actions, that maximizes a long-term numerical reward. Combined with deep neural networks as function approximators, policy gradient methods have enjoyed many empirical successes on RL problems such as video games~\citep{mnih2016asynchronous} and robotics~\citep{levine2016end}. Their recent success can be attributed to their ability to scale gracefully to high-dimensional state-action spaces and complex dynamics. The main idea behind policy gradient methods is to parametrize the policy and perform stochastic gradient ascent on the discounted cumulative reward directly~\citep{sutton2000policy}. To estimate the gradient, we sample trajectories from the distribution induced by the policy. Due to the stochasticity of both the policy and the environment, the variance of the gradient estimate can be very large and lead to significant policy degradation. Instead of directly optimizing the cumulative reward, which can be challenging due to this large variance, some approaches~\citep{kakade2002approximately, azar2012dynamic, pirotta2013safe, schulman2015trust} propose to optimize a surrogate objective that can provide local improvements to the current policy at each iteration. The idea is that the advantage function of a policy $\pi$ can produce a good estimate of the performance of another policy $\pi'$ when the two policies give rise to similar state visitation distributions. Therefore, these approaches explicitly control the state visitation distribution shift between successive policies. However, controlling the state visitation distribution shift requires measuring it, which is non-trivial.
Direct methods are prohibitively expensive. Therefore, in order to make the optimization tractable, the aforementioned methods rely on constraining action probabilities by mixing policies~\citep{kakade2002approximately, pirotta2013safe}, introducing trust regions~\citep{schulman2015trust, achiam2017constrained} or clipping the surrogate objective~\citep{schulman2017proximal, wang2019truly}. Our key motivation in this work is that constraining the probabilities of the immediate future actions might not be enough to ensure that the surrogate objective is still a valid estimate of the performance of the next policy, and consequently might lead to instability and premature convergence. Instead, we argue that we should reason about the long-term effect of the policies on the distribution of the future states. In particular, we directly consider the divergence between the state-action visitation distributions induced by successive policies and use it as a regularization term added to the surrogate objective. This regularization term is itself optimized in an adversarial and off-policy manner by leveraging recent advances in off-policy policy evaluation~\citep{nachum2019dualdice} and off-policy imitation learning~\citep{kostrikov2019imitation}. We incorporate these ideas into the PPO algorithm in order to ensure safer policy learning and better reuse of off-policy data. We call our proposed method PPO-DICE. The present paper is organized as follows: after reviewing conservative approaches for policy learning, we provide theoretical insights motivating our method. We then explain how an off-policy adversarial formulation can be derived to optimize the regularization term, and present the algorithmic details of our proposed method. Finally, we show empirical evidence of the benefits of PPO-DICE, as well as ablation studies.
\section{PRELIMINARIES} \subsection{MARKOV DECISION PROCESSES AND VISITATION DISTRIBUTIONS} In reinforcement learning, an agent interacts with its environment, which we model as a discounted Markov Decision Process (MDP) $(\mathcal{S}, \mathcal{A}, \gamma, \P, r, \rho)$ with state space $\mathcal{S}$, action space $\mathcal{A}$, discount factor $\gamma \in [0, 1)$, transition model $\P$ where $\P(s' \mid s, a)$ is the probability of transitioning into state $s'$ upon taking action $a$ in state $s$, reward function $r : (\mathcal{S} \times \mathcal{A}) \rightarrow \mathbb{R}$ and initial distribution $\rho$ over $\mathcal{S}$. We denote by $\pi(a \mid s)$ the probability of choosing action $a$ in state $s$ under the policy $\pi$. The value function for policy $\pi$, denoted $V^{\pi}:\mathcal{S} \rightarrow \mathbb{R}$, represents the expected sum of discounted rewards along the trajectories induced by the policy in the MDP starting at state $s$: $V^{\pi}(s) \triangleq \mathbb{E} \left [\sum_{t=0}^{\infty} \gamma^t r_t \mid s_0 = s, \pi \right]$. Similarly, the action-value ($Q$-value) function $Q^{\pi}:\mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ and the \textit{advantage} function $A^\pi: \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ are defined as: $Q^{\pi}(s, a) \triangleq \mathbb{E} \left [\sum_{t=0}^{\infty} \gamma^t r_t \mid (s_0, a_0)=(s, a), \pi \right]$ and $A^\pi(s, a) \triangleq Q^\pi(s, a) - V^\pi(s)$. The goal of the agent is to find a policy $\pi$ that maximizes the expected value under the initial state distribution $\rho$: \begin{equation*} \max_{\pi} J(\pi) \triangleq (1-\gamma) \mathbb{E}_{s \sim \rho} [V^\pi(s)].
\end{equation*} We define the discounted state visitation distribution $d^{\pi}_\rho$ induced by a policy $\pi$: \begin{equation*} d^\pi_\rho(s) \triangleq (1 - \gamma) \sum_{t=0}^\infty \gamma^t \mathrm{Pr}^\pi(s_t =s \mid s_0 \sim \rho), \end{equation*} where $\mathrm{Pr}^\pi(s_t =s \mid s_0 \sim \rho)$ is the probability that $s_t = s$ after we execute $\pi$ for $t$ steps, starting from an initial state $s_0$ distributed according to $\rho$. Similarly, we define the discounted state-action visitation distribution $\mu^\pi_\rho(s, a)$ of policy $\pi$: \begin{equation*} \mu_\rho^\pi(s, a) \triangleq (1 - \gamma) \sum_{t=0}^\infty \gamma^t \mathrm{Pr}^\pi(s_t =s, a_t = a \mid s_0 \sim \rho). \end{equation*} It is known \citep{puterman1990markov} that $\mu_\rho^\pi(s, a) = d_\rho^\pi(s) \cdot \pi(a \mid s)$ and that $\mu^\pi_\rho$ is characterized, for all $(s', a') \in \mathcal{S} \times \mathcal{A}$, by \footnote{By abuse of notation, we confound probability distributions with their Radon–Nikodym derivative with respect to the Lebesgue measure (for continuous spaces) or counting measure (for discrete spaces).} \begin{align} \label{eq:visitaion_eq} \mu^\pi_\rho(s', a') & = (1 -\gamma) \rho(s') \pi(a' \mid s') \\ & \quad + \gamma \int \pi(a' \mid s') \P(s' \mid s, a) \mu^\pi_\rho(s, a)\, ds~da. \nonumber \end{align} \subsection{CONSERVATIVE UPDATE APPROACHES} Most policy training approaches in RL can be understood as updating a current policy $\pi$ to a new improved policy $\pi'$ based on the advantage function $A^\pi$ or an estimate $\hat{A}$ of it. We review here some popular approaches that implement conservative updates in order to stabilize policy training. First, let us state a key lemma from the seminal work of \citet{kakade2002approximately} that relates the performance difference between two policies to the advantage function.
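Before moving on, the visitation-distribution definitions above can be checked exactly in a small tabular MDP, where the fixed point~\eqref{eq:visitaion_eq} becomes a finite linear system. The following numpy sketch (the MDP, sizes, and seed are our own illustration, not from the paper) solves that system and cross-checks it against the truncated series definition:

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 5, 3, 0.9

# Random tabular MDP: transitions P[s, a, s'], policy pi[s, a], initial dist rho[s].
P = rng.random((S, A, S)); P /= P.sum(-1, keepdims=True)
pi = rng.random((S, A)); pi /= pi.sum(-1, keepdims=True)
rho = rng.random(S); rho /= rho.sum()

# The state-action visitation mu solves the linear fixed point
#   mu(s',a') = (1-gamma) rho(s') pi(a'|s') + gamma pi(a'|s') sum_{s,a} P(s'|s,a) mu(s,a).
# Flatten (s,a) into one index and solve (I - gamma * T) mu = (1-gamma) * b.
T = np.einsum('sap,pb->sapb', P, pi).reshape(S * A, S * A).T   # maps mu(s,a) -> next-step p(s',a')
b = (rho[:, None] * pi).reshape(S * A)                         # b(s,a) = rho(s) pi(a|s)
mu = np.linalg.solve(np.eye(S * A) - gamma * T, (1 - gamma) * b)

assert np.isclose(mu.sum(), 1.0)        # mu is a probability distribution
# Cross-check against the truncated series (1-gamma) * sum_t gamma^t Pr(s_t, a_t).
p, series = b.copy(), np.zeros(S * A)
for t in range(500):
    series += (1 - gamma) * gamma ** t * p
    p = T @ p
assert np.allclose(mu, series, atol=1e-8)
```

Both the normalization and the agreement with the series definition hold to numerical precision.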
\begin{lemma}[The performance difference lemma \citep{kakade2002approximately}] \label{lemma:performance diff} For all policies $\pi$ and $\pi'$, \begin{equation}\label{eq:performance diff} J(\pi') = J(\pi) + \mathbb{E}_{s \sim d^{\pi'}_\rho} \mathbb{E}_{a \sim \pi'(. \mid s)} \left[ A^{\pi}(s, a) \right]. \end{equation} \end{lemma} This lemma implies that maximizing the right-hand side of Equation~\eqref{eq:performance diff} will yield a new policy $\pi'$ with guaranteed performance improvement over a given policy $\pi$. Unfortunately, a naive direct application of this procedure would be prohibitively expensive since it requires estimating $d^{\pi'}_\rho$ for all candidates $\pi'$. To address this issue, Conservative Policy Iteration (CPI) \citep{kakade2002approximately} optimizes, at each iteration $i$, a surrogate objective defined based on the current policy $\pi_i$, \begin{equation} L_{\pi_i}(\pi') = J(\pi_i) + \mathbb{E}_{\textcolor{red}{s \sim d^{\pi_i}_\rho}} \mathbb{E}_{a \sim \pi'(. \mid s)} \left[ A^{\pi_i}(s, a) \right], \end{equation} obtained by ignoring changes in the state visitation distribution due to changes in the policy. Then, CPI returns the stochastic mixture $\pi_{i+1} = \alpha_i \pi_i^{+} + (1-\alpha_i) \pi_i$ where $\pi_i^{+} = \arg\max_{\pi'} L_{\pi_i}(\pi')$ is the greedy policy and $\alpha_i \in [0,1]$ is tuned to guarantee a monotonically improving sequence of policies. Inspired by CPI, the Trust Region Policy Optimization algorithm (TRPO)~\citep{schulman2015trust} extends the policy improvement step to any general stochastic policy rather than just mixture policies.
TRPO maximizes the same surrogate objective as CPI, subject to a Kullback-Leibler (KL) divergence constraint that ensures the next policy $\pi_{i+1}$ stays within a $\delta$-neighborhood of the current policy $\pi_i$: \begin{align}\label{eq:trpo_update} \pi_{i+1} & = \arg\max_{\pi'} L_{\pi_i}(\pi') \\ \text{s.t} & \quad \mathbb{E}_{s \sim d^{\pi_i}_\rho}\left[ D_{\mathrm{KL}}(\pi'( \cdot \mid s) \| \pi_i( \cdot \mid s))\right] \leq \delta, \nonumber \end{align} where $D_{\mathrm{KL}}$ is the Kullback–Leibler divergence. In practice, TRPO considers a differentiable parameterized policy $\{ \pi_\theta, \theta \in \Theta\}$ and solves the constrained problem~\eqref{eq:trpo_update} in the parameter space $\Theta$. In particular, the step direction is estimated with conjugate gradients, which requires the computation of multiple Hessian-vector products. Therefore, this step can be computationally heavy. To address this computational bottleneck, Proximal Policy Optimization (PPO) \citep{schulman2017proximal} proposes replacing the KL-divergence-constrained objective~\eqref{eq:trpo_update} of TRPO by clipping the objective function directly: \begin{align}\label{eq:ppo_update} L^{\mathrm{clip}}_{\pi_i}& (\pi') = \mathbb{E}_{ (s, a) \sim \mu^{\pi_i}_\rho}\Big[ \min\big\{ A^{\pi_i}(s, a) \cdot \kappa_{\pi'/\pi_i}(s, a), \nonumber \\ & A^{\pi_i}(s, a) \cdot \mathrm{clip}(\kappa_{\pi'/\pi_i}(s, a), 1-\epsilon, 1+\epsilon)\big\}\Big], \end{align} where $\epsilon >0$ and $\kappa_{\pi'/\pi_i}(s, a) = \frac{\pi'(a \mid s)}{\pi_i(a \mid s)}$ is the importance sampling ratio. \section{THEORETICAL INSIGHTS} In this section, we present the theoretical motivation of our proposed method. At a high level, the CPI, TRPO, and PPO algorithms follow similar policy update schemes. They optimize some surrogate performance objective ($L_{\pi_i}(\pi')$ for CPI and TRPO, and $L^{\mathrm{clip}}_{\pi_i}(\pi')$ for PPO) while ensuring that the new policy $\pi_{i+1}$ stays in the vicinity of the current policy $\pi_i$.
The vicinity requirement is implemented in different ways: \begin{compactenum}[\hspace{0pt} 1.] \setlength{\itemsep}{2pt} \item CPI computes a sequence of stochastic policies that are mixtures between consecutive greedy policies. \item TRPO imposes a constraint on the KL divergence between the old policy and the new one ($ \mathbb{E}_{s \sim d^{\pi_i}_\rho}\left[ D_{\mathrm{KL}}(\pi'( \cdot \mid s) \| \pi_i( \cdot \mid s))\right] \leq \delta$). \item PPO directly clips the objective function based on the value of the importance sampling ratio $\kappa_{\pi'/\pi_i}$ between the old policy and the new one. \end{compactenum} Such conservative updates are critical for the stability of policy optimization. In fact, the surrogate objective $L_{\pi_i}(\pi')$ (or its clipped version) is valid only in a neighborhood of the current policy $\pi_i$, i.e., when $\pi'$ and $\pi_i$ visit all states with similar probabilities. The following lemma formalizes this more precisely\footnote{The result is not novel; it can be found, for example, as an intermediate step in the proof of Theorem 1 in \citet{achiam2017constrained}.}: \begin{lemma} \label{lemma:lower_bound_perf} For all policies $\pi$ and $\pi'$, \begin{align} \label{eq:lower_bound_perf} J(\pi') & \geq L_\pi(\pi') - \epsilon^\pi D_{\mathrm{TV}}(d^{\pi'}_\rho \| d^{\pi}_\rho) \\ & \geq L^{\mathrm{clip}}_\pi(\pi') - \epsilon^\pi D_{\mathrm{TV}}(d^{\pi'}_\rho \| d^{\pi}_\rho), \nonumber \end{align} where $\epsilon^\pi = \max_{s \in \mathcal{S}} |\mathbb{E}_{a \sim \pi'(\cdot \mid s)} \left[ A^{\pi}(s, a) \right]|$ and $D_{\mathrm{TV}}$ is the total variation distance. \end{lemma} The proof is provided in the appendix for completeness. Lemma~\ref{lemma:lower_bound_perf} states that $L_\pi(\pi')$ (or $L^{\mathrm{clip}}_\pi(\pi')$) is a sensible lower bound on $J(\pi')$ as long as $\pi$ and $\pi'$ are close in terms of the total variation distance between their corresponding state visitation distributions $d^{\pi'}_\rho$ and $d^\pi_\rho$.
However, the aforementioned approaches enforce closeness of $\pi'$ and $\pi$ in terms of their action probabilities rather than their state visitation distributions. This can be justified by the following inequality~\citep{achiam2017constrained}: \begin{equation} \label{eq:bound_tv_div} D_{\mathrm{TV}}(d^{\pi'}_\rho\| d^{\pi}_\rho) \leq \frac{2 \gamma}{1-\gamma} \mathbb{E}_{s \sim d^\pi_\rho} \left[ D_{\mathrm{TV}}( \pi'(. | s) \| \pi(. | s)) \right]. \end{equation} Plugging inequality~\eqref{eq:bound_tv_div} into~\eqref{eq:lower_bound_perf} leads to the following lower bound: \begin{equation}\label{eq:bound_pol} J(\pi') \geq L_\pi(\pi') - \frac{2\gamma\epsilon^\pi}{1-\gamma} \mathbb{E}_{s \sim d^\pi_\rho} \left[ D_{\mathrm{TV}}( \pi'(. | s) \| \pi(. | s)) \right]. \end{equation} The obtained lower bound~\eqref{eq:bound_pol} is, however, clearly looser than the one in inequality~\eqref{eq:lower_bound_perf}: it suffers from an additional multiplicative factor $\frac{2\gamma}{1-\gamma}$, on the order of the effective planning horizon. This is essentially due to the fact that we are characterizing a long-horizon quantity, the state visitation distribution $d^\pi_\rho(s)$, by a one-step quantity, the action probabilities $\pi(\cdot \mid s)$. Therefore, algorithms that rely solely on action probabilities to define closeness between policies should be expected to suffer from instability and premature convergence in long-horizon problems.
Furthermore, in the exact case, if at iteration $i$ we take $\pi_{i+1} \leftarrow \arg \max_{\pi'} L_{\pi_i}(\pi') - \epsilon^{\pi_i} D_{\mathrm{TV}}(d^{\pi'}_\rho \| d^{\pi_i}_\rho)$, then \begin{align*} J(\pi_{i+1}) & \geq L_{\pi_i}(\pi_{i+1}) - \epsilon^{\pi_i} D_{\mathrm{TV}}(d^{\pi_{i+1}}_\rho \| d^{\pi_i}_\rho) \\ & \geq L_{\pi_i}(\pi_{i}) \tag{by optimality of $\pi_{i+1}$}\\ & = J(\pi_i). \end{align*} Therefore, this provides a monotonic policy improvement, while TRPO suffers from a performance degradation that depends on the level of the trust region $\delta$ (see Proposition 1 in~\citet{achiam2017constrained}). It follows from our discussion that $D_{\mathrm{TV}}(d^{\pi'}_\rho \| d^{\pi}_\rho)$ is a more natural proximity term for ensuring safer and more stable policy updates. Previous approaches excluded using this term because we do not have access to $d^{\pi'}_\rho$, which would require executing $\pi'$ in the environment. In the next section, we show how we can leverage recent advances in off-policy policy evaluation to address this issue. \section{OFF-POLICY FORMULATION OF DIVERGENCES} In this section, we explain how divergences between state-visitation distributions can be approximated, by leveraging ideas from recent works on off-policy learning~\citep{nachum2019dualdice, kostrikov2019imitation}. Consider two different policies $\pi$ and $\pi'$, and suppose that we have access to state-action samples generated by executing the policy $\pi$ in the environment, i.e., $(s, a) \sim \mu_\rho^\pi$. As motivated in the last section, we aim to estimate $D_{\mathrm{TV}}(d^{\pi'}_\rho \| d^{\pi}_\rho)$ without requiring on-policy data from $\pi'$. Note that, in order to avoid using importance sampling ratios, it is more convenient to estimate $D_{\mathrm{TV}}(\mu^{\pi'}_\rho \| \mu^{\pi}_\rho)$, i.e., the total variation between state-action visitation distributions, rather than the divergence between state visitation distributions.
This is still a reasonable choice, as $D_{\mathrm{TV}}(d^{\pi'}_\rho \| d^{\pi}_\rho)$ is upper bounded by $D_{\mathrm{TV}}(\mu^{\pi'}_\rho \| \mu^{\pi}_\rho)$, as shown below: \begin{align*} D_{\mathrm{TV}}(d^{\pi'}_\rho \| d^{\pi}_\rho) & = \int_s \Big|(d^{\pi'}_\rho - d^{\pi}_\rho)(s) \Big| ds \\ & = \int_s \Big | \int_a (\mu^{\pi'}_\rho - \mu^{\pi}_\rho)(s, a) da \Big | ds \\ & \leq \int_s \int_a \Big| (\mu^{\pi'}_\rho - \mu^{\pi}_\rho)(s, a)\Big| da~ds \\ & =D_{\mathrm{TV}}(\mu^{\pi'}_\rho \| \mu^{\pi}_\rho). \end{align*} The total variation distance belongs to a broad class of divergences known as $\phi$-divergences~\citep{sriperumbudur2009integral}. A $\phi$-divergence is defined as \begin{equation}\label{eq:phi_div} D_\phi(\mu^{\pi'}_\rho \| \mu^{\pi}_\rho) = \mathbb{E}_{(s, a) \sim \mu^{\pi}_\rho} \left[ \phi\left (\frac{\mu^{\pi'}_\rho(s, a)}{\mu^{\pi}_\rho(s, a)}\right) \right], \end{equation} where $\phi: [0, \infty) \rightarrow \mathbb{R}$ is a convex, lower-semicontinuous function with $\phi(1)= 0$. Well-known divergences can be obtained by appropriately choosing $\phi$. These include the KL divergence ($\phi(t) = t \log(t)$), the total variation distance ($\phi(t) = |t-1|$), the $\chi^2$-divergence ($\phi(t) = (t-1)^2$), etc. Working with the form of $\phi$-divergence given in Equation~\eqref{eq:phi_div} requires access to analytic expressions of both $\mu^{\pi}_\rho$ and $\mu^{\pi'}_\rho$, neither of which is available in our problem of interest.
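As a sanity check, the inequality $D_{\mathrm{TV}}(d^{\pi'}_\rho \| d^{\pi}_\rho) \leq D_{\mathrm{TV}}(\mu^{\pi'}_\rho \| \mu^{\pi}_\rho)$ derived above can be verified exactly in a small tabular MDP, where both visitation distributions are available in closed form (a numpy sketch; the MDP and policies are randomly generated for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
S, A, gamma = 6, 3, 0.95

P = rng.random((S, A, S)); P /= P.sum(-1, keepdims=True)
rho = rng.random(S); rho /= rho.sum()

def visitation(pi):
    """Exact discounted state-action visitation mu_rho^pi, via its linear fixed point."""
    T = np.einsum('sap,pb->sapb', P, pi).reshape(S * A, S * A).T
    b = (rho[:, None] * pi).reshape(S * A)
    mu = np.linalg.solve(np.eye(S * A) - gamma * T, (1 - gamma) * b)
    return mu.reshape(S, A)

pi1 = rng.random((S, A)); pi1 /= pi1.sum(-1, keepdims=True)
pi2 = rng.random((S, A)); pi2 /= pi2.sum(-1, keepdims=True)
mu1, mu2 = visitation(pi1), visitation(pi2)

# TV between state visitations is bounded by TV between state-action visitations.
tv_states = np.abs(mu1.sum(1) - mu2.sum(1)).sum()
tv_state_actions = np.abs(mu1 - mu2).sum()
assert tv_states <= tv_state_actions + 1e-12
```

Here the total variation follows the convention used above (the integral of the absolute difference, without a $\tfrac{1}{2}$ factor).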
To bypass these difficulties, we turn to the alternative variational representation of $\phi$-divergences~\citep{nguyen2009surrogate, huang2017parametric}, \begin{align}\label{eq:variational_div} D_\phi(\mu^{\pi'}_\rho \| \mu^{\pi}_\rho) = & \sup_{f: \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}} \Big[ \mathbb{E}_{(s, a) \sim \mu^{\pi'}_\rho}[ f(s, a)]- \nonumber \\ & \quad \mathbb{E}_{(s, a) \sim \mu^{\pi}_\rho}[ \phi^\star \circ f(s, a)] \Big], \end{align} where $\phi^\star(t) = \sup_{u \in \mathbb{R}}\{t u - \phi(u)\}$ is the convex conjugate of $\phi$. The variational form in Equation~\eqref{eq:variational_div} still requires sampling from $\mu^{\pi'}_\rho$, which we cannot do. To address this issue, we use a clever change-of-variable trick introduced by~\citet{nachum2019dualdice}. Define $g:\mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ as the fixed point of the following Bellman equation, \begin{equation} \label{eq:change_var} g(s, a) = f(s, a) + \gamma \P^{\pi'}g(s, a), \end{equation} where $\P^{\pi'}$ is the transition operator induced by $\pi'$, defined as $\P^{\pi'}g(s, a) = \int \pi'(a' \mid s') \P(s' \mid s, a) g(s', a')\, ds'\, da'$. $g$ may be interpreted as the action-value function of the policy $\pi'$ in a modified MDP that shares the same transition model $\P$ as the original MDP, but has $f$ as the reward function instead of $r$. Applying the change of variable~\eqref{eq:change_var} to \eqref{eq:variational_div} and after some algebraic manipulation, as done in \citet{nachum2019dualdice}, we obtain \begin{align}\label{eq:dice} D_\phi(\mu^{\pi'}_\rho \| & \mu^{\pi}_\rho) = \sup_{g: \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}} \Big[ (1-\gamma) \mathbb{E}_{s \sim \rho, a \sim \pi'}[ g(s, a)]- \nonumber \\ & \mathbb{E}_{(s, a) \sim \mu^{\pi}_\rho}\left[ \phi^\star \left((g - \gamma \P^{\pi'}g)(s,a)\right)\right] \Big].
\end{align} Thanks to the change of variable, the first expectation, over $\mu^{\pi'}_\rho$ in \eqref{eq:variational_div}, is converted into an expectation over the initial distribution and the policy, i.e., $s \sim \rho(\cdot), a \sim \pi'(\cdot \mid s)$. Therefore, this new form of the $\phi$-divergence in \eqref{eq:dice} is completely off-policy and can be estimated using only samples from the policy $\pi$. \paragraph{Other possible divergence representations:} Using the variational representation of $\phi$-divergences was a key step in the derivation of Equation~\eqref{eq:dice}. In fact, any representation that admits a linear term with respect to $\mu^{\pi'}_\rho$ (i.e., $\mathbb{E}_{(s, a) \sim \mu^{\pi'}_\rho}[ f(s, a)]$) would work as well. For example, one can use the Donsker-Varadhan representation~\citep{donsker1983asymptotic} to alternatively express the KL divergence as: \begin{align}\label{eq:donkser} D_{\mathrm{KL}}(\mu^{\pi'}_\rho \| \mu^{\pi}_\rho) = &\sup_{f: \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}} \Big[ \mathbb{E}_{(s, a) \sim \mu^{\pi'}_\rho}[ f(s, a)]- \\ & \quad \log \left( \mathbb{E}_{(s, a) \sim \mu^{\pi}_\rho}[ \exp( f(s, a))] \right) \Big]. \nonumber \end{align} The \textit{log-expected-exp} in this equation makes the Donsker-Varadhan representation~\eqref{eq:donkser} more numerically stable than the variational one~\eqref{eq:dice} when working with KL divergences. Because of its genericity for $\phi$-divergences, we base the remainder of our exposition on~\eqref{eq:dice}, but it is straightforward to adapt the approach and algorithm to use~\eqref{eq:donkser} for better numerical stability when working with the KL divergence specifically. Thus, in practice, we use the latter in our experiments with KL-based regularization, but not in those with $\chi^2$-based regularization. \section{A PRACTICAL ALGORITHM USING ADVERSARIAL DIVERGENCE} \begin{algorithm*}[h!]
\caption{\textsc{PPO-DICE}}\label{alg:main} \begin{algorithmic}[1] \State \textbf{Initialization}: randomly initialize parameters $\theta_1$ (policy), $\psi_1$ (discriminator) and $\omega_1$ (value function). \For{i=1, \ldots} \State Generate a batch of $M$ rollouts $\{s^{(j)}_1, a^{(j)}_1, r^{(j)}_1, s^{(j)}_{2}, \ldots, s^{(j)}_{T}, a^{(j)}_{T}, r^{(j)}_T, s^{(j)}_{T+1}\}_{j=1}^M$ by executing policy $\pi_{\theta_i}$ in the environment for $T$ steps. \State Estimate the advantage function: $\hat{A}(s^{(j)}_t, a^{(j)}_t) = \sum_{l=t}^T (\gamma \lambda)^{l-t} (r^{(j)}_l + \gamma V_{\omega_i}(s^{(j)}_{l+1}) - V_{\omega_i}(s^{(j)}_l))$ \State Compute target values $y^{(j)}_t = r^{(j)}_t + \gamma r^{(j)}_{t+1} + \ldots + \gamma^{T+1-t} V_{\omega_i}(s^{(j)}_{T+1})$ \State $\omega = \omega_i; \theta = \theta_i; \psi = \psi_i$ \For{epoch n=1, \ldots N} \For{iteration k=1, \ldots K} \State {\bf \textcolor{gray!70!blue}{// Compute discriminator loss:}} \State $ \hat{L}_D(\psi, \theta) = \frac{1}{MT}\sum_{j=1}^M\sum_{t=1}^{T} \phi^\star\left(g_\psi(s^{(j)}_t, a^{(j)}_t) - \gamma g_\psi(s^{(j)}_{t+1}, a'^{(j)}_{t+1})\right)- (1 - \gamma) g_\psi(s^{(j)}_1, a'^{(j)}_t) $ where $a'^{(j)}_t \sim \pi_\theta(\cdot \mid s^{(j)}_1), a'^{(j)}_{t+1} \sim \pi_\theta(\cdot \mid s^{(j)}_{t+1})$.
\State {\bf \textcolor{gray!70!blue}{// Update discriminator parameters:}} (using learning rate $c_\psi \eta$) \State $ \psi \leftarrow \psi - c_\psi\eta \nabla_{\psi} \hat{L}_D(\psi, \theta);$ \EndFor \State {\bf \textcolor{gray!70!blue}{// Compute value loss:}} \State $ \hat{L}_V(\omega) = \frac{1}{MT}\sum_{j=1}^M\sum_{t=1}^{T}\left(V_\omega(s_t^{(j)}) - y^{(j)}_t\right)^2 $ \State {\bf \textcolor{gray!70!blue}{// Compute PPO clipped loss:}} \State $ \hat{L}^{\mathrm{clip}}(\theta) = \frac{1}{MT}\sum_{j=1}^M\sum_{t=1}^{T} \min\big\{ \hat{A}(s^{(j)}_t, a^{(j)}_t) \kappa_{\pi_\theta /\pi_{\theta_i}}(s^{(j)}_t, a^{(j)}_t), \hat{A}(s^{(j)}_t, a^{(j)}_t) \mathrm{clip}(\kappa_{\pi_\theta /\pi_{\theta_i}}(s^{(j)}_t, a^{(j)}_t), 1-\epsilon, 1+\epsilon)\big\} $ \State {\bf \textcolor{gray!70!blue}{// Update parameters:}} (using learning rate $\eta$) \State{$ \omega \leftarrow \omega - \eta \nabla_{\omega} \hat{L}_V(\omega); $} \State{$ \theta \leftarrow \theta + \eta \nabla_{\theta} (\hat{L}^{\mathrm{clip}}(\theta) + \lambda \cdot \hat{L}_D(\psi,\theta)) $ (if the reparametrization trick is applicable, else gradient step on Eq.~\ref{eq:empirical-score-function-objective})} \EndFor \State $\omega_{i+1} = \omega; \theta_{i+1} = \theta; \psi_{i+1} = \psi$ \EndFor \end{algorithmic} \end{algorithm*} We now turn these insights into a practical algorithm. The lower bounds in Lemma~\ref{lemma:lower_bound_perf} suggest using a regularized PPO objective\footnote{\label{footnote:clip} Both the regularized $L^\mathrm{clip}_{\pi_i}$ and $L_{\pi_i}$ are lower bounds on the policy performance in Lemma \ref{lemma:lower_bound_perf}. We use $L^\mathrm{clip}_{\pi_i}$ rather than $L_{\pi_i}$ because we expect it to work better, as the clipping already provides some constraint on action probabilities.
Also, this will allow a more direct empirical assessment of what the regularization brings compared to vanilla PPO.}: $L^{\mathrm{clip}}_{\pi}(\pi') - \lambda D_{\mathrm{TV}}(d^{\pi'}_\rho \| d^{\pi}_\rho)$, where $\lambda$ is a regularization coefficient. If in place of the total variation we use the off-policy formulation of the $\phi$-divergence $D_\phi(\mu^{\pi'}_\rho \| \mu^{\pi}_\rho)$ as detailed in Equation \eqref{eq:dice}, our main optimization objective can be expressed as the following min-max problem: \begin{align} \max_{\pi'} & \min_{g: \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}} L^{\mathrm{clip}}_{\pi_i}(\pi') - \lambda \Big( (1-\gamma) \mathbb{E}_{s \sim \rho, a \sim \pi'}[ g(s, a)]- \nonumber \\ & \mathbb{E}_{(s, a) \sim \mu^{\pi_i}_\rho}\left[ \phi^\star \left((g - \gamma \P^{\pi'}g)(s,a)\right)\right] \Big). \label{eq:main_obj} \end{align} When the inner minimization over $g$ is fully optimized, it is straightforward to show -- using the score function estimator -- that the gradient of this objective with respect to $\pi'$ is (the proof is provided in the appendix): \begin{align} \label{eq:score function estimator} & \nabla_{\pi'} L^{\mathrm{clip}}_{\pi_i}({\pi'}) - \lambda \Big( (1-\gamma) \mathbb{E}_{\substack {s \sim \rho \\ a \sim \pi'}}[ g(s, a) \nabla_{\pi'} \log\pi'(a \mid s)] \nonumber \\ & + \gamma \mathbb{E}_{(s, a) \sim \mu^{\pi_i}_\rho}\big[ \frac{\partial \phi^{\star}}{\partial t} \left((g - \gamma \P^{\pi'}g)(s,a)\right) \\ & \mathbb{E}_{s' \sim \P(\cdot \mid s, a), a' \sim \pi'(\cdot \mid s')} \left[ g(s', a') \nabla_{\pi'} \log\pi'(a' \mid s')\right]\big] \Big). \nonumber \end{align} Furthermore, we can use the reparametrization trick if the policy $\pi'$ is parametrized by a Gaussian, which is usually the case in continuous control tasks.
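The key identity behind \eqref{eq:dice} and the objective above is that, for any $f$, if $g$ solves the Bellman equation \eqref{eq:change_var}, then $(1-\gamma)\mathbb{E}_{s \sim \rho, a \sim \pi'}[g(s,a)] = \mathbb{E}_{(s,a) \sim \mu^{\pi'}_\rho}[f(s,a)]$. In a tabular MDP this can be verified exactly; the numpy sketch below (the toy MDP and all names are our own illustration) does so:

```python
import numpy as np

rng = np.random.default_rng(2)
S, A, gamma = 5, 2, 0.9

P = rng.random((S, A, S)); P /= P.sum(-1, keepdims=True)
rho = rng.random(S); rho /= rho.sum()
pi = rng.random((S, A)); pi /= pi.sum(-1, keepdims=True)   # plays the role of pi'
f = rng.standard_normal((S, A))                            # arbitrary "reward" f

# Transition operator induced by pi': (P^pi' g)(s,a) = sum_{s',a'} P(s'|s,a) pi'(a'|s') g(s',a').
Ppi = np.einsum('sap,pb->sapb', P, pi).reshape(S * A, S * A)

# Change of variable: solve the Bellman equation g = f + gamma * P^pi' g.
g = np.linalg.solve(np.eye(S * A) - gamma * Ppi, f.reshape(S * A))

# Exact discounted state-action visitation of pi' (linear fixed point).
b = (rho[:, None] * pi).reshape(S * A)
mu = np.linalg.solve(np.eye(S * A) - gamma * Ppi.T, (1 - gamma) * b)

lhs = (1 - gamma) * (b @ g)      # (1-gamma) E_{s~rho, a~pi'}[g(s,a)]
rhs = mu @ f.reshape(S * A)      # E_{(s,a)~mu^{pi'}}[f(s,a)]
assert np.isclose(lhs, rhs)
```

The identity holds for every $f$; taking the supremum over $f$ (equivalently over $g$) is what turns \eqref{eq:variational_div} into the fully off-policy form \eqref{eq:dice}.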
We call the resulting new algorithm PPO-DICE (detailed in Algorithm~\ref{alg:main}), as it uses the clipped loss of PPO and leverages the DIstribution Correction Estimation idea from \citet{nachum2019dualdice}. In the min-max objective~\eqref{eq:main_obj}, $g$ plays the role of a discriminator, as in Generative Adversarial Networks (GAN)~\citep{goodfellow2014generative}. The policy $\pi'$ plays the role of a generator, and it should balance between increasing the likelihood of actions with large advantage versus inducing a state-action distribution that is close to the one of $\pi_i$. As shown in Algorithm~\ref{alg:main}, both the policy and the discriminator are parametrized by neural networks $\pi_\theta$ and $g_\psi$ respectively. We estimate the objective~\eqref{eq:main_obj} with samples from $\pi_i = \pi_{\theta_i}$ as follows. At a given iteration $i$, we generate a batch of $M$ rollouts $\{s^{(j)}_1, a^{(j)}_1, r^{(j)}_1, s^{(j)}_{2}, \ldots, s^{(j)}_{T}, a^{(j)}_{T}, r^{(j)}_T, s^{(j)}_{T+1}\}_{j=1}^M$ by executing the policy $\pi_i$ in the environment for $T$ steps. Similarly to the PPO procedure, we learn a value function $V_\omega$ by updating its parameters $\omega$ with gradient descent steps, optimizing the following squared error loss: \begin{equation} \hat{L}_V(\omega) = \frac{1}{MT}\sum_{j=1}^M\sum_{t=1}^{T}\left(V_\omega(s_t^{(j)}) - y^{(j)}_t\right)^2, \end{equation} where $y^{(j)}_t = r^{(j)}_t + \gamma r^{(j)}_{t+1} + \ldots + \gamma^{T+1-t}V_\omega(s^{(j)}_{T+1})$. Then, to estimate the advantage, we use the truncated generalized advantage estimate \begin{equation} \hat{A}(s^{(j)}_t, a^{(j)}_t) = \sum_{l=0}^{T-t} (\gamma \lambda)^{l} \left(r^{(j)}_{t+l} + \gamma V_\omega(s^{(j)}_{t+l+1}) - V_\omega(s^{(j)}_{t+l})\right).
\end{equation} This advantage estimate is used to compute an estimate of $L^{\mathrm{clip}}_{\pi_i}$ given by: \begin{align} & \hat{L}^{\mathrm{clip}}(\theta) = \\ &\quad \frac{1}{MT}\sum_{j=1}^M\sum_{t=1}^{T} \min\Big\{ \hat{A}(s^{(j)}_t, a^{(j)}_t) \kappa_{\pi_\theta /\pi_{i}}(s^{(j)}_t, a^{(j)}_t), \nonumber \\ & \quad \hat{A}(s^{(j)}_t, a^{(j)}_t) \cdot \mathrm{clip}(\kappa_{\pi_\theta /\pi_{i}}(s^{(j)}_t, a^{(j)}_t), 1-\epsilon, 1+\epsilon)\Big\} \nonumber \end{align} The parameters $\psi$ of the discriminator are learned by gradient descent on the following empirical version of the regularization term in the min-max objective \eqref{eq:main_obj}: \begin{align} \label{eq:empirical_reg} \hat{L}_D(\psi, \theta) & = \frac{-1}{MT}\sum_{j=1}^M\sum_{t=1}^{T} (1 - \gamma) g_\psi(s^{(j)}_1, a'^{(j)}_t) \\ & \quad - \phi^\star\left(g_\psi(s^{(j)}_t, a^{(j)}_t) - \gamma g_\psi(s^{(j)}_{t+1}, a'^{(j)}_{t+1})\right), \nonumber \end{align} where $a'^{(j)}_t \sim \pi_\theta(\cdot \mid s^{(j)}_1)$ and $ a'^{(j)}_{t+1} \sim \pi_\theta(\cdot \mid s^{(j)}_{t+1})$. If the reparametrization trick is applicable (which is almost always the case for continuous control tasks), the parameters $\theta$ of the policy are updated via gradient ascent on the objective $\hat{L}^{\mathrm{clip}}(\theta) + \lambda \hat{L}_D(\psi, \theta)$, as we can backpropagate the gradient through the action sampling while computing $\hat{L}_D(\psi, \theta)$ in Equation~\eqref{eq:empirical_reg}.
Otherwise, the parameters $\theta$ are updated via gradient ascent on the following objective: \begin{align} & \quad \hat{L}^{\mathrm{clip}}(\theta) - \nonumber \\ & \quad \frac{\lambda}{MT}\sum_{j=1}^M\sum_{t=1}^{T} \Big[ (1 - \gamma) g_\psi(s^{(j)}_1, a'^{(j)}_t) \log \pi_\theta(a'^{(j)}_t \mid s^{(j)}_1) \nonumber \\ & \quad + \gamma \frac{\partial \phi^{\star}}{\partial t}\left(g_\psi(s^{(j)}_t, a^{(j)}_t) - \gamma g_\psi(s^{(j)}_{t+1}, a'^{(j)}_{t+1})\right) \nonumber \\ & \quad \cdot g_\psi(s^{(j)}_{t+1}, a'^{(j)}_{t+1}) \log \pi_\theta(a'^{(j)}_{t+1} \mid s^{(j)}_{t+1}) \Big] \label{eq:empirical-score-function-objective} \end{align} Note that the gradient of this equation with respect to $\theta$ corresponds to an empirical estimate of the score function estimator we provided in Equation~\ref{eq:score function estimator}. We train the value function, policy, and discriminator for $N$ epochs using $M$ rollouts of the policy $\pi_i$. We can either alternate between updating the policy and the discriminator, or update $g_\psi$ for a few steps before updating the policy. We found that the latter worked better in practice, likely due to the fact that the target distribution $\mu^{\pi_i}_\rho$ changes with every iteration $i$. We also found that increasing the learning rate of the discriminator by a multiplicative factor $c_\psi$ of the learning rate $\eta$ for the policy and value function improved performance. \paragraph{Choice of divergence:} The algorithmic approach we just described is valid with any choice of $\phi$-divergence for measuring the discrepancy between state-visitation distributions. It remains to choose an appropriate one. While Lemma~\ref{lemma:lower_bound_perf} advocates the use of the total variation distance ($\phi(t) = |t-1|$), it is notoriously hard to train high dimensional distributions using this divergence (see~\citet{kodali2017convergence} for example).
Moreover, the convex conjugate of $\phi(t) = |t-1|$ is $\phi^\star(t) = t$ if $ |t| \leq \frac{1}{2}$ and $\phi^\star(t) = \infty$ otherwise. This would imply the need to introduce an extra constraint $\|g - \P^\pi g\|_\infty \leq \frac{1}{2}$ in the formulation~\eqref{eq:dice}, which may be hard to optimize. Therefore, we will instead use the KL divergence ($\phi(t) = t \log(t), \phi^\star (t) = \exp(t-1)$). This is still a well justified choice as we know that $D_{\mathrm{TV}}(\mu^{\pi'}_\rho \| \mu^\pi_\rho) \leq \sqrt{\frac{1}{2} D_{\mathrm{KL}}(\mu^{\pi'}_\rho \| \mu^\pi_\rho)}$ thanks to Pinsker's inequality. We will also try the $\chi^2$-divergence ($\phi(t)=(t-1)^2$), which yields a squared regularization term. \section{RELATED WORK} Constraining policy updates, in order to minimize the information loss due to policy improvement, has been an active area of investigation.~\citet{kakade2002approximately} originally introduced CPI by maximizing a lower bound on the policy improvement and relaxing the greedification step through a mixture of successive policies.~\citet{pirotta2013safe} build on~\citet{kakade2002approximately}, refining the lower bounds and introducing a new mixture scheme. Moreover, CPI inspired some popular deep RL algorithms such as TRPO~\citep{schulman2015trust}, PPO~\citep{schulman2017proximal}, Deep CPI~\citep{vieillard2019deep} and MPO~\citep{AbdolmalekiSTMH18}. The latter uses updates similar to TRPO/PPO in the parametric version of its E-step, so our method can be incorporated into it. Our work is related to the regularized MDP literature~\citep{neu2017unified,geist2019theory}. Shannon entropy regularization is used in value iteration schemes~\citep{haarnoja2017reinforcement, dai2018sbeed} and in policy iteration schemes~\citep{haarnoja2018soft}. Note that all the mentioned works employ regularization on the action probabilities.
Recently,~\citet{wang2019divergence} introduced divergence-augmented policy optimization, where they penalize the policy update by a Bregman divergence on the state visitation distributions, motivated by the mirror descent method. While their framework seems general, it does not include the divergences we employ in our algorithm. In fact, their method enables the use of the \emph{conditional} KL divergence between state-action visitation distributions, defined by $\int \mu_\rho^\pi(s, a) \log \frac{\pi(a \mid s)}{\pi'(a \mid s)}$, and not the KL divergence $\int \mu_\rho^\pi(s, a) \log \frac{\mu_\rho^\pi(s, a)}{\mu_\rho^{\pi'}(s, a)}$. Note that the ratio of action probabilities inside the $\log$ in the conditional KL divergence allows them to use the policy gradient theorem, a key ingredient in their framework, which cannot be done for the KL divergence. Our work builds on recent off-policy approaches: DualDICE~\citep{nachum2019dualdice} for policy evaluation and ValueDICE~\citep{kostrikov2019imitation} for imitation learning. Both use the off-policy formulation of the KL divergence. The former uses the formulation to estimate the ratio of the state visitation distributions under the target and behavior policies, whereas the latter learns a policy by minimizing the divergence. The closest related work is the recently proposed AlgaeDICE~\citep{nachum2019algaedice} for off-policy policy optimization. They use the divergence between the state-action visitation distribution induced by $\pi$ and a behavior distribution, motivated by similar techniques in~\citet{nachum2019dualdice}. However, they incorporate the regularization into the dual form of the policy performance $J(\pi) = \mathbb{E}_{(s, a) \sim \mu^\pi_\rho}[r(s, a)]$, whereas we consider a surrogate objective (a lower bound on the policy performance). Moreover, our method is online off-policy in that we collect data with each policy found in the optimization procedure, but also use previous data to improve stability.
Their algorithm, in contrast, is designed to learn a policy from a fixed dataset collected by behaviour policies. Further comparison with AlgaeDICE is provided in the appendix. \section{EXPERIMENTS AND RESULTS} We use the PPO implementation by \cite{pytorchrl} as a baseline and modify it to implement our proposed PPO-DICE algorithm. We run experiments on a randomly selected subset of environments in the Atari suite \citep{ale} for high-dimensional observations and discrete action spaces, as well as on the OpenAI Gym \citep{openaigym} MuJoCo environments, which have continuous state-action spaces. All shared hyperparameters are set at the same values for both methods, and we use the hyperparameter values recommended by \cite{pytorchrl} for each set of environments, Atari and MuJoCo\footnote{Code: https://github.com/facebookresearch/ppo-dice}. \subsection{IMPORTANT ASPECTS OF PPO-DICE} \subsubsection{Choice of Divergence} \begin{figure}[t] \centering \includegraphics[width=0.2\textwidth]{figures/Hopper-v2_ChiS.pdf} \hspace{10pt} \includegraphics[width=0.2\textwidth]{figures/KangarooNoFrameskip-v4_ChiS.pdf} \caption{Comparison of $\chi^2$ and KL divergences for PPO-DICE for two randomly selected environments in OpenAI Gym MuJoCo and Atari, respectively. We see that KL performs better than $\chi^2$ in both settings. Performance plotted across 10 seeds with 1 standard error shaded.} \label{fig:hopper_chis} \end{figure} We conducted an initial set of experiments to compare two different choices of divergences, KL and $\chi^2$, for the regularization term of PPO-DICE. \cref{fig:hopper_chis} shows training curves for one continuous-action and one discrete-action environment. There, as in the other environments in which we ran this comparison, KL consistently performed better than $\chi^2$. We thus opted to use the KL divergence in all subsequent experiments.
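With the KL divergence, $\phi^\star(t)=\exp(t-1)$, and the empirical discriminator loss $\hat{L}_D$ of Equation~\eqref{eq:empirical_reg} reduces to a few array operations. A minimal NumPy sketch (assuming the discriminator outputs $g_\psi$ have already been evaluated on the relevant state-action pairs; the function names are ours, not from the released code):

```python
import numpy as np

def phi_star_kl(t):
    # Convex conjugate of phi(t) = t log(t), i.e. the KL divergence.
    return np.exp(t - 1.0)

def discriminator_loss(g_init, g_curr, g_next, gamma=0.99):
    """Empirical L_D, negated so that gradient *descent* trains g_psi.

    g_init: g_psi(s_1, a'_t)       with a'_t     ~ pi_theta(.|s_1)
    g_curr: g_psi(s_t, a_t)        on on-policy pairs from pi_i
    g_next: g_psi(s_{t+1}, a'_{t+1}) with a'_{t+1} ~ pi_theta(.|s_{t+1})
    """
    residual = g_curr - gamma * g_next      # (g - gamma P^pi g)(s, a)
    return float(-np.mean((1.0 - gamma) * g_init - phi_star_kl(residual)))
```

In the full algorithm these arrays would be the outputs of the discriminator network on the batch; here they are plain inputs so the loss itself can be checked in isolation.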
\subsubsection{Effect of Varying $\lambda$} \label{sec:lambda} \begin{figure}[h] \centering \includegraphics[width=0.27\textwidth]{figures/Hopper-v2_ablate_lambda.pdf} \caption{Varying $\lambda$ in \texttt{Hopper-v2}, 10 seeds, 1 standard error shaded. PPO-DICE is somewhat sensitive to the $\lambda$ value, but the theoretically-motivated adaptive version works well.} \label{fig:hopper_lambda} \end{figure} Next, we wanted to evaluate the sensitivity of our method to the $\lambda$ parameter that controls the strength of the regularization. We examine in \cref{fig:hopper_lambda} the performance of PPO-DICE when varying $\lambda$. There is a fairly narrow band for \texttt{Hopper-v2} that performs well, between $0.01$ and $1$. Theory indicates that the proper value for $\lambda$ is the maximum of the absolute value of the advantages (see Lemma \ref{lemma:lower_bound_perf}). This prompted us to implement an adaptive approach, where we compute the 90th percentile of advantages within the batch (for stability), which we found performed well across environments. To avoid introducing an additional hyperparameter by tuning $\lambda$, we use the adaptive method for subsequent experiments. \begin{figure}[h!] \centering \includegraphics[width=0.27\textwidth]{figures/Hopper-v2_noclip.pdf} \caption{Comparison of PPO-DICE with the clipped loss $L^\text{clip}$ and with the unclipped loss $L$. We see that clipping the action loss is crucial for good performance.} \label{fig:hopper_noclip} \end{figure} \begin{table*}[h!]
\vspace{5px} \centering \vspace{-10pt} \begin{tabular}{l|ll} \toprule Game & PPO & PPO-DICE \\ \midrule AirRaid & $ 4305.0 \pm 638.15 $ & $ \textcolor{blue}{\mathbf{5217.5 \pm 769.19 }}$ \\ Asterix & $ 4300.0 \pm 169.31 $ & $ \textcolor{blue}{\mathbf{6200.0 \pm 754.10 }}$ \\ Asteroids & $ 1511.0 \pm 125.03 $ & $\textcolor{blue}{\mathbf{ 1653.0 \pm 112.20 }}$ \\ Atlantis & $ 2120400.0 \pm 471609.93 $ & $\textcolor{blue}{\mathbf{ 3447433.33 \pm 100105.82}} $ \\ BankHeist & $ 1247.0 \pm 21.36 $ & $ \textcolor{blue}{\mathbf{1273.33 \pm 7.89 }}$ \\ BattleZone & $\textcolor{blue}{\mathbf{ 29000.0 \pm 2620.43}} $ & $ 19000.0 \pm 2463.06 $ \\ Carnival & $ 3243.33 \pm 369.51 $ & $ 3080.0 \pm 189.81 $ \\ ChopperCommand & $ 566.67 \pm 14.91 $ & $ \textcolor{blue}{\mathbf{900.0 \pm 77.46}} $ \\ DoubleDunk & $ -6.0 \pm 1.62 $ & $ \textcolor{blue}{\mathbf{-4.0 \pm 1.26}} $ \\ Enduro & $ 1129.9 \pm 73.18 $ & $ \textcolor{blue}{\mathbf{1308.33 \pm 120.09}} $ \\ Freeway & $ 32.33 \pm 0.15 $ & $ 32.0 \pm 0.00 $ \\ Frostbite & $ \textcolor{blue}{\mathbf{639.0 \pm 334.28}} $ & $ 296.67 \pm 5.96 $ \\ Gopher & $ 1388.0 \pm 387.65 $ & $ 1414.0 \pm 417.84 $ \\ Kangaroo & $ 4060.0 \pm 539.30 $ & $\textcolor{blue}{\mathbf{ 6650.0 \pm 1558.16 }}$ \\ Phoenix & $ \textcolor{blue}{\mathbf{12614.0 \pm 621.71}} $ & $ 11676.67 \pm 588.24 $ \\ Robotank & $ 7.8 \pm 1.33 $ & $\textcolor{blue}{\mathbf{ 12.1 \pm 2.91}} $ \\ Seaquest & $ 1198.0 \pm 128.82 $ & $ 1300.0 \pm 123.97 $ \\ TimePilot & $ 5070.0 \pm 580.53 $ & $ \textcolor{blue}{\mathbf{7000.0 \pm 562.32}} $ \\ Zaxxon & $ \textcolor{blue}{\mathbf{7110.0 \pm 841.60 }}$ & $ 6130.0 \pm 1112.48 $ \\ \bottomrule \end{tabular} \caption{Mean final reward and 1 standard error intervals across 10 seeds for Atari games evaluated at 10M steps.} \label{tab:atari} \end{table*} \begin{figure*}[h!] 
\centering \includegraphics[width=0.195\textwidth]{figures/HalfCheetah-v2.pdf} \includegraphics[width=0.195\textwidth]{figures/Hopper-v2.pdf} \includegraphics[width=0.195\textwidth]{figures/Humanoid-v2.pdf} \includegraphics[width=0.195\textwidth]{figures/HumanoidStandup-v2.pdf} \includegraphics[width=0.195\textwidth]{figures/InvertedDoublePendulum-v2.pdf} \caption{Results from the OpenAI Gym MuJoCo suite in more complex domains, with 10 seeds and 1 standard error shaded. Results on the full suite of environments can be found in \cref{app:mujoco_results}. \label{fig:mujoco_results}} \end{figure*} \subsubsection{Importance of Clipping the Action Loss} We earlier mentioned (see \cref{footnote:clip}) two possible forms of our regularized objective: one with the clipped action loss $L^{\text{clip}}$ and one with the unclipped loss $L$. Clipping the action loss was an extra regularizing measure proposed in PPO~\citep{schulman2017proximal}. For our algorithm also, we hypothesized that it would be important for providing additional constraints on the policy update to stay within the trust region. \cref{fig:hopper_noclip} confirms this empirically: we see the effect on our method of clipping the action loss versus keeping it unclipped. Initially, not having the additional regularization allows it to learn faster, but it soon crashes, showing the need for clipping to reduce variance in the policy update. \subsection{RESULTS ON ATARI} Given our above observations, we settled on using a KL-regularized $L^\mathrm{clip}$, with the adaptive method for $\lambda$ that we explained in Section \ref{sec:lambda}. We run PPO-DICE on randomly selected environments from Atari. We tuned two additional hyperparameters, the learning rate for the discriminator and the number of discriminator optimization steps per policy optimization step. We found that $K=5$ discriminator optimization steps per policy optimization step performed well.
Fewer steps showed worse performance because the discriminator was not updating quickly enough, while more optimization steps introduced instability from the discriminator overfitting to the current batch. We also found that increasing the discriminator learning rate to $c_\psi=10\times$ the policy learning rate helped in most environments. We used the same hyperparameters across all environments. Results are shown in \cref{tab:atari}. We see that PPO-DICE significantly outperforms PPO on a majority of Atari environments. See \cref{app:atari_results} for training curves and hyperparameters. \subsection{RESULTS ON OpenAI Gym MuJoCo} For the OpenAI Gym MuJoCo suite, we also used $K=5$ discriminator optimization steps per policy optimization step, and a discriminator learning rate of $c_\psi=10\times$ the policy learning rate in all environments. We selected 5 of the more difficult environments to showcase in the main paper (\cref{fig:mujoco_results}), but additional results on the full suite and all hyperparameters used can be found in \cref{app:mujoco_results}. We again see improved performance in the majority of environments with PPO-DICE compared to PPO and TRPO. \section{CONCLUSION} In this work, we have argued that using the action probabilities to constrain the policy update is a suboptimal approximation to controlling the state visitation distribution shift. We then demonstrated that, using the recently proposed DIstribution Correction Estimation idea~\citep{nachum2019dualdice}, we can directly compute the divergence between the state-action visitation distributions of successive policies and use that to regularize the policy optimization objective instead. Through carefully designed experiments, we have shown that our method beats PPO in most environments in the Atari~\citep{ale} and OpenAI Gym MuJoCo~\citep{openaigym} benchmarks.
\section{Acknowledgements} We would like to thank Ofir Nachum and Ilya Kostrikov for their helpful feedback and advice during discussions at the early stage of the project. \bibliographystyle{apalike}
\section{Introduction} Mathematical models for generating superdense compact stars compatible with observational data have received wide attention among researchers. A number of papers have appeared in the literature in the recent past along this direction, considering matter distributions incorporating charge \citep{Maurya11a,Maurya11b,Maurya11c,Pant12,Maurya15}. It has been suggested, as a result of the theoretical investigations of \cite{Ruderman72} and \cite{Canuto74}, that matter may not be isotropic in the high-density regime of $10^{15}~gm/cm^{3}$. Hence it is pertinent to construct charged distributions incorporating anisotropy in pressure. \cite{Bonner60, Bonner65} has shown that a spherical distribution of matter can retain its equilibrium by counterbalancing the gravitational force of attraction with the Coulombian force of repulsion due to the presence of charge. It was shown by \cite{Stettner73} that a spherical distribution of uniform density accompanied by charge is more stable than a distribution without charge. The study of charge distributions on spheroidal spacetimes has been carried out by \cite{Patel87}, \cite{Singh98}, \cite{Sharma01}, \cite{Gupta05}, \cite{Komathiraj07}. The spheroidal spacetime is found to accommodate superdense stars like neutron stars in both charged and uncharged cases. Studies of strange stars and quark stars in the presence of electric charge have been carried out by \cite{Sharma06}, \cite{Mukherjee01}, \cite{Mukherjee02}. Recently, charged fluid models have also been studied by \cite{Maurya11a,Maurya11b,Maurya11c}, \cite{Pant12} and \cite{Maurya15}. In this paper, we have obtained a new class of solutions for charged fluid distributions on the background of pseudo-spheroidal spacetime. Particular choices for the radial pressure $p_{r}$ and the electric field intensity $E$ are taken so that the physical requirements and regularity conditions are not violated.
The bounds for the geometric parameter $K$ and the parameter $\alpha$ associated with charge are determined using various physical requirements that are expected to be satisfied in the region of validity. It is found that these models can accommodate a number of pulsars like 4U 1820-30, PSR J1903+327, 4U 1608-52, Vela X-1, PSR J1614-2230, Cen X-3, given by \cite{Gangopadhyay13}. When $\alpha = 0$, the model reduces to the uncharged anisotropic distribution given by \cite{Ratanpal15}. In Section~\ref{sec:2}, we have solved the field equations and in Section~\ref{sec:3}, we have obtained the bounds for different parameters using physical acceptability and regularity conditions. In Section~\ref{sec:4}, we have displayed a variety of pulsars in agreement with the charged pseudo-spheroidal model developed. In particular, we have studied a model for various physical conditions throughout the distribution and discussed the main results at the end of this section. \section{Spacetime Metric} \label{sec:2} We shall take the interior spacetime metric representing a charged anisotropic matter distribution as \begin{equation}\label{IMetric1} ds^{2}=e^{\nu(r)}dt^{2}-\left(\frac{1+K\frac{r^{2}}{R^{2}}}{1+\frac{r^2}{R^{2}}} \right)dr^{2}-r^{2}\left(d\theta^{2}+\sin^{2}\theta d\phi^{2} \right), \end{equation} where $K$ and $R$ are geometric parameters and $K>1$. This spacetime, known as pseudo-spheroidal spacetime, has been studied by a number of researchers~\cite{Tikekar98,Tikekar99,Tikekar05,Thomas05,Thomas07,Paul11,Chattopadhyay10,Chattopadhyay12}, who have found that it can accommodate compact superdense stars.
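As a quick numerical illustration of the metric potential $g_{rr}$ in (\ref{IMetric1}), the sketch below (with illustrative values $K=3$, $R=1$, which are our choices, not values fixed by the paper) confirms the regularity $e^{\lambda}=1$ at the centre:

```python
import numpy as np

def e_lambda(r, K=3.0, R=1.0):
    # g_rr of the pseudo-spheroidal metric: (1 + K r^2/R^2) / (1 + r^2/R^2)
    x = (r / R) ** 2
    return (1.0 + K * x) / (1.0 + x)
```

For $K>1$ this potential increases monotonically from $1$ at the centre to $(1+K)/2$ at the boundary $r=R$.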
Since the metric potential $g_{rr}$ is chosen a priori, the other metric potential $\nu\left(r \right)$ is to be determined by solving the Einstein-Maxwell field equations \begin{equation}\label{FE} R_{i}^{j}-\frac{1}{2}R\delta_{i}^{j}=8\pi\left(T_{i}^{j}+\pi_{i}^{j}+E_{i}^{j} \right), \end{equation} where, \begin{equation}\label{Tij} T_{i}^{j}=\left(\rho+p \right)u_{i}u^{j}-p\delta_{i}^{j}, \end{equation} \begin{equation}\label{piij} \pi_{i}^{j}=\sqrt{3}S\left[c_{i}c^{j}-\frac{1}{2}\left(u_{i}u^{j}-\delta_{i}^{j} \right) \right], \end{equation} and \begin{equation}\label{Eij} E_{i}^{j}=\frac{1}{4\pi}\left(-F_{ik}F^{jk}+\frac{1}{4}F_{mn}F^{mn}\delta_{i}^{j} \right). \end{equation} Here $\rho$, $p$, $u_{i}$, $S$ and $c^{i}$, respectively, denote the proper density, fluid pressure, unit four-velocity, magnitude of the anisotropy tensor and a radial vector given by $\left(0, -e^{-\lambda/2}, 0, 0 \right)$. $F_{ij}$ denotes the anti-symmetric electromagnetic field strength tensor defined by \begin{equation}\label{Fij} F_{ij}=\frac{\partial A_{j}}{\partial x_{i}}-\frac{\partial A_{i}}{\partial x_{j}}, \end{equation} which satisfies the Maxwell equations \begin{equation}\label{ME1} F_{ij,k}+F_{jk,i}+F_{ki,j}=0, \end{equation} and \begin{equation}\label{ME2} \frac{\partial}{\partial x^{k}}\left(F^{ik}\sqrt{-g} \right)=4\pi\sqrt{-g}J^{i}, \end{equation} where $g$ denotes the determinant of $g_{ij}$, $A_{i}=\left(\phi(r), 0, 0, 0 \right)$ is the four-potential and \begin{equation}\label{Ji} J^{i}=\sigma u^{i}, \end{equation} is the four-current vector, where $\sigma$ denotes the charge density. The only non-vanishing component of $F_{ij}$ is $F_{01}=-F_{10}$. Here \begin{equation}\label{F01} F_{01}=-\frac{e^{\frac{\nu+\lambda}{2}}}{r^{2}}\int_{0}^{r} 4\pi r^{2}\sigma e^{\lambda/2}dr, \end{equation} and the total charge inside a radius $r$ is given by \begin{equation}\label{qr} q(r)=4\pi\int_{0}^{r} \sigma r^{2}e^{\lambda/2}dr.
\end{equation} The electric field intensity $E$ can be obtained from $E^{2}=-F_{01}F^{01}$, which subsequently reduces to \begin{equation}\label{E} E=\frac{q(r)}{r^{2}}. \end{equation} The field equations given by (\ref{FE}) are now equivalent to the following set of non-linear ODEs \begin{equation}\label{FE1} \frac{1-e^{-\lambda}}{r^{2}}+\frac{e^{-\lambda}\lambda'}{r}=8\pi\rho+E^{2}, \end{equation} \begin{equation}\label{FE2} \frac{e^{-\lambda}-1}{r^{2}}+\frac{e^{-\lambda}\nu'}{r}=8\pi p_{r}-E^{2}, \end{equation} \begin{equation}\label{FE3} e^{-\lambda}\left(\frac{\nu''}{2}+\frac{\nu'^{2}}{4}-\frac{\nu'\lambda'}{4}+\frac{\nu'-\lambda'}{2r} \right)=8\pi p_{\perp}+E^{2}, \end{equation} where we have taken \begin{equation}\label{pr1} p_{r}=p+\frac{2S}{\sqrt{3}}, \end{equation} \begin{equation}\label{pp1} p_{\perp}=p-\frac{S}{\sqrt{3}}. \end{equation} Because $e^{\lambda}=\frac{1+K\frac{r^{2}}{R^{2}}}{1+\frac{r^{2}}{R^{2}}}$, the metric potential $\lambda$ is a known function of $r$. The set of equations (\ref{FE1}) - (\ref{FE3}) are to be solved for five unknowns $\nu$, $\rho$, $p_{r}$, $p_{\perp}$ and $E$. So we have two free variables for which suitable assumptions can be made. We shall assume the following expressions for $p_{r}$ and $E$: \begin{equation}\label{pr2} 8\pi p_{r}=\frac{K-1}{R^{2}}\frac{1-\frac{r^{2}}{R^{2}}}{\left(1+K\frac{r^{2}}{R^{2}} \right)^{2} }, \end{equation} \begin{equation}\label{E2} E^{2}=\frac{\alpha\left(K-1 \right)}{R^{2}}\frac{\frac{r^{2}}{R^{2}}}{\left(1+K\frac{r^{2}}{R^{2}} \right)}. \end{equation} It can be noticed from equation (\ref{pr2}) that $p_{r}$ vanishes at $r=R$ and hence we take the geometric parameter $R$ as the radius of the distribution. Further, $p_{r}\geq 0$ for all values of $r$ in the range $0\leq r\leq R$. It can also be noted that $E^{2}$ is regular at $r=0$.
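The stated properties of these choices, namely $p_r(R)=0$, $p_r \geq 0$ on $0 \leq r \leq R$, and $E^2$ vanishing at the centre, are easy to verify numerically. A sketch with illustrative parameter values ($K=3$, $R=1$, $\alpha=0.2$, chosen by us within the admissible ranges):

```python
import numpy as np

def radial_pressure(r, K=3.0, R=1.0):
    # 8*pi*p_r from Eq. (pr2); vanishes at the boundary r = R
    x = (r / R) ** 2
    return (K - 1.0) / R ** 2 * (1.0 - x) / (1.0 + K * x) ** 2

def e_squared(r, K=3.0, R=1.0, alpha=0.2):
    # E^2 from Eq. (E2); regular (zero) at the centre r = 0
    x = (r / R) ** 2
    return alpha * (K - 1.0) / R ** 2 * x / (1.0 + K * x)
```

At the centre $8\pi p_r(0) = (K-1)/R^2$ and $E^2(0)=0$, while $p_r$ decreases monotonically to zero at $r=R$.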
On substituting the values of $p_{r}$ and $E^{2}$ in (\ref{FE2}) we obtain, after a lengthy calculation, \begin{equation}\label{enu} e^{\nu}=CR^{\frac{\left[K^{2}-(2+\alpha)K+\alpha+1 \right]}{K}}\left(1+K\frac{r^{2}}{R^{2}} \right)^{\left(\frac{K+\alpha+1}{2K} \right)}\left(1+\frac{r^{2}}{R^{2}} \right)^{\frac{K-\alpha-3}{2}}, \end{equation} where $C$ is a constant of integration. Hence, the spacetime metric takes the explicit form \begin{eqnarray}\label{IMetric2} ds^{2} & = & CR^{\frac{\left[K^{2}-(2+\alpha)K+\alpha+1 \right]}{K}}\left(1+K\frac{r^{2}}{R^{2}} \right)^{\left(\frac{K+\alpha+1}{2K} \right)}\left(1+\frac{r^{2}}{R^{2}} \right)^{\frac{K-\alpha-3}{2}}dt^{2}\\\nonumber & & -\left(\frac{1+K\frac{r^{2}}{R^{2}}}{1+\frac{r^{2}}{R^{2}}} \right)dr^{2}-r^{2}\left(d\theta^{2}+\sin^{2}\theta d\phi^{2}\right). \end{eqnarray} The constant of integration $C$ can be evaluated by matching the interior spacetime metric with the Reissner-Nordstr{\"o}m metric \begin{equation}\label{EMetric2} ds^{2}=\left(1-\frac{2m}{r}+\frac{q^{2}}{r^{2}} \right)dt^{2}-\left(1-\frac{2m}{r}+\frac{q^{2}}{r^{2}} \right)^{-1}dr^{2}-r^{2}\left(d\theta^{2}+\sin^{2}\theta d\phi^{2}\right), \end{equation} across the boundary $r=R$. This gives \begin{equation}\label{M} M=\frac{R}{2}\frac{\left[K^{2}+\alpha(K-1)-1 \right]}{\left(1+K \right)^{2}}, \end{equation} and \begin{equation}\label{C} C=R^{\frac{-\left[K^2-(2+\alpha)K+\alpha+1 \right]}{K}}\left(1+K \right)^{-\left(\frac{3K+\alpha+1}{2K} \right)}2^{\left(\frac{\alpha-K+5}{2} \right)}. \end{equation} Here $M$ denotes the total mass of the charged anisotropic distribution. \section{Physical Requirements and Bounds for Parameters} \label{sec:3} The gradient of the radial pressure is obtained from equation (\ref{pr2}) in the form \begin{equation}\label{dprdr} 8\pi\frac{dp_{r}}{dr}=-\frac{2r(K-1)}{R^{4}}\frac{1+2K-K\frac{r^{2}}{R^{2}}}{\left(1+K\frac{r^{2}}{R^{2}} \right)^{3}}<0.
\end{equation} It can be noticed from equation (\ref{dprdr}) that the radial pressure is a decreasing function of $r$. Now, equation (\ref{FE1}) gives the density of the distribution as \begin{equation}\label{rho3} 8\pi\rho=\left(\frac{K-1}{R^{2}}\right)\frac{3+(K-\alpha)\frac{r^{2}}{R^{2}}}{\left(1+K\frac{r^{2}}{R^{2}} \right)^{2}}. \end{equation} The condition $\rho(r=0)>0$ is clearly satisfied and $\rho(r=R)>0$ gives the following inequality connecting $\alpha$ and $K$: \begin{equation}\label{In1} 0 \leq \alpha<3+K. \end{equation} Differentiating (\ref{rho3}) with respect to $r$, we get \begin{equation}\label{drhodr} 8\pi\frac{d\rho}{dr}=-\frac{2r(K-1)}{R^{4}}\frac{5K+\alpha+K(K-\alpha)\frac{r^{2}}{R^{2}}}{\left(1+K\frac{r^{2}}{R^{2}} \right)^{3}}. \end{equation} It is observed that $\frac{d\rho}{dr}(r=0)=0$, and $\frac{d\rho}{dr}(r=R)<0$ leads to the inequality \begin{equation}\label{In2} K^{2}-K(\alpha-5)+\alpha \geq 0. \end{equation} The inequality (\ref{In2}) together with the condition $K>1$ gives a bound for $\alpha$ as \begin{equation}\label{In3b} 0\leq\alpha<\frac{K(K+5)}{K-1}. \end{equation} The expression for $p_{\perp}$ is \begin{equation}\label{pp3} 8\pi p_{\perp}=\frac{4K-4+X_{1}\frac{r^{2}}{R^{2}}+X_{2}\frac{r^{4}}{R^{4}}+X_{3}\frac{r^{6}}{R^{6}}}{R^2\left(4+Y_{1}\frac{r^{2}}{R^{2}}+Y_{2}\frac{r^{4}}{R^{4}}+Y_{3}\frac{r^{6}}{R^{6}}+4K^{3}\frac{r^{8}}{R^{8}} \right)}, \end{equation} where $X_{1}=4K^{2}+(-12\alpha-16)K+12\alpha+12$, $X_{2}=6K^{3}+(-10\alpha-22)K^2+(4\alpha+14)K+6\alpha+2$, $X_{3}=K^4+(-2\alpha-4)K^3+(\alpha^{2}+2\alpha+6)K^2+(-2\alpha^{2}-2\alpha-4)K+\alpha^{2}+2\alpha+1$, $Y_{1}=12K+4$, $Y_{2}=12K^{2}+12K$ and $Y_{3}=4K^{3}+12K^2$. The condition $p_{\perp}>0$ at the boundary $ r = R $ imposes restrictions on $ K $ and $\alpha$ respectively given by \begin{equation}\label{In3a} K > 2 \sqrt{3}-1 \end{equation} and \begin{equation}\label{In3} 0\leq\alpha<\frac{10+5K+K^{2}}{K-1}-\sqrt{\frac{89+102K+57K^2+8K^3}{\left(K-1 \right)^{2}}}.
\end{equation} The expression for $\frac{dp_{\perp}}{dr}$ is given by \begin{equation}\label{dppdr} \frac{dp_{\perp}}{dr}=\frac{-r\left(8K^2+(12\alpha+8)K-12\alpha-16+A_{1}\frac{r^{2}}{R^{2}}+A_{2}\frac{r^{4}}{R^{4}}+A_{3}\frac{r^{6}}{R^{6}}+A_{4}\frac{r^{8}}{R^{8}} \right)}{R^{4}\left(2+B_{1}\frac{r^{2}}{R^{2}}+B_{2}\frac{r^{4}}{R^{4}}+B_{3}\frac{r^{6}}{R^{6}}+B_{4}\frac{r^{8}}{R^{8}}+B_{5}\frac{r^{10}}{R^{10}}+2K^4\frac{r^{12}}{R^{12}} \right)}, \end{equation} where $A_{1}=-4K^3+(28-4\alpha)K^2+(16\alpha-20)K-12\alpha-4$, $A_{2}=3K^4+(-4\alpha-4)K^3+(-3\alpha^{2}-28\alpha-30)K^2+(6\alpha^{2}+44\alpha+36)K-3\alpha^{2}-12\alpha-5$, $A_{3}=10K^4+(-16\alpha-36)K^{3}+(-2\alpha^{2}+4\alpha+16)K^2+(4\alpha^{2}+16\alpha+12)K-2\alpha^{2}-4\alpha-2$, $A_{4}=K^{5}+(-2\alpha-4)K^{4}+(\alpha^{2}+2\alpha+6)K^{3}+(-2\alpha^{2}-2\alpha-4)K^{2}+\alpha^{2}+2\alpha+1$, $B_{1}=8K+4$, $B_{2}=12K^{2}+16K+2$, $B_{3}=8K^{3}+24K^{2}+8K$, $B_{4}=2K^{4}+16K^{3}+12K^{2}$ and $B_{5}=4K^{4}+8K^{3}$. The value of $\frac{dp_{\perp}}{dr}$ is zero at the origin, and $\frac{dp_{\perp}}{dr}(r=R)<0$ gives the following bounds for $ K $ and $\alpha$ respectively: \begin{equation}\label{In4a} 2 \sqrt{13}-5 < K < 5 \end{equation} and \begin{equation}\label{In4} 0 \leq \alpha < \frac{K^3+10 K^2+25 K-20}{K^2-6 K+5}+\sqrt{\frac{16 K^5+233 K^4+252 K^3+278 K^2-788 K+265}{\left(K^2-6 K+5\right)^2}}. \end{equation} In order to examine the strong energy condition, we evaluate the expression $\rho-p_{r}-2p_{\perp}$ at the centre and on the boundary of the star. It is found that \begin{equation}\label{In5} \left(\rho-p_{r}-2p_{\perp} \right)(r=0)=0, \end{equation} and $\left(\rho-p_{r}-2p_{\perp} \right)(r=R)>0$ gives the bounds on $ K $ and $\alpha$, namely \begin{equation}\label{In6a} 1< K < 1+2 \sqrt{6} \end{equation} and \begin{equation}\label{In6} 0\leq\alpha<\frac{8+3K+K^2}{K-1}+\sqrt{\frac{41+46K+49K^2+8K^3}{\left(K-1 \right)^{2}}}.
\end{equation} The expressions for the adiabatic sound speeds $\frac{dp_{r}}{d\rho}$ and $\frac{dp_{\perp}}{d\rho}$ in the radial and transverse directions, respectively, are given by \begin{equation}\label{dprdrho} \frac{dp_{r}}{d\rho}=\frac{1+2K-K\frac{r^{2}}{R^{2}}}{5K+\alpha+K(K-\alpha)\frac{r^{2}}{R^{2}}} , \end{equation} and \begin{equation}\label{dppdrho} \frac{dp_{\perp}}{d\rho}=\frac{\left(1+K\frac{r^{2}}{R^{2}} \right)^{3}\left[8K^{2}+(12\alpha+8)K-12\alpha-16+C_{1}\frac{r^{2}}{R^{2}}+C_{2}\frac{r^{4}}{R^{4}}+C_{3}\frac{r^{6}}{R^{6}}+C_{4}\frac{r^{8}}{R^{8}} \right]}{2(K-1)\left[5K+\alpha+K(K-\alpha)\frac{r^{2}}{R^{2}} \right]\left[2+D_{1}\frac{r^{2}}{R^{2}}+D_{2}\frac{r^{4}}{R^{4}}+D_{3}\frac{r^{6}}{R^{6}}+D_{4}\frac{r^{8}}{R^{8}}+D_{5}\frac{r^{10}}{R^{10}}+2K^4\frac{r^{12}}{R^{12}} \right]}, \end{equation} where $C_{1}=-4K^{3}+(18-4\alpha)K^2+(16\alpha-20)K-12\alpha-4$, $C_{2}=3K^{4}+(-4\alpha-4)K^{3}+\left(-3\alpha^{2}-28\alpha-30 \right)K^{2}+\left(6\alpha^{2}+44\alpha+36 \right)K-3\alpha^{2}-12\alpha-5$, $C_{3}=10K^{4}+(-16\alpha-36)K^{3}+\left(-2\alpha^{2}+4\alpha+16 \right)K^2+\left(4\alpha^{2}+16\alpha+12 \right)K-2\alpha^{2}-4\alpha-2$, $C_{4}=K^{5}+(-2\alpha-4)K^{4}+\left(\alpha^{2}+2\alpha+6 \right)K^{3}+\left(-2\alpha^{2}-2\alpha-4 \right)K^{2}+\left(\alpha^{2}+2\alpha+1 \right)K$, $D_{1}=8K+4$, $D_{2}=12K^{2}+16K+2$, $D_{3}=8K^{3}+24K^{2}+8K$, $D_{4}=2K^{4}+16K^{3}+12K^{2}$ and $D_{5}=4K^{4}+8K^{3}$. The condition $ 0 \leq \frac{dp_{r}}{d\rho}\leq 1$ is evidently satisfied at the centre, whereas at the boundary it gives a restriction on $ \alpha $ as \begin{equation}\label{Inpre7} 0\leq\alpha < \frac{K^2+4 K-1}{K-1} , ~K > 1. \end{equation} Further, $\frac{dp_{\perp}}{d\rho}\leq 1$ at the centre will lead to the following inequalities \begin{equation}\label{Inprepre7} K > \frac{4}{3} \end{equation} and \begin{equation}\label{Inpreprepre7} 0\leq \alpha <\frac{1}{2} (3 K-4).
\end{equation} Moreover, at the boundary $(r=R)$, we have the following restrictions on $ K $ and $ \alpha $: \begin{equation}\label{Inpreprepreprepre7} -5 + 2 \sqrt{13} \leq K < 5 \end{equation} and \begin{equation}\label{Inprepreprepreprepre7} 0\leq \alpha \leq \frac{K^3+10 K^2+25 K-20}{K^2-6 K+5}+\sqrt{\frac{16 K^5+233 K^4+252 K^3+278 K^2-788 K+265}{\left(K^2-6 K+5\right)^2}}. \end{equation} The necessary condition for the model to represent a stable relativistic star is that $\Gamma>\frac{4}{3}$ throughout the star. The condition $\Gamma>\frac{4}{3}$ at $r=0$ gives a bound on $\alpha$ identical to (\ref{In1}). Further, $\Gamma\to\infty$ as $r\to R$ and hence the condition is automatically satisfied at the boundary. It can be noticed that $E=0$ at $r=0$, showing the regularity of the charged distribution. \\ The upper limits of $ \alpha $ in the inequalities (\ref{In1}), (\ref{In3b}), (\ref{In3}), (\ref{In4}), (\ref{In6}), (\ref{Inpre7}) and (\ref{Inpreprepre7}) for different permissible values of $ K $ are shown in Table~\ref{tab:1}. It can be noticed that for $ 2.4641 < K \leq 3.7641 $ the bound for $ \alpha $ is $ 0 \leq \alpha \leq 0.6045. 
$ \pagebreak \begin{table}[hbtp] \caption{The upper limits of $ \alpha $ for different permissible values of $ K $.} \label{tab:1} \begin{tabular}{cccccccc} \toprule \multicolumn{1}{c}{}&\multicolumn{7}{c}{Inequality Numbers} \\ \cline{2-8} $ K $ & (\ref{In1}) & (\ref{In3b}) & (\ref{In3}) & (\ref{Inpre7}) & (\ref{Inpreprepre7}) & (\ref{In4}) & (\ref{In6}) \\ \hline 2.4641 & 5.4641 & 12.5622 & \textbf{0.0000} & 10.1962 & 1.6962 & 0.0802 & 30.9893 \\ 2.5041 & 5.5041 & 12.4932 & \textbf{0.0170} & 10.1635 & 1.7562 & 0.0938 & 30.6186 \\ 2.6041 & 5.6041 & 12.3445 & \textbf{0.0599} & 10.0977 & 1.9062 & 0.1287 & 29.7861 \\ 2.7041 & 5.7041 & 12.2250 & \textbf{0.1036} & 10.0514 & 2.0562 & 0.1648 & 29.0693 \\ 2.8041 & 5.8041 & 12.1299 & \textbf{0.1480} & 10.0213 & 2.2062 & 0.2021 & 28.4488 \\ 2.9041 & 5.9041 & 12.0552 & \textbf{0.1931} & 10.0048 & 2.3562 & 0.2405 & 27.9094 \\ 3.0041 & 6.0041 & 11.9980 & \textbf{0.2388} & 10.0000 & 2.5062 & 0.2798 & 27.4388 \\ 3.1041 & 6.1041 & 11.9557 & \textbf{0.2852} & 10.0052 & 2.6562 & 0.3201 & 27.0271 \\ 3.2041 & 6.2041 & 11.9263 & \textbf{0.3321} & 10.0189 & 2.8062 & 0.3612 & 26.6662 \\ 3.3041 & 6.3041 & 11.9082 & \textbf{0.3795} & 10.0401 & 2.9562 & 0.4030 & 26.3495 \\ 3.4041 & 6.4041 & 11.8998 & \textbf{0.4275} & 10.0679 & 3.1062 & 0.4457 & 26.0714 \\ 3.5041 & 6.5041 & 11.9002 & \textbf{0.4760} & 10.1015 & 3.2562 & 0.4890 & 25.8272 \\ 3.6041 & 6.6041 & 11.9082 & \textbf{0.5251} & 10.1401 & 3.4062 & 0.5330 & 25.6130 \\ 3.7041 & 6.7041 & 11.9230 & \textbf{0.5745} & 10.1833 & 3.5562 & 0.5776 & 25.4254 \\ 3.7541 & 6.7541 & 11.9327 & \textbf{0.5995} & 10.2065 & 3.6312 & 0.6001 & 25.3407 \\ 3.7641 & 6.7641 & 11.9348 & \textbf{0.6045} & 10.2112 & 3.6462 & 0.6047 & 25.3244 \\ \hline \end{tabular} \end{table} \section{Application to Compact Stars and Discussion} \label{sec:4} In order to compare the charged anisotropic model on pseudo-spheroidal spacetime with observational data, we have considered the pulsar PSR J1614-2230 whose estimated 
mass and radius are $1.97M_{\odot}$ and $9.69\; km$. On substituting these values in equation (\ref{M}) we have obtained the values of adjustable parameters $K$ and $\alpha$ as $K=3.58524$ and $\alpha=0.292156$ respectively which are well inside their permitted limits. Similarly assuming the estimated masses and radii of some well known pulsars like 4U 1820-30, PSR J1903+327, 4U 1608-52, Vela X-1, PSR J1614-2230, Cen X-3, we have displayed the values of the parameters $K$ and $\alpha$, the central density $\rho_{c}$, surface density $\rho_{R}$, the compactification factor $u=\frac{M}{R}$, $\frac{dp_{r}}{d\rho}(r=0)$ and charge $ Q $ inside the star in Table~\ref{tab:3}. From the table it is clear that our model is in good agreement with the most recent observational data of pulsars given by \cite{Gangopadhyay13}. \\ \begin{table}[h] \caption{Estimated physical values based on the observational data} \label{tab:3} \begin{tabular}{lllllllll} \hline\noalign{\smallskip} \textbf{STAR} & $\mathbf{K} $ & {$ \mathbf{M} $} & {$ \mathbf{R} $} & {$ \mathbf{\rho_c} $} & {$ \mathbf{\rho_R} $} & {$ \mathbf{u (=\frac{M}{R})} $} & $ \mathbf{\left(\frac{dp_r}{d \rho}\right)_{r=0}} $ & $ \mathbf{Q} $ \\ & & $ \mathbf{(M_\odot)} $ & $ \mathbf{(Km)} $ & \textbf{(MeV fm{$\mathbf{^{-3}}$})} & \textbf{(MeV fm{$\mathbf{^{-3}}$})} & & & $ \mathbf{Coulomb} $ \\ \noalign{\smallskip}\hline\noalign{\smallskip} \textbf{4U 1820-30} & 2.815 & 1.58 & 9.1 & 1980.14 & 250.46 & 0.256 & 0.461 & $ 4.031\times10^{20} $\\ \textbf{PSR J1903+327} & 2.880 & 1.667 & 9.438 & 1906.90 & 235.92 & 0.261 & 0.460 & $ 4.184\times10^{20} $\\ \textbf{4U 1608-52} & 3.122 & 1.74 & 9.31 & 2212.22 & 252.97 & 0.276 & 0.455 & $ 4.127\times10^{20} $\\ \textbf{Vela X-1} & 3.078 & 1.77 & 9.56 & 2054.99 & 238.25 & 0.273 & 0.456 & $ 4.240\times10^{20} $\\ \textbf{PSR J1614-2230} & \textbf{3.585} & \textbf{1.97} & \textbf{9.69} & \textbf{2487.35} & \textbf{248.17} & \textbf{0.300} & \textbf{0.448} & $ \mathbf{4.262\times10^{20}} 
$ \\ \textbf{Cen X-3} & 2.589 & 1.49 & 9.178 & 1705.08 & 233.65 & 0.239 & 0.466 & $ 4.044\times10^{20} $ \\ \noalign{\smallskip}\hline \end{tabular} \end{table} In order to examine the nature of the physical quantities throughout the distribution, we have considered a particular star, PSR J1614-2230, whose tabulated mass and radius are $M=1.97M_{\odot},\;R=9.69\;km$. Choosing $K=3.58524$ and $\alpha=0.292156$, we have shown the variations of the density and pressures in both the charged and uncharged cases in Figure~\ref{fig:1}, Figure~\ref{fig:2} and Figure~\ref{fig:3}. It can be noticed that the density decreases radially outward, and that the density in the uncharged case is always greater than the density in the charged case. Similarly, the radial pressure $p_{r}$ and transverse pressure $p_{\perp}$ decrease radially outward. As with the density, $p_{r}$ and $p_{\perp}$ take higher values in the uncharged case than in the charged case. The anisotropy, shown in Figure~\ref{fig:4}, initially decreases through negative values, reaches a minimum and then increases. Here too, the anisotropy takes smaller values in the charged case than in the uncharged case. The squares of the sound speed in the radial and transverse directions (i.e.\ $\frac{dp_{r}}{d\rho}$ and $\frac{dp_{\perp}}{d\rho}$) are shown in Figure~\ref{fig:5} and Figure~\ref{fig:6} respectively, and are found to be less than 1. The graph of $\rho-p_{r}-2p_{\perp}$ against the radius is plotted in Figure~\ref{fig:7}. It can be observed that it is non-negative for $ 0 \leq r \leq R $ and hence the strong energy condition is satisfied throughout the star. \\ A necessary condition for the exact solution to represent a stable relativistic star is that the relativistic adiabatic index given by $ \Gamma = \frac{\rho + p_r}{p_r} \frac{d p_r}{d \rho} $ should be greater than $ \frac{4}{3}. 
$ The variation of the adiabatic index throughout the star is shown in Figure~\ref{fig:8} and it is found that $ \Gamma > \frac{4}{3} $ throughout the distribution in both the charged and uncharged cases. Though we have not assumed any equation of state in the explicit form $ p_r = p_r (\rho) $ and $ p_\perp = p_\perp (\rho) $, we have shown the relation of $ p_r $ and $ p_\perp $ to $ \rho $ in graphical form in Figure~\ref{fig:9} and Figure~\ref{fig:10}. For a physically acceptable relativistic star the gravitational redshift must be positive and finite at the centre and on the boundary. Further, it should be a decreasing function of $ r $. Figure~\ref{fig:11} shows that this is indeed the case. Finally, we have plotted the graph of $ E^2 $ against $ r $, which is displayed in Figure~\ref{fig:12}. Initially $ E^2 $ increases from $ 0 $, reaches a maximum value and then decreases radially outward. The model reduces to the uncharged anisotropic distribution given by \cite{Ratanpal15} when $ \alpha = 0. $ \section*{Acknowledgement} The authors would like to thank IUCAA, Pune for the facilities and hospitality provided to them for carrying out this work. \pagebreak \begin{figure} \includegraphics[scale = 1.25]{1.eps} \caption{Variation of density against radial variable $r$. \label{fig:1}} \end{figure} \begin{figure} \includegraphics[scale = 1.25]{2.eps} \caption{Variation of radial pressures against radial variable $r$. \label{fig:2}} \end{figure} \begin{figure} \includegraphics[scale = 1.25]{3.eps} \caption{Variation of transverse pressures against radial variable $r$. \label{fig:3}} \end{figure} \begin{figure} \includegraphics[scale = 1.25]{4.eps} \caption{Variation of anisotropies against radial variable $r$. \label{fig:4}} \end{figure} \begin{figure} \includegraphics[scale = 1.25]{5.eps} \caption{Variation of $ \frac{1}{c^2}\frac{dp_r}{d\rho} $ against radial variable $r$. 
\label{fig:5}} \end{figure} \begin{figure} \includegraphics[scale = 1.25]{6.eps} \caption{Variation of $ \frac{1}{c^2}\frac{dp_\perp}{d\rho} $ against radial variable $r$. \label{fig:6}} \end{figure} \begin{figure} \includegraphics[scale = 1.25]{7.eps} \caption{Variation of strong energy condition against radial variable $r$. \label{fig:7}} \end{figure} \begin{figure} \includegraphics[scale = 0.9]{8.eps} \caption{Variation of $ \Gamma $ against radial variable $r$. \label{fig:8}} \end{figure} \begin{figure} \includegraphics[scale = 1.25]{9.eps} \caption{Variation of pressures against density for charged case. \label{fig:9}} \end{figure} \begin{figure} \includegraphics[scale = 1.25]{10.eps} \caption{Variation of pressures against density for uncharged case. \label{fig:10}} \end{figure} \begin{figure} \includegraphics[scale = 1.25]{11.eps} \caption{Variation of gravitational redshift against radial variable $r$. \label{fig:11}} \end{figure} \begin{figure} \includegraphics[scale = 1.25]{12.eps} \caption{Variation of $ E^2 $ against radial variable $r$. \label{fig:12}} \end{figure}
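The numerical entries of Tables~\ref{tab:1} and~\ref{tab:3} can be cross-checked directly from the closed-form bounds derived above. The following sketch (ours, for verification only) evaluates the upper limits of $\alpha$ in (\ref{Inpre7}), (\ref{Inpreprepre7}), (\ref{In4}) and (\ref{In6}), together with the compactification factor $u=GM/(Rc^{2})$; the conversion constant $GM_{\odot}/c^{2}\approx 1.4766$~km is an assumption of the sketch.

```python
from math import sqrt

# Upper limits of alpha, transcribed from inequalities (In4), (In6),
# (Inpre7) and (Inpreprepre7); K ranges over its permissible interval.
def alpha_in4(K):
    d = K**2 - 6*K + 5
    return ((K**3 + 10*K**2 + 25*K - 20) / d
            + sqrt(16*K**5 + 233*K**4 + 252*K**3
                   + 278*K**2 - 788*K + 265) / abs(d))

def alpha_in6(K):
    return ((8 + 3*K + K**2) / (K - 1)
            + sqrt(41 + 46*K + 49*K**2 + 8*K**3) / (K - 1))

def alpha_inpre7(K):
    return (K**2 + 4*K - 1) / (K - 1)

def alpha_inpreprepre7(K):
    return (3*K - 4) / 2

# Compactification factor u = GM/(R c^2); GM_sun/c^2 ~ 1.4766 km (assumed).
def compactness(M_solar, R_km, gm_sun_km=1.4766):
    return M_solar * gm_sun_km / R_km

# Spot checks against the K = 2.4641 row of Table 1
assert abs(alpha_in4(2.4641) - 0.0802) < 1e-3
assert abs(alpha_in6(2.4641) - 30.9893) < 1e-3
assert abs(alpha_inpre7(2.4641) - 10.1962) < 1e-3
# and the PSR J1614-2230 row of Table 3
assert abs(compactness(1.97, 9.69) - 0.300) < 1e-3
```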
\section{Introduction} \label{sec:intro} \emph{Knowledge-based programs} \cite{FHMV1997} are an abstract specification format for concurrent systems, in which the actions of an agent are conditional on formulas of the logic of knowledge \cite{FHMVbook}. This format allows the agent to be described in terms of what it must know in order to perform its actions, independently of how that knowledge is attained or concretely represented by the agent. This leads to implementations that are optimal in their use of the knowledge implicitly available in an agent's local state. The approach has been applied to problems including reliable message transmission \cite{HZ92}, atomic commitment \cite{Had87}, fault-tolerant agreement \cite{DM90}, robot motion planning \cite{BrafmanLMS1997} and cache coherency \cite{BaukusMeyden}. The process of going from an abstract knowledge-based program to a concrete implementation is non-trivial, since it requires reasoning about all the ways that knowledge can be obtained, which can be quite subtle. Adding to the complexity, there is a circularity in that knowledge determines actions, which in turn affect the knowledge that an agent has. It is therefore highly desirable to be able to automate the process of implementation. Unfortunately, this is known to be an inherently complex problem: even deciding whether an implementation exists is intractable \cite{FHMV1997}. Sound local proposition epistemic specifications \cite{EMM98} are a generalization of knowledge-based programs proposed in part due to these complexity problems. These specifications require only \emph{sufficient} conditions for knowledge, where knowledge-based programs require \emph{necessary and sufficient conditions}. By allowing a larger space of potential implementations, this variant ensures that there always exists an implementation. However, some of these implementations are so trivial as to be uninteresting. 
In practice, one wants implementations in which agents make good use of their knowledge, so that the conditions under which they act closely approximate the necessary and sufficient conditions for knowledge. To date, a systematic approach to the identification of \emph{good} implementations, and of automating the construction of such good implementations, has not been identified. This is the problem we address in the present paper. Ultimately, we seek an automated approach that is implementable in a way that scales to handling realistic examples. In this paper, we use a CTL basis for specifications, and use PTIME complexity of an associated model checking problem in an explicit state representation as a proxy for practical implementability. The contributions of the paper are two-fold: first, we present a general approach to the identification of good implementations, that extends the notion of sound local proposition epistemic specification by ordering the knowledge conditions to be synthesized, and then defining a way to construct implementations using a sequence of approximations to the final synthesized system, in which implementation choices for earlier knowledge conditions are fed back to improve the quality of approximation used to compute later implementation choices. This gives an intuitive approach to the construction of implementations, which we show by example to address some unintuitive aspects of the original knowledge-based program semantics. The approach is parametric in a choice of approximation scheme. Second, we consider a range of possibilities for the approximation scheme to be used in the above ordered semantics, and evaluate the complexity of the synthesis computations associated with each approximation. The analysis leads to the identification of two orthogonal approximations that are optimal in their closeness to a knowledge-based program semantics, while remaining PTIME computable. 
This identifies the best prospects for future work on the synthesis of implementations. The paper is structured as follows. Section~\ref{sec:mckt} recalls basic definitions of temporal epistemic logic. Section~\ref{sec:eps} defines epistemic protocol specifications. In Section~\ref{sec:ordsem} we define the ordered semantics approximation approach for the identification of good implementations. Section~\ref{sec:approx} defines a range of possible approximation schemes, which are then analyzed for complexity in Section~\ref{sec:complex}. We discuss related work in Section~\ref{sec:related} and conclude with a discussion of future work in Section~\ref{sec:concl}. \section{A Semantic Model for Knowledge and Time} \label{sec:mckt} In this section we lay out a general logical framework for agent knowledge, and describe how knowledge arises for agents that execute a concrete protocol in the context of some environment. Let $\mathit{Prop}$ be a finite set of atomic propositions and $Ags$ be a finite set of agents. The language CTL$^*$K$(\mathit{Prop},Ags)$ has the syntax: $$\phi::= ~p~|~\neg \phi~|~\phi_1\lor \phi_2~|~X\phi~|~(\phi_1\!U\!\phi_2)~|~A\phi~|~K_i\phi$$ where $p\in \mathit{Prop}$ and $i\in Ags$. This is CTL$^*$ plus the construct $K_i \phi$, which says that agent $i$ knows that $\phi$ holds. We freely use standard operators that are definable in terms of the above, specifically $F \phi = \mathbf{true} \,\!U\! \phi$, $G \phi = \neg F \neg \phi$, $\phi_1 R \phi_2 = \neg ((\neg \phi_1)\, \!U\! \,(\neg \phi_2))$, $E\phi = \neg A \neg \phi$. Our focus in this paper is on the fragment $\mbox{CTLK}$, in which the branching operators may occur only as $A\phi$ and $E\phi$, where $\phi$ is a formula in which the outermost operator is one of the temporal operators $X, \!U\!, R, F$ or $G$. 
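To make the CTLK restriction concrete, it can be phrased as a small recursive check over formula syntax trees. The following Python sketch is illustrative only (the tuple-based representation is our own, not part of the paper): a formula is in CTLK precisely when every branching operator is applied directly to a formula rooted in a temporal operator, and temporal operators occur nowhere else.

```python
TEMPORAL = {'X', 'U', 'R', 'F', 'G'}

def is_ctlk(f):
    """Membership check for the CTLK fragment.  Formulas are nested
    tuples, e.g. ('A', ('U', ('prop', 'p'), ('prop', 'q'))) for
    A(p U q), or ('K', 'i', ('prop', 'p')) for K_i p."""
    tag = f[0]
    if tag == 'prop':
        return True
    if tag == 'not':
        return is_ctlk(f[1])
    if tag in ('and', 'or'):
        return is_ctlk(f[1]) and is_ctlk(f[2])
    if tag == 'K':                      # ('K', agent, formula)
        return is_ctlk(f[2])
    if tag in ('A', 'E'):               # body must be temporal-rooted
        body = f[1]
        return body[0] in TEMPORAL and all(is_ctlk(g) for g in body[1:])
    return False  # a bare temporal operator outside A/E is not allowed
```

For instance, `is_ctlk(('A', ('G', ('prop', 'p'))))` accepts $AG\,p$, while a naked `('X', ('prop', 'p'))` is rejected.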
A further subfragment of this language is $\mbox{CTLK}^+$, specified by the grammar $$\phi::= ~p~|~\neg p~|~\phi_1\lor \phi_2~|~\phi_1 \land \phi_2~|~AX\phi~|~AF\phi~|~ AG\phi~|~ A(\phi_1\!U\!\phi_2)~|~A(\phi_1 R\phi_2)~|~K_i\phi$$ where $p\in \mathit{Prop}$ and $i\in Ags$. Intuitively, this is the sublanguage in which all occurrences of the operators $A$ and $K_i$ are in positive position. To give semantics to all these languages it suffices to give semantics to CTL$^*$K$(\mathit{Prop},Ags)$. We do this using a variant of interpreted systems \cite{FHMVbook}. Let $S$ be a set, which we call the set of global states. A {\em run} over $S$ is a function $r:\mathbf{N} \rightarrow S$. A {\em point} is a pair $(r,m)$ where $r$ is a run and $m\in \mathbf{N}$. Given a set $\mathcal{R}$ of runs, we define $\mathit{Points}(\mathcal{R})$ to be the set of all points of runs $r\in \mathcal{R}$. An {\em interpreted system} for $n$ agents is a tuple $\mathcal{I} = (\mathcal{R}, \sim, \pi)$, where $\mathcal{R}$ is a set of runs over $S$, the component $\sim$ is a collection $\{\sim_i\}_{i \in Ags}$, where for each $i \in Ags$, $\sim_i$ is an equivalence relation on $\mathit{Points}(\mathcal{R})$ (called agent $i$'s {\em indistinguishability relation}) and $\pi: S\rightarrow \powerset{\mathit{Prop}}$ is an interpretation function. We say that a run $r'$ is {\em equivalent to a run $r$ up to time $m\in \mathbf{N}$} if $r'(k) = r(k)$ for $0\leq k\leq m$. We can define a general semantics of CTL$^*$K$(\mathit{Prop},Ags)$ by means of a relation $\mathcal{I},(r,m)\models \phi$, where $\mathcal{I}$ is an interpreted system, $(r,m)$ is a point of $\mathcal{I}$ and $\phi$ is a formula. 
This relation is defined inductively as follows: \begin{itemize} \item $\mathcal{I},(r,m) \models p$ if $p\in \pi(r(m))$, for $p \in \mathit{Prop}$; \item $\mathcal{I},(r,m)\models \neg \phi$ if not $\mathcal{I},(r,m)\models \phi$; \item $\mathcal{I},(r,m)\models \phi_1\lor \phi_2$ if $\mathcal{I},(r,m)\models \phi_1$ or $\mathcal{I},(r,m)\models \phi_2$; \item $\mathcal{I},(r,m)\models A \phi$ if $\mathcal{I},(r',m)\models \phi$ for all runs $r'\in \mathcal{R} $ equivalent to $r$ up to time $m$; \item $\mathcal{I},(r,m)\models X \phi$ if $\mathcal{I},(r,m+1)\models \phi$; \item $\mathcal{I},(r,m)\models \phi_1U\phi_2$ if there exists $m'\geq m$ such that $\mathcal{I},(r,m') \models \phi_2$, and $\mathcal{I},(r,k)\models \phi_1$ for $m \leq k < m'$; \item $\mathcal{I},(r,m)\models K_{i} \phi$ if $\mathcal{I},(r',m' ) \models \phi$ for all points $(r',m') \sim_i (r,m)$ of $\mathcal{I}$. \end{itemize} For the knowledge operators, this semantics is essentially the same as the usual interpreted systems semantics. For the temporal operators, it corresponds to a semantics for branching time known as the {\em bundle semantics} \cite{Burgess,MeydenWong}. We write $\mathcal{I} \models \phi$ when $\mathcal{I},(r,0)\models \phi$ for all runs $r$ of $\mathcal{I}$. We are interested in systems in which each of the agents runs a protocol in which it chooses its actions based on local information, in the context of a larger environment. 
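The clause for $K_i$ has a direct explicit-state reading: $K_i\phi$ holds at a point iff $\phi$ holds at every point carrying the same observation for agent $i$. A minimal Python sketch (ours, purely illustrative) over a finite set of points:

```python
def knows(points, obs, agent, fact):
    """Evaluate K_agent(fact) at every point: the agent knows `fact`
    at pt iff `fact` holds at every point it cannot distinguish from
    pt, i.e. every point with the same observation.  `obs[agent]` maps
    points to observations; `fact` maps points to booleans."""
    return {pt: all(fact[q] for q in points
                    if obs[agent][q] == obs[agent][pt])
            for pt in points}
```

For example, when two points share an observation, the agent knows a fact at either of them only if the fact holds at both.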
An {\em environment} for agents $Ags$ is a tuple $\Env = \langle S, I, \{\Acts_i\}_{i \in Ags}, \longrightarrow, \{O_i\}_{i\in Ags}, \pi\rangle$, where \be \item $S$ is a finite set of states, \item $I$ is a subset of $S$, representing the initial states, \item for each agent $i$, component $\Acts_i$ is a finite set of actions that may be performed by agent $i$; we define $\Acts = \Pi_{i\in Ags} \Acts_i$ to be the corresponding set of \emph{joint actions}, \item $\longrightarrow \, \subseteq S \times \Acts \times S$ is a transition relation, labelled by joint actions, \item for each $i\in Ags$, component $O_i$ is a mapping from $S$ to some set $O$ of observations, \item $\pi: S\rightarrow \powerset{\mathit{Prop}}$ is an interpretation of some set of atomic propositions $\mathit{Prop}$. \ee Intuitively, a joint action $\act{a}$ represents a choice of action $\act{a}_i$ for each agent, performed simultaneously, and the transition relation resolves this into an effect on the state. We assume that $\longrightarrow$ is serial in the sense that for all $s\in S$ and $\act{a} \in \Acts$ there exists $t\in S$ such that $s\ptrans{\act{a}} t$. We assume that $\Acts_i$ always contains at least an action $\act{skip}$, and that for the joint action $\act{a}$ with $\act{a}_i = \act{skip}$ for all agents $i$, we have $s\ptrans{\act{a}} t$ iff $s=t$. The set $O$ of observations is an arbitrary set: for each agent $i$, we will be interested in the equivalence relation $s\sim_i t$ if $O_i(s) = O_i(t)$ induced by the observation function $O_i$ rather than the actual values of $O_i$. A proposition $p$ is \emph{local to agent $i$} in the environment $\Env$ if it depends only on the agent's observation, in the sense that for all states $s,t$ with $O_i(s) = O_i(t)$, we have $p \in \pi(s)$ iff $p \in \pi(t)$. We write $\mathit{Prop}_i$ for the set of propositions local to agent $i$. 
Intuitively, these are the propositions whose values the agent can always determine, based just on its observation. We similarly say that a boolean formula is local to agent $i$ if it contains only propositions that are local to agent $i$. We assume that the set of local propositions is \emph{complete} with respect to the observations, in that for each observation $o$ there exists a local formula $\phi$ such that for all states $s$, we have $O_i(s) = o$ iff $\pi(s) \models \phi$. (This can be ensured by including a proposition $p_o$ that is true at just states $s$ with $O_i(s) = o$, or by including a proposition $v =c$ for each possible value $c$ of each variable $v$ making up agent $i$'s observation.) A {\em concrete protocol} for agent $i\in Ags$ in such an environment $\Env$ is a Dijkstra-style nondeterministic looping statement $P_i$ of the form \begin{equation}\label{eq:prot} \pdo ~~\phi_1 \rightarrow a_1~ []~ \ldots~ []~ \phi_k \rightarrow a_k ~~ \pdor \end{equation} where the $a_j$ are actions in $\Acts_i$ and the $\phi_j$ are boolean formulas local to agent $i$. Intuitively, this is a nonterminating program that is executed by the agent repeatedly checking which of the guards $\phi_j$ holds, and then nondeterministically performing one of the corresponding actions $a_j$. If none of the guards holds, then the action $\act{skip}$ is performed. That is, implicitly, there is an additional clause $\neg \phi_1 \land \ldots \land \neg \phi_k \rightarrow \act{skip}$. Without loss of generality, we may assume that the $a_j$ are distinct. (We can always amalgamate two cases $\phi_1 \rightarrow a$ and $\phi_2 \rightarrow a$ with the same action $a$ into a single case $\phi_1 \lor \phi_2 \rightarrow a$.) We say that action $a_j$ is enabled in protocol $P_i$ at state $s$ if $\phi_j$ holds with respect to the assignment $\pi(s)$, and write $\mathit{en}(P_i,s)$ for the set of all actions enabled in protocol $P_i$ at state $s$. 
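The enabledness function $\mathit{en}(P_i,s)$ can be computed directly from this representation. A small sketch (the guard/action encoding and the example protocol are our own illustration, not from the paper):

```python
def enabled(protocol, local_state):
    """en(P_i, s): the actions of a guarded-command protocol whose
    guards hold at (the local view of) state s.  A protocol is a list
    of (guard, action) pairs, guards being predicates over a dict of
    local propositions.  If no guard holds, only skip is enabled."""
    acts = [a for (guard, a) in protocol if guard(local_state)]
    return acts if acts else ['skip']

# A hypothetical two-case protocol: send when ready, otherwise wait.
P = [(lambda s: s['ready'], 'send'),
     (lambda s: not s['ready'], 'wait')]
```

Here `enabled(P, {'ready': True})` yields `['send']`, and an empty protocol enables only `skip`.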
A \emph{joint protocol $P$} is a collection $\{P_i \}_{i\in Ags}$ of protocols for the individual agents. A joint action $a\in \Acts$ is \emph{enabled by $P$ at a state $s$} if $a_i \in \mathit{en}(P_i,s)$ for all $i \in Ags$. We write $\mathit{en}(P,s)$ for the set of all joint actions enabled by $P$ at state $s$. Given an environment $\Env = \langle S, I, \{\Acts_i\}_{i \in Ags}, \longrightarrow, \{O_i\}_{i\in Ags}, \pi\rangle$ and a joint protocol $P$ for the agents in $\Env$, we may construct an interpreted system $\mathcal{I}(\Env, P) = (\mathcal{R}(\Env, P), \sim, \pi)$ over global states $S$ as follows. The set of runs $\mathcal{R}(\Env, P)$ consists of all runs $r:\mathbf{N} \rightarrow S$ such that $r(0) \in I$ and for all $n \in \mathbf{N}$ there exists $\act{a} \in \mathit{en}(P, r(n))$ such that $r(n) \ptrans{\act{a}} r(n+1)$. The component $\sim = \{\sim_i\}_{i\in Ags}$ is defined by $(r,m) \sim_i (r',m')$ if $O_i(r(m)) = O_i(r'(m'))$, i.e., two points are indistinguishable to agent $i$ if it makes the same observation at the corresponding global states; this is known in the literature as the \emph{observational} semantics for knowledge. The interpretation $\pi$ in the interpreted system $\mathcal{I}(\Env, P)$ is identical to that in the environment~$\Env$. Note that in $\mathcal{I} = \mathcal{I}(\Env, P)$, the satisfaction of formulas of the form $K_i\phi$ in fact depends only on the observation $O_i(r(m))$. We may therefore write $\mathcal{I}, o \models K_i\phi$ for an observation value $o$ to mean $\mathcal{I}, (r,m) \models K_i \phi$ for all points $(r,m)$ of $\mathcal{I}$ with $O_i(r(m)) = o$. \section{Epistemic Protocol Specifications} \label{sec:eps} Protocol templates generalize concrete protocols by introducing some variables that may be instantiated with local boolean formulas in order to obtain a concrete protocol. 
Formally, a {\em protocol template} for agent $i \in Ags$ is an expression in the same form as~(\ref{eq:prot}), except that the $\phi_j$ are now boolean expressions, not just over the local atomic propositions $\mathit{Prop}_i$, but may also contain boolean variables from an additional set $X$ of \emph{template variables}. We write $\mathit{Vars}(\mathtt{P}_i)$ for the set of these additional boolean variables that occur in some $\phi_j$. An {\em epistemic protocol specification} is a tuple ${\cal S} = \langle Ags, \Env, \{\mathtt{P}_i\}_{i\in Ags}, \Spec \rangle$, consisting of a set of agents $Ags$, an environment $\Env$ for $Ags$, a collection of protocol templates $\{\mathtt{P}_i\}_{i\in Ags} $ for environment $\Env$, and a collection of epistemic logic formulas $\Spec$ over the agents $Ags$ and atomic propositions $X \cup \mathit{Prop}$. In this paper, we assume $\Spec \subseteq \mbox{CTLK}(Ags,X\cup\mathit{Prop})$. We require that $\mathit{Vars}(\mathtt{P}_i)$ and $\mathit{Vars}(\mathtt{P}_j)$ are disjoint when~$i\neq j$. Intuitively, the protocol templates in such a specification lay out the abstract structure of some concrete protocols, and the variables in $X$ are ``holes'' that need to be filled in order to obtain a concrete protocol. The formulas in $\Spec$ state constraints on how the holes may be filled: it is required that these formulas be valid in the model that results from filling the holes. To implement an epistemic protocol specification with respect to the observational semantics, we need to replace each template variable $v$ in each agent $i$'s protocol template by an expression over the agent's local variables, in such a way that the specification formulas are satisfied in the model resulting from executing the concrete protocol so obtained. We now formalize this semantics. Let $\theta$ be a substitution mapping each template variable $x\in \mathit{Vars}(\mathtt{P}_i)$, for $i \in Ags$, to a boolean formula local to agent $i$. 
We may apply such a substitution to a protocol template $\mathtt{P}_i$ in the form (\ref{eq:prot}) by applying $\theta$ to each of the formulas $\phi_j$, yielding $$ \pdo ~~\phi_1\theta \rightarrow a_1~ []~ \ldots~ []~ \phi_k \theta \rightarrow a_k ~~ \pdor $$ which we write as $\mathtt{P}_i\theta$. Since the $\phi_j\theta$ contain only propositions in $\mathit{Prop}_i$, this is a concrete protocol for agent $i$. Consequently, we obtain a joint concrete protocol $\mathtt{P} \theta = \{\mathtt{P}_i\theta\}_{i\in Ags}$, which may be executed in the environment $\Env$, generating the system $\mathcal{I}(\Env, \mathtt{P}\theta)$. The substitution $\theta$ may also be applied to the specification formulas in $\Spec$. Each $\phi\in \Spec$ is a formula over variables $X \cup \mathit{Prop}$, so $\phi \theta$ is a formula over variables $\mathit{Prop}$. We write $\Spec\theta$ for $\{\phi \theta ~|~ \phi \in \Spec\}$. We say that such a substitution $\theta$ provides an {\em implementation} of the epistemic protocol specification ${\cal S}$, provided $ \mathcal{I}(\Env, \{\mathtt{P}_i \theta\}_{i\in Ags}) \models \Spec\theta$. The problem we study in this paper is the following: given an environment $\Env$ and an epistemic protocol specification ${\cal S}$, synthesize an implementation $\theta$. \emph{Knowledge-based programs} \cite{FHMVbook,FHMV1997} are a special case of epistemic protocol specifications. Essentially, knowledge-based programs are epistemic protocol specifications in which the set $\Spec$ is a collection of formulas of the form $AG(x\Leftrightarrow K_i\psi)$, with exactly one such formula for each agent $i\in Ags$ and each template variable $x\in \mathit{Vars}(\mathtt{P}_i)$. That is, each template variable is associated with a formula of the form $K_i\psi$, expressing some property of agent $i$'s knowledge, and we require that the meaning of the template variable be equivalent to this property. 
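Applying a substitution $\theta$ to the guards of a template is a purely syntactic operation; the following sketch (with our own tuple encoding of guards, not the paper's notation) illustrates it:

```python
def substitute(guard, theta):
    """Apply a substitution theta (template variable -> local formula)
    to a guard encoded as a nested tuple:
      ('var', x) | ('prop', p) | ('not', f) | ('and', f, g) | ('or', f, g)
    Atomic propositions are untouched; template variables are replaced
    by the local formula theta assigns to them."""
    tag = guard[0]
    if tag == 'var':
        return theta[guard[1]]
    if tag == 'prop':
        return guard
    if tag == 'not':
        return ('not', substitute(guard[1], theta))
    return (tag, substitute(guard[1], theta), substitute(guard[2], theta))
```

With `theta = {'x': ('prop', 'sensor_ge_2')}`, the guard `('not', ('var', 'x'))` becomes `('not', ('prop', 'sensor_ge_2'))`.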
The following example, an extension of an example from \cite{BrafmanLMS1997}, illustrates the motivations for knowledge-based programs that have been advocated in the literature. \begin{example} \label{two_robot} Two robots, $A$ and $B$, sit on a linear track with discretized positions $0\ldots 10$. Initially $A$ is at position $0$ and $B$ is at position $10$. Their objective is to meet at a position of at least $2$, without colliding. Each robot is equipped with a noisy position sensor that gives at each moment of time a natural number value in the interval $[0,\ldots, 10]$. (We consider various different sensor models below, each defined by a relationship between the sensor reading and the actual position.) The robots do not have a sensor for detecting each other's position. Each robot has an action $\act{Halt}$ and an action $\act{Move}$. The $\act{Halt}$ action brings the robot to a stop at its current location, and it will not move again after this action has been performed. The $\act{Move}$ action moves the robot in the direction that it is facing (right, i.e., from 0 to 10 for $A$, and left for $B$). However, the effects of this action are unreliable: when performed, the robot either stays at its current position or moves one step in the designated direction. Because of the nondeterminism in the sensor readings and the robot motion, it is a non-trivial matter to program the robots to achieve their goal. In particular, the programmer needs to reason about how the sensor readings are related to the actual positions, in view of the assumptions about the possible robot motions. 
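The environment dynamics just described can be sketched concretely. The following fragment (ours; the function name and representation are illustrative assumptions, with the nondeterminism modelled by random choice) captures one transition of a robot together with its sensor reading:

```python
import random

def step(position, action, facing=+1, noise=1, lo=0, hi=10):
    """One environment transition for a robot on the track lo..hi.
    Move nondeterministically stays put or shifts one cell in the
    facing direction (+1 for A, -1 for B); Halt (and skip) leave the
    position unchanged.  The returned sensor reading differs from the
    true position by at most `noise`, clipped to the track."""
    if action == 'Move':
        position = max(lo, min(hi, position + random.choice([0, facing])))
    reading = max(lo, min(hi, position + random.randint(-noise, noise)))
    return position, reading
```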
However, there is a natural abstract description of the solution to the problem at the level of agent knowledge, which we may capture as a knowledge-based program as follows: $A$ has the epistemic protocol specification $$ \begin{array}[b]{rl} \mathtt{P}_A = & \pdo \\ & ~~ \neg x \rightarrow \act{Move} \\ & ~~[] ~x ~\rightarrow \act{Halt} \\ & \pdor \\ \\ & AG(x\Leftrightarrow K_A(position_A \geq 2)) \end{array} $$ and $B$ has the epistemic protocol specification $$ \begin{array}[b]{rl} \mathtt{P}_B = & \pdo \\ & ~~ y \rightarrow \act{Move} \\ & ~~[]~\neg y \rightarrow \act{Halt} \\ & \pdor \\ ~\\ & AG(y\Leftrightarrow K_B(\bigwedge_{p\in [0, \ldots 10]} position_B = p \Rightarrow AG (position_A < p -1))) \end{array} $$ Intuitively, the specification for $A$ says that $A$ should move to the right until it knows that its position is at least 2. The specification for $B$ says that $B$ should move to the left so long as it knows that, if its current position is $p$, then $A$'s position will always be to the left of the position $p-1$ that a move might cause $B$ to enter. If this does not hold then there could be a collision. One of the benefits of knowledge-based programs is that they can be shown to guarantee correctness properties of solutions for a problem independently of the way that knowledge is acquired and represented. This gives a desirable level of abstraction that enables a single knowledge-level description to be used to generate multiple implementations that are tailored to different environments. In the case of the above knowledge-based program, we note that it guarantees several properties independently of the details of the sensor model. Informally, since $A$ halts only when it knows that its position is at least 2, and $K_A p \Rightarrow p$ is a tautology of the logic of knowledge, its program ensures that when $A$ halts, its position will be at least 2. 
Similarly, since $B$ moves at most one position in any step, and moves only when it knows that moving to the position to its left will not cause a collision with $A$, a move by $B$ will not be the cause of a collision. It remains to show that $A$ does not cause a collision with $B$ --- this requires assumptions about $A$'s sensor. (Note that if $A$ is blind it never halts, and could collide with $B$ even if $B$ never moves, so assumptions are needed.) For termination, moreover, we require fairness assumptions about the way that $A$ and $B$ move (e.g., an action $\act{Move}$ performed infinitely often eventually causes the position to change). What implementations exist for the knowledge-based program depends on the assumptions we make about the error in the sensor readings. We assume that for each agent $i$, and possible sensor value $v$, there are propositions $sensor_i = v$, $sensor_i \geq v$, and $sensor_i \leq v$ in $\mathit{Prop}_i$, with the obvious meaning. Suppose that we take the robots' position sensor to be free of error, i.e., for each agent $i$, we always have $sensor_i = position_i$. Then agent $i$ always knows its exact position from its sensor value. In this case, the knowledge-based program has an implementation in which $\theta(x)$ is $sensor_A = 2$ and $\theta(y)$ is $sensor_B \geq 4$. In this implementation, $A$ halts at position 2 and $B$ halts at position 3 (assuming that they reach these positions). On the other hand, suppose that the sensor readings may be erroneous, with a maximal error of 1, i.e., when the robot's position is $p$, the sensor value is in $\{p-1,p,p+1\}$. In this case, there exists an implementation $\theta$ in which $\theta(x)$ is $sensor_A = 3 \lor sensor_A = 4 \lor sensor_A = 5$, and $\theta(y)$ is $sensor_B \geq 7$. In this implementation, $A$ moves until it gets a sensor reading in the set $\{3,4,5\}$, and then halts. 
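The set of positions at which $A$ can come to rest under this choice of $\theta(x)$ can be checked by exhaustively exploring the sensor nondeterminism. A small sketch (ours, under the motion and noise model stated above):

```python
def halting_positions(halt_readings, noise=1, lo=0, hi=10):
    """All positions where robot A can halt: starting at lo, some
    sensor reading in halt_readings lets A halt at the current
    position, while some reading outside it lets A keep moving.  A
    Move that merely stays put revisits the same state, so only the
    step to position + 1 needs exploring."""
    halt, frontier, seen = set(), [lo], set()
    while frontier:
        p = frontier.pop()
        if p in seen:
            continue
        seen.add(p)
        readings = {max(lo, min(hi, p + e)) for e in range(-noise, noise + 1)}
        if readings & halt_readings:
            halt.add(p)
        if readings - halt_readings and p + 1 <= hi:
            frontier.append(p + 1)
    return halt
```

Here `halting_positions({3, 4, 5})` yields `{2, 3, 4}`, in agreement with the analysis of the example; the cruder guard $sensor_A \geq 4$ gives `{3, 4, 5}`, which also lies inside the goal region.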
The effect is that $A$ halts at a location in the set $\{2,3,4\}$; which one depends on the pattern of sensor readings obtained. For example, the sequence $(0,0), (1,1),(2,2),(3,2),(4,3)$ of $(position, sensor)$ values leaves $A$ at position 4, whereas the sequence $(0,0), (1,1),(2,3)$ leaves $A$ at position $2$. The effect of the choice of $\theta(y)$ is that $B$ moves to the left and halts in one of the positions $\{5,6,7\}$. One run in which $B$ halts at position $5$ has $(position, sensor)$ values $(10,10), (9,9),(8,8),(7,7),(6,7),(5,4)$. A run in which $B$ halts at position $7$ is one where these values are $(10,10), (9,9),(8,8),(7,6)$. Note that here the sensor reading 6 tells $B$ that it is in the interval $[5,7]$, so it could be at $5$. It is therefore not safe to move, since $A$ might be at $4$. One of the advantages of knowledge-based programs is that their implementations are optimal in the way that they use the information encoded in the agent's observations. For example, the program for $A$ says that $A$ should halt \emph{as soon as} it knows that it is in the goal region. In the case of the sensor with noise at most 1, the putative implementation for $A$ given by $\theta(x) = sensor_A \geq 4$ would also ensure that $A$ halts inside the goal region $[2,10]$, but would not implement the knowledge-based program because there are situations (viz.\ $sensor_A = 3$) where $A$ does not halt even though it knows that it is safe to halt. \end{example} The semantics for knowledge-based programs results in implementations that are highly optimized in their use of information. Because knowledge for an implementation $\theta$ is computed in the system $\mathcal{I}(E, P\theta)$, agents may reason with complete information about the implementation they are running in determining what information follows from their observations. This introduces a circularity that makes finding implementations of knowledge-based programs an inherently complex problem.
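To make the noisy-sensor implementation for robot $A$ concrete, the following brute-force simulation (an illustrative sketch, not part of the formal development; the names are ours) explores every admissible sequence of sensor readings and collects the positions at which $A$ can halt:

```python
# Brute-force check of the noisy-sensor implementation for robot A:
# A starts at position 0, at position p the sensor may read p-1, p or p+1,
# and theta(x) makes A halt exactly when the reading is in {3, 4, 5}.
HALT_READINGS = {3, 4, 5}

def reachable_halt_positions():
    """Explore every admissible run and collect A's possible halt positions."""
    halted, frontier = set(), {0}
    while frontier:
        nxt = set()
        for p in frontier:
            for sensor in (p - 1, p, p + 1):   # error of at most 1
                if sensor in HALT_READINGS:
                    halted.add(p)              # x holds: A halts here
                else:
                    nxt.add(p + 1)             # x fails: A moves right
        frontier = nxt
    return halted

print(sorted(reachable_halt_positions()))      # → [2, 3, 4]
```

The search terminates because at position 4 every admissible reading lies in $\{3,4,5\}$, so $A$ always halts by then; the output confirms the halting set $\{2,3,4\}$ claimed above.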
Indeed, it also has the consequence that it is possible for a knowledge-based program to have no implementations. The following provides a simple example where this is the case. It also illustrates a somewhat counterintuitive aspect of knowledge-based programs, that we will argue is improved by our proposed ordered semantics for epistemic specifications below. \begin{example} \label{ex:picnic} Alice and Bob have arranged to meet for a picnic. They are agreed that a picnic should have both wine and cheese, and each should bring one or the other. However, they did not think to coordinate in advance what each is bringing, and they are now not able to communicate, since Alice's phone is in the shop for repairs. They do know that each reasons as follows. Cheese being cheaper than wine, they prefer to bring cheese, and will do so if they know that there is already guaranteed to be wine. Otherwise, they will bring wine. This situation can be captured by the knowledge-based program (for each $i\in \{A,B\}$) and environment depicted in Figure~\ref{fig:picnicenv}. \begin{figure}[t] \centerline{ $ \begin{array}[b]{l} \mathtt{P}_i = \pdo \\ ~~ ~~\mathit{start} \land x_i \rightarrow \act{c} \\ ~~[] ~ \mathit{start} \land \neg x_i \rightarrow \act{w} \\ ~~[] ~ \neg \mathit{start} \rightarrow \act{p} \\ \pdor \\ ~\\ AG(x_i\Leftrightarrow K_iAX w) \end{array} $ \hspace{2cm} \includegraphics[height=2.5cm]{picnicenv.pdf}\\[-5pt]} \caption{Knowledge-based program and environment\label{fig:picnicenv}} \end{figure} Here $\mathit{start}$ is a proposition, local to both agents, that holds before the picnic (at time 0). We use $w,c$ as propositions that hold if there is wine (respectively, cheese) in the picnic state (at time 1). Actions $\act{w},\act{c}, \act{p}$ represent bringing wine, bringing cheese, and picnicking, respectively. For any omitted joint actions $\act{a}$ from a state $s$ in the diagram, we assume an implicit self-loop $s \ptrans{\act{a}} s$. 
We assume that for all states $s$ and $i\in \{A, B\}$, we have $O_i(s) = s$, i.e., both agents have complete information about the current state. This epistemic specification has no implementations. Note that in any implementation, each agent $i$ must choose either $\act{w}$ or $\act{c}$ at the initial state. For each such selection, there is a unique successor state at time 1, so each implementation system $\mathcal{I}(E, \mathtt{P}\theta)$ has exactly one state at time 1. If this state satisfies $w$, then we have $\mathcal{I}(E, \mathtt{P}\theta) \models K_i(AX w)$, and this implies that both agents select action $\act{c}$ at the start state. But then the state at time 1 does not satisfy $w$. Conversely, if the unique state at time $1$ does not satisfy $w$, then $\mathcal{I}(E, \mathtt{P}\theta) \models \neg K_i(AX w)$, and this implies that both agents select action $\act{w}$ at the start state, which produces a state at time 1 that satisfies $w$, also a contradiction. In either case, the assumption that we have an implementation results in a contradiction, so there are no implementations. $\boxempty$ \end{example} \commentout{ \begin{example} \label{ex:noimp} Consider the environment with just two states, depicted in Figure~\ref{fig:kbpempty}. There are two states $q$ and $q'$ with $q$ initial, and two actions $a,skip$, with the associated transitions depicted. The environment has a single agent $1$, who is blind, so that $O_1(q) = O_1(q') = o$. The proposition $p$ holds only at state $q$. \begin{figure} \centerline{\includegraphics[height=2cm]{kbpempty.pdf}} \caption{\label{fig:kbpempty} Environment for a knowledge-based program with no implementations} \end{figure} Consider the knowledge-based program with $P= \pdo ~x \rightarrow a~\pdor$ and with specification $AG(x\Leftrightarrow K_1 p)$.
Since there is only one observation for the agent, there are only two semantically distinct possible substitutions: the substitution $\theta_1(x) = {\bf true}$ and the substitution $\theta_2(x) = {\bf false}$. In the first case, the program $P\theta_1$ always executes the action $a$, and the system $\mathcal{I}(E,P\theta_1)$ contains the two reachable states $q,q'$, which are not distinguishable to the agent, hence $K_1p$ is always false. Hence we do not have $\mathcal{I}(E,P\theta_1)\models AG(x\theta_1 \Leftrightarrow K_1 p)$, so $\theta_1$ does not provide an implementation. In the second case, the program $P\theta_2$ always executes the action $skip$, and the system $\mathcal{I}(E,P\theta_2)$ has only the state $q$ reachable. In this case, $K_1p$ is true at $q$, since in $\mathcal{I}(E,P\theta_2)$, the agent knows that the state $q'$ is not possible. Hence, here also we do not have $\mathcal{I}(E,P\theta_2)\models AG(x\theta_2 \Leftrightarrow K_1 p)$, and $\theta_2$ also does not provide an implementation. $\boxempty$ \end{example} } Testing whether there exists an implementation of a knowledge-based program when the temporal basis of the temporal epistemic logic used is the linear time logic LTL is PSPACE-complete \cite{FHMV1997}. However, the primary source of the hardness here is that model checking LTL is already a PSPACE-complete problem. In the case of CTL as the temporal basis, where model checking can be done in PTIME, the problem of deciding the existence of an implementation of a given knowledge-based program in a given environment can be shown to be NP-complete. NP-hardness follows from Theorem 5.4 in \cite{FHMV1997}, which states that for \emph{atemporal} knowledge-based programs, in which the knowledge formulas $K_i \phi$ used do not contain temporal operators, the complexity of determining the existence of an implementation is NP-complete.
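The non-existence argument for the picnic program of Example~\ref{ex:picnic} can also be checked mechanically. The following sketch (illustrative only; the names are ours) enumerates the four possible deterministic choices and confirms that none satisfies the fixed-point condition required of an implementation:

```python
# Mechanical verification that the picnic knowledge-based program has no
# implementation.  Each agent deterministically brings wine ('w') or
# cheese ('c'); since both agents observe everything and there is a unique
# successor state, K_i(AX w) holds iff wine is present at time 1, and an
# implementation must bring cheese exactly when that knowledge holds.
from itertools import product

def is_implementation(choice_a, choice_b):
    wine_present = 'w' in (choice_a, choice_b)   # the unique time-1 state
    required = 'c' if wine_present else 'w'      # what x_i forces each to do
    return choice_a == required and choice_b == required

solutions = [c for c in product('wc', repeat=2) if is_implementation(*c)]
print(solutions)   # → []  (no implementation, as argued above)
```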
However, the construction in the proof in \cite{FHMV1997} requires both the environment and the knowledge-based program to vary. In practice, the size of the knowledge-based program is likely to be significantly smaller than the size of the environment, inasmuch as it is created by hand and effectively amounts to a form of specification. An alternate approach is to measure complexity as a function of the size of the environment for a fixed knowledge-based program. Even here, it turns out, the problem of deciding the existence of an implementation is NP-hard for very simple knowledge-based programs. \begin{theorem} \label{thm:atemp} There exists a fixed atemporal knowledge-based program $\mathtt{P}$ for a single agent, such that the problem of deciding, given an environment $\Env$, whether $\mathtt{P}$ has an implementation in $\Env$, is NP-hard. \end{theorem} The upper bound of NP for deciding the existence of implementations of knowledge-based programs is generalized by the following result for our more general notion of epistemic protocol specification. \begin{theorem} \label{thm:eps-upper} Given an environment $\Env$ and an epistemic protocol specification ${\cal S}$ expressed using $\mbox{CTLK}$, the complexity of determining the existence of an implementation for ${\cal S}$ in $\Env$ is in NP. \end{theorem} Theorem~\ref{thm:eps-upper} assumes that the environment is presented by means of an explicit listing of its states and transitions. In practice, the inputs to the problem will be given in some format that makes their representation succinct, e.g., states will be represented as assignments to some set of variables, and boolean formulas will be used to represent the environment and protocol components. For this alternate input format, the problem of determining the existence of an implementation of a given epistemic protocol specification is NEXPTIME-complete \cite{HM14tacas}. 
Under either an implicit or explicit representation of environments, these results suggest that synthesis of implementations of general epistemic protocol specifications, and knowledge-based programs in particular, is unlikely to be practical. An implementation using symbolic techniques is presented in \cite{HM14tacas}, but it works only on small examples and scales poorly (it requires the introduction of exponentially many fresh propositions before using BDD techniques; the number of propositions soon reaches the limit that can be handled efficiently by BDD packages). In the following section, we consider a restricted class of specifications that weakens the notion of knowledge-based program in such a way that implementations can always be found, and focus on how to efficiently derive implementations that approximate the implementations of corresponding knowledge-based programs as closely as possible. \section{An Ordered Semantics} \label{sec:ordsem} Sound local proposition epistemic protocol specifications are a generalization of knowledge-based programs, introduced in \cite{EMM98}, with one of the motivations being that they provide a larger space of potential implementations, which may overcome the problem of the high complexity of finding an implementation. (There is the further motivation that the implementation of a knowledge-based program, when one exists, may itself be intractable; e.g., it is shown in \cite{Meyden96} that for perfect recall implementations of \emph{atemporal} knowledge-based programs, deciding whether $K_i\phi$ holds at a given point of the implementation may be a PSPACE-complete problem. This specific motivation is less of a concern for the observational case that we study in this paper.)
Formally, a \emph{sound local proposition} epistemic protocol specification is one in which $\Phi$ is given by means of a function $\kappa$ with domain $\mathit{Vars}(\mathtt{P})$, such that for each agent $i$ and each template variable $x\in \mathit{Vars}(\mathtt{P}_i)$, the formula $\kappa(x)$ is of the form $K_i \psi$. The corresponding set of formulas for the epistemic protocol specification is $\Phi = \Phi_\kappa = \{ AG(x \Rightarrow \kappa(x)) ~|~x\in \mathit{Vars}(\mathtt{P}) \}$. As usual for epistemic protocol specifications, an implementation associates to each template variable a boolean formula local to the corresponding agent, such that the resulting system satisfies the specification $\Phi$.% \footnote{By the assumption of locality of $\theta(x)$, validity of $AG(\theta(x)\Rightarrow K_i\psi)$ in a system is equivalent to validity of $AG(\theta(x)\Rightarrow \psi)$, but we retain the epistemic form for emphasis and to maintain the connection to knowledge-based programs.} Thus, whereas a knowledge-based program requires that each knowledge formula in the program be implemented by a {\em necessary and sufficient} local formula, a sound local proposition specification requires only that the implementing local formula be \emph{sufficient}. It is argued in \cite{EMM98} that examples of knowledge-based programs can typically be weakened to sound local proposition specifications without loss of the desired \emph{correctness} properties that hold of all implementations. However, implementations of knowledge-based programs may guarantee \emph{optimality} properties that are not guaranteed by the corresponding sound local proposition specifications. For example, an implementation of a knowledge-based program that states ``if $K_i \phi$ then do $a$" will be optimal in the sense that it ensures that the agent will do $a$ \emph{as soon as} it knows that $\phi$ holds. 
By contrast, an implementation that replaces $K_i\phi$ by a sufficient condition for this formula may perform $a$ only much later, or even fail to do so, even if the knowledge necessary to do $a$ is deducible from the agent's local state. (An example of such a situation is given in \cite{BaukusMeyden}, which identifies a case where a cache coherency protocol fails to act on knowledge that it has.) Note that the substitution $\theta_\bot$, defined by $\theta_\bot(x) = \mathbf{false}$ for all template variables $x$, is \emph{always} an implementation for a sound local proposition specification ${\cal S}$ in an environment $\Env$. It is therefore trivial to decide the existence of an implementation, and it is also trivial to produce a succinct representation of an implementation. Of course, an implementation of a program ``if $x$ then do $a$" that sets $x$ to be $\mathbf{false}$ will never perform $a$, so this trivial implementation is generally not of much interest. What is more interesting is to find \emph{good} implementations, which approximate the corresponding knowledge-based program implementations as closely as possible, in order to behave as close to optimally as possible while remaining tractable. Consider the order on substitutions defined by $\theta \leq \theta'$ if for all variables $x$ and states $s\in S$ of the environment we have $\pi(s) \models \theta(x) \Rightarrow \theta'(x)$. If both are implementations of ${\cal S}$ in $\Env$, we may find $\theta'$ preferable in that it provides weaker sufficient conditions (i.e., ones that hold more often) for the knowledge formulas $K_i\phi$ of interest. Pragmatically, if $\phi$ is a condition that an agent must know to be true before it can safely perform a certain action, then the more often the sufficient condition $\theta(x)$ for $K_i\phi$ holds, the more often the agent will perform the action in the implementation.
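Read extensionally, with each $\theta(x)$ identified with the set of environment states at which it holds, the order $\leq$ is just pointwise set inclusion. A small sketch (illustrative names and states only, not from the text):

```python
# The order on substitutions, read extensionally: identify theta(x) with
# the set of environment states at which it holds; then theta <= theta'
# iff each theta(x) implies theta'(x) pointwise, i.e. set inclusion.
def leq(theta, theta_prime):
    return all(theta[x] <= theta_prime[x] for x in theta)  # <= is inclusion

theta       = {'x': {2}}          # a sufficient condition, rarely true
theta_prime = {'x': {1, 2, 3}}    # a weaker condition, more often true
assert leq(theta, theta_prime) and not leq(theta_prime, theta)
```

Here `theta_prime` is the preferable implementation in the sense just described: its condition holds at strictly more states.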
It is therefore reasonable to seek implementations that maximize $\theta$ with respect to the order~$\leq$. The maximal sufficient condition for $K_i\phi$ is $K_i\phi$ itself, expressed as an equivalent local formula in the system $\mathcal{I}(\Env, P\theta)$ corresponding to an implementation $\theta$.% \footnote{ The existence of such a formula follows from completeness of the set of local propositions. If we extend the propositions in an environment to include, for each agent $i$ and possible observation $o$ of the agent, a proposition $p_{i,o}$ that holds at a state $s$ iff $O_i(s) =o$, then the formula $\theta(x)$ such that $\mathcal{I} \models AG(\theta(x) \Leftrightarrow \kappa(x))$, where $\kappa(x) = K_i \phi$, can be constructed as $\bigvee \{ p_{i,o}~|~o \in O_i(S), ~ \mathcal{I}, o \models \kappa(x)\}$, and has size of the order of the number of observations.} The following result makes this statement precise: \begin{theorem} \label{thm:epskbp} Suppose that ${\cal S}$ is a sound local proposition epistemic protocol specification, and let ${\cal S}'$ be the knowledge-based program resulting from replacing each formula $AG(x \Rightarrow \kappa(x))$ in $\Phi$ by the formula $AG(x \Leftrightarrow \kappa(x))$. Then every implementation $\theta$ of ${\cal S}'$ is an implementation of ${\cal S}$. \end{theorem} However, to have $\theta(x)$ equivalent to $K_i\phi$ in $\mathcal{I}(\Env, P\theta)$ would mean that $\theta$ implements a knowledge-based program. The complexity results of the previous section indicate that this is too strong a requirement for practical purposes, since it is unlikely to be efficiently implementable. The compromise we explore in this paper is to require $\theta(x)$ to be equivalent to $K_i\phi$ not in the system $\mathcal{I}(\Env, P\theta)$ itself, but in another system that approximates $\mathcal{I}(\Env, P\theta)$. The basis for the correctness of this idea is the following lemma.
\begin{lemma} \label{lem:subsysk} Suppose that $\mathcal{I} \subseteq \mathcal{I}'$, that $r$ is a run of $\mathcal{I}$ and that $\phi$ is a formula in which knowledge operators and the branching operator $A$ occur only in positive position. Then $\mathcal{I}',(r,m) \models \phi$ implies $\mathcal{I},(r,m) \models \phi$. \end{lemma} In particular, if, for a sound local proposition epistemic protocol specification ${\cal S}$, the formula $\kappa(x)$ associated to a template variable $x$ is in $\mbox{CTLK}^+$, then this result applies to the formula $AG(x \Rightarrow \kappa(x))$ in $\Phi_\kappa$, since this is also in $\mbox{CTLK}^+$. Suppose the system $\mathcal{I}'$ approximates the ultimate implementation $\mathcal{I}(\Env, P\theta)$ in the sense that $\mathcal{I}' \supseteq \mathcal{I}(\Env, P\theta)$. Let $\theta(x)$ be a local formula such that $\mathcal{I}' \models AG(\theta(x) \Leftrightarrow \kappa(x))$. Then also $\mathcal{I}' \models AG(\theta(x) \Rightarrow \kappa(x))$, hence, by Lemma~\ref{lem:subsysk}, $\theta(x)$ will also satisfy the correctness condition $\mathcal{I}(\Env, P\theta) \models AG(\theta(x) \Rightarrow \kappa(x))$ necessary for $\theta$ to be an implementation of ${\cal S}$. Our approach to constructing good implementations of ${\cal S}$ will be to compute local formulas $\theta(x)$ that are equivalent to $\kappa(x)$ in approximations $\mathcal{I}'$ of the ultimate implementation being constructed. We take this idea one step further. Suppose that we have used this technique to determine the value of $\theta(x)$ for some of the template variables $x$ of ${\cal S}$. Then we have increased our information about the final implementation $\theta$, so we are able to construct a \emph{better} approximation $\mathcal{I}''$ to the final implementation $\mathcal{I}(\Env, P\theta)$, in the sense that $\mathcal{I}' \supseteq \mathcal{I}'' \supseteq \mathcal{I}(\Env, P\theta)$.
Note that if $\mathcal{I}' \models AG(\phi' \Leftrightarrow \kappa(y))$ and $\mathcal{I}'' \models AG(\phi'' \Leftrightarrow \kappa(y))$, then it follows from $\mathcal{I}' \supseteq \mathcal{I}''$ that $\mathcal{I}'' \models AG(\phi' \Rightarrow \phi'')$. That is, $\phi''$ is weaker than $\phi'$, and hence a better approximation to the knowledge condition $\kappa(y)$ in the ultimate implementation $\mathcal{I}(\Env, P\theta)$. Thus, by proceeding iteratively through the template variables, and improving the approximation as we construct a partial implementation, we are able to obtain better approximations to $\kappa(y)$ in $\mathcal{I}(\Env, P\theta)$ for later variables. More precisely, suppose that we have a total pre-order on the set of all template variables $\mathit{Vars}(\mathtt{P}) = \cup_{i \in Ags} \mathit{Vars}(\mathtt{P}_i)$, i.e., a binary relation $\leq $ on this set that is transitive and satisfies $x\leq y \lor y\leq x$ for all $x,y\in \mathit{Vars}(\mathtt{P})$. Let this be represented by the sequence of subsets $X_1, \ldots , X_k$, where for $i\leq j$ and $x\in X_i$ and $y\in X_j$ we have $x< y$ if $i<j$ and $x \leq y \leq x$ if $i=j$. Suppose we have a sequence of interpreted systems $\mathcal{I}_0 \supseteq \ldots \supseteq \mathcal{I}_k$. Define a substitution $\theta$ to be \emph{consistent} with this sequence if for all $i= 1\ldots k$ and $x\in X_i$, we have $\mathcal{I}_{i-1} \models AG(\theta(x) \Leftrightarrow \kappa(x))$. That is, consistent substitutions associate to each template variable $x$ a local formula that is equivalent to (not just sufficient for) $\kappa(x)$, but in an associated approximation system rather than in the final implementation. \begin{proposition} \label{prop:approx} Suppose that $\mathcal{I}_k $ is isomorphic to $\mathcal{I}(\Env, P\theta)$, and that for all $x\in \mathit{Vars}(\mathtt{P})$, the formula $\kappa(x)$ contains knowledge operators and the branching operator $A$ only in positive position. 
Then $\theta$ implements the epistemic protocol specification $\langle Ags, \Env, P, \Phi_\kappa \rangle$. \end{proposition} We will apply this result as follows: define an \emph{approximation scheme} to be a mapping that, given an epistemic protocol specification ${\cal S}= \langle Ags, \Env, P,\Phi\rangle$ and a partial substitution $\theta$ for ${\cal S}$, yields a system $\mathcal{I}({\cal S}, \theta)$, satisfying the conditions \be \item if $\theta \subseteq \theta'$ then $\mathcal{I}({\cal S}, \theta) \supseteq \mathcal{I}({\cal S}, \theta')$, and \item if $\theta$ is total, then $\mathcal{I}({\cal S}, \theta)$ is isomorphic to $\mathcal{I}(\Env, P \theta)$. \ee Assume now that ${\cal S}$ is a sound local proposition specification based on the mapping $\kappa$. Given the ordering $\leq$ on $\mathit{Vars}(\mathtt{P})$, with the associated sequence of sets $X_1 \ldots X_k$, we define the sequence $\theta_0, \theta_1, \ldots, \theta_k$ inductively by $\theta_0 = \emptyset$ (the partial substitution that is nowhere defined), and $\theta_{j+1}$ to be the extension of $\theta_j$ obtained by defining, for $x\in X_{j+1}$, the value of $\theta_{j+1}(x)$ to be the local proposition $\phi$ such that $\mathcal{I}({\cal S}, \theta_j) \models AG( \phi \Leftrightarrow \kappa(x))$. Plainly $\theta_0 \subseteq \theta_1 \subseteq \ldots \subseteq \theta_k$, so we have $\mathcal{I}({\cal S}, \theta_0) \supseteq \mathcal{I}({\cal S}, \theta_1) \supseteq \ldots \supseteq \mathcal{I}({\cal S}, \theta_k)$. It follows from the properties of the approximation scheme and Proposition~\ref{prop:approx} that the substitution $\theta_k$ is total and is an implementation of ${\cal S}$. This idea leads to an extension of the idea of epistemic protocol specifications: we now consider specifications of the form $({\cal S}, \leq)$, where ${\cal S}$ is a sound local proposition epistemic protocol specification, and $\leq$ is a total pre-order on the template variables of ${\cal S}$.
Given an approximation scheme, the construction of the previous paragraph yields a unique implementation of ${\cal S}$. Intuitively, by specifying an order $\leq$, the programmer fixes the order in which implementations are synthesized for the template variables, and the approach guarantees that variables later in the order are synthesized using information about the values of variables earlier in the order. \section{A spectrum of approximations} \label{sec:approx} It remains to determine which approximation scheme to use in the approach to constructing implementations described in the previous section. In this section, we consider a number of possibilities for the choice of approximation scheme. A number of criteria may be applied to the choice of approximation scheme. For example, since the programmer must select the order in which variables are synthesized, the approximation scheme should be simple enough to be comprehensible to the programmer, so that they may understand the consequences of their ordering decisions. On the other hand, since synthesis is to be automated, we would like the computation of the values $\theta(x)$ to be efficient. This amounts to efficiency of the model checking problem $\mathcal{I}({\cal S}, \theta')\models \kappa(x)$ for partial substitutions $\theta'$ and formulas $\kappa(x) \in \mbox{CTLK}^+$. To analyze this complexity, we work below with a complexity measure that assumes explicit state representations of environments, but we look for cases where the model checking problem in the approximation systems is solvable in PTIME. We assume that the protocol template $\mathtt{P}$ and the formulas $\Phi$ in the epistemic protocol specification are fixed, and measure complexity as a function of the size of the environment $\Env$. This is because in practice, the size of the environment is likely to be the dominant factor in complexity. 
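Before turning to concrete schemes, the ordered synthesis loop of Section~\ref{sec:ordsem} can be sketched as follows (an illustrative sketch only: `approx` stands for an arbitrary approximation scheme, the `PicnicApprox` toy hard-codes the knowledge facts of the picnic example, and local propositions are represented extensionally as sets of observations, in the spirit of the disjunction-of-$p_{i,o}$ construction in the earlier footnote):

```python
# Sketch of the ordered synthesis loop: theta is extended block by block,
# and each kappa(x) is evaluated in the approximating system built from
# the part of theta fixed so far.
def synthesize(blocks, approx, observations):
    theta = {}                            # theta_0: nowhere defined
    for X in blocks:                      # the blocks X_1, ..., X_k in order
        system = approx(theta)            # approximation I(S, theta_j)
        for x in X:
            # theta(x) := local proposition equivalent to kappa(x) in system
            theta[x] = {o for o in observations if system.knows(x, o)}
    return theta

class PicnicApprox:
    """Toy approximation scheme for the picnic example: initially neither
    agent knows AX w; once x_A is pinned to false (Alice brings wine),
    Bob knows AX w.  The knowledge facts are hard-coded for illustration."""
    def __init__(self, theta):
        self.theta = theta
    def knows(self, x, obs):
        if x == 'x_A':
            return False                  # not K_A(AX w) in the full system
        return self.theta.get('x_A') == set()   # K_B(AX w) once x_A is false

theta = synthesize([['x_A'], ['x_B']], PicnicApprox, ['start'])
print(theta)   # → {'x_A': set(), 'x_B': {'start'}}
```

The result assigns the always-false proposition to $x_A$ and the always-true one to $x_B$, anticipating the worked example at the end of this section.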
One immediately obvious choice for the approximation scheme is to take the system $\mathcal{I}({\cal S}, \theta)$, for a partial substitution $\theta$, to be the union of all the systems $\mathcal{I}(E,P\theta')$, over all total substitutions $\theta'$ that extend the partial substitution $\theta$. This turns out not to be a good choice (it is the intractable case $\mathcal{I}_{ii,ir,sc}$ below), so we consider a number of relaxations of this definition. The following abstract view of the situation provides a convenient format that unifies the definition of these relaxations. \newcommand{\strat}{\sigma} \newcommand{\seq}{\rho} Given an environment $E$ with states $S$, define a \emph{strategy} for $E$ to be a function $\strat: S^+ \rightarrow \powerset{S}\setminus\{\emptyset\}$ mapping each nonempty sequence of states to a set of possible successors. We require that for each $t\in \strat(s_0\ldots s_k)$ we have $s_k \ptrans{\mathbf{a}} t$ for some joint action $\mathbf{a}$. Given a set $\Sigma$ of strategies, we can construct an interpreted system consisting of all runs consistent with some strategy in $\Sigma$. We encode the strategy into the run, using the extended set of global states $S \times \Sigma$. We take $\mathcal{R}_\Sigma$ to be the set of all $r: \mathbf{N} \rightarrow S \times \Sigma$ for which there exists a strategy $\strat \in \Sigma$ such that for all $n\in \mathbf{N}$ we have $r(n) = (s_n, \strat)$ for some $s_n \in S$, and $s_{n+1} \in \strat(s_0 s_1 \ldots s_n)$ for all $n \in \mathbf{N}$. Intuitively, this is the set of all infinite runs, each using some fixed strategy in $\Sigma$, with the strategy encoded into the state. We define $\mathcal{I}(\Sigma, E) = (\mathcal{R}_\Sigma, \sim, \pi')$ where $\sim = \{\sim_i\}_{i\in Ags}$ is the family of relations on points of $\mathcal{R}_\Sigma$ defined by $(r,m) \sim_i (r',m')$ if, with $r(m) = (s, \strat)$ and $r'(m') = (s' ,\strat')$, we have $O_i(s) = O_i(s')$.
The interpretation $\pi'$ on $S \times \Sigma$ is defined so that $\pi'(s,\strat) = \pi(s)$, where $s \in S$, $\strat \in \Sigma$ and $\pi$ is the interpretation from $\Env$. \newcommand{\pinf}{\mathit{pi}} \newcommand{\ii}{\mathit{ii}} \newcommand{\pr}{\mathit{pr}} \newcommand{\ir}{\mathit{ir}} A \emph{memory definition} is a collection of functions $\mu = \{\mu_i\}_{i\in Ags}$ with each $\mu_i$ having domain $S^+$. In particular, we work with the following memory definitions, derived using the observation functions in the environment $E$: \begin{itemize} \item The {\em perfect information, perfect recall} definition $\mu^{\pinf,\pr} = \{\mu^{\pinf,\pr}_i\}_{i\in Ags}$ where \\ $\mu_i^{\pinf,\pr}(s_0\ldots s_k) = s_0\ldots s_k$ \item The {\em perfect information, imperfect recall} definition $\mu^{\pinf,\ir} = \{\mu^{\pinf,\ir}_i\}_{i\in Ags}$ where \\ $\mu_i^{\pinf,\ir}(s_0\ldots s_k) = s_k$ \item The {\em imperfect information, perfect recall} definition $\mu^{\ii,\pr} = \{\mu^{\ii,\pr}_i\}_{i\in Ags}$ where \\ $\mu_i^{\ii,\pr}(s_0\ldots s_k) = O_i(s_0)\ldots O_i(s_k)$ \item The {\em imperfect information, imperfect recall} definition $\mu^{\ii,\ir} = \{\mu^{\ii,\ir}_i\}_{i\in Ags}$ where \\ $\mu_i^{\ii,\ir}(s_0\ldots s_k) = O_i(s_k)$ \end{itemize} A strategy $\strat$ \emph{depends} on memory definition $\mu$ if there exist functions $F_i: \mathit{range}(\mu_i) \rightarrow \powerset{\Acts_i}$ for $i \in Ags$ such that for all sequences $\rho= s_0 \ldots s_k$, we have $t\in \strat(s_0 \ldots s_k)$ iff $s_k \ptrans{\mathbf{a}} t$ for some joint action $\mathbf{a}$ such that for all $i\in Ags$, we have $\mathbf{a}_i \in F_i(\mu_i(s_0 \ldots s_k))$. Let $\mathtt{P}$ be a joint protocol template and let $\theta$ be a partial substitution for $\mathtt{P}$.
A strategy $\strat$ is \emph{substitution consistent} with respect to $\mathtt{P},\theta$ and a memory definition $\mu$ if $\strat$ depends on $\mu$ and for all sequences $s_0 \ldots s_k$ there exists a substitution $\theta'\supseteq \theta$, mapping all the template variables of $\mathtt{P}$ undefined by $\theta$ to truth values, such that \begin{equation}\label{eq:subcons} \strat(s_0 \ldots s_k) = \{t ~|~\text{there exists } \act{a} \in \mathit{en}(P \theta', s_k), ~ s_k \ptrans{\act{a}} t \} \end{equation} Note that since the choice of $\theta'$ is allowed to depend on $s_0 \ldots s_k$, this does not imply that the set of possible successor states $\strat(s_0 \ldots s_k)$ depends only on the final state $s_k$; the reference to $s_k$ in the right hand side of equation~(\ref{eq:subcons}) is included just to allow the enabled actions to be determined in a way consistent with the substitution $\theta$, which already associates some of the variables with predicates on the state $s_k$. \begin{example} \label{ex:top-nsc} Consider the maximally nondeterministic, or \emph{top}, strategy $\strat_\top$, defined by $\strat_\top(s_0\ldots s_k) = \{ t ~|~ \text{there exists $\act{a}\in \Acts$}, s_k \ptrans{\act{a}} t\}$ for all $s_0\ldots s_k$. Intuitively, this strategy allows any action to be taken at any time. It is easily seen that $\strat_\top$ depends on every memory definition $\mu$. However, it is not in general substitution consistent, since there are protocol templates for which the set of enabled actions (and hence the transitions) depends on the substitution. Consider the protocol template $\mathtt{P} = \pdo~ x \rightarrow a~ []~ \neg x \rightarrow b ~\pdor$ for a single agent, in an environment with states $S = \{s_0,s_1,s_2\}$ and transitions $s_0 \ptrans{a} s_1$, $s_0 \ptrans{b} s_2$, $s_1 \ptrans{a,b} s_1$ and $s_2 \ptrans{a,b} s_2$. Let $\theta$ be the empty substitution.
For all substitutions $\theta'$, $\mathit{en}(\mathtt{P}\theta', s_0)$ is either $\{a\}$ or $\{b\}$, so for the sequence $s_0$, the right hand side of equation~(\ref{eq:subcons}) is equal to either $\{s_1\}$ or $\{s_2\}$. For the strategy $\sigma_\top$, we have $\sigma_{\top}(s_0) = \{s_1,s_2\}$. Hence this strategy is not substitution consistent in this environment. $\boxempty$ \end{example} We now obtain eight sets of strategies by choosing an information mode $a \in \{\pinf,\ii\}$, a recall mode $b\in \{\pr,\ir\}$ and a selection $c\in \{\mathit{sc},\mathit{nsc}\}$ to reflect a choice with respect to the requirement of substitution consistency. Formally, given a joint protocol template $\mathtt{P}$, a partial substitution $\theta$ for $\mathtt{P}$, and an environment $\Env$, we define $\Sigma^{a,b,c}(\mathtt{P},\theta, \Env)$ to be the set of all strategies in $\Env$ that depend on $\mu^{a,b}$, and that are substitution consistent with respect to $\mathtt{P}, \theta$ and $\mu^{a,b}$ in the case $c = \mathit{sc}$. Corresponding to these eight sets of strategies, we obtain eight approximation schemes. Let ${\cal S}$ be an epistemic protocol specification with joint protocol template $\mathtt{P}$, and environment $\Env$. Given a partial substitution $\theta$ for $\mathtt{P}$, and a triple $a,b,c$, we define the system $\mathcal{I}_{a,b,c}({\cal S}, \theta)$ to be $\mathcal{I}(\Sigma^{a,b,c}(\mathtt{P}, \theta,\Env),\Env)$. \begin{proposition} For each information mode $a \in \{\pinf,\ii\}$, recall mode $b\in \{\pr,\ir\}$, and selection $c\in \{\mathit{sc},\mathit{nsc}\}$, the mapping $\mathcal{I}_{a,b,c}$ is an approximation scheme.
\end{proposition} Additionally we have the approximation scheme $\mathcal{I}^{\top}({\cal S}, \theta)$ defined to be $\mathcal{I}(\{\strat^\top_{\Env, \mathtt{P}\theta}\},\Env)$, based on the top strategy in $\Env$ relative to the protocol template $\mathtt{P}\theta$, which is defined by taking $\strat^\top_{\Env, \mathtt{P}\theta} (s_0 \ldots s_k)$ to be the set of all states $t\in S$ such that there exists a joint action $a\in \Acts$ with $s_k \ptrans{a} t$, such that for all $i \in Ags$, the protocol template $\mathtt{P}_i\theta$ contains a clause $\phi \theta \rightarrow a_i$ with $\phi\theta$ satisfiable relative to $\pi(s_k)$. (We note that here $\pi(s_k)$ provides the values of the propositions in $\mathit{Prop}$ and we are asking for satisfiability for some assignment to the variables on which $\theta$ is undefined. Because we are interested in the case where $\mathtt{P}$, and hence $\phi$, is fixed, this satisfiability test can be performed in PTIME as the environment varies.) For reasons indicated in Example~\ref{ex:top-nsc}, the strategy $\strat^\top_{\Env, \mathtt{P}\theta}$ is not substitution consistent. However, it is easily seen to depend only on the values $O_i(s_k)$, so we have $\strat^\top_{\Env, \mathtt{P}\theta} \in \Sigma^{ii,ir,nsc}$. Figure~\ref{fig:lattice} shows the lattice structure of the approximation schemes, with an edge from a scheme $\mathcal{I}$ to a scheme $\mathcal{I}'$ meaning that $\mathcal{I}'$ is a closer approximation to the final system $\mathcal{I}(\Env, \mathtt{P}\theta)$ synthesized, informally in the sense that $\mathcal{I}$ has more runs and more branches from any point than does $\mathcal{I}'$. (Generally, the relation is one of simple containment of the sets of runs, but in the case of edges involving $\mathcal{I}(\Env, \mathtt{P}\theta)$ and $\strat^\top$, we need a notion of simulation to make this precise.)
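The failure of substitution consistency for the top strategy in Example~\ref{ex:top-nsc} can also be checked mechanically. The following sketch is our own encoding of the example's three-state environment (the function and variable names are ours, not part of the formal development): it enumerates both total substitutions and confirms that neither reproduces the successor set chosen by the top strategy.

```python
# Transition relation of the example environment: state -> action -> successors.
trans = {
    "s0": {"a": {"s1"}, "b": {"s2"}},
    "s1": {"a": {"s1"}, "b": {"s1"}},
    "s2": {"a": {"s2"}, "b": {"s2"}},
}

def sigma_top(state):
    """Successors under the maximally nondeterministic (top) strategy:
    any enabled action may be taken."""
    return set().union(*trans[state].values())

def sigma_theta(state, x):
    """Successors when the template  do x -> a [] not x -> b  od  is
    instantiated by a total substitution assigning truth value x."""
    action = "a" if x else "b"
    return trans[state][action]

# The top strategy allows both transitions from s0 ...
assert sigma_top("s0") == {"s1", "s2"}
# ... but every total substitution yields a singleton, so no theta'
# satisfies the substitution-consistency equation at s0.
assert all(sigma_theta("s0", x) in ({"s1"}, {"s2"}) for x in (True, False))
```

This mirrors the argument in the example: the right-hand side of the consistency equation is always a singleton, whereas $\strat_\top(s_0)$ is not.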
\begin{figure} \centerline{\includegraphics[height=7cm]{lattice.pdf}} \caption{Lattice structure of the approximations\label{fig:lattice}} \end{figure} Besides yielding an approach to the construction of implementations of epistemic protocol specifications, we note that our approach also overcomes the counterintuitive aspect of knowledge-based programs illustrated in Example~\ref{ex:picnic}. \begin{example} Suppose that we replace the specification formulas $AG(x_i\Leftrightarrow K_iAX w)$ in Example~\ref{ex:picnic} by the weaker form $AG(x_i\Rightarrow K_iAX w)$, and impose the ordering $x_A< x_B$ on the template variables. We compute the implementation obtained when we use $\mathcal{I}^{\top}$ as the approximation scheme. We take $\theta_0$ to be the empty substitution. $\mathcal{I}(\{\strat^\top_{\Env, \mathtt{P}\theta_0}\},\Env)$ has all possible behaviours of the original environment, so at the start state, we have $\neg K_A (AXw)$. It follows that the substitution $\theta_1$, which has domain $\{x_A\}$, assigns to $x_A$ a local proposition that evaluates to $\mathbf{false}$ at the initial state. Hence, $\mathtt{P}_A\theta_1$ selects action $\act{w}$ at the initial state. The effect of this is to delete the bottom transition from the state transition diagram for the environment in Figure~\ref{fig:picnicenv}. It follows that in $\mathcal{I}(\{\strat^\top_{\Env, \mathtt{P}\theta_1}\},\Env)$, we have $K_B(AX w)$ at the initial state, so $\theta_2(x_B)$ evaluates to $\mathbf{true}$ at the initial state. This means that the final implementation $\mathtt{P} \theta_2$ is the protocol in which Alice brings wine and Bob brings cheese, leading to a successful picnic, by contrast with the knowledge-based program, which does not yield any solutions to their planning problem. (We remark that both Alice and Bob could compute this implementation independently, once given the ordering on the variables. They do not need to communicate during the computation of the implementation.)
$\boxempty$ \end{example} We noted above in Theorem~\ref{thm:epskbp} that a sound local proposition specification obtained from a knowledge-based program includes, amongst its implementations, all the implementations of the knowledge-based program. The knowledge-based program, in effect, imposes additional optimality constraints on these implementations. Our ordered semantics aims to approximate these optimal implementations. It is therefore of interest to determine whether the ordered semantics for sound local proposition specifications can sometimes find such optimal implementations. Although this is not true in general, there are situations where the implementations obtained are indeed optimal. The following provides an example. \begin{example} Consider the sound local proposition specification obtained from the knowledge-based program of Example~\ref{two_robot} by replacing the $\Leftrightarrow$ operators in the formulas by $\Rightarrow$. That is, we take $\Phi$ to contain the formulas $$AG(x\Rightarrow K_A(position_A \geq 2)) $$ and $$AG(y\Rightarrow K_B(\bigwedge_{p\in [0, \ldots 10]} position_B = p \Rightarrow AG (position_A < p -1))) $$ We consider the setting where sensor readings are within 1 of the actual position. Suppose that we use $\mathcal{I}^{\top}$ as the approximation scheme, and order the template variables using $x<y$, i.e., we synthesize a solution for $A$ before synthesizing a solution for $B$ (knowing what $A$ is doing). Then, for $A$, we construct $\theta(x)$ as the local proposition for $A$ that satisfies $$AG(x\Leftrightarrow K_A(position_A \geq 2)) $$ in a system where both $A$ and $B$ may choose either action $\act{Move}$ or $\act{Halt}$ at any time. We obtain the substitution $\theta_1$ where $\theta_1(x)$ is $sensor_A\geq 3$, which ensures that always $position_A \leq 4$, and in which $A$ may halt at a position in the set $\{2,3,4\}$.
In the next step, we synthesize $\theta(y)$ as the local proposition such that $$AG(y\Leftrightarrow K_B(\bigwedge_{p\in [0, \ldots 10]} position_B = p \Rightarrow AG (position_A < p -1))) $$ in the system where $A$ runs $\mathtt{P}_A\theta_1$, and where $B$ may choose either action $\act{Move}$ or $\act{Halt}$ at any time. In this system, $B$ knows that $A$'s position is always at most $4$, so it is safe for $B$ to move if $position_B \geq 6$. Agent $B$ knows that its position is at least 6 when it gets a sensor reading of at least 7. Hence, we obtain the substitution $\theta_2$ where $\theta_2(y)$ is $sensor_B\geq 7$ and $\theta_2(x)$ is $sensor_A\geq 3$. It can be verified that this substitution is in fact an implementation of the original knowledge-based program. \end{example} \section{Complexity of model checking in the approximations} \label{sec:complex} To construct an implementation based on the extended epistemic protocol specification $({\cal S}, \leq)$ using an approximation scheme $\mathcal{I}({\cal S}, \theta)$, we need to perform model checking of formulas in $\mbox{CTLK}^+$ in the systems produced by the approximation scheme. We now consider the complexity of this problem for the approximation schemes introduced in the previous sections. We focus on the complexity of this problem with the protocol template fixed as we vary the size of the environment, for reasons explained above. \newcommand{\glob}{\mathit{global}} \newcommand{\runencoded}{\mathit{run-enc}} We say that the \emph{environment-complexity} of an approximation scheme $\mathcal{I}({\cal S}, \theta)$ is the maximal complexity of the problem of deciding $\mathcal{I}({\cal S}, \theta), o \models \kappa(x)$ with all components fixed and only the environment $\Env$ in ${\cal S}$ varying.
More precisely, write ${\cal S}^- = \langle Ags, \mathtt{P}, \kappa \rangle$ for a tuple consisting of a set $Ags$ of agents, a collection $\mathtt{P} = \{\mathtt{P}_i\}_{i\in Ags}$ of protocol templates for these agents, and a mapping $\kappa$ associating, for each agent $i$, a formula $\kappa(x) = K_i\phi$ of $\mbox{CTLK}^+$ to each template variable $x$ in $\mathtt{P}_i$. Given an environment $\Env$, write ${\cal S}^-(\Env)$ for the epistemic protocol specification $ \langle Ags, \Env , \{\mathtt{P}_i\}_{i\in Ags}, \Spec_\kappa\rangle$ obtained from these components. Say that $\Env$ \emph{fits} a tuple $({\cal S}^-,\theta, o,x)$ consisting of ${\cal S}^-$ as above, a substitution $\theta$ assigning a boolean formula to a subset of the template variables in $\mathtt{P}$, an observation $o$ and a variable $x$, if $\Env$ contains all actions used in $\mathtt{P}$, $o$ is an observation in $\Env$ of the agent $i$ such that $\mathtt{P}_i$ contains $x$, and for each $x$ such that $\theta(x)$ is defined, the formula $\theta(x)$ is local in $\Env$ to the agent $i$ such that $\mathtt{P}_i$ contains $x$. Given ${\cal S}^-= \langle Ags, \mathtt{P}, \kappa \rangle$ and $\theta$, $o$ and $x$, define $EC_{({\cal S}^-,\theta,o,x)}$ to be the set $$ \{\Env ~|~ \Env \text{ fits }({\cal S}^-,\theta,o,x) \text{ and } \mathcal{I}({\cal S}^-(\Env),\theta), o \models \kappa (x) \} ~.$$ Then the environment-complexity of an approximation scheme $\mathcal{I}({\cal S}, \theta)$ is the maximal complexity of the problem of deciding the sets $EC_{({\cal S}^-,\theta,o,x)}$ over all choices of ${\cal S}^-$, $\theta$, $o$ and~$x$. We note that even though we have allowed perfect recall and/or perfect information in the strategy spaces used by the approximation, when we model check in the system generated by the approximation, knowledge operators are handled using the usual observational (imperfect recall, imperfect information) semantics.
The stronger capabilities of the strategies are used to increase the size of the strategy space in order to weaken the approximation. (Model checking with respect to perfect recall, in particular, would \emph{increase} the complexity of the model checking problem, whereas we are seeking to decrease its complexity.) It turns out that several of the approximation schemes that are closest to the final system synthesized (which would give the knowledge-based program semantics) share with the knowledge-based program semantics the disadvantage of being intractable. These are given in the following result. \begin{theorem} The approximation schemes $\mathcal{I}_{ii,ir,sc}$, $\mathcal{I}_{ii,pr,sc}$, and $\mathcal{I}_{pi,ir,sc}$ have coNP-hard environment complexity, even for a single agent. \end{theorem} Each of these intractable cases uses substitution consistent strategies and uses either imperfect recall or imperfect information. The proofs vary, but one of the key reasons for complexity in the imperfect recall cases is that the strategy must behave the same way each time it reaches a state. Intuitively, this means that we can encode existential choices from an NP-hard problem using the behaviour of a strategy at a state in this case. In the case of $\mathcal{I}_{ii,pr,sc}$, we use obligations on multiple branches indistinguishable to the agent to force consistency of independent guesses representing the same existential choice. All the remaining approximation schemes, it turns out, are tractable: \begin{theorem} The approximation schemes $\mathcal{I}^\top$, $\mathcal{I}_{ii,ir,nsc}$, $\mathcal{I}_{pi,pr,sc}$, $\mathcal{I}_{pi,ir,nsc}$, $\mathcal{I}_{ii,pr,nsc}$ and $\mathcal{I}_{pi,pr,nsc}$ have environment complexity in PTIME. \end{theorem} The reasons are varied, but there are close connections to some known results.
The scheme $\mathcal{I}^\top$ effectively builds a new finite state environment from the environment and protocol by allowing some transitions that would normally be disabled by the protocol, so its model checking problem reduces to an instance of CTLK model checking, which is in PTIME by a mild extension of the usual CTL model checking approach. It turns out, moreover, by simulation arguments, that for model checking $\mbox{CTLK}^+$ formulas, the approximations $\mathcal{I}_{ii,ir,nsc}$ and $\mathcal{I}_{ii,pr,nsc}$ are equivalent to $\mathcal{I}^\top$, i.e., satisfy the same formulas at the same states, so the algorithm for $\mathcal{I}^\top$ also resolves these cases. The cases $\mathcal{I}_{pi,pr,sc}$ and $\mathcal{I}_{pi,pr,nsc}$ are very close to the problem of \emph{module checking} of universal $\mbox{CTL}$ formulas, which is known to be in PTIME \cite{KupfermanVW01}. The proof technique here involves an emptiness check on a tree automaton representing the space of perfect information, perfect recall strategies (either substitution consistent or not required to be so), intersected with an automaton representing the complement of the formula. The cases $\mathcal{I}_{pi,pr,nsc}$ and $\mathcal{I}_{pi,ir,nsc}$ can moreover be shown to be equivalent by means of simulation techniques, so the latter also falls into PTIME. The demarcation between the PTIME and coNP-hard cases is depicted in Figure~\ref{fig:lattice}. This shows there are two best candidates for use as the approximation scheme underlying our synthesis approach. We desire an approximation scheme that is as close as possible to the knowledge-based program semantics, while remaining tractable. The diagram shows two orthogonal approximation schemes that are maximal amongst the PTIME cases, namely $\mathcal{I}^\top$ and $\mathcal{I}_{pi,pr,sc}$. The former generates a bushy approximation in that it relaxes substitution consistency.
The latter remains close to the original protocol by using substitution consistent strategies, but at the cost of allowing perfect information, perfect recall strategies. It is not immediately clear what the impact of these differences will be with respect to the quality of the implementations synthesized using these schemes, and we leave this as a question for future work. \section{Related Work} \label{sec:related} Relatively little work has been done on automated synthesis of implementations of knowledge-based programs or of sound local proposition specifications, particularly with respect to the observational semantics we have studied in this paper. In addition to the works already cited above, some papers \cite{Meyden96fst,Meyden96pricai,meydenvardi,MeydenWilke05,BozianuDF14} have studied the complexity of synthesis with respect to specifications in temporal epistemic logic using the synchronous perfect recall semantics. A symbolic implementation for knowledge-based programs that run only a finitely bounded number of steps under a clock or perfect recall semantics for knowledge is developed in \cite{HMtark13}. There also exists a line of work that applies knowledge-based approaches and model checking techniques to problems in discrete event control, e.g., \cite{BensalemPS10,GrafPQ12,KatzPS11}. In general, the focus of these works is more specific than ours (e.g., in restricting to synthesis for safety properties, rather than our quite general temporal epistemic specifications) but there is a similar use of monotonicity. It would be interesting to apply our techniques in this area and conduct a comparison of the results. \section{Conclusion} \label{sec:concl} In this paper we have proposed an ordered semantics for sound local proposition epistemic protocol specifications, and analyzed the complexity of a model checking problem required to implement the approach, for a number of approximation schemes.
This leads to the identification of two optimal approximation schemes, $\mathcal{I}^\top$ and $\mathcal{I}_{pi,pr,sc}$, with respect to which the model checking problem has PTIME complexity in an explicit state representation. A number of further steps are required to obtain a practical framework for synthesis. Ultimately, we would like to be able to implement synthesis using symbolic techniques, so that it can also be practicably carried out for specifications in which the environment is given implicitly using program-like representations, rather than by means of an explicit enumeration of states. The complexity analysis in the present paper develops an initial understanding of the nature of the model checking problems that may be helpful in developing symbolic implementations. In the case of the approximation scheme $\mathcal{I}^\top$, in fact, the associated model checking problem amounts essentially to $\mbox{CTLK}$ model checking in a transformed model, for which symbolic model checking techniques are well understood. In work in progress, we have developed an implementation of this case, and we will report on our experimental findings elsewhere. In the case of the approximation $\mathcal{I}_{pi,pr,sc}$, the model checking problem is more akin to module checking, for which symbolic techniques are less well studied. This case represents an interesting question for future research, as does the question of how the implementations obtained in practice from these tractable approximations differ. Our examples in this paper give some initial data points suggesting both that the ordered approach is able to construct natural implementations for sound local proposition weakenings of knowledge-based programs that lack implementations, and that it can yield implementations of such weakenings that are in fact implementations of the original knowledge-based program. More case studies are required to understand how general these phenomena are in practice.
It would be interesting to find sufficient conditions under which the ordered approach is guaranteed to generate knowledge-based program implementations. \bibliographystyle{eptcs}
\section{Introduction} \indent Four dimensional quantum gravity~\cite{d}--\cite{bv} is one of the most interesting issues left in the development of quantum field theory. The big problem in 4D quantum gravity is that the naive perturbation theory breaks down. On the other hand, it is believed that quantum gravity in two dimensions is a well-defined quantum field theory~\cite{kpz}--\cite{kn}. Certain formulations of 2D quantum gravity have been solved exactly~\cite{kpz,dk}. This success in two dimensions has inspired many ideas on quantum gravity. Based on such ideas, the dynamics of the conformal mode in 4 dimensions has been studied by Antoniadis, Mazur and Mottola~\cite{am,amm1,amm2}. In this paper, we develop these ideas further, and re-investigate four dimensional quantum gravity including the traceless mode. One of the most important ideas for defining quantum gravity in a generally coordinate invariant way is background-metric independence. The original expression of quantum gravity, defined by the functional integration over the dynamical metric, is trivially invariant under any change of the non-dynamical background-metric. But, when the functional measures are re-expressed as ones defined on the background-metric, background-metric independence gives strong constraints on the theory. Background-metric independence includes conformal invariance, which is just the key ingredient used to solve 2D quantum gravity exactly~\cite{dk}. As stressed in ref.~\cite{h}, conformal invariance is a purely quantum symmetry realized just when gravity is quantized, which does not always require the classical theory to be conformally invariant. Furthermore, this idea is independent of dimensions. Naively, it is difficult to imagine that a conformally invariant theory is not well-defined. Therefore we think that even in 4 dimensions quantum gravity is well-defined if we formulate it in a background-metric independent way.
In two dimensions it is enough to consider conformal invariance~\cite{dk,h}, while in four dimensions it is necessary to consider background-metric independence for the traceless mode as well as the conformal mode. In four dimensions the measure induces an action with 4 derivatives. So we think that the 4-th order action is rather natural in 4 dimensions~\cite{dp,wp} and the Einstein-Hilbert action should be treated like a mass term~\cite{s}--\cite{bc}, where the mass squared is the inverse of the gravitational coupling constant. The classical limit is then given in the large mass limit. The aim of this paper is to give a proper definition of 4D quantum gravity. In the next section we give general arguments about background-metric independence before going to concrete calculations. In Sect.3 we discuss the induced action for the conformal mode in general cases. We here pay attention to the special property of D-th order operators in D dimensions~\cite{h}. The argument for $D=4$ is used in an essential way when we evaluate the measure for gravitational fields. After giving some remarks on the measures of matter fields in Sect.4, we evaluate the measure of the gravitational field in Sect.5. We then introduce the dimensionless self-coupling constant $t$ for the traceless mode and consider the perturbation theory in $t$~\cite{kn}. The conformal mode is treated in a non-perturbative way. We discuss a model where the measure can be evaluated exactly in the $t \rightarrow 0$ limit. The model in the limit essentially corresponds to the one studied by Antoniadis, Mazur and Mottola~\cite{amm1}, though their treatment of the $R^2$-term is different from ours. To evaluate the $t$-dependence we give an ansatz based on the background-metric independence for the traceless mode. It is solved in a self-consistent manner. In Sect.6 we give some comments on scaling operators in 4 dimensions.
\section{General Arguments} \setcounter{equation}{0} \indent Quantum gravity is defined by the functional integral over the metric field as follows: \begin{equation} Z = \int \frac{[g^{-1}dg]_g [df]_g}{\hbox{vol(diff.)}} \exp \Bigl[-I_{CL}(f,g)\Bigr] ~, \end{equation} where $g$ is the metric field restricted to $g_{\mu\nu}=g_{\nu\mu}$ and $\hbox{vol(diff.)}$ is the gauge volume for diffeomorphisms. $f$ is a matter field discussed in Sect.4. In this paper we consider scalar and gauge fields. The functional measure for integration over the metric is defined by \begin{equation} <\delta g, \delta g>_g = \int d^D x \hbox{$\sqrt g$}g^{\alpha\beta}g^{\gamma\delta} (\delta g_{\alpha\gamma} \delta g_{\beta\delta}+u\delta g_{\alpha\beta}\delta g_{\gamma\delta})~, \label{2mg} \end{equation} where $u>-1/D$ by positive definiteness of the norm. This definition is rather symbolic because the measure depends explicitly on the dynamical variable $g$. The aim of this paper is to rewrite the measure as one defined on the non-dynamical background-metric, as in usual field theories. Decompose the metric into the conformal mode $\phi$, the traceless mode $h$ and the background-metric ${\hat g}$ as follows: \begin{equation} g_{\mu\nu} = \hbox{\large \it e}^{2\phi}{\bar g}_{\mu\nu} \end{equation} and \begin{equation} {\bar g}_{\mu\nu}=\bigl( {\hat g} \hbox{\large \it e}^h \bigr)_{\mu\nu} = {\hat g}_{\mu\lambda} \bigl( \delta^{\lambda}_{~\nu} + h^{\lambda}_{~\nu} + \frac{1}{2} (h^2 )^{\lambda}_{~\nu} + \cdots \bigr) ~, \end{equation} where $tr(h) = h^{\mu}_{~\mu}=0$. An arbitrary variation of the metric is given by \begin{equation} \delta g_{\mu\nu} = 2 \delta\phi g_{\mu\nu} + g_{\mu\lambda}\bigl( \hbox{\large \it e}^{-h}\delta \hbox{\large \it e}^h \bigr)^{\lambda}_{~\nu} ~.
\end{equation} Since $tr ( \hbox{\large \it e}^{-h}\delta \hbox{\large \it e}^h ) = \int^1_0 ds ~tr ( \hbox{\large \it e}^{-sh}\delta h\hbox{\large \it e}^{sh} )=0$, the variation of the conformal mode and that of the traceless mode are orthogonal in the functional space defined by the norm (\ref{2mg}). Therefore the measure of the metric can be decomposed as \begin{equation} \frac{[g^{-1}dg]_g}{\hbox{vol(diff.)}} = \frac{[d\phi]_g [\hbox{\large \it e}^{-h}d \hbox{\large \it e}^h]_g}{\hbox{vol(diff.)}}~, \end{equation} where the norms for the conformal mode and the traceless mode are defined respectively by \begin{eqnarray} && <\delta \phi, \delta \phi>_g = \int d^D x \sqrt{g} (\delta \phi )^2 ~, \label{2mp} \\ && <\delta h , \delta h >_g = \int d^D x \sqrt{g} ~ tr ( \hbox{\large \it e}^{-h}\delta \hbox{\large \it e}^h )^2 ~. \label{2mh} \end{eqnarray} Let us rewrite the functional measures defined on the dynamical metric $g$ into those defined on the non-dynamical background-metric ${\hat g}$, as in usual quantum field theories. First consider the conformal mode dependence of the measures. The partition function will be equivalently expressed as \begin{equation} Z = \int \frac{[d\phi]_{{\hat g}}[\hbox{\large \it e}^{-h}d\hbox{\large \it e}^h]_{{\hat g}}[df]_{{\bar g}}}{ \hbox{vol(diff.)}} \exp \Bigl[ -S(\phi,{\bar g})-I_{CL}(f,g) \Bigr] ~, \end{equation} where $S$ is the action for the conformal mode induced from the measures. It is worth making some remarks on this expression. The first is that we make no change to the classical action here: the induced action is purely the contribution from the measures. The second is that the measures of the metric fields are defined on the background metric ${\hat g}$ because of $\det {\bar g} = \det {\hat g}$, while for matter fields they in general depend on the traceless mode explicitly, so that they are defined on the metric ${\bar g}$.
Originally the partition function is defined by the metric $g=\hbox{\large \it e}^{2\phi}{\bar g}$ so that the theory should be invariant under the simultaneous changes~\cite{dk}: \begin{equation} {\bar g} \rightarrow \hbox{\large \it e}^{2\omega}{\bar g} ~, \qquad \phi \rightarrow \phi-\omega~. \label{2sp} \end{equation} In order that the theory is invariant under these changes, the action $S$ should in general satisfy the following transformation law: \begin{equation} S(\phi-\omega,\hbox{\large \it e}^{2\omega}{\bar g})= S(\phi,{\bar g})-R(\omega,\phi,{\bar g}) ~. \label{2wz} \end{equation} The measure is then transformed as \begin{eqnarray} && [d\phi]_{e^{2\omega}{\hat g}} [\hbox{\large \it e}^{-h}d\hbox{\large \it e}^h]_{e^{2\omega}{\hat g}} [df]_{ e^{2\omega}{\bar g}} \nonumber \\ && = [d\phi]_{{\hat g}}[\hbox{\large \it e}^{-h}d\hbox{\large \it e}^h]_{{\hat g}}[df]_{{\bar g}} \exp \Bigl[ -R(\omega,\phi,{\bar g}) \Bigr] ~. \end{eqnarray} Here note that the measure $[d\phi]_{{\hat g}}$ is invariant under a local shift $\phi \rightarrow \phi-\omega$. Because of this property the invariance under the changes (\ref{2sp}) means the invariance under the conformal change of the background: ${\hat g} \rightarrow \hbox{\large \it e}^{2\omega}{\hat g}$. In this paper we consider the case of $R(\omega,\phi,{\bar g})=S(\omega,{\bar g})$, which is called the Wess-Zumino condition~\cite{wz}. We make some comments on this particular case in the context of scalar field. Explicit form of such an action is given in the next section. Next consider the background-metric independence for the traceless mode. The theory should be invariant under the simultaneous changes \begin{equation} {\hat g} \rightarrow {\hat g}\hbox{\large \it e}^b ~, \qquad \hbox{\large \it e}^h \rightarrow \hbox{\large \it e}^{-b}\hbox{\large \it e}^h ~, \label{2sh} \end{equation} where $tr (b) =0$, which preserves the combination ${\bar g}= {\hat g} \hbox{\large \it e}^h$. 
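As a simple consistency check (not stated explicitly in the text, but an immediate consequence of eq.~(\ref{2wz}) in the Wess-Zumino case $R(\omega,\phi,{\bar g})=S(\omega,{\bar g})$), composing two successive conformal shifts, first by $\omega_2$ and then by $\omega_1$, yields the familiar cocycle condition
\begin{equation}
S(\omega_1+\omega_2,{\bar g}) = S(\omega_2,{\bar g}) + S(\omega_1,\hbox{\large \it e}^{2\omega_2}{\bar g}) ~,
\end{equation}
which is the integrability condition that any induced action satisfying the Wess-Zumino condition must obey.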
The measure for the matter field can be rewritten in the form \begin{equation} [df]_{{\bar g}}=[df]_{{\hat g}} \hbox{\large \it e}^{-W(e^h ,{\hat g})} ~, \end{equation} where the induced action for the traceless mode should satisfy the Wess-Zumino condition~\cite{wz} \begin{equation} W(\hbox{\large \it e}^{-b}\hbox{\large \it e}^h ,{\hat g}\hbox{\large \it e}^b )=W(\hbox{\large \it e}^h ,{\hat g})- W(\hbox{\large \it e}^b ,{\hat g})~. \label{2w} \end{equation} The explicit form of $W$ is discussed in Sect.4. Note that the measure $[\hbox{\large \it e}^{-h}d\hbox{\large \it e}^h]_{{\hat g}}$ is left-invariant under the change $\hbox{\large \it e}^h \rightarrow \hbox{\large \it e}^{-b}\hbox{\large \it e}^h$, so that the theory becomes invariant under the change of the background: ${\hat g} \rightarrow {\hat g}\hbox{\large \it e}^b$. Thus the theory becomes invariant under any change of the background-metric. This is reasonable because the background-metric is quite artificial, so the theory should not depend on how the background-metric is chosen. \section{D-th Order Operators in D Dimensions} \setcounter{equation}{0} \indent Before evaluating the measure of the gravitational field, we first discuss the general case. Consider $N$ scalar fields $\varphi_A \quad (A=1,\cdots, N)$, which have an action with $2n$-th derivatives in D dimensions: \begin{equation} I(\varphi,g)=\frac{1}{2(4\pi)^{D/2}}\int d^D x \hbox{$\sqrt g$} \varphi_A D^{(n)}_{AB}\varphi_B ~, \label{3ia} \end{equation} where $D^{(n)}_{AB}= (-\Box)^n \delta_{AB} + \Pi_{AB}$ is a covariant operator. $\Pi_{AB}$ is a lower-derivative matrix operator and $\Box=\nabla^{\mu}\nabla_{\mu}$. Let us calculate the induced action $S(\phi, {\bar g})$ defined by the relation \begin{equation} \int [d\varphi]_g \hbox{\large \it e}^{-I(\varphi ,g)} = \hbox{\large \it e}^{-S(\phi, {\bar g})} \int [d\varphi]_{{\hat g}} \hbox{\large \it e}^{-I(\varphi ,g)}~, \label{3a} \end{equation} where the functional measure on the l.h.s.
is defined by \begin{equation} <\delta\varphi, \delta\varphi>_g = \int d^D x \hbox{$\sqrt g$} \delta\varphi_A \delta\varphi_A ~. \end{equation} The measure on the r.h.s. is defined by replacing the metric determinant factor $\sqrt{g}$ with $\sqrt{{\hat g}}$; note, however, that the action $I$ on the r.h.s. depends on $g$, not on ${\bar g}$. Therefore the argument can also be applied to a theory that is not conformally invariant. {}From the definition (\ref{3a}), the variation of the induced action for the conformal mode is given by \begin{eqnarray} \delta_{\phi} S(\phi,{\bar g}) &=& -\delta_{\phi} \log \hbox{$\det^{-1/2}$} D^{(n)} +\delta_{\phi} \log \hbox{$\det^{-1/2}$} {\cal D}^{(n)} \\ &=& \frac{1}{2} \int^{\infty}_{\epsilon}ds Tr \Bigl( \delta_{\phi}D^{(n)} \hbox{\large \it e}^{-sD^{(n)}} \Bigr) -\frac{1}{2} \int^{\infty}_{\epsilon}ds Tr \Bigl( \delta_{\phi}{\cal D}^{(n)} \hbox{\large \it e}^{-s{\cal D}^{(n)}} \Bigr) ~, \nonumber \label{3e} \end{eqnarray} where ${\cal D}^{(n)}=\hbox{\large \it e}^{D\phi}D^{(n)}$ is a non-covariant operator and $\epsilon = 1/L^{2n}$, where $L \rightarrow \infty$ is a cutoff. The variation of the $2n$-th order operator can be written in the form $\delta_{\phi}D^{(n)}=-2n\delta\phi D^{(n)} +\delta K$, where $\delta K$ depends on the details of the lower derivative terms. The variation of ${\cal D}^{(n)}$ is given by $\delta_{\phi} {\cal D}^{(n)}=(D-2n)\delta\phi {\cal D}^{(n)}+ \hbox{\large \it e}^{D\phi}\delta K$.
Using these variations we get the following expression: \begin{eqnarray} \delta_{\phi} S(\phi, {\bar g}) & = & -n Tr \Bigl( \delta\phi \hbox{\large \it e}^{-\epsilon D^{(n)}} \Bigr) + \frac{1}{2} Tr \Bigl( \delta K D^{(n)-1} \Bigr) \nonumber \\ && +(n-D/2) Tr \Bigl( \delta\phi \hbox{\large \it e}^{-\epsilon {\cal D}^{(n)}} \Bigr) - \frac{1}{2} Tr \Bigl( \hbox{\large \it e}^{D\phi}\delta K {\cal D}^{(n)-1} \Bigr) \nonumber \\ & = & -n Tr \Bigl( \delta\phi \hbox{\large \it e}^{-\epsilon D^{(n)}} \Bigr) +(n-D/2) Tr \Bigl( \delta\phi \hbox{\large \it e}^{-\epsilon {\cal D}^{(n)}} \Bigr) ~. \end{eqnarray} The last equality is proved by using the relation between the Green functions, $<x| {\cal D}^{(n)-1} |x^{\prime}>_{{\bar g}} = <x| D^{(n)-1} |x^{\prime}>_g$, such that \begin{eqnarray} && Tr \Bigl( \hbox{\large \it e}^{D\phi} \delta K {\cal D}^{(n)-1} \Bigr) = tr \int d^D x \sqrt{{\hat g}} \hbox{\large \it e}^{D\phi} \delta K <x|{\cal D}^{(n)-1}|x>_{{\bar g}} \nonumber \\ && \quad = tr \int d^D x \sqrt{g} \delta K <x|D^{(n)-1}|x>_g = Tr \Bigl( \delta K D^{(n)-1} \Bigr) ~, \end{eqnarray} where $tr$ runs over the indices $A$, $B$. For $D=2n$, the expression simplifies. The non-covariant part vanishes, so that the variation of the induced action is written in terms of the covariant quantity ${\cal H}^{(n)}(x,\epsilon)=<x|\hbox{\large \it e}^{-\epsilon D^{(n)}}|x>_g$. Furthermore, in this case, the induced action $S(\phi, {\bar g})$ satisfies the Wess-Zumino condition. This is proved as follows. Let us apply the simultaneous changes (\ref{2sp}) to both sides of the definition (\ref{3a}). The l.h.s. is invariant under the changes, so that we obtain the following relation: \begin{equation} \hbox{\large \it e}^{-S(\phi-\omega, e^{2\omega}{\bar g})} \int [d\varphi]_{e^{2\omega}{\hat g}} \hbox{\large \it e}^{-I(\varphi,g)} = \hbox{\large \it e}^{-S(\phi,{\bar g})} \int [d\varphi]_{{\hat g}} \hbox{\large \it e}^{-I(\varphi,g)} ~.
\end{equation} Now define the action $R(\omega,\phi,{\bar g})$ by the relation \begin{equation} \int [d\varphi]_{e^{2\omega}{\hat g}} \hbox{\large \it e}^{-I(\varphi,g)} = \hbox{\large \it e}^{-R(\omega,\phi,{\bar g})} \int [d\varphi]_{{\hat g}} \hbox{\large \it e}^{-I(\varphi,g)} ~. \end{equation} Then we obtain the general relation (\ref{2wz}). Next consider the variation of $R(\omega,\phi,{\bar g})$ w.r.t. the conformal mode $\phi$, which is given by \begin{equation} \delta_{\phi} R(\omega,\phi,{\bar g}) = - \delta_{\phi} \log \hbox{$\det^{-1/2}$} {\cal D}^{(n)}_{\omega} + \delta_{\phi} \log \hbox{$\det^{-1/2}$} {\cal D}^{(n)} ~, \end{equation} where ${\cal D}^{(n)}_{\omega}= \hbox{\large \it e}^{-D\omega}{\cal D}^{(n)}$ and ${\cal D}^{(n)}$ has been defined before. In the same way as discussed above we obtain the following expression: \begin{equation} \delta_{\phi} R(\omega,\phi,{\bar g}) = (D/2 - n) \Bigl[ Tr \Bigl(\delta\phi \hbox{\large \it e}^{-\epsilon {\cal D}^{(n)}_{\omega}} \Bigr) -Tr \Bigl(\delta\phi \hbox{\large \it e}^{-\epsilon {\cal D}^{(n)}} \Bigr) \Bigr] ~, \end{equation} where we use the relation between the Green functions: $<x| {\cal D}^{(n)-1}_{\omega} |x^{\prime}>_{e^{2\omega}{\bar g}}= <x| {\cal D}^{(n)-1} |x^{\prime}>_{{\bar g}}$. Therefore, in the case of $D=2n$, the action is independent of $\phi$ such that $R(\omega,\phi,{\bar g})=R(\omega,{\bar g})$. {}From the condition at $\phi=\omega$, the action $R(\omega,{\bar g})$ is nothing but $S(\omega,{\bar g})$. Thus we have proved that the induced action $S(\phi,{\bar g})$ of $D=2n$ defined by the relation (\ref{3a}) satisfies the Wess-Zumino condition. In two dimensions we consider the usual second order operator. It is well known that the finite term of the heat kernel expansion for ${\cal H}^{(1)}(x,\epsilon)$ is given by the scalar curvature. Thus the integrated action is given by the Liouville action even though the classical theory is not conformally invariant~\cite{h}.
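For illustration, the $\phi$-integration can be carried out explicitly in two dimensions. Writing the overall normalization as $k$ (fixed by the matter content) and using $\sqrt{g}R = \sqrt{{\bar g}}\bigl( {\bar R} - 2\stackrel{-}{\Box}\phi \bigr)$ for $g=\hbox{\large \it e}^{2\phi}{\bar g}$, we have \begin{equation} S(\phi,{\bar g}) = -\frac{k}{4\pi} \int d^2 x \int^{\phi}_0 \delta\sigma \sqrt{g} R ~\Big|_{g=e^{2\sigma}{\bar g}} = -\frac{k}{4\pi} \int d^2 x \sqrt{{\bar g}} \bigl( {\bar R}\phi - \phi \stackrel{-}{\Box} \phi \bigr) ~, \end{equation} which is just the Liouville action.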
In four dimensions we must consider the 4-th order operator. The induced action is then given by integrating the covariant quantity ${\cal H}^{(2)}(x,\epsilon)$ over the conformal mode. {}From the general argument by Duff~\cite{duff}, such a quantity, or what is called the trace anomaly, depends only on two constants $a$ and $b$, so that the induced action is given by \begin{eqnarray} S(\phi,{\bar g}) &=& - \frac{1}{(4\pi)^2} \int d^4 x \int^{\phi}_0 \delta\phi \sqrt{g} ~{\rm a}^{(2)}_2 \nonumber \\ &=& \frac{1}{(4\pi)^2} \int d^4 x \int^{\phi}_0 \delta\phi \sqrt{g} \biggl[ a \biggl( F+\frac{2}{3}\Box R \biggr) + b G \biggr] ~, \end{eqnarray} where ${\rm a}^{(2)}_2$ is the finite term of ${\cal H}^{(2)}(x,\epsilon)$ defined in eq. (\ref{bh}). The constant of integration is determined by the condition $S(\phi=0, {\bar g})=0$ because both sides of the functional-integral relation coincide at $\phi=0$. $F$ and $G$ are the square of the Weyl tensor and the Euler density, respectively: \begin{eqnarray} F &=& R_{\mu\nu\lambda\sigma}R^{\mu\nu\lambda\sigma}-2R_{\mu\nu}R^{\mu\nu} +\frac{1}{3}R^2 ~, \label{3f} \\ G &=& R_{\mu\nu\lambda\sigma}R^{\mu\nu\lambda\sigma}-4R_{\mu\nu}R^{\mu\nu} +R^2 ~. \label{3g} \end{eqnarray} The quantities $F$, $G$ and $\Box R$ are separately integrable w.r.t. the conformal mode~\cite{r}. It is useful to consider the following combination~\cite{r,am,amm1}: \begin{equation} G-\frac{2}{3}\Box R = \hbox{\large \it e}^{-4\phi}\biggl( 4 {\bar \Delta}_4 \phi +{\bar G} -\frac{2}{3}\stackrel{-}{\Box} {\bar R} \biggr) ~, \end{equation} where $\Delta_4$ is the conformally covariant 4-th order operator defined by \begin{equation} \Delta_4 = \Box^2 + 2 R^{\mu\nu}\nabla_{\mu}\nabla_{\nu} -\frac{2}{3}R \Box + \frac{1}{3}(\nabla^{\mu}R)\nabla_{\mu} \label{3d} \end{equation} which satisfies $\Delta_4 = \hbox{\large \it e}^{-4\phi}{\bar \Delta}_4$.
The induced action then becomes \begin{eqnarray} && S(\phi, {\bar g}) = \frac{1}{(4\pi)^2} \int d^4x \sqrt{{\hat g}} \biggl[ a {\bar F} \phi +2b \phi {\bar \Delta}_4 \phi +b \Bigl( {\bar G}-\frac{2}{3} \stackrel{-}{\Box} {\bar R} \Bigr) \phi \biggr] \nonumber \\ && \qquad\qquad -\frac{1}{(4\pi)^2}\frac{a+b}{18} \int d^4 x \Bigl( \sqrt{g} R^2 -\sqrt{{\hat g}}{\bar R}^2 \Bigr) ~. \label{3s} \end{eqnarray} This action indeed satisfies the Wess-Zumino condition, which can be proved in general, provided the integrand ${\rm a}^{(n)}_2$ is integrable as well as covariant, as follows: \begin{eqnarray} && S(\phi -\omega, \hbox{\large \it e}^{2\omega}{\bar g}) = -\frac{1}{(4\pi)^2}\int d^4 x \int^{\phi-\omega}_0 \delta \sigma \sqrt{{\hat g}} \hbox{\large \it e}^{4(\sigma+\omega)} {\rm a}^{(n)}_2 |_{g=e^{2(\sigma+\omega)}{\bar g}} \nonumber \\ && ~ = -\frac{1}{(4\pi)^2}\int d^4 x \int^{\phi}_{\omega} \delta \sigma \sqrt{{\hat g}} \hbox{\large \it e}^{4\sigma} {\rm a}^{(n)}_2 |_{g=e^{2\sigma}{\bar g}} = S(\phi,{\bar g})-S(\omega,{\bar g}) ~. \end{eqnarray} The last equality is proved by dividing the integral region $[\omega,\phi]$ into $[0,\phi]-[0,\omega]$. In the above case the first term rather trivially satisfies the Wess-Zumino condition. In the second term the $\sqrt{g}R^2$-term itself does not satisfy the Wess-Zumino condition, but the above combination $\sqrt{g} R^2 -\sqrt{{\hat g}}{\bar R}^2$ does satisfy it. The results of this section are very important when we discuss the contributions from the measures of gravity in Sect.5. \section{The Measures of Matter Fields} \setcounter{equation}{0} \indent In this section we briefly discuss matter field contributions to the induced action. Matter field actions are constructed with at most second order derivatives of fields. As discussed in the previous section, such fields are rather special in 4 dimensions. We make some comments on the measures of the scalar field and the gauge field.
\subsection{Scalar fields} \indent Let us consider a scalar field coupled to the curvature as follows: \begin{equation} I_S (X,g) = \frac{1}{2} \frac{1}{(4\pi)^2} \int d^4 x \sqrt{g} \bigl( g^{\mu\nu}\partial_{\mu}X \partial_{\nu}X +\xi R X^2 \bigr) ~. \end{equation} {}From the arguments of the previous section, the variation of the induced action takes a non-covariant form in this case. At present we do not know whether such an integrand is integrable or not. Even if integrable, the integrated action does not satisfy the Wess-Zumino condition so that the theory becomes more complicated. So we only consider the conformally coupled scalar field with $\xi=1/6$, which we denote by $I_{CS}$. Instead of the relation (\ref{3a}), we use the following one: \begin{equation} \int [d X]_g \hbox{\large \it e}^{-I_{CS} (X ,g)} = \hbox{\large \it e}^{-\Gamma(\phi, {\bar g})} \int [d X]_{{\hat g}} \hbox{\large \it e}^{-I_{CS}(X ,{\bar g})}~. \end{equation} The difference is that the action $I_{CS}$ of the r.h.s. is defined on the metric ${\bar g}$, not on the metric $g$~\footnote{ Note that the conformal invariance of the scalar field in 4 dimensions is described by rescaling the scalar field as well as the metric as $I_{CS}(X,g)=I_{CS}({\bar X},{\bar g})$, where ${\bar X}=\hbox{\large \it e}^{\phi}X$. Thus \begin{equation} \int [dX]_{{\hat g}} \hbox{\large \it e}^{-I_{CS}(X,g)} \neq \int [dX]_{{\hat g}} \hbox{\large \it e}^{-I_{CS}(X,{\bar g})} = \int [d{\bar X}]_{{\hat g}} \hbox{\large \it e}^{-I_{CS}({\bar X},{\bar g})}~. \end{equation} }, so that the variation of $\Gamma$ is simply given in the form $\delta_{\phi} \Gamma(\phi,{\bar g})=-Tr(\delta\phi \hbox{\large \it e}^{-\epsilon D_{CS}} ) + \frac{1}{2} Tr (\delta K_{CS} D_{CS}^{-1})$, where $D_{CS}= -\Box +\frac{1}{6} R$ and $\delta K_{CS}$ is defined in the same way as discussed in Sect.3. This is nothing but the definition of the conformal anomaly. For conformal coupling the trace including $\delta K_{CS}$ vanishes.
As a result $\Gamma(\phi,{\bar g})$ is given in the form $S(\phi,{\bar g})$ defined in (\ref{3s}). The coefficients $a$ and $b$ in this case have already been calculated elsewhere~\cite{bd,duff}: \begin{equation} a_X= -\frac{N_X}{120}~, \qquad b_X= \frac{N_X}{360} ~, \label{4x} \end{equation} where $N_X$ is the number of conformally coupled scalar fields. \subsection{Gauge fields} \indent In this subsection we consider abelian gauge fields defined by the action \begin{equation} I_A(A_{\mu},g) = \frac{1}{4(4\pi)^2} \int d^4 x \sqrt{g} g^{\mu\lambda} g^{\nu\sigma} F_{\mu\nu} F_{\lambda\sigma} ~, \end{equation} where $F_{\mu\nu}=\nabla_{\mu}A_{\nu}-\nabla_{\nu}A_{\mu} = \partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}$. Gauge theory is classically conformally invariant in 4 dimensions, which is expressed as $I_A(A_{\mu},g)=I_A(A_{\mu},{\bar g})$, where the gauge field is not rescaled. The measure of the gauge field is defined by the norm \begin{equation} <\delta A,\delta A>_g = \int d^4 x \sqrt{g} g^{\mu\nu}\delta A_{\mu} \delta A_{\nu} ~. \end{equation} Unlike the case of the scalar field, it depends on both the conformal mode and the traceless mode. As for the conformal mode, it is well known that when we rewrite the measure on $g$ into the one on ${\bar g}$, we obtain the induced action (\ref{3s}) with the coefficients~\cite{bd,duff} \begin{equation} a_A = -\frac{N_A}{10}~, \qquad b_A = \frac{31N_A}{180} ~, \label{4a} \end{equation} where $N_A$ is the number of gauge fields. Now, consider the induced action for the traceless mode defined by \begin{equation} [dA_{\mu}]_{{\bar g}} = [dA_{\mu}]_{{\hat g}}\hbox{\large \it e}^{-W( e^h ,{\hat g})} ~. \label{4ma} \end{equation} Apply the simultaneous changes (\ref{2sh}) to both sides of (\ref{4ma}). The measure of the l.h.s. (and also the induced action for the conformal mode and the classical gauge action) is invariant under the changes, while the r.h.s.
becomes \begin{equation} [dA_{\mu}]_{{\hat g} e^b}\hbox{\large \it e}^{-W( e^{-b} e^h , {\hat g} e^b )} = [dA_{\mu}]_{{\hat g}}\hbox{\large \it e}^{-W( e^b,{\hat g}) -W( e^{-b}e^h , {\hat g} e^b )} ~, \label{4ma2} \end{equation} where we use the relation (\ref{4ma}) again with $h$ replaced by $b$. The r.h.s. of (\ref{4ma2}) must return to the original form, so that $W$ must satisfy the Wess-Zumino condition (\ref{2w}), which can be rewritten in a more familiar form by introducing the one form $(V_{\mu})^{\alpha}_{~\beta}={\hat g}^{\alpha\lambda} \partial_{\mu}{\hat g}_{\lambda\beta}$ and the notations $H=\hbox{\large \it e}^h$ and $B=\hbox{\large \it e}^b$ as follows: \begin{equation} W(B^{-1}H, V^B_{\mu}) = W(H, V_{\mu}) - W(B, V_{\mu}) \end{equation} where \begin{equation} V^B_{\mu}= ({\hat g} B)^{-1}\partial_{\mu}({\hat g} B) = B^{-1}V_{\mu} B +B^{-1}\partial_{\mu}B ~. \end{equation} The solution of the Wess-Zumino condition is well known~\cite{wz} and is given by \begin{equation} W(H,V_{\mu}) = \zeta \int^1_0 ds \int d^4 x ~ tr\bigl( h ~G(V^s_{\mu})\bigr) \end{equation} where $G(V_{\mu})$ is the non-abelian anomaly of the one form $V_{\mu}$ and \begin{equation} V^s_{\mu}= \hbox{\large \it e}^{-sh} V_{\mu} \hbox{\large \it e}^{sh} +\hbox{\large \it e}^{-sh}\partial_{\mu}\hbox{\large \it e}^{sh} ~. \end{equation} Thus what remains is to determine the overall coefficient $\zeta$, which we do not discuss further in this paper. \section{The Measures of Gravitational Fields} \setcounter{equation}{0} \indent In this section we consider the measures of the conformal and traceless modes of gravity. Henceforth we introduce the dimensionless self-coupling constant $t$ for the traceless mode as follows~\cite{kn} \begin{equation} {\bar g}_{\mu\nu} = ({\hat g}\hbox{\large \it e}^{th})_{\mu\nu} ~.
\end{equation} The classical action for the conformal mode is given by the $R^2$-action and that for the traceless mode is the Weyl action divided by the square of the coupling $t$, which is \begin{equation} I_G = \frac{1}{(4\pi)^2} \int d^4 x \sqrt{g} \Bigl( \frac{1}{t^2} F + c R^2 - m^2 R + \Lambda \Bigr) ~, \label{5ca} \end{equation} where $m^2 $ is the inverse of the gravitational constant and $\Lambda$ is the cosmological constant. In the flat background the 4-derivative parts of the Lagrangian have the form $\frac{1}{2} tr (h \partial^4 h) + 36 c \phi \partial^4 \phi + o(t)$. The presence of the Einstein-Hilbert term now gives rise to the tachyon problem at the classical level for $c>0$ and $\Lambda =0$~\cite{s}--\cite{bc}, but in the quantum theory the kinetic term of the conformal mode is induced from the measures, and we also consider the $\Lambda \neq 0$ case, so that this problem will disappear. The question of unitarity still remains to be clarified~\cite{s}--\cite{bc}. We here only stress that the theory is unitary at low energies and that we cannot avoid the 4-th order action if we are to ensure the background-metric independence. The coefficient $c$ is in general arbitrary, but for technical reasons it is fixed to a special value later. \subsection{The induced action in the $t \rightarrow 0$ limit} \subsubsection{Traceless mode} \indent As a first approximation we consider the $t \rightarrow 0$ limit. The metric ${\bar g}$ then reduces to the background metric ${\hat g}$ so that the matter field, the conformal mode and the traceless mode are decoupled from each other. So we can evaluate the contributions from the measures exactly. This approximation is nothing but the one adopted in~\cite{amm1}, though our treatment of the $R^2$-terms in eqs. (\ref{3s}) and (\ref{5ca}) is different from theirs. The difference affects the $t$-dependent contributions discussed in Sect.5.2. We first calculate the induced action from the measure of the traceless mode.
In the $t \rightarrow 0$ limit the measure (\ref{2mh}) divided by $t$ reduces to $[dh]_{g^{\prime}}$ defined by the norm $<\delta h, \delta h>_{g^{\prime}}=\int \sqrt{g^{\prime}}tr (\delta h)^2$, where $g^{\prime}_{\mu\nu}=\hbox{\large \it e}^{2\phi}{\hat g}_{\mu\nu}$. The action now becomes \begin{equation} I^{(0)}_G(h,g^{\prime}) = \frac{1}{(4\pi)^2} \int d^4 x \sqrt{g^{\prime}} \Bigl( \frac{1}{2} h_{\mu\nu} T(g^{\prime})^{\mu\nu}_{~~,\lambda\sigma} h^{\lambda\sigma} + c R^{\prime 2} -m^2 R^{\prime} +\Lambda \Bigr) ~, \end{equation} where $h_{\mu\nu}=g^{\prime}_{\mu\lambda}h^{\lambda}_{~\nu}$. To justify one-loop calculations we discard the linear term of $h$ in the expansion of the classical action by imposing the constraints $ R^{\prime}_{\mu\nu}= \frac{1}{4} g^{\prime}_{\mu\nu}R^{\prime} $ and $\nabla^{\prime}_{\mu} R^{\prime} =0 $. The induced action is calculated using the quantity ${\cal H}^{(2)}(x,\epsilon)$ for the operator $T$, or the one-loop divergence of $\det T$. To calculate the coefficients $a$ and $b$ in (\ref{3s}) we have to fix the gauge. The Lagrangian for the traceless part is described in the form \begin{equation} \frac{1}{2} \sqrt{g^{\prime}} h_{\mu\nu} T(g^{\prime})^{\mu\nu}_{~~,\lambda\sigma} h^{\lambda\sigma} = \frac{1}{2} \sqrt{g^{\prime}} h_{\mu\nu} T^{NS}(g^{\prime})^{\mu\nu}_{~~,\lambda\sigma} h^{\lambda\sigma} + \chi^{\mu}N(g^{\prime})_{\mu\nu}\chi^{\nu} ~, \end{equation} where $\chi^{\mu}=\nabla^{\prime\lambda}h^{\mu}_{~\lambda}$. The nonsingular operators $T^{NS}$ and $N$ are defined by eqs. (\ref{at}) and (\ref{an}). According to the standard procedure for the 4-th order operators~\cite{d,ft1,bc} we adopt the gauge-fixing condition $\chi^{\mu}=0$ and a gauge-fixing term such that the gauge-fixed action $I_G + I_{FIX}$ contains only the nonsingular operator $T^{NS}$.
Applying the general coordinate transformation $\delta h^{\mu\nu} =\nabla^{\prime\mu} \xi^{\nu} +\nabla^{\prime\nu} \xi^{\mu} - \frac{1}{2} (\nabla^{\prime}_{\lambda} \xi^{\lambda}) g^{\prime\mu\nu} $ to the gauge-fixing condition we obtain the ghost Lagrangian $\sqrt{g^{\prime}}\psi^{*\mu}M_{GH}(g^{\prime})_{\mu\nu}\psi^{\nu}$ with \begin{equation} M_{GH}(g^{\prime})_{\mu\nu} = \Box^{\prime} g^{\prime}_{\mu\nu} + \frac{1}{2} \nabla^{\prime}_{\mu}\nabla^{\prime}_{\nu} + R^{\prime}_{\mu\nu} ~. \end{equation} Then the contribution from the measure of the traceless mode can be derived by calculating the quantity \begin{eqnarray} \delta_{\phi} S(\phi, {\hat g}) &=& -\delta_{\phi} \log \frac{\det^{1/2} N(g^{\prime}) \det M_{GH}(g^{\prime})} {\det^{1/2} T^{NS}(g^{\prime})} ~\biggl|_{\hbox{kernel part}} \\ &=& -\frac{1}{(4\pi)^2} \int d^4 x~ \delta \phi \sqrt{g^{\prime}} \Bigl( {\rm a}^{(2)}_2 (T^{NS}) - {\rm a}^{(1)}_2(N) -2 {\rm a}^{(1)}_2(M_{GH}) \Bigr) ~, \nonumber \end{eqnarray} where ${\rm a}^{(n)}_2$ is defined in eq. (\ref{bh}). Using the formulae (\ref{bd}) and (\ref{bq}), we obtain the following quantities: \begin{eqnarray} {\rm a}^{(2)}_2(T^{NS}) &=& \frac{21}{10} R^{\prime}_{\mu\nu\lambda\sigma} R^{\prime\mu\nu\lambda\sigma} + \frac{29}{40} R^{\prime 2} ~, \\ {\rm a}^{(1)}_2(N) &=& -\frac{11}{180} R^{\prime}_{\mu\nu\lambda\sigma} R^{\prime\mu\nu\lambda\sigma} + \frac{161}{120} R^{\prime 2} ~, \\ {\rm a}^{(1)}_2(M_{GH}) &=& -\frac{11}{180} R^{\prime}_{\mu\nu\lambda\sigma} R^{\prime\mu\nu\lambda\sigma} + \frac{11}{45} R^{\prime 2} ~. \end{eqnarray} The combinations $F^{\prime}$ and $G^{\prime}$ are now described in the forms $ R^{\prime}_{\mu\nu\lambda\sigma} R^{\prime\mu\nu\lambda\sigma} - \frac{1}{6} R^{\prime 2}$ and $R^{\prime}_{\mu\nu\lambda\sigma} R^{\prime\mu\nu\lambda\sigma}$ respectively.
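Explicitly, under the above constraints the heat kernel coefficients combine as \begin{eqnarray} && {\rm a}^{(2)}_2(T^{NS}) - {\rm a}^{(1)}_2(N) -2 {\rm a}^{(1)}_2(M_{GH}) = \Bigl( \frac{21}{10}+\frac{11}{180}+\frac{22}{180} \Bigr) R^{\prime}_{\mu\nu\lambda\sigma} R^{\prime\mu\nu\lambda\sigma} + \Bigl( \frac{29}{40}-\frac{161}{120}-\frac{22}{45} \Bigr) R^{\prime 2} \nonumber \\ && \qquad = \frac{137}{60} R^{\prime}_{\mu\nu\lambda\sigma} R^{\prime\mu\nu\lambda\sigma} - \frac{199}{180} R^{\prime 2} = -a_h F^{\prime} - b_h G^{\prime} ~, \end{eqnarray} so that matching the two tensor structures gives $a_h /6 = -199/180$ and $-a_h -b_h = 137/60$.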
So we can determine the coefficients $a$ and $b$ of the induced action, which are given by~\cite{ft1,ft2} \begin{equation} a_h = -\frac{199}{30} ~, \qquad b_h = \frac{87}{20} ~. \label{5h} \end{equation} \subsubsection{Conformal mode} \indent As in the two dimensional cases~\cite{dk,h}, we assume that the contribution from the measure of the conformal mode is given in the form (\ref{3s}). The coefficients $a_{\phi}$ and $b_{\phi}$ are determined in a self-consistent way. Consider the conformal change of the background metric \begin{equation} {\hat g}_{\mu\nu} \rightarrow {\hat g}^{(\omega)}_{\mu\nu} =\hbox{\large \it e}^{2\omega}{\hat g}_{\mu\nu} ~. \label{5g} \end{equation} We then obtain the partition function \begin{equation} Z({\hat g}_{(\omega)})= \int [d\phi]_{{\hat g}_{(\omega)}} [dh]_{{\hat g}_{(\omega)}}[dX]_{{\hat g}_{(\omega)}} [dA]_{{\hat g}_{(\omega)}} \exp \Bigl[ -{\cal I}^{(0)}(X,A,h,\phi;{\hat g}_{(\omega)}) \Bigr] \label{5z} \end{equation} and \begin{equation} {\cal I}^{(0)}(X,A,h,\phi;{\hat g}_{(\omega)}) = S(\phi, {\hat g}_{(\omega)}) + I_{CS}(X,{\hat g}_{(\omega)}) +I_A(A,{\hat g}_{(\omega)}) + I^{(0)}_G(h, \hbox{\large \it e}^{2\phi}{\hat g}_{(\omega)}) ~. \label{5i} \end{equation} The coefficients of the induced action are given by $a=a_X +a_A + a_h +a_\phi$ and $b=b_X +b_A + b_h +b_\phi$. The $\omega$-dependence of the measures for $X$, $A_{\mu}$ and $h^{\mu}_{~\nu}$ can be obtained by repeating the previous calculations with $\phi$ replaced by $\omega$, which are given by the action $S(\omega,{\hat g})$ with the coefficients (\ref{4x}), (\ref{4a}) and (\ref{5h}) respectively. The contribution from the conformal mode is calculated by using the definition of the partition function above. The Einstein-Hilbert and the cosmological terms have dimensional parameters so that, when the 4-th order term exists, these terms do not contribute to the 4-th order induced action (\ref{3s}).
To justify the calculations we have to make the linear term of $\phi$ vanish. To do this, however, we would have to relate ${\hat F}_{(\omega)}$ and ${\hat G}_{(\omega)}$, so that we cannot determine the coefficients $a$ and $b$ for lack of information. Therefore we neglect the $\phi^3$-term of the total action. This can be carried out by canceling the $R^2$-terms between the classical action and the induced action by taking the value \begin{equation} c=\frac{1}{18}(a+b) ~. \end{equation} In this case we only calculate the quantity \begin{equation} \delta_{\omega}S(\omega,{\hat g}) = - \delta_{\omega} \log \hbox{$\det^{-1/2}$} {\hat \Delta}^{(\omega)}_4 = -\frac{1}{(4\pi)^2} \int d^4 x ~\delta \omega \sqrt{{\hat g}_{(\omega)}} {\rm a}^{(2)}_2 ({\hat \Delta}^{(\omega)}_4) ~. \end{equation} Using the formula (\ref{bd}) and the definition of $\Delta_4$ (\ref{3d}), we obtain \begin{equation} {\rm a}^{(2)}_2 ({\hat \Delta}^{(\omega)}_4) = \frac{1}{90} {\hat R}^{(\omega)}_{\mu\nu\lambda\sigma} {\hat R}_{(\omega)}^{\mu\nu\lambda\sigma} + \frac{1}{90} {\hat R}_{(\omega)}^2 ~. \end{equation} {}From this we get the values of the coefficients in the induced action~\cite{amm1}: \begin{equation} a_{\phi}= \frac{1}{15} ~, \qquad b_{\phi}= -\frac{7}{90} ~. \label{5p} \end{equation} \subsubsection{Background-metric independence at the $t \rightarrow 0$ limit} \indent Combining the results calculated before we can extract the $\omega$-dependence of the measure in the partition function (\ref{5z}). We thus obtain the expression \begin{equation} Z({\hat g}_{(\omega)})= \int [d\phi]_{{\hat g}}[dh]_{{\hat g}}[dX]_{{\hat g}}[dA]_{{\hat g}} \exp \Bigl[ -S(\omega,{\hat g}) -{\cal I}^{(0)}( X^{\omega},A,h,\phi;{\hat g}_{(\omega)}) \Bigr] ~, \end{equation} where $X^{\omega}=\hbox{\large \it e}^{-\omega}X$ such that $I_{CS}(X^{\omega},{\hat g}_{(\omega)})=I_{CS}(X,{\hat g})$.
The coefficients for the induced action $S(\omega,{\hat g})$ and also $S(\phi,{\hat g})$ in the action ${\cal I}^{(0)}$ are given by \begin{eqnarray} && a = -\frac{N_X}{120} -\frac{N_A}{10} -\frac{199}{30}+\frac{1}{15} ~, \label{5a} \\ && b = \frac{N_X}{360}+\frac{31N_A}{180} +\frac{87}{20}-\frac{7}{90} ~. \label{5b} \end{eqnarray} Since the measure $[d\phi]_{{\hat g}}$ is now invariant under a local shift, it turns out that, changing the variable as $\phi \rightarrow \phi -\omega$ and using the Wess-Zumino condition $S(\phi-\omega,{\hat g}_{(\omega)}) + S(\omega,{\hat g})=S(\phi,{\hat g})$, the partition function goes back to the original form defined on the metric ${\hat g}$. Thus we have proved $Z({\hat g}_{(\omega)})=Z({\hat g})$ in the $t \rightarrow 0$ limit. \subsection{The induced action for $t \neq 0$} \indent The background-metric independence for the traceless mode indicates that the $t$-dependence of the induced action, apart from $W(\hbox{\large \it e}^h,{\hat g})$~\footnote{This action does not affect the later calculations.}, should appear in the combination ${\bar g}={\hat g}\hbox{\large \it e}^{th}$ because the measure defined on the background metric itself is invariant under the simultaneous changes for the traceless mode (\ref{2sh}).
Now we assume the $t$-dependence of the partition function in the following form: \begin{equation} Z = \int \frac{[d\phi]_{{\hat g}}[\frac{1}{t}\hbox{\large \it e}^{-th}d\hbox{\large \it e}^{th}]_{{\hat g}} [dX]_{{\hat g}}[dA]_{{\bar g}}}{\hbox{vol(gauge)}} \exp \Bigl[ -{\cal I}(X,A,\phi;{\bar g}) \Bigr] ~, \label{5zz} \end{equation} where the total action is \begin{eqnarray} && {\cal I}(X,A,\phi;{\bar g}) \nonumber \\ && \quad = \frac{1}{(4\pi)^2} \int d^4 x \sqrt{{\hat g}} \biggl[ 2b(t) \phi {\bar \Delta}_4 \phi + a(t) {\bar F}\phi + b(t) \Bigl( {\bar G} - \frac{2}{3}\stackrel{-}{\Box} {\bar R} \Bigr) \phi \nonumber \\ && \qquad + \frac{1}{t^2} {\bar F} + \frac{1}{18}\bigl( a(t) + b(t) \bigr) {\bar R}^2 \biggr] + \frac{1}{(4\pi)^2} \int d^4 x \sqrt{g} (-m^2 R + \Lambda ) \nonumber \\ && \qquad + I_{CS}(X,{\bar g}) + I_A(A,{\bar g}) ~, \label{5ii} \end{eqnarray} where $a(t)=\sum_n a_n t^{2n}$ and $b(t)=\sum_n b_n t^{2n}$ with $a_0=a$ and $b_0=b$ given in (\ref{5a}) and (\ref{5b}) respectively. The coefficient in front of the classical $R^2$-action is now defined by the $t$-dependent value $c(t)=\frac{1}{18}(a(t)+b(t))$ so that the $R^2$-terms cancel out. Note that the ${\bar R}^2$-term in (\ref{3s}) remains in the action. Let us consider the conformal change of the background-metric (\ref{5g}). The $\omega$-dependences of the measures are now calculated as perturbations in $t$. The contributions from the matter fields have already been calculated in Sect.4. The gravitational contributions are evaluated using the total action defined above.
Expanding the action up to the $t^2$-order, the quadratic terms in the fields are given by \begin{eqnarray} && {\cal I}_2 ({\hat g}_{(\omega)}) = \frac{1}{(4\pi)^2} \int d^4 x \sqrt{{\hat g}_{(\omega)}} \biggl[ \frac{1}{2} h_{\mu\nu} \Bigl\{ T^{NS}({\hat g}_{(\omega)})^{\mu\nu}_{~~,\lambda\sigma} + ct^2 {\hat R}^{(\omega)} L({\hat g}_{(\omega)})^{\mu\nu}_{~~,\lambda\sigma} \Bigr\} h^{\lambda\sigma} \nonumber \\ && \qquad\qquad\qquad + 2(b+b_1 t^2) \phi {\hat \Delta}^{(\omega)}_4 \phi -4(a+b)t \phi {\hat R}_{(\omega)}^{\mu\lambda\nu\sigma} {\hat \nabla}^{(\omega)}_{\lambda}{\hat \nabla}^{(\omega)}_{\sigma} h_{\mu\nu} \nonumber \\ && \qquad\qquad\qquad + {\hat \nabla}_{(\omega)}^{\lambda}h^{\mu}_{~\lambda} \Bigl\{ N({\hat g}_{(\omega)})_{\mu\nu} + ct^2 (-{\hat \nabla}^{(\omega)}_{\mu}{\hat \nabla}^{(\omega)}_{\nu} +{\hat R}^{(\omega)} {\hat g}^{(\omega)}_{\mu\nu}) \Bigr\} {\hat \nabla}_{(\omega)}^{\sigma}h^{\nu}_{~\sigma} \nonumber \\ && \qquad\qquad\qquad -\frac{1}{3}a t \phi {\hat R}^{(\omega)} {\hat \nabla}^{(\omega)}_{\mu}{\hat \nabla}^{(\omega)}_{\nu} h^{\mu\nu} -\frac{2}{3}b t (\hBox_{(\omega)} \phi) {\hat \nabla}^{(\omega)}_{\mu}{\hat \nabla}^{(\omega)}_{\nu}h^{\mu\nu} \biggr] ~, \end{eqnarray} where $h_{\mu\nu}={\hat g}^{(\omega)}_{\mu\lambda}h^{\lambda}_{~\nu}$ and the second order operator $L^{\mu\nu}_{~~,\lambda\sigma}$ is defined by (\ref{al}). The conditions ${\hat R}^{(\omega)}_{\mu\nu}=\frac{1}{4} {\hat g}^{(\omega)}_{\mu\nu}{\hat R}^{(\omega)}$ and ${\hat \nabla}^{(\omega)}_{\mu} {\hat R}^{(\omega)}=0$ are imposed for the linear term of $h$ to vanish. Under the conditions, the quartic operator ${\hat \Delta}^{(\omega)}_4$ (\ref{3d}) reduces to the form $ \hBox_{(\omega)}^2 -\frac{1}{6}{\hat R}^{(\omega)} \hBox_{(\omega)}$. The gauge-fixing term is defined such that the highest derivative terms become diagonal.
To do this we rewrite the action ${\cal I}_2$ in the form \begin{eqnarray} && \frac{1}{(4\pi)^2} \int d^4 x \sqrt{{\hat g}_{(\omega)}} \biggl[ \frac{1}{2} h_{\mu\nu} \Bigl\{ T^{NS}({\hat g}_{(\omega)})^{\mu\nu}_{~~,\lambda\sigma} + ct^2 {\hat R}^{(\omega)} L({\hat g}_{(\omega)})^{\mu\nu}_{~~,\lambda\sigma} \Bigr\} h^{\lambda\sigma} \nonumber \\ && \qquad\quad + 2(b+b_1 t^2) \phi {\hat \Delta}^{(\omega)}_4 \phi + \frac{b^2 t^2}{6} \bigl( \phi \hBox_{(\omega)}^2 \phi -{\hat R}^{(\omega)} \phi\hBox_{(\omega)}\phi \bigr) \nonumber \\ && \qquad\quad -4(a+b)t \phi {\hat R}_{(\omega)}^{\mu\lambda\nu\sigma} {\hat \nabla}^{(\omega)}_{\lambda}{\hat \nabla}^{(\omega)}_{\sigma} h_{\mu\nu} -\frac{1}{3}(a +2b)t \phi {\hat R}^{(\omega)} {\hat \nabla}^{(\omega)}_{\mu}{\hat \nabla}^{(\omega)}_{\nu}h^{\mu\nu} \nonumber \\ && \qquad\quad + \Bigl( \chi_{(\omega)}^{\mu} +\frac{bt}{2} {\hat \nabla}_{(\omega)}^{\mu}\phi \Bigr) {\cal N}({\hat g}_{(\omega)})_{\mu\nu} \Bigl(\chi_{(\omega)}^{\nu} +\frac{bt}{2} {\hat \nabla}_{(\omega)}^{\nu}\phi \Bigr) \biggr] ~, \label{5e} \end{eqnarray} where $\chi_{(\omega)}^{\mu}={\hat \nabla}_{(\omega)}^{\lambda} h^{\mu}_{~\lambda}$ and \begin{equation} {\cal N}({\hat g}_{(\omega)})_{\mu\nu} = N({\hat g}_{(\omega)})_{\mu\nu} + ct^2 (-{\hat \nabla}^{(\omega)}_{\mu}{\hat \nabla}^{(\omega)}_{\nu} +{\hat R}^{(\omega)} {\hat g}^{(\omega)}_{\mu\nu}) ~. \end{equation} Thus we take the gauge-fixing term ${\cal I}_{FIX}$ such that the last term of the expression (\ref{5e}) disappears in the gauge-fixed action ${\cal I}_2 +{\cal I}_{FIX}$. This corresponds to taking the gauge-fixing condition $\chi_{(\omega)}^{\mu}+ \frac{bt}{2} {\hat \nabla}_{(\omega)}^{\mu}\phi =0$.
The general coordinate transformation $\delta g_{\mu\nu}= g_{\mu\lambda}\nabla_{\nu}\xi^{\lambda} + g_{\nu\lambda}\nabla_{\mu}\xi^{\lambda}$ is expressed as \begin{eqnarray} \delta \phi &=& \frac{1}{4} {\hat \nabla}^{(\omega)}_{\lambda}\xi^{\lambda} +\xi^{\lambda} {\hat \nabla}^{(\omega)}_{\lambda}\phi ~, \\ t\delta h^{\mu}_{~\nu} &=& {\hat \nabla}_{(\omega)}^{\mu} \xi_{\nu} +{\hat \nabla}^{(\omega)}_{\nu} \xi^{\mu} - \frac{1}{2} \delta^{\mu}_{~\nu} {\hat \nabla}^{(\omega)}_{\lambda} \xi^{\lambda} + t \xi^{\lambda} {\hat \nabla}^{(\omega)}_{\lambda} h^{\mu}_{~\nu} \\ && + \frac{t}{2} h^{\mu}_{~\lambda} \Bigl( {\hat \nabla}^{(\omega)}_{\nu} \xi^{\lambda} - {\hat \nabla}_{(\omega)}^{\lambda} \xi_{\nu} \Bigr) + \frac{t}{2} h^{\lambda}_{~\nu} \Bigl( {\hat \nabla}_{(\omega)}^{\mu} \xi_{\lambda} - {\hat \nabla}^{(\omega)}_{\lambda} \xi^{\mu} \Bigr) + \cdots ~, \nonumber \end{eqnarray} where $\xi_{\mu}= {\hat g}^{(\omega)}_{\mu\lambda}\xi^{\lambda}$. Applying it to the gauge-fixing condition we can obtain the ghost action. The kinetic term of the ghost Lagrangian is now given in the form $\sqrt{{\hat g}_{(\omega)}}\psi^{*\mu} {\cal M}_{GH}({\hat g}_{(\omega)})_{\mu\nu} \psi^{\nu}$ with \begin{equation} {\cal M}_{GH}({\hat g}_{(\omega)})_{\mu\nu} = M_{GH}({\hat g}_{(\omega)})_{\mu\nu} + \frac{b t^2}{8}{\hat \nabla}^{(\omega)}_{\mu} {\hat \nabla}^{(\omega)}_{\nu} ~. 
\end{equation} Changing the normalization as $\phi^{\prime}= (4b +4b_1 t^2 +\frac{b^2}{3}t^2)^{1/2}\phi$, we then obtain the following expression: \begin{equation} {\cal I}_2 +{\cal I}_{FIX} = \frac{1}{2(4\pi)^2} \int d^4 x \sqrt{{\hat g}_{(\omega)}} (\phi^{\prime}, h_{\mu\nu}) {\cal K} \left( \begin{array}{c} \phi^{\prime} \\ h^{\lambda\sigma} \end{array} \right) ~, \end{equation} where \begin{equation} {\cal K} = \left( \begin{array}{cc} \hBox_{(\omega)}^2 & 0 \\ 0 & \hBox_{(\omega)}^2 \delta^{\mu}_{(\lambda}\delta^{\nu}_{\sigma)} \\ \end{array} \right) +\left( \begin{array}{cc} A & 0 \\ 0 & C^{\mu\nu}_{~~, \lambda\sigma} \\ \end{array} \right) + \left( \begin{array}{cc} 0 & B_{\lambda\sigma} \\ B^{\mu\nu} & 0 \\ \end{array} \right) \end{equation} and \begin{eqnarray} && A = -\Bigl( \frac{1}{6} + \frac{5}{72}b t^2 \Bigr) {\hat R}_{(\omega)} \hBox_{(\omega)} ~, \nonumber \\ && C^{\mu\nu}_{~~,\lambda\sigma} = T^{NS\prime}({\hat g}_{(\omega)})^{\mu\nu}_{~~,\lambda\sigma} + c t^2 {\hat R}_{(\omega)} L({\hat g}_{(\omega)})^{\mu\nu}_{~~,\lambda\sigma} ~, \\ && B^{\mu\nu} = -\frac{t}{2\sqrt{b}} \Bigl\{ 4(a+b) {\hat R}^{~~(\mu ~\nu)}_{(\omega)~\lambda~\sigma} {\hat \nabla}_{(\omega)}^{\lambda}{\hat \nabla}_{(\omega)}^{\sigma} +\frac{1}{3}(a+2b) {\hat R}_{(\omega)} {\hat \nabla}_{(\omega)}^{(\mu}{\hat \nabla}_{(\omega)}^{\nu)} \Bigr\} ~, \nonumber \end{eqnarray} where the prime on $T^{NS}$ stands for removing the $\hBox_{(\omega)}^2$ term. 
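The normalization of $\phi^{\prime}$ and the operator $A$ can be checked directly. Collecting the $\phi \hBox_{(\omega)}^2 \phi$ and ${\hat R}_{(\omega)} \phi \hBox_{(\omega)} \phi$ terms of (\ref{5e}), with ${\hat \Delta}^{(\omega)}_4 = \hBox_{(\omega)}^2 -\frac{1}{6}{\hat R}^{(\omega)} \hBox_{(\omega)}$, gives \begin{equation} \Bigl( 2b+2b_1 t^2 +\frac{b^2 t^2}{6} \Bigr) \phi \hBox_{(\omega)}^2 \phi - \Bigl( \frac{b}{3}+\frac{b_1 t^2}{3} +\frac{b^2 t^2}{6} \Bigr) {\hat R}_{(\omega)} \phi \hBox_{(\omega)} \phi ~, \end{equation} so that dividing by the normalization $4b +4b_1 t^2 +\frac{b^2}{3}t^2$ of $\phi^{\prime}$ reproduces, up to the $t^2$-order, the kinetic term $\frac{1}{2}\phi^{\prime} \hBox_{(\omega)}^2 \phi^{\prime}$ and the coefficient $-\bigl( \frac{1}{6}+\frac{5}{72}bt^2 \bigr) {\hat R}_{(\omega)} \hBox_{(\omega)}$ in $A$.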
To derive the $\omega$-dependence of the measure for gravity we have to evaluate the quantity \begin{eqnarray} \delta_{\omega} S(\omega,{\hat g}) &=& - \delta_{\omega} \log \frac{\det^{1/2} {\cal N}({\hat g}_{(\omega)}) \det {\cal M}_{GH}({\hat g}_{(\omega)})} {\det^{1/2}{\cal K}({\hat g}_{(\omega)})} ~ \biggl|_{\hbox{kernel part}} \\ &=& -\frac{1}{(4\pi)^2} \int d^4 x ~\delta \omega \sqrt{{\hat g}_{(\omega)}} \bigl( {\rm a}^{(2)}_2 ({\cal K}) - {\rm a}^{(1)}_2 ({\cal N}) - 2{\rm a}^{(1)}_2 ({\cal M}_{GH}) \bigr) ~. \nonumber \end{eqnarray} Using the generalized Schwinger-DeWitt technique~\cite{bv,ft1} summarized in appendix B, we first calculate the divergent part of $\log \det {\cal K} = Tr \log {\cal K}$, which is expanded in inverse powers of derivatives as \begin{equation} - Tr \log {\cal K} = \gamma (A) + \gamma (C) + \gamma(B) ~, \end{equation} where \begin{eqnarray} \gamma (A) &=& - 2 Tr \log \hBox_{(\omega)} - Tr \Bigl( A \frac{1}{\hBox_{(\omega)}^2}\Bigr) + \frac{1}{2} Tr \Bigl( A^2 \frac{1}{\hBox_{(\omega)}^4} \Bigr) \nonumber \\ \gamma (C) &=& -2 Tr \log (\hBox_{(\omega)} {\rm I}) - Tr \Bigl( C \frac{1}{\hBox_{(\omega)}^2} \Bigr) + \frac{1}{2} Tr \Bigl( C^2 \frac{1}{\hBox_{(\omega)}^4} \Bigr) \nonumber \\ \gamma (B) &=& Tr \Bigl( B^2 \frac{1}{\hBox_{(\omega)}^4} \Bigr) ~, \end{eqnarray} where ${\rm I}= \delta^{\mu}_{(\lambda}\delta^{\nu}_{\sigma)} = \frac{1}{2} (\delta^{\mu}_{~\lambda} \delta^{\nu}_{~\sigma} + \delta^{\mu}_{~\sigma} \delta^{\nu}_{~\lambda} )$ and $Tr$ includes the trace over the indices $\mu$, $\nu$. The contribution to the induced action from the diagonal part of the conformal mode is calculated as $ {\rm a}^{(2)}_2(A)= {\rm a}^{(2)}_2({\hat \Delta}^{(\omega)}_4) $, where the $t^4$-term is neglected. It turns out that the $t^2$-correction does not appear in this part.
For the diagonal part of the traceless mode we obtain the following quantity: $ {\rm a}^{(2)}_2 (C) = {\rm a}^{(2)}_2(T^{NS}({\hat g}_{(\omega)})) - 6 c t^2 {\hat R}_{(\omega)}^2 ~. $ The off-diagonal part is calculated by using the formula (\ref{b4}), which gives the $t^2$-order contribution \begin{equation} {\rm a}^{(2)}_2 (B) = t^2 \biggl( \frac{(a+b)^2}{4b} {\hat R}^{(\omega)}_{\mu\nu\lambda\sigma} {\hat R}_{(\omega)}^{\mu\nu\lambda\sigma} - \frac{2a^2+4ab+b^2}{48b} {\hat R}_{(\omega)}^2 \biggr) ~. \end{equation} Thus ${\rm a}^{(2)}_2 ({\cal K})$ is given by summing up the results from the operators $A$, $B$ and $C$. The ghost parts are calculated as $ {\rm a}^{(1)}_2 ({\cal N}) = {\rm a}^{(1)}_2 (N({\hat g}_{(\omega)})) -\frac{5}{2}c t^2 {\hat R}_{(\omega)}^2 $ and also $ {\rm a}^{(1)}_2 ({\cal M}_{GH}) = {\rm a}^{(1)}_2 (M_{GH}({\hat g}_{(\omega)})) -\frac{1}{72}b t^2 {\hat R}_{(\omega)}^2 ~. $ Combining the above results we finally obtain the $t^2$-dependent part of ${\rm a}^{(2)}_2({\cal K})- {\rm a}^{(1)}_2({\cal N}) - 2 {\rm a}^{(1)}_2({\cal M}_{GH})$ in the form \begin{equation} t^2 \biggl( \frac{(a+b)^2}{4b} {\hat R}^{(\omega)}_{\mu\nu\lambda\sigma} {\hat R}_{(\omega)}^{\mu\nu\lambda\sigma} -\frac{6a^2+40ab+27b^2}{144b} {\hat R}_{(\omega)}^2 \biggr) ~. \end{equation} Now, we can determine the coefficients $a_1$ and $b_1$ from the above results. We finally obtain the induced action $S(\omega, {\hat g})$ with the coefficients \begin{eqnarray} && a_1 = -\frac{6a^2+40ab+27b^2}{24b} ~, \label{5a1} \\ && b_1 = \frac{7}{6}a + \frac{7}{8}b ~. \label{5b1} \end{eqnarray} Note that the measures of the matter fields do not contribute directly to the coefficients $a_1$ and $b_1$; they contribute only indirectly through the values of $a$ and $b$ given by (\ref{5a}) and (\ref{5b}). The background-metric independence for the traceless mode indicates that the induced action should be in the form $S(\omega, {\bar g})$ if it includes the interaction terms.
As discussed in the previous subsection this $\omega$-dependence can be removed by changing the field: $\phi \rightarrow \phi -\omega$ so that the partition function goes back to the original one defined on ${\hat g}$ (\ref{5zz}) provided $a_1$ and $b_1$ are given by (\ref{5a1}) and (\ref{5b1}). In summary we get the coefficients in the action (\ref{5ii}) as \begin{eqnarray} && a(t) = -\frac{N_X}{120} - \frac{N_A}{10}-\frac{197}{30} \label{5aa} \\ && \quad + t^2 \frac{13 N^2_X +1412 N_X N_A + 36988 N_X -7428 N^2_A + 635656 N_A +16011772} {2880(N_X + 62 N_A +1538)} ~, \nonumber \\ && b(t) = \frac{N_X}{360}+\frac{31 N_A}{180}+\frac{769}{180} - t^2 \biggl( \frac{7 N_X}{960}-\frac{49 N_A}{1440} +\frac{1883}{480} \biggr) ~. \label{5bb} \end{eqnarray} \section{Discussions on Scaling Operators} \setcounter{equation}{0} \indent In this paper we proposed the background-metric independent formulation of 4D quantum gravity. A model of 4D quantum gravity was described as a quantum field theory defined on the background-metric (\ref{5zz}) with the coefficients (\ref{5aa},~\ref{5bb}) by solving the ansatz of the background-metric independence (\ref{5ii}) in a self-consistent manner. The problem of renormalizability still remains to be solved, but we think that if the diffeomorphism invariance ensures renormalizability, our model will be renormalizable, because we can easily show that the background-metric independence really ensures the diffeomorphism invariance at the quantum level~\cite{hf}. The rest of this section is devoted to discussing scaling operators in 4D quantum gravity. The cosmological constant and the Einstein-Hilbert terms are lower-derivative operators with conformal charges. 
As in two dimensions, such operators will receive corrections like \begin{equation} \Lambda \int d^4 x \sqrt{{\hat g}} \hbox{\large \it e}^{\alpha(t)\phi} \end{equation} for the cosmological constant and \begin{equation} -m^2 \int d^4 x \sqrt{{\hat g}}\hbox{\large \it e}^{\beta(t)\phi} \Bigl( {\bar R} + \gamma(t) {\bar \nabla}^{\mu} \phi {\bar \nabla}_{\mu} \phi \Bigr) \end{equation} for the Einstein-Hilbert term. Henceforth we take the flat background-metric for simplicity, though in perturbation theory we should choose a background-metric such that the approximation is well defined. At least up to the $t^2$-order, we can use the argument on the scaling operators in refs.~\cite{am,amm2}. As for the cosmological constant operator, the conformal charge is classically given by $\alpha(t)= \hbox{dim}~[\Lambda] =4$. In quantum theory it receives a correction as $\alpha(t)=4 +\gamma_{\Lambda}$. The anomalous dimension is now calculated using the gauge-fixed action in the form $\gamma_{\Lambda} = \frac{\alpha(t)^2}{4b^{\prime}(t)}$, where $b^{\prime}(t) = b(t)+\frac{b^2 t^2}{12}$, so that a quadratic equation for $\alpha(t)$ is obtained. Solving this equation, we get the following value: \begin{equation} \alpha(t)=2b^{\prime}(t) \biggl( 1-\sqrt{1-\frac{4}{b^{\prime}(t)}} \biggr) ~, \end{equation} where the solution such that $\alpha(t) \rightarrow 4$ in the classical limit $b^{\prime}(t) \rightarrow \infty$ is chosen. Similarly for the Einstein-Hilbert term we obtain \begin{equation} \beta(t)=2b^{\prime}(t) \biggl( 1-\sqrt{1-\frac{2}{b^{\prime}(t)}} \biggr) ~. \end{equation} Physically the cosmological constant should be real, so we obtain the condition $b^{\prime}(t) \geq 4$ for 4D quantum gravity to exist. Recently, evidence of a smooth phase in 4D simplicial quantum gravity coupled to $U(1)$ gauge theories has been reported in numerical simulations~\cite{bbkptt}, which suggests that $b^{\prime}(t) < 4$ for $N_A =0$, but $b^{\prime}(t) > 4$ for $N_A \geq 1$. 
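The root quoted above for $\alpha(t)$ follows from the stated self-consistency relations $\alpha(t)=4+\gamma_{\Lambda}$ and $\gamma_{\Lambda}=\alpha(t)^2/4b^{\prime}(t)$; as a sketch of the intermediate algebra (which the text omits):

```latex
% Self-consistency for the conformal charge of the cosmological
% constant operator: \alpha = 4 + \alpha^2/4b'
\begin{align*}
\alpha = 4 + \frac{\alpha^2}{4b^{\prime}}
  \quad&\Longrightarrow\quad
  \alpha^2 - 4b^{\prime}\alpha + 16b^{\prime} = 0 \\
  &\Longrightarrow\quad
  \alpha = 2b^{\prime}
  \Bigl( 1 \pm \sqrt{1-\tfrac{4}{b^{\prime}}} \Bigr) ~.
\end{align*}
% The minus branch expands as
%   2b'(1-\sqrt{1-4/b'}) = 4 + 4/b' + O(1/b'^2) --> 4 ,
% selecting the root with the correct classical limit; the same steps
% with 4 replaced by dim[m^2] = 2 reproduce \beta(t).
```

The reality of $\alpha(t)$ and $\beta(t)$ then requires the discriminant to be nonnegative, which is the origin of the condition $b^{\prime}(t) \geq 4$ quoted above.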
Naively, comparing with our result we obtain the value for the coupling constant: $0.11 < t^2 < 0.20$. This seems to indicate that the perturbation in $t$ on the flat background-metric is not so bad. Finally we give a comment on the consistency of our calculations. Consider a constant shift of the conformal field: $\phi \rightarrow \phi + \eta$. Then the mass scales are rescaled as $\Lambda \rightarrow \Lambda \hbox{\large \it e}^{\alpha(t)\eta}$ and $m^2 \rightarrow m^2 \hbox{\large \it e}^{\beta(t)\eta}$, and the Weyl action with the coefficient $\eta a(t)$ is induced as well. This extra Weyl action, however, gives at most $t^4$-corrections to the coefficients $a(t)$ and $b(t)$ of the induced action, so that our results are self-consistent at least up to the $t^2$-order. \vspace{2mm} \begin{flushleft} {\bf Acknowledgments} \end{flushleft} The authors wish to thank N. Tsuda for informing us of their results~\cite{bbkptt}. We also acknowledge N. D. Hari Dass for careful reading of the manuscript. The research of F.S. is supported in part by the Japan Society for the Promotion of Science under the Postdoctoral Research Program. \vspace{5mm} \begin{center} {\Large {\bf Appendix}} \end{center}
\DeclareMathOperator{\inn}{Inn} \DeclareMathOperator{\res}{Res} \DeclareMathOperator{\ind}{Ind} \DeclareMathOperator{\irr}{Irr} \DeclareMathOperator{\rep}{\bold{Rep-}} \DeclareMathOperator{\gr}{Groth} \begin{document} \title{Induction/Restriction Bialgebras for Restricted Wreath Products}\author{Seth Shelley-Abrahamson}\date{August 2014}\maketitle \begin{abstract} To a finite group $G$ one can associate a tower of wreath products $S_n \rtimes G^n$. It is well known that the graded direct sum of the Grothendieck groups of the categories of finite dimensional complex representations of these groups can be given the structure of a graded Hopf algebra, and in fact a positive self-adjoint Hopf algebra in the sense of Zelevinsky [1], using the induction product and restriction coproduct. This paper introduces and explores an analogously defined algebra/coalgebra structure associated to a more general class of towers of groups, obtained as a certain family of subgroups of wreath products in the case $G$ is abelian. We call these groups restricted wreath products, and they include the infinite family of complex reflection groups $G(m, p, n)$. It is known that in the case of full wreath products the associated Hopf algebra decomposes as a tensor power of the Hopf algebra of integral symmetric functions. In the case of restricted wreath products, the associated algebra/coalgebra is no longer a Hopf algebra, but we show that it contains a subalgebra, isomorphic to a tensor power of the analogous algebra/coalgebra associated to a smaller restricted wreath product, in which every irreducible representation occurs as a constituent, generalizing the tensor product decomposition for full wreath products. We closely follow the approach of [1]. 
\end{abstract} \newpage \tableofcontents \section{Introduction} Let $G$ be a finite abelian group. We may then construct the wreath product $S_n[G] := S_n \rtimes G^n$ as the group of monomial matrices with all nonzero entries in $G$. As $G$ is abelian there is a surjection $S_n[G] \rightarrow G$ by taking the sum of the elements in $G$ appearing in a matrix. For a subgroup $H \subset G$ let $G_n(G, H)$ denote the kernel of the composition $S_n[G] \rightarrow G \rightarrow G/H$, so that $G_n(G, G) = S_n[G]$ and $G_n(G, H)$ is the group of monomial matrices with entries in $G$ whose entries sum to an element of $H$. Note that when $G$ is cyclic this construction yields the finite complex reflection groups in the family $G(m, p, n)$ where $p$ divides $m$. Specifically, we have $G(m, p, n) = G_n(\mathbb{Z}/m, p\mathbb{Z}/m)$. Let $R_0(G, H) = \mathbb{Z}$ and for $n > 0$ let $R_n(G, H) = K_0(\bold{Rep}-G_n(G, H))$ denote the Grothendieck group of the category of finite dimensional complex representations of $G_n(G, H)$. We then construct the graded abelian group $$R(G, H) = \bigoplus_{n \geq 0} R_n(G, H)$$ which has a designated graded basis given by the isomorphism classes of irreducible representations (along with $1 \in \mathbb{Z}$ in degree 0) along with a graded nondegenerate symmetric bilinear form given by the usual pairing of representations. This form will be denoted $\langle \cdot, \cdot \rangle$. Note that the irreducible elements form a graded orthonormal basis for $R(G, H)$. Using induction and restriction, one can place graded product and coproduct structures on $R(G, H)$. 
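For concreteness, the definition of $G_n(G, H)$ can be checked by brute force for small parameters. The sketch below (the function name is ours, not from the paper) enumerates monomial matrices over $\mathbb{Z}/m$ as pairs (permutation, tuple of nonzero entries) and verifies the order formula $|G_n(G,H)| = n!\,|G|^{n-1}|H|$, which for $G(m, p, n) = G_n(\mathbb{Z}/m, p\mathbb{Z}/m)$ gives the familiar order $m^n n!/p$:

```python
from itertools import permutations, product
from math import factorial

def restricted_wreath_order(m, p, n):
    """Count elements of G_n(Z/m, pZ/m): n x n monomial matrices with
    nonzero entries in Z/m whose entries sum to an element of pZ/m.
    An element is encoded as (permutation, tuple of nonzero entries)."""
    count = 0
    for perm in permutations(range(n)):      # placement of nonzero entries
        for entries in product(range(m), repeat=n):
            # Since p | m, the entry sum lies in pZ/m iff it is 0 mod p.
            if sum(entries) % p == 0:
                count += 1
    return count

# G(4, 2, 3): predicted order m^n n!/p = n! * m^(n-1) * (m/p) = 192
assert restricted_wreath_order(4, 2, 3) == factorial(3) * 4**2 * (4 // 2)
```

Setting $p = 1$ recovers the full wreath product $S_n[\mathbb{Z}/m]$ of order $m^n n!$, and $p = m$ gives the index-$m$ subgroup of matrices whose entries sum to zero.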
In particular, we have an embedding of groups $G_k(G, H) \times G_l(G, H) \subset G_{k + l}(G, H)$ by the block-diagonal embedding of matrices, so we have an induction functor $$\ind : \rep(G_k(G, H) \times G_l(G, H)) \rightarrow \rep(G_{k + l}(G, H))$$ and a restriction functor $$\res: \rep(G_{k + l}(G, H)) \rightarrow \rep(G_k(G, H) \times G_l(G, H)).$$ These are exact functors, so we obtain maps at the level of Grothendieck groups: $$m_{k, l} : R_k(G, H) \otimes R_l(G, H) \rightarrow R_{k + l}(G, H)$$ $$m_{k, l}^*: R_{k + l}(G, H) \rightarrow R_k(G, H) \otimes R_l(G, H)$$ in view of the natural isomorphism $$R_k(G, H) \otimes R_l(G, H) \cong \gr(\rep(G_k(G, H) \times G_l(G, H))).$$ For $k = 0$ or $l = 0$ just set $m_{k, l}$ and $m^*_{k, l}$ to be the maps given by the natural isomorphism $R \otimes \mathbb{Z} \cong R$. Set $$m = m_{G, H} = \sum_{k, l \geq 0} m_{k, l} : R(G, H) \otimes R(G, H) \rightarrow R(G, H)$$ $$m^* = m^*_{G, H} = \sum_{k, l \geq 0} m_{k, l}^* : R(G, H) \rightarrow R(G, H) \otimes R(G, H).$$ It is immediate that $m_{G, H}$ gives $R(G, H)$ the structure of a graded commutative algebra with unit and that $m_{G, H}^*$ gives $R(G, H)$ the structure of a graded cocommutative coalgebra with counit. Furthermore, by Frobenius reciprocity $m_{G, H}$ and $m^*_{G, H}$ are adjoint operators with respect to the inner product on $R(G, H)$ and the induced graded inner product on $R(G, H) \otimes R(G, H)$. As they arise from functors, they send irreducible elements to nonzero sums of irreducible elements with nonnegative coefficients. \section{A Tensor Product Subalgebra} In this section we will construct a natural positive injective map of algebras $$\Phi: \bigotimes_{l \in H^*} R(G/H, 1) \hookrightarrow R(G, H)$$ and we will see a weak form of surjectivity in the sense that every irreducible element in $R(G, H)$ occurs as a constituent of some element of the image of this map. 
For $H = G$, the case of usual wreath products, $R(G/G, 1) = R(1, 1)$ is the Hopf algebra of integral symmetric functions and the injection above is the usual isomorphism of Hopf algebras known in that case. Let $\phi \colon G_n(G, H) \rightarrow G_n(G/H, 1)$ be the map given by reducing the matrix entries mod $H$. This gives rise to an exact sequence $$0 \rightarrow H^n \rightarrow G_n(G, H) \rightarrow G_n(G/H, 1) \rightarrow 0$$ where the first map is the diagonal embedding. We obtain an additive functor $\phi^* : \rep(G_n(G/H, 1)) \rightarrow \rep(G_n(G, H))$ by pullback, which gives rise to a graded operator $\phi^* \colon R(G/H, 1) \rightarrow R(G, H)$. As $\phi$ is surjective this map sends distinct irreducibles to distinct irreducibles, and in particular is an embedding of graded free abelian groups. For $l \in H^*$, $l^{\otimes n}$ is a linear character of $H^n$ centralized by $G_n(G, H)$, so the action can be extended to all of $G_n(G, H)$ trivially with respect to the above exact sequence. We then obtain an additive functor $\tau_l \colon \rep(G_n(G, H)) \rightarrow \rep(G_n(G, H))$ by tensoring with $l^{\otimes n}$, giving rise to a graded operator $\tau_l$ on $R(G, H)$. These operators have several nice properties. We see $\tau_l \circ \tau_{l'} = \tau_{ll'}$. In view of the inner product on $R(G, H)$ in terms of characters, we see $\tau_l^* = \tau_{\bar{l}} = \tau_{l^{-1}} = \tau_l^{-1}$, so $\tau_l$ is an orthogonal operator. It is clear that $\tau_l$ is a map of coalgebras, but then $\tau_l^* = \tau_{\bar{l}}$ is also a map of coalgebras, so since the form on $R(G, H)$ is nondegenerate we conclude $\tau_l$ is also a map of algebras. In summary, the rule $l \mapsto \tau_l$ gives an action of $H^*$ on $R(G, H)$ by positive orthogonal bialgebra automorphisms. For $l \in H^*$, set $\Phi_l = \tau_l \circ \phi^* \colon R(G/H, 1) \rightarrow R(G, H)$. 
We then have \begin{proposition} $\Phi_l$ is an injective map of bialgebras sending irreducibles to irreducibles. \end{proposition} \begin{proof} For the first statement, in view of the preceding comments about $\tau_l$ and $\phi^*$ we need only check that $\phi^*$ is a map of bialgebras. It is obvious that $\phi^*$ is a map of coalgebras, and to establish that it is a map of algebras we need to check that the diagram $$\begin{diagram} R(G/H, 1) \otimes R(G/H, 1) &\rTo^{\phi^* \otimes \phi^*}& R(G, H) \otimes R(G, H)\\\dTo^{m_{G/H, 1}}&&\dTo^{m_{G, H}}\\R(G/H, 1) &\rTo^{\phi^*}&R(G, H)\end{diagram}$$ commutes. For this just note that $\phi$ induces a bijection between the coset spaces $G_{k + l}(G, H)/(G_k(G, H) \times G_l(G, H))$ and $G_{k + l}(G/H, 1)/(G_k(G/H, 1) \times G_l(G/H, 1))$, and the commutativity of the diagram then follows immediately from the definition of induced characters. \end{proof} \begin{proposition} The sub-bialgebra $\Phi_l(R(G/H, 1))$ of $R(G, H)$ has a graded basis whose degree $n$ part consists of the isomorphism classes of those irreducible representations $\pi$ of $G_n(G, H)$ whose restriction to $H^n$ contains the irreducible constituent $l^{\otimes n}$.\end{proposition} \begin{proof} In view of the construction and the previous proposition, we need only check that any such $[\pi]$ is in the image of $\Phi_l$. Note that the $l^{\otimes n}$-isotypic piece of $\pi|_{H^n}$ is actually a submodule for $G_n(G, H)$, so $\tau_l^{-1}\pi$ is an irreducible representation of $G_n(G, H)$ with trivial $H^n$-action, and so has the structure of an irreducible $G_n(G/H, 1) = G_n(G, H)/H^n$-representation $\pi'$. But then $\tau_{l}^{-1}\pi = \phi^*\pi'$ so $\pi = \Phi_l(\pi')$, as needed.\end{proof} \begin{proposition} The sub-bialgebras $\Phi_l(R(G/H, 1))$ are pairwise orthogonal. 
\end{proposition} \begin{proof} If $\pi$ and $\sigma$ are irreducible representations of $G_n(G, H)$ which are $l^{\otimes n}$-isotypic and $l'^{\otimes n}$-isotypic, respectively, for $l \neq l'$, then we have $$\langle \pi, \sigma \rangle_{G_n(G, H)} \leq \langle \pi, \sigma\rangle_{H^n} = \deg(\pi)\deg(\sigma)\langle l^{\otimes n}, l'^{\otimes n}\rangle_{H^n} = 0$$ so $\langle \pi, \sigma \rangle = 0$, and in view of the previous proposition our claim follows.\end{proof} Now for $l \in H^*$ define the graded operator $\Psi_l \colon R(G, H) \rightarrow R(G/H, 1)$ on the degree $n$ part as the operator associated to the additive functor $\Psi_l \colon \rep(G_n(G, H)) \rightarrow \rep(G_n(G/H, 1))$ defined by $\Psi_l(\pi) = \hom_{H^n}(l^{\otimes n}, \pi)$. The $G_n(G/H, 1)$-action is given by $g.A = \tilde{g}A\tilde{g}^{-1}$ for $A \in \hom_{H^n}(l^{\otimes n}, \pi)$ and $\tilde{g} \in G_n(G, H)$ any lift of $g \in G_n(G/H, 1)$. The map $g.A$ does not depend on the choice of lifting of $g$ because $A$ commutes with the action of $H^n$. Clearly $g.A \in \hom_{H^n}(l^{\otimes n}, \pi)$ so $\Psi_l(\pi)$ is indeed a $G_n(G/H, 1)$-representation, and clearly $\Psi_l$ is an additive functor. \begin{proposition} $\Psi_l$ is a left adjoint for $\Phi_l$ as functors, and in particular the operators $\Psi_l$ and $\Phi_l$ are adjoints on the level of the Grothendieck groups with respect to the bilinear form. As this form is nondegenerate and $\Phi_l$ is a bialgebra homomorphism, $\Psi_l$ too is a bialgebra homomorphism.\end{proposition} \begin{proof} This follows from tensor-hom adjunction.\end{proof} It is clear that $\Psi_l \circ \Phi_l$ is naturally isomorphic to the identity functor and that $\Phi_l \circ \Psi_l$ is naturally isomorphic to the functor $I_l$ on $\rep(G_n(G, H))$ given by projection to the $l^{\otimes n}$-isotypic piece for the $H^n$-action (recall this is always a $G_n(G, H)$-subrepresentation). 
At the level of Grothendieck groups, we obtain: \begin{proposition} $\Psi_l \circ \Phi_l \colon R(G/H, 1) \rightarrow R(G/H, 1)$ is the identity, and $$\Phi_l \circ \Psi_l \colon R(G, H) \rightarrow R(G, H)$$ is orthogonal projection onto the sub-bialgebra $\Phi_l(R(G/H, 1)).$\end{proposition} We now define the map mentioned at the start of this section $$\Phi \colon \bigotimes_{l \in H^*} R(G/H, 1) \rightarrow R(G, H)$$ as the product of the maps $\Phi_l$. Given an $|H^*|$-tuple $\lambda = (\lambda_l)_{l \in H^*}$ of nonnegative integers, let $l(\lambda)$ denote the number of nonzero parts. Given irreducible representations $\pi_l$ of $G_{\lambda_l}(G/H, 1)$, let $\pi_\lambda = \bigotimes_{l \in H^*} \pi_l \in \bigotimes_{l \in H^*} R(G/H, 1).$ \begin{theorem} $\Phi$ is a graded, positive, injective map of algebras. Every irreducible element of $R(G, H)$ occurs as a constituent of some element of the image of $\Phi$. In particular, we have $$\langle \Phi(\pi_\lambda), \Phi(\sigma_\mu)\rangle = \delta_{\pi_\lambda, \sigma_\mu} [G : H]^{l(\lambda) - 1}.$$\end{theorem} \begin{proof} It follows from previous results that $\Phi$ is a positive graded map of algebras. By positivity, showing the given inner product formula will imply injectivity, so we start there, which is just an application of Mackey's double coset formula. 
In particular, we have $$\begin{array} {lcl}&&\langle \Phi(\pi_\lambda), \Phi(\sigma_\mu)\rangle\\&=&\langle \ind_{G_\lambda(G, H)}^{G_n(G, H)} \bigotimes_{l \in H^*} \Phi_l(\pi_l), \ind_{G_\mu(G, H)}^{G_n(G, H)}\bigotimes_{l \in H^*} \Phi_l(\sigma_l)\rangle_{G_n(G, H)}\\&=&\sum_{\gamma \in G_\lambda\backslash G_n/G_\mu} \langle \bigotimes_{l \in H^*} \pi_l, (\bigotimes_{l \in H^*} \sigma_l)^\gamma\rangle_{G_\lambda \cap\gamma G_\mu \gamma^{-1}} \end{array}$$ Noting $H^n \subset G_\lambda \cap \gamma G_\mu \gamma^{-1}$, we have the bound $$\left\langle \bigotimes_{l \in H^*} \pi_l, \left(\bigotimes_{l \in H^*} \sigma_l\right)^\gamma\right\rangle_{G_\lambda \cap\gamma G_\mu \gamma^{-1}}$$ $$\leq \prod_{l \in H^*} \deg(\pi_l)\deg(\sigma_l) \left\langle \bigotimes_{l \in H^*} l^{\otimes \lambda_l}, \left(\bigotimes_{l \in H^*} l^{\otimes \mu_l}\right)^\gamma\right\rangle_{H^n}$$ As $G^n$ centralizes $H^n$, twisting by $\gamma$ amounts to just twisting by some element of $S_n$, permuting the tensor factors, and hence the final inner product is $0$ unless both $\lambda = \mu$ and $\bar{\gamma} \in S_n$ stabilizes $\lambda$. In this case we can clearly take $\gamma$ to be diagonal, so to compute the original inner product we need only sum over a collection of diagonal double coset representatives. From the definition of $\Phi_l$, we see that any diagonal element of $G_{\lambda_l}(G, G)$ centralizes $\Phi_l(\sigma_l)$, and we conclude that each term of the above Mackey sum involving a diagonal representative yields a contribution of $1$ to the sum, as in that case $G_\lambda \cap \gamma G_\mu \gamma^{-1} = G_\lambda$ and it is just an inner product of an irreducible representation of $G_\lambda$ with itself. 
The number of classes of the double coset space which have a diagonal representative is clearly $[G: H]^{l(\lambda) - 1}$: indeed they are formed by a choice of an element of $G/H$ for each nonzero $\lambda_i \times \lambda_i$ block such that the entire sum is $1 \in G/H$. For our weak form of surjectivity, let $\pi \in \rep(G_n(G, H))$ be an irreducible representation. The action of $S_n$ on $\pi$ induces an action of $S_n$ on the set of $H^n$-isotypic pieces by permuting the tensor factors, so we conclude that there is a nonzero $H^n$-isotypic piece of $\pi|_{H^n}$ of type $l_1^{\otimes \lambda_1} \otimes \cdots \otimes l_{|H|}^{\otimes \lambda_{|H|}}$. Then, $\pi|_{G_\lambda}$ contains some irreducible representation $\pi_1 \otimes \cdots \otimes \pi_{|H|}$ whose restriction $\pi_i|_{H^{\lambda_i}}$ contains $l_i^{\otimes \lambda_i}$. By Proposition 2, we then have $\pi_i = \Phi_{l_i}(\pi_i')$ for some $\pi_i'$. But then by Frobenius reciprocity $\pi$ is an irreducible constituent of $\Phi(\pi_\lambda)$, as needed.\end{proof} Writing $$\Phi = m_{G, H}^{(|H^*|)} \circ \bigotimes_{l \in H^*} \Phi_l : \bigotimes_{l \in H^*} R(G/H, 1) \rightarrow R(G, H)$$ we also have the adjoint map $$\Psi = \bigotimes_{l \in H^*} \Psi_l \circ m_{G, H}^{*(|H^*|)} : R(G, H) \rightarrow \bigotimes_{l \in H^*} R(G/H, 1)$$ where $m_{G, H}^{(|H^*|)}$ and $m_{G, H}^{*(|H^*|)}$ denote iterated multiplication/comultiplication. The first part of the previous theorem can then be recast as \begin{corollary} $\Psi$ is a positive, graded, surjective map of coalgebras. No positive element lies in its kernel. \end{corollary} Note that in the case of usual wreath products, i.e. $G = H$, we have $[G : H] = 1$, so by the inner product formula in Theorem 6 we have that $\Phi$ sends irreducibles to irreducibles, and the weak surjectivity condition becomes usual surjectivity, so $\Phi$ is surjective and hence is a bijective isometry, so $\Phi^{-1} = \Phi^* = \Psi$. 
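The count $[G:H]^{l(\lambda)-1}$ of diagonal double-coset classes amounts to the elementary fact that a tuple over $G/H$ with trivial total sum is determined by all but one of its coordinates. A quick enumeration check of that counting fact for $G/H \cong \mathbb{Z}/q$ (the function name is ours, purely illustrative):

```python
from itertools import product

def zero_sum_tuples(q, k):
    """Number of k-tuples over Z/q (one coset of H per nonzero block of
    lambda) whose total sum vanishes in G/H ~= Z/q; predicted: q**(k-1)."""
    return sum(1 for t in product(range(q), repeat=k) if sum(t) % q == 0)

# [G : H] = 3 with l(lambda) = 4 nonzero blocks: 3**(4-1) classes
assert zero_sum_tuples(3, 4) == 3**3
```

The last coordinate is forced by the first $k-1$, which is exactly why the exponent is $l(\lambda)-1$ rather than $l(\lambda)$.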
But $\Psi$ is a map of coalgebras, so $\Phi = \Psi^{-1}$ is as well, and we obtain \begin{corollary} For the case of usual wreath products, i.e. $G = H$, we have that $\Phi$ and $\Psi$ are mutually inverse and adjoint, positive (irreducible-to-irreducible), graded isomorphisms respecting both the algebra and coalgebra structures. \end{corollary} This case recovers the usual identification of $R(G, G)$ with the $|G|$-fold tensor power of the Hopf algebra of integral symmetric functions, as seen for example in [1]. \vfil\eject \section {References} 1. A. Zelevinsky. \emph{Representations of Finite Classical Groups, A Hopf algebra approach,} volume 869 of \emph{Lecture Notes in Mathematics.} Springer-Verlag, Berlin, 1981. \end{document}
\section{Introduction} It is widely agreed that Bohmian mechanics (BM) makes the same predictions as standard quantum mechanics (SQM), which is the reason why both theories are usually considered as empirically equivalent ``interpretations'' of quantum mechanics. In his foundational papers, Bohm showed that the statistical predictions of BM for the outcome of measurements of arbitrary system properties coincide with those of SQM \cite{Bohm1952,Bohm1952a}. The empirical equivalence hinges on two things: 1) the validity and justification of the so-called ``quantum equilibrium hypothesis'' (QEH), which establishes a link between the wavefunction and a classical probability density on system configurations, and 2) the degree of precision with which the measurement process is accounted for. As for 1), it is still a matter of controversial debate whether the QEH has the status of a postulate or whether it can be independently derived from the underlying system dynamics \cite{Goldstein2009,Durr_et_al_1992,Durr_et_al_2013,Towler_et_al_2011,Valentini1991a,Valentini1991b}. In any case, Valentini showed that if the QEH is not exactly fulfilled, the predictions of BM may differ from those of SQM to a small but potentially measurable extent, which would make the empirical equivalence of BM and SQM experimentally testable \cite{Valentini1991a,Valentini1991b}. Here I shall not be concerned with the QEH, but I will rather investigate a recently revived controversy related to the second critical issue, about measurement. The consensus among most physicists concerned with BM seems to be that potentially deviating predictions of BM are ``harmless'' in that they can be made arbitrarily small and do not lead to inconsistencies or gross empirical disagreement with the standard theory. 
Proponents of BM take it as an advantage of their theory that measurement is not an additional concept fundamentally different from ordinary physical processes, as is the case in SQM, but that measurement is a specially designed but otherwise ordinary physical process that must be explicitly included in the description to obtain empirically valid results. As long as the predictions of BM can be made arbitrarily close to those of SQM by taking into account a suitably designed measurement interaction between the system of interest and a macroscopic measurement device, there would be no reason to doubt the empirical equivalence of both theories. However, quite recently this consensus has been seriously put into question. Kiukas and Werner have published a thorough analysis of CHSH inequalities, showing (amongst other things) that BM leads to rather drastically different predictions for position measurements on stationary entangled states \cite{Kiukas_et_al_2010}. After publication the authors found that their line of reasoning had already been followed by Correggi and Morchio \cite{Correggi_et_al_2002}, but their new analysis provided theoretical insights valuable and novel enough to be published as a major contribution to the understanding of standard quantum mechanics and its relation to hidden-variable theories such as BM and Nelson's theory of stochastic evolution of classical particle configurations \cite{Nelson1966}. If the authors (Kiukas and Werner, as well as Correggi and Morchio) are right in what they say, then the Nelson-Bohm class of theories can no longer be considered as empirically equivalent to SQM, and moreover it would be relatively easy to devise an \emph{experimentum crucis} to decide which one of these theories is empirically valid. Kiukas and Werner leave no doubt which theory they think will pass the test (SQM), and certainly even the hardest followers of BM will acknowledge that in this case SQM can hardly be expected to fail. 
There is a recent defense by D\"urr et al., in which the authors hold that BM yields the same predictions as SQM even for the critical scenario under consideration, if only the ``collapse'' induced by the measurement interaction is correctly taken into account \cite{Durr_et_al_2014}. However, a similar reaction has already been anticipated by Kiukas and Werner, and in their paper they argue against the salutary role of the collapse in such a scenario. There has been no further contribution since, and although both sides presumably consider the issue as settled in favor of their view, the interested neutral public may largely be left with some confusion and the unsatisfactory feeling of witnessing a stalemate situation. Here I aim to remove the confusion and settle the issue in a way that both sides may agree upon. In my view, the main reasons for the apparently conflicting conclusions drawn from the analysis of the critical scenario under consideration (in the following referred to as ``the scenario'') is found in 1) the abstract generality and the sophisticated non-mainstream methodology (C{*}-algebra) of the analysis carried out by Kiukas and Werner, 2) the rather sketchy defense by D\"urr et al., in which the ``collapse'' takes a central role in reconciling the seemingly different predictions, and 3) the differences inherent in the conceptual languages used by either side. I address the first point by re-analyzing the scenario in the framework of either theory, SQM and BM, while using only mainstream mathematics that every informed reader will be familiar with. Secondly, I provide detailed calculations in the framework of BM without making use of any sort of ``collapse''. 
As for the third point, I will seek to clarify to what extent the \emph{physical meaning} that each party in this conflict gives to their formal expressions inherently differs, so that it becomes more evident where exactly the protagonists may \emph{think} they are talking about the same thing although they are not. The result of my re-analysis is that BM is to the same extent empirically equivalent to SQM as it was before the conflict was brought about. \section{Potential sources of confusion} Let us start with some potential sources of confusion that may obscure the view on the conceptual differences between BM and SQM, and which may prevent a proper resolution of the apparent conflict. \subsection{Position} When Bohmians talk about the position of particles, they mean something different from what adherents of SQM mean. The term ``position'' refers in BM to a classical variable, while in SQM it refers to an operator. As John Bell has put it, position in BM represents a \emph{beable} rather than an observable \cite{Bell1986}. The particle \emph{is} at any time at a specific position, whether it is being measured or not. These beables, the particle positions, are the ``hidden variables'' of Bohm's theory. Another beable, according to Bell, is the wavefunction. Whether being measured or not, the system \emph{has} a certain wavefunction. These beables are \emph{subject to}, and are generally \emph{affected by}, measurements, that is, their value is subject to uncontrollable and potentially drastic alteration during a measurement. In contrast to beables, operator-observables represent an \emph{experimental procedure} amounting to the measurement of a specific system property by an external macroscopic apparatus. This is what Bohmians mean when they say that observables are \emph{contextual}, and it is virtually the same idea that had already been brought up by Niels Bohr in support of his principle of \emph{complementarity}. 
In his analysis of the EPR paradox, Bohr writes \cite[pp 699-700]{Bohr1935}: \begin{quote} In fact to measure the position of one of the particles can mean nothing else than to establish a correlation between its behavior and some instrument rigidly fixed to the support which defines the space frame of reference. \end{quote} Beables, on the other hand, are non-contextual. So when a Bohmian utters the phrase ``the particle is at time $t$ at position $\boldsymbol{x}$'', he does not mean that the particle is being measured at position $\boldsymbol{x}$, nor does he mean that the system is in the eigenstate $|\boldsymbol{x}\rangle$ of the position operator. Rather, he is referring to a classical hidden variable $\boldsymbol{x}$ that is not attached to a quantum state or an operator. It is an \emph{additional concept} that simply does not exist in the terminology of SQM, and which can, however, be put into analogy with the ``true state'' of a classical system being described by a probability distribution over phase space. In classical statistical mechanics there is no doubt that the system is at any time in an exact state, which is represented by a point in phase space. Still, at the descriptive level the system state may well be represented by a probability distribution on phase space. These two descriptive elements of statistical mechanics, the probability distribution and the point in phase space, do not rival each other. Rather, the probability distribution simply captures the ignorance of the observer about the true state of the system. In practice, only a few macro-observables like volume, pressure, and temperature, are known to the observer, and these macro-observables fix a probability distribution over phase space that matches the constraints. The probability distribution and the point in phase space are also referred to as the \emph{macrostate} and the \emph{microstate}, respectively. 
It is helpful to recall the analogy of macrostate and microstate with wavefunction and system configuration, respectively, when trying to make sense of statements formulated within BM. The analogy is not perfect, though, because in BM the wavefunction is taken to \emph{objectively exist}, while in statistical mechanics the probability distribution is just a mathematical tool to calculate probabilities. In BM, the wavefunction guides the particles, while it is not asserted in statistical mechanics that the probability distribution in any way ``guides'' the particles. The conceptual stance of SQM is considerably different from that of BM. In SQM, the microstate may rather be put into analogy with the wavefunction, while the macrostate may then be put into analogy with the density matrix. In other words, in SQM the wavefunction already \emph{is} the complete description of the system, there is no further level of detail in the structure of reality, or at least any such further detail would be physically irrelevant and should not be part of a physical theory. \subsection{Collapse} In SQM, measurement is considered fundamentally different from ordinary Schr\"odinger evolution. It is a discontinuous process described by a projection and a subsequent renormalization, also referred to as the ``collapse of the wavefunction'', while the ordinary Schr\"odinger evolution is a continuous process described by a unitary transformation. The ontological status of, and the relation between, these two kinds of process is the issue of the infamous ``measurement problem''. Although hardly anybody seems to like the ``collapse'', it is there in the axioms of SQM, and it does a perfect job when it comes to statistical predictions of measurement outcomes. On the other hand, BM is a collapse-free theory. 
Measurement is just a specially designed but otherwise ordinary physical process involving a short and strong interaction between the system of interest and a macroscopic measurement device. When Bohmians mention the ``collapse'' in their analysis of a given scenario within the framework of BM, then what they really mean is an ``effective'' collapse. The effective collapse refers to a mathematical simplification of the physical description. More precisely, it amounts to cutting away ``empty branches'' from the wavefunction, which are components of a suitably chosen decomposition of the wavefunction that do not guide any particles. Since they do not guide any particles, they have no effect on the outcome of future measurements, and so they can safely be removed from the description, although ontologically they are still there. However, it is always possible to leave the empty branches in the description and ignore the effective collapse altogether, without affecting the statistical predictions of measurement outcomes. The calculations will merely become more complicated, because the empty branches then have to be taken into account everywhere. In this sense, the effective collapse is a helpful, but not an essential, part of the Bohmian methodology. There is, however, a sense in which the effective collapse is more than just a mathematical convenience: It actually yields an \emph{explanation} for the seemingly random, discontinuous jump of the wavefunction introduced by a measurement process. We shall come back to this later on in the Discussion. \subsection{CHSH inequalities} Every hidden-variable theory faces the challenge of CHSH inequalities \cite{Clauser_et_al_1969}, which are generalizations of Bell's inequality \cite{Bell1964}. 
If a candidate theory does not violate a CHSH inequality under circumstances where standard quantum mechanics (SQM) does, then it is not empirically equivalent to SQM, and most probably, at least according to the experimental evidence collected over the past decades, it is not empirically valid. Maximal violations of CHSH inequalities involve correlations between the results of local measurements performed on entangled systems. The first and most famous case of CHSH violation has been investigated by Bell \cite{Bell1964}, where he showed that no local hidden-variable theory can violate a special CHSH inequality, the Bell inequality, so that every such theory is in conflict with SQM. Subsequent experiments \cite{Aspect_et_al_1982} have corroborated the predictions of SQM and thus have ruled out any local hidden-variable theory as empirically false. Ironically, Bell himself was a great defender of Bohmian mechanics (BM), which \emph{is} a hidden-variable theory \cite{Bell1971,Bell1982,Bell1990}. As he and other defenders of BM have argued, the crucial feature that helps BM circumvent impossibility proofs is that it is a \emph{nonlocal} theory. Specifically, the velocity of a given particle generally depends on the instantaneous position of remote particles. Moreover, and of equal importance, Bell pointed out that the operator-observables considered in standard quantum mechanics do not represent properties of the isolated system alone, but rather they depend on the entire measurement arrangement, a feature that has been termed \emph{contextuality.} Since Bohmian mechanics has a joint probability distribution for the positions of particles at all times, the correlation between these positions at distinct times cannot violate a CHSH inequality, and thus BM appears to be in conflict with the standard theory. 
The defenders of BM have argued that their theory nevertheless reproduces the same predictions as SQM once the measurement process is adequately accounted for \cite{Berndl_et_al_1996,Durr_et_al_2014}. Interestingly, this is basically the same route that Bohr also took in his attempt to defend SQM against the challenge posed by the EPR paradox \cite{Bohr1935}. So, it seems the same conflicting intuitions about the role of measurement and the nature of reality keep clashing with each other again and again. \section{The critical scenario} Let us give a brief, non-formal description of the critical scenario where the predictions of SQM and of BM are claimed to differ \cite{Correggi_et_al_2002,Kiukas_et_al_2010}. Alice and Bob, the usual suspects in modern quantum mechanical scenarios, are at remote sites and each one has a particle under their control. The particles are entangled with each other in a local energy eigenbasis, so that the state of the total system is itself in an eigenstate of the total energy. As the particles are now remote from each other, there is no interaction between them any more, or the interaction is so weak that it can be ignored, and hence no further entanglement between them is created. Because the total system is in an energy eigenstate, it is stationary. In Bohmian mechanics this implies that the probability density for the position of the particles does not change over time. Because the particles are energetically decoupled, the marginal densities of Alice's and Bob's particle remain constant in time, too, and there can be no time-dependent correlation between the position of both particles. In contrast, when calculating the expectation value of the product of the position observables at distinct times, which gives the two-time correlation function for the measured positions of the particles, one finds that there is a time-dependent correlation between the two measurements. 
Hence, either BM is empirically false (tacitly assuming that SQM does not fail), or the particles in BM ``are not where they are measured''. Either way, it seems that BM loses out against SQM. \subsection{Analysis in standard quantum mechanics} Alice and Bob each possess an identical copy of a system with a local Hamiltonian $\hat{H}$ and local energy eigenstates $|n\rangle$, so that \begin{equation} \hat{H}|n\rangle=E_{n}|n\rangle. \end{equation} The compound system of Alice and Bob has the total Hamiltonian $\hat{H}_{AB}=\hat{H}_{A}+\hat{H}_{B}$, where $\hat{H}_{A}=\hat{H}\otimes\mathbbm1$ and $\hat{H}_{B}=\mathbbm1\otimes\hat{H}$. Alice and Bob together prepare the system at the initial time $t=0$ in an entangled energy eigenstate \begin{equation} |\Psi_{0}\rangle=\frac{1}{\sqrt{2}}[|0\rangle|1\rangle+|1\rangle|0\rangle],\label{eq:initialstate} \end{equation} so that $\hat{H}_{AB}|\Psi_{0}\rangle=(E_{0}+E_{1})|\Psi_{0}\rangle$. Since $|\Psi_{0}\rangle$ is an energy eigenstate, it is stationary, so in the Schr\"odinger picture we have a time-dependent wavefunction $|\Psi_{t}\rangle=e^{-i(E_{0}+E_{1})t}|\Psi_{0}\rangle$ with an oscillating global phase that is irrelevant and can be ignored. Note that here and in the following we set $\hbar=1$ to simplify the calculations. In the Heisenberg picture, the measurement of an observable $\hat{A}$ at time $t$ is represented by the operator $\hat{A}(t)=\hat{U}^{\dagger}(t)\hat{A}\hat{U}(t)$, so that $\langle\hat{A}\rangle(t)=\langle\Psi_{0}|\hat{A}(t)|\Psi_{0}\rangle$. We can then write the two-time correlation function between any two observables $\hat{A}$ and $\hat{B}$ as \begin{equation} \langle\hat{A}\hat{B}\rangle(t_{1},t_{2})=\langle\Psi_{0}|\hat{A}(t_{1})\hat{B}(t_{2})|\Psi_{0}\rangle. 
\end{equation} At time $t_{1}>0$ Alice measures the local observable $\hat{A}_{A}=\hat{A}\otimes\mathbbm1$ on her particle, so her measurement will obey the statistics of the operator $\hat{A}_{A}(t_{1})=e^{i\hat{H}t_{1}}\,\hat{A}\,e^{-i\hat{H}t_{1}}\otimes\mathbbm1$. At some later time $t_{2}>t_{1}$ Bob measures the local observable $\hat{B}_{B}=\mathbbm1\otimes\hat{B}$, and his measurement will obey the statistics of the operator $\hat{B}_{B}(t_{2})=\mathbbm1\otimes e^{i\hat{H}t_{2}}\,\hat{B}\,e^{-i\hat{H}t_{2}}$. Since Bob's measurement applies to a different subsystem, his and Alice's operators commute for all times $t_{1},t_{2}$, so that the product of both observables obeys the statistics of the common operator \begin{equation} \hat{A}_{A}(t_{1})\hat{B}_{B}(t_{2})=e^{i\hat{H}t_{1}}\,\hat{A}\,e^{-i\hat{H}t_{1}}\otimes e^{i\hat{H}t_{2}}\,\hat{B}\,e^{-i\hat{H}t_{2}}. \end{equation} The expectation value of this operator yields the two-time correlation function \begin{equation} \langle\hat{A}_{A}\hat{B}_{B}\rangle(t_{1},t_{2})=\langle\Psi_{0}|\hat{A}_{A}(t_{1})\hat{B}_{B}(t_{2})|\Psi_{0}\rangle, \end{equation} which yields for the initial state \eqref{eq:initialstate} \begin{equation} \begin{split}\langle\hat{A}_{A}\hat{B}_{B}\rangle(t_{1},t_{2}) & =\Re\left\{ \langle0|\hat{A}|1\rangle\langle1|\hat{B}|0\rangle e^{i\varDelta E(t_{2}-t_{1})}\right\} \\ & \phantom{=}+\frac{1}{2}\left[\langle0|\hat{A}|0\rangle\langle1|\hat{B}|1\rangle+\langle1|\hat{A}|1\rangle\langle0|\hat{B}|0\rangle\right]. \end{split} \label{eq:twotimeAB} \end{equation} So the two-time correlation function oscillates with the temporal distance between the two measurements, with a frequency proportional to the energy difference $\varDelta E=E_{1}-E_{0}$, and its value does not depend on the temporal ordering of the two measurements. 
For the particular case that Alice and Bob both measure the position of their particle, we have $\hat{A}=\hat{B}=\hat{\boldsymbol{x}}$, and the two-time correlation function of their measurements yields \begin{equation} \begin{split}\langle\hat{\boldsymbol{x}}_{A}\hat{\boldsymbol{x}}_{B}\rangle(t_{1},t_{2}) & =|\langle0|\hat{\boldsymbol{x}}|1\rangle|^{2}\cdot\cos(\varDelta E(t_{2}-t_{1}))\\ & \phantom{=}+\langle0|\hat{\boldsymbol{x}}|0\rangle\langle1|\hat{\boldsymbol{x}}|1\rangle, \end{split} \label{eq:twotimeQM} \end{equation} where we understand $\boldsymbol{a}\boldsymbol{b}=\boldsymbol{a}\cdot\boldsymbol{b}$ for vectors $\boldsymbol{a}$ and $\boldsymbol{b}$. Hence, if Alice and Bob compare their results, they will find that they correlate in an oscillatory manner according to the relation above. This time-dependent correlation is merely due to entanglement, as for initial states in the product form $|\Psi_{0}\rangle=|0\rangle|1\rangle$ or $|\Psi_{0}\rangle=|1\rangle|0\rangle$ the correlation function would yield \begin{equation} \langle\hat{\boldsymbol{x}}_{A}\hat{\boldsymbol{x}}_{B}\rangle(t_{1},t_{2})=\langle0|\hat{\boldsymbol{x}}|0\rangle\langle1|\hat{\boldsymbol{x}}|1\rangle. \end{equation} Note that for the equal-time condition $t_{1}=t_{2}=t$, as well as for the periodic cases $t_{2}=t_{1}+kT$, with $k\in\{1,2,\ldots\}$, and \begin{equation} T=\frac{2\pi}{\varDelta E},\label{eq:T} \end{equation} one obtains from \eqref{eq:twotimeQM} \begin{align} \langle\hat{\boldsymbol{x}}_{A}\hat{\boldsymbol{x}}_{B}\rangle(t,t+kT) & =|\langle0|\hat{\boldsymbol{x}}|1\rangle|^{2}+\langle0|\hat{\boldsymbol{x}}|0\rangle\langle1|\hat{\boldsymbol{x}}|1\rangle\\ & =\langle\hat{\boldsymbol{x}}_{A}\hat{\boldsymbol{x}}_{B}\rangle, \end{align} where $\langle\hat{\boldsymbol{x}}_{A}\hat{\boldsymbol{x}}_{B}\rangle=\langle\Psi_{0}|(\hat{\boldsymbol{x}}\otimes\hat{\boldsymbol{x}})|\Psi_{0}\rangle$ is the single-time correlation function at initial time $t=0$. 
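As a purely numerical cross-check of \eqref{eq:twotimeQM} (a sketch of ours, not part of the derivation; all identifiers below are illustrative choices), one can take the two local systems to be truncated harmonic oscillators with $\hbar=m=\omega=1$. Then $\langle0|\hat{\boldsymbol{x}}|1\rangle=1/\sqrt{2}$, $\langle n|\hat{\boldsymbol{x}}|n\rangle=0$ and $\varDelta E=1$, so \eqref{eq:twotimeQM} reduces to $\frac{1}{2}\cos(t_{2}-t_{1})$:

```python
import numpy as np

# Two truncated harmonic oscillators (hbar = m = omega = 1); the entangled
# initial state is |Psi_0> = (|0>|1> + |1>|0>)/sqrt(2).  All identifiers
# below are our own choices for this illustration.
N = 6                                        # truncation dimension per oscillator
E = np.arange(N) + 0.5                       # energy eigenvalues E_n = n + 1/2
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
x = (a + a.T) / np.sqrt(2)                   # position operator x = (a + a^+)/sqrt(2)

def x_heis(t):
    """Heisenberg-picture position operator x(t) = e^{iHt} x e^{-iHt}."""
    phase = np.exp(1j * E * t)
    return np.diag(phase) @ x @ np.diag(phase.conj())

e0, e1 = np.eye(N)[0], np.eye(N)[1]
psi0 = (np.kron(e0, e1) + np.kron(e1, e0)) / np.sqrt(2)

def corr(t1, t2):
    """Two-time correlation <Psi_0| x_A(t1) x_B(t2) |Psi_0>."""
    return psi0.conj() @ np.kron(x_heis(t1), x_heis(t2)) @ psi0

# Closed form: |<0|x|1>|^2 cos(dE (t2-t1)) = cos(t2-t1)/2, since the
# diagonal matrix elements <n|x|n> vanish for the oscillator.
for t1, t2 in [(0.3, 1.1), (0.7, 0.7), (0.2, 0.2 + 2 * np.pi)]:
    assert abs(corr(t1, t2) - 0.5 * np.cos(t2 - t1)) < 1e-10
```

The agreement is exact up to machine precision: $\hat{\boldsymbol{x}}$ couples only neighboring oscillator levels, so the truncation does not affect any matrix element entering the correlation function, and the check confirms that the result depends only on $t_{2}-t_{1}$, as required by \eqref{eq:twotimeQM}.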
\subsection{Analysis in Bohmian mechanics} Let us provide an analysis of the scenario in the framework of BM and calculate the two-time correlation function of the ``true'' particle positions (which do not exist in SQM). See the Appendix for details about the notation and the foundations of BM. The initial quantum state is given by the wavefunction \begin{equation} \Psi_{0}=\frac{1}{\sqrt{2}}\left[\phi_{0}\otimes\phi_{1}+\phi_{1}\otimes\phi_{0}\right], \end{equation} where $\phi_{n}$ are eigenstates of the local Hamiltonian $\hat{H}$, so $\hat{H}\phi_{n}=E_{n}\phi_{n}$. The positions of the particles at time $t$ are precisely determined by the trajectory function $\xi_{t}$, the flow map obtained by integrating the velocity field $j_{t}/\rho_{t}$ along the trajectory, $\xi_{t}(q)=q+\int_{0}^{t}dt'\,(j_{t'}/\rho_{t'})(\xi_{t'}(q))$, so the time-dependent density $\rho_{t}=|\Psi_{t}|^{2}$ can also be written as \begin{equation} \rho_{t}(q)=\int dq'\,\rho_{0}(q')\delta(q-\xi_{t}(q')),\label{eq:densitytraj} \end{equation} where $\rho_{0}=|\Psi_{0}|^{2}$ is the initial density at time $t=0$. For two particles $A$ and $B$, we have $q=(\boldsymbol{x},\boldsymbol{y})$, and so \eqref{eq:densitytraj} becomes \begin{equation} \rho_{t}(\boldsymbol{x},\boldsymbol{y})=\int d^{3}x'\int d^{3}y'\,\rho_{0}(\boldsymbol{x}',\boldsymbol{y}')\delta(\boldsymbol{x}-\boldsymbol{\xi}_{A,t}(\boldsymbol{x}',\boldsymbol{y}'))\delta(\boldsymbol{y}-\boldsymbol{\xi}_{B,t}(\boldsymbol{x}',\boldsymbol{y}')), \end{equation} where $\xi_{t}=(\boldsymbol{\xi}_{A,t},\boldsymbol{\xi}_{B,t})$. 
From the above expression it is straightforward to construct the \emph{two-time density} for the two particles, \begin{equation} \rho_{t_{1},t_{2}}(\boldsymbol{x},\boldsymbol{y})=\int d^{3}x'\int d^{3}y'\,\rho_{0}(\boldsymbol{x}',\boldsymbol{y}')\delta(\boldsymbol{x}-\boldsymbol{\xi}_{A,t_{1}}(\boldsymbol{x}',\boldsymbol{y}'))\delta(\boldsymbol{y}-\boldsymbol{\xi}_{B,t_{2}}(\boldsymbol{x}',\boldsymbol{y}')),\label{eq:twotimedensity} \end{equation} from where we can calculate the two-time correlation function of the position of the two particles as \begin{equation} \langle\boldsymbol{X}_{A}\boldsymbol{X}_{B}\rangle(t_{1},t_{2})=\int d^{3}x\int\,d^{3}y\,\boldsymbol{x}\boldsymbol{y}\,\rho_{t_{1},t_{2}}(\boldsymbol{x},\boldsymbol{y}).\label{eq:twotimerho} \end{equation} Since the initial state \eqref{eq:initialstate} is an energy eigenstate, the time evolution function becomes time-independent, $\xi_{t}(\boldsymbol{x},\boldsymbol{y})=\xi_{0}(\boldsymbol{x},\boldsymbol{y})=(\boldsymbol{x},\boldsymbol{y})$, and so the two-time density \eqref{eq:twotimedensity} becomes \begin{align} \rho_{t_{1},t_{2}}(\boldsymbol{x},\boldsymbol{y}) & =\int d^{3}x'\int d^{3}y'\,\rho_{0}(\boldsymbol{x}',\boldsymbol{y}')\delta(\boldsymbol{x}-\boldsymbol{x}')\delta(\boldsymbol{y}-\boldsymbol{y}')\\ & =\rho_{0}(\boldsymbol{x},\boldsymbol{y}). \end{align} Thus the two-time correlation function \eqref{eq:twotimerho} becomes \begin{align} \langle\boldsymbol{X}_{A}\boldsymbol{X}_{B}\rangle(t_{1},t_{2}) & =\int d^{3}x\int\,d^{3}y\,\boldsymbol{x}\boldsymbol{y}\,|\Psi_{0}(\boldsymbol{x},\boldsymbol{y})|^{2}\\ & =|\phi_{0}^{\dagger}\,\hat{\boldsymbol{x}}\,\phi_{1}|^{2}+(\phi_{0}^{\dagger}\,\hat{\boldsymbol{x}}\,\phi_{0})(\phi_{1}^{\dagger}\,\hat{\boldsymbol{x}}\,\phi_{1}),\label{eq:twotimeBMX} \end{align} where we have used that \begin{equation} \psi^{\dagger}\hat{\boldsymbol{x}}\phi=\int d^{3}x\,\boldsymbol{x}\,\psi^{*}(\boldsymbol{x})\phi(\boldsymbol{x}). 
\end{equation} The result coincides with the prediction \eqref{eq:twotimeQM} of standard quantum mechanics only for the equal-time condition $t_{1}=t_{2}=t$ and for periodic intervals $t_{2}=t_{1}+kT$, with $k\in\{1,2,\ldots\}$ and $T$ given by \eqref{eq:T}. For almost all other time points we have \begin{equation} \langle\boldsymbol{X}_{A}\boldsymbol{X}_{B}\rangle(t_{1},t_{2})\neq\langle\hat{\boldsymbol{x}}_{A}\hat{\boldsymbol{x}}_{B}\rangle(t_{1},t_{2}). \end{equation} So it seems that in spite of Bohm's proof of the empirical equivalence of BM and SQM, the two theories do not yield the same predictions here. Furthermore, Alice and Bob can decide upon the empirical validity of either SQM or BM by repeatedly measuring at two distinct times the position of two particles initially prepared in the entangled energy eigenstate \eqref{eq:initialstate}, communicating their results and calculating the two-time correlation function. \subsection{The hidden collapse} The origin of the discrepancy between the predictions of SQM and of BM is that they are really not about the same quantities. The position operators $\hat{\boldsymbol{x}}_{A}$ and $\hat{\boldsymbol{x}}_{B}$ represent \emph{measured} positions, while the variables $\boldsymbol{X}_{A}$ and $\boldsymbol{X}_{B}$ represent \emph{unmeasured} positions. In the course of each measurement the state of the system is altered and hence a subsequent measurement does not necessarily yield values that coincide with those the system would possess had no preceding measurement been performed. Thus, comparing the expressions \eqref{eq:twotimeQM} and \eqref{eq:twotimerho} is really comparing apples and oranges. In both of the above analyses carried out in the respective frameworks of SQM and BM, the measurement process has not been included as an ordinary physical process corresponding to a unitary operation on the quantum state of the system. 
However, on using operator-observables in the SQM analysis, we have \emph{tacitly} included the measurement process as an external intervention implying a hidden collapse of the wavefunction. We shall now analyze once more the scenario described above in the framework of SQM and reveal the hidden collapse in the calculation of the two-time correlation function \eqref{eq:twotimeAB} between arbitrary discrete observables $\hat{A}$ and $\hat{B}$. The result can then be extended, with some caution, to continuous observables, in particular to position observables. Let us write two discrete observables $\hat{A}$ and $\hat{B}$ in their spectral decomposition, so $\hat{A}=\sum_{a}a\,\hat{\varPi}_{a}$ and $\hat{B}=\sum_{b}b\,\hat{\varPi}_{b}$, with the projections $\hat{\varPi}_{a}$ and $\hat{\varPi}_{b}$ onto the eigenspaces belonging to the eigenvalues $a$ and $b$, respectively. We then write the two-time correlation function \eqref{eq:twotimeAB} as \begin{align} \langle\hat{A}_{A}\hat{B}_{B}\rangle(t_{1},t_{2}) & =\langle\Psi|(\hat{A}(t_{1})\otimes\hat{B}(t_{2}))|\Psi\rangle\\ & =\sum_{a,b}\,ab\,\langle\Psi|\left(\hat{\varPi}_{a}(t_{1})\otimes\hat{\varPi}_{b}(t_{2})\right)|\Psi\rangle. \end{align} Using some basic operator algebra we arrive at \begin{equation} \langle\Psi|\left(\hat{\varPi}_{a}(t_{1})\otimes\hat{\varPi}_{b}(t_{2})\right)|\Psi\rangle=P_{t_{2},t_{1}}(b|a)P_{t_{1}}(a),\label{eq:expecy} \end{equation} where \begin{align} P_{t_{1}}(a) & =\langle\Psi_{t_{1}}|(\hat{\varPi}_{a}\otimes\mathbbm1)|\Psi_{t_{1}}\rangle,\\ |\Psi_{t_{1}}\rangle & =\hat{U}(t_{1})|\Psi\rangle,\\ P_{t_{2},t_{1}}(b|a) & =\langle\Psi_{t_{1},a,t_{2}}|(\mathbbm1\otimes\hat{\varPi}_{b})|\Psi_{t_{1},a,t_{2}}\rangle,\\ |\Psi_{t_{1},a,t_{2}}\rangle & =\hat{U}(t_{2}-t_{1})\frac{1}{\sqrt{P_{t_{1}}(a)}}(\hat{\varPi}_{a}\otimes\mathbbm1)\hat{U}(t_{1})|\Psi\rangle. 
\end{align} The expression $P_{t_{1}}(a)$ represents the probability that Alice finds her particle at time $t_{1}$ having the property $a$. For $t_{1}<t_{2}$ the vector $|\Psi_{t_{1},a,t_{2}}\rangle$ can be interpreted as the state resulting from the following sequence of operations: The initial state $|\Psi\rangle$ freely evolves up to time $t_{1}$, is then projected onto the eigenspace of $a$ and subsequently renormalized, and then again freely evolves up to time $t_{2}$. These operations correspond to Alice detecting at time $t_{1}$ her particle with the property $a$. Hence, the function $P_{t_{2},t_{1}}(b|a)$ represents the conditional probability that Bob finds his particle at time $t_{2}$ having property $b$ given that Alice previously had found her particle at time $t_{1}$ having property $a$. Consequently, the right-hand side of \eqref{eq:expecy} can be understood as the joint probability $P_{t_{1},t_{2}}(a,b)$ that Alice finds her particle at the earlier time $t_{1}$ having property $a$, and Bob finds his particle at time $t_{2}$ having property $b$, so that \begin{equation} P_{t_{2},t_{1}}(b|a)P_{t_{1}}(a)=P_{t_{1},t_{2}}(a,b). \end{equation} Now, since the measurements are performed on different subsystems, they commute, and thus the temporal ordering of the measurements actually does not matter. 
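The factorization \eqref{eq:expecy} can also be verified numerically. For a non-interacting Hamiltonian $\hat{H}_{AB}=\hat{H}_{A}+\hat{H}_{B}$ the Heisenberg-picture expectation of the product of the two commuting projections coincides with the probability obtained by explicitly simulating the collapse sequence: evolve, project, renormalize, evolve again, project again. The following sketch (ours; the system, state and projections are randomly generated, and no claim beyond the identity itself is intended) illustrates this:

```python
import numpy as np

rng = np.random.default_rng(1)
dA = dB = 3                                   # local Hilbert space dimensions

def rand_herm(d):
    """Random Hermitian matrix (our stand-in for a generic Hamiltonian)."""
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (M + M.conj().T) / 2

HA, HB = rand_herm(dA), rand_herm(dB)
H = np.kron(HA, np.eye(dB)) + np.kron(np.eye(dA), HB)   # no interaction term

w, V = np.linalg.eigh(H)
def U(t):
    """Time evolution e^{-iHt} via the spectral decomposition of H."""
    return (V * np.exp(-1j * w * t)) @ V.conj().T

psi = rng.normal(size=dA * dB) + 1j * rng.normal(size=dA * dB)
psi /= np.linalg.norm(psi)                    # random initial state

ua = np.linalg.eigh(rand_herm(dA))[1][:, 0]   # random unit vectors defining
ub = np.linalg.eigh(rand_herm(dB))[1][:, 0]   # rank-1 projections Pi_a, Pi_b
Pa = np.kron(np.outer(ua, ua.conj()), np.eye(dB))
Pb = np.kron(np.eye(dA), np.outer(ub, ub.conj()))

t1, t2 = 0.4, 1.3

# Left-hand side: Heisenberg-picture expectation <Psi| Pi_a(t1) Pi_b(t2) |Psi>
lhs = psi.conj() @ (U(t1).conj().T @ Pa @ U(t1)) \
                 @ (U(t2).conj().T @ Pb @ U(t2)) @ psi

# Right-hand side: explicit collapse sequence P_{t2,t1}(b|a) P_{t1}(a)
psi_t1 = U(t1) @ psi
P_a = np.linalg.norm(Pa @ psi_t1)**2                  # P_{t1}(a)
psi_col = U(t2 - t1) @ (Pa @ psi_t1) / np.sqrt(P_a)   # collapsed, re-evolved
P_b_given_a = np.linalg.norm(Pb @ psi_col)**2         # P_{t2,t1}(b|a)
rhs = P_b_given_a * P_a

assert abs(lhs - rhs) < 1e-10
```

With an interaction term in $\hat{H}_{AB}$ the two Heisenberg projections would in general no longer commute and the simple product form would fail, which is precisely why the scenario assumes energetically decoupled particles.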
For temporally reversed measurements $t_{2}<t_{1}$, calculations analogous to those above reveal that \begin{equation} \langle\Psi|\left(\hat{\varPi}_{a}(t_{1})\otimes\hat{\varPi}_{b}(t_{2})\right)|\Psi\rangle=P_{t_{2}}(b)P_{t_{1},t_{2}}(a|b),\label{eq:expecx} \end{equation} with \begin{align} P_{t_{2}}(b) & =\langle\Psi_{t_{2}}|(\mathbbm1\otimes\hat{\varPi}_{b})|\Psi_{t_{2}}\rangle,\\ |\Psi_{t_{2}}\rangle & =\hat{U}(t_{2})|\Psi\rangle,\\ P_{t_{1},t_{2}}(a|b) & =\langle\Psi_{t_{2},b,t_{1}}|(\hat{\varPi}_{a}\otimes\mathbbm1)|\Psi_{t_{2},b,t_{1}}\rangle,\\ |\Psi_{t_{2},b,t_{1}}\rangle & =\hat{U}(t_{1}-t_{2})\frac{1}{\sqrt{P_{t_{2}}(b)}}(\mathbbm1\otimes\hat{\varPi}_{b})\hat{U}(t_{2})|\Psi\rangle. \end{align} The expression $P_{t_{2}}(b)$ represents the probability that Bob finds his particle at time $t_{2}$ having property $b$. The vector $|\Psi_{t_{2},b,t_{1}}\rangle$ can be interpreted as the state resulting from a sequence of operations reverse to those given further above: Bob first performs a measurement on his particle at time $t_{2}$ and finds it having property $b$, then the state freely evolves up to time $t_{1}$. Hence, the function $P_{t_{1},t_{2}}(a|b)$ represents the conditional probability that Alice finds her particle at time $t_{1}$ having property $a$ under the condition that Bob previously had found his particle at time $t_{2}$ having property $b$. Consequently, also the right-hand side of \eqref{eq:expecx} can be understood as the joint probability $P_{t_{1},t_{2}}(a,b)$ that Bob finds his particle at the earlier time $t_{2}$ having property $b$ and Alice finds her particle at time $t_{1}$ having property $a$, so that \begin{equation} P_{t_{1},t_{2}}(a|b)P_{t_{2}}(b)=P_{t_{1},t_{2}}(a,b). 
\end{equation} Lastly, on the equal-time condition $t_{1}=t_{2}=t$ we have \begin{equation} \hat{\varPi}_{a}(t)\otimes\hat{\varPi}_{b}(t)=\hat{U}^{\dagger}(t)(\hat{\varPi}_{a}\otimes\hat{\varPi}_{b})\hat{U}(t), \end{equation} and thus we directly obtain the joint probability that Alice and Bob find their particles at time $t$ having properties $a$ and $b$, respectively, \begin{equation} \langle\Psi|\left(\hat{\varPi}_{a}(t)\otimes\hat{\varPi}_{b}(t)\right)|\Psi\rangle=P_{t}(a,b).\label{eq:expecxt} \end{equation} The above calculations have been carried out using operators with a discrete spectrum, so what about continuous observables like position? The spectral decomposition of the position operator, \begin{equation} \hat{\boldsymbol{x}}=\int d^{3}x\,\boldsymbol{x}\,\hat{\varPi}_{x}, \end{equation} involves the improper projections $\hat{\varPi}_{x}=|\boldsymbol{x}\rangle\langle\boldsymbol{x}|$, which do not fulfill the crucial property of idempotency, $\hat{\varPi}_{x}^{2}=\hat{\varPi}_{x}$, but rather a generalized version of it, $\hat{\varPi}_{x}\hat{\varPi}_{x'}=\hat{\varPi}_{x}\delta(\boldsymbol{x}-\boldsymbol{x}')$. This reflects the fact that continuous observables are really idealizations. In practice it is not possible to perform an exact measurement of a continuous observable like position. 
Instead, position would be measured by, say, an array of detectors, each one indicating whether or not the particle is located within a small but finite spatial region $X_{i}\subset\mathbbm R^{3}$, so that the corresponding operator would have a discrete spectrum and can be decomposed as \begin{equation} \hat{\boldsymbol{x}}_{\varDelta}=\sum_{i}\boldsymbol{x}_{i}\,\hat{\varPi}_{i}, \end{equation} where the $\boldsymbol{x}_{i}$ are the centers of the regions $X_{i}$, where $\varDelta$ corresponds to the spatial separation of the centers, and where \begin{equation} \hat{\varPi}_{i}=\int_{X_{i}}d^{3}x\,\hat{\varPi}_{x} \end{equation} are projections onto the respective regions $X_{i}$. In the theoretical limit $\varDelta\rightarrow0$ one would obtain the exact position operator $\hat{\boldsymbol{x}}_{\varDelta}\rightarrow\hat{\boldsymbol{x}}$, but in practice $\varDelta$ is bounded from below due to technological limitations. With these precautions in mind we can extend the results of the previous section, which were obtained for discrete observables $\hat{A}$ and $\hat{B}$, to the idealized case of continuous observables like position. To summarize, for distinct times $t_{1}\neq t_{2}$, irrespective of the temporal ordering, we have seen that there is a hidden collapse built into the two-time correlation function obtained in SQM, and this collapse generates a correlation depending on the time interval between the two measurements. Since the analysis carried out in the framework of BM using the position variables $\boldsymbol{X}_{A}$ and $\boldsymbol{X}_{B}$ did not include any measurement, and therefore no alteration of the system state induced by measurement, the obtained two-time correlation function cannot be expected to coincide with the two-time correlation function obtained in SQM using operators $\hat{\boldsymbol{x}}_{A}$ and $\hat{\boldsymbol{x}}_{B}$. This is not just a mathematical subtlety. 
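To make the convergence $\hat{\boldsymbol{x}}_{\varDelta}\rightarrow\hat{\boldsymbol{x}}$ concrete, consider a one-dimensional sketch (ours; the Gaussian density and all parameters are purely illustrative) in which each detector region of width $\varDelta$ reports its center $x_{i}$, so the measured expectation value is $\sum_{i}x_{i}\int_{X_{i}}|\psi|^{2}$:

```python
import numpy as np

# 1D position density |psi(x)|^2 on a fine grid; a Gaussian packet centered
# at x0 serves as a purely illustrative example.
L = 10.0
grid = np.linspace(-L, L, 40001)
dx = grid[1] - grid[0]
x0, sigma = 0.37, 1.0
rho = np.exp(-(grid - x0)**2 / (2 * sigma**2))   # |psi(x)|^2, unnormalized
rho /= np.sum(rho) * dx                          # normalize to unit probability

exact = np.sum(grid * rho) * dx                  # ideal expectation <x>

def coarse_expectation(Delta):
    """<x_Delta>: every point is reported as the center of its region X_i."""
    idx = np.floor((grid + L) / Delta)
    centers = -L + (idx + 0.5) * Delta           # center x_i of region X_i
    return np.sum(centers * rho) * dx

errs = [abs(coarse_expectation(D) - exact) for D in (2.0, 0.5, 0.1)]
assert errs[0] > errs[2] and errs[2] < 1e-3      # finer detectors, smaller error
```

For this smooth density the coarse-graining error falls off rapidly with $\varDelta$, which justifies treating $\hat{\boldsymbol{x}}_{\varDelta}$ as a faithful stand-in for the ideal position operator once $\varDelta$ is small compared to the width of the wavepacket.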
The two-time correlation function obtained in SQM involves \emph{measured} values, while the two-time correlation function obtained in BM involves \emph{unmeasured} values, that is, values for the position of the particles \emph{without} the disturbance of a measurement. On the equal-time condition the two-time correlation functions obtained in SQM and BM coincide. The reason for this coincidence is that when Alice and Bob measure their particles at the same time, the collapses introduced by their measurements merge into a single collapse. Hence, there can be no influence of the collapse introduced by the measurement of one party on the measurement result of the other party. \subsection{Re-analysis in Bohmian mechanics including measurement} According to the considerations of the previous section, BM should yield the same predictions as SQM when position measurements by Alice and Bob are included in the analysis. Again, let us perform the re-analysis first with discrete observables. The results may then be extended, with the already mentioned precautions, to the case of continuous observables like position. See the Appendix for a general treatment of measurement in BM. Alice and Bob measure discrete observables $\hat{A}$ and $\hat{B}$ at times $t_{1}$ and $t_{2}$, respectively, by using measurement devices $M_{A}$ and $M_{B}$ having pointer states $\eta$ and $\mu$, respectively. Hence, the total system is initially in the state \begin{equation} \Psi_{0}=\psi_{0}\otimes\eta_{0}\otimes\mu_{0}, \end{equation} where \begin{equation} \psi_{0}=\frac{1}{\sqrt{2}}\left[\phi_{0}\otimes\phi_{1}+\phi_{1}\otimes\phi_{0}\right]\label{eq:initialBohm} \end{equation} is the initial state of the system of interest already introduced in \eqref{eq:initialstate}, and $\eta_{0},\mu_{0}$ are initial pointer states. Alice and Bob want to measure the observables $\hat{A}$ and $\hat{B}$ at distinct times $t_{1}$ and $t_{2}$, respectively, where $t_{1}<t_{2}$. 
The system freely evolves from $t=0$ up to the time $t_{1}-T_{M}$ when Alice's measurement begins, with $T_{M}$ being a small but finite time interval representing the duration of the measurement, so that the free evolution of the system during a period of length $T_{M}$ can be neglected. The state immediately before Alice's measurement then reads \begin{equation} \Psi_{t_{1}-T_{M}}=\psi_{t_{1}-T_{M}}\otimes\eta_{t_{1}-T_{M}}\otimes\mu_{t_{1}-T_{M}}, \end{equation} with \begin{align} \psi_{t_{1}-T_{M}} & =e^{-i\hat{H}_{AB}(t_{1}-T_{M})}\psi_{0}\\ & =e^{-i(E_{0}+E_{1})(t_{1}-T_{M})}\psi_{0}. \end{align} As for the pointer state $\eta$, it must have been prepared in such a way that when the measurement begins it is in the ``ready'' state $\eta_{R}$, so that $\eta_{t_{1}-T_{M}}=\eta_{R}$, and therefore \begin{equation} \Psi_{t_{1}-T_{M}}=\psi_{0}\otimes\eta_{R}\otimes\mu_{t_{1}-T_{M}}, \end{equation} where we have removed the irrelevant global phase factor $e^{-i(E_{0}+E_{1})(t_{1}-T_{M})}$. Now Alice performs her measurement of the observable $\hat{A}$, resulting in the post-measurement state \begin{equation} \Psi_{t_{1}}=\sum_{a}\psi_{a}\otimes\eta_{a}\otimes\mu_{t_{1}-T_{M}}, \end{equation} where $\psi_{a}=(\hat{\varPi}_{a}\otimes\mathbbm1)\psi_{0}$ are unnormalized eigenvectors of $\hat{A}$ with corresponding eigenvalues $a$, and with $\sum_{a}\|\psi_{a}\|^{2}=1$. The pointer states $\eta_{a}$ corresponding to different outcomes ``$a$'' have by construction (see Appendix) approximately zero overlap, so that for $a\neq a'$ \begin{align} \eta_{a}(z_{A})\eta_{a'}(z_{A}) & \approx0, \end{align} for all configurations $z_{A}$ of Alice's measurement device. Furthermore, the pointer states $\eta_{a}$ have by construction (see Appendix) almost all of their support within the respective regions $Z_{a}$ corresponding to the outcomes ``$a$'', so \begin{align} \int_{Z_{a}}dz_{A}|\eta_{a}(z_{A})|^{2} & \approx1. 
\end{align} These relations have to be kept in mind for later use. Now, the system freely evolves up to a later time $t_{2}-T_{M}$ when Bob's measurement begins, reaching the state \begin{equation} \Psi_{t_{2}-T_{M}}=\sum_{a}\psi_{t_{1},a,t_{2}-T_{M}}\otimes\eta_{t_{1},a,t_{2}-T_{M}}\otimes\mu_{R}, \end{equation} with \begin{align} \psi_{t_{1},a,t_{2}-T_{M}} & =e^{-i\hat{H}_{AB}(t_{2}-T_{M}-t_{1})}\psi_{a}\\ & \approx e^{-i(\hat{H}_{A}+\hat{H}_{B})(t_{2}-t_{1})}(\hat{\varPi}_{a}\otimes\mathbbm1)\psi_{0}\\ & =\left(e^{-i\hat{H}_{A}(t_{2}-t_{1})}\hat{\varPi}_{a}\otimes\mathbbm1\right)\frac{1}{\sqrt{2}}\left(e^{-iE_{1}(t_{2}-t_{1})}\phi_{0}\phi_{1}+e^{-iE_{0}(t_{2}-t_{1})}\phi_{1}\phi_{0}\right), \end{align} where we have exploited the shortness of the measurement duration $T_{M}$. As for the pointer states we have \begin{align} \eta_{t_{1},a,t_{2}-T_{M}} & =e^{-i\hat{H}_{M_{A}}(t_{2}-T_{M}-t_{1})}\eta_{a},\\ \mu_{R} & =e^{-i\hat{H}_{M_{B}}(t_{2}-T_{M})}\mu_{0}, \end{align} where we have used that Bob's pointer state must have been prepared in such a way that before his measurement begins, the pointer is in the ``ready'' state $\mu_{R}$. Now Bob performs his measurement of the observable $\hat{B}$ resulting in the post-measurement state \begin{equation} \Psi_{t_{2}}=\sum_{a,b}\psi_{t_{1},a,t_{2},b}\otimes\eta_{a,t_{2}}\otimes\mu_{b},\label{eq:psiuncollapsed} \end{equation} where \begin{align} \psi_{t_{1},a,t_{2},b} & =(\mathbbm1\otimes\hat{\varPi}_{b})\psi_{t_{1},a,t_{2}-T_{M}}\\ & =\left(e^{-i\hat{H}_{A}(t_{2}-t_{1})}\hat{\varPi}_{a}\otimes\hat{\varPi}_{b}\right)\frac{1}{\sqrt{2}}\left(e^{-iE_{1}(t_{2}-t_{1})}\phi_{0}\phi_{1}+e^{-iE_{0}(t_{2}-t_{1})}\phi_{1}\phi_{0}\right)\label{eq:psitfinal} \end{align} are unnormalized eigenvectors of both $\hat{A}$ and $\hat{B}$ with corresponding eigenvalues $a$ and $b$, respectively. 
The state \eqref{eq:psiuncollapsed} is an uncollapsed state resulting from a continuous unitary evolution describing the measurements of both Alice and Bob, as well as the free evolution between the measurements. The resulting state is a sum of branches, \begin{align} \Psi_{t_{2}} & =\sum_{a,b}\Psi_{t_{1},a,t_{2},b}, \end{align} where each branch corresponds to a different combination of outcomes of Alice's and Bob's measurements, \begin{equation} \Psi_{t_{1},a,t_{2},b}=\psi_{t_{1},a,t_{2},b}\otimes\eta_{a,t_{2}}\otimes\mu_{b}. \end{equation} The pointer states $\mu_{b}$ corresponding to different outcomes ``$b$'' have by construction approximately zero overlap, and they have almost all of their support within the respective regions $Z_{b}$ corresponding to the respective outcomes ``$b$''. Now, as each macroscopic measurement device involves a large number of particles, the huge number of internal degrees of freedom induces a further rapid decrease of the overlap between the time-evolved pointer states $\eta_{a,t_{2}}=\hat{U}_{M_{A}}(t_{2}-t_{1})\eta_{a}$. The same argument is used in decoherence theory to explain why branches of the wavefunction that are coupled to an external macroscopic reservoir decohere. As a consequence of such decoherence, the branches stay approximately orthogonal to each other for all temporal distances $t_{2}-t_{1}>0$, so for $(a,b)\neq(a',b')$ we have \begin{align} \Psi_{t_{1},a,t_{2},b}^{*}(q)\Psi_{t_{1},a',t_{2},b'}(q) & \approx0\label{eq:orthog} \end{align} for all configurations $q$, and therefore \begin{equation} |\Psi_{t_{2}}(q)|^{2}\approx\sum_{a,b}|\Psi_{t_{1},a,t_{2},b}(q)|^{2}.\label{eq:Psit2} \end{equation} Now consider the following history of events. Say, at time $t_{1}$ Alice obtains the result ``$a$''. 
Then the trajectory of the system crosses at time $t_{1}$ the region $Q_{a}=\mathcal{Q}_{S}\times Z_{a}\times\mathcal{Q}_{M_{B}}$, where $\mathcal{Q}_{S}$ is the configuration space of the system of interest, $Z_{a}$ is the region in the configuration space $\mathcal{Q}_{M_{A}}$ of Alice's measurement device corresponding to the result ``$a$'', and $\mathcal{Q}_{M_{B}}$ is the configuration space of Bob's measurement device. Thus, at the later time $t_{2}$ the trajectory crosses the region $Q_{a,t_{2}}=\mathcal{Q}_{S}\times Z_{a,t_{2}}\times\mathcal{Q}_{M_{B}}$, where $Z_{a,t_{2}}$ is the region obtained by propagating every point in $Z_{a}$ from time $t_{1}$ to time $t_{2}$ along its unique trajectory. We therefore have \begin{equation} \int_{Z_{a,t_{2}}}dz\,|\eta_{a,t_{2}}(z)|^{2}=\int_{Z_{a}}dz\,|\eta_{a}(z)|^{2}.\label{eq:etaint} \end{equation} Now say that Bob obtains at time $t_{2}$ the result ``$b$'', then the system trajectory crosses at time $t_{2}$ the region \begin{equation} Q_{a,b}=\mathcal{Q}_{S}\times Z_{a,t_{2}}\times Z_{b}, \end{equation} where $Z_{b}$ is the region in the configuration space of Bob's measurement device corresponding to the result ``$b$''.
The joint probability for the occurrence of the outcomes $a$ and $b$ at the times $t_{1}$ and $t_{2}$, respectively, thus reads \begin{align} P_{t_{1},t_{2}}(a,b) & =\int_{Q_{a,b}}dq\,|\Psi_{t_{2}}(q)|^{2}\\ & \approx\int_{Q_{a,b}}dq\sum_{a',b'}|\Psi_{t_{1},a',t_{2},b'}(q)|^{2}\\ & \approx\int_{Q_{a,b}}dq\,|\Psi_{t_{1},a,t_{2},b}(q)|^{2}\\ & =\int d^{3}x\int d^{3}y\,|\psi_{t_{1},a,t_{2},b}(\boldsymbol{x},\boldsymbol{y})|^{2}\int_{Z_{a,t_{2}}}dz_{A}|\eta_{a,t_{2}}(z_{A})|^{2}\int_{Z_{b}}dz_{B}|\mu_{b}(z_{B})|^{2}\\ & =\int d^{3}x\int d^{3}y\,|\psi_{t_{1},a,t_{2},b}(\boldsymbol{x},\boldsymbol{y})|^{2}\int_{Z_{a}}dz_{A}|\eta_{a}(z_{A})|^{2}\int_{Z_{b}}dz_{B}|\mu_{b}(z_{B})|^{2}\\ & \approx\int d^{3}x\int d^{3}y\,|\psi_{t_{1},a,t_{2},b}(\boldsymbol{x},\boldsymbol{y})|^{2}, \end{align} where we have used \eqref{eq:Psit2} and \eqref{eq:etaint}, as well as the fact that the pointer states $\eta_{a}$ and $\mu_{b}$ have almost all of their support in $Z_{a}$ and $Z_{b}$, respectively. Using \eqref{eq:initialBohm} and \eqref{eq:psitfinal} we obtain \begin{equation} \begin{split}P_{t_{1},t_{2}}(a,b) & \approx\Re\left\{ (\phi_{0}^{\dagger}\hat{\varPi}_{a}\phi_{1})(\phi_{1}^{\dagger}\hat{\varPi}_{b}\phi_{0})e^{i(E_{1}-E_{0})(t_{2}-t_{1})}\right\} \\ & \phantom{=}+\frac{1}{2}\left[(\phi_{0}^{\dagger}\hat{\varPi}_{a}\phi_{0})(\phi_{1}^{\dagger}\hat{\varPi}_{b}\phi_{1})+(\phi_{1}^{\dagger}\hat{\varPi}_{a}\phi_{1})(\phi_{0}^{\dagger}\hat{\varPi}_{b}\phi_{0})\right].
\end{split} \label{eq:twotimeP} \end{equation} Thus, the two-time correlation function of the two operators $\hat{A}$ and $\hat{B}$ obtained by Bohmian mechanics reads \begin{align} \langle\hat{A}\hat{B}\rangle(t_{1},t_{2}) & =\sum_{a,b}ab\,P_{t_{1},t_{2}}(a,b)\\ & \begin{aligned}\approx\, & \Re\left\{ (\phi_{0}^{\dagger}\hat{A}\phi_{1})(\phi_{1}^{\dagger}\hat{B}\phi_{0})e^{i(E_{1}-E_{0})(t_{2}-t_{1})}\right\} \\ & +\frac{1}{2}\left[(\phi_{0}^{\dagger}\hat{A}\phi_{0})(\phi_{1}^{\dagger}\hat{B}\phi_{1})+(\phi_{1}^{\dagger}\hat{A}\phi_{1})(\phi_{0}^{\dagger}\hat{B}\phi_{0})\right], \end{aligned} \end{align} which approximately coincides with the prediction \eqref{eq:twotimeAB} of standard quantum mechanics. We may then extend this result, with the usual precautions mentioned further above, to continuous observables, so that for $\hat{A}=\hat{B}=\hat{\boldsymbol{x}}$ we obtain \begin{align} & \begin{aligned}\langle\hat{\boldsymbol{x}}_{A}\hat{\boldsymbol{x}}_{B}\rangle(t_{1},t_{2})\approx\, & |\phi_{0}^{\dagger}\hat{\boldsymbol{x}}\phi_{1}|^{2}\cos\left(\varDelta E(t_{2}-t_{1})\right)\\ & +(\phi_{0}^{\dagger}\hat{\boldsymbol{x}}\phi_{0})(\phi_{1}^{\dagger}\hat{\boldsymbol{x}}\phi_{1}), \end{aligned} \label{eq:twotimeBM} \end{align} which approximately coincides with the prediction \eqref{eq:twotimeQM} of SQM. Formally, the final result \eqref{eq:twotimeBM} can be evaluated for any combination of values for $t_{1}$ and $t_{2}$. However, the derivation above requires the temporal ordering $t_{1}<t_{2}$. As can easily be verified, an analogous calculation carried out with reversed temporal ordering $t_{2}<t_{1}$ leads to the same end result \eqref{eq:twotimeBM}. Finally, let us give a derivation for the equal-time case $t_{1}=t_{2}=t$. Alice and Bob simultaneously measure their observables at time $t$, so the quantum state of the system immediately before their measurement reads \begin{equation} \Psi_{t-T_{M}}=\psi_{0}\otimes\eta_{R}\otimes\mu_{R}.
\end{equation} Immediately after their measurement, the state becomes \begin{equation} \Psi_{t}=\sum_{a,b}\psi_{a,b}\otimes\eta_{a}\otimes\mu_{b}, \end{equation} where \begin{equation} \psi_{a,b}=(\hat{\varPi}_{a}\otimes\hat{\varPi}_{b})\psi_{0}.\label{eq:psitfinal2} \end{equation} Hence, the resulting state is a sum of branches, \begin{align} \Psi_{t} & =\sum_{a,b}\Psi_{a,b}, \end{align} where each branch corresponds to a different combination of outcomes of Alice's and Bob's measurements, \begin{equation} \Psi_{a,b}=\psi_{a,b}\otimes\eta_{a}\otimes\mu_{b}. \end{equation} The pointer states $\eta_{a}$ and $\mu_{b}$ have each by construction approximately zero overlap for different $a$ and $b$, respectively, so the branches are approximately orthogonal to each other, thus for $(a,b)\neq(a',b')$ we have \begin{align} \Psi_{a,b}^{*}(q)\Psi_{a',b'}(q) & \approx0,\label{eq:orthog-1} \end{align} and therefore \begin{equation} |\Psi_{t}(q)|^{2}\approx\sum_{a,b}|\Psi_{a,b}(q)|^{2}.\label{eq:Psit2-1} \end{equation} Say at time $t$ Alice obtains the result ``$a$'' and Bob obtains the result ``$b$''. Then the trajectory of the system crosses at time $t$ the region $Q_{a,b}=\mathcal{Q}_{S}\times Z_{a}\times Z_{b}$. Calculations analogous to those carried out further above yield the probability for the joint occurrence of the measurement results ``$a$'' and ``$b$'' as \begin{equation} P_{t}(a,b)\approx\int d^{3}x\int d^{3}y\,|\psi_{a,b}(\boldsymbol{x},\boldsymbol{y})|^{2}. \end{equation} For the initial state \eqref{eq:initialBohm} and using \eqref{eq:psitfinal2} we obtain \begin{equation} \begin{split}P_{t}(a,b) & \approx\Re\left\{ (\phi_{0}^{\dagger}\hat{\varPi}_{a}\phi_{1})(\phi_{1}^{\dagger}\hat{\varPi}_{b}\phi_{0})\right\} \\ & \phantom{=}+\frac{1}{2}\left[(\phi_{0}^{\dagger}\hat{\varPi}_{a}\phi_{0})(\phi_{1}^{\dagger}\hat{\varPi}_{b}\phi_{1})+(\phi_{1}^{\dagger}\hat{\varPi}_{a}\phi_{1})(\phi_{0}^{\dagger}\hat{\varPi}_{b}\phi_{0})\right].
\end{split} \label{eq:twotimeBM-1} \end{equation} Thus, the two-time correlation function of the two operators $\hat{A}$ and $\hat{B}$ obtained by Bohmian mechanics for equal times $t_{1}=t_{2}=t$ reads \begin{align} \langle\hat{A}\hat{B}\rangle(t,t) & =\sum_{a,b}ab\,P_{t}(a,b)\\ & \begin{aligned}\approx\, & \Re\left\{ (\phi_{0}^{\dagger}\hat{A}\phi_{1})(\phi_{1}^{\dagger}\hat{B}\phi_{0})\right\} \\ & +\frac{1}{2}\left[(\phi_{0}^{\dagger}\hat{A}\phi_{0})(\phi_{1}^{\dagger}\hat{B}\phi_{1})+(\phi_{1}^{\dagger}\hat{A}\phi_{1})(\phi_{0}^{\dagger}\hat{B}\phi_{0})\right], \end{aligned} \end{align} which approximately coincides with the prediction \eqref{eq:twotimeAB} of standard quantum mechanics for equal times $t_{1}=t_{2}=t$. Again replacing the discrete observables $\hat{A}$ and $\hat{B}$ with the continuous position operator $\hat{\boldsymbol{x}}$, we obtain \begin{equation} \begin{split}\langle\hat{\boldsymbol{x}}_{A}\hat{\boldsymbol{x}}_{B}\rangle(t,t) & \approx\,|\phi_{0}^{\dagger}\hat{\boldsymbol{x}}\phi_{1}|^{2}+(\phi_{0}^{\dagger}\hat{\boldsymbol{x}}\phi_{0})(\phi_{1}^{\dagger}\hat{\boldsymbol{x}}\phi_{1}),\end{split} \label{eq:equaltimeBM} \end{equation} which approximately coincides with the prediction \eqref{eq:twotimeQM} for equal times. Concluding, for all temporal distances and orderings between the measurements of Alice and Bob we obtain in BM approximately the same predictions for the two-time correlation function as obtained in SQM using operator algebra. The quality of the approximation depends on the spatial separation between the wave packets of the pointer states corresponding to the different measurement outcomes. In the idealized case of perfect separation the predictions of SQM and BM fully coincide.
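As an independent sanity check on the algebra above, the passage from the joint probabilities in spectral form, $P_{t_{1},t_{2}}(a,b)\approx\Re\{(\phi_{0}^{\dagger}\hat{\varPi}_{a}\phi_{1})(\phi_{1}^{\dagger}\hat{\varPi}_{b}\phi_{0})e^{i(E_{1}-E_{0})(t_{2}-t_{1})}\}+\tfrac{1}{2}[\ldots]$, to the correlation function can be verified numerically in a finite-dimensional toy model. The following sketch is not part of the derivation; the dimension, the random observables, the random seed, and the value taken for the phase $e^{i(E_{1}-E_{0})(t_{2}-t_{1})}$ are all arbitrary choices. It confirms that the joint probabilities sum to one and that $\sum_{a,b}ab\,P_{t_{1},t_{2}}(a,b)$ equals the closed-form expression in terms of $\hat{A}$ and $\hat{B}$:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4                                   # Hilbert-space dimension (arbitrary)

def rand_herm(n):
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (m + m.conj().T) / 2

# two orthonormal states phi0, phi1 (standing in for the energy eigenstates)
q, _ = np.linalg.qr(rng.normal(size=(d, 2)) + 1j * rng.normal(size=(d, 2)))
phi0, phi1 = q[:, 0], q[:, 1]

A, B = rand_herm(d), rand_herm(d)
ea, Va = np.linalg.eigh(A)              # eigenvalues a and eigenvectors
eb, Vb = np.linalg.eigh(B)
Pa = [np.outer(Va[:, i], Va[:, i].conj()) for i in range(d)]  # projectors Pi_a
Pb = [np.outer(Vb[:, j], Vb[:, j].conj()) for j in range(d)]  # projectors Pi_b

phase = np.exp(1j * 1.3 * 0.7)          # e^{i (E1-E0)(t2-t1)}, values arbitrary

def me(u, O, v):                        # matrix element u^dagger O v
    return u.conj() @ O @ v

# joint probabilities P(a,b) in spectral form
P = np.array([[np.real(me(phi0, Pa[i], phi1) * me(phi1, Pb[j], phi0) * phase)
               + 0.5 * np.real(me(phi0, Pa[i], phi0) * me(phi1, Pb[j], phi1)
                               + me(phi1, Pa[i], phi1) * me(phi0, Pb[j], phi0))
               for j in range(d)] for i in range(d)])

corr_sum = sum(ea[i] * eb[j] * P[i, j] for i in range(d) for j in range(d))
corr_closed = (np.real(me(phi0, A, phi1) * me(phi1, B, phi0) * phase)
               + 0.5 * np.real(me(phi0, A, phi0) * me(phi1, B, phi1)
                               + me(phi1, A, phi1) * me(phi0, B, phi0)))
```

The agreement is exact up to floating-point error, since the step from projectors to operators is pure linearity; the approximations of the derivation enter only through the pointer-state overlaps, not through this algebraic step.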
The measurement process takes a small but finite amount of time, so there is no abrupt alteration of the quantum state, and since the particles are guided by the wavefunction, there is no abrupt alteration of the particle trajectories either. Note that in the analysis we did not make use of the \emph{effective collapse}, which is a mathematically convenient, but not essential, part of the Bohmian methodology. Instead, we used the uncollapsed wavefunction and evaluated the final expression for the joint probability of two-time measurement outcomes with the associated history of events. \section{Discussion} I have shown that if the measurement process is adequately accounted for, Bohmian mechanics is able to approximately reproduce the predictions of standard quantum mechanics also for the case of two-time correlation functions involving entangled states. The fact that these are only approximations does not have to be regarded as a flaw of Bohmian mechanics but can also be viewed as a gain in realism. The quality of the approximation depends on the spatial separation between the wave packets of the pointer states corresponding to the different measurement outcomes. On the other hand, the operator algebra used in SQM tacitly assumes a perfect ability to distinguish between the eigenvalues of the measured operator. If the separation of the pointer wave packets could be made perfect then the predictions of BM would fully coincide with those of SQM. However, perfectly separated wave packets would require the pointer wavefunctions to have only finite support on configuration space, and such wavefunctions typically have infinite average kinetic energy due to discontinuities of the wavefunction and its derivatives at the boundary, which is clearly an unrealistic scenario. Let us explicitly go through some critical statements in the challenging paper by Kiukas and Werner and see whether the issues can be resolved.
The paper starts with the statement \begin{quote} It is well-known that the position operators of a particle at different times do not in general commute. This is the reason why the notion of trajectories cannot be applied to quantum particles. \end{quote} This statement is emblematic of the fundamental disagreement between the adherents and opponents of BM. At the core of this disagreement lies a different stance regarding what a physical theory has to provide. For some adherents of SQM an operator-observable is simply the mathematical representation of a physical quantity of the system. Therefore, if an operator-observable evolves in time in such a way that it does not even commute with itself at a later time, then there can be no representation of the objective history of that quantity. For the Bohmian, as well as for some proponents of SQM such as Bohr and Heisenberg, an operator-observable just represents an \emph{experimental procedure} to unveil the value of a physical quantity. If this representation does not commute with itself at different times, then this only means that it is impossible to \emph{measure} the quantity from outside the system without affecting its evolution. Indeed, when the position of a particle is measured then, due to the Heisenberg uncertainty principle, the momentum of the particle is altered in an uncontrollable and potentially drastic fashion, which consequently also alters the future position of the particle. A strong adherent of SQM might take a radically positivist stance by saying that there simply \emph{is} nothing beyond what can be measured. So, when the position of a particle cannot be measured without affecting the trajectory in an uncontrollable way, then there is no such thing as a trajectory. A weaker form of positivism would be to acknowledge that there \emph{might be something} beyond measurement, but to hold that it is \emph{irrelevant} for a physical theory.
A defender of BM would consider both of these positivist stances unsatisfactory. To him, a physical theory must provide a complete picture of nature \emph{as it is}, and not \emph{as it appears} to some observer with a measurement device. These different views are plainly incompatible. At least, however, one may acknowledge that it is \emph{per se} neither inconsistent nor necessarily empirically false to consider a particle trajectory beyond measurement as physically real. The authors further write: \begin{quote} In {[}Bohmian mechanics and Nelson's theory{]} the positions at all times have a joint distribution, and therefore cannot violate a Bell inequality. Hence their predictions must be in conflict with quantum mechanics and, most likely, with experiment. \end{quote} As I have tried to substantiate, the mentioned predictions are only \emph{apparently} in conflict with each other. The reason is that the predictions of SQM involve operator-observables representing \emph{measured} positions of the particles, while the seemingly analogous calculation in BM involves classical variables representing the \emph{unmeasured} positions. Therefore, the two predictions are not about the same things and cannot be played off against each other. Whether or not it is reasonable or physically sound to consider such a thing as an ``unmeasured position'' is not at stake here. Moreover, I have shown that the predictions of BM approximately agree with those of SQM once the measurement process has suitably been taken into account. The fact that the agreement is only approximate does not suffice to claim a conflict between the theories, as the difference between the predictions can be made arbitrarily small in theory, and in practice it is only limited by the state of measurement technology.
Later on, Kiukas and Werner pick up, and attempt to defeat, a crucial argument of the defenders of BM: \begin{quote} The simplest position is to include the collapse of the wave function into the theory {[}citations{]}. Then the first measurement instantaneously collapses the wave function. So if agreement with quantum mechanics is to be kept, the probability distribution changes suddenly. There is no way to fit this with continuous trajectories: When the guiding field collapses, the particles must jump. While the glaring non-locality of this process may be seen as just another instance of implicate order, it introduces an element of unexplained randomness, and demotes the Bohm equation (or Nelson's Fokker-Planck equation) from its role as the fundamental dynamical equation for position. \end{quote} Since BM is a collapse-free theory, it might be surprising to hear that the defenders of the theory invoke, of all things, the \emph{collapse} to explain away conflicting predictions. In their response to the challenging article, D\"urr et al. write \begin{quote} It is easy to see that $\psi$ governs the evolution of the actual configuration $X$ of the subsystem in the usual Bohmian way, and that it collapses upon measurement of the subsystem according to the usual quantum mechanical rules, with probabilities given by the Born rule. \end{quote} Here, the authors refer to the so-called \emph{conditional wavefunction}, which is given by \begin{equation} \psi_t(x)\sim\Psi_t(x,Y_t), \end{equation} where $Y_t$ is the actual configuration of the measurement device at time $t$. The cited argument makes an appeal to the ``collapse'' at a crucial point. This collapse, however, is not the collapse as it is usually understood in the context of SQM. It is not the ``Proze{\ss} 1'' introduced by von Neumann in his monumental textbook \cite{Neumann1932}: a discontinuous projection and renormalization of the wavefunction.
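The role of the conditional wavefunction can be made concrete in a small numerical toy model: for a two-branch wavefunction $\Psi(x,y)=\psi_{0}(x)\eta_{0}(y)+\psi_{1}(x)\eta_{1}(y)$ with well-separated pointer packets $\eta_{0},\eta_{1}$, evaluating $\Psi$ at the actual pointer configuration $Y$ returns, up to a constant factor, just the branch containing $Y$. All functions and parameter values in the following sketch are illustrative assumptions, not taken from the literature:

```python
import math

def gauss(u, mu, s):
    # unnormalized Gaussian amplitude (normalization is irrelevant here)
    return math.exp(-(u - mu) ** 2 / (2 * s ** 2))

# two branches psi_a(x) * eta_a(y) with pointer packets far apart in y
def psi0(x): return gauss(x, -1.0, 0.5)
def psi1(x): return gauss(x, 2.0, 0.8)
def eta0(y): return gauss(y, 0.0, 1.0)
def eta1(y): return gauss(y, 30.0, 1.0)

def Psi(x, y):
    return psi0(x) * eta0(y) + psi1(x) * eta1(y)

Y = 29.5                                  # actual pointer configuration, inside branch 1
xs = [-3.0 + 0.1 * i for i in range(80)]
cond = [Psi(x, Y) for x in xs]            # conditional wavefunction, up to a factor
ratio = [c / psi1(x) for c, x in zip(cond, xs)]
spread = max(ratio) - min(ratio)          # ~0: cond is proportional to branch 1
```

In the notation above, $\psi_{t}(x)\sim\Psi_{t}(x,Y_{t})$ picks out the branch actually occupied by the pointer; the other branch contributes only through the exponentially small tail of $\eta_{0}$ at $Y$.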
The collapse mentioned above by defenders of BM is no sudden, indeterministic ``jump'' of either the wavefunction or the particles. The measurement process takes a small but finite amount of time, during which the wavefunction of the total system continuously and deterministically changes, and so do the particle positions as they are guided by the wavefunction. Discontinuity and randomness are nowhere at play in the measurement process. The probabilistic element is introduced not at the time of measurement but long before, due to insufficient knowledge about the initial conditions of the entire experiment. The result of the continuous measurement process is a wavefunction with one particular branch, which is the only branch occupied by particles, which is associated with the conditional wavefunction, and which behaves just \emph{as if} it were the result of an ordinary projection of the pre-measurement wavefunction, hence the result of a ``Proze{\ss} 1''. The squared norm of this branch just equals the probability value provided by Born's rule. So, in the interpretation of Bohmian mechanics, the probability that the particles in fact \emph{do} occupy that branch, and hence the probability of actually obtaining the particular measurement result associated with that branch, is just the one predicted by SQM. So, what is the ontological status of the conditional wavefunction? Being a factorial part of one branch of a really existing wavefunction (in the interpretation of BM), it really exists. However, for the same reason, it is not a fundamental but rather a \emph{derived} entity; in the same manner that, for example, the equator, being part of the earth, is a really existing but derived entity. There is no actual reason to consider the conditional wavefunction ``more real'' (whatever that means) than any of the other branches of the wavefunction.
It is, however, more \emph{physically relevant}, as it is associated with the only branch of the wavefunction that governs the future behavior of the particles from the moment that the measurement is finished. So, it is reasonable, although not \emph{necessary}, to neglect the other branches of the wavefunction and perform all post-measurement calculations using only the conditional wavefunction. Put differently: it \emph{appears} to the observer as if all other branches have vanished and only the branch picked out by the conditional wavefunction is left over. Since that branch coincides with a projection of the pre-measurement wavefunction on just the subspace associated with the measurement result actually obtained, we thus arrive at an \emph{explanation} of the seemingly discontinuous, random ``collapse'' of the wavefunction induced by a measurement process. In the framework of BM, the collapse of the wavefunction, albeit just an ``effective'' collapse involving only the physically relevant part of the wavefunction, is a theorem, not a postulate. To repeat, it is not the entire wavefunction that collapses randomly and discontinuously, but it is just the \emph{physically relevant part} of the wavefunction that collapses smoothly and deterministically. Randomness enters the description only \emph{subjectively}, due to our insufficient knowledge about which part of the wavefunction \emph{really} is the physically relevant one. In citing a defense strategy of the Bohmians, Kiukas and Werner try to turn the argument against the arguers: \begin{quote} So {[}Bohmians argue that{]} the two-time correlations computed from the 2-particle ensemble of trajectories are never observed anyhow, and hence pose no threat to the theory. The downside of this argument is that it also applies to single time measurements, i.e., the agreement between Bohm-Nelson configurational probabilities and quantum ones is equally irrelevant.
The naive version of Bohmian theory holds ``position'' to be special, even ``real'', while all other measurement outcomes can only be described indirectly by including the measurement devices. Saving the Nelson-Bohm theory's failure regarding two-time two-particle correlations by going contextual also for position just means that the particle positions are declared unobservable according to the theory itself, hence truly hidden. \end{quote} The predictions in the framework of BM involving the actual positions $\boldsymbol{X}_{A}$ and $\boldsymbol{X}_{B}$ without the measurement procedures \eqref{eq:twotimeBMX}, as well as the predictions involving the measurement procedures \eqref{eq:equaltimeBM}, both coincide with the predictions performed in the framework of SQM involving the position operators $\hat{\boldsymbol{x}}_{A}$ and $\hat{\boldsymbol{x}}_{B}$ \eqref{eq:twotimeQM} for equal times $t_{1}=t_{2}=t$. Thus, when Alice and Bob simultaneously measure, they find their particles at the positions where BM says they are ``truly'' located. So I find no basis for the claim that equal-time measurements pose a problem to BM and that the true particle positions are always unobservable. It is only for distinct-time measurements that the position measured second does not coincide with its unmeasured counterpart, and this is not surprising because the first measurement on one particle disturbs the quantum state of the total system in a nonlocal way so as to affect also the course of the trajectory of the other particle. BM is a nonlocal hidden-variables theory, and a local measurement on one part of the system may have immediate consequences on the particle trajectories in a remote part of the system. One might find this kind of nonlocal realism disturbing or even unacceptable, but if BM were a \emph{local} hidden-variables theory then it would immediately fall prey to all sorts of CHSH inequalities, and would therefore easily be shown to be empirically false.
BM might be weird in its insistence on position as a ``special'' or ``real'' quantity, but as far as the analysis presented here shows, it cannot be accused of inconsistency or of empirical inadequacy. Bohmian particle positions \emph{can} be observed, though each observation has consequences on the outcome of future measurements, even for particles that are very far away. \section{Appendix} \subsection{Notation} We use the following convenient notation in the context of Bohmian mechanics. Greek letters such as $\phi$ denote a wavefunction (quantum state), $\phi(q)$ denotes the complex value of the wavefunction $\phi$ at the point $q$ (configuration), $\phi^{*}$ denotes the complex conjugate of the wavefunction, and $\phi^{\dagger}$ denotes the conjugate transpose of the wavefunction with respect to the inner product, so that $\phi^{\dagger}\psi=\langle\phi,\psi\rangle$. Hatted letters such as $\hat{A}$ denote linear operators on the wavefunction space (Hilbert space), and $\hat{A}^{\dagger}$ denotes the adjoint of $\hat{A}$ with respect to the inner product, so that $\langle\psi,\hat{A}^{\dagger}\phi\rangle=\langle\hat{A}\psi,\phi\rangle$. Observables are self-adjoint operators, so that $\hat{A}$ is an observable exactly if $\hat{A}^{\dagger}=\hat{A}$. Bold letters such as $\boldsymbol{x}$ denote three-dimensional column vectors, so that $\boldsymbol{x}=(x_{1},x_{2},x_{3})^{T}$. Arguments of wavefunctions are rank-two tensors, so that for $\phi(q)$ we have \begin{equation} q=(\boldsymbol{x}_{1},\ldots,\boldsymbol{x}_{N})=\left(\begin{array}{ccc} x_{11} & \cdots & x_{1N}\\ x_{21} & \cdots & x_{2N}\\ x_{31} & \cdots & x_{3N} \end{array}\right), \end{equation} which is an element of the tensor space $\mathbbm R^{3\times N}$. The dot product between $3\times N$-tensors $a,b\in\mathbbm R^{3\times N}$ is then defined by \begin{equation} a\cdot b=\sum_{n=1}^{N}\boldsymbol{a}_{n}\cdot\boldsymbol{b}_{n}=\sum_{n=1}^{N}\sum_{k=1}^{3}a_{kn}b_{kn}.
\end{equation} Finally, the infinitesimal volume element of the space $\mathbbm R^{3\times N}$ is denoted by $dq=d^{3}x_{1}\cdots d^{3}x_{N}$, so that the integration of a function $f$ over some region $Q=X_{1}\times\cdots\times X_{N}$ with $X_{n}\subset\mathbbm R^{3\times1}$ is written out as \begin{equation} \int_{Q}dq\,f(q)=\int_{X_{1}}d^{3}x_{1}\cdots\int_{X_{N}}d^{3}x_{N}\,f(\boldsymbol{x}_{1},\ldots,\boldsymbol{x}_{N}). \end{equation} The tensor notation of system configurations has the advantage that the particle index and the index for the spatial dimension are kept separate, so that these different concepts are not conflated with each other, which is mathematically convenient and also addresses a criticism put forward by Monton \cite{Monton2002} against wavefunction realism. \subsection{Foundations of Bohmian mechanics} A closed system of $N$ spinless particles is at any time completely described by two physically significant mathematical entities: the configuration $q=(\boldsymbol{x}_{1},\ldots,\boldsymbol{x}_{N})$ of particle positions, and the wavefunction $\Psi$ that guides the particles along their way. Denoting the time parameter by $t\in\mathbbm R$, the temporal evolution of the system is described by the trajectory $\Psi_{t}$ of the wavefunction through the Hilbert space $\mathcal{H}=L^{2}(\mathbbm R^{3\times N})$, and by the trajectory $q_{t}$ of the configuration through the configuration space $\mathcal{Q}=\mathbbm R^{3\times N}$.
Both trajectories obey a first-order differential equation: The wavefunction trajectory obeys the \emph{Schr\"odinger equation} \begin{equation} i\frac{d}{dt}\Psi_{t}=\hat{H}\Psi_{t},\label{eq:schroedinger} \end{equation} and the trajectory of the configuration obeys the \emph{guiding equation} \begin{equation} \frac{d}{dt}q_{t}=\frac{j_{t}}{\rho_{t}},\label{eq:guidingeq} \end{equation} where $j_{t}=(\boldsymbol{j}_{1,t},\ldots,\boldsymbol{j}_{N,t})$ is defined by \begin{equation} \boldsymbol{j}_{n,t}=\frac{\hbar}{2m_{n}i}\left(\Psi_{t}^{*}\boldsymbol{\nabla}_{n}\Psi_{t}-\Psi_{t}\boldsymbol{\nabla}_{n}\Psi_{t}^{*}\right),\label{eq:j} \end{equation} and where $\rho_{t}$ is defined by \begin{equation} \rho_{t}=|\Psi_{t}|^{2}.\label{eq:rho} \end{equation} Since the differential equations \eqref{eq:schroedinger} and \eqref{eq:guidingeq} are of first order in time, they have a unique solution for every valid initial condition. More precisely, for every well-behaved initial wavefunction $\Psi_{0}$ at time $t=0$ there is a unique trajectory $\Psi_{t}$ that is formally obtained by applying the unitary time evolution operator $\hat{U}(t)=e^{-i\hat{H}t}$, so that \begin{equation} \Psi_{t}=\hat{U}(t)\Psi_{0}. \end{equation} Similarly, for every initial configuration $q_{0}\in\mathcal{Q}$ for which $\rho_{0}(q_{0})\neq0$, there is a unique trajectory $q_{t}$ that is formally obtained by applying the \emph{trajectory function} $\xi_{t}$, defined as the solution of $\xi_{t}(q_{0})=q_{0}+\int_{0}^{t}dt'\,(j_{t'}/\rho_{t'})(\xi_{t'}(q_{0}))$, so that \begin{equation} q_{t}=\xi_{t}(q_{0}). \end{equation} Hence, there is a concrete path through space that the particles take, and which is determined by the trajectory function $\xi_{t}$ applied to the initial configuration $q_{0}$ at $t=0$.
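To illustrate the guiding equation \eqref{eq:guidingeq} and the trajectory function, consider the textbook example of a freely spreading one-dimensional Gaussian packet (units $\hbar=m=1$, initial width $\sigma_{0}$), for which the Bohmian trajectories are known in closed form, $x(t)=x_{0}\sqrt{1+t^{2}/4\sigma_{0}^{4}}$. The following minimal sketch (all parameter values are arbitrary) computes $v=j/\rho$ from the wavefunction by finite differences and integrates the guiding equation, recovering the closed-form trajectory:

```python
import cmath

SIGMA0 = 1.0  # initial packet width (units hbar = m = 1)

def psi(x, t):
    # freely spreading Gaussian packet (unnormalized; normalization cancels in j/rho)
    s = SIGMA0**2 + 0.5j * t
    return cmath.exp(-x * x / (4 * s)) / cmath.sqrt(s)

def velocity(x, t, h=1e-5):
    # guiding equation v = j/rho with j = Im(psi* dpsi/dx), via central differences
    dpsi = (psi(x + h, t) - psi(x - h, t)) / (2 * h)
    p = psi(x, t)
    return (p.conjugate() * dpsi).imag / abs(p) ** 2

def trajectory(x0, t_end, dt=1e-3):
    # integrate dx/dt = v(x, t) with the midpoint rule
    x, t = x0, 0.0
    for _ in range(round(t_end / dt)):
        xm = x + 0.5 * dt * velocity(x, t)
        x += dt * velocity(xm, t + 0.5 * dt)
        t += dt
    return x

x0, T = 1.5, 2.0
x_num = trajectory(x0, T)
x_exact = x0 * (1 + T**2 / (4 * SIGMA0**4)) ** 0.5  # known closed-form trajectory
```

Since the velocity field rescales every trajectory by the same factor $\sigma(t)/\sigma_{0}$, the quantum equilibrium density $|\Psi_{t}|^{2}$ is transported exactly onto itself, in accordance with equivariance.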
The path $\boldsymbol{x}_{n}(t)$ of an individual particle $n$ can be extracted from the trajectory function $\xi_{t}=(\boldsymbol{\xi}_{1,t},\ldots,\boldsymbol{\xi}_{N,t})$ by fetching the components corresponding to that particle, so that $\boldsymbol{x}_{n}(t)=\boldsymbol{\xi}_{n,t}(q_{0})$. These features make Bohmian mechanics a fully deterministic theory. However, there is a probabilistic element introduced into the theory in a manner analogous to how probability is introduced into classical mechanics. The observer does not possess the full information about the true initial configuration of the system, but rather has to resort to a probability density. According to the \emph{quantum equilibrium hypothesis}, the initial configuration is distributed according to the initial density $\rho_{0}=|\Psi_{0}|^{2}$. The Schr\"odinger dynamics allows one to conclude that $\rho_{t}=|\Psi_{t}|^{2}$ is the probability density for all times $t$, a feature referred to as \emph{equivariance}. Consequently, the probability to find the system configuration at time $t$ within some region $Q\subset\mathcal{Q}$ is given by \begin{equation} P_{t}(Q)=\int_{Q}dq\,\rho_{t}(q). \end{equation} The functions $\rho_{t}$ and $j_{t}$ can be shown to obey the \emph{continuity equation} \begin{equation} \frac{d}{dt}\rho_{t}+\nabla\cdot j_{t}=0,\label{eq:conteq} \end{equation} so that $j_{t}$ takes the role of a \emph{probability current}. This concludes our brief review of the foundations of Bohmian mechanics. Note that in contrast to standard quantum mechanics, the concept of \emph{measurement} is not an element of the foundations. Rather, measurement is considered as a specially designed but otherwise ordinary physical process that involves both a system of interest and a macroscopic measurement apparatus. Note further that because the guiding equation \eqref{eq:guidingeq} is not local in the position space $\mathbbm R^{3}$, the theory exhibits \emph{quantum nonlocality}.
That is, the velocity of each particle generally depends on the instantaneous position of remote particles. This, together with the fact that in contrast to the wavefunction, the positions of the particles are assumed to be unknown to the observer, makes Bohmian mechanics a \emph{nonlocal hidden-variables theory}. \subsection{Measurement in Bohmian mechanics} In contrast to standard quantum mechanics, there is no separate postulate for measurements in Bohmian mechanics, because a measurement is regarded as a specially designed but otherwise ordinary physical process that involves a short and strong interaction between the system of interest and a macroscopic measurement device involving a large number of particles. The interaction causes the measurement device to change its initial configuration into one out of several macroscopically discernible configurations, each one representing an outcome of the measurement corresponding to a value of the observable to be measured. There is no discontinuous ``collapse of the wavefunction'' but rather a short but continuous unitary evolution of the wavefunction. During that evolution, the system of interest becomes entangled with the measurement device, creating a sum of ``branches'' associated with different potential measurement outcomes. Since the particles of the measurement device are at every instant at a precise position, they occupy exactly one of the branches, and so there is only one unique actual outcome of the measurement. Let us briefly review how this is modeled mathematically.
During a short measurement period $T_{M}$ the system of interest $S$ is coupled to a measurement device $M$ by a strong interaction $\hat{W}_{SM}$, so that the unperturbed Hamiltonian can be neglected, \begin{equation} \hat{H}_{S}+\hat{H}_{M}+\hat{W}_{SM}\approx\hat{W}_{SM}.\label{eq:HW} \end{equation} Before and after the measurement period, the interaction term is zero, so that the system of interest and the measurement device evolve independently. The shortness of the measurement period $T_{M}$ can be more precisely defined by the requirement that the free evolution of the system of interest during a period of length $T_{M}$ can be neglected, that is \begin{equation} e^{-i\hat{H}_{S}T_{M}}\approx\mathbbm1.\label{eq:tmshort} \end{equation} The state of the total system is given by the wavefunction $\Psi=\Psi(x,z)$, where $x=(\boldsymbol{x}_{1},\ldots,\boldsymbol{x}_{N})$ is the variable for the configuration of the system of interest, and $z=(\boldsymbol{z}_{1},\ldots,\boldsymbol{z}_{M})$ is the variable for the configuration of the measurement device. As the system of interest is assumed to be microscopic and the measurement device is assumed to be macroscopic, we have $N\ll M$. The configuration spaces of these systems are $\mathcal{Q}_{S}=\mathbbm R^{3\times N}$ and $\mathcal{Q}_{M}=\mathbbm R^{3\times M}$, respectively, and their volume is measured by the infinitesimal volume elements $dx=d^{3}x_{1}\cdots d^{3}x_{N}$ and $dz=d^{3}z_{1}\cdots d^{3}z_{M}$, respectively. The total configuration space is given by $\mathcal{Q}=\mathcal{Q}_{S}\times\mathcal{Q}_{M}=\mathbbm R^{3\times(N+M)}$, and its elements are the configurations $q=(x,z)$. Let us give a paradigmatic example with an observable $\hat{A}=\sum_{a}a\,\hat{\varPi}_{a}$ with discrete eigenvalues $a$ and projections $\hat{\varPi}_{a}$ onto the corresponding eigenspaces. 
With $\eta_{R}$ being the ``ready'' state of the measurement device and $\psi_{a}$ being an eigenstate of $\hat{A}$, the measurement interaction induces for each $a$ the transition \begin{equation} \psi_{a}\otimes\eta_{R}\rightarrow\psi_{a}\otimes\eta_{a},\label{eq:mprocess} \end{equation} where $\eta_{a}$ are macroscopically discernible ``pointer states'', which means that they have almost no spatial overlap, \begin{equation} \eta_{a}(z)\eta_{a'}(z)\approx0\quad\text{for }a\neq a'.\label{eq:nooverlap} \end{equation} Because of \eqref{eq:nooverlap}, and because the pointer states are normalized, there are non-overlapping regions $Z_{a}$ in the pointer space $\mathcal{Q}_{M}$, \begin{align} Z_{a}\cap Z_{a'} & =\emptyset\quad\text{for }a\neq a', \end{align} so that each $\eta_{a}$ has almost all of its support in a corresponding region $Z_{a}$, \begin{equation} \int_{Z_{a}}dz\,|\eta_{a}(z)|^{2}\approx1,\label{eq:quasisupp} \end{equation} and therefore approximately vanishes outside of $Z_{a}$, \begin{equation} \eta_{a}(z\notin Z_{a})\approx0.\label{eq:quasivanish} \end{equation} Let us call $Z_{a}$ an \emph{effective support} of $\eta_{a}$. If the actual configuration of the measurement device comes to lie within the region $Z_{a}$ then this is taken to indicate that ``$a$'' is the outcome of the measurement. Furthermore, the pointer states during free evolution should be \emph{macroscopically stable}, that is, when the pointer configuration comes to lie within a region $Z_{a}$ after measurement, then during a subsequent free evolution it should stay within that region. However, as we will see below, the feature of macroscopic stability is not a strictly necessary requirement.
With these settings, an arbitrary state $\psi$ of the system of interest, together with the initial state $\eta_{R}$ of the measurement device, evolves into a sum of ``branches'' corresponding to the measurement device indicating different eigenvalues of $\hat{A}$, \begin{equation} \left(\sum_{a}\psi_{a}\right)\otimes\eta_{R}\rightarrow\sum_{a}\left(\psi_{a}\otimes\eta_{a}\right), \end{equation} where $\psi_{a}=\hat{\varPi}_{a}\psi$ are (unnormalized) eigenstates of $\hat{A}$ corresponding to eigenvalues $a$. A simple example for such a process \cite{Neumann1932,Bohm1952a,Everett1957} is given by the interaction \begin{equation} \hat{W}_{SM}=g\hat{A}\hat{p}_{z}, \end{equation} where $g$ is a sufficiently large coupling constant and $\hat{p}_{z}$ is the momentum operator conjugate to the configuration operator $\hat{z}$ of the measurement device. Because of \eqref{eq:HW} the state of the total system after measurement reads \begin{align} \hat{U}(T_{M})\psi\otimes\eta_{R} & =\sum_{a}e^{-igaT_{M}\hat{p}_{z}}\hat{\varPi}_{a}\psi\otimes\eta_{R}\\ & =\sum_{a}\psi_{a}\otimes\eta_{a}, \end{align} where the functions \begin{equation} \eta_{a}(z)=\eta_{R}(z-gaT_{M}) \end{equation} have their effective support within the respective regions \begin{equation} Z_{a}=\{z\mid z-gaT_{M}\in Z\}, \end{equation} where $Z$ is an effective support of the initial device state $\eta_{R}$. Depending on the measurement duration $T_{M}$ and on the separation of the eigenvalues $a$, the coupling constant $g$ must be chosen large enough so that condition \eqref{eq:nooverlap} is met. Directly after measurement the wavefunction $\Psi'$ of the total system is a sum of branches, \begin{equation} \Psi'=\sum_{a}\Psi'_{a},\label{eq:sumbranch} \end{equation} with each branch \begin{equation} \Psi'_{a}=\hat{\varPi}_{a}\psi\otimes\eta_{a}\label{eq:branch} \end{equation} representing a different potential outcome. However, the configuration of the measurement device can only occupy one of these branches.
So what is the probability $P(a)=\text{Prob}(\overline{z}\in Z_{a})$ that the actual configuration $\overline{z}$ of the measurement device comes to lie within the region $Z_{a}$ indicating the outcome ``$a$''? Because of \eqref{eq:nooverlap} we have \begin{equation} |\Psi'(x,z)|^{2}\approx\sum_{a}|\hat{\varPi}_{a}\psi(x)|^{2}|\eta_{a}(z)|^{2}, \end{equation} and thus \begin{align} P(a) & =\int dx\int_{Z_{a}}dz|\Psi'(x,z)|^{2}\\ & \approx\int dx\int_{Z_{a}}dz\sum_{a'}|\hat{\varPi}_{a'}\psi(x)|^{2}|\eta_{a'}(z)|^{2}\\ & \approx\int dx\,|\hat{\varPi}_{a}\psi(x)|^{2}\\ & =\|\hat{\varPi}_{a}\psi\|^{2}, \end{align} where \eqref{eq:nooverlap}, \eqref{eq:quasisupp} and \eqref{eq:quasivanish} have been used. The prediction of Bohmian mechanics thus approximately coincides with the value given by the Born rule of standard quantum mechanics. The quality of the approximation depends on how well the pointer states $\eta_{a}$ are separated in space, which in turn depends on the separation of the eigenvalues $a$, as well as on the strength and duration of the measurement interaction. Bohmian mechanics is a collapse-free theory, so we may calculate the results of all subsequent measurements by using the time-evolved version of the uncollapsed wavefunction \eqref{eq:sumbranch}. So for $t>t_{M}$, where $t_{M}$ is the time when the measurement is completed, the wavefunction of the total system is given by \begin{equation} \Psi_{t}=\hat{U}(t-t_{M})\Psi'.\label{eq:uncollapsed} \end{equation} The pointer states of a reasonably functioning measurement apparatus should be macroscopically stable, so if the particles only occupy one branch at time $t_{M}$, they will stay on that branch for $t>t_{M}$. 
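The quality of this approximation can be illustrated numerically. The following Python sketch is our own toy illustration, not part of the original argument: it models a two-outcome measurement with Gaussian pointer densities $|\eta_{a}(z)|^{2}$ whose centers are separated by many widths, and checks that the Bohmian probability $P(a)$ obtained by integrating the total configuration-space density over the region $Z_{a}$ reproduces the Born weights $\|\hat{\varPi}_{a}\psi\|^{2}$ (the weights 0.3 and 0.7 below are hypothetical).

```python
import math

# Toy numerical check (our illustration): for well-separated Gaussian pointer
# densities |eta_a(z)|^2 centered at z_a = g*a*T_M, the Bohmian probability
#   P(a) = Int_{Z_a} dz  sum_{a'} ||Pi_{a'} psi||^2 |eta_{a'}(z)|^2
# approximately equals the Born weight ||Pi_a psi||^2.

def pointer_density(z, center, sigma=1.0):
    """Normalized |eta_a(z)|^2 for a Gaussian pointer state."""
    return math.exp(-(z - center) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

born = {0: 0.3, 1: 0.7}       # hypothetical Born weights ||Pi_a psi||^2
center = {0: 0.0, 1: 10.0}    # pointer positions, separation >> sigma
boundary = 5.0                # Z_0 = (-inf, 5), Z_1 = (5, +inf)

def P(a, cutoff=50.0, n=20000):
    """Midpoint-rule integral of the total density over the region Z_a."""
    lo, hi = (-cutoff, boundary) if a == 0 else (boundary, cutoff)
    dz = (hi - lo) / n
    return sum(
        sum(born[b] * pointer_density(lo + (i + 0.5) * dz, center[b]) for b in born)
        for i in range(n)
    ) * dz

print(P(0), P(1))  # both values approximately reproduce the Born weights
```

With the separation of ten widths used here, the overlap term is of order $10^{-7}$, so the agreement with the Born rule holds to far better than the integration accuracy.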
However, even if the pointer states were \emph{not} macroscopically stable, the huge number of internal degrees of freedom of a macroscopic measurement device would, as a result of friction and Brownian motion, make a future overlap of the branches so vanishingly small, and the induced quantum interference effects so highly unlikely, that for all practical purposes they can safely be ignored. In other words, the branches \emph{decohere}. Now, since the guiding equation \eqref{eq:guidingeq} is local with respect to the configuration space (although it is \emph{not} local with respect to position space!), the ``empty'' branches of the wavefunction have no influence on the course of the trajectory. Thus, it is a matter of mathematical convenience to ignore the empty branches and only consider, for $t>t_{M}$, an \emph{effectively collapsed} wavefunction given by \begin{equation} \Psi_{a,t}=\frac{1}{\sqrt{P(a)}}\hat{U}(t-t_{M})\Psi'_{a},\label{eq:effcollaps} \end{equation} where the normalization has been carried out to preserve the norm of the wavefunction. The ``effective collapse'' in Bohmian mechanics is a mathematical simplification that preserves the predictions for the outcomes of future measurements given the results of past measurements. Physically, the wavefunction remains uncollapsed and it is always possible to use the uncollapsed wavefunction for the prediction of future outcomes. \bibliographystyle{spphys}
\section{\large{High $Q^2$~nucleon form factor experiments}} Nucleon structure investigation using high energy electron scattering has been a successful field where many discoveries have been made since the 1955 observation of the proton size~\cite{RH1955}. The status of current knowledge of the nucleon electromagnetic form factors is reviewed in Ref.~\cite{VP2015}. To a large extent, this success is due to the dominance of the one-photon exchange mechanism of electron scattering as proposed in the original theory~\cite{MR1950}. The most decisive studies of the partonic structure of the nucleon could be performed when the dominant part of the wave function is a 3-quark Fock state. This requires large momentum transfer, $Q^2 > 1$ GeV$^2$, when the contribution of the pion cloud is suppressed. By the early 90s, the data sets at large $Q^2$~for the proton and the neutron had been found to be in agreement with the Dipole fit, $ {G_{_{Dipole}}}$$\,=\, (1+Q^2/0.71 [GeV^2])^{-2}$, see Ref.~\cite{PB1995}. The SLAC experimental data~\cite{RA1986} on the proton Dirac form factor $F_1^p$ at $Q^2$~above 10~GeV$^2$~ have been found to be in fair agreement with the scaling prediction~\cite{LB1979} based on perturbative QCD: \mbox{${ {F_1^{\it p}}}$}$\, \propto Q^{-4}$, where $Q^2$~is the negative four-momentum transfer squared. A new development began with a precision experiment~\cite{CP1989} which productively realized the double polarization method suggested in Refs.~\cite{AA1958}. The double polarization method has large sensitivity to the typically small electric form factor due to the interference nature of the double polarization asymmetry. It is also less sensitive to the two-photon exchange contributions, which complicate the Rosenbluth method. The experimental results from Jefferson Laboratory~\cite{MJ2000} are shown in Fig.~\ref{fig:GEp} (left).
The ratio of the proton Pauli form factor \mbox{${ {F_2^{\it p}}}$}\ and the Dirac form factor \mbox{${ {F_1^{\it p}}}$}\ has been found to be in disagreement with the scaling law $F_2^p/F_1^p \propto 1/Q^2$ (which requires the \GEp~to be proportional to $ {G_{_{_M}}^{\it p}}$~for large momentum transfer, $\tau \gg 1$) suggested in Ref.~\cite{LB1979}. \begin{figure}[ht] \unitlength 1cm \begin{minipage}[th]{7cm} \centering \includegraphics[trim = 5mm 10mm 20mm 5mm, width=1.0\textwidth] {gepgmp.pdf} \end{minipage} \hfill \begin{minipage}[th]{7cm} \includegraphics[trim = 20mm 15mm 35mm 35mm,width=1.0\textwidth] {f1d_f1u.pdf} \end{minipage} \hskip .5 in \hfill \caption{ Left: Existing data and projected data accuracy for the ratio of the $\mu _p$\GEp/$ {G_{_{_M}}^{\it p}}$. Right: Ratio of the $d$- and $u$-quark contributions to the proton form factor \mbox{${ {F_1^{\it p}}}$}~from measurement of the $ {G_{_{_M}}^{\it n}}$/$ {G_{_{_M}}^{\it p}}$ (see more in the text). } \label{fig:GEp} \end{figure} The data for $\mu_p$\GEp/$ {G_{_{_M}}^{\it p}}$~revealed an unexpected reduction with $Q^2$, which also means that \mbox{${ {F_1^{\it p}}}$}~and $Q^2$$\times$\mbox{${ {F_2^{\it p}}}$}~for the proton have different $Q^2$ dependencies. The origin of the scaling prediction violation has been attributed to an effect of the quark orbital momentum (so-called ``logarithmic scaling'') which provides a very efficient fit of the data for a proton in a wide range of the momentum transfer above 1~GeV$^2$~\cite{BJY2003}. The measurement of the proton to the neutron cross section ratio in the quasi-elastic nucleon knockout from the deuteron was used in JLab's precision experiment of the neutron magnetic form factor for $Q^2$~up to 4~GeV$^2$~\cite{JL2009}. With the latest JLab experiment on the neutron electric form factor~\cite{SR2010}, the data on all four nucleon form factors have become available in the $Q^2$~region of 3-quark dominance.
The first analysis of these new data for the flavor contributions to the nucleon form factors was reported in Ref.~\cite{CJRW2011}. The $Q^2F_2/F_1$ for individual flavors as a function of $Q^2$~shown in Fig.~\ref{fig:F12ud} (left) does not have any sign of the saturation expected in the case of approaching the pQCD regime. The analysis shows a large unexpected reduction in the relative size of the $d$-quark contribution to the \mbox{${ {F_2^{\it p}}}$}~form factor, which drops by a factor of 3 when $Q^2$~increases from 1 to 3.4~GeV$^2$. A similar result was found in an advanced analysis~\cite{MD2013} with the GPD-based fits of the form factors, see Fig.~\ref{fig:F12ud} (right). \begin{figure}[ht] \unitlength 1cm \begin{minipage}[th]{7cm} \includegraphics[trim = 5mm 10mm 20mm 5mm, width=0.70\textwidth] {q2f2f1.pdf} \end{minipage} \hskip .5 in \begin{minipage}[th]{7cm} \includegraphics[trim = 20mm 40mm 40mm 27mm, width=0.72\textwidth] {F1+2_u+d.pdf} \end{minipage} \vskip 0.1 in \caption{ The flavor decomposition of proton form factors per Refs.~\cite{CJRW2011,MD2013}.} \label{fig:F12ud} \end{figure} The observed reduction of the $d$-quark contribution to \mbox{${ {F_2^{\it p}}}$}~naturally explains the JLab result for the momentum dependence of \mbox{${ {F_2^{\it p}}}$}/\mbox{${ {F_1^{\it p}}}$}~without the effect of the quark orbital momentum (at least at $Q^2$~below 3.4~GeV$^2$). The origin of the observed \mbox{${ {F_1^{\it d}}}$}/\mbox{${ {F_1^{\it u}}}$}~reduction with the increase of $Q^2$~is a subject of significant interest as it could be the most direct evidence of the di-quark correlations in a nucleon as proposed in Ref.~\cite{CR2007}. The flavor decomposition leads to two simple conclusions: \begin{itemize} \item The contributions of the $u$-quarks and $d$-quark to the magnetic and electric form factors of the proton all have different $Q^2$~dependencies.
\item The contribution of the $d$-quark to the \mbox{${ {F_1^{\it p}}}$}~form factor at $Q^2$=3.4~GeV$^2$~ is three times smaller than the contribution of the $u$-quarks (corrected for the number of quarks and their charge). \end{itemize} The second observation suggests that the probability of proton survival after the absorption of a massive virtual photon is much higher when the photon interacts with a $u$-quark, which is doubly represented in the proton. This may be interpreted as an indication of an important role of the $u$-$u$ correlation. It is well known that the correlation usually enhances the high momentum component and the interaction cross section. The relatively weak $d$-quark contribution to the \mbox{${ {F_1^{\it p}}}$}~indicates a suppression of the $u$-$d$ correlation or a mutual cancellation of different types of $u$-$d$ correlations. \section{\large{The SBS nucleon form factor program}} A set of experiments was proposed with the Super BigBite Spectrometer whose large angular acceptance allows us to very significantly advance the measurements of the \GEp, $ {G_{_{_M}}^{\it n}}$, and \GEn~(see Table~\ref{tab:T1}). \begin{table}[htb] {\begin{tabular}{@{}cccc@{}} \toprule Form factor & Reference & $Q^2$~range, GeV$^2$ & $\Delta G/$$ {G_{_{Dipole}}}$ (stat/syst) at max $Q^2$ \\ \colrule \\ \GEp & \cite{GEpE} & 5-12 & 0.08 / 0.02 \\ \GEn & \cite{GEnE} & 1.5-10.2 & 0.23 / 0.07 \\ $ {G_{_{_M}}^{\it n}}$ & \cite{GMnE} & 3.5-13.5 & 0.06 / 0.03 \\ \\ \botrule \end{tabular} \caption{Upcoming measurements of the nucleon form factors in JLab Hall A with SBS (approved experiments). Projected range of $Q^2$~and accuracy relative to the Dipole form factor at max. value of $Q^2$.} \label{tab:T1}} \end{table} The first measurement for the neutron magnetic form factor ($ {G_{_{_M}}^{\it n}}$) is under preparation for data taking in 2021.
Fig.~\ref{fig:GEp} (right) shows the projected accuracy for the ratio \mbox{${ {F_1^{\it u}}}$}/\mbox{${ {F_1^{\it d}}}$}~obtained from $ {G_{_{_M}}^{\it n}}$/$ {G_{_{_M}}^{\it p}}$~with systematic uncertainties dominated by the uncertainty of the \GEp/$ {G_{_{_M}}^{\it p}}$~ratio. \section{\large{New experiment for measurement of the strangeness form factor at high $Q^2$}} In this section we present the physics motivation and specific ideas for a new experiment for the measurement of the \mbox{${ {F_{\it s}^{\it p}}}$}~by using SBS equipment. In the original flavor decomposition study~\cite{CJRW2011} we decided to omit the heavier quark contribution, motivated by the fact that all experimental data on the strangeness form factor of a proton \mbox{${ {F_{\it s}^{\it p}}}$}~are consistent with zero~\cite{DB2001,DA2012} (in agreement with the lattice calculations). However, all known experiments were performed for $Q^2$~below 1~GeV$^2$. At the same time, the relative role of the $s \bar s$ in the elastic electron-nucleon scattering could be higher at the momentum transfer of 3~GeV$^2$~\cite{TH2015}. The recent analysis of the possible value for the strange form factor performed by T.~Hobbs, M.~Alberg, and J.~Miller suggests that \mbox{${ {F_{\it s}^{\it p}}}$}~could be as high as $ {G_{_{Dipole}}}$~(which is 0.03 at $Q^2$=3.4~GeV$^2$) or even larger, see Fig.~\ref{fig:TH} from Refs.~\cite{TH2015,TH2019}. \begin{figure}[ht] \includegraphics[width=0.5\textwidth] {THGM_plot.pdf} \caption{ The strange form factor vs.
momentum transfer data and projections per Refs.~\cite{TH2015,TH2019}.} \label{fig:TH} \end{figure} In the one-photon exchange approximation, the amplitude for electron-nucleon elastic scattering can be written as $ {M^{\rm^{nuc}} = -(4\pi\alpha/Q^2)l^{\mu}\,J_{\mu}^{\rm^{nuc}}}$, where $\alpha$ is the fine structure constant, $l^{\mu}=\overline{e} \gamma^{\mu} e$ is the leptonic vector current, and \begin{equation} J_{\mu}^{\rm^{nuc}} = \langle p (n) |\left({\textstyle{2\over3}}\overline{u}\gamma_{\mu}u + {\textstyle{{-1\,}\over{\ 3}}}\overline{d}\gamma_{\mu}d + {\textstyle{{-1\,}\over{\ 3}}}\overline{s}\gamma_{\mu}s\right) | p (n) \rangle \label{eq:jmu} \end{equation} is the hadronic matrix element of the electromagnetic current operators for the proton (neutron). The corresponding nucleon form factors for the virtual photon have three contributions: \begin{equation} G^{\gamma}_{\rm^{E,M}} = {\textstyle{2\over3}}G^u_{_{E,M}} + {\textstyle{{-1\,}\over{\ 3}}} G^d_{_{E,M}} + {\textstyle{{-1\,}\over{\ 3}}} G^s_{_{E,M}}. \label{eq:GEM} \end{equation} The Z boson exchange between an electron and a nucleon leads to a similar structure of the current. The contribution of $G^{Z}_{_{E,M}}$ could be observed thanks to the significant interference term in the matrix element of the scattering. The measurement of the asymmetry of the longitudinally polarized electrons scattering from a proton (left vs. right helicity) allows us to find $G^s_{_{E,M}}$, see Refs.~\cite{DB2001,BB2005}, and to complete the flavor decomposition of the nucleon form factors (assuming isospin symmetry). It is easy to see that the uncertainty in $G^s_{_{E,M}}$ (and $F_{1,2}$) contributes linearly to the uncertainty of the $u$- and $d$-quark contributions. At $Q^2$=3.4 GeV$^2$~for $\Delta G^s =$$ {G_{_{Dipole}}}$~the corresponding uncertainty is $\Delta (Q^4$\mbox{${ {F_2^{\it d}}}$}$)\sim 0.35$, which is much larger than the contribution from the uncertainty of the \GEn~\cite{SR2010}, see Fig.~\ref{fig:F12ud}.
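The algebra behind the flavor separation can be sketched compactly. Assuming charge (isospin) symmetry and neglecting the strange contribution, the proton and neutron form factors are $F^{p}=\frac{2}{3}F^{u}-\frac{1}{3}F^{d}$ and $F^{n}=\frac{2}{3}F^{d}-\frac{1}{3}F^{u}$, which invert to $F^{u}=2F^{p}+F^{n}$ and $F^{d}=F^{p}+2F^{n}$. The following Python fragment is our illustration of this inversion only; the numerical inputs are hypothetical, not measured form factors.

```python
# Sketch of the flavor-separation algebra (our illustration; the input values
# below are hypothetical). With G^s = 0 and charge symmetry,
#   F^p = (2/3)F^u - (1/3)F^d,   F^n = (2/3)F^d - (1/3)F^u,
# which inverts to F^u = 2 F^p + F^n and F^d = F^p + 2 F^n.

def flavor_decompose(Fp, Fn):
    """Return the u- and d-quark flavor form factors."""
    return 2.0 * Fp + Fn, Fp + 2.0 * Fn

def rebuild_nucleons(Fu, Fd):
    """Rebuild (F^p, F^n) from the flavor form factors as a consistency check."""
    return (2.0 * Fu - Fd) / 3.0, (2.0 * Fd - Fu) / 3.0

Fp, Fn = 0.5, -0.1                 # hypothetical values at some fixed Q^2
Fu, Fd = flavor_decompose(Fp, Fn)  # approximately 0.9 and 0.3
fp2, fn2 = rebuild_nucleons(Fu, Fd)
assert abs(fp2 - Fp) < 1e-12 and abs(fn2 - Fn) < 1e-12
```

The same inversion makes explicit why an uncertainty in $G^{s}$ propagates linearly into the extracted $u$- and $d$-quark contributions.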
The interest in high $Q^2$~measurement of \mbox{${ {F_{\it s}^{\it p}}}$}~is also motivated by the expectation that \mbox{${ {F_{\it s}^{\it p}}}$}~has a maximum at a momentum transfer much larger than the location of the \GEn~maximum due to the heavier mass of the $s$-quark. Such an expectation is supported by the small radius of a $\phi$ meson which could be obtained from the form factor in the $\phi$ decay to $\pi^0 e^+e^-$~\cite{AA2016}. There are two experimental difficulties in doing the \mbox{${ {F_{\it s}^{\it p}}}$}~measurement at large $Q^2$: the reduced counting rate and large background from inelastic electron-proton scattering. The reduction of the counting rate, which is due to the reduction of the $\sigma_{_{Mott}}G^2_{_{Dipole}}$, is partly compensated for by a linear increase of the asymmetry for high $Q^2$. For suppression of the inelastic events we proposed to use the tight timing and angular correlations between the scattered electron and the recoil proton (as well as the energy deposited in the detectors), as it was considered in Ref.~\cite{BW2005}. The solid angle of the apparatus should cover a suitable range of the momentum transfer $\Delta Q^2/Q^2 \sim 0.1$ for which the event rate variation over the acceptance is limited (we selected a factor of 4). The equipment needed for such an experiment could be obtained from the SBS where a highly segmented hadron calorimeter and electromagnetic calorimeters are under preparation for the GEp experiment~\cite{GEpE}. Figure~\ref{fig:COP2} shows the proposed configuration of the detectors. \begin{figure}[ht] \unitlength 1cm \begin{minipage}[th]{8cm} \includegraphics[width=1.0\textwidth] {Side_view.png} \end{minipage} \begin{minipage}[th]{8cm} \includegraphics[width=0.9\textwidth] {Front_view.png} \end{minipage} \caption{ Left: Side view of the apparatus. The electron beam goes from right to left.
The proton detector is shown in green and the electron detector in purple; the liquid hydrogen target is shown in blue. Right: Front view of the apparatus. The blocks in orange get signals from the electron and the proton whose directions are shown in red.} \label{fig:COP2} \end{figure} The proposed detector configuration has an electron arm with a solid angle of 0.1 sr at a scattering angle of 18$\pm$1.5 degrees. Within 30 days of data taking with a 6.6 GeV beam, the PV asymmetry will be measured to 3\% relative accuracy, which corresponds to an uncertainty of \mbox{${ {F_{\it s}^{\it p}}}$}~of 0.002. Such a measurement will provide the first experimental limit on \mbox{${ {F_{\it s}^{\it p}}}$}~at a large momentum transfer of 3~GeV$^2$~(or discover its non-zero value) and reduce the current uncertainty from the strangeness contribution in the flavor separated proton form factors such as \mbox{${ {F_2^{\it d}}}$}~by a factor of six. \section*{Acknowledgments} This work was supported by the Department of Energy (DOE) contract number DE-AC05-06OR23177, under which the Jefferson Science Associates operates the Thomas Jefferson National Accelerator Facility.
\section{Introduction} There have been various advancements in the way in which technology is utilized in sports and games. The aid provided to players through various technological means has improved at a rapid pace over the past few years. Match analysis results in a high volume of statistical data which in turn is used by players and coaching teams for match preparation. In this paper, we propose to use a game-theoretic approach to develop a tool that can potentially help players in \textit{any two-player sport} to thoroughly investigate the opponent and prepare a strategic plan to maximize the chance of winning. For the purpose of proposing a viable and feasible solution that can be directly employed in the real world, we have chosen \textit{badminton} as the primary sport for experiments, analysis and discussion, owing to our deep understanding of the sport, our shared passion for it, and its simple rules. Badminton is a racket sport in which two players alternately hit a shuttlecock until a point is scored by one of them. In this work, we propose two models, both of which use the history of match data of the opponent and the player under consideration as the input to provide suggestions and recommendations for a player. We also discuss the necessary steps to develop a complete, end-to-end solution integrating different types of data and technology in order to create a dedicated software application/program that can be customized for each sport. Our first model, called \textit{the recommendation system}, takes into consideration the various shots that the player and the opponent have played throughout their careers. The main purpose of this system is to help the players gain an understanding of the different shots that the opponent plays and to gain knowledge of the best possible shots that he/she can play such that the chances of gaining a point are maximized.
This model could be used by the players and coaches before going into a match and is not intended to be used during match play. Our second model, which we call the \textit{Simulation model}, is an extension of the recommendation system, but it takes into consideration the history of the match when it is in use. We use a reward system to determine the best shots for the players. This model is intended to provide match practice to the players against opponents so that they can simulate match situations and gain experience before heading into the actual match. Also, we intend to utilize some of the recent revisions in Badminton laws, one of which allows for coach intervention during the match. The proposed tool can prove to be very handy here, as the coach can influence and guide the player in the middle of the match through the use of this tool by quickly analysing the match up to that particular point in the game-play. Moreover, our method could prove valuable in saving players and coaching staff the huge amount of time expended in going through hours of match videos of the player and opponents, and can critically help them quantitatively analyse performance and recommend the necessary preparation from the players' historical data. \section{Relevant works} Game theory has been used to study various strategic sports in the past, and most of the prevalent work done so far has been towards studying specific parts of a sport and not the entire match or game. In soccer, the penalty kick has been modelled as a strategic game with imperfect information because of uncertainty about the kicker's type \cite{penalty}. The Bayesian equilibrium concept was used and it was found that the kickers adopt a mixed strategy equilibrium depending on their strong foot. In cricket, a normal form game was modelled between the batsman and the bowler \cite{Cricket}.
The strategies of the batsmen depended upon the type of shot played and the bowler's strategies were the different types of deliveries that he could bowl. The utility values were derived based on the probability of a player adopting a particular strategy. The study revealed that the probability distribution followed by the players in adopting different strategies in real-world cricket is very close to the Nash equilibrium conditions. AlphaGo, an artificial intelligence program developed by DeepMind to play the game of Go, was able to defeat the best player in the world by 5 games to 0. This intelligent program uses a combination of Monte Carlo simulation with value and policy networks \cite{Alpha} with the concept of Markov Decision Processes (MDP) \cite{Burnetas:1997:OAP:265654.265664}\cite{Alpha}\cite{vanOtterlo2012} as its base. The best players in any sport around the world learn at a rapid pace as they progress in their professional careers. However, there are a few areas where this natural learning process does not prove to be very effective. In such cases, advanced mathematical and computer modelling can come to their aid and convert this slow, time-consuming process into a rapid and results-oriented one. In our literature survey, we came across many instances where game theory was used to solve problems pertaining to sports. One such case traces back to the 2012 Olympics encounter between Yu Yang and Wang Xiaoli of China and the South Korean pairs Jung Kyun-eun/Kim Ha-na and Ha Jung-eun/Kim Min-jung (doubles). The Chinese team tried to lose on purpose in this group stage encounter to avoid playing against their teammates Tian Qing and Zhao Yunle so that China would be assured of both the gold and silver medals. \cite{HONGZHI20131222} presents a detailed analysis of badminton match throwing using this example through game theory.
The study reveals that the reason for this kind of match throwing lies in the loopholes of the format that the competition adopts, and that any rational player would adopt this strategy in the interest of the team. Besides this, there have been various other similar cases that drove us towards using game theory for our problem. In \cite{Pollard}, game theory is used to determine the optimal time during the match to play a risky serve and how the surprise factor plays a part, also studying how it affects the outcome for the player under consideration. Apart from that, it is found that it is beneficial for the player to play a risky serve during the critical points of the match rather than the less important ones. Highly motivated by the past work along these lines, through this project, we intend to contribute towards the game of badminton and develop a highly effective tool for player assistance with the aid of game theory concepts. In summary, our contributions are: \begin{enumerate} \item A recommendation tool to suggest the best shots for each of the possible shots of the opponent using the concept of \textit{best response} from game theory. \item A simulation model that considers the history up to two shots while determining the favorable shot to be played at any particular stage of the match; the approach is modelled as \textit{an approximated finite non-zero sum extensive form game}. \end{enumerate} \section{Data Collection} Comprehensive data is essential to effectively model the capability and choice of players, which is vital in sports. Badminton is not like board games where predefined moves or strategies can consistently help you win. Humans tend to think differently when it comes to physical capabilities, especially in sports. We can't expect a player to play the same shot with the same accuracy every time; rather, we understand that a player's capability, stability and mentality change during the course of a game.
But to best model these factors, data plays a key role. Our data was manually collected by going through several full match videos of the players. We considered matches between two of the best and most consistently performing badminton players in the world -- Lin Dan from China and Lee Chong Wei from Malaysia -- so that the inefficiency of the players would not tamper with the final results, and also to incorporate the variation of a left-handed and a right-handed player in our data. The matches we recorded span a period of 8 years (2011--2019) so that we cover the changing game plan and shot selection over a considerable period. The data was manually collected and annotated on a shot-by-shot basis for a comprehensive modelling, with their outcomes in terms of points and sets won. This format is essential to calculate the necessary parameters for the proposed models. The types of shots which we have considered while collecting our data contribute to efficient results. Figure \ref{Badminton shots} presents the scope of the shots for our paper. We have tried to incorporate the position of the player on the court in terms of the type of shots. Also, there are certain exceptions to the types of shots that can be played against a particular shot of the opponent. None of the shots can be responded to with a service, which is quite obvious. Also, a smash and a block cannot be returned by a smash and a block respectively. Drops are usually quite difficult to smash. It is important to note that important factors like agility, fatigue and the mental state of the player during the course of the game are not taken into account while modelling due to the complexity involved.
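The response constraints above can be encoded as a simple lookup that any of the proposed models may query. The following Python sketch is our illustration only; the shot names are a hypothetical subset of those in Figure \ref{Badminton shots}, and we treat "drops are difficult to smash" as a hard constraint for simplicity.

```python
# Sketch (our illustration): encoding the response exceptions described above.
# Shot names are a hypothetical subset of the shots under consideration.
ALL_SHOTS = {"serve", "smash", "block", "drop", "clear", "lift", "net", "drive"}

def legal_responses(opponent_shot):
    """Return the set of shots that may answer `opponent_shot`."""
    responses = ALL_SHOTS - {"serve"}   # a service never answers a shot
    if opponent_shot == "smash":
        responses -= {"smash"}          # a smash cannot be returned by a smash
    elif opponent_shot == "block":
        responses -= {"block"}          # a block cannot be returned by a block
    elif opponent_shot == "drop":
        responses -= {"smash"}          # drops treated as un-smashable here
    return responses

assert "serve" not in legal_responses("clear")
assert "smash" not in legal_responses("drop")
```

Filtering candidate strategies through such a table before computing utilities keeps the models from ever recommending an illegal or implausible reply.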
\begin{figure*} \centering \includegraphics[width=0.75\textwidth]{images/shot_types.PNG} \caption{Various badminton shots under consideration} \label{Badminton shots} \end{figure*} \section{Methodology} The detailed workings of each of the proposed methods are discussed in this section along with the mathematical modelling. For the purpose of modelling and easy representation, the player under consideration who uses our proposed approaches is referred to as $X_p$ and the player's opponent as $X_o$. \subsection{Recommendation Tool} The recommendation tool considers $X_p$'s choice and capabilities based on $X_p$'s history with a particular opponent $X_o$ to offer the best suggestions for each shot of $X_o$. The concept of best response from game theory is adopted, which will help $X_p$ to be match ready with the best and safe shots to play given any shot from $X_o$, to maximize $X_p$'s chances of winning each point and in turn winning the match. We model the recommendation tool as a \textit{normal form game}. The reason is that we are only concerned with $X_p$; that is, we only care about $X_p$'s strategy and how to maximize $X_p$'s chances of winning the match, and not $X_o$'s. So it is enough to consider the game on a shot-by-shot basis rather than as a sequential game. At any stage of the game, for a given shot $s_{-i}$ of $X_o$, $X_p$ will have a set of probable shots (strategies) $S$ to play, and the recommendation tool outputs the best shot $s_{i}^{*}$ from the available shots $S$, which is the best response to $s_{-i}$.
The game can be modelled as a normal form game, represented by the tuple in \ref{eqn:nf_game}, where $I = \{X_p, X_o\}$ and $S$ is the set of all badminton shots: \begin{equation} G = \langle I, (s_i)_{i \in I}, (u_i)_{i \in I}\rangle \quad \forall s_i, s_{-i} \in S \label{eqn:nf_game} \end{equation} \begin{equation} u_{i}\left(s_{i} ,s_{-i}\right) = P\left( s_{i}|s_{-i} \right)\cdot P_{success}\left( s_{i}|s_{-i} \right) \label{eqn:max_val} \end{equation} where $P( s_{i}|s_{-i} )$ is the probability of playing a shot $s_{i}$ for a given shot $s_{-i}$ of the opponent $X_o$, and $P_{success}( s_{i}|s_{-i} )$ is the success rate of a shot $s_{i}$ for $s_{-i}$. These probabilities are calculated taking into account the data of the previous matches between the same two players. We consider the number of times a particular shot has been played by $X_p$ to calculate the probability of playing that shot, including the instances where the shot yielded a point, resulted in the loss of a point, or led to the continuation of the rally. Within these instances, we consider the number of times that particular shot has yielded a point for $X_p$ while calculating the probability of success for that shot. This is done specifically for every shot played by $X_o$. For calculating the best response for a particular shot of $X_o$, we find the shot $s_{i}$ for $X_p$ which yields the maximum value of the utility according to equation \ref{eqn:max_val}. \begin{algorithm} \SetAlgoLined \KwResult{Shot recommendation $s_i$} \While{Point not gained}{ 1. Select $s_i$ for $X_p$ for given $s_{-i}$ from $X_o$ \\ 2. Calculate utility $u_i (s_i | s_{-i}), \forall s_i \in S$ according to equation \ref{eqn:max_val} \\ 3. Pick $s_i$ with maximum utility $u_{max}$ \\ 4. Play $s_{i_{u_{max}}}$ } \caption{Algorithm for recommendation system} \label{alg:rec_sys} \end{algorithm} This recommendation can be a return to a type of service or any other shot during the rally.
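The utility computation of equation \ref{eqn:max_val} and the argmax step of Algorithm \ref{alg:rec_sys} can be sketched in code. The following Python fragment is our illustration only; the rally encoding (opponent shot, reply, point won) is a hypothetical simplification of the annotated match data, not the paper's actual format.

```python
from collections import Counter

# Sketch of the recommendation utility: u(s_i | s_-i) =
# P(s_i | s_-i) * P_success(s_i | s_-i), with both probabilities estimated
# from counts over annotated rallies (our hypothetical encoding).
def recommend(history):
    played = Counter()   # (s_-i, s_i) -> times this reply was played
    won = Counter()      # (s_-i, s_i) -> times it won the point outright
    seen = Counter()     # s_-i -> total replies observed to that shot
    for opp_shot, reply, won_point in history:
        played[(opp_shot, reply)] += 1
        seen[opp_shot] += 1
        if won_point:
            won[(opp_shot, reply)] += 1

    def utility(opp_shot, reply):
        n = played[(opp_shot, reply)]
        if n == 0:
            return 0.0
        p_play = n / seen[opp_shot]              # P(s_i | s_-i)
        p_success = won[(opp_shot, reply)] / n   # P_success(s_i | s_-i)
        return p_play * p_success

    # Best response (argmax of utility) for every observed opponent shot.
    return {opp: max({r for (o, r) in played if o == opp},
                     key=lambda r: utility(opp, r))
            for opp in seen}

history = [("smash", "block", True), ("smash", "block", True),
           ("smash", "lift", False), ("clear", "drop", True)]
print(recommend(history))  # → {'smash': 'block', 'clear': 'drop'}
```

A real deployment would estimate the two probabilities over the full shot taxonomy and restrict the argmax to the legal responses for each opponent shot.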
It helps $X_p$ be prepared to face $X_o$ with confidence and to rule out the few shots which have resulted in an immediate point loss in $X_p$'s history. We can extend this tool further by considering $X_o$ as the primary player and performing the same computations on the same data set to find $X_o$'s best possible shots against every shot of $X_p$. This returns the most probable shots of $X_o$ for each shot of $X_p$, and thus helps predict the return shot of $X_o$ for a particular shot $s_{i}$ of $X_p$. Accurate prediction of the type of serve or return of $X_o$ for a particular shot $s_{i}$ of $X_p$ can prove very important in winning a point in crucial situations such as deuces or the first point of the set. This approach is the base for our other two methods and proves to be a vital one in real-world scenarios. We observe many cases, both from the recommendations and the datasets, where a player rightly predicts a return of $X_o$, surprises with a trick shot and wins a point. \subsection{Simulator} The simulator is an extension of the recommendation tool, but modelled as an \textit{extensive form game} instead of a normal form game. The most important value addition of this method is that the history of shots between the players $X_p, X_o$ is taken into consideration. In badminton, it is not always the last shot that results in winning or losing a point; the earlier shots played during the rally can also be responsible for a particular outcome. We observed from the collected data that, for best results, this dependency on history needs to be considered for at most two earlier shots. We introduce a reward system for incorporating history. We consider four types of rewards as follows: \begin{enumerate} \item A high positive reward $R_{hp}$ when a shot of $X_p$ results in a direct point.
For instance, a smash resulting in a direct point; $R_{hp}$ = +5 \item A medium positive reward $R_{mp}$ when a shot of $X_p$ induces a poor return from $X_o$, thereby yielding a point. For instance, a good lift to the back forcing $X_o$ into a poor clearance, allowing $X_p$ to kill immediately and gain a point; $R_{mp}$ = +2 \item A medium negative reward $R_{mn}$ for a poor shot of $X_p$ which $X_o$ takes advantage of, making $X_p$ lose a point; $R_{mn}$ = -2 \item A high negative reward $R_{hn}$ when a shot results in a direct point loss; $R_{hn}$ = -5 \end{enumerate} The rewards thus take into account a history of up to two shots, which helps the algorithm suggest the best possible outcome for $X_p$. The total reward for a shot of player $X_p$ given an opponent's shot is: \begin{equation} \begin{split} R_{T} (s_{i}, s_{-i}) = (R_{hp} (s_{i}, s_{-i})) + (R_{mp}(s_{i}, s_{-i}, s_{i}:t-1)) \\ + (R_{mn}(s_{i}, s_{-i}, s_{i}:t-1)) + (R_{hn}(s_{i}, s_{-i})) \end{split} \end{equation} The purpose of the simulator is to help $X_p$ by predicting the course of a rally up to a predefined number of steps through the match. It lets $X_p$ emulate the sequence of rallies in different ways while practicing with another person before the match. Though the game of badminton is a perfect-information zero-sum game, the simulator is modelled as \textit{a perfect-information infinite non-zero-sum extensive form game}, as the end utilities according to equation \ref{eqn:max_val} will not be the same for both players. It is not possible to solve an infinite game, hence we modify the above-defined game into a finite extensive form game that can be solved up to a predefined number of steps. We call it \textbf{\textit{the approximated sequential representation}} of the original game. Here, we restrict the number of sequences to a predefined number depending on the type of sport the method is applied to.
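As a concrete sketch of the reward bookkeeping, one may tag each of $X_p$'s shots in the two-shot history window with one of the four reward types (+5, +2, -2, -5) and sum the corresponding values. The outcome labels below are hypothetical annotations, not the format of our dataset:

```python
# Reward constants for the four reward types described above:
REWARD = {
    "direct_point": 5,      # shot wins the point outright
    "induced_point": 2,     # shot forces a poor return, point follows
    "exploited_error": -2,  # poor shot that the opponent punishes
    "direct_loss": -5,      # shot loses the point outright
}

def total_reward(outcomes):
    """Sum the rewards of X_p's annotated shot outcomes within the
    two-shot history window considered by the simulator; outcomes
    without a reward tag (e.g. a neutral rally shot) contribute 0."""
    return sum(REWARD.get(o, 0) for o in outcomes[-2:])
```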
In badminton, it is enough for a player to think about his next two moves, with one move of the opponent in between: within that window he can either rectify a mistake, or the rally ends in a point gain or loss. The scenario is illustrated in Figure~\ref{fig:extensive}. \begin{figure*} \centering \includegraphics[width=7in]{images/badminton_shots_illustration.png} \caption{Extensive form representation of the game} \label{fig:extensive} \end{figure*} We introduce a reward system, inspired by reinforcement learning \cite{article} \cite{KLMSurvey}, that helps us calculate the favorable outcome for the player while taking into consideration the opponent's moves. The model of the game is represented in equation \ref{eqn:sim_eqn}, where $I = \{X_p, X_o\}$ is the set of agents (players), $S$ is the set of possible badminton shots, $H$ is the set of choice nodes, $Z$ is the set of terminal nodes, and $\alpha$, $\beta$ and $\rho$ are the agent, action and successor functions respectively. This model is treated as a tree $T(n)$, where $n$ is the number of levels to which $T$ is expanded, denoting the history of past events (3 in our case) for simulation. The game for $n$ levels is solved using \textit{backward induction} with data containing the updated reward values of shots depending on $(\alpha, \beta, \rho)$. \begin{equation} G = \langle I, S, H, Z, \alpha, \beta_{i}, \rho, u_i\rangle \label{eqn:sim_eqn} \end{equation} \begin{equation} u_{i}(s_{i} ,s_{-i} ) = P( s_{i}|s_{-i} )* P_{success}( s_{i}|s_{-i} )* R_{T} ( s_{i},s_{-i} ) \label{eq : er3} \end{equation} where $u_{i}$ is the utility of agent $i$, $P( s_{i}|s_{-i} )$ is the probability of $X_p$ playing a shot $s_{i}$ for a given shot $s_{-i}$ of $X_o$, $P_{success}( s_{i}|s_{-i} )$ is the success rate of a shot $s_{i}$ of $X_p$ for a given shot $s_{-i}$ of $X_o$, and $R_{T}( s_{i},s_{-i})$ is the reward for playing $s_{i}$ against $s_{-i}$.
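The backward-induction step on the depth-limited tree can be sketched as follows. The tree encoding is hypothetical, and for brevity the two players' utilities are simplified to a single scalar that $X_p$ maximizes and $X_o$ minimizes (the non-zero-sum case carries one utility per agent instead):

```python
def backward_induction(node, maximizing=True):
    """Solve a depth-limited game tree by backward induction.
    A leaf is {"utility": u}; an internal node is
    {"children": {shot: subtree}}. Returns (value, line of play)."""
    if "children" not in node:          # leaf: utility per equation (5)
        return node["utility"], []
    best_value, best_line = None, None
    for shot, child in node["children"].items():
        value, line = backward_induction(child, not maximizing)
        better = (best_value is None
                  or (maximizing and value > best_value)
                  or (not maximizing and value < best_value))
        if better:
            best_value, best_line = value, [shot] + line
    return best_value, best_line
```

At each rally step the simulator builds such a tree for the next few shots, solves it, plays the first shot of the returned line, and repeats.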
By approximating the infinite extensive game as a finite extensive form game, based on the rewards and utilities of the players, we can predict the progress of the game after each shot. The simulator then recommends the most favourable way the game could progress, with the shots $X_p$ has to play along with the ones $X_o$ is expected to play. Though this model is capable of producing very good results, it is often very hard to model the cognitive process of humans. In sports, humans tend to think differently: a move by a badminton player involves many factors like fatigue, ability, confidence, condition in the game, precision of the opponent's shot and even gut feeling. The algorithm for the simulation system is presented in Algorithm~\ref{alg:sim_sys}. \begin{algorithm} \SetAlgoLined \KwResult{Shot recommendation $s_i$} \While{Maximum number of predictions not reached}{ 1. Select $s_i$ for $X_p$ for given $s_{-i}$ from $X_o$ \\ 2. Expand simulation tree for next $n$ moves incorporating all possible shot combinations\\ 3. Calculate accumulated utilities for $X_p , X_o$ using equation \ref{eq : er3}\\ 4. Use backward induction to determine favourable shots and discard rest of the nodes\\ 5. Solve and reduce traversal path towards one optimal shot with maximum utility $s_{i_{u_{max}}}$\\ 6. Update values for all the shot types based on real time data\\ 7. Get $S_{-i}$ of $X_o$ for $s_{i_{u_{max}}}$ of $X_p$\\ } \caption{Algorithm for simulator system} \label{alg:sim_sys} \end{algorithm} \section{Results} In this section, we discuss the results of our models. As there is no established metric to verify the results of our proposed models, expert opinion or domain knowledge is the only way to check their correctness and accuracy. As the model operates on the history of two players from the data, the results are only relevant to those players and will differ for others.
Also, it is possible to create a generic model focusing on one player completely, given data of that player against different opponents. The accuracy of the model depends on the volume and consistency of the available data. \subsection{Results of the Recommendation System} The recommendation system suggests the best response to a shot of $X_o$ without considering the history of shots in the rally or the condition of the player in the game. The recommendations of the model for the player $X_p$ are shown in Table~\ref{table:1} and those for the opponent $X_o$ in Table~\ref{table:2}. It can be seen from the data that the model has successfully identified the best shots for the given shots, and that it has discarded the shots that are not playable against the opponent's shot. The number of suggestions can be increased; we have fixed it at 2 so that $X_p$ always has an alternative. Also, from the suggestions above, it can be seen that the shot \textit{forehand drop} is the most frequently repeated. This coincides with the fact that both players are successful and capable in playing the forehand drop shot without error, which was evident from the matches. Using this model, a player can be mentally prepared on what shot to play against a given shot of the opponent, which can either lead to a point or keep $X_p$ alive in the rally to avoid losing a point. As this is modelled directly from the capability and behavior of the players, this information will be of value to the players before the match.
\begin{table} \centering \resizebox{0.48\textwidth}{!}{ \begin{tabular}{|c|c|c|} \hline \textbf{Opponent's shot} & \textbf{suggestion 1} & \textbf{suggestion 2} \\ \hline backhand\_short\_serve & forehand\_drop & backhand\_lift \\ \hline backhand\_long\_serve & forehand\_drop & backhand\_drop \\ \hline forehand\_drop & forehand\_drop & backhand\_lift \\ \hline backhand\_drop & backhand\_drop & forehand\_lift \\ \hline forehand\_kill & forehand\_lift & backhand\_lift \\ \hline backhand\_kill & forehand\_long\_clear & forehand\_drop \\ \hline jump\_crosscourt\_smash & block & forehand\_lift \\ \hline normal\_smash & forehand\_drop & block \\ \hline jump\_smash & block & backhand\_lift \\ \hline forehand\_long\_clear & forehand\_drop & backhand\_lift \\ \hline backhand\_short\_clear & jump\_crosscourt\_smash & forehand\_long\_clear \\ \hline backhand\_long\_clear & forehand\_lift & normal\_crosscourt\_smash \\ \hline crosscourt\_clear & jump\_smash & forehand\_lift \\ \hline forehand\_drive & forehand\_drive & backhand\_lift \\ \hline backhand\_drive & forehand\_drop & forehand\_lift \\ \hline forehand\_lift & forehand\_drop & jump\_smash \\ \hline backhand\_lift & forehand\_drop & normal\_smash \\ \hline \end{tabular} } \caption{Recommendations for $X_p$} \label{table:1} \end{table} \begin{table} \centering \resizebox{0.48\textwidth}{!}{ \begin{tabular}{|c|c|c|} \hline \textbf{Opponent's shot} & \textbf{suggestion 1} & \textbf{suggestion 2} \\ \hline backhand\_short\_serve & forehand\_drop & forehand\_lift \\ \hline backhand\_long\_serve & normal\_smash & forehand\_drive \\ \hline forehand\_drop & forehand\_drop & backhand\_lift \\ \hline backhand\_drop & forehand\_drop & backhand\_drop \\ \hline forehand\_kill & backhand\_lift & forehand\_drop \\ \hline backhand\_kill & backhand\_lift & forehand\_lift \\ \hline normal\_crosscourt\_smash & backhand\_lift & block \\ \hline jump\_crosscourt\_smash & forehand\_drop & forehand\_lift \\ \hline normal\_smash & block & forehand\_drop \\ \hline
jump\_smash & backhand\_lift & block \\ \hline body\_smash & block & forehand\_lift \\ \hline forehand\_short\_clear & forehand\_drop & backhand\_drop \\ \hline forehand\_long\_clear & normal\_smash & forehand\_long\_clear \\ \hline backhand\_short\_clear & normal\_crosscourt\_smash & forehand\_drive \\ \hline backhand\_long\_clear & jump\_smash & backhand\_drop \\ \hline crosscourt\_clear & forehand\_drop & normal\_smash \\ \hline forehand\_drive & forehand\_drive & forehand\_long\_clear \\ \hline backhand\_drive & forehand\_drive & backhand\_drive \\ \hline forehand\_lift & forehand\_drop & forehand\_lift \\ \hline backhand\_lift & forehand\_drop & forehand\_drive \\ \hline \end{tabular} } \caption{Recommendations for $X_o$} \label{table:2} \end{table} \begin{figure} \centering \includegraphics[width=0.47\textwidth]{images/p12.png} \caption{Frequency of shots for players $P_{1}$ and $P_{2}$} \label{P12_shots} \end{figure} \subsection{Results of the simulator} The purpose of the simulator is to take the history of the ongoing match into consideration and to model the game strategy for $X_p$ based on the position in the match, identifying the feasible sequences of shots that have led to point gains while avoiding the poor shots. The simulator can be fed with seed shots, i.e.\ a few inputs at the start on how the game should proceed, and it predicts the next few shots. Since the simulator receives no information about a point gain or loss, it predicts the sequence of shots in the game indefinitely. This is more realistic than following only the single best shot at each step, since in reality a badminton player should think 2--3 steps ahead to realize the after-effects of playing a shot, as a poor shot can always lead to a point loss during the subsequent shots.
Hence, at any point of time, the simulator builds a tree for the next three shots (2 for the player and 1 for the opponent), does backward induction on the utility values according to equation \ref{eq : er3}, arrives at the most favorable shot for $X_p$, discards the rest of the tree, and the process is repeated. To check the working of the simulator, we seeded it with a few shots from a match whose data was not used for modelling. The actual sequence of shots from the match is shown in figure \ref{fig:matchdata}, and the sequence of predicted shots from the simulator is given in Table \ref{table:3}. \begin{figure}[h] \centering \includegraphics[width=0.48\textwidth]{images/resultcapture.PNG} \caption{Data from an actual match between Lin Dan and Lee Chong Wei} \label{fig:matchdata} \end{figure} \begin{table}[h] \centering \resizebox{0.48\textwidth}{!}{ \begin{tabular}{|p{1cm}|c|c|} \hline Shot number & $X_p$'s (Lin Dan) shot & $X_o$'s (Lee Chong Wei) shot \\ \hline 1 & backhand\_short\_serve & forehand\_drop \\ \hline 2 & forehand\_lift & forehand\_short\_clear \\ \hline 3 & forehand\_drop & backhand\_drop \\ \hline 4 & forehand\_drive & forehand\_drop \\ \hline 5 & forehand\_drop & forehand\_drive \\ \hline 6 & backhand\_drop & forehand\_drop \\ \hline 7 & forehand\_drop & forehand\_long\_clear \\ \hline 8 & jump\_smash & forehand\_drop \\ \hline 9 & forehand\_drop & backhand\_short\_clear \\ \hline 10 & jump\_crosscourt\_smash & forehand\_drop \\ \hline \end{tabular} } \caption{Output from the Simulator} \label{table:3} \end{table} It can be seen that the results of the simulator are close to the actual match: they mostly differ only in the sub-category of shots, while the actual types of shots are the same. This tool can help the players understand how the match will proceed after a shot is played, which is crucial for analyzing the repercussions of playing a shot. The simulator model is likely to work better when the data used for modelling is larger. Fig.
\ref{P12_shots} shows the distribution of shots for both players in the data collected. The distribution of shots is unequal, which affects the accuracy of the models' predictions. We estimate that data from at least 20 full matches is required to precisely model the behavior: in our case, with 3 matches, the frequency of most of the shots is very low. A nearly equal distribution of shots is required to ensure the reliability of the results from the model. \section{Conclusion} In this paper, we have developed two novel approaches towards an assistance tool for the game of badminton based on the concepts of Game Theory. Our recommendation tool takes in match data of the player under consideration against a particular opponent and outputs the best possible set of strategies (shots) which the player can use. The simulator model is a generalized and robust extension of this recommendation tool which considers the history of shots played in the ongoing match, along with the match history, to suggest the favorable strategy for the players. The results, analysis and comparison with actual match data show the effectiveness of the system. Our current work is restricted by the availability of data; using the manually annotated data from 3 matches for our experiments was intended to show the feasibility of our approaches. Given a considerable amount of annotated data, our future work is to test our approaches on other two-player sports, to remodel the approaches for team sports, and to build a complete pipeline of a dedicated software application that adapts dynamically to real-time changes during the course of a tournament or match, using computer vision to capture visual information and reinforcement learning approaches like Markov Decision Processes (MDP) to mathematically model decisions for advanced assistance. \bibliographystyle{ieeetr}
\section{Introduction} Let $f\colon(S^2,A)\selfmap$ be a branched covering of the sphere with finite, forward-invariant set $A$ containing $f$'s critical values, from now on called a \emph{Thurston map}. A celebrated theorem by Thurston~\cite{douady-h:thurston} gives a topological criterion for $f$ to be isotopic to a rational map, for an appropriate complex structure on $(S^2,A)$. One of the virtues of rational maps, following from Schwarz's lemma, is that they are expanding for the hyperbolic metric of curvature $-1$ associated with the complex structure. In this article, following the announcement in~\cite{bartholdi-dudko:bc0}, we give a criterion for $f$ to be isotopic to an expanding map, namely for there to exist a metric on $(S^2,A)$ that is expanded by a map isotopic to $f$. It will turn out that the metric may, for free, be required to be Riemannian of pinched negative curvature. Some care is needed to define expanding maps with periodic critical points. Consider a non-invertible map $f\colon (S^2,A)\selfmap$. Let $A^\infty\subseteq A$ denote the forward orbit of the periodic critical points of $f$. The map $f$ is \emph{metrically expanding} if there exists a subset $A'\subseteq A^\infty$ and a metric on $S^2\setminus A'$ that is expanded by $f$, and such that at all $a\in A'$ the first return map of $f$ is locally conjugate to $z\mapsto z^{\deg_a(f^n)}$, where $n$ is the period of $a$. In other words, the points in $A'$ are cusps, or equivalently at infinite distance, in the metric. We call $f$ \emph{B\"ottcher expanding} if $A'=A^\infty$. This definition is designed to generalize the class of rational maps. Indeed, every post-critically finite rational map $f\colon (\hC,A)\selfmap$ is B\"ottcher expanding by considering the hyperbolic (or Euclidean if $|A|=2$) metric of $(\hC,\ord)$ for an appropriate orbifold structure $\ord\colon A\to \N\cup \{\infty\}$.
We call $f$ \emph{topologically expanding} if there exists a compact retract $\mathcal M\subset S^2\setminus A'$ and a finite open covering $\mathcal M=\bigcup\mathcal U_i$ such that connected components of $f^{-n}(\mathcal U_i)$ get arbitrarily small as $n\to\infty$ and such that $S^2\setminus \mathcal M$ is in the immediate attracting basin of $A'$; see~\cite{bartholdi-h-n:aglt}. If $A^\infty=\emptyset$, B\"ottcher expanding maps are the everywhere-expanding maps considered e.g.\ in~\cites{haissinsky-pilgrim:cxc,bonk-meyer:etm}. An obstruction to topological expansion is the existence of a \emph{Levy cycle}. This is an essential simple closed curve on $S^2\setminus A$ that is isotopic to some iterated preimage of itself, the preimage mapping to it by degree $1$. We shall see that it is the only obstruction. We recall briefly the algebraic encoding of branched coverings: given $f\colon(S^2,A)\selfmap$, set $G\coloneqq \pi_1(S^2\setminus A,*)$, and define \[B(f)\coloneqq\{\gamma\colon[0,1]\to\mathcal M\mid\gamma(0)=f(\gamma(1))=*\}\,/\,\text{homotopy}. \] This is a set with commuting left and right $G$-actions; see~\S\ref{ss:bisets}, to which we refer for the definition of \emph{contracting} bisets. Two branched self-coverings $f_0\colon(S^2,A_0)\selfmap$ and $f_1\colon(S^2,A_1)\selfmap$ are \emph{combinatorially equivalent} if there is a path $(f_t\colon(S^2,A_t)\selfmap)_{t\in[0,1]}$ of branched self-coverings joining them; this happens precisely when the bisets $B(f_0)$ and $B(f_1)$ are isomorphic in a suitably defined sense, see~\cite{kameyama:thurston} and~\cite{bartholdi-dudko:bc2}. The main result of this part is the following criterion; equivalence of~\eqref{main:topexp} and~\eqref{main:contracting} was known in the case $A^\infty=\emptyset$ from~\cite{haissinsky-pilgrim:algebraic}*{Theorem~4}: \begin{mainthm}[= Theorem~\ref{thm:ExpCr}]\label{thm:main} Let $f\colon(S^2,A)\selfmap$ be a Thurston map, not doubly covered by a torus endomorphism.
The following are equivalent: \begin{enumerate} \item $f$ is combinatorially equivalent to a B\"ottcher metrically expanding map;\label{main:Bottcher} \item $f$ is combinatorially equivalent to a topologically expanding map;\label{main:topexp} \item $B(f)$ is an orbisphere contracting biset;\label{main:contracting} \item $f$ is non-invertible and admits no Levy cycle. \end{enumerate} Furthermore, if these properties hold, the metric in~\eqref{main:Bottcher} may be assumed to be Riemannian of pinched negative curvature. \end{mainthm} Ha\"\i ssinsky and Pilgrim ask in~\cite{haissinsky-pilgrim:algebraic} whether every everywhere-expanding map is isotopic to a smooth map. By Theorem~\ref{thm:main}, a combinatorial equivalence class contains a B\"ottcher smooth expanding map if and only if it is Levy free. If $A^\infty =\emptyset$, then a B\"ottcher expanding map is expanding everywhere. \subsection{Geometric maps and decidability} Let us define \emph{\Tor} maps as self-maps of the sphere $S^2$ that are a quotient of a torus endomorphism $M z+v\colon\R^2\selfmap$ by the involution $z\mapsto-z$, such that the eigenvalues of $M$ are different from $\pm 1$. Let us call a Thurston map \emph{geometric} if it is either B\"ottcher expanding or \Tor. Recall from~\cite{bartholdi-dudko:bc2} that $R(f,A,\CC)$ denotes the small return maps of the decomposition of a Thurston map $f$ under an invariant multicurve $\CC$. The \emph{canonical Levy obstruction} $\CC_{\text{Levy}}$ of a Thurston map $f\colon(S^2,A)\selfmap$ is a minimal $f$-invariant multicurve all of whose small Thurston maps are either homeomorphisms or admit no Levy cycle. It is unique by Proposition~\ref{prop:HypLevyMCurvInter}. The \emph{Levy decomposition} of $f$ (and equivalently of its biset) is its decomposition (as a graph of bisets) along the canonical Levy obstruction. 
It was proven in~\cite{selinger-yampolsky:geometrization}*{Main Theorem~II} that every Levy-free map that is doubly covered by a torus endomorphism is in \Tor. Combined with Theorem~\ref{thm:main}, this implies \begin{maincor} Let $f\colon(S^2,A)\selfmap$ be a Thurston map. Then every map in $R(f,A,\CC_{\text{Levy}})$ is either geometric or a homeomorphism.\qed \end{maincor} The following consequences are essential for the decidability of combinatorial equivalence of Thurston maps. \begin{maincor}[= Algorithms~\ref{algo:istor} and~\ref{algo:isexp}]\label{cor:decidegeom} There is an algorithm that, given a Thurston map by its biset, decides whether it is geometric. \end{maincor} \noindent As a consequence we have \begin{maincor}[= Algorithm~\ref{algo:levy}]\label{cor:decidelevy} Let $f$ be a Thurston map. Then its Levy decomposition is symbolically computable. \end{maincor} There may exist expanding maps in the combinatorial equivalence class of a Thurston map which are not B\"ottcher expanding. However, every expanding map is a quotient of a B\"ottcher expanding map, by Theorem~\ref{thm:main} combined with the following \begin{prop}[= Proposition~\ref{prop:semiconjugate}] Let $f,g\colon(S^2,A)\selfmap$ be isotopic Thurston maps, let $\Fatou(f),\Fatou(g)$ be their respective Fatou sets (see~\S\ref{ss:fatou}), and assume $A\cap(\Fatou(g)\setminus\Fatou(f))=\emptyset$. Then there is a semiconjugacy from $f$ to $g$, defined by collapsing to points those components of $\Fatou(f)$ that are attracted towards $A\cap(\Fatou(f)\setminus\Fatou(g))$ under $f$. \end{prop} We will show in~\cite{bartholdi-dudko:bc3} that the semiconjugacy is unique. We deduce the following extension to B\"ottcher expanding maps of a classical result for rational maps (see e.g.~\cite{douady-h:thurston}*{Corollary~3.4(b)}): \begin{cor}[=Lemma~\ref{lem:ConjBetwExpMaps}] Let $f,g$ be B\"ottcher expanding Thurston maps.
Then $f$ and $g$ are combinatorially equivalent if and only if they are conjugate.\qed \end{cor} We also characterize maps (such as rational maps with Julia set a Sierpi\'nski carpet) that are isotopic to an everywhere-expanding map. A \emph{Levy arc} for a Thurston map $f\colon(S^2,A)\selfmap$ is a non-trivial path with endpoints in $A$ which is isotopic to an iterated lift of itself: \begin{prop}[= Lemma~\ref{lem:HomIsolCond} with $A=A'$] Consider a Thurston map $f$ that is not doubly covered by a torus endomorphism. Then $f$ is isotopic to an everywhere-expanding map if and only if $f$ admits no Levy obstruction nor Levy arc. \end{prop} \subsection{Matings and amalgams} We finally apply Theorem~\ref{thm:main} to the study of matings, and more generally amalgams of expanding maps. We state the results for matings in this introduction, while~\S\ref{ss:matings} will discuss the general case of amalgams. Let $p_+(z)=z^d+\cdots$ and $p_-(z)=z^d+\cdots$ be two post-critically finite monic polynomials of the same degree. Denote by $\overline\C$ the compactification of $\C$ by a circle at infinity $\{\infty\exp(2\pi i\theta)\}$, and consider the sphere \[\mathbb S\coloneqq (\overline\C\times\{\pm1\})\,/\,\{(\infty\exp(2\pi i\theta),+1)\sim(\infty\exp(-2\pi i\theta),-1)\}.\] (Note the reversed orientation between the two copies of $\overline\C$). The \emph{formal mating} \begin{equation}\label{eq:mating} p_+ \FM p_-\colon \mathbb S\selfmap,\quad (z,\varepsilon)\mapsto(p_\varepsilon(z),\varepsilon) \end{equation} is the branched covering of $\mathbb S$ acting as $p_+$ on its northern hemisphere, as $p_-$ on its southern hemisphere, and as $z^d$ on the common equator $\{\infty\exp(2i\pi\theta)\}$. The maps $p_+,p_-$ glue continuously by Lemma~\ref{lem:Bottcher extension}. We recall the definition of \emph{external rays} associated to the polynomials $p_\pm$.
For a polynomial $p$, the \emph{filled-in Julia set} $K_p$ of $p$ is \[K_p=\{z\in\C\mid p^n(z)\not\to\infty\text{ as }n\to\infty\}.\] Assume that $K_p$ is connected. There exists then a unique holomorphic isomorphism $\phi_p\colon\hC\setminus K_p\to\hC\setminus\overline{\mathbb D}$ satisfying $\phi_p(p(z))=\phi_p(z)^d$ and $\phi_p(\infty)=\infty$ and $\phi_p'(\infty)=1$. It is called a \emph{B\"ottcher coordinate}, and conjugates $p$ to $z^d$ in a neighbourhood of $\infty$. For $\theta\in\R/\Z$, the associated \emph{external ray} is \[R_p(\theta)=\{\phi_p^{-1}(re^{2i\pi\theta})\mid r\ge1\}. \] Let $\Sigma$ denote the quotient of $\mathbb S$ in which each $\overline{(R_{p_\varepsilon}(\theta),\varepsilon)}$ has been identified to one point for each $\theta\in\R/\Z$ and each $\varepsilon\in\{\pm1\}$. Note that $\Sigma$ is a quotient of $K_{p_+}\sqcup K_{p_-}$, and need not be a Hausdorff space. A classical criterion (due to Moore) determines when $\Sigma$ is homeomorphic to $S^2$. If this occurs, $p_+$ and $p_-$ are said to be \emph{topologically mateable}, and the map induced by $p_+\FM p_-$ on $\Sigma$ is called the \emph{topological mating} of $p_+$ and $p_-$ and denoted $p_+\GM p_-\colon \Sigma\selfmap$. \begin{defn} Let $p_+,p_-$ be two monic post-critically finite polynomials of the same degree $d$. We say that $p_+,p_-$ have a \emph{pinching cycle of periodic angles} if there are angles $\phi_0,\phi_1, \dots, \phi_{2n-1}\in \Q/\Z$ with denominators coprime to $d$, such that for all $\varepsilon=\pm1$ and all $i=0,\dots,2n-1$, indices treated modulo $2n$, the rays $R_{p_\varepsilon}( \varepsilon \phi_{2i})$ and $R_{p_\varepsilon}(\varepsilon \phi_{2i+\varepsilon})$ land together.
\end{defn} We give a computable criterion for two hyperbolic polynomials to be mateable, which extends a well-known criterion ``two quadratic polynomials are geometrically mateable if and only if they do not belong to conjugate primary limbs in the Mandelbrot set'' due to Mary Rees and Tan Lei, see~\cite{tan:matings} and~\cite{buff+:questionsaboutmatings}*{Theorem~2.1}: \begin{mainthm}\label{thm:mating} Let $p_+,p_-$ be two monic hyperbolic post-critically finite polynomials. Then the following are equivalent: \begin{enumerate} \item\label{thm:mating:1} $p_+ \FM p_-$ is combinatorially equivalent to an expanding map; \item\label{thm:mating:2} $p_+ \GM p_-$ is a sphere map (necessarily conjugate to any expanding map in~\eqref{thm:mating:1}); \item\label{thm:mating:3} $p_+,p_-$ do not have a pinching cycle of periodic angles. \end{enumerate} \end{mainthm} To be more precise, the criterion due to Mary Rees and Tan Lei relies on the fact that, in degree $2$, every Thurston obstruction is a Levy obstruction, so every expanding map is automatically conjugate to a rational map. In degree $\ge3$ there are topological matings that are not conjugate to rational maps: the example in~\cite{shishikura-t:matings} is precisely such a mating with an obstruction but no Levy obstruction, and it is isotopic to an expanding map. Furthermore, in degree $2$ every decomposition of a Thurston map along a Levy cycle has a fixed sphere or cylinder which maps to itself by a homeomorphism cyclically permuting the boundary components (namely, there exists a ``good Levy cycle''). This implies that obstructed maps have a pinching cycle of periodic angles with $n=2$. In Example~\ref{exple:degree3}, we show that this does not hold in higher degree. \subsection{Notation} Let $f\colon(S^2, A)\selfmap$ be a Thurston map with an invariant multicurve $\CC$. 
Recall that by $R(f,A,\CC)$ we denote the return maps induced by $f$ on $S^2\setminus\CC$, see~\cite{bartholdi-dudko:bc2}*{\S\ref{bc2:ss:return maps}}. We introduce the following notation. By default, curves and multicurves are considered up to isotopy rel the marked points; we use the terminology ``equal'' to mean that. In particular, a cycle of curves is really a sequence of curves that are mapped cyclically to each other, up to isotopy. If we want to insist that curves are equal and not just isotopic, we add the adjective ``solid''; thus a solid cycle of curves is a sequence of curves mapped cyclically to each other, ``on the nose''. We reserve letters `$\CC$' for invariant multicurves and `$C$' for cycles of curves, or more generally for subsets of invariant multicurves. \section{Multicurves and the Levy decomposition} Let $A$ be a finite subset of the topological sphere $S^2$, and consider simple closed curves on $S^2\setminus A$. Recall that such a curve is \emph{trivial} if it bounds a disc in $S^2\setminus A$, and is \emph{peripheral} if it may be homotoped into arbitrarily small neighbourhoods of $A$; otherwise, it is \emph{essential}. A \emph{multicurve} is a collection of mutually non-intersecting non-homotopic essential simple closed curves. Following Harvey~\cite{harvey:curvecomplex}, we denote by $\mathcal C(S^2\setminus A)$ the flag complex whose vertices are isotopy classes of essential curves, and a collection of curves belongs to a simplex if they have disjoint representatives; so multicurves on $S^2\setminus A$ are naturally identified with simplices in $\mathcal C(S^2\setminus A)$. (The empty multicurve corresponds to the empty simplex).
Given two simple closed curves $\gamma_1$ and $\gamma_2$ on $S^2\setminus A$, their \emph{geometric intersection number} is defined as \[i(\gamma_1,\gamma_2)=\min_{\gamma'_1,\gamma'_2}\#(\gamma'_1\cap\gamma'_2), \] with the minimum ranging over all curves $\gamma'_1$ isotopic to $\gamma_1$ and $\gamma'_2$ isotopic to $\gamma_2$. The simple closed curves $\gamma_1$ and $\gamma_2$ are in \emph{minimal position} if $i(\gamma_1,\gamma_2)=\#(\gamma_1\cap \gamma_2)$. We say that two simple closed curves $\gamma_1$ and $\gamma_2$ \emph{cross} if $i(\gamma_1,\gamma_2)>0$. Clearly, if $\gamma_1$ and $\gamma_2$ are isotopic or one of them is inessential, then $i(\gamma_1,\gamma_2)=0$. Two multicurves $\CC_1$ and $\CC_2$ \emph{cross} if there are $\gamma_1\in \CC_1$ and $\gamma_2\in \CC_2$ that cross. \begin{prop}[The Bigon Criterion, \cite{farb-margalit:mcg}*{Proposition~1.3}]\label{prop:BigonCriterion} Two transverse simple closed curves on a surface $S$ are in minimal position if and only if the two arcs between any pair of intersection points never bound an embedded disc in $S$.\qed \end{prop} \subsection{Levy, anti-Levy, Cantor, and anti-Cantor multicurves} Consider a Thurston map $f\colon(S^2,A)\selfmap$. We construct the following directed graph: its vertex set is the set of essential simple closed curves on $S^2\setminus A$, namely the vertex set of the curve complex $\mathcal C(S^2\setminus A)$. For every simple closed curve $\gamma$ and for every component $\delta$ of $f^{-1}(\gamma)$, we put an edge from $\gamma$ to $\delta$ labeled $\deg(f\restrict\delta)$. Note that the operation $f^{-1}$ induces a map on the simplices of $\mathcal C(S^2\setminus A)$, but not a simplicial map. \begin{figure} \begin{tikzpicture}[auto,vx/.style={circle,minimum size=1ex,inner sep=0pt,outer sep=2pt,draw,fill=gray!50}] \def\leftsphere#1#2{+(0,0.1) .. controls +(180:#1/2) and +(0:#1/2) .. +(-#1,#2) .. controls +(180:#2) and +(180:#2) .. +(-#1,-#2) ..
controls +(0:#1/2) and +(180:#1/2) .. +(0,-0.1) ++(0,0)} \def\rightsphere#1#2{+(0,0.1) .. controls +(0:#1/2) and +(180:#1/2) .. +(#1,#2) .. controls +(0:#2) and +(0:#2) .. +(#1,-#2) .. controls +(180:#1/2) and +(0:#1/2) .. +(0,-0.1) ++(0,0)} \def\midsphere#1#2{+(0,0.1) .. controls +(0:#1/2) and +(180:#1/2) .. +(#1,#2) .. controls +(0:#1/2) and +(180:#1/2) .. +(#1*2,0.1) +(0,-0.1) .. controls +(0:#1/2) and +(180:#1/2) .. +(#1,-#2) .. controls +(0:#1/2) and +(180:#1/2) .. +(#1*2,-0.1) ++(#1*2,0) } \path[draw][very thick] (-2.4,0) node [xshift=-15mm] {$S_1$} \leftsphere{1.5}{0.7} node [below=1mm] {$v_1$} node[xshift=12mm] {$S_2$} \midsphere{1.2}{0.7} node[below=1mm] {$v_2$} node[xshift=12mm] {$S_3$} \midsphere{1.2}{0.7} node[below=1mm] {$v_3$} node[xshift=15mm] {$S_4$} \rightsphere{1.5}{0.7}; \path[draw][very thick] (-3.4,2.5) node [xshift=-8mm] {$S_1$} \leftsphere{0.8}{0.7} node [xshift=3mm] {\small $S'_2$} \midsphere{0.3}{0.4} node [xshift=3mm] {\small $S'_3$} \midsphere{0.3}{0.4} node [xshift=8mm] {$S_2$} \midsphere{0.8}{0.7} node [xshift=3mm] {\small $S''_3$} \midsphere{0.3}{0.4} node [xshift=3mm] {\small $S'_4$} \midsphere{0.3}{0.4} node [xshift=8mm] {$S_3$} \midsphere{0.8}{0.7} node [xshift=3mm] {\small $S''_2$} \midsphere{0.3}{0.4} node [xshift=3mm] {\small $S'''_3$} \midsphere{0.3}{0.4} node [xshift=8mm] {$S_4$} \rightsphere{0.8}{0.7}; \path[draw] (-3.4,2.4) -- (-2.4,0.1) \fwdarrowonline{0.5}; \path[draw] (-2.8,2.4) -- (-0.25,0.15) \fwdarrowonline{0.5}; \path[draw] (-2.2,2.4) -- (-0.1,0.1) \fwdarrowonline{0.5}; \path[draw] (-0.6,2.4) -- (0.0,0.1) \fwdarrowonline{0.667}; \path[draw] (0.0,2.4) -- (2.3,0.1) \fwdarrowonline{0.667}; \path[draw] (0.6,2.4) -- (2.4,0.1) \fwdarrowonline{0.667}; \path[draw] (2.2,2.4) -- (0.1,0.1) \bckarrowonline{0.667}; \path[draw] (2.8,2.4) -- (0.25,0.15) \bckarrowonline{0.333}; \path[draw] (3.4,2.4) -- (2.5,0.1) \bckarrowonline{0.5}; \tikz@path@overlay{node}[vx,label={$v_1$}] (v1) at (-2.5,-1.5) {}; 
\tikz@path@overlay{node}[vx,label={$v_2$}] (v2) at (0,-1.5) {}; \tikz@path@overlay{node}[vx,label={$v_3$}] (v3) at (2.5,-1.5) {}; \path[draw][<-] (v1) to [loop left] (v1); \path[draw][<-] (v1) to [bend left=20] (v2); \path[draw][<-] (v1) to [bend right=20] (v2); \path[draw][->] (v2) to [loop below] (v2); \path[draw][->] (v2) to [bend right=15] (v3); \path[draw][->] (v2) to [bend right=45] (v3); \path[draw][->] (v3) to [loop right] (v3); \path[draw][->] (v3) to [bend right=15] (v2); \path[draw][->] (v3) to [bend right=45] (v2); \end{tikzpicture} \caption{A bicycle $\{v_2,v_3\}$ generates a Cantor multicurve $\{v_1,v_2,v_3\}$. The action of the map $f$ is indicated on the preimages of $\{v_1,v_2,v_3\}$. If annuli are mapped by degree $1$, then it is also a Levy cycle. Trivial spheres are omitted on the top sphere. The graph below is the corresponding portion of the graph on $\mathcal C(S^2\setminus A)$.} \label{Fig:ExampleCantorMult} \end{figure} A multicurve $\CC\in\mathcal C(S^2\setminus A)$ is \emph{invariant} if $f^{-1}(\CC)=\CC$. Given a multicurve $\CC_0$ with $\CC_0\subseteq f^{-1}(\CC_0)$, there is a unique invariant multicurve $\CC$ \emph{generated} by $\CC_0$, namely the intersection of all invariant multicurves containing $\CC_0$. The invariant multicurve $\CC$ may readily be computed by considering $\CC_0,f^{-1}(\CC_0),f^{-2}(\CC_0),\dots$; this is an ascending sequence of multicurves, and each multicurve contains at most $\#A-3$ curves so the sequence must stabilize. Let $\CC$ be an invariant multicurve, and consider the corresponding directed subgraph of $\mathcal C(S^2\setminus A)$. A \emph{strongly connected component} is a maximal subgraph spanned by a subset $C\subseteq\CC$ such that, for every $\gamma,\delta\in C$, there exists a non-trivial path from $\gamma$ to $\delta$ in $C$. Note that singletons with no loop are never strongly connected components. 
Strongly connected components are partially ordered: $C\prec D$ if there is a path from a curve in $C$ to a curve in $D$. Consider a strongly connected component $C$. We call $C$ \emph{primitive in $\CC$} if it is minimal for $\prec$. We call $C$ a \emph{bicycle} if for every $\gamma,\delta\in C$ there exists $n\in\N$ such that at least two paths of length $n$ join $\gamma$ to $\delta$ in $C$, and a \emph{unicycle} otherwise; see Figure~\ref{Fig:ExampleCantorMult} for an illustration. We remark that bicycles contain at least two cycles, so the number of paths of length $n$ grows exponentially in $n$. On the other hand, every unicycle is an actual \emph{periodic cycle}, namely can be written as $C=(\gamma_0,\gamma_1,\dots,\gamma_n=\gamma_0)$ in such a manner that $\gamma_{i+1}$ has an $f$-preimage $\gamma'_i$ isotopic to $\gamma_i$. If in a periodic cycle $C$ the $\gamma'_i$ may be chosen so that $f$ maps each $\gamma'_i$ to $\gamma_{i+1}$ by degree $1$, then $C$ is called a \emph{Levy cycle}. A periodic cycle $C=(\gamma_0,\gamma_1,\dots,\gamma_n=\gamma_0)$ is a \emph{solid periodic cycle} if $f$ maps $\gamma_i$ onto $\gamma_{i+1}$ for all $i=0,\dots,n-1$; if $f$ maps every $\gamma_i$ to $\gamma_{i+1}$ by degree $1$, then $C$ is called a \emph{solid Levy cycle}. Since the critical values of $f$ are assumed to belong to $A$, the restrictions $f\restrict{\gamma_i}\colon\gamma_i\to\gamma_{i+1}$ are all homeomorphisms. Note that a periodic cycle may be isotopic to more than one solid periodic cycle, possibly some solid Levy and some solid non-Levy cycles. 
\[\begin{tikzpicture}[->,auto,vx/.style={circle,minimum size=1ex,inner sep=0pt,outer sep=2pt,draw,fill=gray!50}] \tikz@path@overlay{node}[vx] (1) at (-1,0) {}; \tikz@path@overlay{node}[vx] (2) at (1,0) {}; \path[draw] (1) to [bend left=60] (2); \path[draw] (1) to [bend left=10] (2); \path[draw] (2) to [bend left] (1); \tikz@path@overlay{node} at (0,-0.6) {bicycle}; \end{tikzpicture} \qquad \begin{tikzpicture}[->,auto,vx/.style={circle,minimum size=1ex,inner sep=0pt,outer sep=2pt,draw,fill=gray!50}] \tikz@path@overlay{node}[vx] (1) at (-1,0) {}; \tikz@path@overlay{node}[vx] (2) at (0,1.5) {}; \tikz@path@overlay{node}[vx] (3) at (1,0) {}; \path[draw] (1) to [bend left=10] node {$1:1$} (2); \path[draw] (2) to [bend left=10] node {$1:1$} (3); \path[draw] (3) to [bend left=10] node [above] {$1:1$} (1); \tikz@path@overlay{node} at (0,-0.6) {Levy cycle}; \end{tikzpicture} \qquad \begin{tikzpicture}[->,auto,vx/.style={circle,minimum size=1ex,inner sep=0pt,outer sep=2pt,draw,fill=gray!50}] \tikz@path@overlay{node}[vx] (1) at (-1,0) {}; \tikz@path@overlay{node}[vx] (2) at (-1,1) {}; \tikz@path@overlay{node}[vx] (3) at (1,0) {}; \tikz@path@overlay{node}[vx] (4) at (1,1) {}; \path[draw][dashed,thick] (-1,0.5) ellipse (6mm and 9mm) node[above right=7mm] {$C$}; \path[draw] (1) to [bend left] (2); \path[draw] (2) to [bend left] (1); \path[draw] (1) to (3); \path[draw] (3) to [bend left] (4); \path[draw] (4) to [bend left] (3); \tikz@path@overlay{node} at (0,-0.6) {Primitive s.c.c.}; \end{tikzpicture} \] We remark that every invariant multicurve is generated by its primitive unicycles and bicycles, and that if $C$ is a strongly connected component of an invariant multicurve $\CC$ and $C$ has a curve in common with an invariant multicurve $\DD$ then $C$ is also a strongly connected component in $\DD$; and it is a bicycle in $\CC$ if and only if it is a bicycle in $\DD$. However, $C$ could be primitive in $\CC$ but not in $\DD$. 
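These combinatorial notions are easy to experiment with on finite graphs. The following Python sketch (our illustration; the encoding and all names are ours, not the paper's) stores the graph of Figure~\ref{Fig:ExampleCantorMult} as a multigraph with one edge $\gamma\to\delta$ per component $\delta$ of $f^{-1}(\gamma)$, computes the invariant multicurve generated by a seed via the saturation $\CC_0,f^{-1}(\CC_0),f^{-2}(\CC_0),\dots$, and classifies strongly connected components; we use the reformulation that a strongly connected component is a unicycle exactly when every vertex has a single outgoing edge inside it.

```python
from itertools import chain

# Preimage graph of Figure "ExampleCantorMult", as a multigraph:
# one entry delta in EDGES[gamma] per component delta of f^{-1}(gamma).
EDGES = {
    "v1": ["v1"],
    "v2": ["v1", "v1", "v2", "v3", "v3"],
    "v3": ["v3", "v2", "v2"],
}

def saturate(seed, edges):
    """Invariant multicurve generated by `seed`: the stabilization of the
    ascending chain C0, f^{-1}(C0), f^{-2}(C0), ..."""
    cur = set(seed)
    while True:
        nxt = cur | {d for g in cur for d in edges.get(g, [])}
        if nxt == cur:
            return cur
        cur = nxt

def strongly_connected_components(edges):
    """Strongly connected components by mutual reachability (fine for tiny
    graphs); singletons without a loop are discarded, as in the text."""
    verts = set(edges) | set(chain.from_iterable(edges.values()))
    def reach(v):
        seen, stack = set(), list(edges.get(v, []))
        while stack:
            w = stack.pop()
            if w not in seen:
                seen.add(w)
                stack.extend(edges.get(w, []))
        return seen
    r = {v: reach(v) for v in verts}
    sccs = []
    for v in verts:
        if v in r[v]:  # v lies on a non-trivial cycle
            comp = frozenset({v} | {w for w in r[v] if v in r[w]})
            if comp not in sccs:
                sccs.append(comp)
    return sccs

def is_bicycle(comp, edges):
    """A strongly connected component is a unicycle iff it is a simple cycle,
    i.e. every vertex has exactly one outgoing edge inside the component."""
    return any(sum(1 for d in edges.get(v, []) if d in comp) > 1 for v in comp)
```

On this graph the seed $\{v_2,v_3\}$ saturates to $\{v_1,v_2,v_3\}$, and $\{v_2,v_3\}$ is the unique bicycle, in accordance with the caption of Figure~\ref{Fig:ExampleCantorMult}.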
We will sometimes speak of a strongly connected component without reference to an invariant multicurve containing it. We will also say that a strongly connected component $C$ is \emph{primitive} if it is primitive in every invariant multicurve containing $C$. \begin{defn}[Types of invariant multicurves] Let $\CC$ be an invariant multicurve. Then $\CC$ is \begin{idescription} \item[Cantor] if it is generated by its bicycles; \item[anti-Cantor] if $\CC$ does not contain any bicycle; \item[Levy] if it is generated by its Levy cycles; \item[anti-Levy] if $\CC$ does not contain any Levy cycle.\qedhere \end{idescription} \end{defn} \begin{prop}\label{Prop:CantorMultCurv} Suppose $f\colon(S^2, A)\selfmap$ is a Thurston map with an invariant multicurve $\CC$. Then \begin{enumerate} \item there is a unique maximal invariant Cantor sub-multicurve $\CC_{\text{Cantor}}\subseteq\CC$ such that the restrictions of $\CC$ to pieces in $S^2\setminus\CC_{\text{Cantor}}$ are anti-Cantor invariant multicurves of return maps in $R(f,A,\CC_{\text{Cantor}})$; \item there is a unique maximal invariant Levy sub-multicurve $\CC_{\text{Levy}}\subseteq\CC$ such that the restrictions of $\CC$ to pieces in $S^2\setminus\CC_{\text{Levy}}$ are anti-Levy invariant multicurves of return maps in $R(f,A,\CC_{\text{Levy}})$. \end{enumerate} \end{prop} \begin{proof} The multicurve $\CC_{\text{Cantor}}$ is generated by all the bicycles in $\CC$ while the multicurve $\CC_{\text{Levy}}$ is generated by all the Levy cycles in $\CC$. \end{proof} \subsection{Crossings of Levy cycles} We now show that Levy cycles cross invariant multicurves in a quite restricted way. First we need the following technical properties. \begin{prop}\label{prop:solidPerCycl} Let $f\colon(S^2,A)\selfmap$ be a Thurston map. 
Then \begin{enumerate} \item if $C$ is a periodic cycle, then there is a homeomorphism $h\colon(S^2,A)\selfmap$ isotopic to the identity rel $A$ such that $C$ is a solid periodic cycle of the map $h\circ f$, and is Levy for $h\circ f$ if it was Levy for $f$;\label{prop:solidPerCycl:1} \item if a periodic cycle $C$ crosses a Levy cycle, then $C$ is a periodic primitive unicycle. A strictly preperiodic curve does not cross a Levy cycle;\label{prop:solidPerCycl:2} \item if $L$ is a Levy cycle and $C$ is a periodic cycle crossing $L$ such that $C$ and $L$ are in minimal position, then there is a homeomorphism $h\colon(S^2,A)\selfmap$ isotopic to the identity rel $A$ such that $C$ and $L$ are solid curve cycles of the map $h\circ f$.\label{prop:solidPerCycl:3} \end{enumerate} \end{prop} We remark that the last statement cannot be improved much. Indeed, there is an example, due to Wittner~\cite{wittner:phd}, of the mating of the airplane and rabbit polynomials, which may be decomposed in two manners as a mating; in other words, the map admits two ``equators'' (invariant curves mapped $d:1$ to themselves). It is impossible to make both equators simultaneously solidly periodic and in minimal position; worse, if they are both made solidly periodic, then they must have infinitely many crossings. 
We recall the following \begin{lem}[The Alexander method, \cite{farb-margalit:mcg}*{Proposition~2.8}]\label{lem:alexander} A collection of pairwise non-isotopic essential curves $\{\gamma_i\}_i$ can be simultaneously isotopically moved into $\{\gamma'_i\}_i$ if (1) all curves in $\{\gamma_i\}_i$ are pairwise in minimal position, (2) all curves in $\{\gamma'_i\}_i$ are pairwise in minimal position, (3) every $\gamma_i$ is isotopic to the corresponding $\gamma'_i$, and (4) for pairwise different $i,j,k$ at least one of $i(\gamma_i,\gamma_j)$, $i(\gamma_i,\gamma_k)$ and $i(\gamma_j,\gamma_k)$ is $0$.\qed \end{lem} \begin{proof}[Proof of Proposition~\ref{prop:solidPerCycl}] We begin by~\eqref{prop:solidPerCycl:1}. Write $C=(\gamma_0,\gamma_1,\dots,\gamma_n=\gamma_0)$. For every $i$ choose a component $\gamma'_i$ of $f^{-1}(\gamma_{i+1})$ that is isotopic to $\gamma_i$, mapping by degree $1$ if $C$ is a Levy cycle. Note that the $\gamma'_i$ are disjoint. Any isotopy moving all $\gamma_i$ to $\gamma'_i$ satisfies the claim. Let us move to the second claim. Assume that $C=(\gamma_0,\gamma_1,\dots,\gamma_n=\gamma_0)$ crosses a Levy cycle $L$. By Part~\eqref{prop:solidPerCycl:1} we may assume that $L$ is a solid Levy cycle. Put $\gamma_0$ in minimal position with respect to $L$ and denote by $\#(\gamma_0\cap L)$ the total number of crossings of $\gamma_0$ with $L$. Since $L$ is a solid Levy cycle we have \[\#(f^{-m}(\gamma_0)\cap L)=\#(\gamma_0\cap L) \] for every $m\ge 0$. If $m$ is a multiple of $n$, then $f^{-m}(\gamma_0)$ contains at least one component $\gamma'_0$ isotopic to $\gamma_0$. By minimality, \[\#(\gamma'_0\cap L)\ge \#(\gamma_0\cap L). \] We conclude that for every $m\ge0$ there is exactly one component in $f^{-m}(\gamma_0)$ that crosses $L$. This component is necessarily isotopic to $\gamma_{-m}$, subscripts computed modulo $n$. 
Claim~\eqref{prop:solidPerCycl:2} follows from the observation that if $\gamma$ crosses a Levy cycle $L$, is periodic and is a preimage of some $\gamma'$, then $\gamma'$ crosses $L$. Let us prove Claim~\eqref{prop:solidPerCycl:3}. Write $L=(\lambda_0,\dots,\lambda_p=\lambda_0)$ and $C=(\gamma_0,\dots, \gamma_q=\gamma_0)$. By Part~\eqref{prop:solidPerCycl:1} we may assume that $L$ is a solid Levy cycle. By Part~\eqref{prop:solidPerCycl:2}, there is a unique component $\gamma'_i$ of $f^{-1}(\gamma_{i+1})$ that is isotopic to $\gamma_i$. It follows from the above discussion that \[C'=(\gamma'_0,\gamma'_1,\dots, \gamma'_q) \] is also in minimal position with respect to $L$. It follows from the Alexander method, Lemma~\ref{lem:alexander}, that there is an isotopy moving every $\gamma'_i$ into $\gamma_i$ while fixing every $\lambda_i$. \end{proof} Let $f\colon(S^2,A)\selfmap$ be a Thurston map, and let $\CC$ be an invariant multicurve. The components of $S^2\setminus\CC$ can be compactified to \emph{small spheres} by shrinking each boundary component to a point, and $f$ induces \emph{small maps} between the small spheres, well defined up to isotopy. A periodic small sphere $S_0$ gives rise to a \emph{small Thurston cycle} of maps $S_0\to S_1\to\cdots\to S_0$ (see~\cite{bartholdi-dudko:bc2}*{Definition~\ref{bc2:lem:SmallMaps}}), which is a \emph{small homeomorphism cycle} if all the small maps are homeomorphisms. The next result states that two Levy cycles can be joined so as to give a finer decomposition, with additional homeomorphism small maps. Its content is non-trivial only if the Levy cycles intersect. \begin{cor}\label{cor:IntOfLevyCycl} Let $C_1$ and $C_2$ be two Levy cycles. Then a small neighbourhood of their union is a small homeomorphism cycle. More precisely, assume that $C_1$ and $C_2$ are in minimal position. Denote by $\CC$ the invariant multicurve generated by the boundary of a small neighbourhood of $C_1\cup C_2$ in $S^2$. 
Then the small spheres of $(S^2,A)\setminus\CC$ that intersect $C_1\cup C_2$ form a small homeomorphism cycle. \end{cor} \begin{proof} By Proposition~\ref{prop:solidPerCycl}\eqref{prop:solidPerCycl:3} we may assume that $C_1$ and $C_2$ are solid Levy cycles in minimal position. Let $\CC_0$ be the boundary of a small neighbourhood $N$ of $C_1\cup C_2$ in $S^2$. By the Bigon criterion, Proposition~\ref{prop:BigonCriterion}, all curves in $\CC_0$ are non-trivial. For every $\gamma\in\CC_0$, its image $f(\gamma)$ belongs to $\CC_0$ and the restriction $f\restrict\gamma\colon\gamma\to f(\gamma)$ has degree $1$. Since $f$ is a covering away from $A$, it extends to a homeomorphism on $N$. Up to isotopy, we may suppose that $N$ is invariant. Since $\CC$ does not contain the peripheral or trivial curves in $\CC_0$, we should extend $f\colon N\to N$ to all connected components of $S^2\setminus N$ that contain at most one marked point. By passing to an iterate of $f$ to lighten notation, we may assume that $f$ preserves each component of $\partial N$. Let $D$ be a disc in $S^2\setminus N$, and assume that $f\colon D\to f(D)$ has degree at least $2$. Since $f$ preserves $\partial D$ and is a homeomorphism on $N$, the image $f(D)$ contains $D$. Likewise, $f^{-1}(D)\cap D$ contains a component, say $E$, whose boundary contains $\partial D$. We get a map $f\colon E\to D$ of degree at least $2$; so $D$ contains at least two critical values, so it is essential. It follows that $f$ extends to a homeomorphism on the union of $N$ and the inessential discs touching it. \end{proof} \subsection{The Levy decomposition} A Thurston map $f\colon (S^2,A)\selfmap$ is called \emph{Levy-free} if $f$ does not admit a Levy cycle and the degree of $f$ is at least $2$. Here we characterize the multicurves along which $f$ decomposes into Levy-free maps. We say that an invariant Levy multicurve $\CC$ is \emph{complete} if every small Thurston map in $R(f,A,\CC)$ is either Levy-free or a homeomorphism. 
\begin{prop}\label{prop:HypLevyMCurvInter} Let $\CC_1$ and $\CC_2$ be complete invariant Levy multicurves of a Thurston map $f\colon(S^2,A)\selfmap$. Then \begin{enumerate} \item if a periodic curve $\gamma_1\in\CC_1$ crosses a curve $\gamma_2\in \CC_2$, then $\gamma_1$ and $\gamma_2$ belong to primitive Levy unicycles;\label{prop:HypLevyMCurvInter:1} \item the Levy-free maps in $R(f,A,\CC_1)$ and in $R(f,A,\CC_2)$ are the same;\label{prop:HypLevyMCurvInter:2} \item the multicurve $\CC_1\cap\CC_2$ is a complete invariant Levy multicurve.\label{prop:HypLevyMCurvInter:3} \end{enumerate} \end{prop} It follows that there is a unique minimal complete invariant Levy multicurve, called the \emph{canonical Levy obstruction of $f$} and written $\CC_{f,\text{Levy}}$. Any other invariant complete Levy multicurve $\CC$ contains $\CC_{f,\text{Levy}}$ as a sub-multicurve. \begin{proof}[Proof of Proposition~\ref{prop:HypLevyMCurvInter}] \eqref{prop:HypLevyMCurvInter:1} By the definition of a Levy multicurve, for every $\gamma_2\in\CC_2$ there is a Levy cycle $C_2$ such that $\gamma_2$ is an iterated preimage of a curve in $C_2$. Consider $\gamma_1\in\CC_1$. Then $\gamma_1$ crosses $C_2$, because $\gamma_1$ is periodic. It follows from Proposition~\ref{prop:solidPerCycl}\eqref{prop:solidPerCycl:2} that $\gamma_1$ belongs to a primitive Levy unicycle, and by symmetry the same is true for $\gamma_2$. \eqref{prop:HypLevyMCurvInter:2} Consider a Levy-free cycle $f^p\colon S_0\to S_1\to\dots\to S_p=S_0$ in $R(f,A,\CC_1)$. We show that $\CC_2$ intersects none of the $S_1,S_2,\dots,S_p$; this implies that $\bigsqcup_i S_i$ is contained in a Levy-free cycle $S'_0\to\dots\to S'_{p'}=S'_0$ of small spheres of $R(f,A,\CC_2)$, and symmetrically $\bigsqcup_j S'_j$ is contained in a Levy-free cycle of small spheres of $R(f,A,\CC_1)$, so $\bigsqcup_i S_i$ and $\bigsqcup_j S'_j$ are the same. Assume therefore for contradiction that $\CC_2$ intersects some small sphere $S_i$. 
If this intersection is entirely contained in $S_i$, it will generate a Levy cycle in $\bigsqcup_i S_i$, contradicting the assumption that $\bigsqcup_i S_i$ is Levy-free; therefore $\CC_2$ crosses $\bigsqcup_i\partial S_i$. There is then a periodic curve in $\bigsqcup_i \partial S_i$ crossing $\CC_2$. Choose a curve cycle $C_1\subseteq\bigsqcup_i \partial S_i$ and a curve cycle $C_2\subseteq\CC_2$ such that $C_1$ crosses $C_2$. By Part~\eqref{prop:HypLevyMCurvInter:1} of the proposition, $C_1$ and $C_2$ are anti-Cantor Levy cycles. By Corollary~\ref{cor:IntOfLevyCycl}, there is a small homeomorphism cycle $\{S'_i\}_i$ containing $C_1\cup C_2$. All curves in $\bigsqcup_i\partial S'_i$ belong to Levy cycles. If $\bigcup_i S'_i$ contains (up to isotopy) the union $\bigcup_i S_i$, then we have a contradiction because the degree of $f$ on $\bigcup_i S_i$ is at least $2$ while it is $1$ on $\bigcup_i S'_i$. We now show that we can always reduce to this case. If $\bigcup_i S'_i$ does not contain $\bigcup_i S_i$, then there is a curve cycle in $\bigsqcup_i\partial S'_i$ crossing at least one curve in $\bigsqcup_i \partial S_i$. This implies that there is a Levy cycle in $\bigsqcup_i\partial S'_i$ crossing a Levy cycle in $\bigsqcup_i \partial S_i$. Invoking again Corollary~\ref{cor:IntOfLevyCycl}, we can enlarge $\bigcup_i S'_i$. Repeating, we may enlarge $\bigcup_i S'_i$ so that it contains $\bigcup_i S_i$. Finally, \eqref{prop:HypLevyMCurvInter:3} follows formally from~\eqref{prop:HypLevyMCurvInter:2}. \end{proof} \begin{defn}[Levy decomposition]\label{def:HypDecomp} The Levy decomposition of a Thurston map $f\colon(S^2,A)\selfmap$ is the decomposition of $f$ along the canonical Levy obstruction $\CC_{f,\text{Levy}}$. \end{defn} We may understand the Levy decomposition of a Thurston map $f\colon(S^2,A)\selfmap$ as follows, if we consider more general subsets of $S^2$ on which $f$ acts as a homeomorphism. 
Let us call ``Levy kernel'' a subset $L\subseteq S^2$ together with a partition $L=\bigsqcup_{i\in I} S_i$ and a map $f\colon I\selfmap$ such that each $S_i$ is either an essential simple closed curve or an essential small sphere, and is considered up to isotopy; we require that every $S_i$ be isotopic to a degree-$1$ preimage of $S_{f(i)}$, and that if $S_i$ is a curve, then it is not homotopic to any (boundary) curve in $\bigcup_{j\not=i}\partial S_j$. (The last condition replaces the ``non-homotopic'' condition in the definition of a multicurve.) There is a natural order on Levy kernels, given by inclusion up to isotopy. We may think about a Levy kernel as a subset of a sphere on which $f$ has degree one. Corollary~\ref{cor:IntOfLevyCycl} states that if two Levy kernels $L_1$, $L_2$ intersect, then we can construct a bigger Levy kernel $\widetilde L$ that contains both $L_1$ and $L_2$. Therefore, there exists a maximal Levy kernel, and its boundary generates the Levy decomposition. \section{Self-similar groups and automata}\label{ss:bisets} \noindent We recall basic notions about self-similar groups; the reference is~\cite{nekrashevych:ssg}. \subsection{Contracting bisets} Let $G$ be a group. Recall that a \emph{$G$-$G$-biset} is a set $B$ endowed with commuting left and right $G$-actions. Such a biset $B$ is called \emph{left-free} if the left $G$-action is free, i.e.\ has trivial stabilizers. A \emph{basis} is a choice of one element per left $G$-orbit: a subset $X\subseteq B$ such that $B=\bigsqcup_{x\in X}G x$. We therefore have a bijection $G\times X\leftrightarrow B$, and using it we may write the right $G$-action as a map $\Phi\colon X\times G\to G\times X$, determining the structure of $B$. Important examples of bisets come from dynamics: let $f\colon\mathcal M'\to \mathcal M$ be a partial self-covering of a topological space $\mathcal M$, defined on a subset $\mathcal M'\subseteq\mathcal M$. 
Fix a basepoint $*\in\mathcal M$, and set $G\coloneqq\pi_1(\mathcal M,*)$. Set \begin{equation}\label{eq:B(f)} B(f)\coloneqq\{\gamma\colon[0,1]\to\mathcal M\mid \gamma(0)=f(\gamma(1))=*\}\,/\,\text{homotopy}, \end{equation} with left $G$-action given by pre-concatenation and right $G$-action given by post-concatenation of the unique $f$-lift making the resulting path continuous. A basis of $B$ consists, for every $z\in f^{-1}(*)$, of a choice of path in $\mathcal M$ from $*$ to $z$. Of particular interest, for us, is $f\colon(S^2,A)\selfmap$ a Thurston map, with $\mathcal M\coloneqq S^2\setminus A$ and $\mathcal M'\coloneqq S^2\setminus f^{-1}(A)$. Recall that two Thurston maps $f_0\colon(S^2,A_0)\selfmap$ and $f_1\colon(S^2,A_1)\selfmap$ are called \emph{combinatorially equivalent} if there is a path $(f_t\colon(S^2,A_t)\selfmap)_{t\in[0,1]}$ of Thurston maps joining them. Kameyama proved in~\cite{kameyama:thurston}, in another language, that $f_0,f_1$ are combinatorially equivalent if and only if $B(f_0)$ and $B(f_1)$ are \emph{conjugate}: setting $G_i=\pi_1(S^2\setminus A_i,*_i)$ for $i=0,1$, there exists a homeomorphism $\phi\colon S^2\setminus A_0\to S^2\setminus A_1$ and a bijection $\beta\colon B(f_0)\to B(f_1)$ with $\beta(g\cdot b\cdot h)=\phi_*(g)\cdot \beta(b)\cdot\phi_*(h)$ for all $g,h\in G_0,b\in B(f_0)$. See~\cite{bartholdi-dudko:bc2} for details. Bisets may be composed: the product of two $G$-$G$-bisets $B,C$ is $B\otimes_G C\coloneqq (B\times C)/\{(b g,c)=(b,g c)\}$, and is related to the composition of partial coverings: we have a natural isomorphism $B(g\circ f)\cong B(f)\otimes B(g)$. If $B,C$ are left-free with respective bases $S,T$, then $B\otimes C$ is left-free with basis $S\times T$. \begin{defn}[\cite{nekrashevych:ssg}*{Definition~2.11.8}]\label{defn:contracting} Let $B$ be a $G$-$G$-biset. 
It is called \emph{contracting} if for some basis $X\subseteq B$ there exists a finite subset $N\subseteq G$ with the following property: for every $g\in G$ and every $n$ large enough we have the inclusion $X^{\otimes n}g\subseteq N X^{\otimes n}$ in $B^{\otimes n}$. \end{defn} Recall from~\cite{nekrashevych:ssg}*{Proposition~2.11.6} that, if $B$ is contracting for some basis $X$, then it is contracting for every basis, possibly with a different $N$. The set $N$ in Definition~\ref{defn:contracting} is certainly not unique; but for every basis $X$ there exists a minimal such $N$, written $N(B,X)$ and called the \emph{nucleus} of $(B,X)$. Recall also from~\cite{nekrashevych:ssg}*{Proposition~2.11.3} that, if $G$ is finitely generated, $X$ is a basis of $B$ and $B$ is right transitive, then $G$ is generated by $N(B,X)$. These hypotheses are satisfied by bisets of Thurston maps, see~\cite{bartholdi-dudko:bc2}*{Definition~\ref{bc2:dfn:SphBis}}. It is then convenient to express the structure of $B$ by a \emph{Mealy automaton}: it is a finite directed labeled graph with vertex set $N(B,X)$, with labels in $X\times X$ on edges, and with an edge labeled `$x\to y$' from $g\in N(B,X)$ to $h\in N(B,X)$ whenever the equality $x g=h y$ holds in $B$. In fact, one may consider the graph $\mathfrak G$ with vertex set $G$ and an edge from $g\in G$ to $h\in G$ labeled `$x\to y$' whenever $x g=h y$ holds in $B$, and then $N(B,X)$ is precisely the forward attractor of $\mathfrak G$: an equivalent formulation of Definition~\ref{defn:contracting} is that every infinite path in $\mathfrak G$ eventually reaches $N(B,X)$ where it stays. 
Here is an example of an automaton, to which we shall return: \begin{equation}\label{eq:basilica} \begin{fsa}[baseline] \tikz@path@overlay{node}[state] (a) at (-2.7,1.3) {$a$}; \tikz@path@overlay{node}[state] (b) at (-2.7,-1.3) {$b$}; \tikz@path@overlay{node}[state] (e) at (0,0) {$1$}; \path (a) edge node[above=2mm] {$0\to1$} (e) edge[bend left] node {$1\to0$} (b) (b) edge node[below=2mm] {$0\to0$} (e) edge[bend left] node {$1\to1$} (a) (e) edge[loop right] node[above=2mm] {$0\to0$} node[below=2mm] {$1\to1$} (e); \end{fsa} \end{equation} In this automaton, we have $X=\{0,1\}$ and $G=\langle a,b\rangle$. The biset structure is determined by the equations \[0\cdot a=1,\quad 1\cdot a=b\cdot0,\quad 0\cdot b=0,\quad 1\cdot b=a\cdot1.\] The reader may check that this is the biset $B(f)$ as defined in~\eqref{eq:B(f)} for the partial self-covering $f(z)=z^2-1$ of $\hC\setminus\{0,-1,\infty\}$. \begin{prop} Let $G$ be a finitely generated group with solvable word problem, and let $B$ be a computable left-free biset (namely, for a basis $X$ the structure map $X\times G\to G\times X$ is computable). Then it is \emph{semi-decidable} whether $B$ is contracting: there is an algorithm that either runs forever (if $B$ is not contracting) or returns $N(B,X)$ (if $B$ is contracting). \end{prop} \begin{proof} Assume that $B$ is contracting, with nucleus $N(B,X)$. Denote by $\mathscr P_f(G)$ the set of finite subsets of $G$, and define the self-map $\phi\colon\mathscr P_f(G)\selfmap$ by \[\phi(A)=\{h\in G\mid\text{ there exist $x,y\in X$ with }h y \in x A\}. \] Clearly $\phi$ is computable, and $\phi(N(B,X))=N(B,X)$. For $A\subseteq G$ finite, set $\psi(A)\coloneqq\bigcup_{k\ge0}\phi^k(A)$. The sequence $(\bigcup_{k=0}^n\phi^k(A))_n$ is ascending and eventually all $\phi^k(A)$ are contained in $N(B,X)$, so $\psi\colon\mathscr P_f(G)\selfmap$ is computable. Again for $A\subseteq G$ finite, set $\omega(A)\coloneqq\bigcap_{n\ge0}\phi^n(\psi(A))$. 
The sequence $(\phi^n(\psi(A)))_n$ is a decreasing sequence of subsets of the finite set $\psi(A)$, so $\omega$ is also computable. Let $S$ be a finite generating set for $G$, and assume $1\in S=S^{-1}$. Set $N_0\coloneqq \{1\}$, and for $n\ge1$ set $N_n\coloneqq\omega(N_{n-1}S)$. Then $(N_n)_n$ is an increasing sequence of subsets of $N(B,X)$, so it stabilizes, and its limit $\bigcup_{n\ge0} N_n$ is computable and equals $N(B,X)$. \end{proof} \subsection{Limit spaces}\label{ss:limit} Let $B$ be a contracting $G$-$G$-biset, and let $X$ be a basis of $B$. Define a relation on $X^\infty$, called \emph{asymptotic equivalence}, by \begin{multline*} (x_1x_2\dots)\sim(y_1y_2\dots)\Longleftrightarrow\\ \exists(g_0,g_1,g_2,\dots)\in G^\infty\text{ with $\#\{g_n\}<\infty$ and }x_n g_n=g_{n-1}y_n\text{ for all }n\ge1. \end{multline*} More precisely, one says in this case that $x_1x_2\dots$ and $y_1y_2\dots$ are \emph{$g_0$-equivalent}. The \emph{limit space} of $B$ is the quotient \[\Julia(B)\coloneqq X^\infty/{\sim}.\] More precisely, it is a topological orbispace, with, at the class $[x_1 x_2\dots]\in\Julia(B)$, the isotropy group $\{g_0\in G\mid x_1x_2\dots\text{ is $g_0$-equivalent to itself}\}$. By~\cite{nekrashevych:ssg}*{Theorem~3.6.3}, we have $x_1x_2\dots\sim y_1y_2\dots$ if and only if there exists a left-infinite path in the Mealy automaton of $B$, with labels $\dots,x_2\to y_2,x_1\to y_1$ on its arrows. These sequences are $g_0$-equivalent for $g_0$ the terminal vertex of the path. In particular, $\sim$-equivalence classes have cardinality at most $\#N(B,X)$. The topological (orbi)space $\Julia(B)$ is compact, metrizable, of finite topological dimension. For example, \eqref{eq:basilica} gives $x_1\dots x_n0(11)^\infty\sim x_1\dots x_n1(10)^\infty$ for all $x_1,\dots,x_n\in X=\{0,1\}$. Denote by $s\colon X^\infty\selfmap$ the shift map $x_1x_2x_3\dots\mapsto x_2x_3\dots$. Clearly the asymptotic equivalence is invariant under $s$, so $s$ induces a self-map $s\colon\Julia(B)\selfmap$. 
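The nucleus entering this bound can be computed by the algorithm of the proposition above. The following Python sketch (our illustration; the encoding and names are ours) runs it on the basilica recursion~\eqref{eq:basilica}: since $G=\pi_1(\hC\setminus\{0,-1,\infty\},*)$ is free on $a,b$, free reduction of words solves the word problem, and $\phi$, $\psi$, $\omega$ become finite set computations.

```python
# Basilica recursion of the automaton above, encoded as T[(g, x)] = (h, y)
# for the rewriting rule x.g = h.y; capital letters denote the inverses of
# the generators, and the identity is the empty tuple ().
T = {
    ("a", 0): ((), 1), ("a", 1): (("b",), 0),
    ("A", 0): (("B",), 1), ("A", 1): ((), 0),
    ("b", 0): ((), 0), ("b", 1): (("a",), 1),
    ("B", 0): ((), 0), ("B", 1): (("A",), 1),
}
INV = {"a": "A", "A": "a", "b": "B", "B": "b"}

def reduce_word(w):
    """Freely reduce a word (a tuple of letters) in the free group on a, b."""
    out = []
    for c in w:
        if out and out[-1] == INV[c]:
            out.pop()
        else:
            out.append(c)
    return tuple(out)

def step(x, w):
    """Rewrite x.(g1 g2 ...) = h.y one letter at a time; return (h, y)."""
    state = []
    for c in w:
        h, x = T[(c, x)]
        state.extend(h)
    return reduce_word(tuple(state)), x

def phi(ws):
    """phi(A): all restrictions of elements of A, as in the proof."""
    return {step(x, w)[0] for w in ws for x in (0, 1)}

def psi(ws):
    """Forward closure of A under phi; terminates since B is contracting."""
    cur = set(ws)
    while True:
        nxt = cur | phi(cur)
        if nxt == cur:
            return cur
        cur = nxt

def omega(ws):
    """omega(A): the decreasing sequence phi^n(psi(A)) stabilizes."""
    cur = psi(ws)
    nxt = phi(cur)
    while nxt != cur:
        cur, nxt = nxt, phi(nxt)
    return cur

S = [(), ("a",), ("A",), ("b",), ("B",)]  # generators, inverses, identity

def nucleus():
    """Iterate N_n = omega(N_{n-1} S) until the sequence stabilizes."""
    n_cur = {()}
    while True:
        n_nxt = omega({reduce_word(u + s) for u in n_cur for s in S})
        if n_nxt == n_cur:
            return n_cur
        n_cur = n_nxt
```

The iteration stabilizes on the seven-element nucleus $\{1,a^{\pm1},b^{\pm1},a^{-1}b,b^{-1}a\}$, so the bound above says that every asymptotic equivalence class for this biset has at most $7$ elements.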
By~\cite{nekrashevych:ssg}*{Corollary~3.6.7}, the dynamical system $(\Julia(B),s)$ is independent, up to topological conjugacy, of the choice of $X$. Note that $s$ only induces a partial self-covering of $\Julia(B)$, if the orbispace structure of $\Julia(B)$ is taken into account. Let $f\colon\mathcal M'\to\mathcal M$ be a partial self-covering as above, and assume that $\mathcal M$ has a complete length metric which is expanded by $f$. The \emph{Julia set} of $f$ is defined as the accumulation set of backward iterates of a generic point: fix $z\in\mathcal M$, and define \begin{equation}\label{eq:julia} \Julia(f)\coloneqq\overline{\bigcap_{n\ge0}\bigcup_{m\ge n}f^{-m}(z)}, \end{equation} a definition that does not depend on the choice of $z$. By~\cite{nekrashevych:ssg}*{Theorem~5.5.3} the biset $B(f)$ defined in~\eqref{eq:B(f)} is contracting, and the dynamical systems $(\Julia(f),f)$ and $(\Julia(B(f)),s)$ are conjugate. The following image shows the Julia set of $f(z)=z^2-1$, the loops $a,b\in\pi_1(\hC\setminus\{0,-1,\infty\},*)$, and the basis $\{\ell_{x_0},\ell_{x_1}\}$ that were used to compute the automaton~\ref{eq:basilica} ($\ell_{x_1}$ is so short that it is not visible): \begin{center} \begin{tikzpicture}[>=stealth] \path[use as bounding box] (-6,-2) rectangle (6,2); \tikz@path@overlay{node}[gray] at (0,0) {\includegraphics[width=12cm,height=4cm]{basilica.png}}; \tikz@path@overlay{node} at (0,0) {$0$}; \tikz@path@overlay{node} at (-3.6,0) {$-1$}; \tikz@path@overlay{node}[inner sep=0pt] (*) at (-2.1,0) {$*$}; \tikz@path@overlay{node}[inner sep=0pt] (x1) at (-2.6,0) {$x_1$}; \tikz@path@overlay{node}[inner sep=0pt] (x0) at (2.6,0) {$x_0$}; \path[draw][thick,->] (*) .. controls (2.5,-4) and (2.5,4) .. node[left] {$a$} (*); \path[draw][thick,->] (*) .. controls (-5,2.3) and (-5,-2.3) .. node [very near end,below] {$b$} (*); \path[draw][thick,dashed,->] (x1) .. controls (-5.3,2) and (-5.3,-2) .. 
node [near start,above left=-1mm] {$f^{-1}(a)$} (x1); \path[draw][thick,dashed,->] (x0) .. controls (5.3,-2) and (5.3,2) .. node [near start,below right=-1mm] {$f^{-1}(a)$} (x0); \path[draw][thick,dashed,->] (x0) .. controls (1.4,2) and (-1.4,2) .. node [near start,above right=-1mm] {$f^{-1}(b)$} (x1); \path[draw][thick,dashed,->] (x1) .. controls (-1.4,-2) and (1.4,-2) .. node [near start,below left=-1mm] {$f^{-1}(b)$} (x0); \path[draw][thick,dotted,->] (*) -- (x1); \path[draw][thick,dotted,->] (*) .. controls (-1.3,0.3) and (1,1.6) .. node [below] {$\ell_{x_0}$} (x0); \end{tikzpicture} \end{center} \subsection{Orbisphere contracting bisets}\label{ss:orbisphere} We slightly modify the definition of ``contracting'' for sphere bisets, because of the orbisphere structures. Let ${}_GB_G$ be a sphere biset with $G=\pi_1(S^2,A)$. Recall from~\cite{bartholdi-dudko:bc2}*{Equation~\eqref{bc2:eq:minorbispace}} that there is a minimal orbisphere structure $\ord_B$ given by $B$. We call an orbisphere structure $\ord\colon A\to\{2,3,\dots,\infty\}$ \emph{bounded} if $\ord(a)=\infty\Leftrightarrow\ord_B(a)=\infty$ and $\ord(a)\deg_a(B)\mid\ord(B_*(a))$ for all $a\in A$. Let $\overline G$ denote the quotient orbisphere group $G/\langle\gamma_a^{\ord(a)}:a\in A\rangle$. Then we call $B$ an \emph{orbisphere contracting} biset if $\overline G\otimes_G B\otimes_G\overline G$ is contracting for some bounded orbisphere structure on $(S^2,A)$. \section{Expanding non-torus maps} Our purpose is, in this section, to endow the sphere $(S^2,A)$ with a smooth metric that is expanded by a self-map $f\colon (S^2,A)\selfmap$. We recall that by $A^\infty\subset A$ we denote the forward orbit of periodic critical points of $f$. A \emph{non-torus} map is a map that is not doubly covered by a torus endomorphism. \begin{defn}[Metrically expanding maps] Consider a Thurston map $f\colon (S^2,A)\selfmap$ and let $A'$ be a forward-invariant subset of $A^\infty$. 
We say that $f$ is \emph{metrically expanding} if there exists a length metric $\mu$ on $S^2\setminus A^\infty$ such that \begin{enumerate} \item for every non-trivial rectifiable curve $\gamma\colon[0,1]\to S^2\setminus A'$ the length of any lift of $\gamma$ under $f$ is strictly less than the length of $\gamma$; and \label{defn:MetrExp:ExpCond} \item at all $a\in A'$ the first return map $f^n$ of $f$ (where $n$ is the period of $a$) is locally conjugate to $z\mapsto z^{\deg_a(f^n)}$.\label{defn:MetrExp:NormCond} \end{enumerate} If $A'=A^\infty$, then $f\colon (S^2,A)\selfmap$ is called a \emph{B\"ottcher expanding} map. \end{defn} If $\mu=\dd s$ is a Riemannian orbifold metric on $(S^2,A)$ (i.e.~$\mu$ is a smooth metric on $S^2\setminus A'$ with possible cone singularities in $A\setminus A'$), then Condition~\eqref{defn:MetrExp:ExpCond} may be replaced by $f^*\dd s<\dd s$. Let us now define a more general notion of topological expansion. Consider first a covering map $f\colon \mathcal M'\to \mathcal M$ between compact topological orbispaces, with $\mathcal M'\subseteq \mathcal M$. We call $f$ \emph{topologically expanding} if there exists a finite open covering $\mathcal M'=\bigcup\mathcal U_i$ such that connected components of $f^{-n}(\mathcal U_i)$ get arbitrarily small as $n\to\infty$, in the sense that for every finite open covering $\mathcal M=\bigcup\mathcal V_j$ there exists $n\in\N$ such that every connected component of every $f^{-n}(\mathcal U_i)$ is contained in some $\mathcal V_j$. Equivalently, the diameter of connected components of $f^{-n}(\mathcal U_i)$ tends to $0$ with respect to any metric on $\mathcal M'$. \begin{defn}[Topologically expanding maps]\label{defn:TopExp} Consider a Thurston map $f\colon(S^2,A)\selfmap$ and let $A'$ be a forward-invariant subset of $A^\infty$.
We call $f$ \emph{topologically expanding} if there exist $\mathcal M'\subseteq \mathcal M\subseteq S^2$ compact with a topologically expanding orbifold covering map $f\colon (\mathcal M',A)\to (\mathcal M,A)$, such that every connected component $\mathcal U$ of $S^2\setminus \mathcal M$ is a disk containing a unique point $a\in A'$, and $\mathcal U$ is attracted to $a$, and the first return map of $f$ is locally conjugate to $z\mapsto z^{\deg_a(f^n)}$ at $a$, where $n$ is the period of $a$. If $A'=A^\infty$, then $f\colon (S^2,A)\selfmap$ is called a \emph{B\"ottcher topologically expanding} map. \end{defn} \begin{prop}\label{prop:metr=>top} A metrically expanding map is topologically expanding. \end{prop} \begin{proof} Let $f\colon(S^2,A)\selfmap$ be metrically expanding. For each point $a\in A'$ choose an open neighborhood $\mathcal U_a\ni a$ such that $f(\mathcal U_a)$ is compactly contained in $\mathcal U_{f(a)}$. Set $\mathcal M=S^2\setminus\bigcup\mathcal U_a$ and $\mathcal M'=f^{-1}(\mathcal M)$. Since $f$ strictly shortens lifts of rectifiable curves and $\mathcal M'$ is compact, the contraction is uniform on curves of bounded length; hence, for any finite covering of $\mathcal M'$ by small metric balls $\mathcal U_i$, the connected components of $f^{-n}(\mathcal U_i)$ get arbitrarily small as $n\to\infty$. \end{proof} \noindent The goal of this section is to prove the following criterion: \begin{thm}[Expansion Criterion]\label{thm:ExpCr} The following are equivalent, for a combinatorial equivalence class $\mathscr F$ of Thurston maps: \begin{enumerate} \item $\mathscr F$ contains a metrically B\"ottcher expanding map;\label{thm:ExpCr:1} \item $\mathscr F$ contains a topologically expanding map;\label{thm:ExpCr:3} \item $B(f)$ is an orbisphere contracting biset for every $f\in\mathscr F$;\label{thm:ExpCr:4} \item $\mathscr F$ does not admit a Levy cycle, and if $\mathscr F$ is doubly covered by a torus endomorphism $M z+v \colon \R^2/\Z^2 \selfmap$ then both eigenvalues of $M$ have absolute value greater than $1$. \label{thm:ExpCr:5} \end{enumerate} Furthermore, if any of these properties hold then the expanded metric may be assumed to be Riemannian of pinched negative curvature. \end{thm} We will prove Theorem~\ref{thm:ExpCr} for maps not doubly covered by torus endomorphisms.
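Condition~\eqref{thm:ExpCr:5} reduces, in the torus-covered case, to a linear-algebra check on the matrix $M$. The following sketch (using sympy; the two sample matrices are ours, chosen purely for illustration) tests whether both eigenvalues of an integer matrix have absolute value greater than $1$:

```python
import sympy as sp

def torus_condition(M):
    """Condition (4) of the Expansion Criterion for a map doubly covered
    by a torus endomorphism z -> M z + v on R^2/Z^2: both eigenvalues
    of M must have absolute value greater than 1."""
    return all(abs(complex(lam)) > 1 for lam in sp.Matrix(M).eigenvals())

# multiplication by 1+i on C/Z[i] corresponds to M = [[1, -1], [1, 1]];
# its eigenvalues 1 +- i both have absolute value sqrt(2) > 1
assert torus_condition([[1, -1], [1, 1]])

# the cat map [[2, 1], [1, 1]] has an eigenvalue (3 - sqrt(5))/2 < 1,
# so it expands in only one direction and the condition fails
assert not torus_condition([[2, 1], [1, 1]])
```

The cat map illustrates why both eigenvalues are required: an Anosov-type matrix expands one direction while contracting the other, so no metric can be uniformly expanded.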
The remaining case follows from \cite{haissinsky-pilgrim:algebraic}*{Theorem~4} or from \cite{selinger-yampolsky:geometrization}. The hardest implication in the proof is \eqref{thm:ExpCr:5}$\Rightarrow$\eqref{thm:ExpCr:1}, and will occupy most of this section. \begin{proof}[Proof of Theorem~\ref{thm:ExpCr}, \eqref{thm:ExpCr:1}$\Rightarrow$\eqref{thm:ExpCr:3}$\Rightarrow$\eqref{thm:ExpCr:4}$\Rightarrow$\eqref{thm:ExpCr:5}]\label{sec:ProofthmExpCr} The implication \eqref{thm:ExpCr:1}$\Rightarrow$\eqref{thm:ExpCr:3} follows from Proposition~\ref{prop:metr=>top}. By~\cite{bartholdi-h-n:aglt}*{Proposition~6.4}, the biset of a topologically expanding map is contracting; this is \eqref{thm:ExpCr:3}$\Rightarrow$\eqref{thm:ExpCr:4} with slight adjustments to sphere maps. Consider next a combinatorial equivalence class $\mathscr F=[f]$ admitting a Levy cycle $(\gamma_0,\gamma_1,\dots,\gamma_n=\gamma_0)$. Write $G=\pi_1(S^2\setminus A,*)$, consider the $G$-$G$-biset $B(f)$, and choose a basis $X$ for it. The assumption that $(\gamma_i)_i$ is a Levy cycle means that there exist basis elements $x_0,x_1,\dots,x_n=x_0\in X$ with $x_i\gamma_{i+1}=\gamma'_i x_i$ and $\gamma'_i$ conjugate to $\gamma_i$ for all $i\in\Z/n$. In particular, for every $j\in\Z$ there is a conjugate of $\gamma_0^j$ in the nucleus of $(B(f),X)$. Now $\gamma_0$ has infinite order in $G$, because it is not peripheral. It follows that the nucleus of $(B(f),X)$ is infinite, so $B(f)$ is not orbisphere contracting. \end{proof} \begin{proof}[Outline of the proof of Theorem~\ref{thm:ExpCr}, \eqref{thm:ExpCr:5}$\Rightarrow$\eqref{thm:ExpCr:1}] We wish to prove that a Levy-free non-torus Thurston map $f$ admits an expanding metric. We do so by explicitly constructing the metric adapted to $f$. We consider the \emph{decomposition} of $S^2$ into \emph{small spheres} along the canonical obstruction $\CC_f$. 
The map $f$ restricts to maps between the small spheres, well-defined up to isotopy; and the \emph{small Thurston maps} --- the return maps to small spheres --- are combinatorially equivalent to rational maps. We first isotope the periodic small spheres into complex spheres, in such a manner that the small Thurston maps are rational. We put the hyperbolic metric on these periodic small spheres, and pull it back to preperiodic small spheres. It remains to attach the small spheres together. They are spheres with cusps; some of the cusps correspond to the marked set $A$, and some to $\CC_f$. Cut the cusps corresponding to $\CC_f$ along a very small horocycle, and connect the small spheres by very long and thin cylinders along the combinatorics of the original decomposition. We have constructed a space $X$ with a piecewise-smooth non-positively curved metric. Define a self-map $F\colon X\selfmap$ as follows: away from the truncated cusps, apply the original map $f$. Subdivide the long cylinders into long ``annuli'' and short ``annular spheres''. Map the annular spheres to the small spheres they originally mapped to, and map the annuli affinely to each other. The map $F$ is expanding: on periodic small spheres, because it is modelled on rational maps; on preperiodic small spheres, too; on annular and trivial small spheres, because they are contained in thin cylinders; and on annuli because of properties of the canonical obstruction: it contains neither Levy cycles nor primitive unicycles. \end{proof} \subsection{Conformal metrics} Recall first that every Riemannian metric $s$ on a surface (for example a sphere) admits local isothermal coordinates; i.e.\ there is a local chart $\mathcal U$ where $\dd s$ takes the form $\rho(z)|\dd z|$ on the tangent space of $\mathcal U$; the function $\rho\colon \mathcal U\to\R_+$ should be smooth. A metric in this form is called \emph{conformal}.
The Gaussian curvature $\kappa\colon \mathcal U \to\R$ is given by \[\kappa(z)=\frac{-\Delta(\log\rho(z))}{\rho(z)^2}, \] by an easy calculation (see e.g.~\cite{griffiths-harris:pag}*{page~77}). We note for future reference the following simple calculation: if $\rho(z)=\sigma(|z|)$ is rotationally invariant around $0\in \mathcal U$ in the chart $z$, then the Gaussian curvature may be computed as \begin{equation}\label{eq:polarkappa} \kappa(z)=-\frac{\log(\sigma)''+\log(\sigma)'/|z|}{\sigma(|z|)^2}. \end{equation} We shall consider conformal metrics $s$ on an orbisphere $(S^2,A)$. This means that in suitable coordinates we have $\dd s=\rho(z)|\dd z|$ with $\rho\colon S^2\setminus A\to \R_{+}$ that has a continuous extension $\rho\colon S^2\to \R_{+}\cup \{+\infty\}$ such that for $a\in A$: \begin{itemize} \item if $\rho(a)<+\infty$, then $\rho$ is smooth at $a$ (i.e.~$a$ is a usual point); \item if $\rho(a)=+\infty$ but $a$ is at finite distance from points in $S^2$, then $(S^2,s)$ around $a$ is a quotient of a chart $\mathcal U$ endowed with a conformal metric under a finite group of isometries; the point $a$ is called a \emph{cone singularity}. \end{itemize} If $\rho(a)=+\infty$ and $a$ is at infinite distance from points in $S^2$, then $a$ is called a \emph{cusp}. \subsection{Fatou and Julia sets}\label{ss:fatou} We adapt the definition of Julia sets from~\eqref{eq:julia} to expanding Thurston maps. We recall some well-known facts, and include their proofs for convenience. \begin{defn} Let $f\colon(S^2,A)\selfmap$ be an expanding Thurston map. Its \emph{Julia set} $\Julia(f)$ is the closure of the set of repelling periodic points, namely the closure of the set of points $z\in S^2$ with $f^n(z)=z$ for some $n>0$ but admitting no neighbourhood $\mathcal U\ni z$ with $f^n(\mathcal U)$ compactly contained in $\mathcal U$.
The \emph{Fatou set} $\Fatou(f)$ is the locus of continuity of forward orbits, namely the set of $z\in S^2$ at which the orbit map $S^2\to (S^2)^\infty,z\mapsto(z,f(z),f^2(z),\dots)$ is continuous in supremum norm (of any metric on $S^2$ realizing its topology). \end{defn} \begin{lem}\label{lem:JulFatDecomp} $S^2=\Julia(f)\sqcup\Fatou(f)$. Moreover, in the notation of Definition~\ref{defn:TopExp} the Julia set $\Julia(f)$ is the set of points in $\mathcal M'$ that do not escape $\mathcal M'$ under iteration of $f$. \end{lem} \begin{proof} By definition, every point $z$ escaping $\mathcal M'$ is in the attracting basin of $A'$, so $z$ has a stable orbit and $z\in \Fatou(f)$. Conversely, suppose that $z$ does not escape $\mathcal M'$. Fix a metric on $S^2$ realizing its topology and consider $\varepsilon>0$ such that for every $\mathcal V\subset \mathcal M$ with diameter less than $\varepsilon$ the components of $f^{-n}(\mathcal V)$ get arbitrarily small as $n\to\infty$. Choose a large $n\in \N$ and consider the $\varepsilon$-neighborhood $\mathcal V\subset \mathcal M$ of $f^{n}(z)$. The pullback of $\mathcal V$ along the orbit of $z$ is a small (since $n$ is large) neighborhood $\mathcal V'$ of $z$; so there are points close to $z$ that have orbits $\varepsilon$-away from the orbit of $z$. This shows that $z\not\in\Fatou(f)$. Choose now a small closed topological disc $\mathcal V$ containing $z$. There is an $n\ge 1$ such that $f^n(\mathcal V) \supset \mathcal V$. Therefore, there is a periodic point in $\mathcal V$. This shows that $z\in\Julia(f)$. \end{proof} The Fatou set of $f$ is open. Every periodic component of $\Fatou(f)$ contains an attracting periodic point called the \emph{center}; this point belongs to $A'$. By Lemma~\ref{lem:JulFatDecomp} every non-periodic component of $\Fatou(f)$ is preperiodic because it consists of points escaping to $S^2\setminus \mathcal M$.
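As an illustration, for the basilica map $f(z)=z^2-1$ pictured earlier, one can verify symbolically (a sympy sketch, purely illustrative) that both fixed points are repelling, hence lie in $\Julia(f)$, while the critical $2$-cycle $0\mapsto-1\mapsto0$ is superattracting:

```python
import sympy as sp

z = sp.symbols('z')
f = z**2 - 1            # the basilica map
df = sp.diff(f, z)      # f'(z) = 2z

# fixed points solve z^2 - 1 = z, i.e. z = (1 +- sqrt(5))/2
fixed_points = sp.solve(sp.Eq(f, z), z)
multipliers = [abs(complex(df.subs(z, p))) for p in fixed_points]

# both multipliers |2p| exceed 1, so both fixed points are repelling
# and therefore belong to the Julia set
assert all(m > 1 for m in multipliers)

# the 2-cycle {0, -1} contains the critical point 0, so its multiplier
# (f^2)'(0) = f'(0) * f'(-1) vanishes: the cycle is superattracting
cycle_multiplier = df.subs(z, 0) * df.subs(z, -1)
assert cycle_multiplier == 0
```

The two Fatou components around $0$ and $-1$ are the periodic components whose centers form the attracting cycle.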
We may now deduce that every component of $\Fatou(f)$ is an open topological disc. Consider first a periodic connected component $O$ of $\Fatou(f)$ and let $a\in A'\cap O$ be its center. There is a conjugacy from $O$ to the open disk $\mathbb D\subset\C$ such that the first return map $f^n\colon O\to O$ is conjugate to the map $z^{\deg_a(f^n)}$. We write $\deg_O(f)\coloneqq\deg_a(f)$, and call the conjugacy $\phi_O\colon O\to\mathbb D$ a \emph{B\"ottcher coordinate}. We may then determine coordinates on every Fatou component in the forward and backward orbit of $O$ in such a manner that, for every Fatou component $U$, the restriction $f\restrict U\colon U\to f(U)$ is conjugate to a monomial map by $\phi_{f(U)}\circ f\restrict U=z^{\deg_U(f)}\circ\phi_U$. We use B\"ottcher coordinates to define, in every Fatou component $O$, \emph{internal rays} $R_{O,\theta}\subset O$ by \[R_{O,\theta}=\phi_O^{-1}\{r e^{2i\pi\theta}\mid r<1\}. \] These rays are mapped to each other by $f(R_{O,\theta})=R_{f(O),\deg_O(f)\theta}$. The following statement follows immediately from the existence of B\"ottcher coordinates: \begin{lem}\label{lem:Bottcher extension} Let $f\colon(S^2,A)\selfmap$ be a B\"ottcher expanding map, and let $a\in A$ be a degree-$d$ attracting point. Let $F$ denote its immediate basin of attraction; then $F$ is a connected component of the Fatou set of $f$. Let $\overline D$ denote the compactification of $S^2\setminus\{a\}$ by adding a circle of directions in place of $a$; then $f$ extends continuously to a self-map of $\overline D$, such that the boundary circle is mapped to itself by $z\mapsto z^d$.\qed \end{lem} \subsection{Canonical obstructions and decompositions} We shall make essential use of Pilgrim's \emph{canonical decomposition}. Let $f\colon (S^2,A)\selfmap$ be a Thurston map.
Then there is an induced pullback map $f^*$ on the Teichm\"uller space $\mathscr T_A$ of complex structures on $(S^2,A)$, see~\cite{bartholdi-dudko:bc2}*{\S\ref{bc2:ss:examples}}; for a given complex structure $\eta$, the pullback $f^*\eta$ is defined such that the map $f\colon (S^2,A,f^*\eta)\to(S^2,A,\eta)$ is holomorphic. The map $f$ is combinatorially equivalent to a rational map if and only if $f^*$ has a fixed point. Let $\gamma$ be an essential simple closed curve and let $\eta\in \mathscr T_A$ be a complex structure. The length $\langle\gamma , \eta\rangle$ of $\gamma$ with respect to $\eta$ is defined as the length of the unique geodesic in $(S^2,A,\eta)$ that is homotopic to $\gamma$. This defines an analytic function $\langle\gamma,{-}\rangle\colon \mathscr T_A\to \R$. \begin{defn}[Canonical obstruction~\cite{pilgrim:combinations}*{Theorem~1.2}] Let $f\colon (S^2,A)\selfmap$ be a Thurston map, and consider $\eta \in \mathscr T_A$. The \emph{canonical obstruction} $\CC_f$ is the set of homotopy classes of essential simple closed curves $\gamma$ such that $\langle\gamma,f^{n*}\eta\rangle$ tends to $0$ as $n$ tends to infinity. \end{defn} It follows from the following theorem that the definition of $\CC_f$ does not depend on $\eta$. It was proved by Kevin Pilgrim that $\CC_f$ is a multicurve. \begin{thm}[Pilgrim, \cite{pilgrim:cto}]\label{th:kevin_can_obstr} If $\CC_f$ is empty and the degree of $f$ is at least $2$, then $f$ is combinatorially equivalent to a rational map.\qed \end{thm} For $f$ a Thurston map, its \emph{canonical decomposition} is the collection of spheres and annuli obtained by cutting $f$ along the canonical obstruction $\CC_f$. Recall that the \emph{small Thurston maps} are the return maps of $f$ to the small spheres in a decomposition.
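In computations, multicurves are conveniently tracked through Thurston's linear transformation, whose $(\gamma_i,\gamma_j)$ entry sums $1/\deg$ over the preimage components of $\gamma_j$ isotopic to $\gamma_i$; a multicurve is an obstruction when the spectral radius of this matrix is at least $1$. A toy sympy computation (the matrices below are illustrative, not attached to a specific map):

```python
import sympy as sp

# Thurston linear transformation of a 2-curve multicurve (illustrative):
# gamma_2 has one preimage component isotopic to gamma_1, of degree 1,
# and gamma_1 has one preimage component isotopic to gamma_2, of degree 2
T = sp.Matrix([[0, 1],
               [sp.Rational(1, 2), 0]])
radius_T = max(abs(complex(lam)) for lam in T.eigenvals())
assert radius_T < 1      # spectral radius 1/sqrt(2): not an obstruction

# a cycle of degree-1 preimages (a Levy cycle) gives spectral radius 1,
# hence an obstruction
L = sp.Matrix([[0, 1],
               [1, 0]])
radius_L = max(abs(complex(lam)) for lam in L.eigenvals())
assert radius_L == 1
```

The second matrix is the degenerate situation excluded by the Levy-free hypothesis in the Expansion Criterion.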
\begin{thm}[Pilgrim, Selinger~\cite{selinger:augts}]\label{thm:CanDecomp} Every small Thurston map in the canonical decomposition of $f$ is either \begin{itemize} \item combinatorially equivalent to a rational non-Lattes post-critically finite map; \item doubly covered by a torus endomorphism; \item \phantombullet{or}a homeomorphism. \end{itemize} \end{thm} Theorem~\ref{thm:CanDecomp} was conjectured by Kevin Pilgrim (who also proved a slightly weaker version of this theorem, see~\cite{pilgrim:combinations}*{page~13}) and was eventually proved by Nikita Selinger. \subsection{Construction of the model}\label{subsec:ProofOfThmExpCrMain} We give here the proof of the implication~\eqref{thm:ExpCr:5}$\Rightarrow$\eqref{thm:ExpCr:1}, by constructing a negatively curved Riemannian metric on $X\simeq S^2$ and an expanding map $F\colon X\selfmap$ isotopic to $f$; see Figure~\ref{Fig:ThExpCr} for an illustration of the construction. \begin{figure} \begin{tikzpicture} \def\leftsphere#1#2{+(0,0.1) .. controls +(180:#1/2) and +(0:#1/2) .. +(-#1,#2) .. controls +(180:#2) and +(180:#2) .. +(-#1,-#2) .. controls +(0:#1/2) and +(180:#1/2) .. +(0,-0.1) ++(0,0)} \def\rightsphere#1#2{+(0,0.1) .. controls +(0:#1/2) and +(180:#1/2) .. +(#1,#2) .. controls +(0:#2) and +(0:#2) .. +(#1,-#2) .. controls +(180:#1/2) and +(0:#1/2) .. +(0,-0.1) ++(0,0)} \path[fill][gray!30] (-1.5,0) \leftsphere{2.5}{0.8} (1.5,0) \rightsphere{2.5}{0.8}; \path[fill][white] (-1.4,0) circle (2mm) (1.4,0) circle (2mm); \path[draw][very thick] (-1.5,0) node [xshift=-22mm] {$S_1$} \leftsphere{2.5}{0.8} +(0,0.1) -- +(3,0.1) +(0,-0.1) -- node [below] {$T$} +(3,-0.1) (1.5,0) node[xshift=22mm] {$S_2$} \rightsphere{2.5}{0.8}; \path[fill][gray!30] (-1.8,2.5) \leftsphere{2.2}{0.8} ++(3.6,0) \rightsphere{2.2}{0.8}; \path[fill][gray!30] (-0.8,2.5) +(0,-0.1) .. controls +(0:0.4) and +(180:0.4) .. +(0.8,-0.5) .. controls +(0:0.4) and +(180:0.4) .. +(1.6,-0.1) -- +(1.6,0.1) .. controls +(180:0.4) and +(0:0.4) ..
+(0.9,0.5) -- +(0.9,0.6) -- +(0.7,0.6) -- +(0.7,0.5) .. controls +(180:0.4) and +(0:0.4) .. +(0,0.1) {[rotate around={-90:+(0.7,-0.1)}] \leftsphere{0.5}{0.3}}; \path[fill][white] (-1.6,2.5) circle (4mm) (-1.1,2.5) circle (4mm) (1.1,2.5) circle (4mm) (1.6,2.5) circle (4mm) (0,3.1) circle (1.2mm); \path[draw][very thick] (-1.8,2.5) node [xshift=-19mm] {$S'_1$} \leftsphere{2.2}{0.8} +(0,0.1) -- +(1,0.1) +(0,-0.1) -- +(1,-0.1) ++(1,0) node [xshift=8mm] {\small $S'_2$} +(0,0.1) .. controls +(0:0.4) and +(180:0.4) .. +(0.7,0.5) -- +(0.7,0.6) {node [xshift=1mm,yshift=4mm] {\tiny $S'''_1$} [rotate around={-90:+(0.7,-0.1)}] \leftsphere{0.5}{0.3}} +(0.9,0.6) -- +(0.9,0.5) .. controls +(0:0.4) and +(180:0.4) .. +(1.6,0.1) +(0,-0.1) .. controls +(0:0.4) and +(180:0.4) .. +(0.8,-0.5) .. controls +(0:0.4) and +(180:0.4) .. +(1.6,-0.1) ++(1.6,0) +(0,0.1) -- +(1,0.1) +(0,-0.1) -- +(1,-0.1) ++(1,0) node [xshift=19mm] {$S''_1$} \rightsphere{2.2}{0.8}; \path[draw] (-1.3,2.4) -- node[left] {$2:1$} (-0.4,0.1) \fwdarrowonline{0.5}; \path[draw] (1.3,2.4) -- node[right] {$2:1$} (0.4,0.1) \bckarrowonline{0.5}; \end{tikzpicture} \caption{Illustration to the Proof of Theorem~\ref{thm:ExpCr}. The map $f$ is indicated by the arrows, and sends $S'_1,S''_1,S'''_1$ to $S_1$ and $S'_2$ to $S_2$. We first define a metric on the periodic small spheres ($S_1$), then on the preperiodic small spheres ($S_2$), and finally on the annuli between them. This map could be Pilgrim's ``blow-up an arc'' map, see~\cite{bartholdi-dudko:bc0}*{\S\ref{bc2:ss:pilgrim}}.} \label{Fig:ThExpCr} \end{figure} \subsubsection{Setup} \label{sss:Prf:setup} The space $X$ is constructed by \emph{plumbing} between cusped spheres: we enlarge the cusps to make them almost cylindrical, and then truncate them and glue them on their common boundary. Three variables dictate the construction: first a parameter $w\llcurly 1$ is chosen; the perimeters of the ``cylindrical parts'' will lie between $\pi w$ and $2\pi w$. 
Then a parameter $\ell\ggcurly 1/w$ is chosen; the cylindrical parts will all have length between $\ell$ and $2\ell$. Finally, a parameter $\epsilon\llcurly 1/\ell$ is chosen; it will be a final adjustment to the construction that makes the curvature bounded by $-\epsilon^2$ from above. The map $F$ is very close to a rational map on each small sphere and is very close to an affine map on each cylinder connecting small spheres. After the main part of the construction is carried out, we obtain a metric $\mu$ that is weakly expanded by $F$ (namely, $F$ does not contract $\mu$) and such that a certain iterate of $F$ expands $\mu$. In Lemma~\ref{lem:PertMetric} we perturb $\mu$ infinitesimally to make $F$ expanding. \subsubsection{The canonical decomposition} Throughout this section, we let $\CC=\CC_f$ denote the canonical obstruction of the Thurston map $f\colon(S^2,A)\selfmap$, and we denote by $\Sph$ the collection of small spheres (components of $S^2\setminus \CC$) of the canonical decomposition; so that \[S^2=\bigsqcup_{\gamma\in\CC}\gamma\cup\bigsqcup_{S\in\Sph}S. \] As in~\cite{bartholdi-dudko:bc2}, for $S\in \Sph$ we denote by $\widehat S$ the corresponding topological sphere marked by the image of $A\cap S$ and the boundary curves. The map $f$ induces a map $f\colon\Sph\selfmap$, and for each $S\in\Sph$ a map $f\colon \widehat S\to \widehat{f(S)}$, well-defined up to isotopy, see~\cite{bartholdi-dudko:bc2}*{Lemma~\ref{bc2:lem:SmallMaps}}. Recall that we assumed that $f$ is a non-torus map: a map that is not doubly covered by a torus endomorphism. \begin{lem}\label{lem:ExpThmFirstObserv} If $f\colon(S^2,A)\selfmap$ is a Levy-free non-torus Thurston map and $\CC_f$ is non-empty, then $\CC_f$ is an anti-Levy Cantor multicurve, and all small Thurston maps in the canonical decomposition of $f$ are equivalent to non-torus rational maps. \end{lem} \begin{proof} Let us show that $\CC_f$ does not contain a primitive unicycle.
Since a non-Levy unicycle has spectral radius strictly less than $1$, such a (primitive) cycle cannot belong to $\CC_f$. Further, all small Thurston maps in $R(f,A,\CC)$ are non-torus and non-homeomorphisms, because torus and homeomorphism cycles can be attached only via Levy cycles, homeomorphisms and torus maps having no attracting periodic points. Theorem~\ref{thm:CanDecomp} concludes the proof. \end{proof} \subsubsection{Metrics on small spheres} Consider a cycle of periodic spheres $S\to f(S)\to\dots\to f^p(S)=S$. Let us denote by $\widehat S_i=\widehat{f^i(S)}$ the topological sphere associated with $f^i(S)$, and by $A_i$ the marked set of $\widehat S_i$. By Lemma~\ref{lem:ExpThmFirstObserv} the first return map $f^p\colon \widehat S_i\selfmap$ is isotopic rel $A_i$ to a rational map. Therefore, let us now assume that each $\widehat S_i$ is a marked complex sphere and each $f_i\colon \widehat S_i\to \widehat S_{i+1}$ is a rational map. Choose next an orbifold structure $\ord_i\colon A_i\to \{1,2,\dots, \infty\}$ such that $f^p\colon (\widehat S_1,\ord_1)\selfmap$ is a partial self-covering but is not a covering. We also choose $\ord_i$ in such a way that $\ord_i(x)=\infty$ if and only if $x$ is in a periodic critical cycle or $x$ is the image of a boundary curve. We endow each $(\widehat S_i,\ord_i)$ with its natural hyperbolic metric. Then every $f\colon \widehat S_i\to \widehat S_{i+1}$ is either expanding (if it is not a covering) or an isometry (if it is a covering); and $f^p\colon \widehat S_1\selfmap$ is expanding. Similarly, we endow each preperiodic sphere $\widehat S'$, say marked by $A'$, with a hyperbolic metric such that $f\colon \widehat S'\to \widehat {f(S')}$ is either an isometry or an expanding map. The orbisphere structure $\ord'\colon A'\to \{1,2,3,\dots, \infty\}$ is chosen so that $\ord'(x)=\infty$ if and only if $x$ is the image of a boundary curve.
\subsubsection{Slight adjustment at cusps}\label{sss:AdjAtCusps} For a periodic cycle $f\colon\widehat S_i\to \widehat S_{i+1}$ as above consider a point $x\in \widehat S_i$ that is a cusp with respect to the hyperbolic metric. Then a small neighbourhood of $x$ is foliated by \emph{horocycles} --- curves perpendicular to geodesics starting at $x$. We shall locally adjust the dynamics at cusps and rescale the hyperbolic metric there by a factor of $1+\delta$ for small $\delta$, in such a way that horocycles form an invariant foliation of the new dynamics. Suppose that $x\in S_1$ is periodic, say with period $q$. Let $\mathbb D^*\coloneqq \mathbb D\setminus \{0\}$ be the unit disc punctured at $0$. Since $x$ is a cusp, the universal cover $\mathbb D\to (S_1,\ord_1)$ factors as $\mathbb D\to \mathbb D^*\overset{\pi_x}{\longrightarrow} (S_1,\ord_1)$ with $\pi_x$ extended to $0$ by $\pi_x(0)=x$. Denote by $H^*_r \subset \mathbb D^*$ the circle, i.e.~horocycle, centered at $0$ with Euclidean radius $r$. For a sufficiently small $r$ the image $H_r\coloneqq \pi_x(H^*_r)$ is a small simple closed curve around $x\in \widehat S_1$. Let $d>1$ be the local degree of $f^q$ at $x$ and let $U\subset S_1$ be the Fatou component containing $x$. Choose a B\"ottcher function $B\colon U\to \mathbb D$ conjugating $f^q\colon U\selfmap$ to $z\mapsto z^d\colon \mathbb D\selfmap$. Denote by $E'_r \subset \mathbb D$ the circle centered at $0$ with Euclidean radius $r$. Then $E_r\coloneqq B^{-1}(E'_r)$ is an \emph{equipotential} of $U$. By construction, $f^q(E_r)=E_{r^d}$. Since $\pi_x$ and $B^{-1}$ are conformal at $0$ there is a $\tau>0$ such that $H_r$ approximates $E_{\tau r}$: for a sufficiently small $r$ the horocycle $H_r$ lies in the $O(r^2)$-neighbourhood of $E_{\tau r}$ and, moreover, the hyperbolic length of $E_{\tau r}$ is $-1/\log(r)+O(-r/\log r)$. (We recall that $-1/\log(r)$ is the hyperbolic length of $H_r$.)
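The computations around cusps use the hyperbolic metric of $\mathbb D^*$, which in the conformal form entering~\eqref{eq:polarkappa} is $\sigma(r)=-1/(r\log r)$. As a sanity check (a sympy sketch; the helper function is ours), its curvature is indeed $-1$, and the same routine recovers curvature $+1$ for the spherical metric $2/(1+r^2)$ in a stereographic chart:

```python
import sympy as sp

r = sp.symbols('r', positive=True)

def polar_curvature(sigma):
    """Gaussian curvature of the rotationally invariant conformal metric
    sigma(|z|) |dz|, via kappa = -((log sigma)'' + (log sigma)'/r) / sigma^2."""
    logs = sp.log(sigma)
    return sp.simplify(-(sp.diff(logs, r, 2) + sp.diff(logs, r)/r) / sigma**2)

# hyperbolic metric of the punctured disc (0 < r < 1): curvature -1
assert polar_curvature(-1/(r*sp.log(r))) == -1

# spherical metric in a stereographic chart: curvature +1
assert polar_curvature(2/(1 + r**2)) == 1
```

The same helper can be applied to the rescaled metrics of the next paragraph to confirm, for a concrete choice of the cutoff function, that the curvature stays negative.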
Choose now a small constant $\delta>0$ and a smooth function $t\colon \R_{>0} \to \R_{>0}$ with $t(r)=1$ for $r\ge1$ and $t(r)=1+\delta$ for $r<1/d$, such that the rescaled metric $-t(r)/(r\log r)$ still has negative curvature on $\mathbb D^*$. It follows from~\eqref{eq:polarkappa} that $-t(r/R)/(r\log r)$ also has negative curvature for all $R>1$. For a sufficiently large $R$ we replace the hyperbolic metric around $x$ by $-t(r/R)/(r\log r)$ and we adjust the dynamics of $f^q$ around $x$ such that $f^q$ maps $H_{r}$ to $H_{r/d}$ for all $r\le 1/R$. The adjustment is possible because $f^q$ has expansion bounded away from $1$ around $H_{1/R}\approx E_{\tau/R}$ with respect to the rescaled metric. We now spread the adjusted dynamics along the orbit of $x$ as well as to all preperiodic preimages of $x$ that are cusps with respect to the hyperbolic metric. We perform the same operation at all cusps. \begin{figure} \centering\begin{tikzpicture} \begin{semilogxaxis}[xlabel=$r$,ylabel=$\sigma(r)$,ymin=2,ymax=10,log basis x={2}] \addplot[domain=0.05:0.16,color=blue] {1/(2*x*cos(0.1*(ln(x))))}; \addplot[domain=0.11:0.9,color=red] {-1/(x*ln(x))}; \end{semilogxaxis} \end{tikzpicture} \caption{The conformal factor $\sigma$ on the widened cusps}\label{fig:sigma} \end{figure} \subsubsection{Plumbing}\label{sss:Plumbing} Let $S_1$ and $S_2$ be two hyperbolic small spheres with respective cusps at $x_1\in S_1$ and $x_2\in S_2$. We now describe an operation, \emph{plumbing}, that truncates $S_1$ and $S_2$ at $x_1$ and $x_2$ along their horocycles of perimeter $\approx 2\pi w$ and joins $S_1,S_2$ along an almost flat cylinder with length $\approx \ell$ such that the resulting sphere still has a negatively curved metric. Since $S_1$ and $S_2$ are covered by punctured discs, it is sufficient to describe the operation between two copies $\mathbb D^*_1, \mathbb D^*_2$ of the unit disc punctured at $0$.
The hyperbolic metric on the unit disc punctured at $0$ is written as $\sigma(|z|)|\dd z|$ with $\sigma(r)=-1/(r\log r)$. Replace $\{0<|z|<1\}$ by $\{\exp(-w\ell/2)\le|z|<1\}$, and give it a metric $\sigma(|z|)|\dd z|$ with \[\sigma(r)\approx\max\Big\{\frac1{w r\cos(\epsilon(\log(r)-w\ell/2))},\frac{-1}{r\log r}\Big\}; \] see Figure~\ref{fig:sigma}. On that figure, the blue part $1/(w r\cos(\epsilon(\log(r)-w\ell/2)))$ is a piece of the one-sheeted hyperboloid of curvature $-\epsilon^2$, as can be readily checked using~\eqref{eq:polarkappa}, with its unique minimal closed curve of length $2\pi w$ appearing at radius $r=\exp(-w\ell/2)$; this piece of hyperboloid has length $\approx\ell/2$. The red part $-1/(r\log r)$ is the original metric on the cusp. At $\log r\approx-w\ell/2$ we replace $\sigma$ by a smooth function that is slightly bigger than $\sigma(\exp(-w\ell/2))$; this can be done so that $\log(\sigma)''\ggcurly 1$ at $\log r\approx -w\ell/2$; thus we guarantee that the new function still has negative curvature by~\eqref{eq:polarkappa}. After the metrics on both cusps have been modified in the above manner, they can be attached along their common boundary curve $\{|z|=\exp(-w\ell/2)\}$, which is geodesic (it corresponds to the core curve of the hyperboloid). The result is a space consisting of two truncated discs with curvature $-1$ attached by a cylinder of curvature $-\epsilon^2$, perimeter $\approx2\pi w$ and length $\approx\ell$. \subsubsection{Global metric}\label{sss:GlobMetr} We now perform the plumbing between the metrized small spheres in $\Sph$. The following proposition will allow us to endow the annuli of the canonical decomposition with an expanding map.
\begin{prop}\label{prop:CombGl} There is an assignment \[\CC\to(1,2)\times(1,2),\qquad \gamma\mapsto(w_\gamma,\ell_\gamma) \] (where $w_\gamma$ is the ``width'' of the annulus corresponding to $\gamma$ and $\ell_\gamma$ is its ``length'') such that \begin{itemize} \item if for a non-peripheral curve $\delta\in f^{-1}(\CC)$ the map $f\colon \delta\to f(\delta)$ is one-to-one, then $w_{f(\delta)}>w_\delta$; \item if for a curve $\gamma\in\CC$ there is a unique non-peripheral curve $\delta\in f^{-1}(\CC)$ isotopic to $\gamma$, then $\ell_{f(\delta)}>\ell_\gamma$. \end{itemize} \end{prop} \begin{proof} We first note that only an ordering of the $(w_\gamma)$ and $(\ell_\gamma)$ is required; once such an ordering is found, they can easily be embedded in the interval $(1,2)$. If no assignment $\gamma\mapsto w_\gamma$ exists, then there is a cyclic sequence $\gamma_0,\gamma_1,\dots,\gamma_n=\gamma_0$ of curves in $\CC$ along which the first constraint forces $w_{\gamma_{i+1}}>w_{\gamma_i}$ for all $i$. This means that the curves $\gamma_0,\dots,\gamma_{n-1}$ form a Levy cycle. This contradicts the assumption that $\CC_f$ is an anti-Levy multicurve, by Lemma~\ref{lem:ExpThmFirstObserv}. If no assignment $\gamma\mapsto\ell_\gamma$ exists, then there is a cyclic sequence $\gamma_0,\gamma_1,\dots,\gamma_n=\gamma_0$ of curves in $\CC$ along which the second constraint forces $\ell_{\gamma_{i+1}}>\ell_{\gamma_i}$ for all $i$. This means that the curves $\gamma_0,\dots,\gamma_{n-1}$ form a primitive unicycle, and this contradicts the assumption that $\CC_f$ is a Cantor multicurve, again by Lemma~\ref{lem:ExpThmFirstObserv}. \end{proof} We scale the solutions $(\ell_\gamma),(w_\gamma)$ given by Proposition~\ref{prop:CombGl} so that $\ell \le \ell_\gamma \le 2\ell$ and $w/2\le w_\gamma\le w$, for the parameters $\ell\ggcurly 1/w\ggcurly 1$ of the construction in~\S\ref{sss:Prf:setup}. We consider in turn every small sphere $S\in\Sph$ containing a curve $\gamma\in\CC$ on its boundary. There is then another small sphere $S'\in\Sph$ also containing $\gamma$ on its boundary.
For $\widehat S$ and $\widehat{S'}$, these boundary points appear as cusps in the scaled hyperbolic metrics that were assigned to them in~\S\ref{sss:AdjAtCusps}. We truncate the cusps on $\widehat S,\widehat {S'}$ along horocycles and attach $\widehat S,\widehat {S'}$ through an almost flat hyperboloid as described in~\S\ref{sss:Plumbing}. The hyperboloid has curvature $\approx-\epsilon^2$, perimeter $\approx2\pi w_\gamma$ and length $\approx\ell_\gamma$. We have, in this manner, constructed a metric sphere $X\simeq (S^2,A)$ by plumbing together truncated small spheres in $\Sph$. For every $S\in \Sph$ we denote by $S^\circ$ the image of $\widehat S$ in $X$. We also denote by $\Ann$ the set of almost flat annuli. We stress that $\Ann$ is in bijection with $\CC$. Suppose $B\in \Ann$ is an annulus connecting small spheres $S^\circ_1$ and $S^\circ_2$. Let $B_1$ be the subannulus of $B$ consisting of points in $B$ that are closer to $S_1^\circ$ than to $S_2^\circ$. Since $B_1$ is constructed by enlarging the metric on $\widehat S_1\setminus S^\circ_1$, we can view $B_1\hookrightarrow \widehat S_1\setminus S^\circ_1$; we will refer to this map as \emph{natural}. By construction, we have: \begin{lem}\label{lem:ExpNatMap} The natural map $B_1\to \widehat S_1\setminus S^\circ_1$ is contracting. \qed \end{lem} \subsubsection{Dynamics at small spheres} Recall that $X$ consists of truncated small spheres and of almost flat cylinders connecting truncated spheres. Consider first a small sphere $S\in \Sph$ and its $f$-image $S'$. We have a rational map $f_S\coloneqq f\colon\widehat S\to \widehat{S'}$. For all points in $S^{\circ}$ with $f_S$-image in $S'^{\circ}$ we set $F$ to be $f_S$. The remaining points are bounded by $f_S^{-1}(\partial S'^\circ)$. We now extend $F$ to $S^\circ$. Consider a curve $\gamma\in f_S^{-1}(\partial S'^\circ)$. Then either $\gamma$ is non-essential rel $A$, or $\gamma$ is isotopic rel $A$ to a curve in $\CC$.
In the first case $\gamma$ bounds a peripheral disc $U$ containing at most one point in $A$. By construction, see~\S\ref{sss:GlobMetr}, there is a very long almost flat annulus $B\in \Ann$ attached to $F(\gamma)$. Since $f_S\restrict U$ is expanding, we may extend $F$ to $U$, see Lemma~\ref{lem:ExpNatMap}, in such a manner that $F\restrict U$ is expanding. If there is an $a\in A\cap U$, then we require that $F(a)=f(a)$ and that $F$ maps a neighbourhood of $a$ analytically (i.e.~locally conformally, except at $a$ where the map need not be an isomorphism) to a neighbourhood of $f(a)$. In the second case, $\gamma$ is isotopic to a curve, say $\gamma_2$, in $\partial S^\circ$. Denote by $U$ the annulus between $\gamma$ and $\gamma_2$. Let $B\in \Ann$ be the almost flat annulus attached to $F(\gamma)$. We define $F$ on $U$ to be the composition of $f_S$ with the inverse of the natural map from $B$ to $\widehat{S'}\setminus S'^\circ$. In this manner we construct an expanding extension of $F$ to $U$, see Lemma~\ref{lem:ExpNatMap}. \subsubsection{Dynamics at annuli} So far $F$ is defined on small spheres; let us assume that $F\restrict S =f\restrict S$ for every $S\in \Sph$. We now extend $F$ to $X\simeq (S^2,A)$ in an expanding manner so that $F\simeq f$. Consider an annulus $B\in \Ann$. Suppose that $f$ maps $B$ to a sequence of annuli and spheres $B_1,S_1,B_2,S_2,\dots, B_t$ with $B_i\in \Ann$ and $S_i\in \Sph$. Consider two cases. Suppose first $t=1$. Then $\ell_{B}-\ell_{B_1}\ggcurly 1$ because the values $\ell_{B}> \ell_{B_1}$ from Proposition~\ref{prop:CombGl} are rescaled so that $\ell_{B},\ell_{B_1}\ggcurly 1$. Also, either $w_{B}>w_{B_1}$, or $w_{B}>w_{B_1}/2$ in case $f\restrict{B}$ has degree greater than $1$.
Therefore, we can map $B$ in an expanding manner to $B_1$ minus a small (i.e.~of scale $\llcurly \ell$) neighbourhood of $\partial B_1$ (which is already in the image of small spheres) so that the obtained map $F$ is isotopic to $f$ rel $\partial B$. Indeed, identify $B$ and $B_1\setminus (\text{small neighbourhood of }\partial B_1)$ with $\mathbb S^1\times [0,1]$, recalling that $B,B_1$ are almost flat. Then set $F$ to be $(x,y) \mapsto (d x +m y, y )$, where $d\ge 1$ is the degree of $f\restrict B$ and $m \ge 0$ is the twisting parameter. Since $m$, $d$ are independent of $\ell \ggcurly 1 \ggcurly w$, the map $F\restrict B$ is expanding. Suppose next $t>1$. Subdivide $B$ into $B'_1,S'_1,B'_2,\dots,S'_{t-1},B'_t$ so that each $S'_i$ is an annulus of length $\approx w$ and each $B'_j$ is an annulus of length $\approx \ell/t$. Again, since $\ell \ggcurly 1 \ggcurly w$ we can define $F\restrict B \simeq f\restrict B$ in such a manner that $F$ expands $S'_i$ and $B'_j$ into $S_i$ and $B_j$ respectively. \subsubsection{Perturbation of the metric} We have constructed a metric space $(X,\mu)$ and a map $F\colon X\selfmap$ which weakly ($\ge$) expands the metric, and such that an iterate of $F$ is expanding. \begin{lem}\label{lem:PertMetric} There is a small perturbation $\mu'$ of $\mu$ such that $F\colon X\selfmap$ expands $\mu'$. \end{lem} \begin{proof} Let $F^p\colon X\selfmap $ be an expanding iterate of $F$. By construction, $\mu$ is a smooth Riemannian metric such that $F$ is conformal (rel $\mu$) in a small neighbourhood of $A$. Denote by $A^\infty$ the set of periodic critical cycles of $F\restrict A\selfmap$. Recall that $A^\infty$ is the set of points at infinite distance from $X\setminus A^\infty$ for $\mu$. We also recall that cone points of $\mu$ belong to $A\setminus A^\infty$. For $i\le p-1$ consider the pulled-back metric $\mu_i\coloneqq (F^{-i})^*\mu$.
Then $\mu_i$ is a Riemannian metric with cones in $F^{-i}(A\setminus A^\infty)$ and singularities in $F^{-i}(A^\infty)$. Moreover, $F$ weakly expands $\mu_i$. Write $\mu_i(z)$ as a conformal metric $\sigma_i(z) |\dd z|$ for $z\in X$ written in complex charts. For a sufficiently large $K>1$ the inequality $\sigma_i(z)>K$ holds only in a small neighbourhood of $F^{-i} (A)$. Let $A^p\supset A^\infty$ be the set of periodic points in $A$. For sufficiently large $K$ and for $z$ close to $F^{-i}(A)\setminus A^p$ we define $\bar \sigma_i(z)\approx\min\{ \sigma_i(z), K\}$ so that $F$ still weakly expands the truncated metric $\bar \mu_i(z)= \bar \sigma_i(z) |\dd z|$. We leave $\sigma_i$ unchanged in a neighbourhood of $A^p$. We claim that for a sufficiently small $\varepsilon>0$ the quadratic form \[\mu'\coloneqq \mu+(\bar\mu_1 +\dots + \bar\mu_{p-1})\varepsilon\] is positive definite (i.e.~$\mu'$ is a metric) and that $F$ expands $\mu'$. Indeed, away from $A^p$ all $\bar \mu_{i}$ are finite metrics. Therefore, if $\varepsilon$ is sufficiently small, then $\mu'$ is positive definite away from $A^p$; so $\mu'$ is a metric. Since $F$ is conformal in a small neighbourhood of $A^p$, all $\bar \mu_i$ and $\mu$ are conformal metrics in common charts. Hence $\mu'$ is positive definite as a sum of conformal metrics. Since $F^p$ is expanding, $F$ expands at least one of $\mu, \bar\mu_1,\dots,\bar\mu_{p-1}$. Therefore, $F$ expands $\mu'$. \end{proof} \subsection{Isotopy of expanding maps} Let $f,g\colon (S^2,A)\selfmap$ be two expanding maps. Denote by $\Fatou(f)$ and $\Fatou(g)$ the Fatou sets of $f$ and $g$ respectively. We may partially order the maps $f,g$ by declaring that $g$ is ``smaller than'' $f$ if $A\cap\Fatou(g)\subset A\cap\Fatou(f)$. In this sense, small maps are more expanding, and B\"ottcher maps are maximal. \begin{lem}\label{lem:ConjBetwExpMaps} Let $f,g\colon (S^2,A)\selfmap$ be two expanding maps with $A\cap\Fatou(f)=A\cap\Fatou(g)$.
Then $f$ and $g$ are conjugate by $h\simeq \one$ if and only if $f\simeq g$. \end{lem} Moreover, if $\#A\ge 3$, then $h$ is unique, see~\cite{bartholdi-dudko:bc3}*{\S\ref{bc3:ss:ghost}}. \begin{proof} If $f,g$ are conjugate by $h\simeq\one$, then clearly $f\simeq g$. Conversely, we show that if $f, g$ are isotopic, then they are conjugate by $h\simeq \one$. This is an application of the pullback argument. Choose $h_0,h_1\simeq \one$ such that $h_1f = g h_0$. We adjust $h_0$ so that it respects B\"ottcher coordinates around periodic points in $A\cap\Fatou(f)$. Thus $h_0$ is equal to $h_1$ in a small neighbourhood of $A\cap\Fatou(f)$. Let inductively $h_n$ be the lift of $h_{n-1}$; i.e.\ $h_n f = g h_{n-1}$. By construction, all $h_n$ coincide in a small neighbourhood of $A\cap\Fatou(f)$. Since $f$ is expanding away from $A\cap\Fatou(f)$, the sequence $h_n$ tends to a continuous map $h_\infty\colon (S^2,A)\selfmap$ satisfying $h_\infty f = g h_\infty$. Observe now that we also have $h^{-1}_n g = f h^{-1}_{n-1}$. Since $g$ is expanding away from $A\cap\Fatou(g)$, the sequence $h_n^{-1}$ tends to a continuous map $h'_\infty\colon (S^2,A)\selfmap$ satisfying $h'_\infty g = f h'_\infty$. Clearly, $h'_\infty h_\infty=\one$; i.e.\ $h_\infty$ is a homeomorphism. \end{proof} For a Thurston map $f\colon (S^2,A)\selfmap$, a \emph{Levy arc} is a non-trivial path, with (possibly equal) starting and ending point in $A$, that is isotopic rel $A$ to one of its iterated lifts. Let $A'$ be a forward-invariant subset of $A$. We say that $A'$ is \emph{homotopically isolated} if there is no Levy arc connecting two points in $A'$. \begin{lem}\label{lem:HomIsolCond} Suppose that $f\colon (S^2,A)\selfmap$ is a B\"ottcher expanding map, that $A'\subset A\cap \Fatou(f)$ is forward invariant, and that $\Fatou'$ is the set of points in $\Fatou(f)$ attracted by $A'$.
Then $A'$ is homotopically isolated if and only if the following properties hold: \begin{enumerate} \item if $O$ is a connected component of $\Fatou'$, then $\overline O$ is a closed topological disc and, moreover, $A\cap\partial O=\emptyset$; \item if $O_1,O_2$ are different connected components of $\Fatou'$, then $\overline O_1\cap \overline O_2=\emptyset$. \end{enumerate} \end{lem} \begin{proof} Suppose first that $A'$ is not homotopically isolated. Let $\ell$ be a Levy arc connecting points $a,b\in A'$. Then $\ell$ can be realized as an inner ray $R_1$ followed by an inner ray $R_2$. If $a\not= b$, then the closures of the Fatou components centered at $a$ and $b$ intersect. If $a=b$ but $R_1\not=R_2$, then the closure of the Fatou component centered at $a$ is not a closed disc, since it is pinched at $a=b$. If $R_1=R_2$, then the landing point of $R_1$ belongs to $A$. In all three cases one of the properties fails. Conversely, let us assume that $A'$ is homotopically isolated. We first verify that $A\cap\partial O=\emptyset$. Indeed, if $a\in A\cap\partial O$, then the internal ray $R$ of $O$ landing at $a$ is preperiodic. For $n$ large enough, the ray $f^n(R)$ is a periodic ray of $f^n(O)$ connecting its center, which is a point in $A'$, to $f^n(a)\in A$. Therefore, a loop starting at the center of $f^n(O)$, then following $f^n(R)$, then circling $f^n(a)$, and then following $f^n(R)$ back to the center of $f^n(O)$ is a Levy arc, contradicting the assumption that $A'$ is homotopically isolated. If the conclusion of the lemma does not hold, then either there is a periodic component $O$ of $\Fatou'$ which is not a disc, and then there are two different inner rays $R_1,R_2$ of $O$ that land together; or there are two periodic connected components $O_1,O_2$ of $\Fatou'$ and respective inner rays $R_1\subset O_1$ and $R_2\subset O_2$ that land together. If $R_1,R_2$ are inner rays of $O$ that land together, then we have $f^n(R_1)\neq f^n(R_2) $ for all $n\ge 0$.
Indeed, otherwise the common landing point of $R_1,R_2$ would be precritical, contradicting $A\cap\partial O=\emptyset$. Furthermore, for all $n$ sufficiently large $f^n(R_1)\cup f^n(R_2) $ is a closed curve, non-null-homotopic rel $A$. Indeed, if $f^n(R_1)\cup f^n(R_2) $ were trivial for some $n$, then $f^m(R_1)\cup f^m(R_2)$ would be trivial for all $m\in \{0,1,\dots, n\}$. Let then $D_m$ be the open disc bounded by $f^m(R_1)\cup f^m(R_2) $ and not intersecting $A$. We see that $f^m\colon D_0\to D_m $ has degree one. Denote by $\phi_m$ the angle in $D_m$ between $f^m(R_1)$ and $f^m(R_2)$ measured at the center $f^m(a)$ of $f^m(O)$. Then $\phi_m=\deg_a(f^m)\phi_0$. Since $\phi_0>0$ because $R_1\neq R_2$, and $\deg_a(f^m)\to\infty$ as $m\to\infty$ because $O$ is a Fatou component, we see that $f^{n}\colon D_0\to D_n$ has degree greater than one for all sufficiently large $n$. In all cases, we obtain for some $n>m \ge0$ an arc $f^n(R_1)\cup f^n(R_2)$ that is isotopic rel $A$ to $f^m(R_1)\cup f^m(R_2)$, so $f^n(R_1)\cup f^n(R_2)$ is a Levy arc. \end{proof} Suppose that $\sim$ is a closed equivalence relation on $S^2$ whose equivalence classes are connected and filled-in (namely, with connected complement) compact subsets of $S^2$ and suppose that not all points of $S^2$ are equivalent. In this case Moore's theorem~\cite{moore:sphere} states that the quotient space $S^2/{\sim}$ is homeomorphic to $S^2$. \begin{cor} Suppose that $f\colon (S^2,A)\selfmap$ is a B\"ottcher expanding map and suppose that $A'\subset A\cap \Fatou(f)$ is a forward invariant homotopically isolated subset of $A$. Let $\Fatou'$ be the set of points in $ \Fatou(f)$ attracted by $A'$. Then the equivalence relation $\sim$ on $S^2$ specified by \[x\sim y\Longleftrightarrow \begin{cases} x=y \;\text{ or }\\ x,y\text{ are in the closure of the same connected component of }\Fatou'; \end{cases} \] is an $f$-invariant equivalence relation satisfying the assumptions of Moore's theorem. View $(S^2,A)/{\sim} \simeq (S^2,A)$.
The induced map $f/{\sim}\colon (S^2,A)/{\sim} \selfmap $ is topologically expanding and is isotopic rel $A$ to $f$. \end{cor} \begin{proof} It is clear that $f/{\sim}$ is topologically expanding. If we view $(S^2,A)/{\sim} \simeq (S^2,A)$, then $f$ and $f/{\sim}$ have isomorphic bisets; therefore $f\simeq f/{\sim}$. \end{proof} \begin{prop}\label{prop:semiconjugate} Let $f,g\colon (S^2,A)\selfmap$ be two expanding maps such that $f\simeq g$ and $A\cap\Fatou(g)\subseteq A\cap\Fatou(f)$. Write $A'\coloneqq A\cap (\Fatou(f)\setminus \Fatou(g))$ and let $\Fatou'$ be the set of points attracted towards $A'$ under iteration of $f$. Then there is a semiconjugacy $\pi\colon (S^2,A)\to (S^2,A)$ from $f$ to $g$ defined by \[\pi(x)=\pi(y)\Longleftrightarrow \begin{cases} x=y \;\text{ or }\\ x,y\text{ are in the closure of the same connected component of }\Fatou'. \end{cases}\] \end{prop} As in Lemma~\ref{lem:ConjBetwExpMaps}, the semiconjugacy $\pi$ is unique. \begin{proof} It is sufficient to prove this proposition in the case when $f$ is a B\"ottcher expanding map. By Lemma~\ref{lem:HomIsolCond} applied to $g$ we see that $A'$ is homotopically isolated. Therefore, again by Lemma~\ref{lem:HomIsolCond} we can collapse $\Fatou'$ to obtain a topologically expanding map $f/\Fatou'$. Since $f/\Fatou'\simeq g$, the claim now follows from Lemma~\ref{lem:ConjBetwExpMaps}. \end{proof} \section{Computability of the Levy decomposition}\label{ss:algo} In this section, we give algorithms that prove Corollaries~\ref{cor:decidegeom} and~\ref{cor:decidelevy}. Recall that a branched covering $f\colon (S^2,P_f,\ord_f)\selfmap$ is doubly covered by a torus endomorphism if and only if $P_f$ contains exactly four points and $\ord_f(P_f)=\{2\}$. Moreover, in this case $f\colon (S^2,P_f,\ord_f)\selfmap$ is itself an orbifold self-covering and its biset $B(f)$ is right principal.
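To keep the criterion just recalled concrete, here is a hedged illustration with a standard example (the specific map chosen here is for illustration only): for the quotient of the torus endomorphism $z\mapsto 2z$ on $\R^2/\Z^2$ by the involution $z\mapsto-z$, the post-critical set consists of the images of the four $2$-torsion points, each of order $2$, and the orbifold Euler characteristic indeed vanishes:
\[
\chi(S^2,\ord_f)\;=\;2-\sum_{a\in P_f}\Bigl(1-\frac1{\ord_f(a)}\Bigr)\;=\;2-4\cdot\Bigl(1-\frac12\Bigr)\;=\;0,
\]
in accordance with the two conditions (four marked points, all of order $2$) above.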
It is easy to see that $G:=\pi_1(S^2,P_f,\ord_f)$ is isomorphic to $\Z^2 \rtimes_{-\one} \Z/2$, and that $B(f)$ is of the following form: for a $2\times 2$ integer matrix $M$ with $\det(M)>1$ and a vector $v\in\Z^2$ denote by $M^v\colon \Z^2 \rtimes_{-\one} \Z/2 \selfmap$ the endomorphism given by a ``cross product structure'' (see~\cite{bartholdi-dudko:bc0}*{Proposition~\ref{bc0:prop:bis:2222}}): \begin{equation}\label{eq:InjEndOfK:bc4} M^{v}(n,0)=(Mn,0) \text{ and }M^{v}(n,1)=(Mn+v,1). \end{equation} Then $B(f)$ is isomorphic to $G$ as a set, with left and right actions given by $g\cdot b\cdot h=M^v(g)b h$ for all $g,b,h\in G$. Moreover, $f\colon (S^2,P_f,\ord_f)\selfmap$ is combinatorially equivalent to the quotient of $z\mapsto M z+v\colon \R^2/\Z^2\selfmap$ by the involution $z\mapsto-z$. Indeed, every endomorphism of $G$ is of the form~\eqref{eq:InjEndOfK:bc4}. \begin{algo}\label{algo:is2cover} \textsc{Given} a sphere biset ${}_G B_G$,\\ \textsc{Decide} whether $B$ is the biset of a map double covered by a torus endomorphism \textsc{as follows:}\\\upshape \begin{enumerate} \item Compute the action of $B$ on peripheral conjugacy classes in $G$. \item Determine the minimal orbisphere structure $(S^2,\ord_B)$ from the action on peripheral conjugacy classes, see~\S\ref{ss:orbisphere}. \item Return \texttt{yes} if the Euler characteristic of $(S^2,\ord_B)$ is $=0$ and $\#B=4$, and \texttt{no} otherwise. 
\end{enumerate} \end{algo} \begin{algo}\label{algo:getparam} \textsc{Given} a sphere biset ${}_G B_G$ of a map double covered by a torus endomorphism,\\ \textsc{Compute} parameters $M,v$ for the torus endomorphism $z\mapsto M z+v$ \textsc{as follows:}\\\upshape \begin{enumerate} \item As in Algorithm~\ref{algo:is2cover}, compute the action of $B$ on peripheral conjugacy classes in $G$, and determine the quotient map $\pi\colon G\to\overline G$ to the minimal orbisphere structure, see~\S\ref{ss:orbisphere}, and the quotient biset ${}_{\overline G}\overline B_{\overline G}$. \item Note that $\overline G$ is of the form $\Z^2\rtimes\Z/2$, where the $\Z^2$ is generated by all even products of peripheral generators and the $\Z/2$ is generated by any chosen generator. \item Since the map corresponding to $B$ is a covering, the biset $\overline B$ is left-free and right-principal. Choose an arbitrary element $\overline x\in\overline B$, thus identifying $\overline B$ with $\overline G$ via $\overline x g\leftrightarrow g$. \item Let $\{g_0,g_1\}$ be a basis of $\Z^2\subset\overline G$, and choose a peripheral generator $h$ of $\overline G$. Write $g_0\overline x=\overline x g_0^a g_1^b$ and $g_1\overline x=\overline x g_0^c g_1^d$ for some $a,b,c,d\in\Z$ which form the matrix $M=(\begin{smallmatrix}a&c\\ b&d\end{smallmatrix})$, and write $h\overline x=\overline x g_0^e g_1^f h$ for some $e,f\in\Z$ forming the vector $v=(\begin{smallmatrix}e\\ f\end{smallmatrix})$. \end{enumerate} \end{algo} The following algorithm determines whether a biset is \Tor. We shall give, in~\cite{bartholdi-dudko:bc3}, a much more efficient encoding of non-post-critical marked periodic points, and improve the speed of Algorithm~\ref{algo:istor}. The present algorithm relies on the following \begin{thm}[\cite{selinger-yampolsky:geometrization}*{Main Theorem~II}]\label{thm:nikita} Let $f$ be a Thurston map that is doubly covered by a torus endomorphism. If $f$ is Levy-free, then it is \Tor. 
\end{thm} \begin{algo}\label{algo:istor} \textsc{Given} a sphere biset ${}_G B_G$ of a map double covered by a torus endomorphism,\\ \textsc{Decide} whether $B$ is the biset of a \Tor\ map \textsc{as follows:}\\\upshape \begin{enumerate} \item Use Algorithm~\ref{algo:getparam} to obtain a $2\times2$ matrix $M$ expressing the linear part of the endomorphism covering $B$, and return \texttt{no} if $M$ has $\pm1$ as eigenvalue. \item Choose a basis $X$ of $B$. Using the action of $B$ on peripheral conjugacy classes, determine those (call them $A'$) that correspond to non-post-critical points. \item Make the finite list of all choices $\widehat{A'}$ of periodic points or preperiodic points on the torus that map to each other as the peripheral conjugacy classes map to each other under $B_*$. \item Run the following two steps in parallel. By Theorem~\ref{thm:nikita}, precisely one of them will terminate: \item For an enumeration of all multicurves $\CC$, check whether $\CC$ is a Levy cycle, and if so return \texttt{no}. \item For each choice $\widehat{A'}$ of periodic points, compute the biset $\widehat{B(A')}$ of the map $(z\mapsto M z+v)/\{\pm1\}$ with $(\frac12\Z^2/\Z^2\cup\widehat{A'})/\{\pm1\}$ marked, and go through the countably many maps $X\to\widehat{B(A')}$. If one of these maps extends to an isomorphism of bisets, return \texttt{yes}. \end{enumerate} \end{algo} \begin{algo}\label{algo:isexp} \textsc{Given} a sphere biset ${}_G B_G$,\\ \textsc{Decide} whether $B$ is the biset of an expanding map \textsc{as follows:}\\\upshape \begin{enumerate} \item Check, using Algorithm~\ref{algo:is2cover}, whether $B$ is double covered by a torus endomorphism. If not, run the next two steps in parallel. If $B$ is double covered by a torus endomorphism, then run Algorithm~\ref{algo:getparam} to obtain a $2\times2$ matrix $M$ expressing the linear part of the endomorphism, and run Algorithm~\ref{algo:istor} to decide whether ${}_G B_G$ is a geometric biset.
If ${}_G B_G$ is not a geometric biset or at least one eigenvalue of $M$ has absolute value less than $1$, then return \texttt{no}. Otherwise return \texttt{yes}. \item Enumerate all finite subsets of $G$, and check whether one is the nucleus of $(B,X)$. If so, return \texttt{yes}. \item Simultaneously, enumerate all multicurves $\CC$ on $(S^2,A)$, and check whether any is a Levy obstruction for $B$. If so, return \texttt{no}. \end{enumerate} By Theorem~\ref{thm:main}, either Step (2) or Step (3) will succeed. \end{algo} The following algorithm computes the Levy decomposition, and proves in this manner Corollary~\ref{cor:decidelevy}: \begin{algo}\label{algo:levy} \textsc{Given} a Thurston map $f\colon(S^2,A)\selfmap$ by its biset,\\ \textsc{Compute} the Levy decomposition of $f$ \textsc{as follows:}\\\upshape \begin{enumerate}\setcounter{enumi}{-1} \item We are given a $G$-$G$-biset $B=B(f)$. Recall that multicurves on $(S^2,A)$ are treated as collections of conjugacy classes in $G$. Their $B$-lift is computable by~\cite{bartholdi-dudko:bc1}*{\S\ref{bc1:ss:ccgroups}}. \item For an enumeration of all multicurves $\CC$ on $(S^2,A)$, that never reaches a multicurve before reaching its proper submulticurves, do the following steps: \item If the multicurve $\CC$ is not invariant, or is not Levy, continue in~(1) with the next multicurve. \item Compute the decomposition of $B$ using the algorithm in~\cite{bartholdi-dudko:bc2}*{Theorem~\ref{bc2:thm:DecompOfBiset}}; \item If all return bisets of the decomposition are either of degree $1$, or expanding (recognized using Algorithm~\ref{algo:isexp}), or \Tor\ (recognized using Algorithm~\ref{algo:istor}), then return $\CC$; \item Proceed with the next multicurve. \end{enumerate} \end{algo} \section{Amalgams}\label{ss:matings} In the previous sections, we considered a single Thurston map --- or, equivalently, sphere biset --- and characterized when it is combinatorially equivalent to an expanding map. 
In this section, we rather consider a Thurston map that is defined as an ``amalgam'' of small maps, glued together along a multicurve; we derive a criterion for the amalgam to be expanding. A typical example is a \emph{formal mating}, which is a sphere map admitting an ``equator'' --- a simple closed curve $\gamma$ isotopic to its lift, which maps back to $\gamma$ by maximal degree. We first give an algebraic characterization in terms of bisets, and then its geometric translation in terms of internal rays. \subsection{Sphere trees of bisets} We briefly recall from~\cite{bartholdi-dudko:bc2}*{Definition~\ref{bc2:defn:gfofbisets}} the notion of \emph{sphere tree of bisets}: firstly, we are given a tree $\gf$ of groups, namely a tree with a group attached to every vertex and edge, and inclusions $G_e\to G_{e^-}$ and isomorphisms $G_e\leftrightarrow G_{\overline e}$ from an edge $e$ respectively to its source $e^-$ and its reverse $\overline e$. Secondly, we are given analogously a tree $\gfB$ of bisets, and two graph morphisms $\lambda,\rho\colon\gfB\to\gf$, such that $\rho$ is a graph covering and $\lambda$ is monotonous (preimages of connected sets are connected). The graph of groups $\gf$ has a \emph{fundamental group} $\pi_1(\gf,*)$ at each vertex $*\in\gf$; this is the group of expressions of the form $(g_0,e_0,g_1,\dots,e_{n-1},g_n)$ with $(e_0,\dots,e_{n-1})$ a closed path in $\gf$ based at $*$ and $g_i\in G_{e_i^-}$, subject to natural relations coming from the edge group inclusions. Likewise, the graph of bisets $\gfB$ has a \emph{fundamental biset}, which is an ordinary biset for the fundamental group. Just as sphere bisets (up to isomorphism) capture Thurston maps (up to isotopy), sphere trees of bisets capture Thurston maps with an invariant multicurve. Consider a sphere group $G$ and a sphere $G$-biset $B$.
A \emph{Levy cycle} in $B$ is a periodic sequence of conjugacy classes $g_0^G,\dots,g_{m-1}^G,g_m^G=g_0^G$ such that each $g_i^G$ is a $B$-lift of $g_{i+1}^G$; namely, there are biset elements $b_0,\dots,b_{m-1}\in B$ such that $g_i b_i=b_i g_{i+1}$ holds for all $i=0,\dots,m-1$. More succinctly, in the product biset $B^{\otimes m}$, we have the commutation relation $g_0 b=b g_0$. \begin{lem} Let $f\colon(S^2,A)\selfmap$ be a Thurston map not doubly covered by a torus endomorphism. Then $f$ admits a Levy cycle if and only if $B(f)$ admits one. \end{lem} \begin{proof} If $(g_0^G,\dots,g_{m-1}^G)$ is a Levy cycle in $B(f)$, then $B(f)$ is not contracting, so $f$ is not expanding by Theorem~\ref{thm:main}, so $f$ contains a Levy cycle again by Theorem~\ref{thm:main}. Conversely, let $(\gamma_0,\dots,\gamma_{m-1})$ be a Levy cycle for $f$, and write each $\gamma_i$ as a conjugacy class $g_i^G$. Since each $\gamma_{i+1}$ has an $f$-lift isotopic to $\gamma_i$, there are biset elements $b_0,\dots,b_{m-1}$ such that $g_i^{\pm G}b_i\ni b_i g_{i+1}$. Up to replacing some $g_i$ by their inverses, we may assume $g_i^G b_i\ni b_i g_{i+1}$ except possibly $g_{m-1}^{-G} b_{m-1}\ni b_{m-1}g_0$. In that case, increase $m$ to $2m$ and set $g_{m+i}=g_i^{-1}$ for $i=0,\dots,m-1$ so as to have $g_i^G b_i\ni b_i g_{i+1}$ for all $i$, namely $g_i^{h_i}b_i=b_i g_{i+1}$ for some elements $h_i\in G$. Set finally $c_i\coloneqq h_i b_i$ to obtain $g_i c_i=c_i g_{i+1}$ for all $i$. Thus $(g_0^G,\dots,g_{m-1}^G)$ is a Levy cycle in $B(f)$. \end{proof} The following definition captures the notion of algebraic Levy cycles for graphs of bisets: \begin{defn}\label{defn:rpc} Let $\gfB$ be a sphere tree of bisets.
A \emph{periodic pinching cycle} for $\gfB$ is \begin{enumerate} \item a sequence of $m$ closed paths $\gamma_j\coloneqq(v_{0,j},e_{1,j},v_{1,j},\dots,e_{n,j},v_{n,j}=v_{0,j})$ in the tree $\gfB$, for $j=0,\dots,m-1$, such that $\rho(\gamma_{j+1})=\lambda(\gamma_j)$, indices read modulo $m$; \item a sequence of $m\times n$ biset elements $b_{i,j}\in B_{e_{i,j}}$ and group elements $g_{i,j}\in G_{\rho(v_{i,j})}$, for $i=0,\dots,n-1$ and $j=0,\dots,m-1$, satisfying \[g_{i,j+1} b_{i+1,j}^-=b_{i,j}^+ g_{i,j}\text{ for all $i,j$},\] indices being read cyclically.\qedhere \end{enumerate} \end{defn} Consider a periodic pinching cycle. Note that the elements $g_{0,j}\rho(e_{1,j})g_{1,j}\cdots\rho(e_{n,j})$, for $j=0,\dots,m-1$, define elements of the fundamental group of $\gf$ based at $\rho(v_{0,j})$, and that their conjugacy classes again produce a Levy cycle for the fundamental biset of $\gfB$. We shall always assume that periodic pinching cycles are non-trivial: $m,n>0$ and the elements $g_{0,j}\rho(e_{1,j})g_{1,j}\cdots\rho(e_{n,j})$ are reduced in the fundamental group of $\gf$. Recall also that, in a tree of bisets $\gfB$, vertices of $\gfB$ are classified as \emph{essential} and \emph{inessential}; every vertex $v\in\gf$ has a unique $\lambda$-preimage $\iota(v)\in\gfB$ that is essential. Consider a vertex $v\in\gf$, and assume that $(\rho\circ\iota)^m(v)=v$ for some $m>0$. The corresponding \emph{return biset} is $B_{\iota(v)}\otimes\cdots\otimes B_{\iota((\rho\circ\iota)^{m-1}(v))}$, and is a $G_v$-biset. We denote by $R(\gfB)$ the set of all return bisets of $\gfB$. Let $\gf$ be a tree of sphere groups with fundamental group $G=\pi_1(\gf,*)$. Recall that the edge groups $G_e$ in $\gf$ embed as cyclic subgroups of $G$. Choose a generator $t_e\in G_e$ for every edge $e\in\gf$, and consider the collection of their conjugacy classes $\CC=\{t_e^G\mid e\in E(\gf)\}$. We call $\CC$ the \emph{edge multicurve} of $\gf$.
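As a hedged illustration of these notions (the labels $v_\pm$ and $e$ are chosen here only for illustration; the mating example appears at the beginning of this section and again in the next subsection): for the formal mating of two polynomials $p_\pm$, the tree $\gf$ has two vertices $v_\pm$ joined by a single edge $e$, whose cyclic edge group is generated by the class $t_e$ of the equator, and one then expects
\[
\CC \;=\; \{\, t_e^G \,\},\qquad R(\gfB) \;=\; \{\, B(p_+),\, B(p_-) \,\},
\]
so that the edge multicurve reduces to the single equator class, and the return bisets are the bisets of the two polynomials.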
Given a sphere biset $B$, recall that its \emph{portrait} is the induced map $B_*\colon A\selfmap$ on the set of peripheral conjugacy classes. A portrait is \emph{hyperbolic} if every periodic cycle of $B_*$ contains a critical peripheral class; i.e.~if $B$ is the biset of a rational map $f$, then all critical points of $f$ are in the Fatou set. \begin{thm}\label{thm:algebraic amalgam} Let $\gfB$ be a sphere tree of bisets, and let $B\coloneqq\pi_1(\gfB)$ denote its fundamental biset. Assume that the portrait of $B$ is hyperbolic. Then $B$ is sphere contracting if and only if the following all hold: \begin{enumerate} \item All return bisets in $R(\gfB)$ are contracting; \item The edge multicurve of $\gfB$ contains no Levy cycle; \item There is no non-trivial periodic pinching cycle for $\gfB$. \end{enumerate} \end{thm} \begin{proof} Each of the conditions is clearly necessary: if a return biset of $\gfB$ is not contracting, then its image in $B$ is still not contracting; if the edge multicurve contains a Levy cycle, then it is a Levy cycle for $B$; and, by definition, a periodic pinching cycle has an iterated lift that is isotopic to itself, so every periodic pinching cycle generates a Levy obstruction. Conversely, assume that every return biset in $\gfB$ is contracting, that the edge multicurve $\CC$ of $\gfB$ is Levy-free, and that $B$ is not contracting. Then by Theorem~\ref{thm:main} there is a Levy cycle in $B$. Write $G=\pi_1(\gf,*)$, and let $\{\ell_0^G,\dots,\ell_{m-1}^G\}$ denote this Levy cycle. The conjugacy classes $\ell_j^G$ are not reduced to conjugacy classes in vertex or edge groups, because return bisets are contracting and the edge multicurve is Levy-free; so every $\ell_j^G$ admits a representative $\ell_j$ of the form $g_{0,j}f_{1,j} g_{1,j}\dots f_{n(j),j}\in \pi_1(\gf,w_j)$; here $f_{1,j}\dots f_{n(j),j}$ is a loop in $\gf$ based at $w_j$, and $g_{i,j}\in G_{f_{i,j}^+}$.
Furthermore, if we require each $n(j)$ to be minimal, then this expression of a representative is unique up to cyclic permutation. Since $\{\ell_0^G,\dots,\ell_{m-1}^G\}$ is a Levy cycle, there are $b_0,\dots, b_{m-1}\in B$ with $\ell_j b_j=b_j \ell_{j+1}$ for all $j$. Furthermore, since the tree of bisets $\gfB$ is left fibrant, every $b_j\in B$ may be written as $b_j = h_j c_j$ for some $c_j\in B_{v_{0,j+1}}$, the vertex biset of a vertex $v_{0,j+1}\in\gfB$ with $\rho(v_{0,j+1})=w_{j+1}$, and some element $h_j\in\pi_1(\gf,w_j,w_{j+1})$ in the path groupoid of $\gf$. We get \[\ell_j^{h_j} c_j = c_j \ell_{j+1}\text{ for all }j=0,\dots,m-1.\] Now, again because $\gfB$ is left fibrant, each path $\ell_j$ lifts by $\rho$ to a unique path $\gamma_j\coloneqq(v_{0,j},e_{1,j},v_{1,j},\dots,e_{n(j),j},v_{n(j),j}=v_{0,j})$, and the above equation gives $\lambda(\gamma_{j+1})=\ell_j^{h_j}$. In particular, the length of $\ell_j^{h_j}$ is at most the length of $\ell_{j+1}$; it follows that all $\ell_j^{h_j}$ are cyclically reduced, and all have the same length $n$. We may now redefine $\ell_j$ as the appropriate cyclic permutation of itself so that $\ell_j c_j=c_j\ell_{j+1}$ holds for all $j=0,\dots,m-2$, and we have $\ell_{m-1}^{h_{m-1}}c_{m-1}=c_{m-1}\ell_0$, where $\ell_{m-1}^{h_{m-1}}$ is a cyclic permutation of $\ell_{m-1}$. At worst replacing $m$ by $m n$ and letting $\ell_{km+j}$ be the appropriate cyclic permutation of $\ell_j$ for all $j=0,\dots,m-1$ and all $k=0,\dots,n-1$, we may ensure that $\ell_j c_j=c_j\ell_{j+1}$ holds for all $j$. Set $c_{0,j}\coloneqq c_j$ and choose $c_{i,j}\in B_{f_{i,j}}$ so that $g_{i,j+1} c_{i+1,j}^-=c_{i,j}^+ g_{i,j}$ holds. We have constructed a periodic pinching cycle.
\end{proof} Furthermore, it is decidable whether $\gfB$ admits a periodic pinching cycle: for example, Algorithm~\ref{algo:isexp} tells us whether the fundamental biset $B$ is expanding; in that case, there is no periodic pinching cycle, while if not then a periodic pinching cycle may be found by enumerating all $m n$-tuples of biset and group elements as in Definition~\ref{defn:rpc}. \subsection{Trees of correspondences} The algebraic construction above is closely related to the topological construction of an ``amalgam'' $\mathfrak F$ of maps. We shall not state too precisely the conditions that must be satisfied by $\mathfrak F$, but rather give an intuitive connection to the previous subsection: on the one hand, such a formalism is well developed in~\cite{pilgrim:combinations}; on the other hand, the algebraic picture is the one that we use in practice. We may start with the following data: firstly, one is given a finite tree $\mathfrak T$ expressing a decomposition of a marked sphere $(S^2,A)$. Let there be a topological sphere $S_v$ for every vertex $v\in\mathfrak T$, and a cylinder (written $S_e$) for every edge $e\in\mathfrak T$. There is a finite set $A_v\subset S_v$ of marked points assigned to each vertex $v\in \mathfrak T$. Whenever $e$ touches $v$, one removes a small disc around a certain marked point from $S_v$ and attaches its boundary to a boundary component of the cylinder $S_e$; after gluing, one obtains a marked sphere $(S^2,A)$ with $A=\bigcup_v A_v\setminus\{\text{removed points}\}$. Secondly, one is given a tree of correspondences: a tree $\mathfrak F$ also expressing a decomposition of a marked sphere and two graph morphisms $\lambda,\rho\colon\mathfrak F\to\mathfrak T$. For every vertex and edge $z\in\mathfrak F$ one is given a ``topological correspondence'' between the spaces $\lambda(z)$ and $\rho(z)$.
More precisely, for each vertex $v\in \mathfrak F$ one is given a marked sphere $(S_v,A_v)$, a covering map $S_v\setminus A_v \to S_{\rho(v)}\setminus A_{\rho(v)}$, and an inclusion $S_v\to S_{\lambda(v)}$ (note that $\lambda(v)$ need not be a vertex). Similarly, for every edge $e\in \mathfrak F$ one is given a cylinder $S_e$ together with a covering map $S_e\to S_{\rho(e)}$ and an inclusion $S_e\to S_{\lambda(e)}$. The marked set $A$ is assumed to be forward invariant and contains all critical values of all correspondences $F_z$. Typical examples to consider are matings (as we saw in the introduction), for which the trees $\mathfrak T$ and $\mathfrak F$ have a single edge; the correspondence at each vertex $v_\pm$ is the polynomial $p_\pm$, and the correspondence at the edge is $z\mapsto z^d$ if the cylinder is modelled on $\C^*$. We denote by $R(\mathfrak F)$ the \emph{small maps} of $\mathfrak F$, namely the return maps to vertex spheres obtained by composing the correspondences along cycles. Again in the example of matings, the small maps are $p_\pm$. By the ``van Kampen theorem'' for bisets, see~\cite{bartholdi-dudko:bc1} and~\cite{bartholdi-dudko:bc2}*{Theorem~\ref{bc2:thm:pilgrim}}, we may freely move between the languages of trees of correspondences $\mathfrak F$, sphere trees of bisets $\gfB$, sphere bisets with invariant algebraic multicurve $(B,\CC)$ represented as conjugacy classes in the fundamental group, and Thurston maps with invariant multicurve $f\colon(S^2,A,\CC)\selfmap$. We call $f$ the \emph{limit} of $\mathfrak F$. Let $\mathfrak F$ be a tree of correspondences with B\"ottcher expanding return maps. Let $\CC$ denote the invariant multicurve associated with the edges of $\mathfrak T$, namely $\CC$ is the set of core curves of cylinders represented by edges of $\mathfrak T$. We assume that $\CC$ is Levy-free. Let $\CC_0$ denote the union of primitive unicycles in $\CC$.
Consider $\gamma\in \CC_0$ and denote by $S_e$ the cylinder with core curve $\gamma$, for $e\in \mathfrak T$. Since $\gamma$ is contained in a primitive unicycle, there is a unique $f\in \mathfrak F$ with $\lambda(f)=e$. We call the core curve of $S_{\rho(f)}$ the \emph{image} of $\gamma$. In this manner, there is a well defined (up to isotopy) first return map $f_\gamma\colon \gamma\to \gamma$; up to isotopy we assume that $f_\gamma$ is conjugate to $z\mapsto z^d\colon S^1\selfmap$, with $d>1$ because $\CC$ is Levy-free. The curve $\gamma$ is on the boundary of two small periodic spheres, call them $S_1$ and $S_2$. By assumption, the first return maps on $S_1$ and $S_2$ are B\"ottcher expanding. There are periodic Fatou components $F_1\subset S_1$ and $F_2\subset S_2$ such that $\gamma$ is viewed as the circle at infinity of $F_1$ and $F_2$. Then points in $\gamma$ parametrize internal rays of $F_1$ and $F_2$, and periodic internal rays are parameterized by periodic points of $f_\gamma\colon \gamma\selfmap$, namely by rationals of the form $m/(d^n-1)$ for some $m,n\in\N$. \begin{defn} \label{defn:topPinchCycle} Let $\mathfrak F$ be a tree of correspondences with expanding return maps. Let $\CC$ denote the invariant multicurve associated with the edges of $\mathfrak T$. Let $\CC_0$ denote the union of the primitive unicycles in $\CC$.
A \emph{periodic pinching cycle} for $\mathfrak F$ is a sequence $z_1,\dots,z_n$ of periodic points on $\CC_0$, and a sequence of internal rays $I_1^\pm,\dots,I_n^\pm$ in the Fatou components of small maps in $\mathfrak F$ touching $\CC_0$, such that, indices read modulo $n$, \begin{itemize} \item $I_i^+$ and $I_{i+1}^-$ are both parameterized by $z_i$, and lie in neighbouring spheres; \item $I_i^+$ and $I_i^-$ both land at the same point and in the same sphere.\qedhere \end{itemize} \end{defn} As mentioned above, topological periodic pinching cycles are the form that Levy cycles take in trees of correspondences with expanding return maps: given a Levy cycle, we may put it in minimal position with respect to the multicurve $\CC$ associated with the edges of the tree, and thus decompose the Levy cycle into periodic arcs, with each arc contained in a small sphere and connecting two boundary circles. If we choose basepoints on the small spheres and boundary circles, and paths from the boundary circle basepoints to the neighbouring sphere basepoints, we may translate these arcs into loops in fundamental groups of small spheres. Even though it is not necessary for our argument, let us explain more precisely how to construct an algebraic periodic pinching cycle out of a topological one. For simplicity assume that all small spheres and cylinders are fixed. Choose basepoints $*_t$ on all small spheres and curves $S_t$ in $\mathfrak T$, identifying the group $G_t$ with $\pi_1(S_t,*_t)$ and the biset $B_t$ with homotopy classes of paths from $*_t$ to an $f$-preimage of $*_t$. Choose for each edge $e\in\mathfrak T$ a path $\ell_e$ from $*_e$ to $*_{e^-}$. Consider a periodic pinching cycle for $\mathfrak F$, and assume again for simplicity that all rays $I_i^\pm$ are fixed.
To every fixed point $z_i$, say $z_i\in S_{t(i)}$, corresponds a biset element $b_i\in B_{t(i)}$: choose a path $\gamma_i$ in $S_{t(i)}$ from $*_{t(i)}$ to $z_i$, and set $b_i\coloneqq\gamma_i\#f^{-1}(\gamma_i^{-1})$. Since $f$ is expanding, the infinite concatenation of lifts $b_i^\infty$ is a path from $*_{t(i)}$ to $z_i$. Note that we are using, here, the identification of the circle $S_{t(i)}$ with the Julia set of $z^d$ for some $d>1$ and with the Julia set $\Julia(B_{t(i)})$ of the biset $B_{t(i)}$; recall from~\S\ref{ss:limit} that it consists of equivalence classes of bounded (here constant $b_i^\infty$) infinite sequences in $B_{t(i)}$. Let the rays $I_i^\pm$ belong to sphere $S_{v(i)}$, and set $g_i\coloneqq\ell_{t(i-1)}^{-1}\#b_{i-1}^\infty\#I_i^-\#(I_i^+)^{-1}\#(b_i^\infty)^{-1}\#\ell_{t(i)}\in G_{v(i)}$. Then these data $b_i,g_i,v(i),t(i)$ determine an algebraic periodic pinching cycle with $m=1$. In general, the periodic pinching cycle for $\mathfrak F$ will be periodic but not fixed, and $m$ will be $>1$. \begin{figure} \centering \begin{tikzpicture} \tikz@path@overlay{node} at (0,0) {\includegraphics[width=\textwidth]{zzz.png}}; \foreach\x/\l/\m/\n in {-2.2/1/+/-,2.2/3/-/+} { \tikz@path@overlay{node} at (\x,0) {\includegraphics[width=34mm]{3zz-2zzz.png}}; \path[draw][red!25,ultra thick] (\x,0) circle (18mm); \path[draw][green!67!blue,thick] (\x,0) -- node[right,pos=0.6] {$I_\l^\m$} (\x,-1.8); \path[draw][green!67!blue,thick] (\x,0) -- node[right,pos=0.6] {$I_\l^\n$} (\x,1.8); }; \path[fill] (-2.2,-1.8) circle (2pt) node [below left] {$z_1$}; \path[fill] (2.2,-1.8) circle (2pt) node [below right] {$z_2$}; \path[fill] (2.2,1.8) circle (2pt) node [above right] {$z_3$}; \path[fill] (-2.2,1.8) circle (2pt) node [above left] {$z_4$}; \foreach\x/\y/\Y/\p/\l/\m in {+/+/-/above/4/-,+/-/+/below/2/+,-/+/-/below/4/+,-/-/+/above/2/-} { \path[draw][green!67!blue,thick] (\x2.2,\y1.8) .. controls +(0.0,\y1.0) and +(\x0.866,\Y0.5) .. 
node[\p] {$I_\l^\m$} (0,\y2.9); } \end{tikzpicture} \caption{A periodic pinching cycle. There is a central fixed sphere mapping under $z^3\frac{2z-1}{2-z}$, and two spheres attached on the Fatou components of $0$ and $1$ mapping under $3z^2-2z^3$. The periodic pinching cycle is in green, and the edges of the tree of spheres are in red.} \end{figure} Given a Thurston map $f\colon(S^2,A)\selfmap$, recall that its \emph{portrait} is the induced map $f\colon A\selfmap$ with its local degree. A portrait is \emph{hyperbolic} if all its cycles contain a point of degree $>1$. \begin{thm}\label{thm:expamalgam} Let $\mathfrak F$ be a tree of maps with hyperbolic portraits. Then its limit $f\colon(S^2,A)\selfmap$ is isotopic to an expanding map if and only if the following all hold: \begin{enumerate} \item All small maps of $\mathfrak F$ are isotopic to expanding maps; \item The invariant multicurve associated with the edges of $\mathfrak T$ is Levy-free; \item There is no non-trivial periodic pinching cycle for $\mathfrak F$. \end{enumerate} \end{thm} \begin{proof} This is a direct translation of Theorem~\ref{thm:algebraic amalgam}. It is instructive to give a geometric proof of the only non-trivial implication, namely that if $f$ admits a Levy cycle $L$ then it admits a periodic pinching cycle. Put $L$ in minimal position with respect to $\CC$. By Proposition~\ref{prop:solidPerCycl}\eqref{prop:solidPerCycl:2}, a Levy cycle may only intersect a primitive unicycle. Choose a curve $\ell\in L$, and let $z_1,\dots,z_n$ denote, in cyclic order along $\ell$, the intersections of $\ell$ with $\CC$. Assuming that all small maps are expanding, the pieces of $\ell$ between points $z_i$ and $z_{i+1}$ belong to Fatou components and their boundaries, and may be assumed to be internal rays. We have in this manner obtained a periodic pinching cycle. \end{proof} \subsection{Higher-degree matings} We are ready to prove Theorem~\ref{thm:mating}.
Note that, in the case of matings, periodic pinching cycles of periodic angles are precisely the periodic pinching cycles defined above for amalgams. \subsubsection{Polynomials}\label{ss:polynomials} Let $f$ be a complex polynomial of degree $d\ge2$. The \emph{filled-in Julia set} $\mathcal K(f)$ of $f$ is \[\mathcal K(f)=\{z\in\C\mid f^n(z)\not\to\infty\text{ as }n\to\infty\}.\] Assume that $\mathcal K(f)$ is connected, and let $\phi$ be the inverse of the B\"ottcher coordinate associated with the Fatou component of $\infty$, so we have $\phi\colon\hC\setminus \mathcal K(f)\to\hC\setminus\overline{\mathbb D}$ satisfying $\phi(f(z))=\phi(z)^d$ and $\phi(\infty)=\infty$ and $\phi'(\infty)=1$. For $\theta\in\R/\Z$, the associated \emph{external ray} $R_f(\theta)$ is defined as $\{\phi^{-1}(r e^{2i\pi\theta})\mid r>1\}$. We have $\Julia(f)=\partial\mathcal K(f)$. Assume now that $f$ is post-critically finite; in particular $\Julia(f)$ is locally connected. Then the landing point $\pi(\theta)\coloneqq\lim_{r\to1^+}\phi^{-1}(re^{2i\pi\theta})$ of the ray $R_f(\theta)$ exists for all $\theta$, and defines a continuous map $\pi\colon\R/\Z\to\Julia(f)$. On the other hand, consider a basepoint $*\in\C\setminus A$ very close to $\infty$, so that its preimages $*_0,\dots,*_{d-1}$ are all also very close to $\infty$. Let $t\in\pi_1(\C,*)$ denote a short counterclockwise loop around $\infty$, and choose for all $i=0,\dots,d-1$ a path $\ell_i$ from $*$ to $*_i$ that remains in the neighbourhood of $\infty$, and in such a manner that the paths $\ell_i\#f^{-1}(t)$ and $\ell_{i+1}$ are homotopic for all $i=0,\dots,d-2$, and $\ell_{d-1}\#f^{-1}(t)$ is homotopic to $t\#\ell_0$. Here by `$s\#f^{-1}(t)$' we denote the concatenation of a path $s$ with the unique $f$-lift of $t$ that starts where $s$ ends. The following proposition illustrates the link between Julia sets (see also~\S\ref{ss:fatou}) and bisets in the concrete case of polynomials.
\begin{prop} \label{prop:JuliaEncod} The set $X\coloneqq\{\ell_0,\dots,\ell_{d-1}\}$ is a basis of $B(f)$. Let $\rho\colon\{0,\dots,d-1\}^\infty\to\R/\Z$ be the base-$d$ encoding map $x_1x_2\dots\mapsto\sum x_i d^{-i}$; then the following diagram commutes: \[\begin{tikzcd} X^\infty\ar[r,equal]\ar[dd,swap,"/{\sim}"] & \{0,\dots,d-1\}^\infty\ar[d,"\rho"]\\ & \R/\Z\ar[d,"\pi"]\\ \Julia(B(f))\ar[r,<->] & \Julia(f) \end{tikzcd} \] where $\sim$ is the asymptotic equivalence relation defined in~\S\ref{ss:limit}. \end{prop} \begin{proof} Consider $x_1 x_2\dots\in X^\infty$ with each $x_i=\ell_{m_i}$ for some $m_i\in\{0,\dots,d-1\}$. Then the path $x_1\#f^{-1}(x_2)\#f^{-2}(x_3)\cdots$ is a well-defined path in $\C\setminus \mathcal K(f)$, which has a limit because $f$ is expanding, and has the same limit as $R_f(\theta)$ for $\theta=\rho(m_1m_2\dots)$ because with respect to the hyperbolic metric of $\C\setminus \mathcal K(f)$ there is a $\delta>0$ such that $x_1\#f^{-1}(x_2)\#f^{-2}(x_3)\cdots$ is in the~$\delta$-neighborhood of $R_f(\theta)$. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:mating}] $\eqref{thm:mating:1}\Rightarrow\eqref{thm:mating:2}$: assume that $p_+\FM p_-\colon\mathbb S\selfmap$ is combinatorially equivalent to an expanding map $h\colon S^2\selfmap$. Denote by $\Sigma$ the quotient of $\mathbb S$ in which all external rays are shrunk to points. Let $\Julia_\pm$ denote the Julia set of $p_\pm$ respectively, and denote their common image in $\Sigma$ by $\Julia$. We have a well-defined map $p_+\GM p_-\colon\Julia\selfmap$, and we shall see that it is conjugate to $h\colon\Julia(h)\selfmap$. Let $\pi_\pm(\theta)$ denote the landing point of the external ray with angle $\theta$ on $\Julia_\pm$. Fix a basepoint $*$ at infinity, and choose a set $X$ of paths from $*$ to all its $(p_+\FM p_-)$-preimages on the circle at infinity; the cardinality of $X$ is the common degree of $p_+$ and $p_-$.
The bisets $B(p_+)$ and $B(p_-)$ may be chosen to have the same basis $X$ consisting of these paths. Note that the basis $X$ is the standard one for $p_+$, but is reversed for $p_-$. Let their respective nuclei be $N_\pm$. Denoting by $\sim_\pm$ the corresponding asymptotic equivalence relations we have, according to~\S\ref{ss:limit}, conjugacies \[X^\infty/{\sim_+}\cong\Julia_+\text{ and }X^\infty/{\sim_-}\cong\Julia_-. \] The bisets $B(h)$ and $B(p_+\FM p_-)$ are isomorphic, and since $h$ is expanding the nucleus of $B(h)$ is contained in $(N_+\cup N_-)^\ell$ for some $\ell\in\N$. It follows that the equivalence relation $\sim_h$ associated with the nucleus of $B(h)$ is generated, as an equivalence relation, by ${\sim_+}\cup{\sim_-}$. By Proposition~\ref{prop:JuliaEncod} we therefore have \begin{align*} \Julia(h)\cong X^\infty/{\sim_h} &\cong\frac{(X^\infty/{\sim_+})\sqcup(X^\infty/{\sim_-})}{[w]_{\sim_+}=[w]_{\sim_-}\text{ for all }w\in X^\infty}\\ &\cong\frac{\Julia_+\sqcup\Julia_-}{\pi_+(\theta)=\pi_-(-\theta)\text{ for all }\theta\in S^1}\cong\Julia\subseteq \Sigma, \end{align*} all these being conjugacies between the dynamics of $h$, $p_+\FM p_-$ and $p_+\GM p_-$. We then extend this conjugacy between the Julia sets of $h$ and $p_+\GM p_-$ to Fatou components, which are all discs. The critical portraits of $p_+\FM p_-$ and of $p_+\GM p_-$ coincide, so their periodic Fatou components are in natural bijection. Since every Fatou component is ultimately periodic, we extend the bijection by pulling back by $p_+\FM p_-$ and $p_+\GM p_-$ respectively. The bijection between the Julia sets restricts to bijections between boundaries of Fatou components, which are ultimately periodic embedded circles in the Julia sets; this uniquely extends the bijection between Julia sets to a conjugacy $(S^2,h)\to(\Sigma,p_+\GM p_-)$.
$\eqref{thm:mating:2}\Rightarrow\eqref{thm:mating:3}$ is clear, because a pinching cycle is made of external rays, so it shrinks to a node in $\Sigma$, and therefore $\Sigma$ is not a topological sphere. $\eqref{thm:mating:3}\Rightarrow\eqref{thm:mating:1}$ is Theorem~\ref{thm:expamalgam}. \end{proof} We remark that the criterion due to Mary Rees and Tan Lei gives strong constraints on pinching cycles of periodic angles in degree $2$. Firstly, the associated external rays must land at dividing fixed points. Secondly, in Definition~\ref{defn:topPinchCycle} it may be assumed that $n=2$, namely each curve in a pinching cycle intersects the equator in exactly two points. This is not true anymore in higher degree; here is an example in degree $3$. \begin{exple}\label{exple:degree3} Consider the polynomials $q_\pm=\frac12z^3\pm\frac32z$. The polynomial $q_+$ has two fixed critical points at $\pm i$, and $q_-$ exchanges its two critical points at $\pm1$. Let $p_+$ be the tuning of $q_+$ in which the local map $z^2$ is replaced by the Basilica map $z^2-1$ on the immediate basins of $\pm i$, and let $p_-$ be the tuning of $q_-$ in which the return map $z^2\circ z^2$ on the immediate basin of $1$ is replaced by $((1-z)^2+1)\circ z^2$. Then $p_\pm$ are polynomials of degree $3$, with $4$ finite post-critical points forming $2$ periodic $2$-cycles. The supporting rays for $p_+$ are $\{\{1/8,11/24\},\{5/8,23/24\}\}$ and those for $p_-$ are $\{\{1/8,19/24\},\{5/8,7/24\}\}$; the maps are $\approx z^3\pm 2.12132z$. The only periodic external rays landing together for $q_+$ are at angles $0$ and $1/2$, while the only periodic external rays landing together for $q_-$ are at angles $1/4$ and $3/4$.
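The arithmetic in this example can be cross-checked mechanically. The following sketch (an illustration only, not part of the argument) verifies the critical-point dynamics of $q_\pm$ and the behaviour of the listed angles under tripling $\theta\mapsto 3\theta \bmod 1$, the action of a degree-$3$ polynomial on external angles:

```python
from fractions import Fraction as F

# q_± = z^3/2 ± 3z/2: critical orbits as stated in the example.
q_plus  = lambda z: 0.5 * z*z*z + 1.5 * z
q_minus = lambda z: 0.5 * z*z*z - 1.5 * z
assert q_plus(1j) == 1j and q_plus(-1j) == -1j        # ±i are fixed critical points
assert q_minus(1.0) == -1.0 and q_minus(-1.0) == 1.0  # ±1 form a 2-cycle

# Angle tripling theta -> 3*theta (mod 1), exact rational arithmetic.
triple = lambda t: (3 * t) % 1

# The co-landing angles 0, 1/2 (for q_+) and 1/4, 3/4 (for q_-) are periodic.
assert triple(F(0)) == F(0) and triple(F(1, 2)) == F(1, 2)
assert triple(F(1, 4)) == F(3, 4) and triple(F(3, 4)) == F(1, 4)

# Each supporting pair of p_± maps to a single angle under tripling
# (both rays of a pair map to the same ray), and 1/8, 5/8 lie on
# 2-cycles, matching the two periodic 2-cycles of post-critical points.
for pair in [{F(1, 8), F(11, 24)}, {F(5, 8), F(23, 24)},
             {F(1, 8), F(19, 24)}, {F(5, 8), F(7, 24)}]:
    assert len({triple(t) for t in pair}) == 1
assert triple(triple(F(1, 8))) == F(1, 8)
assert triple(triple(F(5, 8))) == F(5, 8)
print("angle arithmetic consistent")
```

The use of `fractions.Fraction` keeps all angle computations exact, so the assertions are genuine identities rather than floating-point coincidences.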
It follows that the only pairs of external rays landing together for $p_+$ and $p_-$ are \begin{xalignat*}{2} R_{p_+}(1/8),& R_{p_+}(3/8) & R_{p_-}(1/8), & R_{p_-}(7/8)\\ R_{p_+}(0),& R_{p_+}(1/2) & R_{p_-}(1/4), & R_{p_-}(3/4)\\ R_{p_+}(5/8),& R_{p_+}(7/8) & R_{p_-}(3/8), & R_{p_-}(5/8) \end{xalignat*} It then follows that the sequence of rays $R_{p_+}(1/8)$, $R_{p_+}(3/8)$, $R_{p_-}(3/8)$, $R_{p_-}(5/8)$, $R_{p_+}(5/8)$, $R_{p_+}(7/8)$, $R_{p_-}(7/8)$, $R_{p_-}(1/8)$ is a periodic pinching cycle, so $p_+ \FM p_-$ is not equivalent to an expanding map. On the other hand, there does not exist any periodic pinching cycle with $n=2$. \end{exple} \begin{bibdiv} \begin{biblist} \bibselect{math} \end{biblist} \end{bibdiv} \end{document}
\section{An algebraic characterization}\label{algebra_sec} In this section, we prove that an oriented tree that does not contain a \z6\ or a \fN\ as an induced subgraph admits a chain of Hagemann-Mitschke polymorphisms of length $3$ (Theorem~\ref{HM_3}). It is quite easy to see that this is not the case for general digraphs. For the sake of completeness, we give an explicit digraph that admits a Hagemann-Mitschke chain of conservative polymorphisms of length $n$ but not of length $n-1$. This is the content of Theorem~\ref{ladder_thm}. \subsection{Hagemann-Mitschke chain} \begin{theorem}\label{HM_3} Let $T$ be an oriented tree. Then $T$ has conservative polymorphisms $f_1$, $f_2$ and $f_3$ that form an HM-chain if and only if $T$ does not contain a $\mathrm{Z_6}$ or a fuzzy $\mathrm{N}$ as an induced subgraph. \end{theorem} The following lemma proves one direction of the theorem. (The basic idea of the proof of this lemma is inspired by the proof of Lemma~18 in \cite{stacs_lhom}.) \begin{lemma}\label{HM_chain_defined} Let $T$ be an oriented tree. If $T$ can be constructed using Definition~\ref{constructible}, then $T$ has conservative polymorphisms $f_1$, $f_2$ and $f_3$ that form an HM-chain. \end{lemma} \begin{proof} We show the existence of the claimed conservative polymorphisms using induction on the construction of $T$ given in Definition~\ref{constructible}. The defined operations will be trivially conservative, and we will not mention this explicitly. We will work with the up-join operation, and note that the proof works similarly for the down-join operation. We begin by breaking down the construction in Definition~\ref{constructible} into two steps. Assume that $T$ is the up-join of $T_0,T_1,\dots,T_n$ with central vertex $v_0$ and join vertices $v_1,\dots,v_n$.
Taking the up-join can be thought of as taking the disjoint union $T' = T_1 \sqcup \dots \sqcup T_n$ of $T_1,\dots,T_n$, and then taking the disjoint union $T' \sqcup T_0$ and adding the arcs $v_iv_0$, $1 \leq i \leq n$. First we show that if each of $T_1,\dots,T_n$ admits a HM-chain of polymorphisms of length $3$, then so does $T_1 \sqcup \dots \sqcup T_n$. Let $f_1^i, f_2^i, f_3^i$ be the desired polymorphisms for $T_i$, $1 \leq i \leq n$. If $(x,y,z) \in V(T_j)^3$ for some $1 \leq j \leq n$, then for each $1 \leq s \leq 3$, let $g_s(x,y,z) = f_s^j(x,y,z)$. If $(x,y,z) \in V(T_k) \times V(T_l) \times V(T_m)$ such that $|\{k,l,m\}| > 1$, then let $g_1(x,y,z) = x$ and $g_3(x,y,z) = z$, and furthermore, $g_2(x,y,z) = z$ if $k = l$ and $g_2(x,y,z) = x$ otherwise. It is easy to check that $g_1,g_2,g_3$ form a HM-chain, and that each $g_s$ is a conservative polymorphism of $T'$. Suppose now that $f_1, f_2, f_3$ and $g_1,g_2,g_3$ are the desired polymorphisms for $T_0$ and $T' = T_1 \sqcup \dots \sqcup T_n$, respectively, and that $T$ is obtained by adding arcs $v_1v_0,\dots,v_nv_0$ to $T_0 \sqcup T'$. Let $m = height(T)$. We argue first that it is sufficient to define polymorphisms for vertices $x,y,z$ that are all in the same vertex level $L_j^T$, where $0 \leq j \leq m$. For suppose that $F_1',F_2',F_3' : L_0^3 \cup \dots \cup L_m^3 \rightarrow T$ are operations that are edge-preserving, conservative, and satisfy all required identities. 
Then we can extend these to full operations with the same properties: \begin{align*} F_1(x,y,z) &= \begin{cases} F_1'(x,y,z) &\text{ if $x,y,z \in L_j^T$ for some $0 \leq j \leq m$,}\\ x &\text{ otherwise.} \end{cases}\\ F_3(x,y,z) &= \begin{cases} F_3'(x,y,z) &\text{ if $x,y,z \in L_j^T$ for some $0 \leq j \leq m$,}\\ z &\text{ otherwise.} \end{cases}\\ F_2(x,y,z) &= \begin{cases} F_2'(x,y,z) &\text{ if $x,y,z \in L_j^T$ for some $0 \leq j \leq m$, else}\\ z &\text{ if $x,y \in L_j^T$ for some $0 \leq j \leq m$,}\\ x &\text{ otherwise.} \end{cases}\\ \end{align*} The claimed properties are easy to check. To see that the defined operations are arc-preserving, note that if $xx'$,$yy'$ and $zz'$ are arcs and $x,y,z$ are in levels $L_{i(x)}, L_{i(y)},L_{i(z)}$, respectively, then $x',y',z'$ are in levels $L_{i(x)+1}, L_{i(y)+1},L_{i(z)+1}$, respectively, so the same case in the above definition applies for both $(x,y,z)$ and $(x',y',z')$. Assume therefore that $T$ is obtained by adding arcs $v_iv_0$, $1 \leq i \leq n$ to $T_0 \sqcup T'$. Using the induction hypothesis and the above argument for disjoint union, we can assume that $T_0$ admits the desired operations $f_1,f_2,f_3$, and $T'$ admits the desired operations $g_1,g_2,g_3$. First we define $F_1$ and $F_3$ for $T$ as follows. \[ F_1(x,y,z) = \begin{cases} f_1(x,y,z) &\text{ if $x,y,z \in V(T_0)$, else}\\ g_1(x,y,z) &\text{ if $x,y,z \in V(T')$, else}\\ x &\text{ if $x \in V(T_0)$ or $y,z \in V(T_0)$, else}\\ u &\text{ where $u$ is leftmost of $\{y,z\} \cap V(T_0)$.} \end{cases} \] \[ F_3(x,y,z) = \begin{cases} f_3(x,y,z) &\text{ if $x,y,z \in V(T_0)$, else}\\ g_3(x,y,z) &\text{ if $x,y,z \in V(T')$, else}\\ z &\text{ if $z \in V(T_0)$ or $x,y \in V(T_0)$, else}\\ u &\text{ where $u$ is leftmost of $\{x,y\} \cap V(T_0)$.} \end{cases} \] Notice that $F_1$ is well defined, since if the first three cases do not apply, then in the last case, (precisely) one of $y$ and $z$ is in $V(T_0)$. 
We can argue similarly for $F_3$. Observe that $F_1(x,y,y) = x$. In the first two cases, this follows from the induction hypothesis. Otherwise, the third line of the definition sets the value of $F_1(x,y,y)$ to $x$. (The last case cannot occur.) Similarly, we can check that $F_3(x,x,y) = y$. We verify that $F_1$ is arc-preserving, and note that $F_3$ can be analyzed similarly. Assume that $xx'$, $yy'$, and $zz'$ are arcs of $T$. \begin{itemize} \item If $\ell(x) < \ell(v_0) - 1$ or $\ell(x) > \ell(v_0) - 1$, then we observe that for each $w \in \{x,y,z\}$, it holds that $w \in V(T_0) \Leftrightarrow w' \in V(T_0)$. Also recall that $V(T_0)$ and $V(T')$ partition $V(T)$. Therefore the same case of the above definition applies for both $F_1(x,y,z)$ and $F_1(x',y',z')$. It follows from this and the induction hypothesis that $F_1(x,y,z)F_1(x',y',z')$ is an arc of $T$. \item Assume therefore that $\ell(x) = \ell(v_0) - 1$. If $x',y',z' \in V(T')$, then again, $x,x',y,y',z,z' \in V(T')$ and we are done. We note that if any of $x',y',z'$ is a vertex in $V(T_0)$, that vertex must be $v_0$, since $v_0$ is the only vertex of $T_0$ in $L_{\ell(v_0)}^T$ that has an inneighbour in $T$. We also note that if $F_1(x',y',z') = v_0$, then since $F_1(x,y,z) \in \{v_1,\dots,v_n\}$ and $v_iv_0$ are arcs for all $1 \leq i \leq n$, we are done. If $x',y',z' \in V(T_0)$, then $x'=y'=z'=v_0$, so $F_1(x',y',z') = v_0$. If $x' \in V(T_0)$, then line $3$ of the definition gives that $F_1(x',y',z') = v_0$. Assume therefore that $x' \in V(T')$ and $y', z' \in V(T_0)$. Then we have by definition that $F_1(x',y',z') = x'$. If we show that $F_1(x,y,z) = x$, then since $xx'$ is an arc, we are done. Recall that $x,y,z \in \{v_1,\dots,v_n\}$, so $x,y,z \in V(T')$, and thus $F_1(x,y,z) = g_1(x,y,z)$. By the definition of $g_1$ above (recall that $g_1$ is over the disjoint union $T' = T_1 \sqcup \dots \sqcup T_n$), if not all of $x,y,z$ are in the same component of $T'$, then $g_1(x,y,z) = x$. 
Therefore $F_1(x,y,z) = x$. So we can assume that $x,y,z$ are all in the same component $T_i$ of $T'$, for some $1 \leq i \leq n$. Then since the only arc from $T_i$ to $v_0$ is $v_iv_0$ and $y'=z'=v_0$, we have that $y=z=v_i$. By the induction hypothesis $g_1(x,y,y) = x$, so $F_1(x,y,z) = x$. If $y' \in V(T_0)$ and $z' \in V(T')$, or if $y' \in V(T')$ and $z' \in V(T_0)$, then line $4$ of the definition sets $F_1(x',y',z') = v_0$. \end{itemize} We define \[ F_2(x,y,z) = \begin{cases} F_1(x,x,z) &\text{ if $x \in V(T')$ and $y,z \in V(T_0)$, or if $x \in V(T_0)$ and $y,z \in V(T')$, else}\\ F_3(x,z,z) &\text{ if $x,y \in V(T_0)$ and $z \in V(T')$, or if $x,y \in V(T')$ and $z \in V(T_0)$, else}\\ f_2(x,y,z) &\text{ if $x,y,z \in V(T_0)$, else}\\ g_2(x,y,z) &\text{ if $x,y,z \in V(T')$, else}\\ w, &\text{ where $w$ is leftmost of $\{x,y,z\} \cap V(T_0)$.} \end{cases} \] Note that $F_2$ is well defined. In particular, in line $5$, at least one of $x,y,z$ must be in $V(T_0)$, since otherwise line $4$ applies. To complete the proof that $F_1$, $F_2$, and $F_3$ form an HM-chain, we show that $F_1(x,x,z) = F_2(x,z,z)$ and $F_2(x,x,z) = F_3(x,z,z)$. We focus on $F_1(x,x,z) = F_2(x,z,z)$, and note that it can be shown similarly that $F_2(x,x,z) = F_3(x,z,z)$. If line $1$ of the definition of $F_2$ applies, then we are done by definition. Note that line $2$ cannot apply for $F_2(x,z,z)$. If line $3$ applies, then $x,z \in V(T_0)$, so $F_2(x,z,z) = f_2(x,z,z) = f_1(x,x,z) = F_1(x,x,z)$, where the second equality is by the induction hypothesis. If line $4$ applies, then $F_2(x,z,z) = g_2(x,z,z) = g_1(x,x,z) = F_1(x,x,z)$, where the second equality is by the induction hypothesis, and the last equality is by the definition of $F_1$. Line $5$ cannot apply for $F_2(x,z,z)$. It remains to show that $F_2$ is arc-preserving. \begin{itemize} \item Suppose that $\ell(x) < \ell(v_0) - 1$ or $\ell(x) > \ell(v_0) - 1$. As before, for each $w \in \{x,y,z\}$, $w \in V(T_0) \Leftrightarrow w' \in V(T_0)$.
Therefore the same case of the above definition applies for both $F_2(x,y,z)$ and $F_2(x',y',z')$, and thus $F_2(x,y,z)F_2(x',y',z')$ is an arc of $T$. \item Suppose that $\ell(x) = \ell(v_0) - 1$. As before, we use the fact that if $F_2(x',y',z') = v_0$, we are done. \begin{itemize} \item If \emph{line $1$} of the definition of $F_2$ applies for $F_2(x',y',z')$, then consider first when $x' \in V(T')$ and $y',z' \in V(T_0)$. Then $F_2(x',y',z') = F_1(x',x',v_0) = v_0$ (since $z' = v_0$) by line $4$ of the definition of $F_1$. Consider therefore the case when $x' \in V(T_0)$ and $y',z' \in V(T')$. Then $F_2(x',y',z') = F_1(v_0,v_0,z') = v_0$ by line $3$ of the definition of $F_1$. \item If \emph{line $2$} of the definition of $F_2$ applies, then we can argue similarly to the above. \item If \emph{line $3$} applies for $F_2(x',y',z')$, then $F_2(x',y',z') = f_2(v_0,v_0,v_0) = v_0$, since $f_2$ is conservative. \item If \emph{line $4$} applies for $F_2(x',y',z')$, then line $4$ applies for $F_2(x,y,z)$, so we are done by the induction hypothesis for $g_2$. \item If \emph{line $5$} applies, then $F_2(x',y',z') = v_0$. \end{itemize} \end{itemize} \end{proof} \begin{proof}[Proof of Theorem~\ref{HM_3}] One direction is shown in Lemma \ref{HM_chain_defined}. The other direction follows from the chain of implications outlined in the introduction, or we can use Lemma~\ref{no_HM} in Appendix~B. \end{proof} \subsection{A digraph with an HM-chain of length $n$ but not $n-1$} \begin{definition}\label{ladder} A \emph{ladder} of height $n$ is a digraph having the following arcs: $a_0a_1, a_1a_2,\dots,a_{n-1}a_n$, $b_0b_1,b_1b_2,\dots,b_{n-1}b_n$, and $b_0a_1,b_1a_2,\dots,b_{n-1}a_n$. \end{definition} \begin{theorem}\label{ladder_thm} Let $H$ be a ladder of height $n \geq 1$. Then $H$ admits an HM-chain of conservative polymorphisms of length $n+1$, but not of length $n$. \end{theorem} \begin{proof} The proof follows from Lemmas~\ref{no_HM_ladder} and \ref{ladder_pol} below.
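Both lemmas concern small, explicitly described digraphs, so the positive direction (Lemma~\ref{ladder_pol}) can also be cross-checked by exhaustive search. The following sketch is an illustration only; the encoding of vertices as pairs $(\texttt{'a'},j)$, $(\texttt{'b'},j)$ is ours, and the operations follow the case definition and level-extension given in the proof of Lemma~\ref{ladder_pol}:

```python
from itertools import product

def check_ladder(n):
    """Exhaustively verify the HM-chain f_1,...,f_{n+1} on the height-n ladder."""
    V = [(s, j) for j in range(n + 1) for s in 'ab']
    arcs = {(('a', j), ('a', j + 1)) for j in range(n)} \
         | {(('b', j), ('b', j + 1)) for j in range(n)} \
         | {(('b', j), ('a', j + 1)) for j in range(n)}

    def f(i, x, y, z):
        if x[1] == y[1] == z[1]:          # same vertex level: the 12-line definition
            j = x[1]
            pat = x[0] + y[0] + z[0]
            if pat in ('aaa', 'bbb'): return x
            if pat == 'aab': return x if j + 1 > i else z
            if pat == 'abb': return y if j + 1 < i else x
            if pat == 'bab': return y if j + 1 > i > n - j + 1 else x
            if pat == 'bba': return x if n - j + 1 > i else z
            if pat == 'baa': return y if n - j + 1 < i else x
            if pat == 'aba': return y if n - j + 1 > i > j + 1 else x
        if i == 1:                        # mixed levels: the extension in the proof
            return x if y[1] == z[1] else z
        return z

    for i in range(1, n + 2):
        for x, y, z in product(V, repeat=3):
            assert f(i, x, y, z) in (x, y, z)                 # conservative
        for x, y in product(V, repeat=2):
            if i == 1: assert f(1, x, y, y) == x              # f_1(x,y,y) = x
            if i == n + 1: assert f(n + 1, x, x, y) == y      # f_{n+1}(x,x,y) = y
            if i <= n: assert f(i, x, x, y) == f(i + 1, x, y, y)
        for (x, x2), (y, y2), (z, z2) in product(arcs, repeat=3):
            assert (f(i, x, y, z), f(i, x2, y2, z2)) in arcs  # polymorphism
    return True

for n in range(1, 5):
    assert check_ladder(n)
print("HM-chains verified for ladders of height 1..4")
```

The brute-force loops mirror exactly the three requirements: conservativity, the Hagemann-Mitschke identities, and arc preservation; the case analysis of the table in Lemma~\ref{ladder_pol} corresponds to the same-level triples with value $a_j$.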
\end{proof} \begin{lemma}\label{no_HM_ladder} Let $H$ be a ladder of height $n \geq 1$. Then $H$ does not admit an HM-chain of conservative polymorphisms $f_1,f_2,\dots,f_n$ of length $n$. \end{lemma} \begin{proof} For ladder $H$ we use the same notation as in Definition~\ref{ladder}. Suppose for contradiction that $f_1,\dots,f_n$ is an HM-chain of conservative polymorphisms of $H$. We show by induction that $f_i(a_i,a_i,b_i) = a_i$ for $1 \leq i \leq n$, and this will contradict the definition of an HM-chain of length $n$, which requires that $f_n(a_n,a_n,b_n) = b_n$. Since $a_0a_1,b_0a_1,b_0b_1$ are arcs, $f_1(a_0,b_0,b_0)f_1(a_1,a_1,b_1)$ is an arc of $H$. Since $a_0 = f_1(a_0,b_0,b_0)$ (by definition), it follows that $f_1(a_1,a_1,b_1) = a_1$, so the base case holds. Assume the induction hypothesis holds for index $i$. Then $a_i = f_i(a_i,a_i,b_i) = f_{i+1}(a_i,b_i,b_i)$, and since $f_{i+1}(a_i,b_i,b_i)f_{i+1}(a_{i+1},a_{i+1},b_{i+1})$ is an arc of $H$, this arc can only be $a_ia_{i+1}$, so $f_{i+1}(a_{i+1},a_{i+1},b_{i+1}) = a_{i+1}$, and we are done. \end{proof} \begin{lemma}\label{ladder_pol} Let $H$ be a ladder of height $n \geq 1$. Then $H$ admits an HM-chain of conservative polymorphisms $f_1,f_2,\dots,f_{n+1}$ of length $n+1$. \end{lemma} \begin{proof} For ladder $H$ we use the same notation as in Definition~\ref{ladder}. Notice that $H$ is leveled. We argue now that it is sufficient to define polymorphisms for vertices $x,y,z$ that are all in the same vertex level $L_j$ of $H$, where $0 \leq j \leq n$. For suppose that $F_1',F_2',\dots,F_{n+1}' : L_0^3 \cup \dots \cup L_n^3 \rightarrow V(H)$ are operations that are arc-preserving, conservative, and satisfy all required identities.
Then we can extend these to full operations with the same properties: \[F_1(x,y,z) = \begin{cases} F_1'(x,y,z) &\text{ if $x,y,z \in L_j$ for some $0 \leq j \leq n$, else}\\ x &\text{ if $y,z \in L_j$ for some $0 \leq j \leq n$,}\\ z &\text{ otherwise} \end{cases}\] and for $2 \leq i \leq n+1$, \[F_i(x,y,z) = \begin{cases} F_i'(x,y,z) &\text{ if $x,y,z \in L_j$ for some $0 \leq j \leq n$,}\\ z &\text{ otherwise.} \end{cases}\] The claimed properties of this extension are easy to check. So let $x,y,z \in L_j$ for some $0 \leq j \leq n$. We define the operations $f_i(x,y,z)$ for each $1 \leq i \leq n+1$ as shown below. (This definition is inspired by the definition of an HM-chain in Lemma~5.2 in \cite{soda_lhom}.) Recall that $f_i$ is conservative, so $f_i(x,x,x)$ is always required to be $x$. Also note that we will not discuss the cases in the proofs below which involve $f_i(x,x,x)$, since these cases are trivial to analyze. \begin{center} \begin{minipage}{0.75\textwidth} \begin{center} \begin{eqnarray} f_i(a_j,a_j,b_j) &= a_j & \text{if $j+1 > i$} \\ &= b_j & \text{if $j+1 \leq i$}\\ f_i(a_j,b_j,b_j) &= b_j & \text{if $j+1 < i$} \\ &= a_j &\text{if $j+1 \geq i$}\\ f_i(b_j,a_j,b_j) &= a_j &\text{if $j+1 > i > n-j+1$}\\ &= b_j & \text{otherwise}\\ f_i(b_j,b_j,a_j) &= b_j & \text{if $n - j+1 > i$}\\ &= a_j & \text{if $n - j+1 \leq i$}\\ f_i(b_j,a_j,a_j) &= a_j & \text{if $n - j+1 < i$}\\ &= b_j & \text{if $n - j+1 \geq i$}\\ f_i(a_j,b_j,a_j) &= b_j & \text{if $n-j+1 > i > j+1$}\\ &= a_j & \text{otherwise.} \end{eqnarray} \end{center} \end{minipage} \end{center} We claim that these $f_i$ form an HM-chain. By lines $4$ and $10$ of the definition of $f_i$, $f_1(x,y,y) = x$. By lines $2$ and $8$, $f_{n+1}(x,x,y) = y$. If $f_i(a_j,a_j,b_j) = a_j$, then $j+1 > i$ by line $1$, so $j+1 \geq i+1$, thus $f_{i+1}(a_j,b_j,b_j) = a_j$ by line $4$.
Similarly, if $f_i(a_j,a_j,b_j) = b_j$, then $j+1 \leq i$ by line $2$, so $j+1 < i+1$, and therefore $f_{i+1}(a_j,b_j,b_j) = b_j$ by line $3$. To sum up, $f_i(x,x,y) = f_{i+1}(x,y,y)$. It remains to show that each $f_i$ is arc-preserving. In the first column of Table~\ref{pol_table}, we specify some $f_i(x,y,z)f_i(x',y',z')$ where $xx', yy', zz'$ are arcs, $x,y,z \in \{a_j,b_j\}$ and $x',y',z' \in \{a_{j+1},b_{j+1}\}$. We can assume that $f_i(x,y,z) = a_j$, since if $f_i(x,y,z) = b_j$, then $f_i(x,y,z)f_i(x',y',z')$ is an arc of $H$ because both $b_jb_{j+1}$ and $b_ja_{j+1}$ are arcs. It is straightforward to check that Table~\ref{pol_table} covers all cases. \begin{table}[h!tb] \caption{Cases in the proof of Lemma~\ref{ladder_pol}.}\label{pol_table} \begin{center} \noindent\begin{tabular}{ll} $f_i(a_j,b_j,b_j)f_i(a_{j+1},b_{j+1},b_{j+1})$ & \begin{minipage}{0.63\textwidth}Since $f_i(a_j,b_j,b_j) = a_j$ by assumption, $j+1 \geq i$ by line $4$. Therefore $j+2 \geq i$, so by line $4$, $f_i(a_{j+1},b_{j+1},b_{j+1}) = a_{j+1}$.\vspace{\gap}\end{minipage}\\ \hline $f_i(a_j,b_j,b_j)f_i(a_{j+1},a_{j+1},b_{j+1})$ & \begin{minipage}{0.63\textwidth}\vspace{\gap}We note that $j+1 \geq i$ as above, so $j + 2 > i$, and line $1$ gives that $f_i(a_{j+1},a_{j+1},b_{j+1}) = a_{j+1}$.\vspace{\gap}\end{minipage}\\ \hline $f_i(a_j,b_j,b_j)f_i(a_{j+1},b_{j+1},a_{j+1})$ & \begin{minipage}{0.63\textwidth}\vspace{\gap}$j+1 \geq i \Rightarrow i < j + 2$, so the condition of line $11$ fails, and $f_i(a_{j+1},b_{j+1},a_{j+1}) = a_{j+1}$ by line $12$.\vspace{\gap}\end{minipage}\\ \hline $f_i(b_j,a_j,b_j)f_i(a_{j+1},a_{j+1},b_{j+1})$ & \begin{minipage}{0.63\textwidth}\vspace{\gap}Since $f_i(b_j,a_j,b_j) = a_j$, $j+1 > i > n - j +1$ by line $5$, we have that $j+2 > i$, and therefore $f_i(a_{j+1},a_{j+1},b_{j+1}) = a_{j+1}$ by line $1$.\vspace{\gap}\end{minipage}\\ \hline $f_i(b_j,a_j,b_j)f_i(b_{j+1},a_{j+1},a_{j+1})$ & \begin{minipage}{0.63\textwidth}\vspace{\gap}Since $f_i(b_j,a_j,b_j) = a_j$, $j+1 > i > n - j +1$
by line $5$, we have that $n - j < i$, and therefore $f_i(b_{j+1},a_{j+1},a_{j+1}) = a_{j+1}$ by line $9$.\vspace{\gap}\end{minipage}\\ \hline $f_i(b_j,a_j,b_j)f_i(b_{j+1},a_{j+1},b_{j+1})$ & \begin{minipage}{0.63\textwidth}\vspace{\gap}Since $f_i(b_j,a_j,b_j) = a_j$, $j+1 > i > n - j +1$ by line $5$, we have that $j+2 > i > n - j$, and therefore $f_i(b_{j+1},a_{j+1},b_{j+1}) = a_{j+1}$ by line $5$.\vspace{\gap}\end{minipage}\\ \hline $f_i(b_j,b_j,a_j)f_i(b_{j+1},b_{j+1},a_{j+1})$ & \begin{minipage}{0.63\textwidth}\vspace{\gap}Since $f_i(b_j,b_j,a_j) = a_j$, $n-j+1 \leq i$ by line $8$, we have that $n-j \leq i$, and therefore $f_i(b_{j+1},b_{j+1},a_{j+1}) = a_{j+1}$ by line $8$.\vspace{\gap}\end{minipage}\\ \hline $f_i(b_j,b_j,a_j)f_i(b_{j+1},a_{j+1},a_{j+1})$ & \begin{minipage}{0.63\textwidth}\vspace{\gap}Since $f_i(b_j,b_j,a_j) = a_j$, $n-j+1 \leq i$ by line $8$, we have that $n - j < i$, and therefore $f_i(b_{j+1},a_{j+1},a_{j+1}) = a_{j+1}$ by line $9$.\vspace{\gap}\end{minipage}\\ \hline $f_i(b_j,b_j,a_j)f_i(a_{j+1},b_{j+1},a_{j+1})$ & \begin{minipage}{0.63\textwidth}\vspace{\gap}Since $f_i(b_j,b_j,a_j) = a_j$, we have $n-j+1 \leq i$ by line $8$, so the condition $n - j > i$ of line $11$ fails, and therefore $f_i(a_{j+1},b_{j+1},a_{j+1}) = a_{j+1}$ by line $12$.\vspace{\gap}\end{minipage}\\ \hline $f_i(b_j,a_j,a_j)f_i(b_{j+1},a_{j+1},a_{j+1})$ & \begin{minipage}{0.63\textwidth}\vspace{\gap}Since $f_i(b_j,a_j,a_j) = a_j$, $n-j+1 < i$ by line $9$, we have that $n - j < i$, so $f_i(b_{j+1},a_{j+1},a_{j+1}) = a_{j+1}$ by line $9$.\vspace{\gap}\end{minipage}\\ \hline $f_i(a_j,b_j,a_j)f_i(a_{j+1},b_{j+1},a_{j+1})$ & \begin{minipage}{0.63\textwidth}\vspace{\gap}Since $f_i(a_j,b_j,a_j) = a_j$, $n-j+1 \leq i$ or $i \leq j + 1$ by line $12$, so we have that $n - j \leq i$ or $i \leq j + 2$.
Therefore by line $12$, we have that $f_i(a_{j+1},b_{j+1},a_{j+1}) = a_{j+1}$.\vspace{\gap}\end{minipage}\\ \hline $f_i(a_j,a_j,b_j)f_i(a_{j+1},a_{j+1},b_{j+1})$ & \begin{minipage}{0.63\textwidth}\vspace{\gap}Since $f_i(a_j,a_j,b_j) = a_j$, $j+1 > i$ by line $1$, so $j+2 > i$, and therefore line $1$ gives that $f_i(a_{j+1},a_{j+1},b_{j+1}) = a_{j+1}$.\vspace{\gap}\end{minipage} \end{tabular} \end{center} \end{table} \end{proof} \section{Circular N, $\mathrm{Z_6}$, and fuzzy N}\label{equivalence} We prove that if $H$ is a digraph, then $H$ contains a fuzzy $\mathrm{N}$ or $\mathrm{Z_6}$ as an induced subgraph if and only if $H$ contains a circular $\mathrm{N}$. Note that assuming that $\mathrm{L} \neq \mathrm{NL}$, there is a simpler proof using already proved results.\footnote{Assume for contradiction that $T$ contains a circular \cN\ but no $\mathrm{Z_6}$ or fuzzy $\mathrm{N}$ as an induced subgraph. If $T$ contains no $\mathrm{Z_6}$ or fuzzy $\mathrm{N}$, then we have a logspace algorithm for LHOM($T$) by the previous results of this paper. Since $T$ contains a circular $\mathrm{N}$, LHOM($T$) is $\mathrm{NL}$-hard (\cite{soda_lhom}), so our logspace algorithm would solve an $\mathrm{NL}$-hard problem, and therefore $\mathrm{NL} = \mathrm{L}$, a contradiction.} However, we wish to prove this without the assumption that $\mathrm{L} \neq \mathrm{NL}$, as stated in the following theorem. \begin{theorem}\label{circ_N_ind} An oriented tree $T$ contains a $\mathrm{Z_6}$ or a fuzzy $\mathrm{N}$ as an induced subgraph if and only if $T$ contains a circular $\mathrm{N}$. \end{theorem} We begin by recalling some definitions from \cite{soda_lhom}. Let $H$ be a digraph. We define two walks $X = x_0 x_1 \dots x_n$ and $Y = y_0 y_1 \dots y_n$ in $H$ to be {\em congruent} if they follow the same pattern of forward and backward arcs, i.e., $x_ix_{i+1}$ is a forward arc if and only if $y_iy_{i+1}$ is a forward arc. Suppose $X, Y$ and $Z = z_0 z_1 \dots z_n$ are congruent walks.
We say that $x_iy_{i+1}$ is a {\em faithful arc from $X$ to $Y$} if it is an arc of $H$ in the same direction (forward or backward) as $x_ix_{i+1}$. We say that $X$ {\em avoids} $Y$ in $H$ if there is no faithful arc from $X$ to $Y$ in $H$. Observe that two walks of length zero also avoid each other. We say that $Z$ {\em protects} $Y$ from $X$ if the existence of faithful arcs $x_iz_{i+1}$ and $z_jy_{j+1}$ in $H$ implies that $j \leq i$. In other words, $Z$ protects $Y$ from $X$ if and only if there exists a subscript $s$ such that $x_0, x_1, \dots, x_s$ avoids $z_0, z_1, \dots, z_s$ and $z_{s+1}, z_{s+2}, \dots, z_n$ avoids $y_{s+1}, y_{s+2}, \dots, y_n$. \begin{definition}\label{def-cN} Let $x, x', y, y'$ be vertices of a digraph $H$. An {\em extended $N$ from $x,x'$ to $y,y'$} in $H$ consists of congruent walks $X$ (from $x$ to $x'$), $Y$ (from $y$ to $y'$), and $Z$ (from $y$ to $x'$), such that $X$ avoids $Y$ and $Z$ protects $Y$ from $X$. A {\em circular $\mathrm{N}$} is an extended $N$ in which $x = x'$ and $y = y'$. \end{definition} \begin{lemma}\label{z->cN} If an oriented tree $T$ contains a $\mathrm{Z_6}$ or a fuzzy $\mathrm{N}$ as an induced subgraph, then $T$ contains a circular $\mathrm{N}$. \end{lemma} \begin{proof} We first show how to define $X$, $Y$ and $Z$ when $G$ is an induced $\mathrm{Z_6}$ with vertex set $\{0,1,2,3,4,5\}$ and arcs $\{01,21,23,43,45\}$. Set $X = 01010$, $Y = 45454$, and $Z = 43210$. It is trivial to check that $X$ avoids $Y$ and $Z$ protects $Y$ from $X$. Assume that the induced subgraph $H$ is a fuzzy $\mathrm{N}$ that can be expressed as $P_1T\bar{P}_2BP_3$. Suppose that the first vertices of $P_1$ and $P_3$ are $x_0$ and $y_0$, respectively, and the last vertices of $P_1$ and $P_3$ are $x_0'$ and $y_0'$, respectively. Let $h = height(P_1)$. We define an oriented path $Q$ as follows. Let $Q' = Q_1' Q_2' \dots Q_h'$, where each $Q_i'$ is a $Z_4^{f=0}$. Then $Q$ is defined as $Q = Q' \bar{Q'}$.
Let $q$ denote the first and $q^*$ denote the last vertex of $Q$. Notice that because $P_1$, $P_2$ and $P_3$ are minimal fuzzy paths and $T$ and $B$ are $Z_3^{f=1}$ and $Z_3^{f=0}$, there exist three homomorphisms $h_1$, $h_2$ and $h_3$ as follows. \begin{itemize} \item $h_1$ maps $Q$ to $P_1$ such that $h_1(q) = h_1(q^*) = x_0$. \item $h_3$ maps $Q$ to $P_3$ such that $h_3(q) = h_3(q^*) = y_0$. \item Recall that $Q=Q'\bar{Q'}$. Homomorphism $h_2$ maps $Q'$ to $P_2$ and $\bar{Q'}$ to $\bar{P}_1$ such that $h_2(q) = y_0$ and $h_2(q^*) = x_0$. \end{itemize} Suppose that $Q = q_0q_1 \dots q_n$, and set $X = h_1(q_0) \dots h_1(q_n)$, $Y = h_3(q_0) \dots h_3(q_n)$, and $Z = h_2(q_0) \dots h_2(q_n)$. Since $X$ contains only vertices of $P_1$ and $Y$ contains only vertices of $P_3$, there is no arc between a vertex of $X$ and a vertex of $Y$, and therefore $X$ avoids $Y$. It is also straightforward to verify that $Z$ protects $Y$ from $X$. \end{proof} \begin{proof}[Proof of Theorem~\ref{circ_N_ind}] One direction of the theorem follows from Lemma~\ref{z->cN}. The other direction either follows from the chain of implications outlined at the end of the introduction, or a direct proof is given in Appendix B, see Lemma~\ref{cN->z}. \end{proof} \section{Conclusions and open problems} We sharpened results of \cite{soda_lhom} regarding LHOM($H$) in the special case when $H$ is an oriented tree. The next natural question is how far these results can be pushed: Is there an inductive construction and forbidden subgraph characterization of digraphs $H$ for which LHOM($H$) is in $\mathrm{L}$? If an inductive characterization exists, can it be used to provide a simpler logspace algorithm for LHOM($H$)? \section{Appendix B: alternative proofs} \begin{lemma}\label{d2} If an oriented tree $T$ is constructible, then $T$ contains neither a $\mathrm{Z_6}$ nor a fuzzy $\mathrm{N}$ as an induced subgraph.
\end{lemma} \begin{proof} Assume that $T$ is the up-join of $T_0,\dots,T_n$, and $T_0,\dots,T_n$ do not contain a $\mathrm{Z_6}$ or fuzzy $\mathrm{N}$ as an induced subgraph. Clearly, $T$ can contain a $\mathrm{Z_6}$ only in $\al_{\ell(v_0)-1}$. Note that by Definition~\ref{L_construction}, each $T_i$ contains at most one component in $\al_{\ell(v_0)-1}$, which is an out-star since $T_i$ contains at most one vertex in $L_{\ell(v_0)-1}$ with out-degree more than $0$. It follows that level $\al_{\ell(v_0)-1}$ of $T$ is an in-spider, and therefore it does not contain a $\mathrm{Z_6}$ as an induced subgraph. Suppose that $T$ contains a fuzzy $\mathrm{N}$ denoted by $N$. Let $P_1, \bar{P}_2, P_3$ be the subpaths of $N$ as defined in Definition~\ref{def_fuzzyN}. Let $t_1$ be the last vertex of $P_1$, and $b_2$ be the first vertex of $P_2$. Let $v_0,v_1,\dots,v_n$ be the central and join vertices of $T$. Since $T_0,\dots,T_n$ do not contain a fuzzy $\mathrm{N}$, $N$ must contain an arc of the form $v_iv_0$, where we assume without loss of generality that $i=1$. It follows that $N$ must contain vertices both in $L_{\ell(v_0)}$ and $L_{\ell(v_1)}$. We show that $N$ contains a vertex neither in $L_{\ell(v_0)+1}$ nor in $L_{\ell(v_1)-1}$, contradicting that $N$ has height at least $2$. Assume first that $N$ contains a vertex in $L_{\ell(v_0)+1}$. By the definition of a fuzzy $\mathrm{N}$, each of $P_1$, $\bar{P}_2$ and $P_3$ must contain a vertex in $L_{\ell(v_0)+1}$. Consider $j$ such that $t_1$ is a vertex of $T_j$. Suppose first that $j \neq 0$. Then both $P_1$ and $\bar{P}_2$ have their first vertex in $T_j$. Also, both $P_1$ and $\bar{P}_2$ must contain a vertex in $L_{\ell(v_1)}$. Since $v_j$ is the only vertex of $T_j$ in $L_{\ell(v_1)}$ with more than zero outneighbours in $T_j$, both $P_1$ and $\bar{P}_2$ must contain $v_j$. But then $T_j$ must contain a cycle, contradicting that $T_j$ is a tree.
If $j = 0$, then since $v_0$ is the only vertex of $T_0$ with inneighbours in $L_{\ell(v_1)}$, both $P_1$ and $\bar{P}_2$ must contain $v_0$, and thus $T_0$ contains a cycle. This is impossible. Assume therefore that $N$ contains a vertex in $L_{\ell(v_1)-1}$. Let $j$ be such that $b_2$ is a vertex of $T_j$. As above, it follows that both $\bar{P}_2$ and $P_3$ contain vertex $v_j$, indicating that $T_j$ contains a cycle. This is a contradiction. The analysis is analogous when $T$ is obtained using a down-join operation. \end{proof} The proof of the following lemma is easy, or it can be extracted from \cite{soda_lhom}. \begin{lemma}\label{imp_prop} Let $H$ be a digraph. If $H$ contains a circular $\mathrm{N}$ with congruent walks $X = x_0 x_1\dots x_n$, where $x_0=x_n=x$, $Y = y_0 y_1 \dots y_n$, where $y_0 = y_n = y$, and $Z = z_0 z_1 \dots z_n$, where $z_0 = y$ and $z_n = x$, then $H$ has the following \emph{implication property}. Let $P = p_0 p_1 \dots p_n$ be an oriented path congruent to $X$ (and hence also to $Y$ and $Z$) with lists $L(p_i) = \{x_i, y_i, z_i\}$, for each $0 \leq i \leq n$. Then there are list homomorphisms $\varphi_{xx}$, $\varphi_{yy}$, and $\varphi_{yx}$, each from $P$ to $H$ such that \begin{itemize} \item $\varphi_{xx}(p_0) = \varphi_{xx}(p_n)=x$, \item $\varphi_{yy}(p_0) = \varphi_{yy}(p_n)=y$, \item $\varphi_{yx}(p_0)=y$ and $\varphi_{yx}(p_n)=x$, \end{itemize} and there is no list homomorphism $\varphi_{xy}$ from $P$ to $H$ such that $\varphi_{xy}$ maps $p_0$ to $x$ and $p_n$ to $y$. \end{lemma} \begin{lemma}\label{cN->z} If an oriented tree $T$ contains a circular $\mathrm{N}$, then it contains a $\mathrm{Z_6}$ or a fuzzy $\mathrm{N}$ as an induced subgraph. \end{lemma} \begin{proof} We show that if $T$ contains neither an induced $\mathrm{Z_6}$ nor a fuzzy $\mathrm{N}$, then $T$ cannot have the implication property. This implies by Lemma~\ref{imp_prop} that $T$ cannot have a circular $\mathrm{N}$.
Since we are assuming that $T$ has no induced $\mathrm{Z_6}$ or fuzzy $\mathrm{N}$, $T$ is the up-join (or down-join) of some trees $T_0,\dots,T_n$. We suppose that the central vertex is $v_0$, and the join vertices are $v_1,\dots,v_n$. We assume that $T$ is the up-join and note that the analysis for down-join is similar. We inductively assume that we already showed that none of $T_0,\dots,T_n$ contains a circular $\mathrm{N}$ (since they do not contain an induced $\mathrm{Z_6}$ or fuzzy $\mathrm{N}$). Therefore the circular $\mathrm{N}$ in $T$ must contain an arc $v_iv_0$ for some $1 \leq i \leq n$. Let $X$ (with first vertex $x$), $Y$ (with first vertex $y$) and $Z$ be the congruent walks making up the circular $\mathrm{N}$ in $T$, and let $P = p_0 \dots p_n$ be a path congruent to $X$. Let $\varphi_{xx}$, $\varphi_{yy}$, and $\varphi_{yx}$ be the list homomorphisms associated with the implication property from Lemma~\ref{imp_prop}. Informally, all arguments below will use the fact that $v_0$ is a ``bottleneck vertex'' in $T$. Suppose first that $x \in V(T_{j_x})$ and $y \in V(T_{j_y})$. \emph{Assume that $j_x \neq j_y$}. Let $p_\alpha$ and $p_{\beta}$ be vertices of $P$ such that \begin{itemize} \item $\alpha$ is minimum such that $\varphi_{yx}(p_\alpha) = v_0$, \item if $p_{\alpha}p_{\alpha + 1}$ is a forward arc, \begin{itemize} \item then let $\beta$ be minimum such that $\alpha < \beta$, $\varphi_{yx}(p_\beta) = v_0$, and $p_{\beta}p_{\beta+1}$ is a backward arc, \item otherwise set $\beta = \alpha$. \end{itemize} \end{itemize} It is not difficult to see that such $\alpha$ and $\beta$ must exist. This is because $P$ is a path, so the image of $P$ under $\varphi_{yx}$ must contain a path between $y$ and $x$, and such a path must enter $T_0$ through $v_0$ and it must also leave $T_0$ through $v_0$. Observe that it follows from the above definition that the image of $P(p_\alpha,p_\beta)$ under $\varphi_{yx}$ is entirely in $T_0$.
We modify $\varphi_{xx}$ as \[\varphi_{xx}'(u)= \begin{cases} \varphi_{xx}(u) &\text{ if $u$ is not a vertex of $P(p_\alpha,p_\beta)$,}\\ \varphi_{yx}(u) &\text{ if $u$ is a vertex of $P(p_\alpha,p_\beta)$.} \end{cases} \] We claim that $\varphi_{xx}'$ is a list homomorphism from $P$ to $T$. Clearly, we only need to check that arcs $p_{\alpha-1}p_{\alpha}$ and $p_{\beta}p_{\beta+1}$ are mapped to an arc of $T$. Since $p_{\alpha}$ has indegree $1$ (by minimality of $\alpha$), and $p_{\alpha}$ is mapped to a vertex in level $L_{\ell(v_0)}$, $p_{\alpha-1}$ must be mapped to $v_i$ for some $1 \leq i \leq n$. (Recall that $v_1,\dots,v_n$ are the only vertices of $T_1,\dots,T_n$ in level $L_{\ell(v_0)-1}$ with outdegree greater than zero.) So $\varphi_{xx}'(p_{\alpha-1}) = v_i$ and $\varphi_{xx}'(p_{\alpha}) = v_0$, and thus $p_{\alpha-1}p_{\alpha}$ is mapped to an arc of $T$. An analogous argument shows that $p_{\beta}p_{\beta+1}$ is also mapped to an arc. In addition, we can show in a similar way that we can construct $\varphi_{yy}'$ that agrees with $\varphi_{yy}$ everywhere, except that it takes the value of $\varphi_{yx}$ on vertices of $P(p_\alpha,p_\beta)$. Now we can construct $\varphi_{xy}$, the list homomorphism that maps $p_0$ to $x$ and $p_n$ to $y$, showing that $T$ does not have the implication property. The function $\varphi_{xy}$ on $P(p_0,p_{\alpha-1})$ is $\varphi_{xx}'$. On $P(p_\alpha, p_\beta)$, $\varphi_{xy}$ is $\varphi_{yx}$, and on $P(p_{\beta+1},p_n)$, $\varphi_{xy}$ is $\varphi_{yy}'$. Since $\varphi_{xx}'(p_\alpha) = \varphi_{yx}(p_\alpha) = v_0$, and $\varphi_{yx}(p_\beta) = \varphi_{yy}'(p_\beta) = v_0$, $\varphi_{xy}$ is a list homomorphism from $P$ to $T$, giving a contradiction. Assume therefore that $j_x = j_y$. If $\varphi_{yx}$ maps any vertex of $P$ to $v_0$, then we can use the same argument as above.
If $\varphi_{yx}$ does not map any vertex of $P$ to $v_0$, then at least one of $\varphi_{xx}$ or $\varphi_{yy}$ must map a vertex of $P$ to $v_0$, since otherwise there would be a circular $\mathrm{N}$ entirely in $T_{j_x}$, contradicting the induction hypothesis. A similar argument can be used to construct a list homomorphism $\varphi_{xy}$ mapping $p_0$ to $x$ and $p_n$ to $y$ from $\varphi_{xx}$ and $\varphi_{yy}$. The existence of $\varphi_{xy}$ gives a contradiction. The remaining cases can be analyzed in a very similar way, and therefore we give only brief arguments. Assume that $x,y \in V(T_0)$. Let $\varphi$ be any list homomorphism associated with the implication property. Then we have that $\varphi(p_0), \varphi(p_n) \in \{x,y\}$. Let $\alpha$ and $\beta$ be such that \begin{itemize} \item $\alpha$ is minimum such that $\varphi(p_{\alpha}) \in L_{\ell(v_0)}$, and $p_{\alpha} p_{\alpha+1}$ is a backward arc. (Note that such an $\alpha$ must exist since otherwise there is a \cN\ entirely in $T_0$.) \item $\beta$ is maximum such that $\varphi(p_\beta) \in L_{\ell(v_0)}$, and $p_{\beta-1} p_\beta$ is a forward arc. \end{itemize} Note that $\varphi$ (any of the list homomorphisms associated with the implication property) maps $p_{\alpha}$ and $p_{\beta}$ to $v_0$. Therefore let $\varphi_{xy}$ be the function that is $\varphi_{xx}$ on vertices of $P(p_0,p_\beta)$, and $\varphi_{yy}$ on vertices of $P(p_{\beta+1},p_n)$. Since $\varphi_{xx}(p_\beta) = v_0 = \varphi_{yy}(p_\beta)$, $\varphi_{xy}$ is a list homomorphism, and we have the desired contradiction. Assume next that $x \in V(T_0)$ and $y \in V(T_q)$ for some $1 \leq q \leq n$. (The case when $y \in V(T_0)$ and $x \in V(T_q)$ can be analyzed similarly.) We define $\alpha$ and $\beta$ as in the previous case. When $\varphi$ (above) is $\varphi_{xx}$, $\varphi_{xx}(p_{\alpha}) = \varphi_{xx}(p_{\beta}) = v_0$.
Then let $\varphi_{xy}$ be the function that is $\varphi_{xx}$ on vertices of $P(p_0,p_\beta)$, and $\varphi_{yy}$ on vertices of $P(p_{\beta+1},p_n)$. As before, we can check that $\varphi_{xy}$ is a list homomorphism, and we have the desired contradiction. \end{proof} \begin{lemma}\label{no_HM} Let $T$ be an oriented tree. If $T$ contains a $\mathrm{Z_6}$ or a fuzzy $\mathrm{N}$ as an induced subgraph, then $T$ does not have conservative polymorphisms $f_1$, $f_2$ and $f_3$ that form a Hagemann-Mitschke chain. \end{lemma} \begin{proof} If $T$ contains a $\mathrm{Z_6}$ or a fuzzy $\mathrm{N}$ as an induced subgraph, then $T$ contains a circular $\mathrm{N}$ by Theorem~\ref{circ_N_ind}. If $T$ contains a circular $\mathrm{N}$, then $T$ does not have conservative polymorphisms that form a Hagemann-Mitschke chain of any length by \cite{soda_lhom}. \end{proof} \section{A simple inductive algorithm for LHOM($T$)}\label{algorithm} We note that if an oriented tree $T$ contains a \z6\ or a \fN\ as an induced subgraph, then LHOM($T$) is $\mathrm{NL}$-hard. This follows from the fact that if $T$ contains a \z6\ or a \fN\ as an induced subgraph, then $T$ contains a \cN, as we will show in Theorem~\ref{circ_N_ind}. If $T$ contains a \cN, then LHOM($T$) is $\mathrm{NL}$-hard by \cite{soda_lhom}.\footnote{However, it is not hard to directly prove that if an oriented tree $T$ contains a \z6\ or a \fN\ as an induced subgraph, then LHOM($T$) is $\mathrm{NL}$-hard. Note that we can \emph{primitive-positive define} (\emph{pp-define}) the relation $R = \{(0,0),(0,1),(1,1)\}$ over a \z6\ or a fuzzy $\mathrm{N}$. For example, the pp-definition of $R$ over \z6\ can be done in the same way as it is done for an undirected path on $6$ vertices in \cite{stacs_lhom}. The pp-definition of $R$ over a \fN\ can be done very similarly to the definition of $R$ using a \cN\ (see \cite{soda_lhom}).
If $R$ can be pp-defined over $H$, then it is well known that there is a straightforward logspace reduction from the $\mathrm{NL}$-complete directed graph unreachability problem to LHOM($H$).} In the rest of this section, we inductively construct a logspace algorithm for LHOM($T$). We begin with a high-level description given as Algorithm~\ref{high_level} below. Suppose that $T$ is the up-join of $T_0,T_1,\dots,T_n$, $v_0$ is the central vertex, and $v_1,\dots,v_n$ are the join vertices. (The case when $T$ is the down-join of some trees can be analyzed similarly.) We assume inductively that there is a logspace algorithm $A_i$ for LHOM($T_i$) for each $0 \leq i \leq n$. \begin{algorithm} \caption{High-level algorithm when $T$ is the up-join of $T_0,T_1,\dots,T_n$.}\label{high_level} \begin{algorithmic}[1] \INPUT A digraph $G$. ($G$ could have many components.) \OUTPUT YES if there is a list homomorphism from $G$ to $T$, and NO otherwise. \For{each component $G'$ of $G$} \State Check whether $G'$ is leveled. If not, output NO.\label{is_leveled} \State If the height of $G'$ is larger than the height of $T$, output NO.\label{is_too_high} \State Find the height $h$ of $G'$, and find the vertex levels $L_0,L_1,\dots,L_h$ of $G'$. \For{each $0 \leq \alpha \leq h - 1$} \If{there is a list homomorphism $h$ from $G'$ to $T$ such that $h$ maps level $L_\alpha$ of $G'$ \indent \indent to level $L_{\ell(v_0) - 1}$ of $T$} mark $G'$ as \textsc{good}\label{main_check} \EndIf \EndFor \If{there is a list homomorphism from $G'$ to $T_i$ for some $0 \leq i \leq n$} mark $G'$ as \textsc{good}\label{to_whole_tree} \EndIf \EndFor \If{all components $G'$ of $G$ are marked \textsc{good}} output YES. \Else{ output NO.}\label{really_no} \EndIf \end{algorithmic} \end{algorithm} First we argue the correctness of Algorithm~\ref{high_level}, and then show how to implement line~\ref{main_check}. 
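Lines~\ref{is_leveled}--\ref{is_too_high} of the high-level algorithm need to decide whether a component is leveled and to compute its vertex levels. As a plain illustration (our own sketch with names of our choosing, not the paper's logspace implementation, which goes through the auxiliary graph $\cG(G,d)$ and Reingold's algorithm), one can propagate net lengths of walks over a connected component, in the spirit of the proof of Lemma~\ref{two}:

```python
from collections import deque

def levels(vertices, arcs):
    """For a connected digraph given by `arcs` (pairs (u, v) meaning a
    forward arc u -> v), return its vertex levels [L_0, ..., L_h] if it is
    leveled, and None otherwise."""
    adj = {v: [] for v in vertices}
    for u, v in arcs:
        adj[u].append((v, +1))   # following a forward arc raises the level
        adj[v].append((u, -1))   # traversing it backwards lowers the level
    start = next(iter(vertices))
    label = {start: 0}           # net length of some walk from `start`
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v, d in adj[u]:
            if v not in label:
                label[v] = label[u] + d
                queue.append(v)
            elif label[v] != label[u] + d:
                return None      # two walks with different net lengths
    m = min(label.values())      # shift so that the bottom level is L_0
    h = max(label.values()) - m
    out = [[] for _ in range(h + 1)]
    for v in sorted(label):
        out[label[v] - m].append(v)
    return out
```

For example, the oriented path $0\rightarrow 1\leftarrow 2\rightarrow 3$ is leveled with levels $\{0,2\}$ and $\{1,3\}$, while the acyclic digraph with arcs $0\rightarrow 1$, $1\rightarrow 2$ and $0\rightarrow 2$ is not leveled, since the two walks from $0$ to $2$ have different net lengths.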
\begin{lemma} Let $T$ be the up-join of $T_0,\dots,T_n$, and let $A_i$ be the corresponding logspace algorithms for LHOM($T_i$), $0 \leq i \leq n$, as described above. Let $G$ be a digraph. Then there is a list homomorphism from $G$ to $T$ if and only if Algorithm~\ref{high_level} on input $G$ outputs YES. \end{lemma} \begin{proof} Clearly, if Algorithm~\ref{high_level} outputs YES, then there is a list homomorphism from $G$ to $T$. Suppose therefore that Algorithm~\ref{high_level} outputs NO. If Algorithm~\ref{high_level} outputs NO in line~\ref{is_leveled} or line~\ref{is_too_high}, then clearly there can be no list homomorphism from $G$ to $T$. Assume therefore that Algorithm~\ref{high_level} outputs NO in line~\ref{really_no}. Then at least one of the components $G'$ of $G$ is \emph{not} marked as \textsc{good}. Assume for contradiction that there is a list homomorphism $h$ from $G'$ to $T$. No vertex level of $G'$ can be mapped to $L^T_{\ell(v_0)-1}$, since then the algorithm would have marked $G'$ as \textsc{good} in line~\ref{main_check}. Therefore $h$ must map $G'$ to $T$ in such a way that no vertex of $G'$ is mapped to a vertex in $L^T_{\ell(v_0)-1}$. That means that $h$ does not map any arc of $G'$ to any of the arcs $v_1v_0,v_2v_0,\dots,v_nv_0$. Since $G'$ is connected, this implies that $h$ must map $G'$ to $T_i$ for some $0 \leq i \leq n$. But that also cannot happen because then $G'$ would be marked \textsc{good} in line~\ref{to_whole_tree}, a contradiction. \end{proof} To implement Algorithm~\ref{high_level} in logspace, we need the following lemmas about digraphs. \begin{lemma}\label{two} Let $G$ be a connected acyclic digraph. Fix two arbitrary vertices $u,v \in V(G)$.
If $G$ is not leveled, then there are two different walks $W_1$ and $W_2$, both from $u$ to $v$, such that $net(W_1) \neq net(W_2)$. \end{lemma} \begin{proof} We prove the contrapositive: suppose that every walk from $u$ to $v$ has the same net length. For each vertex $w$ of $G$, let $Q_w$ be an arbitrary walk from $u$ to $w$. Let $Q_w'$ be another arbitrary walk from $u$ to $w$ (it could be the same as $Q_w$). If $net(Q_w) \neq net(Q_w')$, then let $W$ be a walk from $w$ to $v$. Then both $Q_w W$ and $Q_w' W$ are walks from $u$ to $v$, and $net(Q_w W) \neq net(Q_w' W)$, a contradiction. Therefore $net(Q_w) = net(Q_w')$. Assign each vertex $w$ the integer $net(Q_w)$. Let $xy$ be an arbitrary arc of $G$; then $net(Q_y) = net(Q_x xy)$, where $Q_x xy$ denotes the walk $Q_x$ extended by one more step from the last vertex $x$ of $Q_x$ to vertex $y$. Therefore $net(Q_y) = net(Q_x) + 1$, i.e., if $xy$ is an arc, then the integer assigned to $y$ is one larger than the integer assigned to $x$. Let $m$ be the minimum of $net(Q_w)$ over $w \in V(G)$. Note that $m \leq 0$ since $net(Q_u) = 0$. Assign each vertex $w$ to level $L_{net(Q_w) - m}$. It follows that $G$ is leveled. \end{proof} \begin{lemma}\label{bounded} Let $G$ be a connected acyclic digraph. Then $G$ is leveled if and only if the net lengths of walks in $G$ are bounded. \end{lemma} \begin{proof} If $G$ is leveled, then every oriented walk from a vertex $u$ to a vertex $v$ of $G$ has net length $\ell(v) - \ell(u)$, and therefore the net lengths of walks in $G$ are bounded. Conversely, suppose that $G$ is not leveled, and fix two vertices $u$ and $v$ of $G$. By Lemma~\ref{two} there are two different walks $W_1$ and $W_2$ from $u$ to $v$ such that $net(W_1) \neq net(W_2)$. Assume without loss of generality that $net(W_1) > net(W_2)$, so that $net(W_1\bar{W}_2) = net(W_1) - net(W_2) \geq 1$. Then for any positive integer $k$, $net((W_1 \bar{W}_2)^{k+1}) > k$. \end{proof} \begin{definition} Given a digraph $G$ and an integer $d$, the \emph{undirected} graph $\cG(G,d)$ is constructed as follows.
(For short, we write $\cG$ for $\cG(G,d)$.) \begin{itemize} \item The vertex set of $\cG$ is defined as $V(\cG) = \{I_j(v) \;|\; v \in V(G) \text{ and } 0 \leq j \leq d\}$ \item The edge set of $\cG$ is defined as $E(\cG) = \{(I_j(u), I_{j+1}(v)) \;|\; uv \text{ is a forward arc of $G$, and } 0 \leq j \leq d-1\}$ \end{itemize} We use $I_k$ to denote $\bigcup_{v \in V(G)} I_k(v)$, and $I_k^{\cG}$ to emphasize that $I_k$ is defined with respect to $\cG$. \end{definition} Clearly, there is a logspace algorithm that takes $G$ and $d$ as inputs and outputs $\cG(G,d)$. The following lemma is a simple observation. \begin{lemma}\label{walks} Let $G$ be a digraph, and consider $\cG = \cG(G,d)$. There is a walk from a vertex $I_0(u) \in I_0^{\cG}$ to a vertex $I_k(v) \in I_k^{\cG}$ if and only if there is a walk $W = a_0 \dots a_n$ in $G$ such that $a_0 = u$ and $a_n = v$ such that $net(W) = k$, and for each $0 \leq i \leq n$, $\ell(a_i) \geq \ell(a_0)$ (i.e., $a_0$ is in the bottom level of $W$). \end{lemma} \begin{lemma}\label{GG} Let $G$ be a connected digraph, and $\cG = \cG(G,d+1)$. Then $G$ is leveled and has height at most $d$ if and only if for each vertex $I_0(u) \in I_0^{\cG}$ and $I_{d+1}(v) \in I_{d+1}^{\cG}$ there is no walk from $I_0(u)$ to $I_{d+1}(v)$ in $\cG(G,d+1)$. \end{lemma} \begin{proof} Assume that $G$ is leveled and has height at most $d$. Suppose for contradiction that for some vertices $I_0(u) \in I_0^{\cG}$ and $I_{d+1}(v) \in I_{d+1}^{\cG}$ there is a walk $I_0(a_0) \dots I_{d+1}(a_n)$ in $\cG(G,d+1)$. Then the walk $a_0\dots a_n$ has net length $d+1$, so $G$ cannot have height at most $d$, a contradiction. Conversely, assume that $G$ is not leveled or has height at least $d+1$.
In both cases (in the former case using Lemma~\ref{bounded}), there is a walk $W = a_0 \dots a_n$ in $G$ with $net(W) = d+1$ such that for each $0 \leq i \leq n$, $\ell(a_i) \geq \ell(a_0)$ (i.e., $a_0$ is in the bottom level of $W$). Setting $u = a_0$ and $v = a_n$, by Lemma~\ref{walks} there is a walk from $I_0(u)$ to $I_{d+1}(v)$ in $\cG(G,d+1)$. \end{proof} \begin{lemma}\label{component_in_L} Let $G$ be a leveled digraph of height at most $h$ (a constant), and $v \in V(G)$. Then there is a logspace algorithm that outputs the up-component (down-component) of $G$ at $v$. \end{lemma} \begin{proof} We produce $\cG = \cG(G,h)$. We output every vertex $u$ such that there is an undirected path (using Reingold's algorithm \cite{reingold}) from $I_0(v)$ to $I_j(u)$ in $\cG$ for some $j$. These vertices form $U$. Now we output every arc $ab$ such that $a,b \in U$. The down-component at $v$ can be produced in a similar way. \end{proof} We are ready to specify the main subroutine (Algorithm~\ref{level_alg}) of Algorithm~\ref{high_level}. We denote the disjoint union of the trees $T_1,\dots,T_n$ by $T'$. Inductively, we assume that there is a logspace algorithm $A_i$ for each of LHOM($T_i$), $0 \leq i \leq n$. We can easily combine the algorithms for LHOM($T_i$), $1 \leq i \leq n$, to obtain a logspace algorithm $A'$ for LHOM($T'$) (we use Reingold's logspace algorithm for undirected reachability \cite{reingold} to output the components $G'$ of $G$, and then test whether there is a list homomorphism from each $G'$ to one of the $T_i$'s). \begin{algorithm} \caption{Check if there is a list homomorphism $h : G \rightarrow T$ such that $h(L_\alpha^G) \subseteq L_{\ell(v_0)-1}^T$.}\label{level_alg} \begin{algorithmic}[1] \INPUT A leveled digraph $G$, and an integer $0 \leq \alpha \leq height(G) - 1$. \OUTPUT YES if there is a list homomorphism $h : G \rightarrow T$ such that $h(L_\alpha^G) \subseteq L_{\ell(v_0)-1}^T$ and NO otherwise.
\State Let $\cU_{all}$ be the set of up-components of $G$ at level $L_{\alpha+1}^G$. \State Using $A_0$, check for each $U \in \cU_{all}$ if there is a list homomorphism $h$ from $U$ to $T_0$ such that for each $v \in V(U) \cap L_{\alpha+1}^G$ that has at least one inneighbour when considered as a vertex of $G$, $h(v) = v_0$ (this can be enforced by setting the list of $v$ to $\{v_0\}$). Let $\cU$ be the set of those $U \in \cU_{all}$ for which such a list homomorphism exists. \label{top_dominant} \State Let $G'$ be the subgraph of $G$ induced by the vertices $V(G) \setminus V(\cU)$, where a vertex $v$ belongs to $V(\cU)$ if it is a vertex of some up-component in $\cU$. \State Using $A'$, check if there is a list homomorphism $h$ from $G'$ to $T'$ such that vertices of $G'$ in level $L_\alpha^G$ are mapped to vertices of $T'$ in $L_{\ell(v_0) -1}^T$. If no such list homomorphism exists, output NO.\label{reject} \State Otherwise output YES. \end{algorithmic} \end{algorithm} The following lemma proves the correctness of Algorithm~\ref{level_alg}. \begin{lemma} Suppose that $T$ is the up-join of $T_0,T_1,\dots,T_n$, $v_0$ is the central vertex of $T_0$, and $v_i$ is the join vertex of $T_i$, $1 \leq i \leq n$. Let $G$ be a leveled digraph, and $0 \leq \alpha \leq height(G) - 1$ be an integer. Then there is a list homomorphism $h : G \rightarrow T$ such that level $L_\alpha$ of $G$ is mapped to level $L_{\ell(v_0)-1}$ of $T$ (i.e., $h(L_\alpha^G) \subseteq L_{\ell(v_0)-1}^T$) if and only if Algorithm~\ref{level_alg} outputs YES on input $G,\alpha$. \end{lemma} \begin{proof} Assume that Algorithm~\ref{level_alg} outputs YES. (The proof is aided by Figure~\ref{alg_2}.) Then there is a list homomorphism $h'$ from $G'$ to $T'$, and a list homomorphism $h_U$ from each up-component $U \in \cU$ to $T_0$.
Since all arcs $v_1v_0,v_2v_0,\dots,v_nv_0$ are present in $T$, the map $h$ defined as $h(v) = h'(v)$ if $v \in V(G')$, and $h(v) = h_U(v)$ if $v \in V(U)$ for some $U \in \cU$, is a list homomorphism from $G$ to $T$. Conversely, assume that there is a list homomorphism $g$ from $G$ to $T$. If there is a list homomorphism from an up-component $U$ to $T_0$, then we can assume that $g$ maps $U$ to $T_0$. After these up-components are removed from $G$ to obtain $G'$, $g|_{G'}$ is a list homomorphism from $G'$ to $T'$. Therefore the algorithm accepts. \begin{figure}[htb] \begin{center} \includegraphics[scale=\wp]{Alg_proof.pdf} \end{center} \caption{Illustration of the correctness proof of Algorithm~\ref{level_alg}.}\label{alg_2} \end{figure} \end{proof} It is routine to implement Algorithms~\ref{high_level} and \ref{level_alg} so that they use only logarithmic space. This is done using the basic trick that if $A_1$ and $A_2$ are logspace algorithms, and $A_3$ is the algorithm that first runs $A_1$ on the input, then feeds the output of $A_1$ to $A_2$, and then outputs the output of $A_2$, then $A_3$ is again a logspace algorithm (the composition of logspace computable functions is logspace computable). The various reachability tests the algorithms use can be implemented using Reingold's logspace algorithm for undirected reachability \cite{reingold}. \section{Introduction} Given two digraphs $G$ and $H$, a homomorphism $\varphi : G \rightarrow H$ is a mapping $\varphi : V(G) \rightarrow V(H)$ such that $uv \in A(G)$ implies that $\varphi(u) \varphi(v) \in A(H)$, where $A(G)$ denotes the arc set of $G$. The corresponding algorithmic problem \emph{Digraph Homomorphism} asks if $G$ has a homomorphism to $H$. For example, it is easy to see that $G$ has a homomorphism into the clique $K_c$ if and only if $G$ is $c$-colorable. Instead of digraphs, one can consider homomorphism problems in the more general context of relational structures.
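To make the definition of Digraph Homomorphism concrete, here is a brute-force sketch (our own, exponential-time, for illustration only; all names are ours) that tests the existence of a homomorphism by trying every map $V(G) \rightarrow V(H)$, and models the clique $K_c$ as a digraph with an arc in each direction between distinct vertices:

```python
from itertools import product

def has_homomorphism(g_vertices, g_arcs, h_vertices, h_arcs):
    """Brute-force test: is there a map phi : V(G) -> V(H) sending every
    arc uv of G to an arc phi(u)phi(v) of H?"""
    g_vertices = list(g_vertices)
    h_vertices = list(h_vertices)
    for images in product(h_vertices, repeat=len(g_vertices)):
        phi = dict(zip(g_vertices, images))
        if all((phi[u], phi[v]) in h_arcs for (u, v) in g_arcs):
            return True
    return False

def clique(c):
    """The clique K_c as a digraph: arcs both ways between distinct vertices."""
    return set(range(c)), {(i, j) for i in range(c)
                           for j in range(c) if i != j}

# A directed triangle is 3-colorable but not 2-colorable, so it has a
# homomorphism to K_3 but not to K_2.
triangle_vertices = {0, 1, 2}
triangle_arcs = {(0, 1), (1, 2), (2, 0)}
```

This mirrors the $c$-colorability example in the text: a homomorphism into $K_c$ is exactly a proper $c$-coloring of the underlying graph.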
Feder and Vardi \cite{Feder93:monotone} observed that the standard framework for the Constraint Satisfaction Problem (CSP) can be formulated as homomorphism problems for relational structures. In fact, they showed that every such problem is equivalent to a Digraph Homomorphism problem, hence Digraph Homomorphism is as expressive as the CSP in general. The expressive power of Digraph Homomorphism can be increased by introducing lists. Given digraphs $G$ and $H$ and a list $L(v) \subseteq V(H)$ for each $v \in V(G)$, a \emph{list homomorphism} $\varphi$ from $G$ to $H$ is a homomorphism from $G$ to $H$ such that $\varphi(v) \in L(v)$ for each $v \in V(G)$. The \emph{List Homomorphism problem with template $H$} (LHOM($H$)) is the following algorithmic problem. Given a digraph $G$ and a list $L(v)$ for each $v \in V(G)$, decide if there is a list homomorphism from $G$ to $H$.\footnote{We remark that LHOM$(H)$ is identical to CSP($\mathbb{B}$), where $\mathbb{B}$ is the relational structure that contains the binary relation that is the arc set of the digraph $H$, and a unary relation $U_S = S$ for each $S \subseteq V(H)$.} The List Homomorphism problem was introduced by Feder and Hell in 1998 \cite{FH:98:LHR}, and it has since been studied extensively \cite{stacs_lhom,Feder/et_al:07:LHG,Feder/et_al:99:LHC,FederHH03,Gutin06:mincosthomomorphism,Hell/Rafiey:11:DLH}. In this paper, we study List Homomorphism problems of logarithmic space complexity. Such problems with graph and digraph templates have been studied in \cite{stacs_lhom,soda_lhom,lics_lhom}. When $H$ is an undirected graph, LHOM($H$) is in logspace\footnote{These results assume that $\mathrm{L} \neq \mathrm{NL}$.} ($\mathrm{L}$) if and only if $H$ does not contain any of a certain set of graphs as an induced subgraph, or equivalently, if $H$ can be inductively constructed using two simple operations (see \cite{stacs_lhom} for details). We call such graphs \emph{skew decomposable}.
Relying on this inductive construction, the logspace algorithm for LHOM($H$) is remarkably simple. When $H$ is a \emph{digraph}, LHOM($H$) is in $\mathrm{L}$ if and only if $H$ does not contain a so-called \emph{circular $\mathrm{N}$} (see \cite{soda_lhom}). We note that $H$ not containing a circular $\mathrm{N}$ is \emph{not} a forbidden induced subgraph characterization. Although the lack of a circular $\mathrm{N}$ in $H$ gives a unifying reason why LHOM($H$) is in $\mathrm{L}$, the first of the two existing logspace algorithms for LHOM($H$) is quite complicated (see \cite{soda_lhom}), and the second algorithm (\emph{a symmetric Datalog program}) requires an involved and technical analysis, which is the subject of \cite{lics_lhom}. This is in stark contrast to the simple and easy-to-analyze logspace algorithm for LHOM($H$) when $H$ is a skew decomposable undirected graph \cite{stacs_lhom}. Whether digraphs for which LHOM($H$) is in $\mathrm{L}$ can be characterized in terms of forbidden induced subgraphs, whether they enjoy an inductive construction, and whether such a construction could be used to build a simple logspace algorithm for LHOM($H$) are intriguing open problems. In this paper, we answer these questions for oriented trees in the positive. (We remark that our results re-prove the $\mathrm{L}-\mathrm{NL}$ dichotomy for LHOM($T$) when $T$ is an oriented tree.) Graphs and digraphs $H$ for which LHOM($H$) is in $\mathrm{L}$ form a natural class, and we believe that our inductive characterization of such oriented trees could be of independent interest. In fact, the FPT-algorithm in \cite{esa_dlhom} uses the inductive characterization of skew decomposable graphs in an essential way. \textbf{Detailed results and structure of the paper:} In Section~\ref{preliminaries}, we introduce basic concepts and define an oriented path we call \z6, and a class of oriented paths we call fuzzy $\mathrm{N}$-s. 
In Section~\ref{structural}, we describe a way to construct oriented trees inductively (Definition~\ref{constructible}). We proceed to prove the main combinatorial result of the paper: an oriented tree $T$ contains no $\mathrm{Z_6}$ and no fuzzy $\mathrm{N}$ as an induced subgraph if and only if $T$ can be constructed using the inductive construction in Definition~\ref{constructible} (Theorem~\ref{construction_theorem}). In Section~\ref{algorithm}, we briefly argue that if a tree $T$ contains a $\mathrm{Z_6}$ or a fuzzy $\mathrm{N}$ as an induced subgraph, then LHOM($T$) is $\mathrm{NL}$-hard. Then for oriented trees $T$ that do not contain any such induced subgraph, we provide a simple logspace algorithm relying on the aforementioned inductive characterization of $T$. In Section~\ref{equivalence}, we give an \emph{unconditional} proof that an oriented tree $T$ contains a circular $\mathrm{N}$ if and only if $T$ contains a $\mathrm{Z_6}$ or a fuzzy $\mathrm{N}$ as an induced subgraph. (Note that if we assume that $\mathrm{L} \neq \mathrm{NL}$, then there is a simpler argument.) In Section~\ref{algebra_sec}, we show that $T$ does not contain a $\mathrm{Z_6}$ or a fuzzy $\mathrm{N}$ as an induced subgraph if and only if $T$ admits a \emph{Hagemann-Mitschke chain of conservative polymorphisms of length $3$} (note that this is similar to the characterization of undirected graphs in \cite{stacs_lhom}). We also give an example of a digraph in Section~\ref{algebra_sec} that admits a Hagemann-Mitschke chain of conservative polymorphisms of length $n+1$ but not of length $n$ (Theorem~\ref{ladder_thm}). Therefore we can conclude that in this respect, general digraphs behave differently from oriented trees. In Section~\ref{faster}, we give an $O(|V(T)|^3)$ algorithm to recognize oriented trees that do not contain a $\mathrm{Z_6}$ or a fuzzy $\mathrm{N}$ as an induced subgraph.
This is significantly faster than the $O(|V(T)|^8)$-time implementation of the recognition algorithm in \cite{soda_lhom} restricted to oriented tree inputs.\footnote{Note that $O(|V(T)|^8)$ is the running time of the straightforward implementation of the recognition algorithm (in \cite{soda_lhom}) when inputs are assumed to be oriented trees. We also note that this algorithm runs faster on trees than on general digraphs. We made no attempt to improve the running time of this algorithm. See Appendix A.} For the sake of completeness, Appendix A contains both an inductive construction and an ``explicit'' characterization of oriented paths that contain no \z6\ or \fN\ as an induced subgraph. We summarize the main results of this paper in Theorem~\ref{main}. Note that some parts of this theorem come from \cite{soda_lhom}, as explained below. \begin{theorem}\label{main} Let $T$ be an oriented tree. Then the following conditions are equivalent: \begin{enumerate} \item $T$ contains no induced subgraph that is a \z6\ or a \fN; \item $T$ can be constructed inductively as in Definition~\ref{constructible}; \item $T$ contains no circular $\mathrm{N}$; \item $T$ admits a chain of conservative Hagemann-Mitschke polymorphisms of length $3$; \item $T$ admits a chain of conservative Hagemann-Mitschke polymorphisms of length $n$, for some $n \geq 1$. \end{enumerate} If the above conditions hold, then $\mathrm{LHOM(T)}$ is in $\mathrm{L}$. Otherwise $\mathrm{LHOM(T)}$ is $\mathrm{NL}$-hard. \end{theorem} The outline of the proof of this theorem is the following:\footnote{In Appendix B, we give direct proofs that (2) $\Rightarrow$ (1) and (1) $\Rightarrow$ (3), and we also give a short, indirect proof that (4) $\Rightarrow$ (1).} \begin{itemize} \item (1) $\Rightarrow$ (2) is the content of Lemma~\ref{d1}. \item (2) $\Rightarrow$ (4) is the content of Lemma~\ref{HM_chain_defined}. \item (4) $\Rightarrow$ (5) is trivial. \item (5) $\Rightarrow$ (3) is by \cite{soda_lhom}.
\item (3) $\Rightarrow$ (1) is the content of Lemma~\ref{z->cN}. \end{itemize} It is shown in \cite{soda_lhom} that if $T$ contains no \cN\ then LHOM($T$) is in $\mathrm{L}$, and otherwise LHOM($T$) is $\mathrm{NL}$-hard. However, as discussed above, the merit of the logspace algorithm in this paper is its simple inductive nature. \section{Appendix A}\label{paths} For the sake of completeness, we give two characterizations of oriented paths that contain no $\mathrm{Z_6}$ or fuzzy $\mathrm{N}$ as an induced subgraph. Let this class of oriented paths be denoted by $\cP$. The first characterization is a special case of the inductive characterization of oriented trees. The other characterization gives an explicit ``template'' that all oriented paths in $\cP$ must obey. Roughly, this template specifies in what way certain fuzzy paths can be concatenated so that the resulting path is in $\cP$. \subsection{Oriented Paths} \begin{definition}\label{path_constructible} For $i \in \{1,2\}$, let $P_i$ be either the empty digraph, or an oriented path that has a single vertex $v_i$ in its top vertex level having outdegree $0$. The operation of taking the disjoint union of $P_1,P_2$ and a single new vertex $v_0$, and then adding arcs $v_1v_0$ (if $P_1$ is non-empty) and $v_2v_0$ (if $P_2$ is non-empty) to the resulting digraph is called taking the \emph{top up-join} of $P_1$ and $P_2$. Similarly, for $i \in \{1,2\}$, let $P_i$ be either the empty digraph, or an oriented path that has a single vertex $v_i$ in its bottom vertex level having indegree $0$. The operation of taking the disjoint union of $P_1,P_2$ and a single new vertex $v_0$, and then adding arcs $v_1v_0$ (if $P_1$ is non-empty) and $v_2v_0$ (if $P_2$ is non-empty) to the resulting digraph is called taking the \emph{bottom up-join} of $P_1$ and $P_2$. We can similarly define the \emph{top down-join} and \emph{bottom down-join} of two oriented paths.
If $P$ is an oriented path with a single vertex, we say that $P$ is \emph{constructible}. Inductively, if $P_1$ and $P_2$ are constructible oriented paths (possibly empty), then their top up-join, bottom up-join, top down-join, bottom down-join (when allowed) are also \emph{constructible}. \end{definition} Clearly, Definition~\ref{path_constructible} is a special case of Definition~\ref{constructible}. Therefore this construction does not produce an oriented path that contains a \z6\ or a \fN\ as an induced subgraph. The converse is not difficult to prove, and a proof (using different terminology) can also be found in \cite{phd}. \begin{theorem} An oriented path $P$ does not contain an induced $\mathrm{Z_6}$ or fuzzy $\mathrm{N}$ if and only if $P$ is constructible. \end{theorem} Now we give the template characterization. \begin{definition}\label{wave} An oriented path $W$ is a \emph{wave} if $W$ is $\zz_k$ for some $1 \leq k \leq 5$, or $W$ is of the form $Q_1 A_1 Q_2 A_2 \dots Q_n A_n$, for some $n \geq 1$, where $A_i$ and $Q_i$ are defined as follows. Let $P_1,\dots,P_n$ be minimal fuzzy paths such that for each $1 \leq i \leq n-1$, $height(P_i) > height(P_{i+1})$. Then for each $1 \leq i \leq n$, \begin{itemize} \item if $i$ is odd, then $Q_i$ is of the form $P_i$, and $A_i$ is of the form $\zz_1$ or $\zz_3^{f=1}$, except that $A_n$ (if $n$ is odd) can be $\zz_j^{f=1}$ for any $j \leq 4$; \item if $i$ is even, then $Q_i$ is of the form $\bar{P_i}$, and $A_i$ is $\zz_1$ or $\zz_3^{f=0}$, except that $A_n$ (if $n$ is even) can be $\zz_j^{f=0}$ for any $j \leq 4$. \end{itemize} \end{definition} \begin{theorem}\label{oriented_path_characterisation} Let $P$ be an oriented path that does not contain a $\mathrm{\zz_6}$ or a fuzzy $\mathrm{N}$ as an induced subgraph.
Then $P$ has the form $\bar{U}AV$, or the form $r(\bar{U}AV)$, where $U$ and $V$ are waves, and $A$ is either of the form $\zz_1$ or $\zz_3^{f=0}$. \end{theorem} \begin{proof} Assume that $P = a_1\dots a_n$. \emph{To simplify notation, we write $P(i,j)$ instead of $P(a_i,a_j)$ (denoting the subpath of $P$ starting at $a_i$ and ending at $a_j$) in this proof.} We decompose $P$ as $\bar{U}AV$ or $r(\bar{U}AV)$ for some $U = Q_1' A_1' Q_2' A_2' \dots Q_{m'}' A_{m'}'$ and $V = Q_1 A_1 Q_2 A_2 \dots Q_m A_m$, and $A$, where we are using the notation in Definition~\ref{wave}. Let the levels of $P$ be $L_0,\dots,L_h$, and assume that $h \geq 2$, since otherwise $P$ is clearly a $\zz_k$ for some $1 \leq k \leq 5$. Let $P_1$ be a minimal fuzzy subpath of $P$ of maximum height. We observe first that there can be at most $2$ such paths. If there are more, we choose three arbitrary such subpaths $R_1, R_2, R_3$. Since $height(R_1) = height(R_2) = height(R_3) = h$, the paths $R_1, R_2, R_3$ are edge-disjoint. So we can assume w.l.o.g.\ that when we traverse $P$ from first to last vertex, we first traverse the arcs of $R_1$, then the arcs of $R_2$, and finally the arcs of $R_3$. Since we have three paths, we can find two among them such that they both have their first vertices in $L_0$, or they both have their last vertices in $L_h$. Assume w.l.o.g.\ that $R_1$ and $R_2$ are such paths. Let $b_1$ and $t_1$ be the first and last vertices of $R_1$, and $b_2$ and $t_2$ be the first and last vertices of $R_2$. Clearly, $b_1,t_1,b_2,t_2$ satisfy the conditions of Lemma~\ref{fuzzy_N_present}. This contradicts the fact that $P$ contains no fuzzy $\mathrm{N}$ as an induced subgraph. Therefore we assume first that $P$ contains only one minimal fuzzy subpath of maximum height $h$. Let this subpath be $P_1 = P(i_1,j_1)$, where $i_1 < j_1$. We first define $V$. We can assume w.l.o.g.\ that $P_1$ is an upward path, in which case we show that $P$ has the form $\bar{U}AV$.
(If $P_1$ is a downward path, then we can work with $r(P)$ and proceed the same way: we show that $r(P)$ has the form $\bar{U}AV$, and therefore $P$ has the form $r(\bar{U}AV)$.) We set $Q_1 = P_1$. We find $i_2$ ($j_1 \leq i_2$) maximal such that $P(j_1,i_2)$ has height at most $1$, and $P(j_1,i_2)$ is either $\zz_1$ or $\zz_3^{f=0}$. We set $A_1$ to be $P(j_1,i_2)$. Then we find $j_2$ ($i_2 \leq j_2$) maximal such that $\bar{P}(i_2,j_2)$ is a minimal fuzzy path, and set $Q_2 = \bar{P}(i_2,j_2)$. Notice that $\bar{P}(i_2,j_2)$ can be chosen to be minimal, since otherwise either $A_1$ was chosen to be a $\zz_1$ instead of $\zz_3^{f=0}$, or if $A_1$ is a $\zz_3^{f=0}$ and $\bar{P}(i_2,j_2)$ cannot be chosen to be minimal, then $P$ contains a $\zz_6$, a contradiction. Then we find $A_2$, similarly to the way we found $A_1$. We keep on defining $Q_i$ and $A_i$ this way until the following condition applies. For some $\ell$, $Q_{\ell} = P(i_\ell,j_\ell)$ (or $Q_{\ell} = \bar{P}(i_\ell,j_\ell)$) is defined, and the rest of $P$, i.e., $P(j_\ell, n)$ has height at most $1$ ($\star$). Then we set $A_\ell = P(j_\ell, n)$, and this completes the construction of $V$. We show below that $height(Q_i) > height(Q_{i+1})$, which clearly implies that condition ($\star$) will eventually occur. Therefore the definition of $V$ is valid. Notice that $A_\ell$ is some $\zz$ on at most $4$ vertices, since otherwise $A_\ell$ together with the last arc of $Q_\ell$ would form a $\zz_6$. We already saw that the $A_i$ have the right form. To prove that $V$ is a wave, it remains to show that $height(Q_i) > height(Q_{i+1})$ for each $1 \leq i \leq m-1$. By the choice of $Q_1$ we have that $height(Q_1) > height(Q_2)$. Assume inductively that $height(Q_i) > height(Q_{i+1})$, and assume for contradiction that $height(Q_{i+1}) \leq height(Q_{i+2})$. As before, it is easy to use Lemma~\ref{fuzzy_N_present} to show the presence of a \fN\ in $P$, which gives a contradiction.
If the subpath $a_{j_1-2}a_{j_1-1}a_{j_1}$ has height $1$, then we choose $A$ to be this subpath. Otherwise, $A$ is just $a_{j_1}$. $U$ is the subpath $a_t a_{t-1} \dots a_1$, where $t = j_1$ if $A$ is $a_{j_1}$, and $t = j_1-2$ otherwise. As in the case of $V$, we can show that $U$ is a wave. Assume now that $P$ contains two minimal fuzzy subpaths $R_1$ and $R_2$ of maximum height $h$. Suppose w.l.o.g.\ that the arcs of $R_1$ appear before the arcs of $R_2$ in $P$. We also assume w.l.o.g.\ that $R_1$ is a downward path (if $R_1$ is an upward path, we can work with $r(P)$ as above). Then $R_2$ must be an upward path such that either $R_1R_2$ is a subpath of $P$, or $R_1AR_2$ is a subpath of $P$, where $A$ is a $\zz_3^{f=0}$. This $A$ corresponds to the $A$ in $\bar{U}AV$ in the statement of the theorem. We define $V$ to be the subpath of $P$ starting with $R_2$ and ending at $a_n$, and $\bar{U}$ to be the subpath of $P$ starting at $a_1$ and ending with $R_1$. The proof that $U$ and $V$ are waves is similar to the proof of the first case above. \end{proof} \subsection{Running time analysis} We give a running time analysis of the recognition algorithm in Theorem~6.8 of \cite{soda_lhom} when run on an oriented tree. We call this algorithm the N-algorithm. Note that we did not attempt to optimize the implementation of this algorithm, and it is possible that the running time could be improved with some work. The N-algorithm first produces a digraph $G^{++}$ with a vertex set of size $|V(H)|^3$. If $H$ is a tree, then $G^{++}$ cannot have a cycle (see the definition of $G^{++}$ in \cite{soda_lhom}), so it must be a forest. It follows that $G^{++}$ has $O(|V(H)|^3)$ arcs. Then for each $x,y \in V(H)$, the N-algorithm finds the set $S_{(x,x,y)}$ of all vertices reachable from $(x,x,y)$ ignoring certain arcs of $G^{++}$, and finds the set $S_{(x,y,y)}$ of all vertices reachable from $(x,y,y)$ ignoring certain other arcs of $G^{++}$. Using BFS, each such search could take $O(|V(H)|^3)$ time.
The size of each $S_{(x,x,y)}$ and $S_{(x,y,y)}$ could be $O(|V(H)|^3)$. Then the N-algorithm checks if $S_{(x,x,y)} \cap S_{(x,y,y)}$ is non-empty, for each $x, y\in V(H)$. This can be done in time $O(|V(H)|^2 \cdot (|V(H)|^3)^2) = O(|V(H)|^8)$. \section{Preliminaries}\label{preliminaries} \subsection{Digraphs and related concepts} Let $G$ be a digraph. (All digraphs in this paper are finite.) An arc of $G$ from vertex $a$ to vertex $b$ is denoted by $ab$. We call $a$ and $b$ the \emph{endpoints} of $ab$. If we want to be more specific, we call $a$ the \emph{tail} and $b$ the \emph{head} of $ab$. If $v \in V(G)$, we call $u$ an \emph{inneighbour} (\emph{outneighbour}) of $v$ if $uv \in A(G)$ ($vu \in A(G)$). The \emph{indegree (outdegree)} of a vertex is the number of its inneighbours (outneighbours). A digraph $G$ is \emph{connected} if the undirected graph obtained from $G$ by replacing arcs with undirected edges (the \emph{underlying undirected graph}) is connected. For a disconnected digraph $G$, a \emph{component} of $G$ is a maximal subgraph that is connected. If $V(G)$ can be partitioned into non-empty sets called \emph{vertex levels} $L_0,\dots,L_n$ such that for each arc $ab$, $a \in L_i$ and $b \in L_{i+1}$ for some $i < n$, then we call $G$ \emph{leveled}. Observe that if a connected digraph is leveled, then the partition $L_0,\dots,L_n$ is unique. We call $L_0$ the \emph{bottom} vertex level and $L_n$ the \emph{top} vertex level. For a vertex $v \in V(G)$, we say that \emph{$v$ is in the bottom (top) level if $v \in L_0$ ($v \in L_n$)}. Given a vertex $v$ of $G$, we use $\ell(v)$ to denote the index such that $v \in L_{\ell(v)}$. For $0 \leq i \leq n - 1$, we denote by $\al_i$ the set of arcs of $G$ with one endpoint in $L_i$ and the other one in $L_{i+1}$. We call $\al_i$ the \emph{$i$-th arc level} of $G$. We define \emph{bottom} and \emph{top arc levels} in the natural way.
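The unique leveling of a connected leveled digraph can be computed by a single traversal of the underlying undirected graph: each arc raises the level by exactly one, and the bottom level is normalised to $0$. The following is a minimal sketch; the function name and the arc-set representation are ours, for illustration only.

```python
from collections import deque

def vertex_levels(arcs):
    """Compute the vertex levels of a connected digraph with at least one arc.

    arcs: set of pairs (tail, head); each arc must go up exactly one level.
    Returns a dict v -> level index (bottom level is 0), or None if the
    digraph admits no leveling.
    """
    # Underlying undirected adjacency, remembering the +1/-1 level offset.
    nbrs = {}
    for (u, v) in arcs:
        nbrs.setdefault(u, []).append((v, +1))   # head is one level up
        nbrs.setdefault(v, []).append((u, -1))   # tail is one level down
    start = next(iter(nbrs))
    level = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for (w, d) in nbrs[u]:
            if w not in level:
                level[w] = level[u] + d
                queue.append(w)
            elif level[w] != level[u] + d:       # inconsistent: not leveled
                return None
    shift = min(level.values())                  # normalise bottom level to 0
    return {v: lv - shift for v, lv in level.items()}

# The oriented path 0 -> 1 <- 2 -> 3 has levels 0, 1, 0, 1 (height 1);
# a directed 2-cycle admits no leveling.
print(vertex_levels({(0, 1), (2, 1), (2, 3)}))
print(vertex_levels({(0, 1), (1, 0)}))
```

This also illustrates the uniqueness observation above: for a connected digraph the traversal leaves no freedom in the levels once the bottom level is fixed at $0$.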
If we need to be explicit, we write $L^G_i$ ($\al_i^G$) instead of $L_i$ ($\al_i$) to mean the $i$-th vertex (arc) level of digraph $G$. Note that we can think of $\al_i$ as a digraph, and we will often do so without explicitly mentioning it. Also note that if $u$ is a vertex in $L_i$ or $L_{i+1}$ such that $u$ is not the endpoint of an arc in $\al_i$ then $u$ \emph{does not} belong to the digraph $\al_i$. Given an arc $a$ of $G$, we use $\ell(a)$ to denote the index such that $a \in \al_{\ell(a)}$. (Note that using $\ell$ for both vertex and arc levels will cause no confusion.) Given a digraph $G$, $r(G)$ denotes the digraph obtained from $G$ by replacing every arc $ab$ with $ba$. An \emph{oriented walk} $W$ is a sequence of vertices $a_1 a_2 \dots a_m$, where for each $1 \leq i \leq m-1$, precisely one of $a_i a_{i+1}$ or $a_{i+1} a_i$ is an arc. Arc $a_i a_{i+1}$ is called a \emph{forward} arc, and $a_{i+1} a_i$ a \emph{backward} arc. Let $W = a_1 a_2 \dots a_m$ be an oriented walk. An \emph{oriented path} is a \emph{simple} oriented walk, i.e., each vertex in the walk appears only once. Suppose that $e$ is an arc of $W$ with endpoints $a_j$ and $a_{j+1}$, and $e'$ is an arc of $W$ with endpoints $a_{k-1}$ and $a_k$ (note that each of $e$ and $e'$ could be backward or forward). Then $W(a_j,a_k)$ (a walk from vertex $a_j$ to $a_k$), $W(e,a_k)$ (a walk from arc $e$ to vertex $a_k$), $W(a_j,e')$ (a walk from a vertex $a_j$ to arc $e'$) and $W(e,e')$ (a walk from arc $e$ to arc $e'$) all denote the subwalk $a_ja_{j+1}\dots a_k$ of $W$. For an oriented path $W = a_1 a_2 \dots a_m$, there is a natural total order $\preceq$ on its vertices, i.e., $a_i \preceq a_j$ if and only if $i \leq j$. This order helps us refer to parts of $W$: the \emph{first} and \emph{last} vertices of $W$ are $a_1$ and $a_m$, respectively. A vertex $a_i$ of $W$ is \emph{before} (\emph{after}) $a_j$ if $a_i \preceq a_j$ ($a_j \preceq a_i$).
We use $\bar{W}$ to denote the path that is isomorphic to $W$, but the order associated with the path is reversed, i.e., $\bar{W} = a_m a_{m-1} \dots a_1$. If $P=a_1\dots a_n$ and $Q = b_1 \dots b_m$ are two oriented paths, then $PQ$ is the \emph{concatenation} of $P$ and $Q$, i.e., the oriented path $PQ=a_1\dots a_nb_2 \dots b_m$, where we identify the last vertex of $P$, $a_n$, and the first vertex of $Q$, $b_1$ (i.e., the arc on vertices $a_n$ and $b_2$ is $a_nb_2$ if $b_1b_2$ is an arc of $Q$, and it is $b_2a_n$ if $b_2b_1$ is an arc of $Q$). The \emph{height} of an oriented walk $W$, denoted by \emph{$height(W)$}, is the number of different vertex levels in which $W$ contains at least one vertex minus $1$. If $W = a_1 \dots a_m$ is an oriented walk with $\ell(a_1) < \ell(a_m)$, then we say $W$ is an \emph{upward} walk, and if $\ell(a_1) > \ell(a_m)$, then we say that $W$ is a \emph{downward} walk. (When $\ell(a_1) = \ell(a_m)$, the walk is neither upward nor downward.) The \emph{net length} of an oriented walk $W$ is the number of forward arcs minus the number of backward arcs of $W$, and it is denoted by $net(W)$. An \emph{oriented tree} is a digraph such that the underlying undirected graph is a tree. Observe that an oriented tree $T$ is always leveled. Furthermore, let $a$ be a vertex or an arc of $T$, and let $b$ also be a vertex or an arc of $T$. Then observe that since $T$ is a tree, the oriented path $P(a,b)$ is unique. In what follows, when we say that digraph $X$ is \emph{of the form} $Y$, where $Y$ is a digraph, we mean that there is an isomorphism between $X$ and $Y$. Similarly, saying that digraph $X$ \emph{is} a $Y$ means $X$ is isomorphic to $Y$. \begin{definition}\label{Z_6} $\mathrm{Z}_i^s$, where $1 \leq i$ and $s \in \{f=0,f=1,l=0,l=1\}$, is used to denote oriented paths of the following form. $\mathrm{Z}_i^s$ is of the form $a_1\dots a_i$, where if $a_ja_{j+1}$ is a forward (backward) arc, then $a_{j+1}a_{j+2}$ is a backward (forward) arc.
Observe that it is always the case that $height(\mathrm{Z}_i^s)=1$, and therefore $\mathrm{Z}_i^s$ has two vertex levels, a bottom level $L_0$ and a top level $L_1$. The superscripts $f=0$ and $f=1$ stand for the first vertex $a_1$ being in $L_0$ and $L_1$, respectively. Similarly, the superscripts $l=0$ and $l=1$ stand for the last vertex $a_i$ being in $L_0$ and $L_1$, respectively. $\mathrm{Z}_i$ stands for either $\mathrm{Z}_i^{f=0}$ or $\mathrm{Z}_i^{f=1}$. Observe that $\mathrm{Z}_1$ is a single vertex. $\mathrm{Z}$ stands for $\mathrm{Z}_i$ for some $i$. A $\mathrm{Z}_6^{f=0}$ can be seen in Figure~\ref{f_constr}. \end{definition} \begin{definition}\label{def_fuzzyN} Let $n$ be a positive integer. We say that a digraph $P$ is a \emph{fuzzy path} if $P$ or $\bar{P}$ is of the form $P_1 P_2 \dots P_n$, where \begin{itemize} \item If $n = 1$, then $P_1$ is $\mathrm{Z}_i$, where $1\leq i \leq 5$. \item If $n \geq 2$, then $P_1$ is $\mathrm{Z}_i^{l=1}$ for some $i \leq 5$, and $P_n$ is $\mathrm{Z}_j^{f=0}$ for some $j \leq 5$. \item For each $1 < i < n$, $P_i$ is either of the form $\mathrm{Z}_2^{f=0}$ or $\mathrm{Z}_4^{f=0}$. \end{itemize} A fuzzy path $P$ is \emph{minimal} if it has only one vertex in both $L_0$ and $L_{height(P)}$. An oriented path $P$ is a \emph{fuzzy $\mathrm{N}$} if $P$ is of the form $P_1T\bar{P}_2BP_3$, where $P_1$, $P_2$ and $P_3$ are minimal fuzzy paths of the same height, which is at least $2$, $T$ is of the form $\mathrm{Z}_1$ or $\mathrm{Z}_3^{f=1}$, and $B$ is of the form $\mathrm{Z}_1$ or $\mathrm{Z}_3^{f=0}$. A fuzzy $\mathrm{N}$ is illustrated in Figure~\ref{f_constr}. \end{definition} \begin{figure}[htb] \begin{center} \includegraphics[scale=\wp]{Fuzzy_N_Z5.pdf} \end{center} \caption{A fuzzy $\mathrm{N}$ in which $T$ is a single vertex (top left), and a $\mathrm{Z}_6^{f=0}$ (bottom left).
The up-join of $T_0,T_1,T_2,T_3$ (right).}\label{f_constr} \end{figure} \subsection{Homomorphisms and polymorphisms} The List Homomorphism problem LHOM($H$) has already been defined in the Introduction. Let $G$ and $H$ be leveled connected digraphs. Notice that any (list) homomorphism $h$ from $G$ to $H$ must be \emph{level-preserving}: \begin{itemize} \item if $L^G$ is a vertex level of $G$, then $h(L^G) \subseteq L^H$, where $L^H$ is some vertex level of $H$, and \item if $u \in L^G_i$ and $v \in L_{i'}^G$, where $L_i^G$ and $L_{i'}^G$ are some vertex levels of $G$ ($0 \leq i,i' \leq height(G)$), then $\ell(h(u)) - \ell(h(v)) = i - i'$. \end{itemize} This level-preserving property of homomorphisms will be used implicitly. \begin{definition}\label{def-HM} Let $H$ be a digraph. An operation $f : V(H)^m \rightarrow V(H)$ is a polymorphism of $H$ if $f(v_{11},v_{12},\dots,v_{1m})f(v_{21},v_{22},\dots,v_{2m}) \in A(H)$ whenever $v_{11}v_{21},v_{12}v_{22},\dots,v_{1m}v_{2m} \in A(H)$. Operation $f$ is \emph{conservative} if $f(v_1,\dots,v_m) \in \{v_1,\dots,v_m\}$. A sequence $f_1, \dots, f_k$ of ternary operations is called a \emph{Hagemann-Mitschke chain of length $k$} if it satisfies the identities \begin{itemize} \item $x = f_1(x,y,y)$ \item $f_i(x,x,y) = f_{i+1}(x,y,y)$ for all $i=1, \dots, k-1$ \item $f_k(x,x,y)=y.$ \end{itemize} We say that $H$ {\em admits} an HM-chain $f_1,f_2,\dots,f_k$ if each $f_i$ is a polymorphism of $H$. \end{definition} \section{A faster recognition algorithm}\label{faster} An algorithm that recognizes digraphs containing no circular $\mathrm{N}$ is given in \cite{soda_lhom}. However, \cite{soda_lhom} only shows that the algorithm runs in polynomial time. In Appendix A, we show that a direct implementation of this algorithm when inputs are restricted to oriented trees is guaranteed to run in $O(|V(T)|^8)$ time. The running time of the algorithm in this paper is $O(|V(T)|^3)$.
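The polymorphism condition and the Hagemann-Mitschke identities of Definition~\ref{def-HM} are finite and can be verified by brute force on small digraphs. The following is a minimal sketch of such a check; the example chain on the one-arc digraph (two first projections followed by the Boolean Maltsev operation $x \oplus y \oplus z$) is our own illustration, not a construction from the paper.

```python
from itertools import product

def is_polymorphism(f, arcs):
    """f is a ternary polymorphism of the digraph with arc set `arcs`."""
    return all((f(u1, u2, u3), f(v1, v2, v3)) in arcs
               for (u1, v1), (u2, v2), (u3, v3) in product(arcs, repeat=3))

def is_hm_chain(fs, vertices, arcs):
    """Check that fs = [f_1, ..., f_k] is a Hagemann-Mitschke chain of
    polymorphisms of the digraph (vertices, arcs)."""
    if not all(is_polymorphism(f, arcs) for f in fs):
        return False
    for x, y in product(vertices, repeat=2):
        # x = f_1(x, y, y) and f_k(x, x, y) = y.
        if fs[0](x, y, y) != x or fs[-1](x, x, y) != y:
            return False
        # f_i(x, x, y) = f_{i+1}(x, y, y) for 1 <= i <= k-1.
        if any(fs[i](x, x, y) != fs[i + 1](x, y, y) for i in range(len(fs) - 1)):
            return False
    return True

# On the one-arc digraph 0 -> 1, two first projections followed by
# x XOR y XOR z form a conservative Hagemann-Mitschke chain of length 3.
V, A = {0, 1}, {(0, 1)}
proj1 = lambda x, y, z: x
mal = lambda x, y, z: x ^ y ^ z
print(is_hm_chain([proj1, proj1, mal], V, A))    # True
print(is_hm_chain([proj1, proj1, proj1], V, A))  # False: f_3(x,x,y) != y
```

On templates of realistic size this brute-force check is of course far too slow; it only serves to make the identities concrete.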
\begin{theorem}\label{recognition_alg} Let $T$ be an oriented tree. Then there is an $O(|V(T)|^3)$ algorithm that decides whether $T$ contains a $\mathrm{Z_6}$ or a fuzzy $\mathrm{N}$ as an induced subgraph. (Equivalently, whether $T$ contains a circular $\mathrm{N}$.) \end{theorem} \begin{proof} For each $u,v \in V(T)$, we find the unique oriented path from $u$ to $v$ in $T$ using, for example, breadth first search. For each such path $P(u,v)$, we test whether $P(u,v)$ is a $\mathrm{Z_6}$; if not, we run the test in the next paragraph. Clearly, checking whether $P(u,v)$ is a $\mathrm{Z_6}$ takes constant time. If a $\mathrm{Z_6}$ is found, we output YES. Otherwise, we run the following test on $P(u,v)$. Suppose that $P(u,v) = a_0 \dots a_n$ (where $a_0 = u$ and $a_n = v$). We traverse $P(u,v)$ from $a_0$ to $a_n$. We initialize a counter $c$ to $0$ at $a_0$, and update $c$ every time we move from $a_i$ to $a_{i+1}$: if $a_ia_{i+1}$ is a forward arc, we increase $c$ by $1$, and if it is a backward arc, we decrease $c$ by $1$. If $c$ becomes negative at any step, then we abandon the computation on $P(u,v)$ (and we move on to analyze $P(u',v')$ for the next pair $u',v' \in V(T)$). As we traverse $P(u,v)$, we keep track of the maximum value $M$ of the counter $c$. Suppose that the traversal completes, and that the maximum value attained is $M$. If $M \leq 1$, then we abandon the computation on $P(u,v)$, and we move on to the next path $P(u',v')$. Otherwise $M \geq 2$. We set $c$ to $0$ again, and we traverse $P(u,v)$ again starting at $a_0$ as before, increasing and decreasing the value of $c$ at each step. We set $b_1 = a_0$. If $a_{k_1}$ is the first vertex where $c$ attains the value $M$, we set $t_1 = a_{k_1}$. If $a_{k_2}$ is the first vertex after $a_{k_1}$ where $c$ attains value $0$, then we set $b_2 = a_{k_2}$. If $a_{k_3}$ is the first vertex after $a_{k_2}$ where $c$ attains value $M$, then we set $t_2 = a_{k_3}$.
If no such $a_{k_1}$, $a_{k_2}$ and $a_{k_3}$ exist, then we move on to the next path $P(u',v')$. Otherwise, we output YES. If we finished testing all paths $P(u,v)$ and we never output YES, then we output NO. This completes the description of the algorithm. If we output YES, then either $T$ contains a $\mathrm{Z}_6$, or the vertices $b_1,t_1,b_2,t_2$ for some $P(u,v)$ (defined above) satisfy the conditions of Lemma~\ref{fuzzy_N_present}, so $T$ contains a fuzzy $\mathrm{N}$. Conversely, suppose that $T$ contains a $\mathrm{Z}_6$ or a fuzzy $\mathrm{N}$ having first vertex $a$ and last vertex $b$. Then since we cycled through all pairs of vertices $u,v \in V(T)$, if $T$ contains an induced $\mathrm{Z}_6$ with first vertex $a$ and last vertex $b$, the algorithm detects this $\mathrm{Z}_6$ when $u = a$ and $v = b$. Similarly, if $T$ contains an induced fuzzy $\mathrm{N}$ with first vertex $a$ and last vertex $b$, then the algorithm finds $b_1,t_1,b_2,t_2$ satisfying the conditions of Lemma~\ref{fuzzy_N_present} when $u = a$ and $v = b$. We cycle through all pairs $u,v \in V(T)$. For each such pair, we run a BFS to find $P(u,v)$, which takes time $O(|V(T)|)$ (recall that $T$ is an oriented tree). We traverse $P(u,v)$ and keep track only of a constant amount of data during these traversals. Overall the running time is $O(|V(T)|^3)$. \end{proof} \section{A structural characterization}\label{structural} In this section, we define an inductive construction of oriented trees, and show that oriented trees that can be constructed this way are precisely the oriented trees that do not contain a $\mathrm{Z_6}$ or fuzzy $\mathrm{N}$ as an induced subgraph (Theorem~\ref{construction_theorem}). \begin{definition}[Inductive construction]\label{constructible} \label{L_construction} Let $T_0,T_1,\dots,T_n$ be oriented trees.
\begin{enumerate} \item Let $v_0$ be a vertex in the bottom (top) vertex level of $T_0$; \item Let $v_i$ be a vertex of $T_i$, for each $1 \leq i \leq n$, either in the top (bottom) vertex level of $T_i$, or such that $v_i$ is the only vertex in $L_{\ell(v_i)}^{T_i}$ with out-degree (in-degree) greater than $0$. \end{enumerate} The \emph{up-join (down-join)} of $T_0,T_1,\dots,T_n$ is the oriented tree obtained by taking the disjoint union of $T_i$ for $0 \leq i \leq n$, and adding all arcs $v_iv_0$ ($v_0v_i$) for each $1 \leq i \leq n$. An example of this construction is given in Figure~\ref{f_constr}. We call $T_0$ the \emph{central tree}, $v_0$ the \emph{central vertex}, and $v_i$, where $1 \leq i \leq n$, the \emph{join vertices}. When we specify a list of trees and we take their up-join (down-join), the first tree in the list is always meant to be the central tree. If $G$ is an oriented tree with a single vertex, we say that $G$ is \emph{constructible}. Inductively, if $T_0,T_1,\dots,T_n$ are constructible, then their up-join and down-join (for some central and join vertices satisfying the conditions above) are also \emph{constructible}. \end{definition} The main structural result of this section is the following theorem. \begin{theorem}\label{construction_theorem} An oriented tree $T$ is constructible if and only if $T$ contains neither a $\mathrm{Z_6}$ nor a fuzzy $\mathrm{N}$ as an induced subgraph. \end{theorem} We need the following lemma a number of times to prove the existence of fuzzy $\mathrm{N}$'s. \begin{lemma}\label{fuzzy_N_present} Let $P$ be an oriented path.
Then $P$ contains vertices $b_1,t_1,b_2,t_2$ with the following properties: \begin{enumerate} \item when we traverse $P$ from first vertex to last vertex, we encounter $b_1,t_1,b_2,t_2$ in this order, \item $b_1$ and $b_2$ are in $L_x$ for some $x$, and $t_1$ and $t_2$ are in $L_y$ for some $y$, where $y \geq x + 2$, \item no vertex of $P(b_1,t_2)$ is in level $L_{x-1}$ or level $L_{y+1}$, \end{enumerate} if and only if $P$ contains a fuzzy $\mathrm{N}$. \end{lemma} \begin{proof} Suppose that $P$ contains vertices $b_1,t_1,b_2,t_2$ with the above properties. Suppose that there is a path $Q$ among $P(b_1,t_1)$, $\bar{P}(b_2,t_1)$ or $P(b_2,t_2)$ such that $Q$ contains vertices $b_1',t_1',b_2',t_2'$ with the same properties as $b_1,t_1,b_2,t_2$ in $P$, respectively. Then we work with $Q$ instead of $P$. Repeating this argument sufficiently many times, we can assume without loss of generality that each arc level of each subpath $P(b_1,t_1)$, $\bar{P}(b_2,t_1)$ and $P(b_2,t_2)$ of $P$ contains only one component of the given subpath. Since, in addition, none of $P(b_1,t_1)$, $\bar{P}(b_2,t_1)$ and $P(b_2,t_2)$ contains a $\mathrm{Z_6}$ as an induced subgraph, each of $P(b_1,t_1)$, $\bar{P}(b_2,t_1)$ and $P(b_2,t_2)$ satisfies the $3$ conditions in Definition~\ref{def_fuzzyN}. Therefore these subpaths are fuzzy. By throwing away unnecessary vertices, we can also assume that $b_1$ and $t_2$ are the only vertices of $P(b_1,t_1)$ and $P(b_2,t_2)$ in $L_x$ and $L_y$, respectively. Now it is easy to see that $P(b_1,t_2)$ is a \fN: let $t_1'$ be the first vertex of $P(b_1,t_1)$ in $L_y$, and let $t_1''$ be the last vertex of $\bar{P}(b_2,t_1)$ in $L_y$. Let $T = P(t_1',t_1'')$. Similarly, let $b_2'$ be the first vertex of $\bar{P}(b_2,t_1)$ in $L_x$, and let $b_2''$ be the last vertex of $P(b_2,t_2)$ in $L_x$. Let $B = P(b_2',b_2'')$. Then $P(b_1,t_1') T \bar{P}(b_2',t_1'') B P(b_2'',t_2)$ is a fuzzy $\mathrm{N}$. The converse is straightforward.
\end{proof} \begin{lemma}\label{fuzzy_if_bt} Let $P$ be an oriented path containing no $\mathrm{Z_6}$ or fuzzy $\mathrm{N}$ as an induced subgraph. If the first vertex of $P$ is in the bottom level of $P$, and the last vertex of $P$ is in the top level of $P$, then $P$ is a fuzzy path. \end{lemma} \begin{proof} Let the first and last vertices of $P$ be $s$ and $t$, respectively. Assume for contradiction that $P$ contains a vertex $t_1'$ before a vertex $b_2'$ such that $\ell(t_1') \geq \ell(b_2') + 2$. Let $t_1$ be a vertex of $P(s,t_1')$ in $L_a$ where $a$ is maximal. Let $b_2$ be a vertex of $P(t_1',t)$ in $L_b$ such that $b$ is minimal. Since $\ell(s) \leq \ell(b_2)$ and $\ell(t_1) \leq \ell(t)$, we can find $b_1$ (before $t_1$) and $t_2$ (after $b_2$) such that $b_1,t_1,b_2,t_2$ satisfy the conditions of Lemma~\ref{fuzzy_N_present}, and therefore $P$ contains a \fN\ as an induced subgraph, a contradiction. Therefore if $P = a_1\dots a_n$, then for any $i < j$, $\ell(a_i) \leq \ell(a_j) + 1$. Since $P$ does not contain a \z6\ as an induced subgraph, we conclude that $P$ must be fuzzy. \end{proof} Lemma~\ref{technical_lemma} is the main technical result used in the proof of Theorem~\ref{construction_theorem}. \begin{lemma}\label{technical_lemma} Let $T$ be an oriented tree. Assume that $T$ does not contain a $\mathrm{Z_6}$ or a fuzzy $\mathrm{N}$ as an induced subgraph. Then there is an arc level $\al$ of $T$ that contains at most one component. \end{lemma} \begin{proof} In this proof, $L_i$ and $\al_i$ always refer to vertex and arc levels of $T$, respectively. Similarly, the function $\ell(\cdot)$ gives the index of the vertex level or arc level of a vertex or arc of $T$, respectively. Choose two arbitrary vertices $b \in L_0$ and $t \in L_{height(T)}$. Let $S$ denote the (unique) path $P(b,t)$. Path $S$ is fixed for the rest of the proof. By Lemma~\ref{fuzzy_if_bt}, $S$ is a fuzzy path.
Since $S$ is a fuzzy path, there is a component $C_i$ of the digraph $\al_i$ (for each $0 \leq i \leq height(T)-1$), such that all arcs of $S$ in $\al_i$ belong to the component $C_i$. We say that an arc $d \in A(T)$ in $\al_i$ is \emph{separated from $S$} if $d$ does not belong to $C_i$. We can assume that there is a separated arc in $\al_i$ for each $0 \leq i \leq height(T)-1$, because otherwise we would have the desired property stated in the lemma. We use the existence of these separated arcs to obtain a contradiction. We will inductively fix a sequence of paths $F_0,\dots,F_q$ in $T$. For each $0 \leq i \leq q$, $A_i$ will denote the set of arcs of $F_i$ that are separated from $S$. To define $F_0$, we find a separated arc $a_0'$ in $\al_0$, and let $F_0'$ be the unique oriented path from $t$ (recall that $t$ is the last vertex of $S$, in $L_{height(T)}$) to $a_0'$. Let $c_0$ be the last common vertex of $S$ and $F_0'$, and $a_0$ be the first separated arc of $F_0'(c_0,a_0')$ in $\al_0$. Then $F_0$ is the subpath $F_0'(c_0,a_0)$. Assuming that $F_0,\dots, F_{i-1}$ have been defined, we define $F_i$ inductively as follows. If for each $0 \leq j \leq height(T) - 1$, there is an index $j'$ such that $\al_j \cap A_{j'} \neq \emptyset$, that is, for each arc level of $T$, some $A_{j'}$ contains a separated arc of that arc level, then we set $q = i-1$, and the construction of the paths $F_0,\dots,F_q$ is completed. Otherwise, let $m$ be minimum such that there is no (separated) arc in $A_0 \cup \dots \cup A_{i-1}$ that is in $\al_m$, and let $a_i'$ be a separated arc of $T$ in $\al_m$. Let $F_i'$ be the unique oriented path from $t$ to $a_i'$. Let $c_i$ be the last common vertex of $S$ and $F_i'$, and $a_i$ be the first separated arc of $F_i'(c_i,a_i')$ in $\al_{\ell(a_i')}$. Then $F_i$ is defined as the subpath $F_i'(c_i,a_i)$. \begin{claim}\label{fuzzy_claim} $F_i$ is a fuzzy path for all $0 \leq i \leq q$. \end{claim} \begin{proof} We show first that $F_0$ is fuzzy.
Since $a_0 = u_0v_0$ is the first separated arc of $F_0$ in $\al_0$, the last vertex of $F_0$ is $u_0$, and therefore it is in $L_0$. To see this, assume for contradiction that the last vertex of $F_0$ is $v_0$ (in $L_1$). Suppose the arc of $F_0$ before $a_0$ is $u_0w_0$ for some $w_0$ (since $u_0$ is in $L_0$, it has no inneighbours; $u_0w_0$ is not an arc of $S$, because if it were, then $u_0v_0$ would not be separated). Since $a_0$ is the first separated arc of $F_0$, $u_0w_0$ cannot be separated. But that is not possible, because the arcs $u_0v_0$ and $u_0w_0$ have the same starting vertex, so they are in the same component of $\al_0$. Since $u_0 \in L_0$ and $t \in L_{height(T)}$, $P(u_0,t)$ is fuzzy by Lemma~\ref{fuzzy_if_bt}. Since $F_0$ is a subpath of $P(u_0,t)$, $F_0$ is also a fuzzy path. Recall that $a_i$ is the first separated arc of $F_i'(c_i,a_i')$ in $\al_{\ell(a_i')}$; we claim that $a_i$ is either in the top or the bottom arc level of $F_i$. Suppose not. Then there are arcs $e \in \al_j$ and $e' \in \al_{j'}$ of $F_i$ such that $j > \ell(a_i')$ and $j' < \ell(a_i')$. Therefore $F_i(e,e')$ contains an arc $f$ in $\al_{\ell(a_i')}$, and clearly, $f$ is separated. This would contradict that $a_i$ is the first separated arc of $F_i'(c_i,a_i')$ in $\al_{\ell(a_i')}$. Suppose that $a_i=uv$ is in the bottom arc level of $F_i$. Recall that $a_i$ is the first separated arc of $F_i'(c_i,a_i')$. Assume first that $a_i$ is the only arc of $F_i$ in $\al_{\ell(a_i')}$. Then the last vertex of $F_i$ is $u$, and $u$ is also the only vertex of $F_i$ in the bottom vertex level of $F_i$. Therefore $\ell(c_i) > \ell(u)$ (recall that $c_i$ is the last common vertex of $S$ and $F_i$), and since $S$ is fuzzy, $u$ is in the bottom vertex level also of $P(u,t)$ (but it is possible that $P(u,t)$ contains other vertices in its bottom vertex level). Since vertex $u$ is in the bottom vertex level of $P(u,t)$, and vertex $t$ is in its top level, it follows from Lemma~\ref{fuzzy_if_bt} that $P(u,t)$ is fuzzy.
Since $F_i$ is a subpath of $P(u,t)$, $F_i$ must also be fuzzy. Assume therefore that $a_i$ is not the only arc of $F_i$ in $\al_{\ell(a_i')}$. Let $e$ be the first non-separated arc of $F_i$ in $\al_{\ell(a_i')}$. Such an arc can only be the first arc of $F_i$, and therefore one of the endpoints of $e$ is $c_i$. If $c_i$ is the head of $e$, then since $S$ is fuzzy, all vertices of the subpath $P(c_i,t)$ of $S$ are in a vertex level $L_k$, where $k \geq \ell(u)$. That is, $u$ is in the bottom vertex level of $P(u,t)$. So as above, $P(u,t)$ must be fuzzy, and therefore $F_i$ is fuzzy. (However, if $F_i$ is fuzzy and $e$ and $a_i$ are in the same arc level, then $P(e,a_i)$ is a $\mathrm{Z}$, and therefore since $e$ is not separated, $a_i$ cannot be separated either. Therefore such an $F_i$ cannot exist.) If $c_i$ is the tail of $e$, then let $d$ be the vertex of $P(c_i,t)$ after $c_i$. The arc on vertices $c_i$ and $d$ must be a forward arc $c_id$, since otherwise $e$ would be separated. Since $d \in L_{\ell(u)+1}$ and $S$ is fuzzy, all vertices of $P(c_i,t)$ are in a vertex level $L_k$, where $k \geq \ell(u)$, and we proceed as above. If $a_i$ is in the top arc level of $F_i$, then a similar argument works using $P(b,v)$ instead of $P(u,t)$. \end{proof} \begin{claim}\label{up_or_down} For each $0 \leq i \leq q$, $F_i$ is either a downward or an upward (fuzzy) path. \end{claim} \begin{proof} Assume that $F_i = w_0 w_1 \dots w_n$ (note that $w_0 = c_i$). Suppose for contradiction that $\ell(w_0) = \ell(w_n)$. This implies that $F_i$ must have at least two arcs. Since $F_i$ is fuzzy, it must be that $height(F_i) \leq 2$, because if the height were more than $2$, it would not be possible that $\ell(w_0) = \ell(w_n)$. If $height(F_i) = 1$, then the last two arcs of $F_i$ are either $w_{n-1} w_{n-2}$ and $w_{n-1}w_n$, or $w_{n-2} w_{n-1}$ and $w_nw_{n-1}$. In both cases, if the last arc of $F_i$ is separated, then so is the second last arc.
This contradicts the definition of $F_i$, namely, that its last arc is the first separated arc in that arc level. If $height(F_i) = 2$, then it still must be that the last two arcs of $F_i$ are either $w_{n-1} w_{n-2}$ and $w_{n-1}w_n$, or $w_{n-2} w_{n-1}$ and $w_nw_{n-1}$, so we can argue similarly. \end{proof} Let $F_i$ ($0 \leq i \leq q$) be a fuzzy downward (upward) path that contains a separated arc. Let $e$ be the first arc of $F_i$. \begin{itemize} \item We say that $F_i$ is type $1$ if $e$ is not separated; \item We say that $F_i$ is type $2$ if $e$ is separated and $e$ is a backward (forward) arc of $F_i$; \item We say that $F_i$ is type $3$ if $e$ is separated and $e$ is a forward (backward) arc of $F_i$. \end{itemize} \begin{claim}\label{no_type_3} Let $0 \leq i \leq q$ be an integer such that $F_i$ is downward (upward) and type $3$. Then $T$ contains a fuzzy $\mathrm{N}$. \end{claim} \begin{proof} Since the first arc $e$ of $F_i$ is a forward arc and $e$ is separated, $c_i$ cannot have an outneighbour in $S$. (If $c_i$ had an outneighbour $v_0$ in $S$, then $e$ would be in the same component of $\al_{\ell(e)}$ as $c_iv_0$, contradicting that $e$ is separated.) Since $F_i$ is downward, $F_i$ must have an arc in $\al_{\ell(e) - 1}$. Let $b_1v$ be the first arc of $F_i$ in $\al_{\ell(e) - 1}$. Let $t_1$ be the outneighbour of $c_i$ in $F_i$. Let $b_2$ be the inneighbour of $c_i$ in $S(c_i,t)$, and let $t_2$ be the first vertex of $S(c_i,t)$ in $L_{\ell(t_1)}$. It is easy to check that $b_1,t_1,b_2,t_2$ satisfy the conditions of Lemma~\ref{fuzzy_N_present}, so $T$ contains a fuzzy $\mathrm{N}$. The argument is analogous when $F_i$ is an upward path. \end{proof} \begin{claim}\label{continuous} Let $1 \leq i \leq q$. Let $e$ be the first arc of $F_i$. Either all arcs of $F_i$ are separated, or the arcs of $F_i$ in $\al_{\ell(e)}$ are non-separated and all other arcs of $F_i$ are separated. \end{claim} \begin{proof} Assume first that $e$ is separated.
Let $e'$ be an arc of $F_i$, and suppose that $e'$ is non-separated. Then the path $F_i(c_i,e')$ must be a $\mathrm{Z}$, since by definition, $e'$ must be in the same component of $\al_{\ell(e')}$ as some arc of $S$. But since $e$ is the first arc of $F_i$, $F_i(c_i,e')$ contains $e$, so $e$ is also in the same component as $e'$, and therefore $e$ is also non-separated. This is a contradiction. So all arcs of $F_i$ are separated. Assume now that $e$ is non-separated. Let $e'$ be any arc of $F_i$. If $e' \in \al_{\ell(e)}$, then since $F_i$ is fuzzy by Claim~\ref{fuzzy_claim}, $F_i(e,e')$ must be a $\mathrm{Z}$, and therefore $e'$ is also non-separated. If $e'$ is not in $\al_{\ell(e)}$, then the path $F_i(c_i,e')$ is not a $\mathrm{Z}$, so $e'$ cannot be connected to an arc of $S$ inside one arc level. \end{proof} By Claims~\ref{up_or_down}~and~\ref{no_type_3}, we can assume that $F_i$ is type $1$ or $2$ for each $0 \leq i \leq q$. We prove Claim~\ref{down} now. \begin{claim}\label{down} $F_i$ is a downward (fuzzy) path for all $0 \leq i \leq q$. \end{claim} \begin{proof} The proof of Claim~\ref{fuzzy_claim} also showed that the last vertex of $F_0$ is in $L_0$. Since, by Claim~\ref{up_or_down}, $F_0$ is either a downward or an upward path, $F_0$ is a downward path. We show now that the rest of the $F_i$ are also downward. Suppose for contradiction that there is an upward path among $F_0,\dots,F_q$, and let $j$ be the smallest index such that $F_j$ is an upward path. Let $\beta$ be the index such that $F_j$ contains a separated arc in $\al_{\beta+1}$, but not in $\al_\beta$. \begin{subclaim} $\beta$ exists. \end{subclaim} \begin{proof} It is sufficient to show that $F_j$ does not contain a separated arc in $\al_0$. Suppose otherwise. If $\ell(c_j) \geq 2$, then $F_j$ cannot be an upward fuzzy path. If $\ell(c_j) = 0$, then the first arc of $F_j$ is non-separated, so all arcs of $F_j$ in $\al_0$ are non-separated by Claim~\ref{continuous}.
If $\ell(c_j) = 1$, then the first arc of $F_j$ must be a backward arc, since otherwise $F_j$ could not be an upward path containing an arc in $\al_0$. If this first arc $f$ is non-separated, then we use Claim~\ref{continuous} as above. If $f$ is separated, then $F_j$ is type $3$, and that leads to a contradiction by Claim~\ref{no_type_3}. \end{proof} \begin{subclaim} There is an index $j' < j$ such that $F_{j'}$ contains a separated arc $g_{j'}$ in $\al_\beta$. \end{subclaim} \begin{proof} By Claim~\ref{continuous}, if the upward fuzzy path $F_j$ does not contain a separated arc in $\al_{\beta}$ and it contains a separated arc in $\al_{\beta+1}$, then all arcs of $F_j$ must be in arc levels $\al_{\alpha}$ for some $\alpha \geq \beta$. Therefore when $F_j$ is chosen, by the definition of the sequence $F_0,\dots,F_q$, there must be an index $j' < j$ such that $F_{j'}$ contains a separated arc in $\al_{\beta}$. \end{proof} Since $j' < j$, $F_{j'}$ is a downward path. \begin{subclaim}\label{progressive} Let $f$ be a separated arc of $F_i$ for some $0 \leq i \leq q$. Then for each $0 \leq \alpha \leq \ell(f)$, there is an index $i' \leq i$ such that $F_{i'}$ contains a separated arc in $\al_{\alpha}$. \end{subclaim} \begin{proof} This easily follows from the definition of $F_0,\dots,F_q$ and Claim~\ref{continuous}. \end{proof} \begin{subclaim}\label{high_enough} Let $f$ be the first arc of $F_{j'}$. Then there exists an $i$ such that $F_j$ contains a separated arc in $\al_i$, where $i >\ell(f)$ if $f$ is separated, and $i \geq \ell(f)$ otherwise. \end{subclaim} \begin{proof} Assume first that $f$ is separated. Using Subclaim~\ref{progressive}, for each $\alpha \leq \ell(f)$, there is an index $k$ such that $F_k$ contains a separated arc in $\al_{\alpha}$. Therefore when $F_j$ is defined, it must contain a separated arc in an arc level $\al_i$, where $i > \ell(f)$.
If $f$ is non-separated, then since $F_{j'}$ is downward, $F_{j'}$ contains a separated arc in $\al_{\ell(f) - 1}$. Now we can proceed as in the previous case. \end{proof} \begin{subclaim} If both $F_j$ and $F_{j'}$ are type $1$, then $T$ contains a fuzzy $\mathrm{N}$. \end{subclaim} \begin{proof} Since the first arcs of $F_j$ and $F_{j'}$ are non-separated and both $F_j$ and $F_{j'}$ contain a separated arc by assumption, both $F_j$ and $F_{j'}$ must have height at least $2$. Let $b_1$ be the first vertex of $F_{j'}$ in $L_\beta$, and $t_1$ be a vertex of $F_{j'}$ in the top level of $F_{j'}$. Let $b_2$ be a vertex of $F_{j}$ in level $L_\beta$ (which exists since $F_j$ is type $1$), and $t_2$ be a vertex of $F_j$ in $L_{\ell(t_1)}$ (which exists by Subclaim~\ref{high_enough}). Since both $F_j$ and $F_{j'}$ are type $1$ and $S$ is a fuzzy path, any vertex of $S(t_1,b_2)$ is in level $L_k$ for some $\beta \leq k \leq \ell(t_1)$. It follows that $b_1,t_1,b_2,t_2$ satisfy the conditions of Lemma~\ref{fuzzy_N_present}, and therefore $T$ contains a fuzzy $\mathrm{N}$. \end{proof} \begin{subclaim}\label{sc22} If $F_j$ and $F_{j'}$ are type $2$, then $T$ contains a fuzzy $\mathrm{N}$. \end{subclaim} \begin{proof} The proof is illustrated in Figure~\ref{fig_sc22}. Let $b_1$ be the first vertex of $F_{j'}$ in $L_\beta$. Since $F_{j'}$ is type $2$, $c_{j'}$ cannot have an inneighbour in $S$. Let $t_1$ be the outneighbour of $c_{j'}$ that is a vertex of $S(c_{j'}, c_j)$. Since $F_j$ is type $2$, $c_j$ does not have an outneighbour in $S$. Let $b_2$ be the inneighbour of $c_j$ in $S(c_{j'},c_j)$. Let $t_2$ be a vertex of $F_j$ in $L_{\ell(t_1)}$. Using Subclaim~\ref{high_enough}, it is easy to check that $b_1,t_1,b_2,t_2$ satisfy the conditions of Lemma~\ref{fuzzy_N_present}, and therefore $T$ contains a fuzzy $\mathrm{N}$. \end{proof} The cases when one of $F_j$ and $F_{j'}$ is type $1$ and the other is type $2$ can be handled similarly to the previous two subclaims.
\end{proof} \begin{figure}[htb] \begin{center} \includegraphics[scale=\wp]{Illustration.pdf} \end{center} \caption{Illustration of the proof of Subclaim~\ref{sc22}.}\label{fig_sc22} \end{figure} By Claim~\ref{down}, $F_q$ is a downward path. By definition, $F_q$ contains a separated arc $a_q$ in $\al_{height(T)-1}$. If the path $F_q(c_q,a_q)$ has height at least $2$, then it cannot be downward, since $F_q$ is fuzzy. If $F_q(c_q,a_q)$ has height $1$, then since $a_q$ is separated, all arcs of $F_q(c_q,a_q)$ must be separated, including the first arc $e$ of $F_q(c_q,a_q)$. Note that $e$ must be a forward arc, since if it were a backward arc, then $c_q \in L_{height(T)}$, and $S$ would contain an inneighbour of $c_q$, implying that $e$ cannot be separated. It follows that $F_q(c_q,a_q)$ is type $3$, and the contradiction follows from Claim~\ref{no_type_3}. The lemma is proved. \end{proof} Before we can proceed, we need a simple characterization of oriented trees of height $1$ containing no $\mathrm{Z_6}$ as an induced subgraph. \begin{definition}\label{spider} An \emph{in-star} (\emph{out-star}) is an oriented tree with vertex set $\{v_0,v_1,v_2,\dots,v_n\}$, and arc set $\{v_1v_0, v_2v_0,\dots,v_nv_0\}$ ($\{v_0v_1, v_0v_2,\dots,v_0v_n\}$), where $n \geq 0$. Note that if $n = 0$, then the star is just the vertex $v_0$. Vertex $v_0$ is called the \emph{root}, and vertices $v_i$ ($1 \leq i \leq n$) are called \emph{leaves}. Let $S_0$ be an in-star (out-star) with leaves $v_1,\dots, v_n$, and $S_1,\dots,S_n$ be out-stars (in-stars) with root vertices $r_1,\dots,r_n$, respectively. Let $S$ be the oriented tree obtained by \begin{itemize} \item taking the disjoint union of $S_i$, $0 \leq i \leq n$, and \item identifying the leaf vertex $v_j$ of $S_0$ with the root vertex $r_j$ of $S_j$, for each $1 \leq j \leq n$. \end{itemize} Then $S$ is called an \emph{in-spider (out-spider)}. $S_0$ is called the \emph{body}, and $S_1,\dots,S_n$ are called the \emph{legs} of $S$.
\end{definition} \begin{lemma}\label{h1} Let $T$ be an oriented tree of height $1$. Suppose that $T$ does not contain $\mathrm{Z_6}$ as an induced subgraph. Then $T$ is either an in-spider or an out-spider. \end{lemma} \begin{proof} Let $L_0$ and $L_1$ be the vertex levels of $T$. Assume first that $T$ contains an induced $\zz_5^{f=0}$ with vertex set $V = \{a,b,c,d,e\}$ and arc set $\{ab,cb,cd,ed\}$. Then $a,c,e \in L_0$ and $b,d \in L_1$. Assume that $L_1 = \{u_1,\dots,u_n\}$. If $cu_i$ is not an arc for some $1 \leq i \leq n$, then since $T$ is connected, there must be an induced oriented path from $u_i$ to a vertex in $V$. For example, if there is a path from $u_i$ to $e$, then there is an arc $ew$ in $T$, and thus $\{a,b,c,d,e,w\}$ induces a \z6\ in $T$. It is easy to check that $T$ contains an induced \z6\ in the remaining cases. Let $S_0$ be the out-star with root $c$ and leaf set $L_1$. For each $1 \leq i \leq n$, let $S_i$ be the in-star with root $u_i$. Let the leaves of $S_i$ be the inneighbours of $u_i$. Clearly, $T$ is an out-spider with body $S_0$ and legs $S_1,\dots,S_n$. If $T$ contains a $\zz_5^{f=1}$, then an analogous argument shows that $T$ is an in-spider. Suppose now that $T$ contains an induced $\zz_4^{f=0}$ with vertex set $\{a,b,c,d\}$ and arc set $\{ab,cb,cd\}$ but not a $\zz_5$. Then $a$ has out-degree $1$ and $d$ has in-degree $1$, since otherwise $T$ would contain a $\zz_5$. Any vertex $w$ of $T$ in $L_1$ can have only $c$ as its inneighbour, and any vertex $z$ in $L_0$ can have only $b$ as its outneighbour. It follows that $T$ is both an in-spider and an out-spider. The remaining cases are also easy to analyze; e.g., when $T$ contains a $\zz_3$ but not a $\zz_4$, then $T$ is an in-star or an out-star. \end{proof} We are ready to prove one direction of Theorem~\ref{construction_theorem} after the following definition, which will also be used later. \begin{definition} Let $G$ be a leveled digraph and $v$ be a vertex of $G$.
The \emph{up-component of $G$ at $v$} is the subgraph of $G$ induced by the set of vertices \[ U = \{u \;|\; \exists \text{ a walk $W$ from $v$ to $u$ such that for each vertex $w$ of $W$, $\ell(w) \geq \ell(v)$}\}. \] Given a vertex level $L$ of $G$, the \emph{set of up-components of $G$ at level $L$} is the set of digraphs which are up-components for some vertex $v \in L$. A \emph{down-component at $v$} and the \emph{set of down-components at level $L$} are defined analogously. \end{definition} \begin{lemma}\label{d1} If an oriented tree $T$ contains neither a \z6\ nor a fuzzy $\mathrm{N}$ as an induced subgraph, then $T$ is constructible. \end{lemma} \begin{proof} Let $T$ be as stated in the lemma. We use Lemma~\ref{technical_lemma} to find an integer $\alpha$ such that the arc level $\al_\alpha$ of $T$ contains exactly one component $R$. (If there is no such $\alpha$, then $T$ must be a single vertex, and we are done.) By Lemma~\ref{h1}, $R$ is an out-spider or an in-spider. We assume that $R$ is an in-spider. The case when $R$ is an out-spider can be handled similarly. Assume that $S_0$ is the body of $R$ having root $v_0$, and leaves $v_1,\dots,v_n$. We define $T_0$ as the up-component of $T$ at $v_0$ (so $v_0$ is in the bottom vertex level of $T_0$), and $T_1,\dots,T_{n'}$ as the components of $T \setminus T_0$. We show that $T$ is the up-join of $T_0,T_1,\dots,T_{n'}$; this will follow from these claims: \begin{enumerate} \item The only common vertex of $T_0$ and $R$ is $v_0$, and the only vertex of $T_0$ in $L_{\ell(v_0)}$ that has non-zero in-degree (with respect to $T$) is $v_0$; \item Each $T_i$ contains precisely one vertex among $v_1,\dots,v_n$. (And by definition, no two trees $T_{i'}$ and $T_{i''}$, $i' \neq i''$, contain the same vertex among $v_1,\dots,v_n$.) We assume w.l.o.g.\ that $v_i$ belongs to $T_i$. It also follows that $n = n'$. Furthermore, any vertex $w \neq v_i$ of $T_i$ in $L_{\ell(v_0) - 1}$ has out-degree zero.
\end{enumerate} Once these are established, it follows that $T$ is the up-join of $T_0,T_1,\dots,T_{n'}$. Vertex $v_0$ is in the bottom vertex level of the tree $T_0$. Each $T_i$ contains at most one vertex of out-degree at least one in $L_{\ell(v_0) - 1}$, which is $v_i$, and $T$ is obtained by taking the disjoint union of $T_0,T_1,\dots,T_{n'}$ and adding the arcs $v_1v_0,\dots,v_nv_0$. We prove the two claims now. Suppose for contradiction that $T_0$ contains a vertex $w$ of $R$ such that $w \neq v_0$. Then $w$ is an outneighbour of $v_j$ for some $1 \leq j \leq n$ (because $R$ is an in-spider). Therefore the arcs $v_jw$ and $v_jv_0$, together with the path from $v_0$ to $w$ in $T_0$, form a cycle in $T$, a contradiction. It follows that any vertex $w' \neq v_0$ of $T_0$ in $L_{\ell(v_0)}$ has in-degree $0$ in $T$, since otherwise $\al_\alpha$ would contain at least two components (contradicting our choice of $\alpha$). We prove the second claim. Since $T$ is connected, there must be a path from $v_0$ to a vertex of $T_i$. This can only happen if $T_i$ contains at least one of the vertices $v_1,\dots,v_n$. Conversely, suppose for contradiction that there are indices $1 \leq i', i'' \leq n$ such that $v_{i'}$ and $v_{i''}$ belong to the same $T_i$ for some $1 \leq i \leq {n'}$. But then the arcs $v_{i'}v_0$ and $v_{i''}v_0$, together with the oriented path in $T_i$ from $v_{i'}$ to $v_{i''}$, would form a cycle, contradicting that $T$ is an oriented tree. Furthermore, assume for contradiction that there is a vertex $w \neq v_i$ of $T_i$ (for some $1 \leq i \leq n'$) in $L_{\ell(v_0) - 1}$ that has out-degree at least one. Then the arc leaving $w$ is an arc in $\al_\alpha$ that is not part of $R$, a contradiction. \end{proof} \begin{proof}[Proof of Theorem~\ref{construction_theorem}.] One direction of the theorem follows from Lemma~\ref{d1}.
The other direction either follows from the chain of implications outlined at the end of the introduction, or from the direct proof given in Appendix B (Lemma~\ref{d2}). \end{proof}
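The up-component used in the proof of Lemma~\ref{d1} is straightforward to compute: it is a breadth-first search from $v$ that never visits a vertex strictly below level $\ell(v)$. The following Python sketch illustrates this; the adjacency-list and level-map representations are our own illustrative choices, not part of the paper.

```python
from collections import deque

def up_component(adj, level, v):
    """Return the vertex set of the up-component at v: all vertices u
    reachable from v by a walk whose vertices w all satisfy
    level[w] >= level[v].  `adj` maps each vertex to its neighbours in
    the underlying undirected tree, and `level` maps each vertex to the
    index of its vertex level."""
    base = level[v]
    seen = {v}
    queue = deque([v])
    while queue:
        w = queue.popleft()
        for u in adj[w]:
            # Only walk through vertices that stay at or above level[v].
            if u not in seen and level[u] >= base:
                seen.add(u)
                queue.append(u)
    return seen
```

Since each vertex and edge is examined at most once, this runs in time linear in the size of the tree, matching the per-path cost used in the running-time analysis of Section above.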
\section{Introduction} Mobile robotic networks (MRNs) have received increasing attention in the last decades. Thanks to their mobility, flexibility, and distributed fashion, MRNs are widely deployed in, e.g., surveillance, reconnaissance, search and environmental monitoring \cite{bullo2009distributed}. Among these applications, formation control serves as a fundamental technique to enhance the cooperation performance by maintaining a preset geometric shape \cite{olfati2004consensus}. \textcolor{black}{ Numerous methods have been proposed to obtain stable and robust formation control; see \cite{oh2015survey,kamel2020formation} for a detailed review.} Despite the large variety of control methods, the interaction topology among robots is universal and critical for effective cooperation of MRNs. The topology characterizes the locality of information exchange, and determines the shape-forming stability and convergence. \textcolor{black}{ Recent years have witnessed the emergence of many applications that necessitate advances in topology inference, which brings significant benefits in understanding system behaviors. Taking MRNs as the specific object, there are mainly two types of applications. First, from the security perspective, external attackers can utilize topology inference methods to find the critical robots that have significant control impact in the formation, e.g., by calculating node degrees and centralities \cite{zhang2014analysis}, or by identifying the leadership relationships in the formation \cite{vasquez2018network}. With the topology information, more intelligent interception or herding tasks in military scenarios can be performed to control the formation \cite{choi2018detecting,li2019learning,licitra2019single,li2020unpredictable}. Second, from the perspective of performance improvement, inferring the topology of the formation can support the self-configuration ability of MRNs \cite{venkitaraman2020recursive}.
For instance, when a robot disconnects from the others, it can use the inferred local topology to keep coordinating with the formation, by predicting the state and reconnecting with appropriate neighboring robots \cite{li2019optimal}. } Mathematically, topology inference can be seen as a typical inverse modeling problem. Plenty of related works have been developed for various dynamic models \cite{deka2016learning,shi2019bayesian,lu2019nonparametric,dong2019learning}. For the basic consensus dynamics, the interaction topology is reconstructed by measuring the power spectral density of the network response to input noises, and node removal strategies are designed accordingly \cite{shahrampour2013reconstruction,shahrampour2014topology}. For sparsely connected dynamical networks, eigenvalue decomposition-based optimization methods are proposed in \cite{hassan2016topology,mateos2019connecting} to reconstruct the topology. The works \cite{bazanella2019network,van2021topology} investigate the identifiability conditions for the topology of a class of heterogeneous dynamical networks, from the perspective of characterizing the system transfer matrix from input to output. \textcolor{black}{ Despite these fruitful results, existing methods cannot handle the topology inference of MRNs under formation control. For example, many well-established techniques are effective only when the system is asymptotically stable and driven by zero-mean noise inputs \cite{matta2018consistent}, or when the input is known \cite{coutino2020state,8985069}. Nevertheless, in practical formation control, the input is generally regular, the system can be marginally stable, and the state is not always fully observable by external observers. In short, careful treatment of the formation input, interaction characteristics and observation limitations is still lacking.
} \textcolor{black}{ To fill this gap, this paper focuses on the local topology inference problem of MRNs under first-order linear formation control, where an inference robot can manoeuvre among the formation robots and observe their motions. Specifically, the inference robot has no knowledge of the formation inputs and interaction parameters, and its observation range is strictly limited. This problem is challenging in three respects. First, the set of robots within the observation range of the inference robot can change over time. Second, the movement of the formation robots heavily depends on the unknown formation input and interaction constraints. Third, the state evolution of the observable robot subset is determined not only by itself but also by the unobservable robots. It is quite difficult to decouple the influences of these three intertwined factors and obtain a reliable local topology from the noise-corrupted observations. To address these issues, the key insight is to determine an available robot set from the changing observable robot set, and to eliminate the influence of the unobservable robots. Then, we need to filter the influence of the formation input from local observations and design an unbiased topology estimator. } Preliminary results on estimator design with a known interaction range have appeared in \cite{lys}. In this paper, we consider a more general situation where the interaction range is unknown, and extend the analysis by i) further estimating the unknown interaction range, ii) designing algorithms to determine the feasible robot set for inference, and iii) adding a joint inference-error analysis of the former two factors. The main contributions are summarized as follows. \begin{itemize} \item We investigate the local topology inference problem of MRNs under noisy observations, {\color{black}{without knowledge of the formation input and interaction parameters.
By characterizing the steady formation pattern, we determine a constant subset of the time-varying set of robots within the observation range, and identify the formation input parameters. }} The estimation error bound under finite observations is established in probability. \item {\color{black}{ Leveraging the interaction constraints between formation robots, we develop an active-excitation based method to obtain a reliable estimate of the interaction range. Combining a novel range-shrink strategy with a monotonicity analysis of the interaction range, the influence of unobservable robots is completely avoided. Then, an ordinary least squares (OLS) based local topology estimator is established after filtering the formation input's influence on the observations before the steady stage. }} \item The convergence and accuracy of the proposed estimator are proved by resorting to concentration-of-measure arguments with probability guarantees. \textcolor{black}{ Extensions to nonidentical observation slots of the robots and to more complicated control models are also discussed and analyzed. Simulation studies and comparison tests illustrate the effectiveness of the proposed method. } \end{itemize} \textcolor{black}{ This paper reveals the possibility of inferring the local topology of MRNs under first-order linear formation control protocols, without knowledge of the formation input and interaction parameters. The achieved results provide insights for tackling more complicated and general scenarios, and also motivate the investigation of the interaction security of MRNs. } The remainder of this paper is organized as follows. Section \ref{r-work} presents related literature. Section \ref{preliminary} gives the modeling of MRNs and formulates the inference problem. Section \ref{revealing} studies how to identify the steady pattern and interaction range. Section \ref{sec:inference-estimation} develops the design of the local topology estimator and analyzes the inference performance.
Simulation results are shown in Section \ref{simulation}, followed by the concluding remarks and further research issues in Section \ref{conclusion}. All the proofs of theorems are provided in the Appendix. \section{Related Work}\label{r-work} \textit{Formation control in MRNs}. The fundamental rules for formation control were first introduced by the famous Reynolds' Rules \cite{reynolds1987flocks}: separation, alignment, and cohesion. Based on these rules, numerous methods have been proposed to achieve the desired performance, and consensus-based algorithms have become the mainstream, e.g., \cite{sun2016optimal,zhao2018affine,alonso2019distributed,xu2020affine}. The key idea of consensus-based algorithms is that the formation is modeled as a graph, and every robot exchanges information (positions and velocities) with its neighbors and computes its control inputs. Therefore, the interaction structure provides critical support for effective formation control and \textcolor{black}{is largely affected by the communication network}. In recent years, communication-free formation control \cite{deghat2014localization,cheng2017event,trinh2018bearing} has been developed and has attracted growing research interest, thanks to the fast advancement of sensing technologies. Communication-free interaction avoids information delays and network bandwidth consumption, and even enables stealth modes of operation \cite{kan2011network}. For instance, formation control with bearing measurements by vision sensors was investigated in \cite{zhao2019bearing}. \textcolor{black}{Note that in all cases, the interaction range is restricted by the physical distance between robots} due to energy constraints, i.e., two distant robots outside the interaction range are disconnected. \textit{Topology Inference}. A large body of research concerning topology inference has been developed in the literature. 
The works \cite{granger1969investigating,brovelli2004beta} used Granger causality to formulate the directionality of the information exchange among system nodes, and constructed corresponding estimators to infer the underlying topology. \textcolor{black}{ Identifying the topology of sparsely connected networks via compressed sensing is also commonly investigated \cite{timme2007revealing,wang2011network,hayden2016sparse,wai2019joint}, where the problem is transformed into a constrained $L_1$-norm optimization based on limited observations. Considering the latent regularity in the time series of nodal observations and adopting some basic assumptions (e.g., smoothness), graph signal processing methods \cite{mei2015signal,onuki2016graph,egilmez2017graph,hallac2017network,pasdeloup2018characterization} have been proposed to derive a topology interpretation for the causation or correlation between nodes. } When the network dynamics are nonlinear, kernel-based methods were developed to effectively infer the topology \cite{karanikolas2016multi,karanikolas2017multi,wang2018inferring}. The key idea is to select appropriate kernel functions to approximate the nonlinearities, where the performance is mainly determined by the kernel design. \textcolor{black}{Several works \cite{vasquez2018network,8985069} have directly considered inferring the topology of MRNs, but they still lack performance guarantees, especially when knowledge about the formation input is unavailable. } In summary, most existing works cannot directly infer the topology of MRNs under formation control, due to the unknown formation input and interaction characteristics. Despite many attempts at characterizing the asymptotic inference performance, there is no analytical model for the inference error under finite observations. These challenges motivate this paper. 
\section{Preliminaries and Problem Formulation}\label{preliminary} Let $\mathcal{G}=(\mathcal{V},\mathcal{E})$ be a directed graph that models an MRN, where $\mathcal{V}=\{1, \cdots,n\}$ is a finite set of nodes (i.e., robots) and $\mathcal{E}\subseteq \mathcal{V}\times \mathcal{V}$ is the set of interaction edges. An edge $(i,j)\in \mathcal{E}$ indicates that $i$ will use the information from $j$. The adjacency matrix $A=[a_{ij}]_{n \times n}$ of $\mathcal{G}$ is defined such that ${a}_{ij}\!>\!0$ if $(i,j)$ exists, and ${a}_{ij}\!=\!0$ otherwise. Denote ${\mathcal{N}_i^{in}}=\{j\in \mathcal{V}:a_{ij}>0\}$ and ${\mathcal{N}_i^{out}}=\{j\in \mathcal{V}:a_{ji}>0\}$ as the in-neighbor and out-neighbor sets of $i$, respectively. Throughout the paper, we use the accents $\tilde \cdot $ and $\hat \cdot$ above a variable to indicate the corresponding observation and estimator, respectively. We denote by $\| \cdot \|$ the spectral norm and by $\| \cdot \|_{F}$ the Frobenius norm of a matrix. Denote by $\bm{0}$ the all-zero matrix and by $\bm{1}$ the all-one matrix of compatible dimensions. Set variables are expressed in capital calligraphic fonts, and $\mathcal{V}_a \backslash \mathcal{V}_b$ represents the elements in $\mathcal{V}_a$ that are not in $\mathcal{V}_b$. The two-dimensional state of a robot is expressed in boldface font (e.g., ${\mathbf z}$). Unless otherwise noted, the formulation with non-boldface state variables applies to the robot state in each dimension independently. For square matrices $M_a$ and $M_b$ of the same dimension, ${M_a}\!\succeq\!{M_b}$ (${M_a}\!\preceq\!{M_b}$) means ${M_a}-{M_b}$ is positive-semidefinite (negative-semidefinite). For two real-valued functions $f_1$ and $f_2$, $f_1(x)=\bm{O}(f_2(x))$ as $x\to x_0$ means $\mathop {\lim }\nolimits_{x \to x_0 } |f_1(x)/f_2(x)|<\infty$, and $f_1(x)=\bm{o}(f_2(x))$ as $x\to x_0$ means $\mathop {\lim }\nolimits_{x \to x_0 } |f_1(x)/f_2(x)|=0$. 
\textcolor{black}{Some important symbols are summarized in Table \ref{tab:test}. } \begin{table}[t] \small \centering \caption{\label{tab:test}Some Important Notation Definitions} \begin{tabular}{cl} \toprule Symbol & Definition \\ \midrule $r_a$, $r_i$ & abbreviations of the inference robot and robot $i$\\ $z^a_k$, $z^i_k$ & the state of $r_a$, $r_i$ at time $k$\\ ${\mathbf z}_k^{a}$, ${\mathbf z}_k^{i}$ & the two-dimensional position of $r_a$, $r_i$ at time $k$ \\ $c$ & the desired velocity of formation robots\\ $h$ & the shape configuration vector of formation robots\\ $k_s$ & the time when the $\epsilon$-steady pattern is reached\\ $k_{end}$ & the time when $r_a$ stops observation\\ $\mathcal{V}_{\sss F}^{a}(k)$ & the robot set within $r_a$'s observation range at time $k$ \\ $\mathcal{V}_{\sss F}$ & the constant robot subset observed by $r_a$\\ $\mathcal{V}_{\sss H}$ & the robot subset obtained by the range-shrink strategy ($\mathcal{V}_{\sss H}\subseteq\mathcal{V}_{\sss F}$)\\ $z^{\sss F}_k$, $z^{\sss H}_k$ & the state vector of robot set $\mathcal{V}_{\sss F}$, $\mathcal{V}_{\sss H}$ at time $k$\\ $W$ & the interaction topology matrix among the formation\\ $W_{\sss HF}$ & the interaction topology matrix between $\mathcal{V}_{\sss H}$ and $\mathcal{V}_{\sss F}$\\ $R_f$ & the observation range of $r_a$\\ $R_c$ & the interaction range of formation robots\\ $R_o$ & the obstacle detection radius of formation robots\\ $X$ & the matrix of $k_s$ filtered observations about $\mathcal{V}_{\sss F}$\\ $Y$ & the matrix of $k_s$ filtered observations about $\mathcal{V}_{\sss H}$\\ \bottomrule \end{tabular} \end{table} \subsection{Formation Control} \label{s2-c} To describe the predefined geometric shape under formation control, \textcolor{black}{the shape vector $h_0=[h_0^1,\cdots,h_0^n]^{\mathsf{T}}$ is introduced, where $h_0^i(i\in\mathcal{V})$ is the desired relative deviation between robot $i$ (abbreviated to $r_i$ hereafter) and a common reference point.} To achieve this pattern, a
common first-order discrete consensus-based controller is given by \cite{olfati2007consensus} \begin{equation}\label{eq-1} z^{i}(t_{k+1})=z^{i}(t_k)+\varepsilon_{\sss T} \sum\limits_{j \in \mathcal{N}_i^{in}} {{a_{ij}}(z^j(t_k)-z^i(t_k)-h_0^{ij})}, \end{equation} where $h_0^{ij}=h_0^j-h_0^i$ is the desired state deviation between $j$ and $i$, and $\varepsilon_{\sss T}=t_{k+1}-t_{k}$ is the control period satisfying $\varepsilon_{\sss T}\le 1/\max\{d_i: i\in\mathcal{V}\}$, where $d_i=\sum_{j\in\mathcal{V}}a_{ij}$ denotes the weighted in-degree of $r_i$. Note that once the formation shape is specified, the choice of the reference point will make no difference as $h_0^{ij}$ remains unchanged. Generally, to dynamically guide the formation motion, one robot will be specified as the leader with an extra velocity input. For simplicity and without loss of generality, $r_{n}$ is taken as the leader and reference node, and suppose that it moves at a constant velocity $c_0$. \textcolor{black}{Let $L=\text{diag}\{A \bm{1}_n\}-A$ be the Laplacian matrix of $\mathcal{G}$}, and denote $u_0=Lh_0+[0,\cdots,0, c_0]^\mathsf{T}$. Then, the global dynamics of the system is described by \begin{equation}\label{eq:original-global} z(t_{k+1})= ( {I}_{n}-\varepsilon_{\sss T} L) z(t_k)+ \varepsilon_{\sss T}u_0 {\buildrel \Delta \over =} W z(t_k)+ \varepsilon_{\sss T}u_0, \end{equation} where $W$ equivalently represents the original topology matrix $A$ and is known as the Perron matrix. Clearly, $W$ is row-stochastic, i.e., $W\bm{1}_{n}=\bm{1}_{n}$. For ease of notation, we denote $z_{k} {\buildrel \Delta \over =} z(t_k)$, $c{\buildrel \Delta \over =}\varepsilon_{\sss T}c_0$, $h{\buildrel \Delta \over =}\varepsilon_{\sss T}h_0$ and $u{\buildrel \Delta \over =}\varepsilon_{\sss T}u_0$ in the following sections. Then, (\ref{eq:original-global}) is rewritten as \begin{equation}\label{eq:global-system} z_{k+1}= W{z_k}+ u. \end{equation} We make the following assumption throughout this paper. 
\begin{assumption}[System stability]\label{assum:stability} The eigenvalue 1 of $W$ is simple \textcolor{black}{(i.e., its algebraic multiplicity equals one)}, and the magnitudes of all other eigenvalues are less than one. \end{assumption} \subsection{Obstacle-avoidance and Interaction Constraints} The obstacle-avoidance mechanism is critical for MRNs to interact with the physical environment. Denote by $R_o$ the obstacle detection range, and by \textcolor{black}{$u_{k}^{j,e}$ the input triggered by the excitation source (i.e., the obstacle $r_{ob}$) on $r_j$. Once the relative distance between $r_j$ and $r_{ob}$ satisfies $\|\mathbf{z}^j-\mathbf{z}^{ob}\|\le R_o$, the state of $r_j$ is updated by \begin{align}\label{eq:obstacle-rule} z_{k+1}^{j,e}=\sum\limits_{\ell \in \mathcal{V}} {{w_{j\ell}}(z_k^{\ell}-z_k^j)} + u_k^{j} + u_{k}^{j,e}, \end{align} where the first two terms on the right-hand side (RHS) can be seen as the internal interaction within the MRN}, while the last term represents the external interaction with the environment. There are numerous obstacle-avoidance algorithms in the literature (e.g., \cite{pandey2017mobile} provides a detailed review), and among them, $u_{k}^{j,e}$ is mainly determined by the desired goal state and the relative state and velocity between $r_{j}$ and $r_{ob}$. As long as the excitation source appears within the obstacle-detection range of $r_j$, there will always be a nonzero input $u_k^{j,e}$. In this work, we do not specify the detailed form of $u_{k}^{j,e}$, but mainly leverage the obstacle-avoidance property that \begin{equation}\label{eq:obstacle-property} |u_{k}^{j,e}|>0,~\text{if}~\|\mathbf{z}^j-\mathbf{z}^{ob}\|\le R_o. \end{equation} In practical applications, the interaction capability of robots is limited due to the energy constraint, and thus the interaction range among robots (denoted by $R_c$) is bounded \cite{bullo2009distributed}, satisfying \begin{equation}\label{eq:interaction-constraint} R_o<R_c<\infty. 
\end{equation} \begin{figure*}[t] \centering \setlength{\abovecaptionskip}{0.1cm} \includegraphics[width=0.8\textwidth]{local-framework-new2} \caption{The proposed local topology inference method. First, the inference robot $r_a$ uses the collected observations over the MRN to estimate the formation input parameters $c$ and $h^{\sss F}$. Then, $r_a$ makes active excitations on a target robot in the MRN to estimate the interaction range between two robots. Finally, based on the estimated information, $r_a$ can filter the influence of the unobservable part and determine the shrunken range. Moreover, the inferred topology can in turn be leveraged to approximate the optimal shrunken range and infer a new local topology. } \label{fig:frame} \end{figure*} \subsection{Problem of Interest} \textcolor{black}{ Suppose an inference robot (denoted by $r_a$) can manoeuvre in an MRN described by the formation control model \eqref{eq:global-system}. Specifically, $r_a$ is equipped with advanced sensors with a limited observation range, and does not have knowledge about the formation input and interaction parameters. Note that both the formation robots and $r_a$ are moving during the whole process, and thus the robots within the observation range of $r_a$ can change over time. Let $\mathcal{V}_{\sss F}^{a}(k)\subseteq\mathcal{V}$ be the set of robots within $r_a$'s observation range at time $k$, given by \begin{equation} \mathcal{V}_{\sss F}^{a}(k)=\{ i: \|{\mathbf z}_{k}^{i}-{\mathbf z}_{k}^{a}\|_2 < R_{f} \}, \end{equation} where $R_f$ is the observation range of $r_a$. 
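As a toy numerical illustration of how $\mathcal{V}_{\sss F}^{a}(k)$ is formed, the following Python sketch computes the index set of robots within radius $R_f$ of $r_a$. The function name \texttt{observed\_set} and the sample positions are our own illustrative choices, not part of the proposed method.

```python
import numpy as np

def observed_set(z_robots, z_a, R_f):
    """Return V_F^a(k): indices of robots whose 2-D positions lie
    strictly within radius R_f of the observer position z_a."""
    dists = np.linalg.norm(z_robots - z_a, axis=1)
    return set(np.flatnonzero(dists < R_f).tolist())

# Toy layout: three robots, observer r_a at the origin, R_f = 2.
z = np.array([[1.0, 0.0], [0.0, 1.5], [3.0, 3.0]])
V_F = observed_set(z, np.array([0.0, 0.0]), 2.0)
print(V_F)  # -> {0, 1}: the third robot lies outside the range
```

In the paper's setting this set is recomputed at every step $k$, since both the formation robots and $r_a$ are moving.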
Since the movement of the robots can introduce observation inaccuracies, $r_a$'s observation for $i\in\mathcal{V}_{\sss F}^{a}(k)$ is described by \begin{equation} \tilde z_{k}^i=z_{k}^i+\omega_{k}^i,~i\in\mathcal{V}_{\sss F}^{a}(k), \end{equation} where $\omega_{k}^i$ is the $i$-th element of the \textit{i.i.d.} Gaussian noise vector $\omega_{k}\in\mathbb{R}^{n}$, satisfying $\omega_{k}\sim N(0,{\sigma ^2}I)$. Considering the interaction constraint (\ref{eq:interaction-constraint}) in $\mathcal{V}$, we assume that $R_f$ satisfies \begin{equation}\label{eq:observation-constraint} R_f\ge R_c, \end{equation} which implies that $r_a$ can observe at least one single robot and all its in-neighbors. } {\color{black}{ The goal of this paper is to investigate how $r_a$ can infer the local topology of the formation from the observations $\{ \tilde z_{k}^i,~i\in\mathcal{V}_{\sss F}^{a}(k)\}$. This problem is very challenging, and most existing methods cannot be directly applied due to three factors: i) Time-varying $\mathcal{V}_{\sss F}^{a}(k)$: the observations of robots in $\mathcal{V}_{\sss F}^{a}(k)$ may be discontinuous and insufficient. ii) Weak prior knowledge: the unknown formation input and interaction parameters make direct inference from $\{ \tilde z_{k}^i,i\in\mathcal{V}_{\sss F}^{a}(k)\}$ infeasible. iii) Limited observation range: the neighbors that send real-time information to $\mathcal{V}_{\sss F}^{a}(k)$ may be located outside the observation range of $r_a$. We will address these issues from the following aspects to obtain a reliable local topology inference. \begin{itemize} \item Utilizing the steady pattern of the formation, we first demonstrate how to determine a constant subset $\mathcal{V}_{\sss F}\subseteq\mathcal{V}_{\sss F}^{a}(k)$ as available inference sources, and identify the formation input from the corresponding observations. 
\item Since the interaction range between robots is limited, we develop an excitation method to estimate the interaction range, and later use it to improve the local topology inference performance. \item To counter the influence of unobservable robots on $\mathcal{V}_{\sss F}$, we propose a novel range-shrink method to guarantee that the inferred topology is unbiased in the asymptotic sense. \end{itemize} Based on the above treatments, we finally present the local topology estimator, along with its convergence and accuracy analysis. Specifically, the situation where the observation slots for robots in $\mathcal{V}_{\sss F}$ are nonidentical will also be analyzed. The whole framework of this paper is shown in Fig.~\ref{fig:frame}. }} \section{Estimating the Steady Pattern and the Interaction Range}\label{revealing} In this section, \textcolor{black}{we first demonstrate how to determine a constant subset $\mathcal{V}_{\sss F}$ from $\mathcal{V}_{\sss F}^{a}(k)$ and identify the formation input. } Then, we present the range-shrink idea by introducing a common truncated estimator. \textcolor{black}{Finally, the excitation strategy for estimating the interaction range is provided.} {\color{black}{ \subsection{Determining Constant Robot Subset $\mathcal{V}_{\sss F}$} Suppose the MRN starts the formation task from an arbitrary initial state. Given the initial position of $r_a$, $r_a$ needs to manoeuvre among the formation robots and avoid collisions with them, namely, keeping $\|{\mathbf z}_{k}^{a}\!-\!{\mathbf z}_{k}^{i}\|_2 \!>\!R_o, i\in\mathcal{V}_{\sss F}^a(k)$. 
This can be easily achieved by keeping $r_a$ not too close to the robots and tracking the formation velocity, e.g., setting \begin{equation}\label{eq:movement} u^a_k=\sum\nolimits_{i\in \mathcal{V}_{\sss F}^{a}(k) } ( z_k^i-z_{k-1}^i )/|\mathcal{V}_{\sss F}^{a}(k)| + g_a(\mathcal{V}_{\sss F}^{a}(k)), \end{equation} where the first sum term is for formation tracking, and $g_a(\mathcal{V}_{\sss F}^{a}(k))$ represents the adjusting input when $r_a$ is too close to some robots. Note that any strategy that meets the above requirement can be adopted by $r_a$. Then, we focus on how to infer the local topology from $r_a$'s observations in this process. }} Since the steady pattern of the MRN reflects the formation shape and moving speed of the MRN, we first characterize the steady pattern by introducing the notion of a linear steady trajectory, and determine the subset $\mathcal{V}_{\sss F}$ to be inferred. \begin{definition}[Linear steady trajectory]\label{def:steady-pattern} Given the dynamic system (\ref{eq:global-system}), its state evolution $\{z_{k}\}$ is subject to a linear steady trajectory if there exist unique $c\in\mathbb{R}$ and $s\in\mathbb{R}^{n}$ such that \begin{equation} z_{k}=ck\bm{1}_n + s. \end{equation} \end{definition} \textcolor{black}{By referring to Theorem 1 in our preliminary work \cite{lys}, we have the following result about the steady trajectory. } \begin{lemma}\label{le:ep-convergence} Under the constant controller $u=Lh+[0\cdots0 \;c]^\mathsf{T}$, the system (\ref{eq:global-system}) will approximate the linear steady trajectory with arbitrary precision, i.e., given an arbitrary $\epsilon>0$, there always exist a $k_0\in\mathbb{N}^{+}$ and a unique $s\in\mathbb{R}^{n}$, such that \begin{equation}\label{eq:steady-state} \|z_k-{c} k \bm{1}_n -s \|_1<\epsilon, \forall k\ge{k_0}. 
\end{equation} \end{lemma} {\color{black}{ Lemma \ref{le:ep-convergence} illustrates that when the formation is in the linear steady trajectory with tolerant accuracy $\epsilon$ (we call it the $\epsilon$-steady pattern hereafter), all robots are running at a common speed with fixed relative state deviations. Utilizing this property and given an appropriate following strategy for $r_a$, we have the following result. \begin{lemma}\label{le:steady-subset} Given an arbitrary $\epsilon>0$, there always exists a $k_1\in\mathbb{N}^{+}$ such that $\mathcal{V}_{\sss F}^{a}(k)$ remains unchanged for all $k\ge k_1$. \end{lemma} Lemma \ref{le:steady-subset} follows easily from Lemma \ref{le:ep-convergence}. Taking the moving strategy \eqref{eq:movement} as an example, when the formation reaches the $\epsilon$-steady pattern, $r_a$ will also move stably with the MRN at almost the same velocity, and thus the formation robots in the observation range of $r_a$ will not change. Based on this analysis, we determine the constant local subset $\mathcal{V}_{\sss F}$ by \begin{equation} \mathcal{V}_{\sss F}=\mathcal{V}_{\sss F}^{a}(k_{end}), \end{equation} where $k_{end}$ represents the time when $r_a$ stops observing the MRN. For simplicity, we temporarily assume $\mathcal{V}_{\sss F}\subseteq\mathcal{V}_{\sss F}^{a}(k)$ for an arbitrary $k$, and extend the analysis to cases where this assumption is violated in Section \ref{subsec:extension}. }} \subsection{Steady Pattern Identification}\label{subsec:steady} {\color{black}{ After the local set $\mathcal{V}_{\sss F}$ is determined, the steady pattern parameters of the formation can be identified from the observations by utilizing Lemma \ref{le:ep-convergence}. 
}} Based on (\ref{eq:steady-state}) and taking the observation noises into account, if the formation has reached the $\epsilon$-steady pattern, then the pattern parameters can be identified by solving \begin{equation} \label{eq:solving-steady} \mathop {\min }\limits_{c,s^{\sss F}} \sum\limits_{t = {k}}^{k + L_c} {{{\left\| {\tilde z_t^{\sss F}} - ct \bm{1}_{n_f} - s^{\sss F} \right\|}_2^2}}, \end{equation} \textcolor{black}{where ${\tilde z_t^{\sss F}}=[\tilde z_t^i,i\!\in\!\mathcal{V}_{\sss F}]\!\in\!\mathbb{R}^{n_f}$ represents the observation vector of $\mathcal{V}_{\sss F}$ at time $t$, } $n_f=|\mathcal{V}_{\sss F}|$, and $L_c$ is the observation window length. Note that (\ref{eq:solving-steady}) is a typical least squares problem, whose solution is given by \begin{itemize} \item \textbf{Steady pattern estimator}: \begin{equation} \label{eq:window-s} \!\!\left \{ \begin{aligned} \hat c(k,L_c) &\!=\!{\sum\nolimits_{t= k}^{k+L_c-1} \bm{1}_{n_f}^\mathsf{T} (\tilde{z}_{t+1}^{\sss F}-\tilde{z}_{t}^{\sss F}) } /{ ({n_f}{L_c}) }, \\ \hat{s}^{\sss F}(k,L_c) &\!=\! {\sum\nolimits_{t = k+1}^{k+L_c} (\tilde{z}_{t}^{\sss F} -\hat{c} t \bm{1}_{n_f}) } / {L_c}. \end{aligned}\right. \end{equation} \end{itemize} Next, we demonstrate the estimation performance of (\ref{eq:window-s}). \begin{theorem}[Accuracy of $\hat c$ and $\hat{s}^{\sss F}$]\label{th:cs-performance} Suppose the MRN has reached the $\epsilon$-steady pattern after $k_0$. Let $\Delta_c= \hat c(k_0,L_c) - c $ be the estimation error of $\hat c$. Then we have \begin{equation} \label{eq:accuracy-c} \Pr\left\{ | \Delta_c | \le \frac{4\epsilon}{\sqrt{L_c}} \right\}\ge P_1(L_c), \end{equation} where $P_1(L_c)= 1 - 2 \exp\{-\frac{ {n_f} {L_c}\epsilon^{2}}{\sigma^{2}}\}$. 
Denote the estimation error of $\hat s^{\sss F}$ as $\Delta_s= \bm{1}_{n_f}^\mathsf{T}(\hat{s}^{\sss F}(k_0,L_c) - s)/n_f $. Then it satisfies \begin{equation} \label{eq:expectation-variance} \mathop {\lim }\limits_{L_c \to \infty } | \mathbb{E}[\Delta_s] | \le 2\epsilon,~\mathop {\lim }\limits_{L_c \to \infty } \mathbb{D}[\Delta_s]=\frac{\sigma^2}{2n_f}, \end{equation} \textcolor{black}{where $\mathbb{E}[\cdot]$ and $\mathbb{D}[\cdot]$ represent the expectation and variance of a random variable, respectively. } \end{theorem} \begin{proof} The proof is provided in Appendix \ref{apdix:cs-performance}. \end{proof} Theorem \ref{th:cs-performance} demonstrates that, with sufficient observations over the $\epsilon$-steady pattern, the estimation accuracy of $\hat c$ is determined by $\epsilon$. \textcolor{black}{In other words, the confidence interval of $\hat c(k_0,L_c)$ is given as $\hat c(k_0,L_c) \in[c-\frac{4\epsilon}{\sqrt{L_c}},c+\frac{4\epsilon}{\sqrt{L_c}}]$ with probability at least $P_1(L_c)$.} Specifically, when $L_c\to\infty$, we have with probability one that \begin{equation} \mathop {\lim }\limits_{L_c \to \infty } | \Delta_c | =0 . \end{equation} By contrast, $\hat s^{\sss F}$ only achieves $\epsilon$-level accuracy in the expected sense, with bounded variance. \begin{remark} Note that (\ref{eq:expectation-variance}) only presents the estimation error of $\hat s^{\sss F}$ in limit form. It is shown in the proof of Theorem \ref{th:cs-performance} that one has, with high probability, \begin{equation} \! \left \{ \begin{aligned} &| \mathbb{E}[\Delta_s] | \!\le\! (2 \! + \!\frac{2{k_0}+1}{L_c})\epsilon, \\ &\mathbb{D}[\Delta_s] \!= \! \frac{\sigma^2}{2n_f} \! + \!\sigma^2(\frac{1}{n_f L_c^2} \!+\! \frac{(2{k_0}+1)^2}{L_c^2} \!+\! \frac{4{k_0}+2}{L_c} ). \end{aligned} \right.\!\! 
\end{equation} Despite the undesired uncertainty in $\Delta_s$, one can tighten the error bound of $\hat{s}^{\sss F}(k,L_c)$ by increasing the observations. \end{remark} Note that (\ref{eq:window-s}) is not an appropriate solution if the system is not in the $\epsilon$-steady pattern. Hence, we need to judge whether the system is in the $\epsilon$-steady pattern before obtaining the final $\hat c$ and $\hat s$. Inspired by (\ref{eq:accuracy-c}), we in turn deduce that $| \Delta_c | > {4\epsilon}/{\sqrt{L_c}}$ holds with high probability if the observations used are not all in the $\epsilon$-steady pattern. Hence, we use the last $L_c$ groups of observations to obtain a benchmark estimator of $c$ by \begin{equation} \hat c_b {\buildrel \Delta \over =} \hat c(k_{end}-L_c,L_c). \end{equation} Based on Theorem \ref{th:cs-performance}, if the system is in the $\epsilon$-steady pattern after $k_0$, one has with probability $1 - 2 \exp\{-\frac{ {n_f} {L_c}\epsilon^{2}}{\sigma^{2}}\}$ \begin{align} \label{eq:2c-error} | \hat c(k_0,L_c)- \hat c_b| \le &| \hat c(k_0,L_c)- c| + |c-\hat c_b | \nonumber \\ \le & {8\epsilon}/{\sqrt{L_c}}. \end{align} Although infinite observations are not available in practice, the upper bound in (\ref{eq:2c-error}) can be used as an empirical criterion to judge when the $\epsilon$-steady pattern is reached, given by \begin{itemize} \item \textbf{$\epsilon$-steady time criterion}: \begin{align}\label{eq:criterion-ks} { k_{s}} \!=\! \inf &\{ k : |\hat c(k,L_c)-\hat c_b | \le {8\epsilon}/{\sqrt{L_c}} \}. \end{align} \end{itemize} Once ${k_{s}}$ is obtained, the formation input parameters $c$ and $h$ are finally determined by \begin{equation}\label{eq:final-ch} \left \{ \begin{aligned} \hat c &= \hat c(k_{s},L_{s}), \\ \hat h^{\sss F} & = \hat s^{\sss F}(k_{s},L_{s}) \!-\!\bm{1}_{n_f} \hat s^{i}(k_{s},L_{s}), \end{aligned} \right. \end{equation} where $L_{s}=k_{end}-k_{s}$ represents the number of observations of the system in the $\epsilon$-steady stage. 
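To make the steady pattern estimator \eqref{eq:window-s} concrete, the following is a minimal numerical sketch of our own (with synthetic data standing in for real observations): given noisy steady-stage observations $\tilde z_t^{\sss F}=ct\bm{1}_{n_f}+s^{\sss F}+\omega_t^{\sss F}$, it recovers $c$ and $s^{\sss F}$.

```python
import numpy as np

def steady_pattern_estimator(Z, k, L_c):
    """Window estimator of the steady pattern: Z is an (n_f, T) array
    whose column t holds the noisy observations of V_F at time t;
    returns (c_hat, s_hat)."""
    n_f = Z.shape[0]
    # hat c: averaged one-step increments over t = k, ..., k+L_c-1
    c_hat = (Z[:, k + 1:k + L_c + 1] - Z[:, k:k + L_c]).sum() / (n_f * L_c)
    # hat s^F: de-trended average over t = k+1, ..., k+L_c
    t = np.arange(k + 1, k + L_c + 1)
    s_hat = (Z[:, k + 1:k + L_c + 1] - c_hat * t).mean(axis=1)
    return c_hat, s_hat

# Synthetic steady-stage data: z_t = c*t*1 + s + noise, n_f = 3 robots.
rng = np.random.default_rng(0)
c_true, s_true = 0.5, np.array([1.0, 2.0, 3.0])
T = np.arange(200)
Z = c_true * T + s_true[:, None] + 0.01 * rng.standard_normal((3, 200))
c_hat, s_hat = steady_pattern_estimator(Z, k=0, L_c=150)
```

On such a run, $\hat c$ and $\hat s^{\sss F}$ land close to the ground truth; note that the sum of increments in $\hat c$ telescopes, so the noise contribution shrinks quickly with $L_c$, consistent with the high-probability bound in Theorem \ref{th:cs-performance}.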
\subsection{Range-shrink: Motivated by Truncated Estimator} To explicitly illustrate the necessity of the range-shrink strategy, we begin with the case where the observations are noise-free and the input is known. Under full observation, denote $z_{k+1}^{u} {\buildrel \Delta \over =} z_{k+1}-u_{k}=Wz_{k}$. {\color{black}{ Then, the global topology can be obtained from $K$ groups of noise-free observations by \begin{equation} \label{eq:ideal-estimator} W = Z_{2:K+1}^{u} {Z_{1:K}^\mathsf{T}}(Z_{1:K} Z_{1:K}^\mathsf{T} )^{-1} , \end{equation} where $Z_{2:K+1}^{u}=[z_{2}^{u},z_{3}^{u},\cdots,z_{K+1}^{u}]$ and $Z_{1:K}=[z_{1},z_{2},\cdots,z_{K}]$. Note that the feasibility of the estimator under full observations relies on the invertibility of $(Z_{1:K} Z_{1:K}^\mathsf{T})$, which is related to the number of observations and the steady pattern of the formation. Here we temporarily suppose the invertibility holds, and defer the detailed analysis to the proposed local topology estimator in Section \ref{subsec:estimator}. Let $W_{\sss FF}=[w_{ij},~i,j\in\mathcal{V}_{\sss F}]\!\in\!\mathbb{R}^{n_f\times n_f}$ be the topology matrix of $\mathcal{V}_{\sss F}$. To infer $W_{\sss FF}$ from $\{ z_{k}^{\sss F}\}$, one is certainly free to adopt a truncated form of (\ref{eq:ideal-estimator}) as in \cite{santos2019local} \begin{equation} \label{eq:truncated-estimator} \hat W_{\sss FF} = Z_{2:K+1}^{u,\sss F} (Z_{1:K}^{\sss F})^\mathsf{T}(Z_{1:K}^{\sss F} (Z_{1:K}^{\sss F})^\mathsf{T} )^{-1}. 
\end{equation} The works \cite{matta2018consistent,santos2019local,cirillo2021learning} have explored the conditions of using the truncated estimator to approximate the ground truth\footnote{In \cite{matta2018consistent,santos2019local,cirillo2021learning}, the conditions of using estimator (\ref{eq:truncated-estimator}) are summarized as: i) the topology is in symmetric Erd\H{o}s-R{\'e}nyi random graph form with vanishing connection probability, and ii) the ratio of the observable nodes to all nodes converges to a constant as the size of the network goes to infinity.}. Nevertheless, these conditions are not consistent with our problem setting,}} and basic linear algebra shows that $\hat W_{\sss FF}$ is far from the ground truth, i.e., \begin{equation}\label{eq:unequal} \hat W_{\sss FF} \neq [Z_{2:K+1}^{u} {Z_{1:K}^\mathsf{T}}(Z_{1:K} Z_{1:K}^\mathsf{T} )^{-1}]_{\sss FF}. \end{equation} More precisely, let $\mathcal{V}_{\sss F'}=\mathcal{V}\backslash\mathcal{V}_{\sss F}$; then the formation dynamics (\ref{eq:global-system}) can be partitioned as \begin{equation}\label{eq:devide_state} \left[ {\begin{aligned} {z}_{k+1}^{\sss F}\\ {z}_{k+1}^{\sss F'} \end{aligned}} \right] \!=\! \left[ {\begin{aligned} W_{\sss FF}~W_{\sss FF'}\\ W_{\sss F'F}~W_{\sss F'F'} \end{aligned}} \right]\left[ {\begin{aligned} {z}_{k}^{\sss F}\\ {z}_{k}^{\sss F'} \end{aligned}} \right] \!+ \!\left[ {\begin{aligned} u_{k}^{\sss F}\\ u_{k}^{\sss F'} \end{aligned}} \right], \end{equation} where ${z}_{k}^{\sss F'}$ is the state of $\mathcal{V}_{\sss F'}$ at time $k$. Substituting $\tilde{z}_{k}^{\sss F}={z}_{k}^{\sss F}+\omega_k^{\sss F}$ into (\ref{eq:devide_state}), the observation of $\mathcal{V}_{\sss F}$ is given by \begin{equation}\label{eq:local_observation} \tilde{z}_{k+1}^{\sss F}= {W_{\sss FF}} \tilde{z}_{k}^{\sss F} + u_{k}^{\sss F} + {W_{\sss FF'}}z_{k}^{\sss F'} + \omega_{k+1}^{\sss F}-{W_{\sss FF}}\omega_{k}^{\sss F}. 
\end{equation} Note that (\ref{eq:local_observation}) only represents the explicit relationship between every two consecutive observations, rather than an actual dynamical process. {\color{black}{ It is clear that the unobserved and non-negligible term $\{ {W_{\sss FF'}}z_{k}^{\sss F'} \}$ causes the inequality in \eqref{eq:unequal}, making it extremely hard to obtain an unbiased estimator of $W_{\sss FF}$ from noisy $\{\tilde z_{k}^{\sss F} \}$. }} Thanks to the constrained interaction characteristics of MRNs, \textcolor{black}{we observe that the robots outside $r_i$'s interaction range have no influence on $r_i$. } Therefore, we transform the inference objective by shrinking the inference scope from $\mathcal{V}_{\sss F}$ to a smaller $\mathcal{V}_{\sss H}$, which directly avoids the inference bias in the truncated estimator (\ref{eq:truncated-estimator}). As shown in Fig.~\ref{fig-range}, we use a concentric circle to cover the feasible subset $\mathcal{V}_{\sss H} \!\subseteq\! \mathcal{V}_{\sss F}$ with radius $R_h$, satisfying \begin{equation} R_h=R_f-R_c. \end{equation} Once the subset $\mathcal{V}_{\sss H}$ is determined, \textcolor{black}{we can design an unbiased estimator of the following local topology, \begin{equation} W_{\sss HF}\!=\![w_{ij},~i\!\in\!\mathcal{V}_{\sss H},j\in\mathcal{V}_{\sss F}]\!\in\!\mathbb{R}^{n_h\times n_f}, \end{equation} where $n_h=|\mathcal{V}_{\sss H} |$. Note that $W_{\sss HF}$ covers all connections within $\mathcal{V}_{\sss H}$ and the directed connections from $\{\mathcal{V}_{\sss F}\backslash\mathcal{V}_{\sss H}\}$ to $\mathcal{V}_{\sss H}$. The details are presented in the next section. } \begin{figure}[t] \centering \setlength{\abovecaptionskip}{0.2cm} \includegraphics[width=0.45\textwidth]{range8-ecc.png} \caption{Illustration of observation ranges. 
The blue circle enclosing $\mathcal{V}_{\sss H}$ has radius $R_h$, and the larger circle enclosing $\mathcal{V}_{\sss F}$ has radius $R_f$.} \label{fig-range} \vspace*{-8pt} \end{figure} \subsection{Inferring Interaction Range by Active Excitation}\label{sec:excitation} Next, we present the active excitation based method to illustrate how to estimate the interaction range $R_c$. Note that robots are equipped with sensors to detect nearby obstacles. When $r_a$ is very close to $r_j$, by the obstacle-avoidance rule (\ref{eq:obstacle-rule}), an excitation input will be triggered in $r_j$. Then, the observed state of $r_j$ under excitation is given by \begin{itemize} \item \textbf{Observation under excitation}: \begin{align}\label{eq:excited-observation} \tilde z_{k}^{j,e} = ck+s^{j} + \bm{\epsilon}_{k}^{j}+ \omega_{k}^{j} + u_{k-1}^{j,e}, \end{align} \end{itemize} \textcolor{black}{where $\bm{\epsilon}_{k}^{j}$ is the $j$-th element of $\bm{\epsilon}_{k}=z_k-{c} k \bm{1}_n -s$, which represents the residual error vector with respect to the linear steady trajectory. } According to Lemma \ref{le:ep-convergence}, when the MRN is in the $\epsilon$-steady pattern, $ \| \bm{\epsilon}_{k} \|_1\le \epsilon$. Next, we present the details of the active excitation based method as follows. \begin{itemize} \item \textit{Step 1: Initial excitation on $r_j$.} \end{itemize} \textcolor{black}{Based on (\ref{eq:excited-observation}) and recalling the velocity estimation error $\Delta_c$, the velocity prediction error on $j\in\mathcal{V}$ in the $\epsilon$-steady pattern is calculated by} \begin{align}\label{eq:excitation-j} \delta_{k}^{j} &=\tilde z_{k}^{j,e} - \tilde z_{k-1}^{j} - \hat c \nonumber \\ &= (\omega_{k}^{j}- \omega_{k-1}^{j} ) + ( \bm{\epsilon}_k^{j}-\bm{\epsilon}_{k-1}^{j}) - \Delta_c + u_{k-1}^{j,e}. \end{align} Note that $\delta_{k}^{j}$ is a random variable, and $u_{k-1}^{j,e}\!=\!0$ if $r_j$ is not excited. 
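The per-step test on $\delta_k^j$ can be sketched numerically as follows (an illustrative Python fragment of our own, with synthetic numbers; the threshold $\sqrt{3\epsilon^2+2\sigma^2}$ is the empirical bound derived next in the detection criterion):

```python
import numpy as np

def excitation_flags(z_obs, c_hat, eps, sigma):
    """Flag time steps where the velocity prediction error
    delta_k = z~_k - z~_{k-1} - c_hat exceeds sqrt(3*eps^2 + 2*sigma^2),
    i.e., where an avoidance input u^{j,e} is likely present."""
    delta = np.diff(z_obs) - c_hat
    return np.abs(delta) > np.sqrt(3 * eps**2 + 2 * sigma**2)

# Steady 1-D trajectory of r_j with a jump once the avoidance input acts.
rng = np.random.default_rng(1)
c, sigma, eps = 0.5, 0.002, 0.02
k = np.arange(50)
z = c * k + 4.0 + sigma * rng.standard_normal(50)
z[30:] += 0.8                    # u^{j,e} != 0 from step 30 onwards
flags = excitation_flags(z, c_hat=c, eps=eps, sigma=sigma)
k_e = int(np.flatnonzero(flags)[0]) + 1   # estimated excitation onset
print(k_e)  # -> 30
```

With the noise level well below the threshold, only the step where the avoidance input first acts is flagged, which is exactly how the onset time $k_e$ is picked out.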
Based on Theorem \ref{th:cs-performance}, if $r_j$ is under no excitation, \textcolor{black}{then we have $|\delta_{k}^{j}|\!\le\! \sqrt{3\epsilon^2\!+\!2\sigma^2}$ with high probability. } Utilizing this empirical result, we design the following criterion to determine whether $r_j$ is excited by $r_a$ and to estimate its reaction range (i.e., the obstacle detection range $R_o$), given by \begin{equation} \label{eq:estimator-Ro} \left \{ \begin{aligned} k_e&=\inf\{k:| \delta_{k}^{j} |>\sqrt{3\epsilon^2+2\sigma^2}\}, \\ \hat{R}_o &=\| {\mathbf z}_{k_e}^{j} - {\mathbf z}_{k_e}^{a} \|_2, \end{aligned} \right. \end{equation} where $k_e$ is the starting moment of the excitation stage. \begin{itemize} \item \textit{Step 2: Excitation strategy.} \end{itemize} To keep $r_a$ within the obstacle detection range of $r_j$, we define the feasible state set of $r_a$ as \begin{align} \mathcal{Z}_{k+1}^{a}= \{ {\mathbf z}_{k+1}^{a}: \| {\mathbf z}_{k+1}^{a} -{\hat{\mathbf z}}_{k+1}^{j} \|_2 \le \hat{R}_o \}. \end{align} For better identification, the next movement of $r_a$ is randomly selected from $\mathcal{Z}_{k+1}^{a}$ in the same direction, i.e., \begin{equation}\label{eq:update-rule} {\mathbf z}_{k+1}^{a} \!\in\! \mathcal{Z}_{k+1}^{a}\! \cap \! \{ {\mathbf z}_{k+1}^{a} : {z}_{k+1}^{a}\cdot {z}_{k}^{a} \ge 0~\text{in each dimension}\}. \!\! \end{equation} \begin{itemize} \item \textit{Step 3: Estimating $R_c$ based on out-neighbors.} \end{itemize} If $r_j$ is injected with the excitation input $u_{k-1}^{j,e}$, the influence of $u_{k-1}^{j,e}$ will spread to $\mathcal{N}_{j}^{out}$ in the following moments. Suppose $r_a$ excites $r_j$ for $m$ consecutive time steps; then the accumulated velocity prediction error of $i\in\mathcal{N}_{j}^{out}$ over the $m$ steps is calculated by \begin{align}\label{eq:state-increment} \delta_{k+m,k}^{i}= \tilde z_{k+m}^{i,e} - \tilde z_{k}^{i} - m \hat c .
\end{align} Next, we define the following out-neighbor estimation function and demonstrate its accuracy. \begin{definition}[Out-neighbor indicator]\label{def:indicator} The indicator of the event that $i\in \mathcal{N}_j^{out}$, $\Theta_{ij}$, is defined as \begin{equation} \Theta_{ij}=\left \{ \begin{aligned} &1, &&\text{if}~w_{ij}>0, \\ &0, &&\text{otherwise}. \end{aligned} \right. \end{equation} The estimator of $\Theta_{ij}$ is defined as \begin{equation} \hat{\Theta}_{ij}=\left \{ \begin{aligned} &1, &&\text{if}~|\frac{\delta_{k+m,k}^{i}}{m} |>(\frac{4}{\sqrt{L_c}} + \frac{4}{\sqrt{m}})\epsilon, \\ &0, &&\text{otherwise}. \end{aligned}\right. \end{equation} \end{definition} \begin{theorem}[Accuracy of $\hat{\Theta}_{ij}$]\label{th:outneighbor-excitation} Under $m$ consecutive excitations on $r_j$, \textcolor{black}{the true positive probability of the estimator $\hat{\Theta}_{ij}$ is lower bounded as } \begin{align}\label{eq:neighbor-iden} &\Pr\left\{ {\Theta}_{ij}=1 |\hat{\Theta}_{ij}=1 \right\} \ge P_1(L_c)\cdot P_2(m), \end{align} where $P_2(m)=1 - 2 \exp\{-\frac{ {m}\epsilon^{2}}{\sigma^{2}}\}$. \end{theorem} \begin{proof} The proof is provided in Appendix \ref{apdix:outneighbor-excitation}. \end{proof} Theorem \ref{th:outneighbor-excitation} demonstrates that by active excitations, the out-neighbors of $r_j$ (within the observation range) can be determined with a high probability. Besides, if there exists at least one out-neighbor of $r_j$ in $\mathcal{V}_{\sss F}$, then $r_j$ and that out-neighbor are always within the interaction range during the whole process. \textcolor{black}{Utilizing this characteristic, we take the maximum distance between $r_j$ and its inferred out-neighbors, computed from their observations, as a lower bound on $R_c$, given by} \begin{equation}\label{eq:range-bound} R_c^{lb} \!=\! \sup \left\{ \| \tilde{\mathbf{z}}_{t}^{i}- \tilde{ \mathbf{z} }_{t}^{j} \|_2 : \hat{\Theta}_{ij}=1, t=1,\!\cdots\!,k_e \! +\! m \right\}. \!
\end{equation} The procedures for obtaining $R_c^{lb}$ are summarized in Algorithm \ref{algo:infer-range}. Then, the interaction range satisfies \begin{equation}\label{eq:Rc-range} R_c^{lb}\le R_c\le R_c^{ub}, \end{equation} \textcolor{black}{where $R_c^{ub}=R_f$}. The range interval (\ref{eq:Rc-range}) is critical for the final topology inference. \begin{algorithm}[t] \caption{Infer the interaction range $R_c$} \label{algo:infer-range} \begin{algorithmic}[1] \REQUIRE{Steady moment $k_s$, excitation number $m$, $\hat c$ and $\hat h^{\sss F}$.} \ENSURE{Lower bound of $R_c$.} \STATE Select an excitation target $j\in\mathcal{V}_{\sss F}$; \WHILE {$ | \delta_{k}^{j} |\le \sqrt{3\epsilon^2+2\sigma^2} $} { \STATE $r_a$ moves closer to $r_j$, $k=k+1$; } \ENDWHILE \STATE $k_e=k$, $\hat{R}_o =\| {\mathbf z}_{k_e}^{j} - {\mathbf z}_{k_e}^{a} \|_2$; \FOR {$t =1\to m$} { \STATE Update ${\mathbf z}_{k_e +t}^{a}$ by (\ref{eq:update-rule}); } \ENDFOR \FOR {all $i\in\mathcal{V}_{\sss F}\backslash\{j\}$} { \IF {$ |\frac{\delta_{k+m,k}^{i}}{m} | > (\frac{4}{\sqrt{L_c}} + \frac{4}{\sqrt{m}})\epsilon $} { \STATE $\hat{\Theta}_{ij}=1$; } \ENDIF } \ENDFOR \IF { all $\hat{\Theta}_{ij}=0$} { \STATE Re-select a target robot and go to line 2; } \ENDIF \STATE Compute $R_c^{lb} $ by (\ref{eq:range-bound}); \end{algorithmic} \end{algorithm} \section{Estimator Design and Performance Analysis}\label{sec:inference-estimation} With the estimators $\hat c$, $\hat h^{\sss F}$ and $R_c^{lb}$ obtained by the methods proposed in the last section, the local topology inference becomes feasible. \textcolor{black}{However, directly using $R_c^{lb}$ to determine $\mathcal{V}_{\sss H}$ is relatively conservative.} In this section, we first present the estimator of the local topology $W_{\sss HF}$ and leverage it to reversely approximate $R_c$. Then, taking the estimation errors of $\hat c$ and $\hat h^{\sss F}$ into consideration, we give the non-asymptotic error bound of $\| \hat{W}_{\sss HF} - W_{\sss HF} \|$.
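As a recap of the excitation stage, the out-neighbor test of Definition \ref{def:indicator} and the bound (\ref{eq:range-bound}) reduce to a threshold test plus a maximum-distance computation. A minimal sketch, assuming the accumulated $m$-step errors and the observed trajectories are already available (all names are illustrative):

```python
import math

def estimate_range_lb(delta_m, m, L_c, eps, traj_j, trajs):
    """Out-neighbor test followed by the lower bound R_c^lb.

    delta_m : dict robot -> accumulated m-step velocity prediction error.
    traj_j  : list of 2-D positions of the excited robot r_j over time.
    trajs   : dict robot -> list of 2-D positions (same time indexing)."""
    thresh = (4 / math.sqrt(L_c) + 4 / math.sqrt(m)) * eps
    out_neighbors = [i for i, d in delta_m.items() if abs(d / m) > thresh]
    # R_c^lb: maximum observed distance between r_j and its inferred
    # out-neighbors over the recorded trajectory
    dists = [math.dist(p, q)
             for i in out_neighbors
             for p, q in zip(trajs[i], traj_j)]
    return out_neighbors, (max(dists) if dists else None)
```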
Finally, we demonstrate how to utilize the knowledge acquired in the active excitation stage to further improve the inference performance based on $\hat{W}_{\sss HF}$. \subsection{Local Topology Inference under Uncertain $R_c$} \label{subsec:estimator} First, we analyze the inference performance of the ordinary least squares estimator under different interaction ranges $R_c$. If $R_c$ is determined, the inferable subset $\mathcal{V}_{\sss H} \!\subseteq\! \mathcal{V}_{\sss F}$ is also determined by $R_h=R_f-R_c$. Considering the possibility that the formation leader $r_{n}\in\mathcal{V}_{\sss F}$, we need to discriminate its influence. Given $\mathcal{V}_{\sss F}$ and $k_s$, if the leader $r_{n}\in \mathcal{V}_{\sss F}$, then it is identified by \begin{equation} \hat r_{n}= {\arg \mathop {\min }\limits_{i} \left\{ f_{c}^{i} : f_{c}^{i} \le \frac{8\epsilon}{\sqrt{L_c}} , i\in\mathcal{V}_{\sss F} \right\}}, \label{ww1}\\ \end{equation} where $f_{c}^{i}={ | \sum\nolimits_{k = 0}^{k_s-1} (\tilde{z}_{k+1}^{i} - \tilde{z}_{k}^{i} - \hat c_b )| } / { {k_s} }$. Note that if $\hat r_{n}$ is empty, it means $r_{n}\notin \mathcal{V}_{\sss F}$. To discriminate this situation, we define the indicative leader vector $\mathbb{I}^{\sss F}$ by \begin{equation}\label{eq:indicator} \mathbb{I}^{\sss F}(i)=\left\{ {\begin{aligned} &1, &&\text{if}~\exists i\in \mathcal{V}_{\sss F}, i=\hat r_{n},\\ &0, &&\text{otherwise}. \end{aligned}} \right. \end{equation} Next, we illustrate how to filter the influence of the input to infer the local topology, and use it to approximate the real $R_c$. Let $\mathcal{V}_{\sss H'}=\mathcal{V}_{\sss F}\backslash{\mathcal{V}_{\sss H}}$ and $W_{\sss HF}=[W_{\sss HH}~W_{\sss HH'}]$.
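The leader test (\ref{ww1}) above can likewise be sketched in a few lines (a scalar toy version; we write \texttt{c\_hat} for the speed estimate $\hat c_b$, and all names are illustrative):

```python
import math

def identify_leader(obs, c_hat, k_s, eps, L_c):
    """Return the robot with the smallest residual score f_c^i among those
    passing the threshold 8*eps/sqrt(L_c), or None if no robot passes
    (in which case the leader is outside V_F).

    obs : dict robot -> list of scalar observations z_k, k = 0..k_s."""
    bound = 8 * eps / math.sqrt(L_c)
    scores = {i: abs(sum(z[k + 1] - z[k] - c_hat for k in range(k_s))) / k_s
              for i, z in obs.items()}  # f_c^i for each candidate
    feasible = {i: f for i, f in scores.items() if f <= bound}
    return min(feasible, key=feasible.get) if feasible else None
```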
Define two filtered versions of $\tilde z_{k}^{\sss F}$ and organize them into matrices as \begin{equation}\label{eq:filtered_observation} \begin{aligned} x_k &= ( \tilde z_{k}^{\sss F} -\hat h^{\sss F})\in \mathbb{R}^{n_f}, \\ y_k &= ( \tilde z_{k}^{\sss H} -\hat h^{\sss H} - \hat c \mathbb{I}^{\sss H}) \in \mathbb{R}^{n_h} , \\ X &=[x_0,x_1,\cdots,x_{k_s-1}]\in \mathbb{R}^{n_f \times k_s}, \\ Y &=[y_1,y_2,\cdots,y_{k_s}]\in \mathbb{R}^{n_h \times k_s}. \end{aligned} \end{equation} \textcolor{black}{Then, by referring to Theorem 2 in our preliminary work \cite{lys}, we present the following local topology estimator. } \begin{theorem}\label{th:topo-estimator} \textcolor{black}{Given the filtered observation matrices $X$ and $Y$, and supposing ${R}_c$ is known}, if $|\mathcal{V}_{\sss F}| \! + \! 1\! \le k_s$, then the optimal estimate of $W_{\sss HF}$ in the sense of least squares is \begin{equation}\label{eq:form-solution} \hat W_{\sss HF}= { Y } X^\mathsf{T} ( X X^\mathsf{T})^{-1}. \end{equation} \end{theorem} Theorem \ref{th:topo-estimator} gives the least squares solution of $W_{\sss HF}$ when $R_c$ is known. The core insight is that, by the range-shrink strategy, the truncated state satisfies $[Wx]^{\sss H}=W_{\sss HF} x^{\sss F}$, which completely avoids the influence brought by the unobservable $\mathcal{V}_{\sss F'}$. Although the number of feasible observations is limited in practice, Theorem \ref{th:topo-estimator} can be used as the basis for approximating $W_{\sss HF}$ from noisy observations. {\color{black}{ \begin{remark} Note that the invertibility of the matrix $X X^\mathsf{T}$, i.e., $\operatorname{Rank}(X X^\mathsf{T})=n_f$, is guaranteed by two aspects: the non-steady observations and the random observation noises. First, the observation matrix $X$ consists of $k_s$ columns of observations taken before the $\epsilon$-steady pattern is reached.
In other words, the velocities of the robots have not reached consensus and the state variations of different robots are independent, thus making $\operatorname{Rank}(X )=n_f$ hold, which is the dominant factor. Second, the observations $\{ \tilde{z}_{k}^{\sss F} \}$ are corrupted by independent random noises. Since the columns of $X$ are calculated by $x_k = \tilde z_{k}^{\sss F} -\hat h^{\sss F}$ and are independently random, according to Sard's theorem in measure theory, the matrix is full-rank almost surely. The above two factors effectively avoid the ill-posedness of the proposed estimator. \end{remark} }} \subsection{Convergence of the Proposed Estimator} Next, we focus on the convergence performance of $\hat W_{\sss HF}$ assuming $R_c$ is known. Taking the estimation errors of $\hat c$ and $\hat h^{\sss F}$ into account, the convergence of $\hat W_{\sss HF}$ is characterized by the following result. \begin{theorem}[Convergence of $\hat W_{\sss HF}$ with known $R_c$]\label{th:final-error} Let $P_3(k_s)=1-2 \exp\{-(k_s+n_h)\}$ and suppose $R_c$ is known. With probability at least $P_1(L_c) \cdot P_3(k_s)$, the error of the topology estimator $\hat W_{\sss HF}$ satisfies \begin{equation}\label{eq:ks_bound} \| \hat W_{\sss HF} \!-\! W_{\sss HF}\| = \bm{O} ( \frac{1}{k_s}) + \bm{o}( \frac{1}{k_s^2}). \end{equation} \end{theorem} \begin{proof} The proof is provided in Appendix \ref{apdix:final-error}. \end{proof} Theorem \ref{th:final-error} demonstrates the convergence rate of $\hat W_{\sss HF}$ in terms of $k_s$ in probability. Clearly, if the observations before the $\epsilon$-steady pattern are sufficient, then $\hat W_{\sss HF}$ will closely approximate the ground truth at a rate of $\frac{1}{k_s}$, satisfying \begin{equation} \label{eq:final-error} \Pr\left\{ \mathop{\lim}\limits_{L_c,k_s\to \infty } \| \hat W_{\sss HF} - W_{\sss HF}\| =0 \right\}=1.
\end{equation} \begin{remark} \textcolor{black}{Note that since $\hat W_{\sss HF}$ is based on the estimators $\hat c$ and $\hat h^{\sss F}$, the bound of $ \| \hat W_{\sss HF} - W_{\sss HF}\|$ is also related to $\epsilon$, $\sigma$ and $L_c$. } In the proof of Theorem \ref{th:final-error}, we show that the RHS of (\ref{eq:ks_bound}) is in fact composed of multiple factors, including $\bm{O}( \frac{\epsilon }{k_s\sqrt{L_c}})$, $\bm{O} ( \frac{\epsilon}{k_s})$, $\bm{O} ( \frac{\sigma}{k_s})$ and $\bm{o}( \frac{\sigma^2}{k_s^2})$. Hence, we can characterize the bound as a uniform one in terms of $k_s$. It is worth noting that, although the estimation errors of $\hat c$ and $\hat h^{\sss F}$ are influenced by $\epsilon$ and $\sigma$, these error components have only a slight influence on the accuracy of $\hat W_{\sss HF}$ as $k_s$ grows. \end{remark} Note that there are some possible techniques to further alleviate the influence of the observation noises, e.g., de-regularization. \textcolor{black}{In this method, the optimization objective is $\sum\nolimits_{k = 1}^{k_s} \| y_k- {W_{\sss HF}} x_{k-1} \|^2-\beta\|W_{\sss HF}\|_{F}^2$, where $\beta>0$ and the second, negative term is called the de-regularization term. A deeper investigation in this direction is left for future work. } \subsection{Accuracy Analysis} Theorem \ref{th:final-error} shows that if the interaction range $R_c$ is known, the local topology estimator $\hat W_{\sss HF}$ converges to $ W_{\sss HF}$ asymptotically. However, we only have an estimated range of $R_c$, i.e., $[R_c^{lb},R_c^{ub}]$, and a different $\hat R_c$ renders a different cardinality of $\mathcal{V}_{\sss H}$. \textcolor{black}{To analyze the accuracy of the local topology inference under various $\hat R_c$, we explicitly write the local topology estimator as $\hat{W}_{\sss HF}(\hat{R}_c)$, and propose a range approximation algorithm to find an appropriate $\hat R_c$.
} \textcolor{black}{First, we use the maximum range $R_c^{ub}$ to determine an auxiliary robot set $\mathcal{V}_{\sss H_0}\subseteq \mathcal{V}_{\sss F}$, which is covered by a circle concentric with $r_a$'s observation range, with radius $R_{h0}$ satisfying \begin{equation} R_{h0}=R_f-R_c^{ub}. \end{equation} Let $\mathcal{V}_{\sss F_0}$ be the set of robots within the concentric circle of radius $(R_{h0}+\hat{R}_{c})$, and denote $\mathcal{V}_{\sss F'_0}=\mathcal{V}_{\sss F}\backslash\mathcal{V}_{\sss F_0}$. Note that here $\mathcal{V}_{\sss H_0}$ is constant while $\mathcal{V}_{\sss F_0}$ changes with $\hat{R}_{c}$. Clearly, we have $\mathcal{V}_{\sss H_0}\!\subseteq\!\mathcal{V}_{\sss F_0}\!\subseteq \!\mathcal{V}_{\sss F}$ and $\mathcal{V}_{\sss H_0}\cap\mathcal{V}_{\sss F'_0}=\emptyset$. } For the robots in $\mathcal{V}_{\sss F'_0}$, $r_a$ sets $\hat w_{ij} (\hat{R}_c)=0,~i\in\mathcal{V}_{\sss H_0},j\in\mathcal{V}_{\sss F'_0}$. For the robots in $\mathcal{V}_{\sss F_0}$, $\hat W_{\sss{H_0 F_0}}(\hat{R}_c)$ is computed by the OLS estimator. Combining the two parts, $W_{\sss{H_0 F}}$ is estimated by \begin{equation}\label{eq:h0f} \hat W_{\sss{H_0 F}}(\hat{R}_c) =\left [Y_{\sss H_0} X_{\sss F_0}^\mathsf{T} ( X_{\sss F_0} X_{\sss F_0}^\mathsf{T})^{-1}, \bm{0}_{|\mathcal{V}_{\sss H_0}|\times |\mathcal{V}_{\sss F'_0}| } \right]. \end{equation} \textcolor{black}{Recall that $\hat W_{\sss{H_0 F}}(\hat{R}_c)$ utilizes $k_s$ groups of observations}, and we define the following evaluation function to describe the influence of $\hat R_c$ on $\hat W_{\sss{H_0 F}}(\hat{R}_c)$ \begin{itemize} \item \textbf{Asymptotic inference bias of $\hat W_{\sss{H_0 F}}(\hat{R}_c)$}: \begin{equation}\label{eq:asymptotic-bias} f_w(\hat{R}_c)= \mathop {\lim } \limits_{ k_s \to \infty } \| \hat W_{\sss{H_0 F}}(\hat{R}_c;k_s) - W_{\sss{H_0 F}} \|.
\end{equation} \end{itemize} \begin{theorem}[Inference bias under different $\hat{R}_c$]\label{th:decreasing-error} The asymptotic inference bias $f_w(\hat{R}_c)$ is monotonically decreasing w.r.t. the inferred range $\hat{R}_c$ in probability, i.e., if $\hat{R}_{c1}\ge \hat{R}_{c2}$, then \begin{equation}\label{eq:fw-le} \Pr \left\{ f_w(\hat{R}_{c1}) \le f_w(\hat{R}_{c2}) \right\}=1. \end{equation} Specifically, if $\hat{R}_c\ge R_c$, \textcolor{black}{the estimator $\hat W_{\sss{H_0 F}}(\hat{R}_c)$ is asymptotically unbiased}, i.e., \begin{equation} \Pr \left\{ f_w(\hat{R}_c)=0 \right\}=1. \end{equation} \end{theorem} \begin{proof} The proof is provided in Appendix \ref{apdix:decreasing-error}. \end{proof} Theorem \ref{th:decreasing-error} demonstrates the decreasing monotonicity of $f_w(\hat{R}_c)$ in the asymptotic sense. \textcolor{black}{Note that $\hat{R}_c\ge R_c$ is a sufficient condition to guarantee an asymptotically unbiased $\hat W_{\sss{H_0 F}}(\hat{R}_c)$. } Although the ground truth $R_c$ and $W_{\sss{H_0 F}}$ are unknown in practice, from Theorem \ref{th:decreasing-error} we deduce that $f_w(R_{c}^{ub})=f_w(R_{c})=0$, which indicates that $\hat W_{\sss{H_0 F}}(R_{c}^{ub})$ can be leveraged to replace the ground truth $W_{\sss{H_0 F}}$ for evaluation. Accordingly, we define the empirical bias of $\hat W_{\sss{H_0 F}}(\hat{R}_c)$ as \begin{itemize} \item \textbf{Empirical inference bias of $\hat W_{\sss{H_0 F}}(\hat{R}_c)$}: \begin{equation}\label{eq:empirical-bias} f_e(\hat{R}_c)= \| \hat W_{\sss{H_0 F}}(\hat{R}_c) - \hat W_{\sss{H_0 F}}(R_{c}^{ub}) \|. \end{equation} \end{itemize} \begin{algorithm}[t] \caption{$\text{search}\_\text{suboptimal}\_R_c(R_{c}^{ub},R_{c}^{lb},n_c,n_w)$} \label{algo:infer-Rc} \begin{algorithmic}[1] \REQUIRE{Range $[R_{c}^{lb},R_{c}^{ub}]$, decision threshold $\varepsilon_w$, counting number $n_c$ and stopping threshold $n_w$.
} \ENSURE{Suboptimal estimate of $R_{c}$.} \STATE $\hat{R}_c=(R_{c}^{ub}+R_{c}^{lb})/2$; \STATE Determine the subset $\mathcal{V}_{\sss F_0}$ by $R_{f0}=R_{h0}+\hat{R}_c$; \STATE Compute $f_e(\hat{R}_c)$ by (\ref{eq:empirical-bias}); \IF {$ f_e(\hat{R}_c)> \varepsilon_w$} { \STATE $R_{c}^{lb}=\hat{R}_c$, $n_c=1$; \STATE $\text{search}\_\text{suboptimal}\_R_c(R_{c}^{ub},R_{c}^{lb},n_c,n_w)$; } \ELSE { \STATE $R_{c}^{ub}=\hat{R}_c$, $n_c=n_c+1$; \IF {$ n_c \ge n_w $} { \STATE Return $R_{c}^{ub}$; } \ELSE { \STATE $\text{search}\_\text{suboptimal}\_R_c(R_{c}^{ub},R_{c}^{lb},n_c,n_w)$; } \ENDIF } \ENDIF \end{algorithmic} \end{algorithm} Based on (\ref{eq:empirical-bias}), we propose Algorithm \ref{algo:infer-Rc} to obtain a suboptimal estimate of ${R}_c$ from the range $[R_c^{lb},R_c^{ub}]$. \textcolor{black}{The key idea of the algorithm is to exploit the monotonicity of $f_e(\hat{R}_c)$ and find an appropriate $\hat{R}_c$ beyond which $f_e(\cdot)$ remains stable. } Specifically, the classic bisection method is used to speed up the search, and a decision threshold $\varepsilon_w$ and a stopping threshold $n_w$ are introduced to terminate the process. Note that the larger $\varepsilon_w$ and the smaller $n_w$ are, the more conservative $\hat{R}_c$ is. \textcolor{black}{ \begin{remark} In the previous parts, we assumed that the observation noises on each robot are i.i.d. Gaussian for simplicity of analysis. In fact, this assumption can be relaxed to the independent but non-identically distributed case, i.e., $\mathbb{E} \omega_t \omega_s^{\mathsf{T}}=\delta_{ts}\operatorname{diag}(\sigma_{\omega_1}^2, \sigma_{\omega_2}^2,\cdots,\sigma_{\omega_n}^2)$. The key insight is to adopt $\max\{\sigma_{\omega_1}^2, \sigma_{\omega_2}^2,\cdots,\sigma_{\omega_n}^2\}$ as the variance bound for all observation noises in the inference error analysis. Consequently, this scaling step does not affect the convergence and asymptotic accuracy of the proposed method.
\end{remark}} \subsection{Estimator Design with Its Improved Solution} With the $\epsilon$-steady pattern parameters $\hat c$ and $\hat{h}^{\sss F}$, and the interaction range $\hat R_c$ (the output of Algorithm \ref{algo:infer-Rc}) determined, we are able to design the unbiased topology estimator of $W_{\sss HF}$ covering the maximum number of robots. Consequently, the set $\mathcal{V}_{\sss H}$ is in turn specified by $R_{h}=R_{f}-\hat R_{c}$. Then, the local topology $W_{\sss HF}$ is estimated by \begin{equation}\label{eq:final-W-hf} \hat W_{\sss{HF}}(\hat R_{c}) = Y(\hat R_c) X^\mathsf{T} ( X X^\mathsf{T})^{-1}. \end{equation} Although the error bound of the OLS estimator (\ref{eq:final-W-hf}) is asymptotic, it can nevertheless be used as the basis for inferring the local topology when only a finite number of observations is available. Note that (\ref{eq:final-W-hf}) only utilizes $\hat{R}_c$ to specify the inference scope $\mathcal{V}_{\sss H}$. In fact, $\hat{R}_c$ can be regarded as prior knowledge that $r_a$ has acquired in the excitation stage and used to further improve the inference accuracy. The key insight is that two robots that are not within range $R_c$ of each other will not receive information from each other. Leveraging this as a hard constraint, $\hat W_{\sss HF}$ can be further optimized by solving the following problem \begin{subequations} \label{further} \begin{align} \mathop {\min }\limits_{{W_{\sss HF}}} ~& \| Y(\hat R_c) - {W_{\sss HF}}X \|_{\sss \text{Fro}}^2 \\ \text{s.t.}~~&w_{ij}=0,~\text{if}~\|\tilde{\mathbf z}^i-\tilde{\mathbf z}^j\|_2>\hat R_c, i\in\mathcal{V}_{\sss H},j\in\mathcal{V}_{\sss F}. \end{align} \end{subequations} Note that (\ref{further}) is a typical constrained linear least squares problem, and can be solved by many mature optimization techniques, e.g., the interior-point method \cite{boyd2004convex}.
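Since the constraints in (\ref{further}) only pin individual entries of $W_{\sss HF}$ to zero, the Frobenius objective decouples row-wise and each row reduces to an OLS fit over its unconstrained columns. A minimal \texttt{numpy} sketch of this route (illustrative names; an interior-point solver as in \cite{boyd2004convex} is equally applicable):

```python
import numpy as np

def constrained_ls(Y, X, allowed):
    """Solve min ||Y - W X||_F^2 subject to W[i, j] = 0 wherever
    allowed[i, j] is False (the hard range constraint).

    Y : (n_h, k_s) filtered outputs, X : (n_f, k_s) filtered inputs,
    allowed : (n_h, n_f) boolean mask of entries that may be nonzero."""
    n_h, n_f = allowed.shape
    W = np.zeros((n_h, n_f))
    for i in range(n_h):
        cols = np.flatnonzero(allowed[i])
        if cols.size == 0:
            continue
        # OLS for row i restricted to the allowed regressors
        w, *_ = np.linalg.lstsq(X[cols].T, Y[i], rcond=None)
        W[i, cols] = w
    return W
```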
\textcolor{black}{ Finally, we briefly summarize how the local topology $W_{\sss HF}$ is inferred from the noisy observations $\{ \tilde{z}_{k}^{i},i\!\in\!\mathcal{V}_{\sss F}^{a}(k)\}_{k=1}^{k_{end}}$. The first step is to determine the constant subset $\mathcal{V}_{\sss F}$ and estimate the input parameters from the observations in the steady pattern. Then, the interaction range between robots is estimated. Utilizing the range-shrink strategy and the estimated interaction range, we further determine the appropriate subset $\mathcal{V}_{\sss H}$. Lastly, the local topology is inferred by \eqref{eq:form-solution} and its improved version \eqref{further}, where the input's influence on the non-steady observations $\{ \tilde{z}_{k}^{\sss F} \}_{k=1}^{k_s}$ is filtered out. } {\color{black}{ \subsection{Extensions and Discussions}\label{subsec:extension} Recall that the topology estimator is obtained by solving $\mathop {\min }\limits_{{W_{\sss HF}}} ~ \| Y - W_{\sss HF} X \|_{F}^2$. In fact, it can be decomposed into inferring the rows of $W_{\sss HF}$ independently, i.e., solving \begin{equation} \mathop {\min }\limits_{{W_{\sss HF}^{[i,:]}}} ~ \| Y^{[i,:]} - W_{\sss HF}^{[i,:]} X \|^2, \end{equation} for all $i\in\mathcal{V}_{\sss H}$. Based on this decomposition, we demonstrate how to infer the local topology when $\mathcal{V}_{\sss F}\subseteq\mathcal{V}_{\sss F}^{a}(k)$ does not always hold. Note that if there exists $k< k_{end}$ such that $\mathcal{V}_{\sss F}\not\subseteq\mathcal{V}_{\sss F}^{a}(k)$, then the observation range of $r_a$ does not cover all robots in $\mathcal{V}_{\sss F}$ simultaneously. Let the starting time from which $r_i$ remains within $r_a$'s observation range be \begin{equation} k_f^i=\inf\left\{ k_{\ell}: i\in \bigcap\nolimits_{k=k_{\ell}}^{k_{end}} \mathcal{V}_{\sss F}^{a}(k) \right\}. \end{equation} Next, as indicated in \eqref{further}, if $r_j$ is outside the interaction range of $r_i$, then the interaction weight $w_{ij}=0$.
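The bookkeeping behind $k_f^i$ and the zero-weight property can be sketched as follows (illustrative names; a per-robot visibility mask and the final observed positions are assumed available):

```python
import math

def first_persistent_time(visible):
    """k_f^i: the earliest step from which robot i stays inside r_a's
    observation range up to the final step (assumes visible[-1] is True).

    visible : list of booleans, visible[k] == (i in V_F^a(k))."""
    k_f = len(visible) - 1
    for k in range(len(visible) - 1, -1, -1):
        if not visible[k]:
            break
        k_f = k
    return k_f

def influence_candidates(pos_end, i, R_c_hat):
    """Robots whose final observed position lies within R_c_hat of robot
    i's position; only these can carry a nonzero weight w_ij."""
    return {j for j, p in pos_end.items()
            if math.dist(p, pos_end[i]) <= R_c_hat}
```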
This property further relaxes the dependence on the observations of $\mathcal{V}_{\sss F}$. For an explicit expression, denote by $\tilde{\mathcal{V}}_{\sss F}^{i}$ the set of robots that have a possible influence on $r_i$, given by \begin{equation} \tilde{\mathcal{V}}_{\sss F}^{i}=\{j: j\in \mathcal{V}_{\sss F}~\text{and}~\|\tilde{\mathbf z}^j_{k_{end}}-\tilde{\mathbf z}^i_{k_{end} } \|_2 \le \hat R_c \}. \end{equation} Clearly, one has $i\in\tilde{\mathcal{V}}_{\sss F}^{i}\subseteq\mathcal{V}_{\sss F}$. Recalling the filtered observation variables defined in \eqref{eq:filtered_observation}, we permute the filtered observations of $\tilde{\mathcal{V}}_{\sss F}^{i}$ and the local topology matrix associated with $r_i$ as follows, \begin{align} \tilde{X}_i&=[\tilde{x}_{k_f^i}(i),\tilde{x}_{k_f^i+1}(i),\cdots,\tilde{x}_{k_s-1}(i)] \in \mathbb{R}^{ |\tilde{\mathcal{V}}_{\sss F}^{i}| \times (k_s-k_f^i)}, \nonumber \\ \tilde{Y}_i&=[{y}_{k_f^i+1}^i,{y}_{k_f^i+2}^i,\cdots,{y}_{k_s}^i] \in \mathbb{R}^{ 1 \times (k_s-k_f^i)}, \nonumber \\ \tilde{W}_{i}&= [ ({ w_{ij}, {j\in\tilde{\mathcal{V}}_{\sss F}^{i}}})_{1\times |\tilde{\mathcal{V}}_{\sss F}^{i}| } , \bm{0}_{1\times (n_f-|\tilde{\mathcal{V}}_{\sss F}^{i}|) } ] \in \mathbb{R}^{ 1 \times n_f}, \nonumber \end{align} where $\tilde{x}_k(i)=[x_k^{\ell}]_{\ell\in\tilde{\mathcal{V}}_{\sss F}^{i}}$ is the part of $x_k$ that corresponds to $\tilde{\mathcal{V}}_{\sss F}^{i}$. Based on the above formulation, the following result illustrates how to infer the local topology ${W}_{\sss HF}$ row by row. \begin{corollary}\label{coro:extension} Suppose the observations before the $\epsilon$-steady time $k_s$ are given.
For $r_i$, if $|\tilde{\mathcal{V}}_{\sss F}^{i}| \le k_s-k_f^i$, then its associated local topology $\tilde{W}_{i}$ can be uniquely inferred by \begin{equation} \hat{\tilde{W}}_{i}=[\tilde{Y}_i \tilde{X}_{i}^\mathsf{T} ( \tilde{X}_i \tilde{X}_i^\mathsf{T})^{-1},\bm{0}_{1\times (n_f-|\tilde{\mathcal{V}}_{\sss F}^{i}|)}]. \end{equation} \end{corollary} The proof of this corollary is the same as that of Theorem \ref{th:topo-estimator}, and the details are omitted here. From Corollary \ref{coro:extension}, the available observation slot for inferring $r_i$'s local topology ${\tilde{W}}_{i}$ is not necessarily the same as that of the other robots in $\mathcal{V}_{\sss F}$. Besides, $\hat{\tilde{W}}_{i}$ is the optimal estimate of ${\tilde{W}}_{i}$ in the sense of least squares, as long as the observation slot satisfies $|\tilde{\mathcal{V}}_{\sss F}^{i}| \le k_s-k_f^i$. Similar to the convergence and accuracy of $\hat W_{\sss HF}$, $\hat{\tilde{W}}_{i}$ enjoys a convergence rate of $\bm{O} ( \frac{1}{k_s-k_f^i})$ and asymptotic accuracy as $k_s\to\infty$. In summary, although the integrated estimator $\hat W_{\sss HF}$ can be unavailable if the robots in $\mathcal{V}_{\sss H}$ enter $r_a$'s observation range at different moments, one can still utilize Corollary \ref{coro:extension} to infer the local topology associated with each $i\in\mathcal{V}_{\sss H}$. Finally, the underlying $W_{\sss HF}$ is recovered by appropriately permuting the robot indexes of $\{ \hat{\tilde{W}}_{i}, i\in\mathcal{V}_{\sss H} \}$ and stacking them into one matrix row by row. \begin{remark} The proposed inference method, which takes first-order linear formation control as its entry point, also provides insights for tackling second-order and nonlinear cases.
Taking the second-order linear model in \cite{cao2010sampled} as an example, the major difference is that the topology matrix to be inferred describes the element-to-element interaction connections between both the positions and the velocities of the robots. The proposed method can be extended to the second-order case because the global state evolution shares the same linear form as in the first-order case, with appropriate notation and treatment of the doubled state dimension of each robot. Besides, for a common class of nonlinear models of the form $ z_{k+1}^{i}=z_{k}^{i}+ \sum\nolimits_{j = 1}^{n} w_{ij}\varphi_{ij}( z_{k}^{j}-z_{k}^{i}-h^{ij})$ (where $\varphi_{ij}(\cdot)$ is a continuous and strictly-bounded nonlinear interaction function satisfying $\varphi_{ij}(y)=0$ if $y=0$), one can still use the proposed linear estimator to infer the underlying binary adjacency topology, combined with popular clustering methods as \cite{santos2019local} does. \end{remark} }} \begin{figure}[t] \centering \setlength{\abovecaptionskip}{0.1cm} \includegraphics[width=0.5\textwidth]{new_example2} \caption{An MRN of 11 robots, with the interaction weights in red font. Robots 1--3 are unobservable to $r_a$, robots 4--11 constitute the observable set $\mathcal{V}_{\sss F}$, and robots 7--9 constitute the ideal subset $\mathcal{V}_{\sss H}$.} \label{se_example} \vspace{-10pt} \end{figure} \begin{figure}[t] \centering \setlength{\abovecaptionskip}{-0.1cm} \subfigure[$\sigma=0.05$]{\label{fig:v_noise1} \includegraphics[width=0.45\textwidth]{auto_epsilon_323_new}} \hspace{-0.7cm} \subfigure[$\sigma=0.1$]{\label{fig:v_noise2} \includegraphics[width=0.45\textwidth]{auto_epsilon_224_new}} \caption{Estimation of the formation speed $\hat c(k,L_c) $. The threshold parameter $\epsilon$ is set to $\epsilon=0.8\sigma$.
} \label{fig:v_noise} \vspace{-10pt} \end{figure} \begin{figure*}[t] \centering \subfigure[Empirical error $f_e(\hat{R}_c)$]{\label{fig:fe_error_noise} \includegraphics[width=0.45\textwidth]{auto_fe_error}} \hspace{-0.6cm} \subfigure[Average error with ground truth]{\label{fig:average_error_noise} \includegraphics[width=0.45\textwidth]{auto_average_error}} \hspace{-0.6cm} \subfigure[Empirical error $f_e(\hat{R}_c)$]{\label{fig:fe_error_data} \includegraphics[width=0.45\textwidth]{auto_data_fe_error}} \hspace{-0.6cm} \subfigure[Average error with ground truth]{\label{fig:average_error_data} \includegraphics[width=0.45\textwidth]{auto_data_average_error}} \vspace{-5pt} \caption{Inference error of $\hat W_{\sss{H_0 F}}(\hat{R}_c)$. (a)(b): under different noise variances using 200 observations. (c)(d): under different observation amounts with $\sigma=0.4$. } \label{fig:inference_local} \vspace*{-13pt} \end{figure*} \begin{figure}[t] \centering \subfigure[Under different noise]{\label{fig:final_noise} \includegraphics[width=0.45\textwidth]{auto_noise_improved2}} \hspace{-0.6cm} \subfigure[Under different observation amount]{\label{fig:final_data} \includegraphics[width=0.45\textwidth]{auto_data_improved}} \vspace{-5pt} \caption{Comparison of the inference errors of the OLS estimator $\hat W_{\sss{HF}}(\hat R_{c})$ and the improved estimator. 200 observations are used in (a), and $\sigma=0.1$ in (b).} \label{fig:final_inference} \vspace{-13pt} \end{figure} \section{Simulation}\label{simulation} \subsection{Simulation Setting} In this section, we conduct numerical experiments to demonstrate the feasibility of inferring the local topology of the MRN and to validate the theoretical results. For simplicity, we consider a representative case of an MRN consisting of 11 robots. The preset formation shape and the robot set division are shown in Fig.~\ref{se_example}. Specifically, robot 1 is set as the leader, and the moving speed in the steady stage is set to 0.3~m/s.
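For concreteness, the steady-pattern observations and a windowed speed estimate can be sketched as follows (a scalar toy version; \texttt{generate\_obs} is illustrative, and we assume $\hat c(k,L_c)$ averages the last $L_c$ successive observation differences -- see (\ref{eq:window-s}) for the exact estimator):

```python
import random

def generate_obs(c, s, sigma, k_end, seed=0):
    """Noisy steady-pattern observations z_k = c*k + s + w_k (scalar toy)."""
    rng = random.Random(seed)
    return [c * k + s + rng.gauss(0.0, sigma) for k in range(k_end + 1)]

def speed_estimate(z, k, L_c):
    """Average of the last L_c successive differences at step k, which
    telescopes to (z[k] - z[k - L_c]) / L_c."""
    return (z[k] - z[k - L_c]) / L_c
```

With $c=0.3$ and $L_c=500$ as in the setting above, the endpoint noise contribution is attenuated by the factor $L_c$, consistent with the stable curves in Fig.~\ref{fig:v_noise}.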
The observation range of $r_a$ is set as $R_f=9$~m, and the interaction and obstacle-detection radii of a formation robot are set to $R_c=5$~m and $R_o=1.5$~m, respectively. The observation window length $L_c$ is set to 500. In the following, we present the inference results for the steady pattern, the interaction range and the local topology. \subsection{Simulation Results} Let us begin by examining the steady pattern estimator (\ref{eq:window-s}) in terms of $\hat c(k,L_c)$. For simplicity, we set the threshold parameter $\epsilon=0.8\sigma$, and conduct two groups of experiments using different $\sigma$. As shown in Fig.~\ref{fig:v_noise}, when the MRN reaches the steady state, the velocity estimate remains stable. Specifically, the red line illustrates how to find the $\epsilon$-steady time by the bound ${8\epsilon}/{\sqrt{L_c}}$ in (\ref{eq:criterion-ks}). Clearly, the larger $\epsilon$ is, the more conservative $\hat c(k,L_c)$ is. Next, as shown in Fig.~\ref{fig:inference_local}, the inference performance of the interaction range is evaluated. Since the active excitation method mainly aims to obtain a lower bound on $\hat{R}_c$, here we omit the simulation of this stage and directly present the inference error under different $\hat{R}_c$. For fair comparisons, Fig.~\ref{fig:fe_error_noise} and \ref{fig:average_error_noise} depict the error curves under noise variances from $0$ to $1$ using 200 observations, while Fig.~\ref{fig:fe_error_data}-\ref{fig:average_error_data} depict the error curves under observation amounts from $20$ to $260$ with $\sigma=0.4$. Note that the average error of the inferred topology w.r.t. the ground truth is computed as $ \| \hat W_{\sss{H_0 F}}(\hat{R}_c) - W_{\sss{H_0 F}} \|/(|\mathcal{V}_{\sss{H}}||\mathcal{V}_{\sss{F}}|)$. As we can see, the empirical bias $f_e(\hat{R}_c)$ and the average error are generally decreasing with $\hat{R}_c$.
This corresponds to the result of Theorem \ref{th:decreasing-error} and supports the feasibility of using Algorithm \ref{algo:infer-Rc} to determine a more accurate $\hat R_c$. Specifically, the more observations are involved, the smaller the average inference error w.r.t. the ground truth is. \textcolor{black}{ Then, with $\hat{R}_c=4.5$m, we compare the inference performance without and with the interaction constraint, corresponding to $\hat W_{\sss{HF}}(\hat R_{c})$ and \eqref{further}, respectively. } Fig.~\ref{fig:final_noise} presents the inference errors under different variances of the observation noise, varying from 0 to 0.5. Each test is based on 200 observations over the same time window. Fig.~\ref{fig:final_data} presents the inference errors under different numbers of observations, ranging from 20 to 260, with $\sigma=0.1$. Note that all the error indexes (Y-coordinate) in the figures describe the absolute deviation between two variables rather than the relative one. \textcolor{black}{Under the same observation amount, the larger $\sigma$ is, the smaller the improvement that can be obtained. When $k_s$ and $\sigma$ are not very large, remarkable improvements in the inference performance can be achieved.} In addition, as shown in Fig.~\ref{fig:final_data}, the inference error can be further reduced with a larger available observation amount (i.e., $k_{s}$), which corresponds to the conclusion of Theorem \ref{th:final-error}. \textcolor{black}{ Finally, we present the performance comparison of the proposed approach with the methods in \cite{matta2018consistent} and \cite{8985069} (denoted as M-1 and M-2, respectively), under the same settings of $\hat{R}_c$, noise variance and observation amount as the last experiment. Note that for fair comparisons, we use the filtered observations to implement M-1. Fig.~\ref{fig:comp_three_noise} shows the relationship between the inference error and the observation noise variance for all methods.
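As an aside, the normalized average-error metric used in these comparisons, $\| \hat W - W \|/(|\mathcal{V}_{\sss{H}}||\mathcal{V}_{\sss{F}}|)$, is straightforward to compute; a minimal sketch (the matrices below are hypothetical stand-ins, not the simulated MRN):

```python
import numpy as np

def average_topology_error(W_hat, W_true):
    """Frobenius-norm deviation normalized by the number of entries,
    i.e., ||W_hat - W_true|| / (|V_H| * |V_F|)."""
    n_h, n_f = W_true.shape
    return np.linalg.norm(W_hat - W_true) / (n_h * n_f)

# Hypothetical 3x4 ground-truth interaction weights and a perturbed estimate.
W_true = np.array([[0.0, 0.5, 0.5, 0.0],
                   [0.3, 0.0, 0.0, 0.7],
                   [0.2, 0.4, 0.4, 0.0]])
W_hat = W_true + 0.01 * np.ones_like(W_true)
err = average_topology_error(W_hat, W_true)
```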
Fig.~\ref{fig:comp_three_observation} shows the relationship between the inference error and the observation amount for all methods, under common noise variance $\sigma=0.1$. It is clear from the presented results that the proposed method outperforms the other two, which mainly results from the estimation of the formation input and the interaction range. We also observe that M-1 performs better than M-2 mainly because of the filtered observations we used to implement it. More detailed technical comparisons with these and some other inference algorithms are summarized in Table \ref{tab:comparison}. The table shows that the proposed method has better applicability to the considered problem, together with inference performance guarantees. } \begin{figure}[t] \centering \subfigure[Under different noise]{\label{fig:comp_three_noise} \includegraphics[width=0.45\textwidth]{compare_three_noise}} \hspace{-0.6cm} \subfigure[Under different observation amount]{\label{fig:comp_three_observation} \includegraphics[width=0.45\textwidth]{compare_three_observation2}} \vspace{-5pt} \caption{\textcolor{black}{Comparisons of the proposed method with methods in \cite{matta2018consistent} and \cite{8985069}.
200 observations are used in (a), and $\sigma=0.1$ in (b).}} \label{fig:compare_three} \end{figure} \begin{table*}[t] \centering \tiny \caption{\label{tab:comparison}Comparisons of the proposed method with other methods} \begin{tabular}{p{1.7cm}p{1cm}p{0.9cm}p{1cm}p{1.5cm}cc p{0.9cm}p{1cm}c} \toprule \multirow{2}{*}{\textbf{Method}} & \multicolumn{2}{c}{ \textbf{Topology Structure}} & \multicolumn{2}{c}{ \textbf{Input Consideration} } & \multicolumn{2}{c}{\begin{tabular}[c]{@{}c@{}} \textbf{ Local Inference }\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}\textbf{Input}\\ \textbf{Filtering}\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}} \textbf{Observation}\\ \textbf{Noise}\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}} \ \textbf{Convergence}\\ \textbf{Speed}\end{tabular}} \\ \cmidrule{2-7} & Undirected & Directed & Random & Non-random & Feasibility & Conditions \\ \midrule Truncated estimator in \cite{matta2018consistent} & \makecell[c]{$\checkmark$} & &\makecell[c]{$\checkmark$} & & \makecell[c]{$\checkmark$} &\makecell{Erd\H{o}s-R{\'e}nyi graph\\$ \frac{N_O}{N}\!\in\! 
(0,1] (N\!\to\!\infty)$$^{^1}$} & & &$\bm{O}(\sqrt{ \frac{ 1}{T} })$$^{^2}$ \\ \midrule Spectral method in \cite{zhu2020network} & \makecell[c]{$\checkmark$} & & \makecell[c]{$\checkmark$} & & & & & &$\bm{O}(e^{-L})$$^{^3}$ \\ \midrule Geometric method in \cite{vasquez2018network} & &\makecell[c]{$\checkmark$} & &\makecell[c]{$\checkmark$} &\makecell{ \textcolor{black}{Binary} \\ \textcolor{black}{judgement$^{^4}$}} &\makecell{ Non-steady trajectory\\is available} & &\makecell[c]{$\checkmark$} &No guarantee\\ \midrule OLS-based method in \cite{8985069} & &\makecell[c]{$\checkmark$} & &\makecell[c]{$\checkmark$} &\makecell{Feasible \\ if revised$^{^5}$} &\makecell{\textcolor{black}{Non-steady trajectory}\\\textcolor{black}{is available}} & &\makecell[c]{$\checkmark$} &No guarantee\\ \midrule \textbf{Our method} & & \makecell[c]{$\checkmark$} & & \makecell[c]{$\checkmark$} &\makecell[c]{$\checkmark$} &\makecell{Non-steady trajectory\\is available} &\makecell[c]{$\checkmark$} &\makecell[c]{$\checkmark$} &$\bm{O}(\frac{1}{T})$ \\ \bottomrule \addlinespace[0.5ex] \setlength\tabcolsep{0.5ex} \end{tabular} \vspace{-5pt} \begin{tablenotes}[para]\footnotesize \item[1] $N_O$ and $N$ represent the cardinality of the observable subset and the whole node set, respectively. \item[2] $T$ here refers to the number of observations in the non-steady stage, and the system in this reference is asymptotically stable. \item[3] In \cite{zhu2020network} the authors implement $L$ groups of tests over the system, with the same initial state while ending at different moments, and no noise terms are involved. \item[4] The method is based on the geometric characteristics of the robot trajectory. Although not tailored to the local topology inference of MRNs, it can be applied to infer whether the connection between two robots exists. \item[5] The method is originally designed for global topology inference, and can be revised for the local case using the idea in this paper.
\end{tablenotes} \end{table*} \section{Conclusion}\label{conclusion} \textcolor{black}{In this paper, we have studied the problem of inferring the local topology of MRNs under first-order formation control, without knowledge of the formation input and interaction parameters}. To overcome the inference challenges brought by the unknown formation input and interaction range, \textcolor{black}{we first demonstrated how to determine the available robot subset for inference, considering that the set of robots within the observation range of the inference robot might change over time. } Then, we designed $\epsilon$-steady pattern estimators to obtain the input parameters and an active excitation method to estimate the interaction range. Next, we proposed a range-shrink strategy to avoid the interference caused by the unobservable robots and presented the local topology estimator. The convergence rate and the accuracy of the proposed estimator were established. \textcolor{black}{Extensions to different observation slots for the robots and to more complicated control models were also analyzed. Finally, extensive simulation tests and comparisons verified the effectiveness of the proposed method.} Future directions include i) generalizing the method to more complex formation control cases (e.g., switching topology and nonlinear dynamics); and ii) investigating possible attacks against MRNs based on the inferred topology, along with their countermeasures.
\section{Introduction} Description of the dynamical motions of a collection of particles in space and time can provide a rich amount of information, including molecular geometries, mean atomic fluctuations, and free energies. A molecular conformation is located at a local energy minimum, where the net inter-particle force on each particle is close to zero and the position on the potential energy surface is stationary. Molecular motion can be modeled as vibrations around and interconversions between these stable configurations. Molecules spend most of their time in these low-lying states at finite temperature, which thus dominate the molecular properties of the system. In this paper we study the molecular mechanics of tetrahedral molecules. Let $u(t)=(u_{1}(t),\,u_{2}(t),\,u_{3}(t),\,u_{4}(t))$ with $u_{j}(t)\in \mathbb{R}^{3}$ for $j=1,2,3,4$ stand for the spatial position of the system of $4$ particles at time $t$. Such a system satisfies the Newtonian equation \begin{equation} \ddot{u}(t)=-\nabla V(u(t)), \label{eqn01} \end{equation} where the potential energy $V$ represents the force field given by \[ V(u):=\sum_{1\leq j<k\leq4}U(|u_{j}-u_{k}|^{2}). \] When these $4$ particles interact by bond stretching, van der Waals and electrostatic forces \cite{xx, GT}, $U$ is given by \[ U(x)=\left( \sqrt{x}-1\right) ^{2}+\left( \frac{B}{x^{6}}-\frac{A}{x^{3}}\right) +\frac{\sigma}{\sqrt{x}}~. \] A local energy minimum is a stationary point $a\in\mathbb{R}^{12}$ such that $\nabla V(a)=0$. To detect possible periodic vibrations around the configuration $a$, a natural method is to investigate the existence of periodic solutions to \eqref{eqn01} near $a$. An important feature of this molecular configuration is that it admits tetrahedral spatial symmetries, and thus the bifurcated/emerging periodic motions will have both spatial and temporal symmetry.
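As a quick numerical illustration of \eqref{eqn01}, the system can be integrated with a velocity Verlet scheme. The sketch below keeps only the bond-stretching term of $U$ (i.e. it sets $A=B=\sigma=0$, an illustrative simplification) and checks energy conservation near the tetrahedral configuration:

```python
import numpy as np

# Illustrative choice: only the bond-stretching term of U,
# U(x) = (sqrt(x) - 1)^2, so U'(x) = 1 - 1/sqrt(x).
def U(x):      return (np.sqrt(x) - 1.0) ** 2
def Uprime(x): return 1.0 - 1.0 / np.sqrt(x)

def potential(u):
    # V(u) = sum_{j<k} U(|u_j - u_k|^2)
    return sum(U(np.dot(u[j] - u[k], u[j] - u[k]))
               for j in range(4) for k in range(j + 1, 4))

def grad_potential(u):
    # grad_{u_j} V = sum_{k != j} 2 U'(|u_j - u_k|^2) (u_j - u_k)
    g = np.zeros_like(u)
    for j in range(4):
        for k in range(4):
            if k != j:
                d = u[j] - u[k]
                g[j] += 2.0 * Uprime(np.dot(d, d)) * d
    return g

# Regular tetrahedron with circumradius 1.
gamma = np.array([[0.0, 0.0, 1.0],
                  [2 * np.sqrt(2) / 3, 0.0, -1 / 3],
                  [-np.sqrt(2) / 3,  np.sqrt(6) / 3, -1 / 3],
                  [-np.sqrt(2) / 3, -np.sqrt(6) / 3, -1 / 3]])

# Start slightly expanded relative to the equilibrium radius sqrt(3/8).
u = 1.05 * np.sqrt(3.0 / 8.0) * gamma.copy()
v = np.zeros_like(u)
h = 1e-3
E0 = 0.5 * np.sum(v ** 2) + potential(u)
a = -grad_potential(u)
for _ in range(2000):            # velocity Verlet steps
    u = u + h * v + 0.5 * h * h * a
    a_new = -grad_potential(u)
    v = v + 0.5 * h * (a + a_new)
    a = a_new
E1 = 0.5 * np.sum(v ** 2) + potential(u)
```

Since Verlet is symplectic, the total energy should be conserved up to $O(h^{2})$ oscillations over this time span.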
In this case, the equation \eqref{eqn01} is equivariant under the action of the group \[ S_{4}\times O(3)\times O(2), \] which acts by permuting the particles, by rotating and reflecting them in $\mathbb{R}^{3}$, and by temporal phase shift and reflection, respectively. In this paper, we use the equivariant degree method to investigate the existence of periodic solutions to \eqref{eqn01} around an equilibrium admitting $S_{4}$-symmetries. The concept of equivariant gradient degree was introduced by K. Geba in \cite{Geba}. This degree satisfies all the standard properties expected from a degree theory. In addition, it can also be generalized to settings in infinite-dimensional spaces, allowing its application to the study of critical points of invariant functionals (cf. \cite{survey}). The values of the gradient equivariant degree can be expressed elegantly in the form \[ \nabla_{G}\text{-deg}(\nabla\varphi,\Omega)=n_{1}(H_{1})+n_{2}(H_{2})+\dots+n_{m}(H_{m}),\;\;\;n_{k}\in\mathbb{Z}, \] where $(H_{j})$ are the orbit types in $\Omega$, which allows one to predict the existence of various critical orbits for $\varphi$ and their symmetries. We should mention that the gradient degree is just one of many equivariant degrees (see \cite{survey}) that were introduced in the last three decades: equivariant degrees with $n$ free parameters (primary degrees, twisted degrees), gradient and orthogonal equivariant degrees \cite{AED, IVB, KW} --- all these different degrees being related to each other (cf. \cite{BKR,RR}). For multiple applications of the equivariant gradient degree to Newtonian systems, we refer the reader to \cite{survey, DKY, FRR, GI, RR, RY2} and the references therein. The local minimizer $u_{o}$ of $V$ is a regular tetrahedron inscribed in a sphere of radius $r_{o}$. Then $U^{\prime\prime}(r_{o})>0$ due to the fact that $u_{o}$ is a local minimizer. Let \[ \nu_{0}^{2}:=\frac{32}{3}r_{o}^{2}U^{\prime\prime}(r_{o})>0.
\] The 6 non-zero eigenvalues of $D^{2}V(u_{o})$ are computed to be $\nu_{0}^{2}$ with multiplicity 2, $2\nu_{0}^{2}$ with multiplicity 3, and $4\nu_{0}^{2}$ with multiplicity 1. Then, the normal modes of (\ref{eqn01}) are $\nu_{0}$, $\sqrt{2}\nu_{0}$, and $2\nu_{0}$ with the respective multiplicities. Observe that the normal mode $\nu_{0}$ is $1:1:2$ resonant. Due to the multiplicities and resonances, the Lyapunov center theorem can be applied to prove only the local existence of a periodic solution (nonlinear normal mode) emerging from the frequency $2\nu_{0}$ \cite{MW}. On the other hand, since the equilibrium corresponds to a local minimizer of the Hamiltonian, the Weinstein-Moser theorem \cite{1} gives the existence of at least 6 periodic orbits in each (small) fixed energy level, regardless of resonances and multiplicities. Using the gradient equivariant degree method, we establish the global existence of branches of periodic solutions emerging from the equilibrium $u_{o}$, starting with the frequencies of the normal modes $\nu_{0}$, $\sqrt{2}\nu_{0}$, and $2\nu_{0}$. The global property means that each family of periodic solutions is represented by a continuum that has norm or period going to infinity, ends in a collision orbit, or comes back to another equilibrium. Specifically, we prove that the tetrahedral equilibrium $u_{o}$ has the following global families of periodic solutions: one family starting with frequency $\nu_{0}$, five families with frequency $\sqrt{2}\nu_{0}$, and one family with frequency $2\nu_{0}$. The family from $\nu_{0}$ has the symmetries of a brake orbit where all the particles form a regular tetrahedron at any time. The first symmetry from $\sqrt{2}\nu_{0}$ gives brake orbits where two pairs of particles are related by inversion, and in the second symmetry, one pair of particles is related by inversion and the other by a $\pi$-rotation and $\pi$-phase shift.
The third symmetry from $\sqrt{2}\nu_{0}$ is not a brake orbit, and all the particles are related by a $\pi/2$-rotoreflection and $\pi/2$-phase shift. The fourth symmetry from $\sqrt{2}\nu_{0}$ is a brake orbit where three particles form a triangle at all times, while the remaining one makes a counterbalancing movement. The fifth symmetry from $\sqrt{2}\nu_{0}$ is not a brake orbit; three particles move in the form of a traveling wave along a triangle, while the remaining one makes a counterbalancing movement. The family from $2\nu_{0}$ has the symmetries of a brake orbit with two symmetries by inversion at any time. The exact description of the symmetries is given in Section 5. The article \cite{5} presents an extensive study, for tetraphosphorus molecules, of the stability and existence of nonlinear modes that are relative equilibria. The authors assume the absence of resonances in the normal form of that Hamiltonian. In contrast, in our study we consider nonlinear normal modes that are not relative equilibria, and we use a force field expressing the mutual interaction between the atoms, which leads to a Hamiltonian with resonances. Other molecules that have tetrahedral symmetries include the methane molecule. This molecule has an equilibrium state with a carbon atom at the center and four hydrogen atoms at the vertices of a regular tetrahedron. The articles \cite{3,4} use a combination of geometric methods, normal forms, and the Krein signature to analyze the existence of nonlinear modes and their stability. These results can be easily extrapolated to the tetraphosphorus molecule, which has the same symmetries but a different configuration. In this sense, the symmetries and number of solutions obtained in \cite{3,4} for each frequency coincide with our results. The gradient equivariant degree allows us to determine global properties of the branches and to handle resonances easily. Nevertheless, more precise local information can be obtained with the results of \cite{3,4,5}.
The paper is structured in the following sections. In Section 2, we analyze the isotypic decomposition of the eigenvalues of the Hessian $D^{2}V(u_{o})$. In Section 3, we prove the global existence of families of periodic solutions from the tetrahedral equilibrium. In Section 4, we describe the symmetries of the different families of periodic solutions. In the Appendix, we review preliminary notions and definitions used in group representations and the properties of the equivariant gradient degree, and indicate the standard techniques used to compute it. \section{Model for Atomic Interaction} Consider $4$ identical particles $u_{j}$ in the space $\mathbb{R}^{3}$, for $j=1,2,3,4$. Assume that each particle $u_{j}$ interacts with all other particles $u_{k}$ for $k\not=j$. Put $u:=(u_{1},u_{2},u_{3},u_{4})^{T}\in\mathbb{R}^{12}$ and \[ \tilde{\Omega}_{o}:=\{u\in\mathbb{R}^{12}:\forall_{k\not=j}\;\;u_{k}\not=u_{j}\}. \] The Newtonian equation that describes the interaction between these $4$ particles is \begin{equation} \ddot{u}=-\nabla V(u),\quad u\in\tilde{\Omega}_{o}. \label{eq:mol} \end{equation} The potential energy $V:\tilde{\Omega}_{o}\rightarrow\mathbb{R}$, \begin{equation} V(u):=\sum_{1\leq j<k\leq4}U(|u_{j}-u_{k}|^{2}), \label{eq:pot} \end{equation} is well defined and of class $C^{2}$ when $U\in C^{2}(\mathbb{R}^{+})$ satisfies \begin{equation} \lim_{x\rightarrow0^{+}}U(x)=\infty,\qquad\lim_{x\rightarrow\infty}U(x)=\infty\text{.} \label{U} \end{equation} Classical forces used in molecular mechanics are associated with bond stretching between adjacent particles, electrostatic interactions and van der Waals forces. The condition (\ref{U}) holds when $U$ is determined by these force fields. \subsection{The Tetrahedral Equilibrium} \label{sec:equilib} One can easily notice that the space $\mathbb{R}^{12}$ is a representation of the group \[ \mathfrak{G}:=S_{4}\times O(3), \] where $S_{4}$ stands for the symmetric group on four elements.
More precisely, $S_{4}$ is the group of permutations of the four elements $\{1,2,3,4\}$. Then the action of $\mathfrak{G}$ on $\mathbb{R}^{12}$ is given by \begin{equation} (\sigma,A)(u_{1},u_{2},u_{3},u_{4})^{T}=(Au_{\sigma(1)},Au_{\sigma(2)},Au_{\sigma(3)},Au_{\sigma(4)})^{T}, \label{eq:act1} \end{equation} where $A\in O(3)$ and $\sigma\in S_{4}$. Notice that $S_{4}$ can be considered as a subgroup of $O(3)$, representing the actual symmetries of a tetrahedron $\mathbf{T}\subset\mathbb{R}^{3}$. More precisely, consider the regular tetrahedron given by \[ \mathbf{T}:=\left\{ \gamma_{1},\gamma_{2},\gamma_{3},\gamma_{4}\right\}, \] where \[ \gamma_{1}=\left(\begin{array}{c} 0\\ 0\\ 1 \end{array}\right),\quad \gamma_{2}=\left(\begin{array}{c} \frac{2}{3}\sqrt{2}\\ 0\\ -\frac{1}{3} \end{array}\right),\quad \gamma_{3}=\left(\begin{array}{c} -\frac{1}{3}\sqrt{2}\\ \frac{1}{3}\sqrt{6}\\ -\frac{1}{3} \end{array}\right),\quad \gamma_{4}=\left(\begin{array}{c} -\frac{1}{3}\sqrt{2}\\ -\frac{1}{3}\sqrt{6}\\ -\frac{1}{3} \end{array}\right). \] The tetrahedral group $\{A\in O(3):A(\mathbf{T})=\mathbf{T}\}$ can be identified with the group $S_{4}$. Indeed, any $A$ such that $A(\mathbf{T})=\mathbf{T}$ permutes the vertices of $\mathbf{T}$, i.e. \[ A\gamma_{j}=\gamma_{\sigma(j)} \] for $j=1,2,3,4$, so we can identify $A_{\sigma}$ with the permutation $\sigma\in S_{4}$ by these relations. For the permutations $(1,2)$ and $(2,3,4)$, which are generators of $S_{4}$, we have explicitly the identification \[ A_{(1,2)}=\left[\begin{array}{ccc} \frac{1}{3} & 0 & \frac{2\sqrt{2}}{3}\\ 0 & 1 & 0\\ \frac{2\sqrt{2}}{3} & 0 & -\frac{1}{3} \end{array}\right] \quad\text{ and }\quad A_{(2,3,4)}=\left[\begin{array}{ccc} -\frac{1}{2} & \frac{\sqrt{3}}{2} & 0\\ -\frac{\sqrt{3}}{2} & -\frac{1}{2} & 0\\ 0 & 0 & 1 \end{array}\right]~. \] These generators define an explicit isomorphism $\sigma\mapsto A_{\sigma}$ from $S_{4}$ onto the tetrahedral group in $O(3)$.
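These identifications can be sanity-checked numerically; the sketch below only verifies that the two generator matrices are orthogonal and permute the vertex set $\{\gamma_{1},\dots,\gamma_{4}\}$:

```python
import numpy as np

# vertices gamma_1, ..., gamma_4 of the regular tetrahedron (circumradius 1)
gamma = [np.array([0.0, 0.0, 1.0]),
         np.array([2 * np.sqrt(2) / 3, 0.0, -1 / 3]),
         np.array([-np.sqrt(2) / 3,  np.sqrt(6) / 3, -1 / 3]),
         np.array([-np.sqrt(2) / 3, -np.sqrt(6) / 3, -1 / 3])]

A12 = np.array([[1 / 3, 0, 2 * np.sqrt(2) / 3],
                [0, 1, 0],
                [2 * np.sqrt(2) / 3, 0, -1 / 3]])
A234 = np.array([[-1 / 2,  np.sqrt(3) / 2, 0],
                 [-np.sqrt(3) / 2, -1 / 2, 0],
                 [0, 0, 1]])

def is_orthogonal(A):
    return np.allclose(A.T @ A, np.eye(3))

def permutes_tetrahedron(A):
    # each A @ gamma_j must coincide with exactly one gamma_k
    perm = []
    for g in gamma:
        img = A @ g
        matches = [k for k, gk in enumerate(gamma) if np.allclose(img, gk)]
        if len(matches) != 1:
            return None
        perm.append(matches[0])
    return perm if sorted(perm) == [0, 1, 2, 3] else None
```

For instance, `permutes_tetrahedron(A12)` returns the transposition swapping the first two vertices (in 0-based indexing, `[1, 0, 2, 3]`).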
Notice that the function $V:\tilde{\Omega}_{o}\rightarrow\mathbb{R}$ is invariant with respect to the action of $c\in\mathbb{R}^{3}$ on $(\mathbb{R}^{3})^{4}$ by shifting, $V(u+c)=V(u)$. Therefore, in order to fix the center of mass at the origin in the system (\ref{eq:mol}), we define the subspace \begin{equation} \mathscr V:=\{(u_{1},u_{2},u_{3},u_{4})^{T}\in(\mathbb{R}^{3})^{4}:u_{1}+u_{2}+u_{3}+u_{4}=0\} \label{eq:V} \end{equation} and $\Omega_{o}=\tilde{\Omega}_{o}\cap\mathscr V$. Then, one can easily notice that $\mathscr V$ and $\Omega_{o}$ are invariant under the nonlinear dynamics of \eqref{eq:mol}, and in addition $\Omega_{o}$ is $\mathfrak{G}$-invariant. Consider the point $v_{o}:=(\gamma_{1},\gamma_{2},\gamma_{3},\gamma_{4})\in\Omega_{o}$. The isotropy group $\mathfrak{G}_{v_{o}}$ is given by \[ \tilde{S}_{4}:=\{(\sigma,A_{\sigma})\in S_{4}\times O(3):\sigma\in S_{4}\}, \] where $S_{4}$ is considered as a subgroup of $O(3)$ using the above identification for $A_{\sigma}$. Since $\tilde{S}_{4}$ is a finite group, $\mathscr V^{\tilde{S}_{4}}$ is a one-dimensional subspace of $\mathscr V$ and we have that \[ \mathscr V^{\tilde{S}_{4}}=\text{span}_{\mathbb{R}}\{(\gamma_{1},\gamma_{2},\gamma_{3},\gamma_{4})^{T}\}. \] Then, by the Symmetric Criticality Principle, a critical point of $V^{\tilde{S}_{4}}:\Omega_{o}^{\tilde{S}_{4}}\rightarrow\mathbb{R}$ is also a critical point of $V$. Since $\mathscr V^{\tilde{S}_{4}}$ is one-dimensional, we denote its vectors by $rv_{o}\in\mathbb{R}^{12}$ for $r\in\mathbb{R}$. Notice that \[ \phi(r):=\sum_{1\leq j<k\leq4}U\left( \frac{8}{3}r^{2}\right)=6\,U\left( \frac{8}{3}r^{2}\right),\quad r>0, \] is exactly the restriction of $V$ to the fixed-point subspace $\mathscr V^{\tilde{S}_{4}}\cap\Omega_{o}$. Thus, in order to find an equilibrium for \eqref{eq:mol}, by the Symmetric Criticality Principle, it is sufficient to identify a critical point $r_{o}$ of $\phi(r)$.
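This one-dimensional reduction is easy to carry out numerically. For the illustrative bond-stretching choice $U(x)=(\sqrt{x}-1)^{2}$ (dropping the van der Waals and electrostatic terms), $\phi$ is minimized exactly where the edge length $\sqrt{8/3}\,r$ equals $1$, i.e. at $r_{o}=\sqrt{3/8}$; a minimal sketch:

```python
import numpy as np

def U(x):
    # illustrative bond-stretching potential; the paper's U also has
    # van der Waals (A, B) and electrostatic (sigma) terms
    return (np.sqrt(x) - 1.0) ** 2

def phi(r):
    # restriction of V to the fixed-point subspace: 6 equal pair terms,
    # each pairwise squared distance being (8/3) r^2
    return 6.0 * U(8.0 / 3.0 * r ** 2)

# simple grid search for the minimizer r_o of phi on (0, infinity)
rs = np.linspace(0.1, 2.0, 20001)
r_o = rs[np.argmin(phi(rs))]
```

The grid minimizer agrees with the closed-form value $\sqrt{3/8}\approx 0.6124$ up to the grid spacing.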
Clearly, by (\ref{U}), \[ \lim_{r\rightarrow0^{+}}\phi(r)=\lim_{r\rightarrow\infty}\phi(r)=\infty, \] so there exists a minimizer $r_{o}\in(0,\infty)$, which is clearly a critical point of $\phi$. Consequently, \begin{equation} u_{o}:=r_{o}v_{o}\in\Omega_{o} \label{eq:crit-u0} \end{equation} is the $\tilde{S}_{4}$-symmetric equilibrium of $V$. The components of $u_{o}$, which are $r_{o}\gamma_{j}$ for $j=1,2,3,4$, give us the configuration of the stationary solution of \eqref{eq:mol}, see Figure \ref{fig:1}. \begin{figure}[ptb] \vglue2.1cm\hskip3cm \scalebox{.7}{ \psline[linecolor=red]{->}(2.5,-.8)(2.5,5) \rput(2.5,5.3){\red $x_3$} \psline[linecolor=red]{->}(0.75,0.145)(6,2.2) \rput(6.1,2.4){\red $x_1$} \rput(0,2.5){\psline[linecolor=red]{<-}(0,0)(3,-2)} \rput(0,2.7){\red $x_2$} \psdot(2.5,.83) \psline[linecolor=white,fillcolor=lightblue](0,0)(3,-2)(2.5,4) \psline[linecolor=white,fillcolor=lightyellow](2.5,4)(3,-2)(5,0) \psline[linewidth=2pt](0,0)(3,-2)(5,0) \psline[linewidth=2pt](0,0)(2.5,4)(5,0) \psline[linewidth=2pt](3,-2)(2.5,4) \psline[linewidth=2pt,linestyle=dashed](0,0)(5,0) \psdot[dotsize=10pt](5,0)\psdot[dotsize=10pt](0,0)\psdot[dotsize=10pt](3,-2)\psdot[dotsize=10pt](2.5,4) \rput(5.7,0){\large $r_{o}\gamma_2$} \rput(-.7,0){\large $r_{o}\gamma_3$}\ \rput(3.7,-2.2){\large $r_{o}\gamma_4$} \rput(2.7,4.5){\large $r_{o}\gamma_1$}} \par \vskip1.5cm \caption{Stationary solution to equation \eqref{eq:mol} with tetrahedral symmetries.\label{fig:1}} \end{figure} \subsection{Isotypic Decomposition} Since the system \eqref{eq:mol} is symmetric with respect to the group $\mathfrak{G}:=S_{4}\times O(3)$, the orbit of equilibria $\mathfrak{G}(u_{o})$ is a 3-dimensional submanifold of $\mathscr V$. The slice $S_{o}$ to the orbit $\mathfrak{G}(u_{o})$ at $u_{o}$ is \[ S_{o}:=\{x\in\mathscr V:x\bullet T_{u_{o}}\mathfrak{G}(u_{o})=0\}.
\] The tangent space $T_{u_{o}}\mathfrak{G}(u_{o})$ is described as \[ T_{u_{o}}\mathfrak{G}(u_{o})=\text{span}\{(J_{j}\gamma_{1},J_{j}\gamma_{2},J_{j}\gamma_{3},J_{j}\gamma_{4})^{T}\in\mathscr V:j=1,2,3\}, \] where the $J_{j}$ are the three infinitesimal generators of the rotations, \[ J_{1}:=\left[\begin{array}{ccc} 0 & 0 & 0\\ 0 & 0 & -1\\ 0 & 1 & 0 \end{array}\right],\quad J_{2}:=\left[\begin{array}{ccc} 0 & 0 & 1\\ 0 & 0 & 0\\ -1 & 0 & 0 \end{array}\right],\quad J_{3}:=\left[\begin{array}{ccc} 0 & -1 & 0\\ 1 & 0 & 0\\ 0 & 0 & 0 \end{array}\right]. \] Since the $\mathfrak{G}$-isotropy group of $u_{o}$ is $\tilde{S}_{4}$, the slice $S_{o}$ is an orthogonal $\tilde{S}_{4}$-representation. In order to identify the $\tilde{S}_{4}$-isotypic components, we first consider the $\tilde{S}_{4}$-representation $V=\mathbb{R}^{12}$ on which $\tilde{S}_{4}$ acts by \eqref{eq:act1}. We have the following table of characters $\chi_{j}$, $j=0,1,2,3,4$, for all irreducible $\tilde{S}_{4}$-representations $\mathcal{V}_{j}$ (all of them of real type) and the character $\chi_{V}$ of the representation $V$: \[ \begin{tabular}[c]{|cc|ccccc|}\hline Rep. & Character & $(1)$ & $(1,2)$ & $(1,2)(3,4)$ & $(1,2,3)$ & $(1,2,3,4)$\\\hline $\mathcal{V}_{0}$ & $\chi_{0}$ & $1$ & $1$ & $1$ & $1$ & $1$\\ $\mathcal{V}_{1}$ & $\chi_{1}$ & $3$ & $1$ & $-1$ & $0$ & $-1$\\ $\mathcal{V}_{2}$ & $\chi_{2}$ & $2$ & $0$ & $2$ & $-1$ & $0$\\ $\mathcal{V}_{3}$ & $\chi_{3}$ & $3$ & $-1$ & $-1$ & $0$ & $1$\\ $\mathcal{V}_{4}$ & $\chi_{4}$ & $1$ & $-1$ & $1$ & $1$ & $-1$\\\hline $V$ & $\chi_{V}$ & $12$ & $2$ & $0$ & $0$ & $0$\\\hline \end{tabular} \] One can easily conclude that we have the following $\tilde{S}_{4}$-isotypic decomposition: \[ V=\mathcal{V}_{0}\oplus\left( \mathcal{V}_{1}\oplus\mathcal{V}_{1}\right) \oplus\mathcal{V}_{2}\oplus\mathcal{V}_{3}~.
\] Since the subspace $\mathscr V$ is obtained by fixing the center of mass at the origin, and $\{(v,v,v,v)\in\mathbb{R}^{12}:v\in\mathbb{R}^{3}\}$ is equivalent to the irreducible $\tilde{S}_{4}$-representation $\mathcal{V}_{1}$, we have the $\tilde{S}_{4}$-isotypic decomposition \begin{equation} \mathscr V=V_{0}\oplus V_{1}\oplus V_{2}\oplus V_{3},\qquad V_{j}=\mathcal{V}_{j}~. \label{eq:S4-iso} \end{equation} In order to determine the $\tilde{S}_{4}$-isotypic type of the tangent space $T_{u_{o}}\mathfrak{G}(u_{o})$ (which has to be an irreducible $\tilde{S}_{4}$-representation of dimension $3$), we apply the isotypic projections $P_{j}:V\rightarrow V_{j}$, $j=1$ and $3$, given by \[ P_{j}v:=\frac{\dim(V_{j})}{24}\sum_{g\in S_{4}}\chi_{j}(g)\,gv,\quad v\in V, \] to conclude that $T_{u_{o}}\mathfrak{G}(u_{o})\simeq\mathcal{V}_{3}$. Therefore, the $\tilde{S}_{4}$-isotypic decomposition of the slice $S_{o}$ is \begin{equation} S_{o}=V_{0}\oplus V_{1}\oplus V_{2}~.\label{eq:S4-iso-S} \end{equation} \subsection{Computation of the Spectrum $\sigma(\nabla^{2}V(u_{o}))$} Since the potential $V$ is given by \eqref{eq:pot}, we have \[ \nabla V(u)=2\left[\begin{array}{c} \sum_{k\not=1}U^{\prime}(|u_{1}-u_{k}|^{2})(u_{1}-u_{k})\\ \sum_{k\not=2}U^{\prime}(|u_{2}-u_{k}|^{2})(u_{2}-u_{k})\\ \sum_{k\not=3}U^{\prime}(|u_{3}-u_{k}|^{2})(u_{3}-u_{k})\\ \sum_{k\not=4}U^{\prime}(|u_{4}-u_{k}|^{2})(u_{4}-u_{k}) \end{array}\right]. \] Notice that $\nabla V(u_{o})=0$, since $U^{\prime}\left(\frac{8}{3}r_{o}^{2}\right)=0$ by the choice of $r_{o}$. For a given vector $v=(x,y,z)^{T}\in\mathbb{R}^{3}$, we define the matrix $\mathfrak{m}_{v}:=vv^{T}$, i.e. \[ \mathfrak{m}_{v}:=\left[\begin{array}{c} x\\ y\\ z \end{array}\right][x,y,z]=\left[\begin{array}{ccc} x^{2} & xy & xz\\ xy & y^{2} & yz\\ xz & yz & z^{2} \end{array}\right].
\] Then one can easily see that the matrix $\mathfrak{m}_{v}$ represents the linear operator $\Vert v\Vert^{2}\,P_{v}:\mathbb{R}^{3}\rightarrow\mathbb{R}^{3}$, where $P_{v}$ is the orthogonal projection onto the subspace generated by $v\in\mathbb{R}^{3}$. Put \[ \mathfrak{m}_{j,k}:=\mathfrak{m}_{(\gamma_{j}-\gamma_{k})}. \] Clearly $\mathfrak{m}_{j,k}=\mathfrak{m}_{k,j}$. Notice that \[ \mathfrak{m}_{j,k}(\gamma_{j})=\frac{4}{3}(\gamma_{j}-\gamma_{k}). \] By direct computations one can derive the following matrix form of $\nabla^{2}V(u_{o})$: \[ M:=\nabla^{2}V(u_{o})=4r_{o}^{2}U^{\prime\prime}(r_{o})\left[\begin{array}{cccc} \displaystyle\sum_{j\not=1}\mathfrak{m}_{1,j} & -\mathfrak{m}_{1,2} & -\mathfrak{m}_{1,3} & -\mathfrak{m}_{1,4}\\ -\mathfrak{m}_{2,1} & \displaystyle\sum_{j\not=2}\mathfrak{m}_{2,j} & -\mathfrak{m}_{2,3} & -\mathfrak{m}_{2,4}\\ -\mathfrak{m}_{3,1} & -\mathfrak{m}_{3,2} & \displaystyle\sum_{j\not=3}\mathfrak{m}_{3,j} & -\mathfrak{m}_{3,4}\\ -\mathfrak{m}_{4,1} & -\mathfrak{m}_{4,2} & -\mathfrak{m}_{4,3} & \displaystyle\sum_{j\not=4}\mathfrak{m}_{4,j} \end{array}\right]. \] Since $M:\mathscr V\rightarrow\mathscr V$ is $\tilde{S}_{4}$-equivariant, it follows that \[ M_{j}:=M|_{V_{j}}:V_{j}\rightarrow V_{j},\quad j=0,1,2, \] and since the sub-representations $V_{j}=\mathcal{V}_{j}$ are absolutely irreducible, we have that \[ M_{j}=\mu_{j}\,\mathrm{Id}:V_{j}\rightarrow V_{j},\;\;\;j=0,1,2, \] which implies $\sigma(M|_{S_{o}})=\{\mu_{0},\mu_{1},\mu_{2}\}$.
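Before deriving explicit formulae, the structure of $\sigma(M)$ can be checked numerically by assembling $M$ from the blocks $\mathfrak{m}_{j,k}$. The sketch below uses the normalization $r_{o}=U^{\prime\prime}(r_{o})=1$ (so $\nu_{0}^{2}=32/3$) and recovers the eigenvalues $\nu_{0}^{2}$, $2\nu_{0}^{2}$, $4\nu_{0}^{2}$ with multiplicities $2$, $3$, $1$, together with a six-dimensional kernel (translations and rotations of the full space $\mathbb{R}^{12}$):

```python
import numpy as np

# vertices of the regular tetrahedron (circumradius 1)
gamma = np.array([[0.0, 0.0, 1.0],
                  [2 * np.sqrt(2) / 3, 0.0, -1 / 3],
                  [-np.sqrt(2) / 3,  np.sqrt(6) / 3, -1 / 3],
                  [-np.sqrt(2) / 3, -np.sqrt(6) / 3, -1 / 3]])

def m(j, k):
    # m_{j,k} = (gamma_j - gamma_k)(gamma_j - gamma_k)^T
    d = gamma[j] - gamma[k]
    return np.outer(d, d)

# Hessian on R^12 with r_o = U''(r_o) = 1, so the prefactor is 4.
M = np.zeros((12, 12))
for j in range(4):
    for k in range(4):
        if k != j:
            M[3*j:3*j+3, 3*j:3*j+3] += m(j, k)   # diagonal block: sum_k m_{j,k}
            M[3*j:3*j+3, 3*k:3*k+3] -= m(j, k)   # off-diagonal block: -m_{j,k}
M *= 4.0

eig = np.sort(np.linalg.eigvalsh(M))
nu0_sq = 32.0 / 3.0
# expected: 0 with multiplicity 6, then nu0^2 (x2), 2 nu0^2 (x3), 4 nu0^2 (x1)
expected = np.sort(np.array([0.0] * 6 + [nu0_sq] * 2
                            + [2 * nu0_sq] * 3 + [4 * nu0_sq]))
```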
In order to find explicit formulae for the eigenvalues $\mu_{j}$, we notice that \[ \mathfrak{v}_{0}:=\left[\begin{array}{c} \gamma_{1}\\ \gamma_{2}\\ \gamma_{3}\\ \gamma_{4} \end{array}\right]\in V_{0},\;\;\mathfrak{v}_{1}:=\left[\begin{array}{c} -2\gamma_{1}\\ \gamma_{1}+\gamma_{2}\\ \gamma_{1}+\gamma_{3}\\ \gamma_{1}+\gamma_{4} \end{array}\right]\in V_{1},\;\;\mathfrak{v}_{2}:=\left[\begin{array}{c} \gamma_{2}-\gamma_{3}\\ \gamma_{1}-\gamma_{4}\\ \gamma_{4}-\gamma_{1}\\ \gamma_{3}-\gamma_{2} \end{array}\right]\in V_{2}, \] and by direct application of the matrix $M$ to the vectors $\mathfrak{v}_{j}$, $j=0,1,2$, we obtain that \begin{align*} \mu_{0} & =\frac{128}{3}r_{o}^{2}U^{\prime\prime}(r_{o})=4\nu_{0}^{2},\\ \mu_{1} & =\frac{64}{3}r_{o}^{2}U^{\prime\prime}(r_{o})=2\nu_{0}^{2},\\ \mu_{2} & =\frac{32}{3}r_{o}^{2}U^{\prime\prime}(r_{o})=\nu_{0}^{2}. \end{align*} Notice that $0<\mu_{2}<\mu_{1}<\mu_{0}~$. \section{Equivariant Bifurcation} In what follows, we are interested in finding non-trivial $T$-periodic solutions to \eqref{eq:mol} bifurcating from the orbit of equilibrium points $\mathfrak{G}(u_{o})$. By normalizing the period, i.e. by making the substitution $v(t):=u\left( \frac{T}{2\pi}t\right)$ in \eqref{eq:mol}, we obtain the following system: \begin{equation} \begin{cases} \ddot{v}=-\lambda^{2}\nabla V(v),\\ v(0)=v(2\pi),\;\;\dot{v}(0)=\dot{v}(2\pi), \end{cases} \label{eq:mol1} \end{equation} where $\lambda^{-1}=2\pi/T$ is the frequency. \subsection{Equivariant Gradient Map} Since $\mathscr V$ is an orthogonal $\mathfrak{G}$-representation, we can consider the first Sobolev space of $2\pi$-periodic functions from $\mathbb{R}$ to $\mathscr V$, i.e. \[ H_{2\pi}^{1}(\mathbb{R},\mathscr V):=\{u:\mathbb{R}\rightarrow\mathscr V\;:\;u(0)=u(2\pi),\;u|_{[0,2\pi]}\in H^{1}([0,2\pi];\mathscr V)\}, \] equipped with the inner product \[ \langle u,v\rangle:=\int_{0}^{2\pi}(\dot{u}(t)\bullet\dot{v}(t)+u(t)\bullet v(t))\,dt~.
\] Let $O(2)=SO(2)\cup\kappa SO(2)$ denote the group of $2\times2$ orthogonal matrices, where \[ \kappa=\left[\begin{array}{cc} 1 & 0\\ 0 & -1 \end{array}\right],\qquad\left[\begin{array}{cc} \cos\tau & -\sin\tau\\ \sin\tau & \cos\tau \end{array}\right]\in SO(2)~. \] It is convenient to identify a rotation with $e^{i\tau}\in S^{1}\subset\mathbb{C}$. Notice that $\kappa e^{i\tau}=e^{-i\tau}\kappa$, i.e. $\kappa$, as a linear transformation of $\mathbb{C}$ into itself, acts as complex conjugation. Clearly, the space $H_{2\pi}^{1}(\mathbb{R},\mathscr V)$ is an orthogonal Hilbert representation of \[ G:=\mathfrak{G}\times O(2),\qquad\mathfrak{G}=S_{4}\times O(3). \] Indeed, for $u\in H_{2\pi}^{1}(\mathbb{R},\mathscr V)$ and $(\sigma,A)\in\mathfrak{G}$ (see \eqref{eq:act1}) we have \begin{align} (\left( \sigma,A\right) u)(t) & =(\sigma,A)u(t),\label{eq:ac}\\ (e^{i\tau}u)(t) & =u(t+\tau),\nonumber\\ (\kappa u)(t) & =u(-t).\nonumber \end{align} It is useful to identify a $2\pi$-periodic function $u:\mathbb{R}\rightarrow\mathscr V$ with a function $\widetilde{u}:S^{1}\rightarrow\mathscr V$ via the map $\mathfrak{e}(\tau)=e^{i\tau}$, $\mathfrak{e}:\mathbb{R}\rightarrow S^{1}$. Using this identification, we will write $H^{1}(S^{1},\mathscr V)$ instead of $H_{2\pi}^{1}(\mathbb{R},\mathscr V)$. Put \[ \Omega:=\{u\in H^{1}(S^{1},\mathscr V):u(t)\in\Omega_{o}\}. \] Then, the system \eqref{eq:mol1} can be written as the following variational equation: \begin{equation} \nabla_{u}J(\lambda,u)=0,\quad(\lambda,u)\in\mathbb{R}\times\Omega, \label{eq:bif1} \end{equation} where $J:\mathbb{R}\times\Omega\rightarrow\mathbb{R}$ is defined by \begin{equation} J(\lambda,u):=\int_{0}^{2\pi}\left[ \frac{1}{2}|\dot{u}(t)|^{2}-\lambda^{2}V(u(t))\right] dt. \label{eq:var-1} \end{equation} Assume that $u_{o}\in\mathbb{R}^{12}$ is the equilibrium point of \eqref{eq:mol} described in Subsection \ref{sec:equilib}. Then clearly, $u_{o}$ is a critical point of $J$.
We are interested in finding non-stationary $2\pi$-periodic solutions bifurcating from $u_{o}$, i.e. non-constant solutions to system \eqref{eq:bif1}. We consider the $G$-orbit of $u_{o}$ in the space $H^{1}(S^{1},\mathscr V)$. We denote by $\mathcal{S}_{o}$ the slice to $G(u_{o})$ in $H^{1}(S^{1},\mathscr V)$. We will also denote by \[ \mathscr J:\mathbb{R}\times\left( \mathcal{S}_{o}\cap\Omega\right) \rightarrow\mathbb{R} \] the restriction of $J$ to the set $\mathcal{S}_{o}\cap\Omega$. Then clearly, $\mathscr J$ is $G_{u_{o}}$-invariant. Since the orbit $G(u_{o})$ is orthogonal to the slice $\mathcal{S}_{o}$, in a small tubular neighborhood of the orbit $G(u_{o})$ critical points of $\mathscr J$ are critical points of $J$ and, consequently, they are solutions to system \eqref{eq:bif1}. This property allows us to establish the Slice Criticality Principle (see Theorem \ref{thm:SCP}) and to compute the $G$-equivariant gradient degree of $J$ on this small tubular neighborhood, which will provide us with the full equivariant topological classification of all non-constant periodic orbits bifurcating from the equilibrium $u_{o}$. Consider the operator $L:H^{2}(S^{1};\mathscr V)\rightarrow L^{2}(S^{1};\mathscr V)$ given by $Lu=-\ddot{u}+u$, $u\in H^{2}(S^{1},\mathscr V)$. Then the inverse operator $L^{-1}$ exists and is bounded. Let $j:H^{2}(S^{1};\mathscr V)\rightarrow H^{1}(S^{1},\mathscr V)$ denote the natural embedding operator. Clearly, $j$ is a compact operator. Then, one can easily verify that \begin{equation} \nabla_{u}J(\lambda,u)=u-j\circ L^{-1}(\lambda^{2}\nabla V(u)+u), \label{eq:gradJ} \end{equation} where $u\in H^{1}(S^{1},\mathscr V)$. Consequently, the bifurcation problem \eqref{eq:bif1} can be written as $u-j\circ L^{-1}(\lambda^{2}\nabla V(u)+u)=0$. Moreover, we have \begin{equation} \nabla_{u}^{2}J(\lambda,u_{o})v=v-j\circ L^{-1}(\lambda^{2}\nabla^{2}V(u_{o})v+v)~, \label{eq:D2J} \end{equation} where $v\in H^{1}(S^{1},\mathscr V)$.
Consider the operator \begin{equation} \mathscr A(\lambda):=\nabla_{u}^{2}J(\lambda,u_{o})|_{\mathcal{S}_{o}}:\mathcal{S}_{o}\rightarrow\mathcal{S}_{o}. \label{eq:opA} \end{equation} Notice that \[ \nabla_{u}^{2}\mathscr J(\lambda,u_{o})=\mathscr A(\lambda), \] thus, by the implicit function theorem, $G(u_{o})$ is an isolated orbit of critical points of $J$ whenever $\mathscr A(\lambda)$ is an isomorphism. Therefore, if a point $(\lambda_{o},u_{o})$ is a bifurcation point for \eqref{eq:bif1}, then $\mathscr A(\lambda_{o})$ cannot be an isomorphism. In such a case we put \[ \Lambda:=\{\lambda>0:\mathscr A(\lambda)\text{ is not an isomorphism}\}~, \] and will call the set $\Lambda$ the \textit{critical set} for the trivial solution $u_{o}$. \subsection{Bifurcation Theorem} Consider the $S^{1}$-action on $H^{1}(S^{1},\mathscr V)$, where $S^{1}$ acts on functions by shifting the argument (see \eqref{eq:ac}). Then, $(H^{1}(S^{1},\mathscr V))^{S^{1}}$ is the space of constant functions, which can be identified with the space $\mathscr V$, i.e. \[ H^{1}(S^{1},\mathscr V)=\mathscr V\oplus\mathscr W,\quad\mathscr W:=\mathscr V^{\perp}. \] Then, the slice $\mathcal{S}_{o}$ in $H^{1}(S^{1},\mathscr V)$ to the orbit $G(u_{o})$ at $u_{o}$ is exactly \[ \mathcal{S}_{o}=S_{o}\oplus\mathscr W. \] Any $\lambda_{o}\in\Lambda$ satisfies the condition that $\mathscr A(\lambda_{o})|_{S_{o}}:S_{o}\rightarrow S_{o}$ is an isomorphism, since the eigenvalues $\mu_{j}\neq0$ for $j=0,1,2$, which leads to: \begin{theorem} \label{th:bif1} Consider the bifurcation system \eqref{eq:bif1} and assume that $\lambda_{o}\in\Lambda$ is isolated in the critical set $\Lambda$, i.e. there exist $\lambda_{-}<\lambda_{o}<\lambda_{+}$ such that $[\lambda_{-},\lambda_{+}]\cap\Lambda=\{\lambda_{o}\}$.
Define \[ \omega_{G}(\lambda_{o}):=\nabla_{G_{u_{o}}}\text{\textrm{-deg}}\Big(\mathscr A(\lambda_{-}),B_{1}(0)\Big)-\nabla_{G_{u_{o}}}\text{\textrm{-deg}}\Big(\mathscr A(\lambda_{+}),B_{1}(0)\Big), \] where $B_{1}(0)$ stands for the open unit ball in $\mathscr H$. If \[ \omega_{G}(\lambda_{o})=n_{1}(H_{1})+n_{2}(H_{2})+\dots+n_{m}(H_{m}) \] is non-zero, i.e. $n_{j}\not =0$ for some $j=1,2,\dots,m$, then there exists a bifurcating branch of nontrivial solutions to \eqref{eq:bif1} from the orbit $\{\lambda_{o}\}\times G(u_{o})$ with symmetries at least $(H_{j})$. \end{theorem} Consider the $S^{1}$-isotypic decomposition of $\mathscr W$, i.e. \[ \mathscr W=\overline{\bigoplus_{l=1}^{\infty}\mathscr W_{l}},\quad\mathscr W_{l}:=\{\cos(l\cdot)\mathfrak{a}+\sin(l\cdot)\mathfrak{b}:\mathfrak{a},\,\mathfrak{b}\in\mathscr V\}. \] In a standard way, the space $\mathscr W_{l}$, $l=1,2,\dots$, can be naturally identified with the space $\mathscr V^{\mathbb{C}}$ on which $S^{1}$ acts by $l$-folding, \[ \mathscr W_{l}=\{e^{il\cdot}z:z\in\mathscr V^{\mathbb{C}}\}. \] Since the operator $\mathscr A(\lambda)$ is $G_{u_{o}}$-equivariant with \[ G_{u_{o}}=\tilde{S}_{4}\times O(2), \] it is also $S^{1}$-equivariant and thus $\mathscr A(\lambda)(\mathscr W_{l})\subset\mathscr W_{l}$. Using the $\tilde{S}_{4}$-isotypic decomposition of $\mathscr V^{\mathbb{C}}$, we have the $G_{u_{o}}$-isotypic decomposition \[ \mathscr W_{l}=W_{0,l}\oplus W_{1,l}\oplus W_{2,l}\oplus W_{3,l}~,\qquad W_{j,l}=\mathcal{W}_{j,l}~. \] Moreover, we have \[ \mathscr A(\lambda)|_{W_{j,l}}=\left( 1-\frac{\lambda^{2}\mu_{j}+1}{l^{2}+1}\right) \text{\textrm{Id\,}}~, \] which implies that $\lambda_{o}\in\Lambda$ if and only if $\lambda_{o}^{2}=l^{2}/\mu_{j}$ for some $l=1,2,3,\dots$ and $j=0,1,2$.
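As a quick numerical sanity check (not part of the argument), the equivalence between the vanishing of the coefficient of $\mathscr A(\lambda)|_{W_{j,l}}$ and the condition $\lambda^{2}=l^{2}/\mu_{j}$ can be spot-checked in a few lines of Python; the values of $\mu_{j}$ below are placeholders with the ratios $4:2:1$ computed earlier (i.e. taking $\nu_{0}=1$).

```python
import math

# Spot-check: 1 - (lambda^2 * mu_j + 1) / (l^2 + 1) = 0 exactly when
# lambda^2 = l^2 / mu_j.  The mu_j are placeholder values with ratios 4:2:1.
mu = {0: 4.0, 1: 2.0, 2: 1.0}

def coefficient(lam, j, l):
    # coefficient of the identity in A(lambda) restricted to W_{j,l}
    return 1.0 - (lam**2 * mu[j] + 1.0) / (l**2 + 1.0)

for j in mu:
    for l in range(1, 6):
        lam = l / math.sqrt(mu[j])
        assert abs(coefficient(lam, j, l)) < 1e-12        # vanishes on Lambda
        assert abs(coefficient(1.01 * lam, j, l)) > 1e-4  # nonzero off Lambda
print("critical frequencies confirmed")
```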
Then the critical set $\Lambda$ for the equilibrium $u_{o}$ of the system \eqref{eq:mol} is \[ \Lambda:=\left\{ \frac{l}{\sqrt{\mu_{j}}}:j=0,1,2,\quad l=1,2,3,\dots\right\} , \] and we can identify the critical numbers $\lambda\in\Lambda$ as \[ \lambda_{j,l}=\frac{l}{\sqrt{\mu_{j}}}~. \] The critical numbers are not uniquely identified by the indices $(j,l)$, due to resonances. Indeed, let us list the first critical numbers from $\Lambda$: \[ \lambda_{0,1}<\lambda_{1,1}<\lambda_{2,1}=\lambda_{0,2}<\lambda_{1,2}<\lambda_{2,2}=\lambda_{0,4}~. \] \begin{definition} For simplicity, hereafter we denote by $S_{4}$ the isotropy group $\mathfrak{G}_{u_{0}}=\tilde{S}_{4}$, i.e. with this notation we have that \[ G_{u_{0}}=\mathfrak{G}_{u_{0}}\times O(2)=S_{4}\times O(2)~. \] \end{definition} From the computation of the gradient degree in (\ref{eq:lin-GdegGrad}) with $G_{u_{0}}$, we obtain for $\lambda\notin\Lambda$ that \begin{equation} \nabla_{G_{u_{0}}}\text{\textrm{-deg}}\Big(\mathscr A(\lambda),B_{1}(0)\Big)=\prod_{\left\{ \left( j,l\right) \in\mathbb{N}^{2}:\lambda_{j,l}<\lambda\right\} }\nabla\text{\textrm{-deg}}_{\mathcal{W}_{j,l}}~.\label{Pre} \end{equation} For each critical number $\lambda_{j,l}$ we choose two numbers $\lambda_{-}<\lambda_{j,l}<\lambda_{+}$ such that $[\lambda_{-},\lambda_{+}]\cap\Lambda=\{\lambda_{j,l}\}$.
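The resonances $\lambda_{2,1}=\lambda_{0,2}$ and $\lambda_{2,2}=\lambda_{0,4}$ can be detected by a short enumeration of $\Lambda$; again $\mu_{0}=4$, $\mu_{1}=2$, $\mu_{2}=1$ are placeholder values corresponding to $\nu_{0}=1$ (only the ratios matter).

```python
import math

# Enumerate the critical set Lambda = { l / sqrt(mu_j) } and group together
# equal critical numbers (resonances); mu_j placeholders with ratios 4:2:1.
mu = {0: 4.0, 1: 2.0, 2: 1.0}
crit = sorted((l / math.sqrt(mu[j]), j, l) for j in mu for l in range(1, 5))

groups = []  # list of (critical number, list of index pairs (j, l))
for lam, j, l in crit:
    if groups and abs(groups[-1][0] - lam) < 1e-12:
        groups[-1][1].append((j, l))
    else:
        groups.append((lam, [(j, l)]))

for lam, idx in groups:
    print(round(lam, 4), idx)
```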
Calculating the difference of the gradient degrees at $\lambda_{+}$ and $\lambda_{-}$ using (\ref{Pre}), we obtain that the equivariant invariants are given by \begin{align*} \omega_{G}(\lambda_{0,1}) & =\nabla\text{\textrm{-deg}}_{\mathcal{W}_{0,1}}-(S_{4}\times O(2)),\\ \omega_{G}(\lambda_{1,1}) & =\nabla\text{\textrm{-deg}}_{\mathcal{W}_{0,1}}\ast\Big(\nabla\text{\textrm{-deg}}_{\mathcal{W}_{1,1}}-(S_{4}\times O(2))\Big),\\ \omega_{G}(\lambda_{2,1}) & =\nabla\text{\textrm{-deg}}_{\mathcal{W}_{0,1}}\ast\nabla\text{\textrm{-deg}}_{\mathcal{W}_{1,1}}\ast\Big(\nabla\text{\textrm{-deg}}_{\mathcal{W}_{2,1}}\ast\nabla\text{\textrm{-deg}}_{\mathcal{W}_{0,2}}-(S_{4}\times O(2))\Big). \end{align*} \subsection{Computation of the Gradient Degree} Given two groups $G_{1}$ and $G_{2}$, we consider the product group $G_{1}\times G_{2}$. A well-known result (see \cite{DKY,Goursat}) provides a description of the subgroups of the product group $G_{1}\times G_{2}$. Namely, for any subgroup $\mathscr H$ of the product group $G_{1}\times G_{2}$ there exist subgroups $H\leq G_{1}$ and $K\leq G_{2}$, a group $L$, and two epimorphisms $\varphi:H\rightarrow L$ and $\psi:K\rightarrow L$ such that \begin{equation} \mathscr H=\{(h,k)\in H\times K:\varphi(h)=\psi(k)\}. \end{equation} In this case, we will use the notation \[ \mathscr H=:H\prescript{\varphi}{}\times_{L}^{\psi}K, \] and the group $H\prescript{\varphi}{}\times_{L}^{\psi}K$ will be called an \textit{amalgamated} subgroup of $G_{1}\times G_{2}$. Therefore, any closed subgroup $\mathscr H$ of $S_{4}\times O(2)$ is an amalgamated subgroup $H\prescript{\varphi}{}\times_{L}^{\psi}K$, where $H\leq S_{4}$ and $K\leq O(2)$. In order to make the amalgamated subgroup notation simpler and self-contained, we will assume that \[ L=K/\ker(\psi), \] so $\psi:K\rightarrow L$ is evidently the natural projection and there is no need to indicate it.
On the other hand, the group $L$ can be naturally identified with a finite subgroup of $O(2)$, being either $D_{n}$ or $\mathbb{Z}_{n}$, $n\geq1$. Since we are interested in describing conjugacy classes of $\mathscr H$, we can identify the epimorphism $\varphi:H\rightarrow L$ by indicating \[ Z=\text{Ker\thinspace}(\varphi)\quad\text{ and }\quad R=\varphi^{-1}(\langle r\rangle), \] where $r$ is the rotation generator in $L$ and $\langle r\rangle$ is the cyclic subgroup generated by $r$. Then, instead of using the notation $H^{\varphi}\times_{L}^{\psi}K$ we will write \begin{equation} \mathscr H=:H{\prescript{Z}{}\times_{L}^{R}}K~, \label{eq:amalg} \end{equation} where $H$, $Z$ and $R$ are subgroups of $S_{4}$ identified by \begin{align*} V_{4} & =\{(1),(12)(34),(13)(24),(14)(23)\}~,\\ D_{4} & =\{(1),(1324),(12)(34),(1423),(34),(14)(23),(12),(13)(24)\}~,\\ Z_{4} & =\{(1),(1324),(12)(34),(1423)\}\,,\\ D_{3} & =\{(1),(123),(132),(12),(23),(13)\}~,\\ D_{2} & =\{(1),(12)(34),(12),(34)\}~,\\ D_{1} & =\{(1),(12)\}~. \end{align*} In the case when all the epimorphisms $\varphi$ with kernel $Z$ are conjugate, there is no need to use the symbol $R$ in \eqref{eq:amalg}, so we will simply write $\mathscr H=H{\prescript{Z}{}\times_{L}K}$. In addition, in the case when all epimorphisms $\varphi$ from $H$ to $L$ are conjugate, we can also omit the symbol $Z$, i.e. we will write $\mathscr H=H\times_{L}K$. The notation explained in this section is useful to obtain the classification of all conjugacy classes $(\mathscr H)$ of closed subgroups in $S_{4}\times O(2)$. Let us point out that, to obtain a complete equivariant classification of the bifurcating branches of nontrivial solutions, the full topological invariant $\omega_{G}(\lambda_{j_{o},1})\in U(I\times O(2))$ should be considered. In particular, although it is not the case here, the invariant $\omega_{G}(\lambda_{j_{o},1})$ may contain maximal orbit types $(H)$ with infinite Weyl group $W(H)$.
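Goursat's description of the subgroups of a product can be illustrated by brute force on a toy example. The sketch below takes $G_{1}=G_{2}=\mathbb{Z}_{2}$, so that $G_{1}\times G_{2}$ is the Klein four-group; it is only an illustration, not part of the classification used in the paper.

```python
from itertools import combinations, product

# Brute-force check of Goursat's lemma on G1 = G2 = Z_2:
# every subgroup of Z_2 x Z_2 is an amalgam determined by H <= G1, K <= G2,
# a common quotient L and epimorphisms phi, psi.  Goursat data predicts
# exactly 5 subgroups: {e}, Z2 x {e}, {e} x Z2, the diagonal, and the whole.
Z2 = (0, 1)  # Z_2 with addition mod 2

def is_subgroup(subset):
    s = set(subset)
    return (0, 0) in s and all(
        ((a + c) % 2, (b + d) % 2) in s for (a, b) in s for (c, d) in s
    )

elements = list(product(Z2, Z2))
subgroups = set()
for r in range(1, len(elements) + 1):
    for subset in combinations(elements, r):
        if is_subgroup(subset):
            subgroups.add(frozenset(subset))

diagonal = frozenset({(0, 0), (1, 1)})  # amalgam with H = K = L = Z_2
assert len(subgroups) == 5
assert diagonal in subgroups
print(len(subgroups), "subgroups, as predicted by Goursat's lemma")
```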
With the use of GAP programming (see \cite{Pin}) one should be able to establish the exact value of the invariant $\omega_{G}(\lambda_{j_{o},1})$, but at this moment this requires additional computer programming. Therefore, in order to simplify the computations, we consider its truncation to $A(I\times O(2))$, given by \[ \widetilde{\omega}_{G}(\lambda_{j_{o},1}):=\pi_{0}\Big(\omega_{G}(\lambda_{j_{o},1})\Big), \] where $\pi_{0}:U(G)\rightarrow A(G)$ is a ring homomorphism. Other more complex molecular structures may require the full value of the invariant $\omega_{G}(\lambda_{j_{o}})$ in the Euler ring $U(G)$, so we should keep in mind that it may be necessary to use the full $G$-equivariant gradient degree for their analysis. We can use GAP programming (see \cite{Pin}) to compute the basic degrees truncated to $A(G)$: \begin{align*} \mathrm{Deg}_{\mathcal{W}_{0,l}}=\; & -{\color{red}({S_{4}}\prescript{}{}\times_{{}}D_{l})}+{({S_{4}}\prescript{}{}\times_{{}}O(2))},\\ \mathrm{Deg}_{\mathcal{W}_{1,l}}=\; & -{\color{red}({D_{4}}\prescript{{D_2}}{}\times_{\mathbb{Z}_{2}}D_{2l})}-{\color{red}({D_{2}}\prescript{{D_1}}{}\times_{\mathbb{Z}_{2}}D_{2l})}-{\color{red}({D_{4}}\prescript{{\bz_1}}{}\times_{D_{4}}D_{4l})}-{\color{red}({D_{3}}\prescript{}{}\times_{{}}D_{l})}\\ & -{\color{red}({D_{3}}\prescript{{\bz_1}}{}\times_{D_{3}}D_{3l})}+2{({D_{1}}\prescript{}{}\times_{{}}D_{l})}-{({\mathbb{Z}_{1}}\prescript{}{}\times_{{}}D_{l})}+{({\mathbb{Z}_{2}}\prescript{{\bz_1}}{}\times_{\mathbb{Z}_{2}}D_{2l})}\\ & +{({D_{2}}\prescript{{\bz_1}}{{\bz_2}}\times_{D_{2}}D_{2l})}+{({V_{4}}\prescript{{\bz_1}}{}\times_{D_{2}}D_{2l})}+{({D_{2}}\prescript{{D_1}}{}\times_{D_{1}}D_{l})}-{({\mathbb{Z}_{2}}\prescript{{\bz_1}}{}\times_{D_{1}}D_{l})}\\ & +{({S_{4}}\prescript{}{}\times_{{}}O(2))},\\ \mathrm{Deg}_{\mathcal{W}_{2,l}}=\; & -{\color{red}({S_{4}}\prescript{{V_4}}{}\times_{D_{3}}D_{3l})}-{({D_{4}}\prescript{}{}\times_{{}}D_{l})}+{({V_{4}}\prescript{}{}\times_{{}}D_{l})}-{({D_{4}}
\prescript{{V_4}}{}\times_{\mathbb{Z}_{2}}D_{2l})}\\ & +2{({D_{4}}\prescript{{V_4}}{}\times_{D_{1}}D_{l})}+{({S_{4}}\prescript{}{}\times_{{}}O(2))},\\ \mathrm{Deg}_{\mathcal{W}_{3,l}}=\; & -{\color{red}({D_{4}}\prescript{{\bz_4}}{}\times_{\mathbb{Z}_{2}}D_{2l})}-{\color{red}({D_{4}}\prescript{{\bz_1}}{}\times_{D_{4}}D_{4l})}-{\color{red}({D_{2}}\prescript{{D_1}}{}\times_{\mathbb{Z}_{2}}D_{2l})}+2{({D_{1}}\prescript{{\bz_1}}{}\times_{\mathbb{Z}_{2}}D_{2l})}\\ & +{({\mathbb{Z}_{2}}\prescript{{\bz_1}}{}\times_{\mathbb{Z}_{2}}D_{2l})}-{({\mathbb{Z}_{1}}\prescript{}{}\times_{{}}D_{l})}-{({D_{3}}\prescript{{\bz_3}}{}\times_{\mathbb{Z}_{2}}D_{2l})}-{({D_{3}}\prescript{{\bz_1}}{}\times_{D_{3}}D_{3l})}\\ & +{({D_{2}}\prescript{{\bz_1}}{{D_1}}\times_{D_{2}}D_{2l})}+{({D_{2}}\prescript{{\bz_1}}{{\bz_2}}\times_{D_{2}}D_{2l})}+{({V_{4}}\prescript{{\bz_1}}{}\times_{D_{2}}D_{2l})}-{({\mathbb{Z}_{2}}\prescript{{\bz_1}}{}\times_{D_{1}}D_{l})}\\ & +{({S_{4}}\prescript{}{}\times_{{}}O(2))},\\ \mathrm{Deg}_{\mathcal{W}_{4,l}}=\; & -{\color{red}({S_{4}}\prescript{{A_4}}{}\times_{\mathbb{Z}_{2}}D_{2l})}+{({S_{4}}\prescript{}{}\times_{{}}O(2))}. \end{align*} Next, we use GAP programming (see \cite{Pin}) and the product $\ast$ of the Euler ring $U(\Gamma)$ to compute the equivariant invariants truncated to $A(I\times O(2))$, where the maximal isotropy classes are colored red: \begin{align*} \widetilde{\omega}_{G}(\lambda_{0,1})=\; & -{\color{red}({S_{4}}\prescript{}{}\times_{{}}D_{1})},\\ \widetilde{\omega}_{G}(\lambda_{1,1})=\; & -{\color{red}({D_{4}}\prescript{{D_2}}{}\times_{\mathbb{Z}_{2}}D_{2})}-{\color{red}({D_{2}}\prescript{{D_1}}{}\times_{\mathbb{Z}_{2}}D_{2})}-{\color{red}({D_{4}}\prescript{{\bz_1}}{}\times_{D_{4}}D_{4})}+{\color{red}({D_{3}}\prescript{}{}\times_{{}}D_{1})}\\ & -{\color{red}({D_{3}}\prescript{{\bz_1}}{}\times_{D_{3}}D_{3})}+{({D_{2}}\prescript{}{}\times_{{}}D_{1})}-{({D_{1}}\prescript{}{}\times_{{}}D_{1}
)}+{({\mathbb{Z}_{2}}\prescript{{\bz_1}}{}\times_{\mathbb{Z}_{2}}D_{2})}\\ & +{({D_{2}}\prescript{{\bz_1}}{{\bz_2}}\times_{D_{2}}D_{2})}+{({V_{4}}\prescript{{\bz_1}}{}\times_{D_{2}}D_{2})}+{({D_{4}}\prescript{{D_2}}{}\times_{D_{1}}D_{1})}+{({D_{1}}\prescript{{\bz_1}}{}\times_{D_{1}}D_{1})}\\ & -{({\mathbb{Z}_{2}}\prescript{{\bz_1}}{}\times_{D_{1}}D_{1})},\\ \widetilde{\omega}_{G}(\lambda_{2,1})=\; & -{\color{red}({S_{4}}\prescript{{V_4}}{}\times_{D_{3}}D_{3})}-{\color{red}({S_{4}}\prescript{}{}\times_{{}}D_{2})}+{({D_{4}}\prescript{{V_4}}{}\times_{\mathbb{Z}_{2}}D_{2})}-{({\mathbb{Z}_{4}}\prescript{{\bz_2}}{}\times_{\mathbb{Z}_{2}}D_{2})}\\ & +2{({D_{2}}\prescript{{D_1}}{}\times_{\mathbb{Z}_{2}}D_{2})}-{({D_{1}}\prescript{{\bz_1}}{}\times_{\mathbb{Z}_{2}}D_{2})}-2{({\mathbb{Z}_{2}}\prescript{{\bz_1}}{}\times_{\mathbb{Z}_{2}}D_{2})}+2{({S_{4}}\prescript{}{}\times_{{}}D_{1})}\\ & -{({D_{4}}\prescript{}{}\times_{{}}D_{1})}-2{({D_{3}}\prescript{}{}\times_{{}}D_{1})}-{({D_{2}}\prescript{}{}\times_{{}}D_{1})}+{({D_{1}}\prescript{}{}\times_{{}}D_{1})}\\ & +{({\mathbb{Z}_{1}}\prescript{}{}\times_{{}}D_{1})}+2{({D_{4}}\prescript{{D_2}}{}\times_{\mathbb{Z}_{2}}D_{2})}+2{({D_{3}}\prescript{{\bz_1}}{}\times_{D_{3}}D_{3})}-{({D_{4}}\prescript{{\bz_2}}{{\bz_4}}\times_{D_{2}}D_{2})}\\ & -{({D_{2}}\prescript{{\bz_1}}{{D_1}}\times_{D_{2}}D_{2})}-{({D_{2}}\prescript{{\bz_1}}{{\bz_2}}\times_{D_{2}}D_{2})}-{({V_{4}}\prescript{{\bz_1}}{}\times_{D_{2}}D_{2})}-{({D_{4}}\prescript{{D_2}}{}\times_{D_{1}}D_{1})}\\ & +{({D_{4}}\prescript{{V_4}}{}\times_{D_{1}}D_{1})}-{({D_{2}}\prescript{{D_1}}{}\times_{D_{1}}D_{1})}+3{({\mathbb{Z}_{2}}\prescript{{\bz_1}}{}\times_{D_{1}}D_{1})}. \end{align*} \section{Description of the Symmetries} The invariants $\omega_{G}(\lambda_{j,1})$ give the bifurcation of periodic solutions for each of the maximal groups. However, we only know that a group is maximal if it is maximal in a certain isotypical component of a Fourier mode.
Since the bifurcation from $\lambda_{2,1}=\lambda_{0,2}$ with maximal group ${S_{4}\prescript{S_4}{}\times_{\mathbb{Z}_{1}}^{{}}D_{2}}$ is not independent of the minimal-period bifurcation from $\lambda_{0,1}$ with maximal group ${S_{4}\prescript{S_4}{}\times_{\mathbb{Z}_{1}}^{{}}D_{1}}$, we cannot conclude that these two bifurcations are different from each other. We can conclude that the other 7 maximal groups in the invariants $\omega_{G}(\lambda_{j,1})$ for $j=0,1,2$ give different global families of periodic solutions with period $T=2\pi\lambda_{j,1}l_{o}$ (for some $l_{o}\in\mathbb{N}$), where $(\lambda_{j,1}l_{o})^{-1}$ is the limit frequency. Next we describe the symmetries of the solutions for these maximal isotropy groups. Notice that we have identified the elements of $\tilde{S}_{4}$ with $S_{4}$, i.e. an element $\sigma\in S_{4}$ in a maximal group acts as \[ \sigma u_{j}=A_{\sigma}u_{\sigma(j)}~. \] \subsection{Families with Frequency $\sqrt{\mu_{0}}$} The tetrahedron configuration has one global family of periodic solutions starting with frequency $\lambda_{0,1}^{-1}=\sqrt{\mu_{0}}$. This family has symmetries \[ S_{4}\prescript{S_4}{}\times_{\mathbb{Z}_{1}}^{{}}D_{1}. \] This group is generated by $S_{4}$ and $\kappa\in D_{1}$. The symmetry $S_{4}$ implies that the configuration is a regular tetrahedron at any time. Moreover, the group $D_{1}$ implies that \[ u(t)=\kappa u(t)=u(-t), \] i.e. the periodic solution is a brake orbit, which means that the velocities $\dot{u}$ of all the molecules are zero at the times $t=0,\pi$, \[ \dot{u}(0)=\dot{u}(\pi)=0. \] Therefore, these solutions consist of a regular tetrahedron that expands and contracts in a periodic motion, each particle moving along an orbit similar to a line segment. \subsection{Families with Frequency $\sqrt{\mu_{1}}$} The tetrahedron configuration has five different families of periodic solutions starting with frequency $\lambda_{1,1}^{-1}=\sqrt{\mu_{1}}$, each family with a different group of symmetries.
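The brake-orbit property used above is elementary to check numerically: a $\kappa$-symmetric loop $u(-t)=u(t)$ is a cosine Fourier series, and its velocity then vanishes automatically at $t=0$ and $t=\pi$. The coefficients in the sketch below are arbitrary placeholders.

```python
import math
import random

# Toy check of the brake-orbit property: if u(-t) = u(t) (kappa-symmetry),
# then u is a cosine series and u'(0) = u'(pi) = 0 automatically.
random.seed(0)
a = [random.uniform(-1, 1) for _ in range(8)]  # arbitrary cosine coefficients

def u(t):
    return sum(ak * math.cos(k * t) for k, ak in enumerate(a))

def du(t):
    return sum(-k * ak * math.sin(k * t) for k, ak in enumerate(a))

assert abs(u(0.37) - u(-0.37)) < 1e-12               # time-reversal symmetry
assert abs(du(0.0)) < 1e-12 and abs(du(math.pi)) < 1e-12
print("brake orbit: zero velocity at t = 0 and t = pi")
```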
The group \[ {D_{4}\prescript{D_2}{}\times_{\mathbb{Z}_{2}}^{{}}D_{2}} \] is generated by the elements $\kappa\in O(2)$, $(12),(34)\in S_{4}$ and $((13)(24),e^{i\pi})\in S_{4}\times O(2)$. The element $\kappa$ implies that the periodic solution is a brake orbit, as in the previous description. The matrix $A_{(12)}$ is the inversion with respect to the plane that contains the points $\gamma_{3}$, $\gamma_{4}$ and the middle point of $\gamma_{1}$ and $\gamma_{2}$, and it interchanges $\gamma_{1}$ with $\gamma_{2}$. Then, the symmetry $(12)$ implies that $u_{1}$ is the inversion of $u_{2}$, and similarly $(34)$ implies that $u_{3}$ is the inversion of $u_{4}$. The element $A_{(13)(24)}$ is a rotation by $\pi$ that interchanges $\gamma_{1}$ with $\gamma_{3}$ and $\gamma_{2}$ with $\gamma_{4}$. Therefore, $u_{1}$ is the $\pi$-rotation and $\pi$-phase shift of $u_{3}(t)$. With these symmetries, all the orbits are determined by the position of only one of the particles, $u_{1}$. The group \[ {D_{2}\prescript{D_1}{}\times_{\mathbb{Z}_{2}}^{{}}D_{2}} \] is generated by the elements $\kappa\in O(2)$, $(12)\in S_{4}$ and $((34),e^{i\pi})\in S_{4}\times O(2)$. The symmetry $\kappa$ implies that the solution is a brake orbit, $(12)$ that $u_{1}$ is the inversion of $u_{2}$, and $((34),e^{i\pi})$ that $u_{3}$ is the $\pi$-rotation and $\pi$-phase shift of $u_{4}(t)$. In this case, the orbit of $u_{1}$ determines $u_{2}$ and $u_{4}$ determines $u_{3}$, but there is no relation between these two pairs of particles. The group \[ {D_{4}\prescript{\mathbb{Z}_1}{}\times_{D_{4}}^{{}}D_{4}} \] is generated by $\left( (12),\kappa\right) $ and $\left( (1324),e^{i\pi/2}\right) $ in $S_{4}\times O(2)$. The element $\left( (12),\kappa\right) $ implies that $u_{1}(t)$ is the inversion of $u_{2}(-t)$. In this case, the orbit is not a brake orbit, which means it is similar to a circle. The matrix $A_{(1324)}$ is a rotoreflection by $\pi/2$.
Then the symmetry $\left( (1324),e^{i\pi/2}\right) $ implies that the particles are related by applying a $\pi/2$-rotoreflection and, at the same time, a temporal phase shift by $\pi/2$. The group \[ {{D_{3}}\prescript{}{}\times_{{}}D_{1}} \] is generated by $\kappa$, which implies that the solution is a brake orbit, and the group $D_{3}$, which implies that the positions $u_{1}$, $u_{2}$ and $u_{3}$ always form a triangle. In this case, the position $u_{4}$ follows a movement that counterbalances the triangle formed by these elements. The group \[ {{D_{3}}\prescript{{\bz_1}}{}\times_{D_{3}}D_{3}} \] is generated by the elements $((123),e^{i2\pi/3})$ and $((12),\kappa)$. The element\break$((123),e^{i2\pi/3})$ implies that $u_{1}(t)=u_{2}(t+2\pi/3)=u_{3}(t+4\pi/3)$ and therefore, the movement of these three elements is a (discrete) rotating wave. In addition, the element $\left( (12),\kappa\right) $ implies that this rotating wave is invariant under an inversion in time, $u_{1}(t)=A_{(12)}u_{2}(-t)=A_{(12)}u_{1}(-t-2\pi/3)$. \subsection{Families with Frequency $\sqrt{\mu_{2}}$} The tetrahedron configuration has one family of periodic solutions starting with frequency $\lambda_{2,1}^{-1}=\sqrt{\mu_{2}}$ with symmetries ${S_{4}\prescript{V_4}{}\times_{D_{3}}^{{}}D_{3}}$. This group is generated by $V_{4}$ and $\left( (123),e^{i2\pi/3}\right) ,((12),\kappa)\in S_{4}\times O(2)$. The symmetries in $V_{4}$ place the coordinates $u(t)$ in the form of a non-regular tetrahedron with two axes of symmetry at any time. The element $\left( (123),e^{i2\pi/3}\right) $ means that $u_{1}$, $u_{2}$ and $u_{3}$ are related by a rotation of $2\pi/3$ and a phase shift of $2\pi/3$. \section{Appendix: \textbf{Equivariant Gradient Degree}} \subsection{Group Actions} In what follows $G$ always stands for a compact Lie group and all subgroups of $G$ are assumed to be closed \cite{Bre, Kawa}.
For a subgroup $H\subset G$, denote by $N\left( H\right) $ the \textit{normalizer} of $H$ in $G$, and by $W\left( H\right) =N\left( H\right) /H$ the \textit{Weyl group} of $H$ in $G$. In the case when we are dealing with different Lie groups, we also write $N_{G}\left( H\right) $ (resp. $W_{G}\left( H\right) $) instead of $N\left( H\right) $ (resp. $W\left( H\right) $). We denote by $\left( H\right) $ the conjugacy class of $H$ in $G$ and use the notations: \begin{align*} \Phi\left( G\right) & :=\left\{ \left( H\right) :H\;\;\text{is a subgroup of }\;G\right\} ,\\ \Phi_{n}\left( G\right) & :=\left\{ \left( H\right) \in\Phi\left( G\right) :\text{\textrm{dim\thinspace}}W\left( H\right) =n\right\} . \end{align*} The set $\Phi\left( G\right) $ has a natural partial order defined by \begin{equation} \left( H\right) \leq\left( K\right) \;\;\Longleftrightarrow\;\;\exists_{g\in G}\;\;gHg^{-1}\subset K. \label{eq:partial} \end{equation} For a $G$-space $X$ and $x\in X$, the subgroup $G_{x}:=\left\{ g\in G:\;gx=x\right\} $ is called the \textit{isotropy} of $x$; $G\left( x\right) :=\left\{ gx:\;g\in G\right\} $ the \textit{orbit} of $x$, and the conjugacy class $\left( G_{x}\right) $ is called the \textit{orbit type} of $x$. Also, for a subgroup $H\subset G$, we use \[ X^{H}:=\left\{ x\in X:\;G_{x}\supset H\right\} \] for the fixed point space of $H$. The orbit space for a $G$-space $X$ will be denoted by $X/G$. Any compact Lie group admits only countably many non-equivalent real (resp. complex) irreducible representations. Given a compact Lie group $G$, we will assume that we know a complete list of all its real (resp. complex) irreducible representations, denoted $\mathcal{V}_{i}$, $i=0,$ $1,$ $\ldots$ (resp. $\mathcal{W}_{j}$, $j=0,$ $1,$ $\ldots$). We refer to \cite{AED} for examples of such lists and the related notation. Let $V$ (resp. $W$) be a finite-dimensional real (resp. complex) $G$-representation (without loss of generality, $V$ (resp.
$W$) can be assumed to be orthogonal (resp. unitary)). Then, $V$ (resp. $W$) decomposes into the direct sum of $G$-invariant subspaces \begin{equation} V=V_{0}\oplus V_{1}\oplus\dots\oplus V_{r}\text{,} \label{eq:Giso} \end{equation} (resp. $W=W_{0}\oplus W_{1}\oplus\dots\oplus W_{s}$), called the $G$\textit{-isotypical decomposition} of $V$ (resp. $W$), where each isotypical component $V_{i}$ (resp. $W_{j}$) is \emph{modeled} on the irreducible $G$-representation $\mathcal{V}_{i}$, $i=0,$ $1,$ $\dots,$ $r$, (resp. $\mathcal{W}_{j}$, $j=0,$ $1,$ $\dots,$ $s$), i.e. $V_{i}$ (resp. $W_{j}$) contains all the irreducible subrepresentations of $V$ (resp. $W$) which are equivalent to $\mathcal{V}_{i}$ (resp. $\mathcal{W}_{j}$). \subsection{Euler Ring} Let \[ U\left( G\right) :={\mathbb{Z}}\left[ \Phi\left( G\right) \right] \] denote the free ${\mathbb{Z}}$-module generated by $\Phi(G)$. \begin{definition} \label{def:EulerRing} $($cf. \cite{tD}$)$ Define a ring multiplication on generators $\left( H\right) $, $\left( K\right) \in\Phi\left( G\right) $ as follows: \begin{equation} \left( H\right) \ast\left( K\right) =\sum_{\left( L\right) \in\Phi\left( G\right) }n_{L}\left( L\right) ,\label{eq:Euler-mult} \end{equation} where \begin{equation} n_{L}:=\chi_{c}\left( \left( G/H\times G/K\right) _{L}/N\left( L\right) \right) \label{eq:Euler-coeff} \end{equation} with $\chi_{c}$ the Euler characteristic taken in Alexander-Spanier cohomology with compact support (cf. \cite{Spa}). The $\mathbb{Z}$-module $U\left( G\right) $ equipped with the multiplication \eqref{eq:Euler-mult}, \eqref{eq:Euler-coeff} is a ring called the \textit{Euler ring} of the group $G$ (cf. \cite{BtD}). \end{definition} The ${\mathbb{Z}}$-module $A\left( G\right) :={\mathbb{Z}}\left[ \Phi_{0}\left( G\right) \right] $ equipped with a similar multiplication as in $U\left( G\right) $ but restricted to generators from $\Phi_{0}\left( G\right) $, is called the \emph{Burnside ring}, i.e.
\[ \left( H\right) \cdot\left( K\right) =\sum_{\left( L\right) }n_{L}\left( L\right) ,\qquad\left( H\right) ,\text{ }\left( K\right) ,\text{ }\left( L\right) \in\Phi_{0}\left( G\right) , \] where $n_{L}:=\chi\left( \left( G/H\times G/K\right) _{L}/N\left( L\right) \right) =\left\vert \left( G/H\times G/K\right) _{L}/N\left( L\right) \right\vert $ (here $\chi$ stands for the usual Euler characteristic). In this case, we have \begin{equation} n_{L}=\frac{n\left( L,K\right) \left\vert W\left( K\right) \right\vert n\left( L,H\right) \left\vert W\left( H\right) \right\vert -\sum_{\left( I\right) >\left( L\right) }n\left( L,I\right) n_{I}\left\vert W\left( I\right) \right\vert }{\left\vert W\left( L\right) \right\vert }, \label{eq:rec-coef} \end{equation} where \[ n(L,K)=\left\vert \frac{N(L,K)}{N(K)}\right\vert ,\quad N(L,K):=\{g\in G:gLg^{-1}\subset K\}, \] and $\left( H\right) $, $\left( K\right) $, $\left( L\right) $, $\left( I\right) $ are taken from $\Phi_{0}\left( G\right) $. Notice that $A\left( G\right) $ is a ${\mathbb{Z}}$-submodule of $U\left( G\right) $, but not a subring. Define $\pi_{0}:U\left( G\right) \rightarrow A\left( G\right) $ on generators $\left( H\right) \in\Phi\left( G\right) $ by \begin{equation} \pi_{0}\left( \left( H\right) \right) :=\begin{cases} \left( H\right) & \text{ if }\;\left( H\right) \in\Phi_{0}\left( G\right) ,\\ 0 & \text{ otherwise.} \end{cases} \label{eq:pi_0-homomorphism} \end{equation} Then we have: \begin{lemma} \label{lem:pi_0-homomorphism} (cf. \cite{BKR}) The map $\pi_{0}$ defined by $(\mathrm{\ref{eq:pi_0-homomorphism}})$ is a ring homomorphism, i.e. \[ \pi_{0}\left( \left( H\right) \ast\left( K\right) \right) =\pi_{0}\left( \left( H\right) \right) \cdot\pi_{0}\left( \left( K\right) \right) ,\qquad\left( H\right) ,\text{ }\left( K\right) \in\Phi\left( G\right) .
\] \end{lemma} Let us point out that, although the computations of the Euler ring structure $U(G)$ are quite challenging in general, in the case $G=\Gamma\times O(2)$ (here $\Gamma$ is a finite group) Lemma \ref{lem:pi_0-homomorphism} allows us to use the Burnside ring multiplication structure in $A\left( G\right) $ to partially describe the Euler ring multiplication structure in $U\left( G\right) $. \subsection{Equivariant Gradient Degree} Let $G$ be a compact Lie group and $V$ be a $G$-representation. Let $\varphi:V\rightarrow\mathbb{R}$ be a continuously differentiable $G$-invariant functional. Define $\mathcal{M}_{\nabla}^{G}$ as the set of pairs $(\nabla\varphi,\Omega)$, where $\Omega\subset V$ is an open bounded $G$-invariant set and $\nabla\varphi:V\rightarrow V$ is the associated $G$-equivariant gradient field, such that \[ \nabla\varphi(v)\neq0\text{ for }v\in\partial\Omega. \] \begin{theorem} \label{thm:Ggrad-properties} There exists a unique map $\nabla_{G}\text{\textrm{-deg\thinspace}}:\mathcal{M}_{\nabla}^{G}\rightarrow U(G)$, which assigns to every $(\nabla\varphi,\Omega)\in\mathcal{M}_{\nabla}^{G}$ an element $\nabla_{G}\text{\textrm{-deg\thinspace}}(\nabla\varphi,\Omega)\in U(G)$, called the $G$\textit{-gradient degree} of $\nabla\varphi$ on $\Omega$, \begin{equation} \nabla_{G}\text{\textrm{-deg\thinspace}}(\nabla\varphi,\Omega)=\sum_{(H_{i})\in\Phi(G)}n_{H_{i}}(H_{i})=n_{H_{1}}(H_{1})+\dots+n_{H_{m}}(H_{m}), \label{eq:grad-deg} \end{equation} satisfying the following properties: \begin{description} \item \textbf{(Existence)} If $\nabla_{G}\text{\textrm{-deg\thinspace}}(\nabla\varphi,\Omega)\not =0$, i.e. there is in \eqref{eq:grad-deg} a non-zero coefficient $n_{H_{i}}$, then there exists $u_{0}\in\Omega$ such that $\nabla\varphi(u_{0})=0$ and $(G_{u_{0}})\geq(H_{i})$.
\item \textbf{(Additivity)} Let $\Omega_{1}$ and $\Omega_{2}$ be two disjoint open $G$-invariant subsets of $\Omega$ such that $(\nabla\varphi)^{-1}(0)\cap\Omega\subset\Omega_{1}\cup\Omega_{2}$. Then, $\nabla_{G}\textrm{-deg\thinspace}(\nabla\varphi,\Omega)=\nabla_{G}\textrm{-deg\thinspace}(\nabla\varphi,\Omega_{1})+\nabla_{G}\textrm{-deg\thinspace}(\nabla\varphi,\Omega_{2})$. \item \textbf{(Homotopy)} If $\nabla_{v}\Psi:[0,1]\times V\rightarrow V$ is a $G$-gradient $\Omega$-admissible homotopy, then \[ \nabla_{G}\text{\textrm{-deg\thinspace}}(\nabla_{v}\Psi(t,v),\Omega)=\text{\textit{constant}}. \] \item \textbf{(Normalization)} Let $\varphi\in C_{G}^{2}(V,\mathbb{R})$ be a special $\Omega$-Morse function such that $(\nabla\varphi)^{-1}(0)\cap\Omega=G(u_{0})$ and $G_{u_{0}}=H$. Then, \[ \nabla_{G}\text{\textrm{-deg\thinspace}}(\nabla\varphi,\Omega)=(-1)^{\mathrm{m}^{-}(\nabla^{2}\varphi(u_{0}))}\cdot(H), \] where \textquotedblleft$\mathrm{m}^{-}(\cdot)$\textquotedblright\ stands for the total dimension of the eigenspaces for the negative eigenvalues of a (symmetric) matrix. \item \textbf{(Multiplicativity)} For all $(\nabla\varphi_{1},\Omega_{1})$, $(\nabla\varphi_{2},\Omega_{2})\in\mathcal{M}_{\nabla}^{G}$, \[ \nabla_{G}\text{\textrm{-deg\thinspace}}(\nabla\varphi_{1}\times\nabla\varphi_{2},\Omega_{1}\times\Omega_{2})=\nabla_{G}\text{\textrm{-deg\thinspace}}(\nabla\varphi_{1},\Omega_{1})\ast\nabla_{G}\text{\textrm{-deg\thinspace}}(\nabla\varphi_{2},\Omega_{2}), \] where the multiplication `$\ast$' is taken in the Euler ring $U(G)$. \item \textbf{(Suspension)} If $W$ is an orthogonal $G$-representation and $\mathcal{B}$ an open bounded invariant neighborhood of $0\in W$, then \[ \nabla_{G}\text{\textrm{-deg\thinspace}}(\nabla\varphi\times\mbox{\rm Id}_{W},\Omega\times\mathcal{B})=\nabla_{G}\text{\textrm{-deg\thinspace}}(\nabla\varphi,\Omega).
\] \item \textbf{(Hopf Property)} Assume $B(V)$ is the unit ball of an orthogonal $G$-representation $V$ and for $(\nabla\varphi_{1},B(V)),(\nabla\varphi_{2},B(V))\in\mathcal{M}_{\nabla}^{G}$, one has \[ \nabla_{G}\text{\textrm{-deg\thinspace}}(\nabla\varphi_{1},B(V))=\nabla_{G}\text{\textrm{-deg\thinspace}}(\nabla\varphi_{2},B(V)). \] Then, $\nabla\varphi_{1}$ and $\nabla\varphi_{2}$ are $G$-gradient $B(V)$-admissible homotopic. \end{description} \end{theorem} \subsection{Equivariant Gradient Degree in Hilbert Spaces} Let $\mathscr H$ be a Hilbert $G$-representation and $\Omega\subset\mathscr H$ an open bounded $G$-invariant set. A $C^{1}$-differentiable $G$-invariant functional $f:\mathscr{H}\rightarrow\mathbb{R}$ given by $f(x)=\frac{1}{2}\Vert x\Vert^{2}-\varphi(x)$, $x\in\mathscr{H}$, is called \textit{$\Omega$-admissible} if $\nabla\varphi:\mathscr{H}\rightarrow\mathscr{H}$ is a completely continuous map and \[ \forall_{x\in\partial\Omega}\qquad\nabla f(x)=x-\nabla\varphi(x)\not =0~. \] By a \textit{$G$-equivariant approximation scheme} $\{P_{n}\}_{n=1}^{\infty}$ in $\mathscr{H}$, we mean a sequence of $G$-equivariant orthogonal projections $P_{n}:\mathscr{H}\rightarrow\mathscr{H}$, $n=1$, $2$, \dots, such that: \begin{itemize} \item[(a)] the subspaces $\mathscr{H}^{n}:=P_{n}(\mathscr{H})$, $n=1,2,$ \dots, are finite-dimensional; \item[(b)] $\mathscr{H}^{n}\subset\mathscr{H}^{n+1}$, $n=0,1,2,$ \dots; \item[(c)] $\displaystyle \lim_{n\to\infty} P_{n}x=x$ for all $x\in\mathscr{H}$. \end{itemize} \vskip.3cm Then, for an $\Omega$-admissible $G$-invariant functional $f:\mathscr H\rightarrow\mathbb{R}$, one can define the functionals $f_{n}:\mathscr{H}_{n}\rightarrow\mathbb{R}$ by $f_{n}(x):=\frac{1}{2}\Vert x\Vert^{2}-\varphi(x)$, $x\in\mathscr{H}_{n}$. By a standard argument, for sufficiently large $n\in\mathbb{N}$, the maps $\nabla f_{n}(x):=x-P_{n}\nabla\varphi(x)$, $x\in\mathscr{H}$, are $\Omega_{n}$-admissible, where $\Omega_{n}:=\Omega\cap\mathscr{H}_{n}$.
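Property (c) can be illustrated numerically for the Fourier-truncation projections on $H^{1}(S^{1})$; the test function below, with coefficients $a_{k}=1/k^{2}$, is an arbitrary choice.

```python
# Numerical illustration of property (c) of an approximation scheme:
# on H^1(S^1), the projections P_n onto the first n Fourier modes satisfy
# P_n x -> x.  For a function with coefficients a_k = 1/k^2, the squared
# H^1-distance ||x - P_n x||^2 = sum_{k > n} (1 + k^2) a_k^2 tends to 0.

def h1_tail(n, kmax=20000):
    # truncated tail sum approximating ||x - P_n x||^2 in the H^1 norm
    return sum((1.0 + k * k) / k**4 for k in range(n + 1, kmax))

tails = [h1_tail(n) for n in (1, 10, 100, 1000)]
assert all(t2 < t1 for t1, t2 in zip(tails, tails[1:]))  # monotone decrease
assert tails[-1] < 1e-2                                  # -> 0 as n grows
print(tails)
```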
Moreover, the gradient equivariant degrees $\nabla_{G}\text{\textrm{-deg\thinspace}}(\nabla f_{n},\Omega_{n})$ are well defined and are the same, i.e.\ for $n$ sufficiently large
\[
\nabla_{G}\text{\textrm{-deg\thinspace}}(\nabla f_{n},\Omega_{n})=\nabla_{G}\text{\textrm{-deg\thinspace}}(\nabla f_{n+1},\Omega_{n+1}),
\]
which implies, by the Suspension property of the $G$-equivariant gradient degree, that we can put
\begin{equation}
\nabla_{G}\text{\textrm{-deg\thinspace}}(\nabla f,\Omega):=\nabla_{G}\text{\textrm{-deg\thinspace}}(\nabla f_{n},\Omega_{n}), \label{eq:grad-H}
\end{equation}
where $n$ is sufficiently large. One can verify that this construction does not depend on the choice of the $G$-approximation scheme in the space $\mathscr{H}$; see for instance \cite{DKY}. We should mention that the ideas behind the usage of approximation methods to define a topological degree can be traced back to \cite{BP}.

\subsection{Degree on the Slice}
Suppose that the orbit $G(u_{o})$ of $u_{o}\in\mathscr{H}$ is contained in a finite-dimensional $G$-invariant subspace, so that the $G$-action on that subspace is smooth and $G(u_{o})$ is a smooth submanifold of $\mathscr{H}$. Denote by $S_{o}\subset\mathscr{H}$ the slice to the orbit $G(u_{o})$ at $u_{o}$, and by $V_{o}:=T_{u_{o}}G(u_{o})$ the tangent space to $G(u_{o})$ at $u_{o}$. Then clearly, $S_{o}=V_{o}^{\perp}$ and $S_{o}$ is a smooth Hilbert $G_{u_{o}}$-representation. Then we have (cf. \cite{xx}).

\begin{theorem}{\smc(Slice Principle)} \label{thm:SCP} Let $\mathscr{H}$ be a Hilbert $G$-representation, $\varphi:\mathscr{H}\rightarrow\mathbb{R}$ a continuously differentiable $G$-invariant functional, $u_{o}\in\mathscr{H}$, and let $G(u_{o})$ be an isolated critical orbit of $\varphi$. Let $S_{o}$ be the slice to the orbit $G(u_{o})$ and $\mathcal{U}$ an isolated tubular neighborhood of $G(u_{o})$. Put $\varphi_{o}:S_{o}\rightarrow\mathbb{R}$, $\varphi_{o}(v):=\varphi(u_{o}+v)$, $v\in S_{o}$.
Then
\begin{equation}
\nabla_{G}\text{\textrm{-deg}}(\nabla\varphi,\mathcal{U})=\Theta(\nabla_{G_{u_{o}}}\text{\textrm{-deg}}(\nabla\varphi_{o},\mathcal{U}\cap S_{o})), \label{eq:SDP}
\end{equation}
where $\Theta:U(G_{u_{o}})\rightarrow U(G)$ is defined on generators by $\Theta(H)=(H)$, $(H)\in\Phi(G_{u_{o}})$.
\end{theorem}

We show how to compute $\nabla_{G}\text{-deg\thinspace}(\mathscr A,B(V))$, where $\mathscr A:V\rightarrow V$ is a symmetric $G$-equivariant linear isomorphism and $V$ is an orthogonal $G$-representation, i.e.\ $\mathscr A=\nabla\varphi$ for $\varphi(v)=\frac{1}{2}(\mathscr Av\bullet v)$, $v\in V$, where \textquotedblleft$\bullet$\textquotedblright\ stands for the inner product. Consider the $G$-isotypical decomposition \eqref{eq:Giso} of $V$ and put
\[
\mathscr A_{i}:=\mathscr A|_{V_{i}}:V_{i}\rightarrow V_{i},\quad i=0,1,\dots,r.
\]
Then, by the Multiplicativity property,
\begin{equation}
\nabla_{G}\mbox{-deg}(\mathscr A,B(V))=\prod_{i=0}^{r}\nabla_{G}\mbox{-deg}(\mathscr A_{i},B(V_{i})). \label{eq:deg-Lin-decoGrad}
\end{equation}
Take $\xi\in\sigma_{-}(\mathscr A)$, where $\sigma_{-}(\mathscr A)$ stands for the negative spectrum of $\mathscr A$, and consider the corresponding eigenspace $E(\xi):=\ker(\mathscr A-\xi\,\mbox{Id})$. Define the numbers $m_{i}(\xi)$ by
\begin{equation}
m_{i}(\xi):=\dim\left(E(\xi)\cap V_{i}\right)/\dim\mathcal{V}_{i}, \label{eq:m_j(mu)-gra}
\end{equation}
and the so-called basic gradient degrees by
\begin{equation}
\text{deg}_{\mathcal{V}_{i}}:=\nabla_{G}\mbox{-deg}(-\mbox{Id\,},B(\mathcal{V}_{i})). \label{eq:basicGrad-deg0}
\end{equation}
Then we have
\begin{equation}
\nabla_{G}\text{\textrm{-deg}}(\mathscr A,B(V))=\prod_{\xi\in\sigma_{-}(\mathscr A)}\prod_{i=0}^{r}\left(\mbox{\rm deg}_{\mathcal{V}_{i}}\right)^{m_{i}(\xi)}. \label{eq:lin-GdegGrad}
\end{equation}

\noindent\textbf{Acknowledgement.} The authors are grateful to H-P. Wu for the GAP programming of the topological invariants. C.
Garc\'{\i}a was partially supported by PAPIIT-UNAM through grant IA105217. I. Berezovik and W. Krawcewicz acknowledge partial support from the National Science Foundation through grant DMS-1413223. W. Krawcewicz was also supported by Guangzhou University during his visit in the summer of 2017.
\section{\label{sec:intro}Introduction} Spin-exchange optical pumping (SEOP) is a technique which allows the nuclear spin-angular momentum of certain noble gases to be increased to polarizations of order 10\%. SEOP has been used to study the NMR characteristics of porous media \cite{Terskikh2002AMaterials}, to examine protein dynamics \cite{Schroder2013XenonAlert}, and to perform clinical lung-imaging in humans \cite{Oros2004HyperpolarizedMRI}. SEOP involves two steps: optical pumping of an alkali metal vapor and spin-exchange from the alkali metal vapor to the noble gas nuclei. First, a beam of circularly polarized light is directed onto a transparent cell containing a macroscopic amount of alkali metal, usually rubidium (Rb). The cell is heated to between 100 and 200 $^{\circ}$C to produce an optically-thick alkali vapor. The laser is tuned to the D1 transition frequency, and the circular polarization of the laser imposes a selection rule such that electrons are excited out of only one of the two ground-state spin sublevels. The spin-polarized electron quickly recouples with the nuclear spin, and thus contributes to the spin-polarization of the atom. Next, during spin-exchange, the alkali vapor transfers spin-angular momentum to the noble gas nuclei. A gas mixture containing a noble gas and some other inert gases, usually helium and nitrogen, is introduced into the cell. During collisions, the alkali metal electron couples to the noble gas nucleus via the Fermi-contact interaction, thus transferring its spin-angular momentum. Although the alkali metal atom leaves this interaction depolarized, it is quickly repolarized by absorption of another photon from the laser. Hyperpolarization of xenon-129 ($^{129}$Xe) using Rb vapor has become a popular method of producing polarized gas, both because $^{129}$Xe is inexpensive relative to other options and because SEOP of $^{129}$Xe can be accomplished on a relatively fast time-scale.
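The two-step picture above is commonly summarized by steady-state rate relations: the Rb electron polarization is set by the competition between the optical pumping rate and the Rb spin-destruction rate, while the noble-gas polarization builds up toward a fraction of the Rb polarization determined by the spin-exchange and nuclear relaxation rates. A minimal Python sketch of these standard relations (the rate values below are illustrative assumptions, not measurements from this work):

```python
import math

def rb_polarization(R_opt, gamma_sd):
    """Steady-state Rb electron polarization from the optical pumping
    rate R_opt and the spin-destruction rate gamma_sd (both in 1/s)."""
    return R_opt / (R_opt + gamma_sd)

def xe_polarization(t, P_rb, gamma_se, Gamma):
    """129Xe polarization after pumping time t (s), given the Rb
    polarization P_rb, the spin-exchange rate gamma_se, and the Xe
    nuclear relaxation rate Gamma (both in 1/s)."""
    return P_rb * gamma_se / (gamma_se + Gamma) * (1.0 - math.exp(-(gamma_se + Gamma) * t))

# Illustrative rates only (assumed values, not from this work):
P_rb = rb_polarization(R_opt=1.0e5, gamma_sd=3.0e4)                 # ~0.77
P_xe = xe_polarization(t=600.0, P_rb=P_rb, gamma_se=0.01, Gamma=0.001)
```

With these assumed rates the Rb polarization is about 77\%, and the $^{129}$Xe polarization saturates at roughly 91\% of the Rb value, illustrating why high Rb polarization throughout the cell is the central figure of merit.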
Hyperpolarized xenon-129 (HP $^{129}$Xe) gas is typically produced in a continuous manner using a flow-through polarizer \cite{Driehuys1996High-volume129Xe, Ruset2006Optical129Xe, Schrank2009a}. A flow-through polarizer operates by flowing a $^{129}$Xe gas mixture through an optical pumping cell containing Rb vapor. Because $^{129}$Xe can undergo spin-exchange on a fast time-scale, the gas can flow through cells at total gas flow rates of $\sim$1 SLM. $^{129}$Xe is usually kept at concentrations between 1-5\% in these systems; thus a flow-through polarizer can reasonably produce 1 liter of HP $^{129}$Xe in approximately an hour. With increased interest in this technique for medical imaging, there is a need to develop technology which can reliably produce high-volume, high-polarization HP $^{129}$Xe on short timescales. Recently, Freeman et al. \cite{Freeman2014} noted that some styles of flow-through polarizers were not producing HP $^{129}$Xe with the polarization that theoretical models predicted. In order to gain insight into this deficiency, a new Finite Element Method (FEM) model, which builds upon previous models \cite{Fink2005, Fink2007}, was constructed to simulate the full three-dimensional dynamics of SEOP cells similar to those examined by Freeman \cite{Schrank2019ACode}. Here, we present the preliminary results of that model.
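The production-rate figure above follows from simple arithmetic on the quoted flow rate and xenon concentration; a quick check using the nominal values from the text:

```python
def xe_production_rate(total_flow_slm, xe_fraction):
    """Liters of Xe passing through the cell per hour at standard
    conditions, given the total flow (SLM) and the Xe mole fraction."""
    return total_flow_slm * xe_fraction * 60.0

# ~1 SLM total flow with 1-5% Xe gives roughly 0.6-3 L of Xe per hour,
# consistent with "1 liter ... in approximately an hour".
low = xe_production_rate(1.0, 0.01)    # 0.6 L/hr
high = xe_production_rate(1.0, 0.05)   # 3.0 L/hr
```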
\section{Setup of the Simulation\label{sec:simsetup}} \begin{figure*} \subfloat[\label{fig:fullcell}]{\includegraphics[width=0.4\textwidth]{Figures/RbGeometry.png}} \subfloat[\label{fig:fullcellvert}]{\includegraphics[width=0.4\textwidth]{Figures/RbGeometryVert.png}}\\ \subfloat[\label{fig:halfcell}]{\includegraphics[width=0.4\textwidth]{Figures/100cchalf.png}} \subfloat[\label{fig:rbdropcell}]{\includegraphics[width=0.4\textwidth]{Figures/RbDrop.png}} \caption{The different geometries that were modelled in the current study: (\ref{fig:fullcell}) the 100-cc horizontal geometry with a full-circumferential shell for the Rb source, (\ref{fig:fullcellvert}) the 100-cc vertical geometry with a full-circumferential shell for the Rb source, (\ref{fig:halfcell}) the 100-cc horizontal geometry with a half-circumferential shell for the Rb source, (\ref{fig:rbdropcell}) the 300-cc horizontal geometry with a drop for the Rb source. The 300-cc cell has an area drawn directly behind the Rb drop to model the drop's shadow. A further 100-cc vertical geometry with a half-circumferential shell for the Rb source was modelled but is not pictured in the figure. In the figures, areas in black represent the Rb sources in the optical pumping section of the cells. The outlet tube sources are not pictured in the figures.} \label{fig:allgeometries} \end{figure*} Three different geometries were investigated in the course of this study: (1) a 100-cc cell with a full-circumferential Rb distribution, (2) a 100-cc cell with a half-circumferential Rb distribution, and (3) a 300-cc cell with a Rb drop (see Figure \ref{fig:allgeometries}). The geometries were designed to closely resemble the geometries of experimental cells used by Freeman et al. However, it should be noted that Freeman et al. did not report the Rb distribution in the cells, and the choice of simulated Rb distribution may strongly affect the results. Rb sources for the diffusion module were modelled in two areas in the optical pumping cell geometries.
First, a Rb source was modelled in the optical pumping body (cylindrical portion). This source was modelled either as a thin film (Figures \ref{fig:fullcell}, \ref{fig:fullcellvert}, and \ref{fig:halfcell}) or as a drop (Figure \ref{fig:rbdropcell}), and these sources were meant to emulate the Rb metal that was introduced into the body of the optical pumping cells for Freeman et al.'s testing. The 300-cc cell's Rb-drop source proved to be a special challenge for the model. The FEM model for the laser absorption is very poor at handling discontinuities, and a discontinuity exists directly behind the Rb drop: a shadow. The Rb drop was modelled with a 3.5-mm radius, and thus it extended up above the wall of the cell. In order to account for the shadow cast by the drop, a second region behind the drop had to be drawn where the laser absorption model was disabled. This effectively caused that area to be in the dark. Models that did not employ a shadow in the geometry would not execute. The second source/sink was modelled as a thin film in the outlet tube of the cell. In the 100-cc cells, this thin film started at the joint between the outlet tube and the cell body, and it extended to the final bend in the outlet tube. Two different configurations were used for the Rb source in the outlet tube of the 300-cc cell: (1) the same configuration as the 100-cc cells, and (2) a thin film that started at the first bend in the outlet tube and extended to the final bend in the outlet tube. This source/sink was meant to emulate the area of the cell where Rb vapor condensed after exiting the optical pumping region. Visual inspection of used optical pumping cells of this design shows buildup of Rb metal in the outlet tube of the cell. A body force, that is, a condition applied to the entire simulated volume as opposed to just at a boundary, was applied to the models in order to simulate gravity and enable gas convection in the simulation.
This force was modelled either such that the cell was oriented horizontally (Figure \ref{fig:fullcell}) or vertically (Figure \ref{fig:fullcellvert}). The vertical case was only investigated in the 100-cc cells. Current flow-through polarizer designs that use cells of this size exclusively use a horizontal configuration. Therefore, the vertical simulations should be viewed as more exploratory simulations rather than an attempt to understand current cell dynamics. However, some flow-through polarizers with much larger cells than those explored in this study do use a vertical orientation. Cells were modelled at four wall-temperature boundary conditions: \hbox{110 $^{\circ}$C}, \hbox{120 $^{\circ}$C}, \hbox{130 $^{\circ}$C}, and \hbox{140 $^{\circ}$C}. These different boundary conditions simulated different oven-air temperatures that the optical pumping cell could experience. Other laser, cell, flow-rate, and HP$^{129}$Xe relaxation-rate configuration parameters are identical to those described in Ref. \cite{Schrank2019ACode}, as are the parameters and expressions used for important physical properties such as the diffusion constant, etc. The computational parameters of the model and the model implementation are also described in Ref. \cite{Schrank2019ACode}. All the successful simulations were run as transient simulations with 600 time steps at 0.1 s per time step. The time to compute a single transient model was tens of days of CPU time. There were several attempts to run steady-state simulations on some of the 300-cc geometries. However, all of the attempts failed to produce converging results. The module that calculates the $^{129}$Xe polarization was disabled for transient simulations of the 100-cc model because of an instability in the computational module that was observed in transient-model tests. The 300-cc model did not suffer from these computational difficulties, and thus the $^{129}$Xe polarization module was enabled during these calculations.
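The transient-stepping and stopping logic used in this study (advance the model until the monitored metrics stop changing, or until the programmed number of time steps is exhausted) can be sketched as follows. Here `step_model` is a hypothetical stand-in for one FEM time step, and the relative tolerance corresponds to the 0.05\% convergence criterion applied to the monitored metrics:

```python
def run_transient(step_model, n_steps=600, dt=0.1, rtol=5e-4):
    """Advance a transient simulation until the monitored metrics
    (e.g., Rb polarization, average cell-body temperature, and laser
    absorption) all change by less than rtol between consecutive
    steps, or until n_steps is exhausted. step_model(t, dt) stands in
    for the FEM solver and must return a dict of metric values."""
    prev = None
    for i in range(n_steps):
        metrics = step_model(i * dt, dt)
        if prev is not None and all(
            abs(metrics[k] - prev[k]) <= rtol * max(abs(prev[k]), 1e-12)
            for k in metrics
        ):
            return metrics, True   # reached steady state
        prev = metrics
    return prev, False             # ran out of time steps
```

This also makes explicit why a run can end ambiguously: a simulation that neither converges nor finishes its 600 steps (because the solver fails at some step) yields no verdict about the long-term behavior.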
Transient simulations were run until changes in the calculated (1) Rb polarization, (2) average cell-body temperature, and (3) laser absorption were less than 0.05\%. If those conditions did not occur, the simulations were run until either the simulation completed the programmed number of time steps (600, corresponding to 60 seconds of simulated time) or the simulation terminated with a failure to converge. Transient simulations that reached a steady state in all three metrics were used as the initial conditions in a steady-state simulation with the $^{129}$Xe polarization module enabled in order to calculate the output polarization of HP$^{129}$Xe. The output polarization of HP$^{129}$Xe was calculated by averaging the polarization in the cell outlet tube over a slice 4 cm from the edge of the geometry. The slice was offset from the edge of the geometry because of artifacts in the simulations caused by the boundary. Some of the solutions reached a computational point where the simulation would not restart, presumably because the previous time-step's solution vector was poorly conditioned. \section{Results\label{sec:results}} \begin{table*}[t] \caption{Results of the 100-cc simulations at all temperatures. The top row describes the four different Rb-configuration and cell-orientation combinations used in the simulations. In the case that a simulation reached steady state, the predicted HP$^{129}$Xe polarization and laser absorption are reported. For simulations that did not reach steady state, the results are described in three ways: (1) full oscillations were observed, (2) initial oscillations were observed, (3) a rapid rise in temperature and laser absorption was observed.
Text in italics denotes premature termination of the simulation.\label{tab:all100ccresults}} \begin{tabular}{c|c|c?c|c|} &\textbf{Full-Horz.} &\textbf{Full-Vert.} &\textbf{Half-Horz.} & \textbf{Half-Vert.}\\ \hline \textbf{110 $^{\circ}$C}&\begin{tabular}{c c}$^{129}$Xe Polarization:&4.7\%\\Laser Absorp.:&7.9\%\end{tabular}&\begin{tabular}{c c}$^{129}$Xe Polarization:&5.5\%\\Laser Absorp.:&2.2\%\end{tabular}&\begin{tabular}{c c}$^{129}$Xe Polarization:&2.0\%\\Laser Absorp.:&4.5\%\end{tabular}&\begin{tabular}{c c}$^{129}$Xe Polarization:&2.3\%\\Laser Absorp.:&4.5\%\end{tabular}\\ \hline \textbf{120 $^{\circ}$C}&Oscillations&Oscillations& \textit{Rapid Temp. and Abs. Increase} &Oscillations\\ \hline \textbf{130 $^{\circ}$C}&Oscillations&Oscillations& \textit{Initial Oscillations} &\textit{Initial Oscillations} \\ \hline \textbf{140 $^{\circ}$C}& \textit{Initial Oscillations}\ &Oscillations& \textit{Initial Oscillations} &\textit{Initial Oscillations}\\ \hline \end{tabular} \end{table*} \begin{table}[b] \caption{Results of the 300-cc cell simulations at all temperatures. The 300-cc cells had two Rb outlet configurations: (1) the Rb film started at the joint between the outlet and the body (Full Rb Outlet), and (2) the Rb film started at the first bend in the outlet (Partial Rb Outlet). Text in italics again denotes prematurely terminated simulations. Only one temperature parameter was used with the Full Rb Outlet, and it was observed that the Rb began to diffuse back into the cell body. For all temperature parameters, the Partial Rb Outlet configuration appeared to reach a steady-state solution, but a lack of computational resources prevented the complete observation of a steady-state solution. 
The reported $^{129}$Xe polarization and laser absorption represent the values modelled at the last time-step of the respective simulation.\label{tab:all300ccresults}} \begin{tabular}{c|c|c|} & \textbf{Full Rb Outlet} & \textbf{Partial Rb Outlet} \\ \hline \textbf{110 $^{\circ}$C} & \cellcolor{black!25} & \begin{tabular}{c}Xenon Polarization: 6.0\%\\ Laser Absorption: 4.8\%\end{tabular}\\ \hline \textbf{120 $^{\circ}$C} & \cellcolor{black!25} & \begin{tabular}{c}Xenon Polarization: 9.6\%\\Laser Absorption: 7.8\%\end{tabular}\\ \hline \textbf{130 $^{\circ}$C} & \cellcolor{black!25} & \begin{tabular}{c}Xenon Polarization: 14.4\%\\Laser Absorption: 12.5\%\end{tabular}\\ \hline \textbf{140 $^{\circ}$C} & \textit{Back-Diffusion} &\begin{tabular}{c}Xenon Polarization: 20\%\\Laser Absorption: 19.3\%\end{tabular}\\ \hline \end{tabular} \end{table} Visualizations of the results from all of the simulations can be found in Ref. \cite{SchrankSupportData2020}. Transient results are stored as movies, and the visualizations of HP$^{129}$Xe polarization from steady-state solutions are stored as images. As will be developed more fully in Section \ref{sec:discusion}, many of the solutions failed to reproduce experimental behavior. Although this is clearly an indication that the model is not a complete description of the system, the solutions provide some direction as to qualitative reasons for poor cell performance. \subsection{100-cc Cell Simulation Results} Summaries of the results for the 100-cc cell simulations are shown in Table \ref{tab:all100ccresults}, and two sample visualizations of the results are shown in Figure \ref{fig:resultvisualization}. The 100-cc models in most cases did not reach a steady-state solution. Many of the simulations displayed a strong oscillatory behavior in average temperature, Rb polarization, and laser absorption.
Simulations with wall temperatures above 110 $^\circ$C both displayed this oscillatory behavior and also tended to terminate prematurely, i.e., before the end of the prescribed number of time steps. In the case of simulations on horizontally-oriented cells with a full-circumferential Rb source, the trend in oscillatory behavior was definitively seen in all the simulations with a wall temperature above 120 $^{\circ}$C because these simulations did not terminate prematurely. For other cell configurations, the fluid-flow model frequently did not converge at a particular time step, causing the simulations to terminate. In many cases, these prematurely terminated simulations were restarted only to terminate again due to a similar error. When the simulations were successfully restarted, they displayed discontinuities in the calculated laser absorption, average temperature, and Rb polarization. An example of this can be seen in Figure \ref{fig:100ccvertresult} at time step 500 (50 seconds). In this case, all three of the plotted metrics suddenly drop at the time step where the model was restarted. These premature terminations and restart artifacts made it difficult to come to firm conclusions on the long-term behavior of the model for those parameters. In some cases, simulations seemed to begin to display oscillatory behavior, but the model terminated prematurely before several oscillations could be observed. However, the suggestion of initial oscillations seemed present. Due to the oscillatory behavior of other simulations of the same cell configuration at lower temperatures, it was assumed that, if these simulations could be successfully restarted, they would display the same oscillatory behavior as the simulations at lower temperatures.
Finally, the 120 $^{\circ}$C horizontally-oriented model with a half-circumferential Rb source neither reached steady state nor displayed oscillatory behavior, but instead displayed a rapid rise in average temperature and laser absorption before terminating prematurely. Again, the premature termination makes it difficult to predict the long-term behavior of the model at these parameters. However, based on the behavior of other simulations at the same temperature, it is suspected that this simulation would have also begun oscillatory behavior. The oscillatory cycles followed a common pattern. During the initial stage, the Rb polarization remained high and stable, but the average cell temperature and laser absorption steadily rose. This state remained until the average gas temperature was roughly 100\% higher than the wall boundary-condition temperature. At this point, the Rb polarization rapidly decreased, while the laser absorption and temperature rapidly increased. The average temperature in the cells peaked $\sim$150\% higher than the wall boundary condition, and the Rb polarization fell to $\sim$0\%. At this point, the Rb vapor absorbed most of the laser light at the front of the cell. This allowed the temperature of the gas in most of the cell to fall rapidly. As the average temperature decreased, both the Rb vapor number density and the laser absorption decreased, allowing the laser to, once again, penetrate deep into the optical pumping cell. Thus, the oscillations are driven by the interchange of positive feedback, when laser-heated gases vaporize excess Rb, and negative feedback, when the excess Rb blocks laser light at the front of the cell so that it can no longer effectively heat the gas. The sudden changes described above were accompanied by a marked change in the gas-flow behavior.
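The feedback cycle described above can be caricatured with a deliberately crude two-variable model: absorbed laser power heats the gas, the gas temperature raises the Rb vapor density through a steep, vapor-pressure-like dependence, and a dense vapor absorbs the light at the front of the cell, cutting off heating of the bulk. This is an illustration of the mechanism only, not the FEM model of this work; every constant below is an arbitrary assumption:

```python
import math

def toy_feedback(n_steps=2000, dt=0.01):
    """Caricature of the laser-heating / Rb-density feedback loop.
    T is the excess gas temperature and n the Rb vapor density, both
    in arbitrary units. Absorbed power heats the gas; temperature
    drives a steep growth in n; high n blocks the light at the front
    of the cell, removing the heat source. Constants are arbitrary."""
    T, n = 0.1, 0.1
    history = []
    for _ in range(n_steps):
        absorption = 1.0 - math.exp(-n)                  # Beer-Lambert-like
        heating = 5.0 * absorption * math.exp(-2.0 * n)  # light reaching the bulk
        T += dt * (heating - T)                          # heat in minus wall losses
        n += dt * (math.exp(0.5 * T) - 1.0 - 0.5 * n)    # steep growth with T
        history.append((T, n))
    return history
```

Depending on the coefficients, a model of this form can either settle to a fixed point or cycle between a transparent, hot-running state and an opaque, self-shading one, which is the qualitative pattern of the simulated oscillations.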
In the horizontally oriented cells, the convection rotation changed orientation from rotating about the axis of the cell-body cylinder to rotating about an axis in the transverse plane (Figure \ref{fig:100ccresult}), and this new flow pattern remained even after the temperature and absorption returned to lower levels. Cells oriented vertically showed a tightening of the convection rotation towards the front of the cell (Figure \ref{fig:100ccvertresult}); unlike in the horizontal models, this change in flow pattern oscillated with the average temperature and laser absorption. In addition to this oscillatory behavior, the models predicted points in the cell whose temperature exceeded 1000 $^{\circ}$C, which is very likely a non-physical result. If this configuration of Rb is ever realized in an optical pumping cell, the Rb film experiencing high temperatures is likely to relocate to another portion of the cell. In contrast, the Rb sources in the FEM model are inexhaustible and static. \begin{figure*}[t] \subfloat[\label{fig:100ccresult}]{\includegraphics[width=0.4\textwidth]{Figures/120CFull-gray.png}} \subfloat[\label{fig:100ccvertresult}]{\includegraphics[width=0.4\textwidth]{Figures/120CFullvert-gray.png}}\\ \subfloat[\label{fig:300ccrboutletresult}]{\includegraphics[width=0.4\textwidth]{Figures/300cc140crboutletnolegend-gray.png}} \subfloat[\label{fig:results300ccresult}]{\includegraphics[width=0.4\textwidth]{Figures/140CDrop-gray.png}} \caption{[Color Online] Four visualizations of the simulations. On the left of each subfigure is a visualization of the optical pumping cell with gas-flow streamlines. The velocity vector field of the gas moving through the cells is denoted in each figure. The geometry of the cell is sliced, and the color of the slice denotes the number density of Rb in inverse cubic meters.
On the right of each subfigure is a graph of the average temperature as a fraction above the set-point temperature (black, dashed line), the fraction of laser power absorbed (dotted, purple line), and the fraction of Rb polarization (solid, brown line). The green vertical line in each graph denotes the time step at which the visualization on the left is taken. The simulations are: (\ref{fig:100ccresult}) the visualization of the 120 $^{\circ}$C horizontally-oriented 100-cc cell with a full-circumferential Rb distribution; (\ref{fig:100ccvertresult}) the visualization of the 120 $^{\circ}$C vertically-oriented 100-cc cell with a full-circumferential Rb distribution; (\ref{fig:300ccrboutletresult}) the visualization of the initial 140 $^{\circ}$C 300-cc cell with the Rb film in the outlet tube extending up to the wall of the body of the cell; and (\ref{fig:results300ccresult}) the visualization of the 140 $^{\circ}$C 300-cc cell with the Rb film in the outlet tube moved to the first bend of the outlet tube. Solution visualizations for all models and parameters can be found in Ref. \cite{SchrankSupportData2020}.} \label{fig:resultvisualization} \end{figure*} \subsection{300-cc Cell Simulation Results} The 300-cc optical pumping cell model was constructed after completing the initial simulations with the 100-cc models. The model's construction attempted to eliminate the above-described oscillatory behavior, as it is not observed in experimental setups. First, the volume of the cell was increased in order to decrease the modelled intensity of the laser beam and decrease the likelihood of generating non-physical temperatures. In addition, the Rb source in the body of the cell was reconfigured to a single drop structure on the bottom surface of the optical pumping cell body. Initially, a Rb sink in the form of a thin film on the walls of the outlet was maintained as in the 100-cc cells. The cell was only simulated in the horizontal orientation.
The results are shown in Table \ref{tab:all300ccresults}, and samples of the visualizations are shown in Figures \ref{fig:300ccrboutletresult} and \ref{fig:results300ccresult}. In the initial simulation of this configuration (Figure \ref{fig:300ccrboutletresult}), the model again predicted that the gases in the body of the cell are heated. Due to the configuration and cell dimensions, the temperature near the Rb drop does not significantly rise. However, the gases exiting through the outlet can be several hundred degrees above the set-point temperature. Excessive heating of the Rb source/sink in the outlet caused an excess of Rb vapor. Due to the proximity of the modelled source to the cell body, some of this excess was able to diffuse back into the cell body and block substantial portions of the light. Eventually, the Rb source in the outlet became the dominant source of Rb vapor in the body of the cell, and the laser absorption dramatically increased. The simulation terminated prematurely, so it is unclear if this system would generate oscillations as observed in the 100-cc cells. A second 300-cc geometry was generated with the outlet Rb source/sink recessed farther into the outlet in order to prevent the vapor from this source from diffusing back into the body of the cell (Figure \ref{fig:results300ccresult}). The outlet Rb source/sink in this model started at the first bend in the outlet. In this configuration, the hot exit gases again heated the Rb in the outlet and generated excess Rb vapor. However, because the Rb film was recessed farther into the outlet, it did not diffuse into the cell body. At some locations in the outlet, the Rb number density exceeded the Rb density in the cell body by an order of magnitude. Due to computational limitations, it was not possible to take these simulations to a steady-state solution. In the case of the 300-cc geometry, the $^{129}$Xe polarization computational module was enabled.
The values for the output $^{129}$Xe polarization and the laser absorption for those simulations are recorded in Table \ref{tab:all300ccresults}. Although the simulations did not meet the definition of steady state, the trajectory of the observed changes in average cell temperature, laser absorption, and $^{129}$Xe polarization appeared to be headed towards a steady-state solution. However, limitations in available computational time prevented taking these simulations to more than 600 time steps. \section{Conclusion and Future Research\label{sec:conclusion}} The initial simulations computed using a new FEM model have revealed qualitative insights about current designs for optical pumping cells. In particular, the model indicates that cell performance is strongly linked to the Rb source distribution. Rb sources that are near the ``top'' of the cell or closest to where the pump laser beam enters the cell can contribute more strongly to the overall Rb vapor number density because of heating due to the laser and gas. Even a Rb source that is nominally downstream of the cell body can diffuse back into the body and strongly affect the dynamics of optical pumping. Although in this model the Rb source distribution is static, there is no reason to believe this is the case in actual SEOP systems. Rb metal will redistribute itself into different configurations during the course of the life of the cell, and these configurations may not be optimal for optical pumping. In addition to other mechanisms, this reconfiguration may contribute to the aging and eventual failure of optical pumping cells. Rb metal films in the outlet tube of the cell may also require careful curation to prevent depolarization in this area. This was first suggested as a problem by Ref. \cite{Ruset2005}, but these simulations suggest the problem may be more acute due to the enhanced number density of Rb vapor in the outlet tube.
All of this taken together suggests that measures should be taken to eliminate Rb metal buildup in the outlet tube, or to develop a protocol to occasionally drive the Rb metal from the outlet back into the cell body. This should be considered despite recent innovations that move the Rb source to the inlet tube in a presaturation region, as initially suggested by Ref. \cite{Fink2007}. In fact, the model would suggest that, in the case of optical pumping cells that use a presaturation region, a protocol should be developed to drive Rb from both the cell body and the outlet in order to preserve control over the Rb number density in the optical pumping region of the cell. The subtle dynamics of flow-through optical pumping revealed in these simulations may have application in other areas of spin-exchange physics. For several decades now, helium-3 polarization has underachieved its theoretical maximum value \cite{Babcock2006LimitsHe3}. The physical mechanism behind the so-called X-factor, which describes this deficiency, remains elusive. Perhaps detailed computations of hyperpolarized helium-3 setups which account for thermal, fluid, and diffusion effects will similarly reveal dynamics that can explain this deficiency. However, it is clear that additional work needs to be done with the FEM model. Optimal model-solver parameters have not yet been found such that transient simulations can be run to completion with certainty. This instability, along with the relatively long computational times, makes rigorous computational studies of SEOP cell dynamics challenging. Unfortunately, the nature of FEM makes it difficult to predict optimal solver parameters \textit{a priori}. Thus this problem must be tackled by iteratively attempting different solver-parameter combinations. In addition, because the HP$^{129}$Xe FEM module is unstable in transient simulations, a detailed study of output polarization in simulations that do not reach steady state is not possible.
Although experimentally SEOP cells reach steady-state, studying the evolution of the polarization may be useful in the construction of more efficient flow-through polarizer cells. Finally, since it is clear that the Rb source distribution is critical in the flow dynamics of the cell, it is important to identify physically realistic Rb source distributions. This matters in the FEM simulations because the Rb sources are immobile. Thus, we cannot rely on the computed dynamics of the model to predict changes in the Rb source distribution. This FEM model provides insight into understanding the poor performance of current optical pumping systems without the need to appeal to cell contamination or other mechanisms. As improvements are made to the model, it may eventually become necessary to appeal to mechanisms outside of what is described in the model. However, before accepting these explanations, every effort should first be made to understand the behavior of SEOP systems in terms of known physical principles. \section{Discussion of Optical Pumping Cell Simulation Results\label{sec:discusion}} Because many of the solutions failed to converge to time-independent solutions, the results cannot be used to validate the model against Freeman et al.'s results. However, there are insights that can be drawn despite the inability to make a direct comparison with the experimental results. First, the lack of a steady-state solution is indicative of the dynamics of SEOP flow-through systems. The oscillatory behavior observed in simulated systems is not observed in experimental setups. However, the boundary conditions that seemed to give rise to the oscillatory behavior do not seem entirely unreasonable and are physically possible. Temperatures generated by laser heating in the simulation are high enough to rapidly boil rubidium.
That suggests that liquid rubidium sources in the cell body may be dynamically redistributed during SEOP on fast timescales compared to the production of HP$^{129}$Xe. It cannot currently be determined theoretically if there exists a Rb distribution configuration that is stable. The oscillatory behavior of many of the simulations, while not realized in experimental setups, may also be indicative of the conditions that give rise to laser runaway-heating observed in experimental setups. In experimental SEOP polarizers, laser runaway-heating is characterized by an increase in the absorption of laser-light that gives rise to increased heating of the cell-body \cite{Ruset2005}. It is suspected that the heating creates the same positive-feedback mechanism described above, in which increased heating of the cell-body increases the Rb-vapor density. This increased density absorbs more of the laser light, which in turn heats the cell further. Second, the simulations indicate that the distribution of Rb in the optical pumping cell body can drastically change the performance of the cell. In particular, Rb sources that are on the ``top'' and ``front'' of the cell are likely to contribute to the Rb vapor concentration to a far greater extent than Rb sources in other areas of the cell. The dominance of Rb sources on the ``top'' was observed in the 100-cc cell simulation in which the Rb source was a full-circumferential distribution (Figure \ref{fig:100ccresult}). This is due to the nature of convection transporting hot gases to the top portion of the cell. Since the gas will be heated by the laser, the system will have a temperature gradient between the upper and lower portions of the cell. The high temperature in the upper portion of the cell will tend to evaporate Rb in those locations more quickly. Similarly, Rb sources near the ``front'' of the cell are likely to contribute to the concentration of the Rb vapor to a far greater extent than sources in the ``back'' of the cell. 
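To make the temperature sensitivity discussed above concrete, the saturated Rb vapor density can be estimated from an empirical Killian-type vapor-pressure fit. The coefficients below are a commonly quoted literature form that we assume for illustration; they are not outputs of this model:

```python
import numpy as np

def rb_number_density(T):
    """Saturated Rb vapor number density in cm^-3 at temperature T (kelvin),
    from an assumed Killian-type empirical fit for liquid Rb:
    n = 10**(26.41 - 4132/T) / T.  Illustrative only, not from this model."""
    return 10.0 ** (26.41 - 4132.0 / T) / T

# Density rises by orders of magnitude over a typical SEOP operating range,
# so locally laser-heated Rb pools dominate the vapor load:
for T in (373.0, 433.0, 473.0):
    print(f"T = {T:.0f} K : n_Rb = {rb_number_density(T):.2e} cm^-3")
```

A hot spot only a few tens of kelvin above the set point can therefore supply an order of magnitude more vapor than the nominal wall temperature would suggest, consistent with the dominance of ``top'' and ``front'' Rb sources seen in the simulations.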
Because the laser intensity is highest at the ``front'', gases are likely to be close to the highest temperature experienced in the body of the cell. As seen in the initial 300-cc optical pumping cell simulation, Rb sources that are nominally in the outlet tube may significantly contribute to the Rb number density in the body of the cell (Figure \ref{fig:300ccrboutletresult}). Depending on the configuration of the Rb in the cell, light may be absorbed preferentially at the front of the cell, causing large dark regions in the body. Although not definitively observed in these simulations because the $^{129}$Xe polarization module was disabled, such a situation could give rise to decreased HP$^{129}$Xe polarization. However, as in the case of runaway-heating, the Rb sources may preferentially relocate to the back of the cell. Therefore, the extent of this effect in actual SEOP cells, in which the Rb sources are mobile, is unclear. As cells age, Rb sources may become less mobile and more evenly distributed, and this distribution may contribute to decreased output polarization in aged cells. Finally, the simulations may indicate that Rb in the outlet of the SEOP cell is of far greater importance than previously suspected. Gases which are significantly warmer than the set-point temperature of the cell may exit the outlet tube and may give rise to Rb number densities in that region that are significantly higher than the Rb number density in the body (Figure \ref{fig:results300ccresult}). Because the laser light does not illuminate the outlet tube of the cell, the Rb vapor in the outlet tube will not be polarized and, thus, may cause rapid depolarization of HP$^{129}$Xe passing through the outlet tube. It is important to discuss the obvious discrepancy between the observed behavior of the 300-cc cells and the reasonable physical behavior of physical SEOP systems, particularly in terms of the laser absorption for a given wall temperature.
As described in \cite{Schrank2019ACode}, the model uses a simplified expression for the laser absorption term in the system of differential equations. The absorption presented by the simulated Rb in this configuration may be such that the difference between the simplified expression and the full expression becomes important, which results in a large discrepancy between the modeled absorption and that observed in physical SEOP systems.
\section{Introduction} Diffractive optical elements (DOE) are thin phase or amplitude elements that operate by means of diffraction to produce arbitrary distributions of light. Some examples of DOEs include diffraction gratings, Fresnel zone lenses~\cite{shiono1990blazed}, diffractive axicons~\cite{golub2006fresnel} and so on. DOEs are extensively used as beam shaping elements, lenses and other essential optical components. The typical design flow of DOEs is as follows: 1. calculation of the DOE phase profile, using techniques such as the Gerchberg-Saxton (G-S) algorithm~\cite{gerchberg1972practical}, the simplified mesh technique~\cite{bhattacharya2008simplified}, modulo $2\pi$ conversion of refractive elements or analytic equations~\cite{bhattacharya2017design}; 2. simulation of the optical fields using scalar diffraction equations; and 3. fabrication using lithography techniques followed by experimental verification. In this paper we present the implementation details of \textit{GDoeSII}, an all-in-one software package with a graphical user interface, which allows users to perform scalar diffraction simulations and then convert the phase profiles from standard image formats such as JPG and PNG to GDSII format for the subsequent lithography process. We believe this software will help graduate students and researchers working in the field of diffractive optics and allied fields. The following sections explain the implementation details of the two main modules of the software, which are: 1. the DOE module and 2. the GDSII conversion module. \section{Diffractive optics module} In this section we discuss the theoretical background behind the DOE simulator module. The geometry of the problem is shown in Fig.~\ref{fig:1}. We used the scalar diffraction integrals, namely the Fresnel and Fraunhofer diffraction integrals, to compute the intensity of the field distribution at a desired plane from the DOE.
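A minimal sketch of step 1 of this design flow, phase retrieval with the G-S algorithm, is shown below. This is our own illustrative phase-only implementation (function names and parameters are ours), not the routine used in \textit{GDoeSII}:

```python
import numpy as np

def gerchberg_saxton(target_amp, n_iter=50, seed=0):
    """Phase-only G-S iteration: returns a DOE phase whose far-field
    amplitude (|FFT|) approximates target_amp (given in FFT order)."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, target_amp.shape)
    source_amp = np.ones_like(target_amp)          # uniform illumination of the DOE
    field = source_amp * np.exp(1j * phase)
    for _ in range(n_iter):
        far = np.fft.fft2(field)
        far = target_amp * np.exp(1j * np.angle(far))      # impose target amplitude
        field = np.fft.ifft2(far)
        field = source_amp * np.exp(1j * np.angle(field))  # impose source amplitude
    return np.angle(field)
```

Each iteration projects alternately onto the two amplitude constraints, so the far-field error is non-increasing; the returned phase map is what would then be quantized and exported for fabrication.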
The software uses the Fresnel (\textit{near field}) diffraction integral, Eq.~\ref{eq:1}, to compute the complex field $U(x,y,z)$ at a distance $z$ from the DOE plane. \begin{figure}[!h] \centering \includegraphics[width = 11cm]{Geometry.PNG} \caption{Geometry of the planes for scalar diffraction theory} \label{fig:1} \end{figure} \begin{equation} U(x,y,z) = \frac{e^{jkz}}{j\lambda z}e^{j\frac{k}{2z}(x^2+y^2)}\int \int \{ U(\xi , \eta) e^{j\frac{k}{2z}(\xi^2+\eta^2)}\}e^{-j\frac{2\pi}{\lambda z}(x\xi + y\eta)}d\xi d\eta, \label{eq:1} \end{equation} where $x$, $y$ are the coordinates in the observation plane, $\xi,\eta$ are the coordinates in the DOE plane, $k$ is the wave vector and $\lambda$ is the wavelength of the light. A simplified version of Eq.~\ref{eq:1}, known as the Fraunhofer (\textit{far field}) diffraction integral and given in Eq.~\ref{eq:2}, is used to compute the complex field in the far field. \begin{equation} U(x,y) = \frac{e^{jkz}e^{j\frac{k}{2z}(x^2+y^2)}}{j\lambda z}\int \int U(\xi,\eta)e^{-j\frac{2\pi}{\lambda z}(x\xi+y\eta)}d\xi d\eta \label{eq:2} \end{equation} The above two equations yield reasonably accurate results under the following conditions~\cite{goodman2005introduction}: \begin{enumerate} \item The diffracting aperture must be large compared with the wavelength $\lambda$ \item The diffracting fields must not be observed too close to the aperture \end{enumerate}
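Eq.~\ref{eq:1} is commonly evaluated numerically with a single FFT. The sketch below is our own illustration of this standard single-FFT approach (the grid parameters and names are assumptions, not the actual \textit{GDoeSII} code):

```python
import numpy as np

def fresnel_propagate(U0, wavelength, z, dx):
    """Single-FFT Fresnel propagation of a square field U0 sampled at spacing dx.
    Returns the propagated field and the output-plane sampling interval."""
    N = U0.shape[0]
    k = 2 * np.pi / wavelength
    x = (np.arange(N) - N // 2) * dx
    X, Y = np.meshgrid(x, x)
    # quadratic phase applied inside the integral of Eq. (1)
    Uq = U0 * np.exp(1j * k / (2 * z) * (X ** 2 + Y ** 2))
    # the FFT implements the kernel exp(-j 2*pi (x*xi + y*eta) / (lambda*z))
    A = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(Uq))) * dx ** 2
    dx_out = wavelength * z / (N * dx)  # output-plane pixel size
    xo = (np.arange(N) - N // 2) * dx_out
    Xo, Yo = np.meshgrid(xo, xo)
    # leading phase factors outside the integral
    U1 = np.exp(1j * k * z) / (1j * wavelength * z) \
         * np.exp(1j * k / (2 * z) * (Xo ** 2 + Yo ** 2)) * A
    return U1, dx_out
```

For a centered circular aperture in the far-field regime, the resulting intensity is an Airy-like pattern peaked on axis, which is a convenient sanity check for the sampling choices.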
A typical image-to-GDSII conversion process creates a square pixel in the GDS file for each white or dark colored pixel in the image file. This process takes a very long time, and the output GDS file size is usually very large. In our approach, the algorithm detects runs of continuous same-valued pixels and groups them into a line segment in the GDS file. A line segment requires much less memory. The grouping method used in our program is depicted in Fig.~\ref{fig:grouping}. The algorithm can also convert the image file into $n$-layered GDSII files by quantizing the image to $n$ intensity levels. This is very useful for researchers working with 3D optical elements. Fig.~\ref{fig:2} shows the images converted using \textit{GDoeSII} into 2, 4, and 8 levels, respectively. This module offers another feature to create arrays of basic shapes such as circles, triangles and rectangles. \begin{figure}[!h] \centering \includegraphics[width = 2in]{Exp.PNG} \caption{Grouping of same valued pixels in a row} \label{fig:grouping} \end{figure} \begin{figure}[!h] \centering \subfloat[]{\includegraphics[height=1.75in]{kathakali_2.png}} \hspace{0.5cm} \subfloat[]{\includegraphics[height=1.75in]{2.PNG}} \\ \subfloat[]{\includegraphics[height=1.75in]{4.PNG}} \hspace{0.5cm} \subfloat[]{\includegraphics[height=1.75in]{8.PNG}} \caption{(a) Original image, Converted GDSII files (b) 2 level (c) 4 level (d) 8 levels} \label{fig:2} \end{figure} \par The following parameters were measured to assess the performance of the software. Fig.~\ref{fig:metrics} (a) shows the conversion times and output GDS file sizes for different sizes of the input image. Fig.~\ref{fig:metrics} (b) shows the conversion time and output GDS file size when a single image ($2000\times2000$) was converted to a multi-layered GDS file. However, these numbers can vary with the hardware capacity of the machine.
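The row-wise grouping of Fig.~\ref{fig:grouping} amounts to run-length encoding each image row, emitting one GDS element per run instead of one per pixel. A minimal sketch of this idea (our own illustration, not the exact \textit{GDoeSII} code) is:

```python
def group_rows(img):
    """Group runs of identical pixel values in each row into
    (row, start_col, end_col, value) segments, one GDS element per run."""
    segments = []
    for r, row in enumerate(img):
        start = 0
        for c in range(1, len(row) + 1):
            # close the current run at a value change or at end of row
            if c == len(row) or row[c] != row[start]:
                segments.append((r, start, c - 1, row[start]))
                start = c
    return segments

# A binary (2-level) row becomes three segments instead of six pixels:
print(group_rows([[0, 0, 1, 1, 1, 0]]))
# -> [(0, 0, 1, 0), (0, 2, 4, 1), (0, 5, 5, 0)]
```

For typical DOE patterns, which contain long uniform stretches, this reduces the element count (and hence file size and write time) by roughly the average run length.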
\begin{figure} \centering \subfloat[]{\includegraphics[width = 7 cm]{S1.png}} \subfloat[]{\includegraphics[width = 7 cm]{S2.png}} \caption{GDS file conversion time and output file size (a) for different image sizes (b) for the same image but different numbers of layers in the output GDS file} \label{fig:metrics} \end{figure} \section{Simulation and Experiment results} In this section we show the simulation, fabrication and experimental results for the case of an Airy beam~\cite{siviloglou2007observation}. An Airy beam is a non-diffracting solution of the paraxial diffraction equation. The Airy beam can be generated using a cubic phase profile, which is shown in Fig.~\ref{fig:3}. \begin{figure}[!h] \centering \subfloat[]{\includegraphics[height = 1.5in]{cub.png}} \hspace{0.5cm} \subfloat[]{\includegraphics[height = 1.5in]{Airy.png}} \caption{(a) Cubic phase profile (b) Airy beam intensity at $z$ = 10~cm (Fresnel)} \label{fig:3} \end{figure} \par The phase profile shown in Fig.~\ref{fig:3} is converted into GDSII format and fabricated using an Electron beam lithography (Raith 150 TWO) system. These results are summarized in Fig.~\ref{fig:4}. Fig.~\ref{fig:4} (c) shows the experimentally generated Airy beam, which matches the simulated intensity profile shown in Fig.~\ref{fig:3} (b).
\begin{figure}[!h] \centering \subfloat[]{\includegraphics[height = 1.5in]{Capture.PNG}} \hspace{0.1cm} \subfloat[]{\includegraphics[height = 1.5in]{fab.jpeg}} \hspace{0.1cm} \subfloat[]{\includegraphics[height = 1.5in]{AiryExp.jpg}} \caption{(a) GDSII converted image (b) Confocal microscope image of the DOE (c) Experimental CCD image of the Airy beam} \label{fig:4} \end{figure} \begin{figure}[!h] \centering\includegraphics[height = 1.5in]{Curve.png} \caption{Curved region in an example GDSII design} \label{fig:5} \end{figure} \section{Conclusion} We have introduced \textit{GDoeSII}, a Python-based software package for the Microsoft Windows platform which facilitates the computation of intensity distributions produced by diffractive optical elements. This program also enables users to convert phase profiles from image formats such as JPG and PNG to (possibly multi-layer) GDSII format with short conversion times and small file sizes. One problem with this conversion arises when there are curved features in the image file. Fig.~\ref{fig:5} shows a curved segment of the GDSII file, which clearly exhibits rough ridges. Future updates to the software will focus on further improvement of the algorithm to accurately convert curved elements. We hope that our openly available software will help researchers in the field of Optics and Nanofabrication. \section{Acknowledgements} \noindent R D thanks Dr. Vijaya Kumar, Dr. Soon Hock Ng, Vandhana Narayanan and Sripriya for their contribution in testing the software. \section*{References} \bibliographystyle{elsarticle-num}
\section{Introduction} In the interior of some compact stars the density is so high that the hadrons melt into their fundamental constituents, giving rise to quark matter. It has been known for a long time now that cold dense quark matter should exhibit the phenomenon of color superconductivity \cite{reviews}. It is our aim here to explain how a strong magnetic field affects this phenomenon. This is not simply an academic question: almost all compact stars sustain a strong magnetic field, of the order of $B \sim 10^{12} - 10^{14}$ G for pulsars, and of $B \sim 10^{14}-10^{15}$ G for magnetars. A comparison of the gravitational and magnetic energies of a compact star tells us that the maximum fields may be as high as $B \sim 10^{18}-10^{19}$ G. The common belief is that all the above mentioned compact objects are neutron stars, where the neutrons are in a superfluid phase, while the protons are in a superconducting one, probably of type I. An external magnetic field influences the color superconducting state both quantitatively and qualitatively \cite{MCFL}. We will explain here why this is so. This work has been done in collaboration with Efrain J. Ferrer and Vivian de la Incera. \section{Color-flavor locking phase} The ground state of QCD at high baryonic density with three light quark flavors is described by the (spin zero) condensates \cite{alf-raj-wil-99/537} \begin{equation} \langle q^{ai}_{L } q^{bj}_{L }\, \rangle =-\langle q^{ai}_{R } q^{bj}_{R }\, \rangle = \Delta_A \, \epsilon^{abc} \epsilon_{ijc} \ , \end{equation} where $q_{L/R}$ are Weyl spinors (a sum over spinor indices is understood), and $a,b$ and $i,j$ denote flavor and color indices, respectively. For simplicity we have neglected a color sextet component of the condensate, as it is a subleading effect. The diquark condensates lock the color and flavor transformations, breaking both; thus the name color-flavor locked (CFL) phase.
The symmetry breaking pattern in the CFL phase is $ SU(3)_C \times SU(3)_L \times SU(3)_R \times U(1)_B \rightarrow SU(3)_{C+L+R}$. There are only nine Goldstone bosons that survive the Anderson-Higgs mechanism. One is a singlet, scalar mode, associated with the breaking of the baryonic symmetry, and the remaining octet is associated with the breaking of the axial $SU(3)_A$ group, just like the octet of mesons in vacuum. An important feature of spin-zero color superconductivity is that although the color condensate has non-zero electric charge, there is a linear combination of the photon $A_{\mu}$ and a gluon $G^{8}_{\mu}$ that remains massless \cite{alf-raj-wil-99/537}, $ \widetilde{A}_{\mu}=\cos{\theta}\,A_{\mu}-\sin{\theta}\,G^{8}_{\mu} \ , $ while the orthogonal combination $\widetilde{G}_{\mu}^8=\sin{\theta}\, A_{\mu}+\cos{\theta}\,G^{8}_{\mu}$ is massive. In the CFL phase the mixing angle $\theta$ is sufficiently small ($\sin{\theta}\sim e/g\sim1/40$). Thus, the penetrating field in the color superconductor is mostly formed by the photon with only a small gluon admixture. The unbroken $U(1)$ group corresponding to the long-range rotated photon (i.e. the $\widetilde {U}(1)_{\rm e.m.}$) is generated, in flavor-color space, by $\widetilde {Q} = Q \times 1 - 1 \times Q$, where $Q$ is the electromagnetic charge generator. We use the conventions $Q = -\lambda_8/\sqrt{3}$, where $\lambda_8$ is the 8th Gell-Mann matrix. Thus our flavor-space ordering is $(s,d,u)$.
In the 9-dimensional flavor-color representation that we will use here (the color indices we are using are (1,2,3)=(b,g,r)), the $\widetilde{Q}$ charges of the different quarks, in units of $\widetilde{e} = e \cos{\theta}$, are \begin{equation} \label{q-charges} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline $s_{1}$ & $s_{2}$ & $s_{3}$ & $d_{1}$ & $d_{2}$ & $d_{3}$ & $u_{1}$ & $u_{2}$ & $u_{3}$ \\ \hline 0 & 0 & - & 0 & 0 & - & + & + & 0 \\ \hline \end{tabular} \end{equation} While a weak magnetic field only changes slightly the properties of the CFL superconductor, in the presence of a strong magnetic field the condensation pattern is changed, giving rise to a new phase, the magnetic color-flavor locked (MCFL) phase. \section{Magnetic color-flavor locking phase} An external magnetic field applied to the color superconductor penetrates it in the form of a ``rotated'' magnetic field $\widetilde{B}$. With respect to this long-ranged field, although all the superconducting pairs are neutral, a subset of them are formed by quarks with opposite rotated $\widetilde{Q}$ charges. Hence, it is natural to expect that these condensates will be affected by the penetrating field, as the quarks couple minimally to the rotated gauge field. Furthermore, one may expect that these condensates are strengthened by the penetrating field, since their paired quarks, having opposite $\widetilde{Q}$-charges and opposite spins, have parallel (instead of antiparallel) magnetic moments. The situation here has some resemblance to the magnetic catalysis of chiral symmetry breaking \cite{MC}, in the sense that the magnetic field strengthens the pair formation. Despite this similarity, the way the field influences the pairing mechanism in the two cases is quite different, as we will discuss later on. A strong magnetic field affects the flavor symmetries of QCD, as different quark flavors have different electromagnetic charges.
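The entries in (\ref{q-charges}) can be reproduced directly from the definition $\widetilde {Q} = Q \times 1 - 1 \times Q$ with $Q = \mathrm{diag}(-1/3,-1/3,2/3)$ in the $(s,d,u)$ basis; a quick numerical check using only these definitions:

```python
import numpy as np

# Q = -lambda_8/sqrt(3) = diag(-1/3, -1/3, 2/3) in the (s, d, u) flavor basis;
# the same matrix acts in the (1, 2, 3) = (b, g, r) color basis.
Q = np.diag([-1/3, -1/3, 2/3])
I3 = np.eye(3)

# Q~ = Q x 1 - 1 x Q on the 9-dimensional flavor-color space
Q_tilde = np.kron(Q, I3) - np.kron(I3, Q)

labels = [f + c for f in "sdu" for c in "123"]
for lab, q in zip(labels, np.diag(Q_tilde)):
    print(f"{lab}: {q:+.0f}" if abs(q) > 1e-12 else f"{lab}:  0")
```

The diagonal of $\widetilde{Q}$ comes out as $(0,0,-1,0,0,-1,+1,+1,0)$, matching the table, and it is traceless, as it must be for a generator of an unbroken $U(1)$ built from traceless pieces.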
For three light quark flavors, only the subgroup of $SU(3)_L \times SU(3)_R$ that commutes with $Q$, the electromagnetic generator, is a symmetry of the theory. Equally, in the CFL phase a strong $\widetilde{B}$ field should affect the symmetries of the theory, as $\widetilde{Q}$ does not commute with the whole locked $SU(3)$ group. Based on these considerations, we proposed the following diquark (spin zero) condensate \cite{MCFL} \begin{equation} \langle q^{ai}_{L } q^{bj}_{L }\, \rangle =-\langle q^{ai}_{R } q^{bj}_{R }\, \rangle = \Delta_A \, \epsilon^{ab3} \epsilon_{ij3} + \Delta_A^B \left( \epsilon^{ab2} \epsilon_{ij2} + \epsilon^{ab1} \epsilon_{ij1} \right) \ , \end{equation} and as in the CFL case, we have only considered the leading antitriplet color channel. For a discussion of the remaining allowed structures in the subleading sextet channel see \cite{MCFL}. Here we have been guided by the principle of highest symmetry, that is, the pair condensation should retain the highest permitted degree of symmetry, as then quarks of different colors and flavors will participate in the condensation process to guarantee a maximally attractive channel at the Fermi surface \cite{alf-raj-wil-99/537}. In the MCFL phase the symmetry breaking pattern is $ SU(3)_C \times SU(2)_L \times SU(2)_R \times U(1)^{(1)}_A\times U(1)_B \times U(1)_{\rm e.m.} \rightarrow SU(2)_{C+L+R} \times {\widetilde U(1)}_{\rm e.m.} $. Here the symmetry group $U(1)^{(1)}_A$ is related to a current which is an anomaly-free linear combination of the $u$, $d$ and $s$ axial currents, and such that $U(1)^{(1)}_A \subset SU(3)_A$. The locked $SU(2)$ group corresponds to the maximal unbroken symmetry, such that it maximizes the condensation energy. The counting of broken generators, after taking into account the Anderson-Higgs mechanism, tells us that there are only five Goldstone bosons.
As in the CFL case, one is associated with the breaking of the baryon symmetry; three Goldstone bosons are associated with the breaking of $SU(2)_A$, and another one with the breaking of $U(1)^{(1)}_A$. To study the MCFL phase we used a Nambu-Jona-Lasinio (NJL) four-fermion interaction abstracted from one-gluon exchange \cite{alf-raj-wil-99/537}. This simplified treatment, although it disregards the effect of the $\widetilde {B}$-field on the gluon dynamics and assumes the same NJL couplings for the system with and without magnetic field, keeps the main attributes of the theory, providing the correct qualitative physics. The NJL model is treated as the proper effective field theory to study color superconductivity in the regime of moderate densities. The model is defined by two parameters, a coupling constant $g$ and an ultraviolet cutoff $\Lambda$. The cutoff should be much higher than the typical energy scales in the system, that is, the chemical potential $\mu$ and the magnetic energy $\sqrt{\widetilde{e}\widetilde{B}}$. The MCFL gap equations for arbitrary values of the magnetic field are extremely difficult to solve, and they require a numerical treatment. However, we have found a situation where an analytical solution can be found. This corresponds to the case $\widetilde{e}\widetilde{B} >\mu^2/2$, where $\mu$ is the chemical potential. In this case, only charged quarks in the lowest Landau level contribute to the gap equation, a situation that drastically simplifies the analysis. In BCS theory, and in the presence of contact interactions, the fermionic gap has an exponential dependence on the inverse of the density of states close to the Fermi surface, which is proportional to $\mu^2$. Effectively, one can find that within the NJL model the CFL gap reads \begin{equation} \label{gapCFL} \Delta^{\rm CFL}_A \sim 2 \sqrt{\delta \mu} \, \exp{\Big( -\frac{3 \Lambda^2 \pi^2} {2 g^2 \mu^2} \Big) } \ . \end{equation} Here $\delta \equiv \Lambda - \mu$.
In the MCFL phase, when $\widetilde{e}\widetilde{B} >\mu^2/2$, we find instead \begin{equation} \label{gapBA} \Delta^B_A \sim 2 \sqrt{\delta \mu} \, \exp{\Big( - \frac{3 \Lambda^2 \pi^2} {g^2 \left(\mu^2 + \widetilde{e} \widetilde{B} \right)} \Big) } \ . \end{equation} For the value of the remaining gaps of the MCFL phase, see \cite{MCFL}. All the gaps feel the presence of the external magnetic field. As expected, the effect of the magnetic field in $\Delta^{B}_{A}$ is to increase the density of states, which enters in the argument of the exponential as typical of a BCS solution. The density of states appearing in (\ref{gapBA}) is just the sum of those of neutral and charged particles participating in the given gap equation (for each Landau level, the density of states around the Fermi surface for a charged quark is $\widetilde{e}\widetilde{B}/2 \pi^2$). The gap formed by $\widetilde{Q}$-neutral particles, although modified by the $\widetilde{B}$ field \cite{MCFL}, has a subleading effect in the MCFL phase. As mentioned at the beginning of this Section, the situation here shares some similarities with the magnetic catalysis of chiral symmetry breaking \cite{MC}; however, the way the field influences the pairing mechanism in the two cases is quite different. The particles participating in the chiral condensate are near the surface of the Dirac sea. The effect of a magnetic field there is to effectively reduce the dimension of the particles at the lowest Landau level, which in turn strengthens their effective coupling, catalyzing the chiral condensate. Color superconductivity, on the other hand, involves quarks near the Fermi surface, with a pairing dynamics that is already $(1+1)$-dimensional. Therefore, the ${\widetilde B}$ field does not yield further dimensional reduction of the pairing dynamics near the Fermi surface and hence the lowest Landau level does not have a special significance here. 
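The effect of the field on the gap can be made concrete by evaluating (\ref{gapCFL}) and (\ref{gapBA}) side by side. The parameter values below are illustrative choices of our own, not values quoted in the text:

```python
import numpy as np

# Illustrative (assumed) NJL-type parameters, all in GeV units; chosen only
# so that eB exceeds mu^2 and the field-enhanced exponent is visible.
Lam, mu, g, eB = 1.0, 0.5, 4.0, 0.35

delta = Lam - mu
prefac = 2.0 * np.sqrt(delta * mu)

# Eq. (gapCFL): zero-field CFL gap
gap_cfl = prefac * np.exp(-3 * Lam**2 * np.pi**2 / (2 * g**2 * mu**2))
# Eq. (gapBA): MCFL gap for the charged pairs in a strong field
gap_b = prefac * np.exp(-3 * Lam**2 * np.pi**2 / (g**2 * (mu**2 + eB)))

print(f"Delta_CFL = {gap_cfl:.4f} GeV, Delta_A^B = {gap_b:.4f} GeV")
```

Comparing the two exponents shows that the charged-pair gap exceeds the zero-field CFL gap once $\mu^2 + \widetilde{e}\widetilde{B} > 2\mu^2$, i.e. once the magnetic contribution to the density of states outweighs the missing neutral-quark contribution.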
Nevertheless, the field increases the density of states of the ${\widetilde Q}$-charged quarks, and it is through this effect, as shown in (\ref{gapBA}), that the pairing of the charged particles is reinforced by the penetrating magnetic field. \section{Conclusions} We have presented the arguments to explain why three light flavor color superconductivity is made stronger, not weaker, by the presence of magnetism. These arguments have been corroborated by an explicit computation of the quark gaps within a NJL model, in the regime of strong magnetic fields. To better understand the relevance of this new phase in astrophysics we need to explore the region of moderately strong magnetic fields $\widetilde{e}\widetilde{B}< \mu^2/2$, which requires to carry out a numerical study of the gap equations including the effect of higher Landau levels. The presence of a strong magnetic field affects the values of the quark gaps, and thus, it will modify the equation of state of the color superconductor, although we do not expect this to be a very pronounced effect. More drastically, the low energy physics of the MCFL phase would differ from that of the CFL phase, through the disappearance of light degrees of freedom. This fact will have consequences on several macroscopic properties of the superconductor, that we hope to explore soon.
\section{Introduction} Among various three-dimensional (3D) topological materials, ZrTe$_5$ is unique in that small volume changes \cite{HWeng_PRX2014,ZFan_SciRep2017}, small strains \cite{Mutch2019,YZhang_NatComm2017}, or even moderate temperature \cite{BXu_PRL2018} can induce a topological phase transition from a weak to a strong topological insulator (TI) phase. This unique aspect may be attributed to the fact that a single ZrTe$_5$ layer is a two-dimensional (2D) TI \cite{HWeng_PRX2014}. Before the experimental discovery of the weak TI $\beta$-type bismuth iodide (Bi$_4$I$_4$) \cite{Noguchi2019}, ZrTe$_5$ was the first realistic candidate for a weak TI. In order to drive the material from a weak to a strong TI by external means, the band gap must close at a critical value, which manifests a Dirac semimetal \cite{BJYang2014}. However, this Dirac semimetal phase is not protected by crystal rotational symmetry like Na$_3$Bi and Cd$_3$As$_2$ \cite{ZWang_PRB2012,ZWang_PRB2013,BJYang2014} or by nonsymmorphic group symmetry like $\beta$-cristobalite BiO$_2$ \cite{SMYoung_PRL2012}. The sensitivity of the topological phase to small changes of the lattice constant or temperature makes this material an ideal playground for exploring effects of external stimuli on topological properties. Furthermore, such sensitivity resulted in experimental observations of weak TI \cite{YZhang_NatComm2017,HXiong_PRB2017,YYLv_PRB2018}, strong TI \cite{YZhang_NatComm2017,GManzoni_PRL2016}, and Dirac semimetal phases \cite{JLZhang_PRL2017,RYChen_PRL2015,JWang_PNAS2018} in 3D ZrTe$_5$. Recent magnetotransport experiments on 3D ZrTe$_5$ showed interesting features including a large anomalous Hall effect as a function of the orientation of the external magnetic ({\bf B}) field (despite the absence of magnetic order) \cite{Liang2018,JGe2019} as well as a 3D quantum Hall effect and metal-insulator transition \cite{FTang_Nature2019}.
In general, the intrinsic anomalous Hall effect requires contributions of nonzero Berry curvature from occupied bands \cite{Di_Xiao2010,Nagaosa2010}. When the {\bf B} field is rotated out of plane, the anomalous Hall resistivity was observed to abruptly increase in an antisymmetric fashion or reveal strong asymmetry as a function of the field orientation \cite{Liang2018}. On the other hand, for the in-plane {\bf B} field, the anomalous Hall resistivity was observed to show clear antisymmetry as a function of the field orientation \cite{Liang2018}. The latter feature cannot be explained by the planar Hall effect ~\cite{Nandy_PRL2017,Burkov2017} alone. Theoretical efforts have so far been mostly limited to understanding topological nodal structures using lowest-order effective models when the {\bf B} field is parallel to the crystal axes \cite{RYChen_PRL2015}. Very recently, Burkov \cite{Burkov2018} proposed an effect of mirror anomaly on the intrinsic anomalous Hall conductivity (AHC) for Dirac semimetals when the {\bf B} field rotates. The anomalous Hall effect observed in Ref.~\cite{Liang2018} has not yet been understood theoretically. In order to provide insight into the origin of this intriguing anomalous Hall effect, we construct a Wannier-function-based tight-binding (WFTB) model for 3D ZrTe$_5$ from first-principles calculations and investigate topological phase transitions induced by Zeeman splitting while ignoring Landau levels. The magnitude of the {\bf B} field is fixed such that the Zeeman splitting is greater than a small band gap, while the {\bf B} field direction is varied within the crystal $ab$, $bc$, and $ac$ planes. The WFTB model predicts that a pair of type-I Weyl nodes are formed for any direction of the {\bf B} field except when the {\bf B} field is parallel to the $a$ or $b$ axis, considering crossings of the conduction and valence bands.
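The essential mechanism invoked here, a Zeeman splitting exceeding a small Dirac mass gap and producing a pair of Weyl nodes along the field axis, can be illustrated with a minimal toy Dirac Hamiltonian (our own four-band sketch in arbitrary units, not the WFTB model itself):

```python
import numpy as np

# Pauli matrices, used both for the spin (first factor) and orbital (second factor) sectors
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

v, m, b = 1.0, 0.2, 0.5   # velocity, mass gap, Zeeman energy (b > m), toy units

def H(kx, ky, kz):
    """Massive isotropic Dirac Hamiltonian H = v k.sigma x tau_x + m tau_z + b sigma_z."""
    Hd = v * (kx * np.kron(sx, sx) + ky * np.kron(sy, sx) + kz * np.kron(sz, sx))
    return Hd + m * np.kron(I2, sz) + b * np.kron(sz, I2)

# Band crossings are expected at kz = +/- sqrt(b^2 - m^2)/v along the field axis
kz0 = np.sqrt(b ** 2 - m ** 2) / v
gap = np.min(np.abs(np.linalg.eigvalsh(H(0.0, 0.0, kz0))))
print(f"minimal |E| at kz0 = {kz0:.3f}: {gap:.1e}")
```

At $k_z = \pm\sqrt{b^2 - m^2}/v$ the spectrum touches zero while it remains gapped at $k = 0$, i.e. the gapped Dirac point splits into two Weyl nodes once the Zeeman energy exceeds the mass gap; in the actual material the anisotropy and the {\bf B}-field direction then decide where (and whether) such nodes appear.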
This pair of Weyl nodes abruptly transforms into a nodal ring when the {\bf B} field aligns with the $a$ or $b$ axis, which conceptually differs from annihilation of Weyl nodes with opposite chirality at the same $k$ point. Interestingly, when the top two valence bands cross, the WFTB model suggests type-II topological nodal structures depending on the direction of {\bf B} field. We also show that the linearized $k \cdot p$ model is not enough to capture even qualitatively correct nodal structures for some {\bf B} field directions. We numerically compute the intrinsic AHC as a function of the orientation of {\bf B} field and chemical potential, using the WFTB model. Our results can be compared with the experimental antisymmetric component of the intrinsic out-of-plane AHC as a function of the tilting angle. We present the crystal structure and symmetries of ZrTe$_5$ in Sec.~\ref{sec2} and construction of the WFTB model in Sec.~\ref{sec3}. We show the calculated band structures using the WFTB model in the presence of {\bf B} field and discuss the induced topological phases as a function of {\bf B}-field direction in Sec.~\ref{sec4}. We compare our findings from the WFTB model with those from the linearized $k \cdot p$ model in Sec.~\ref{sec5}. Then we present and analyze the calculated AHC as a function of {\bf B}-field direction and chemical potential in Sec.~\ref{sec6}. We summarize our conclusions in Sec.~\ref{sec7}. \section{Crystal Structure and Symmetries}\label{sec2} \subsection{Crystal structure} Bulk ZrTe$_5$ crystallizes in the orthorhombic structure with space group $Cmcm$ (No. 63), $D_{2h}$, where the experimental lattice constants are $a=3.9797$, $b=14.470$, and $c=13.676$~\AA~\cite{Fjellvag1986}. A primitive unit cell [Fig.~\ref{fig:geo}(a) and (b)] contains two Zr and ten Te atoms. The Zr atoms (green) are located at Wyckoff position $4c$, and the two Te atoms (purple) and eight Te atoms (orange) are at $4c$ and $8f$, respectively. 
Each 2D zigzag layer connected along the $a$ and $c$ axes is well separated by $\frac{b}{2}$ and stacked along the $b$ axis with weak van der Waals interaction. Each Zr atom is bonded with eight Te atoms [Fig.~\ref{fig:geo}(b)]. We consider the following Bravais lattice vectors for the primitive unit cell: {\bf a}$_1$=($\frac{a}{2}$,$-\frac{b}{2}$,0), {\bf a}$_2$=($\frac{a}{2}$,$\frac{b}{2}$,0), {\bf a}$_3$=(0,0,$c$) in Cartesian coordinates. In our convention, the $a$, $b$, and $c$ axes are the $x$, $y$, and $z$ axes in Cartesian coordinates. The corresponding reciprocal lattice vectors are {\bf b}$_1$=2$\pi$($\frac{1}{a}$,$-\frac{1}{b}$,0), {\bf b}$_2$=2$\pi$($\frac{1}{a}$,$\frac{1}{b}$,0), and {\bf b}$_3$=2$\pi$(0,0,$\frac{1}{c}$). The first Brillouin zone (BZ) is shown in Fig.~\ref{fig:geo}(c), where $S$=(0,$\frac{1}{2}$,0), $\Gamma$=(0,0,0), $Z$=(0,0,$\frac{1}{2}$), $R$=(0,$\frac{1}{2}$,$\frac{1}{2}$), $Y$=($-\frac{1}{2}$,$\frac{1}{2}$,0), and $T$=($-\frac{1}{2}$,$\frac{1}{2}$,$\frac{1}{2}$), in fractional coordinates. These high-symmetry $k$ points are equivalent to $X$, $\Gamma$, $Y$, $M$, $Z$, and $R$ in Ref.~\cite{HWeng_PRX2014}, respectively. The zone-boundary point $X$ is located at ($\eta$,$\eta$,0), where $\eta=\frac{1}{4}(1+\frac{a^2}{b^2})$. In order to examine the effect of Zeeman splitting, we apply 0.25\% compressive uniaxial strain along the $b$ axis to the DFT-relaxed unstrained geometry while keeping the volume fixed, such that ZrTe$_5$ remains in a strong TI phase with a small band gap in the presence of spin-orbit coupling (SOC). For reference, the relaxed unstrained lattice constants are $a=4.0341$, $b=14.6998$, and $c=13.8843$~\AA. In the strained case, the lattice constants are $a=4.0391$, $b=14.6630$, and $c=13.9017$~\AA. The results obtained from the WFTB model correspond to the 0.25\% strained structure.
\begin{figure}[htb] \centering \includegraphics[width=0.45 \textwidth]{fig1.eps} \caption[geofig]{(a)-(b) Side views of ZrTe$_5$ unit cell. Zr atoms at Wyckoff position $4c$ are green, and Te atoms at $4c$ ($8f$) are purple (orange). Here ${\bf a}_1$, ${\bf a}_2$, and ${\bf a}_3$ are the primitive cell Bravais lattice vectors. (c) First BZ with high-symmetry $k$ points and reciprocal lattice vectors shown.} \label{fig:geo} \end{figure} \subsection{Symmetries}\label{sec2:symmetries} 3D ZrTe$_5$ has inversion symmetry as well as the following crystal symmetries: two-fold rotational symmetries along the $a$ and $b$ axes ($C_{2a}$ and $C_{2b}$), two-fold screw symmetry along the $c$ axis, mirror symmetries about the $ab$ and $bc$ planes ($M_{ab}$, $M_{bc}$), and glide mirror symmetry about the $ac$ plane ($M_{ac}$). Since the inversion center does not coincide with the origin of the rotational symmetries, the space group is nonsymmorphic. Inversion symmetry persists even in the presence of {\bf B} field. Depending on the direction of {\bf B} field, the following symmetries can survive: (screw) $C_2$ symmetry about the {\bf B} field direction and the (glide) mirror symmetry about the plane perpendicular to the {\bf B} field, or $C_{2\perp}{\cal T}$ where $C_{2\perp}$ is $C_2$ symmetry about the direction perpendicular to the {\bf B} field, and ${\cal T}$ is the time-reversal operator. \section{Construction of Wannier-function tight-binding model}\label{sec3} We first calculate the electronic structure of bulk strained ZrTe$_5$ without SOC and ${\bf B}$ field by using the density-functional theory (DFT) code {\sc VASP} \cite{VASP1,VASP2}. With the DFT-calculated band structure and initial atomic orbitals, we generate Wannier functions (WFs) by using {\sc Wannier90} \cite{Wannier90}. Then we construct a SOC-free tight-binding model from the WFs and add atomic-like SOC to the tight-binding model such that the model-calculated band structure agrees with the DFT result. 
Last, we add Zeeman energy to the tight-binding model. \subsection{Initial DFT calculations}\label{sec3:DFT} We perform the DFT calculations using {\sc VASP} \cite{VASP1,VASP2} within the Perdew-Burke-Ernzerhof (PBE) generalized-gradient approximation (GGA) \cite{Perdew1996} for the exchange-correlation functional with and without SOC. We use projector augmented wave (PAW) pseudopotentials \cite{Bloch1994} with an energy cutoff of 350 eV and a $19 \times 19 \times 5$ Monkhorst-Pack $k$-point mesh. For the experimental geometry~\cite{Fjellvag1986}, our DFT calculation shows that bulk ZrTe$_5$ with SOC is in a strong TI phase with a direct band gap of about 100~meV. The structure with 0.25\% compressive strain along the $b$ axis has a band gap of 2.2~meV. All calculated band structures from the WFTB model correspond to the strained structure, unless specified otherwise. \subsection{SOC-free Hamiltonian}\label{sec3:spinfree} In order to construct the SOC-free WFTB model, we start with an initial set of 40 projected atomic orbitals comprised of $d_{z^2}$, $d_{x^2-y^2}$, $d_{xy}$, $d_{yz}$, and $d_{xz}$ orbitals centered at two Zr sites and $p_{x}$, $p_{y}$, and $p_{z}$ orbitals centered at ten Te sites in the primitive unit cell. We need to include both Zr $d$ orbitals and Te $p$ orbitals in the WFTB model due to their large contributions to valence and conduction bands, respectively, near $\Gamma$ indicated by our DFT calculations. With the VASP-calculated Bloch eigenvalues and eigenstates, we compute the overlap matrix and projection matrix at each DFT-sampled $k$ point by using {\sc Wannier90} \cite{Wannier90,Marzari1997,Souza2001}. We apply the disentanglement procedure within the outer energy window $[-8.23, 5.27]$~eV relative to the Fermi level. In this energy window, the number of Bloch bands ranges from 43 to 47, and both occupied and unoccupied bands are included. We check that the generated WFs are close to pure atomic orbitals with only real components. 
In order to maintain the features of the atomic orbitals, maximal localization is not applied in the wannierization. Now we construct the SOC-free WFTB model by using the generated WFs, $|{\mathbf R}+{\mathbf s}_{\beta}\rangle$, centered at ${\mathbf R}+{\mathbf s}_{\beta}$, where ${\mathbf R}$ are Bravais lattice vectors and ${\mathbf s}_{\beta}$ denote the sites of orbital ${\beta}$ ($\beta=1,\ldots,40$). The SOC-free Hamiltonian matrix ${\cal H}_0$ \cite{Marzari1997} reads \begin{eqnarray} {\cal H}_{0,\alpha \beta}({\mathbf k}) &=& \langle \psi_{{\mathbf k},\alpha}| {\cal H}_0 |\psi_{{\mathbf k},\beta}\rangle, \\ &=& \sum_{{\mathbf R}} e^{-i {\mathbf k} \cdot ({\mathbf R}+{\mathbf s}_{\alpha}-{\mathbf s}_{\beta})} t_{\alpha \beta}({\mathbf R}-{\mathbf 0}), \label{eq:Hab} \\ t_{\alpha \beta}({\mathbf R}-{\mathbf 0}) &=& \langle {\mathbf R} + {\mathbf s}_{\alpha}| {\cal H}_0 | {\mathbf 0} + {\mathbf s}_{\beta} \rangle, \end{eqnarray} where $|\psi_{{\mathbf k},\alpha}\rangle$ are the Bloch basis states at crystal momentum ${\mathbf k}$. Here $t_{\alpha \beta}({\mathbf R}-{\mathbf 0})$ is a hopping or tunneling parameter from orbital $\beta$ at site ${\mathbf s}_{\beta}$ in the home cell at ${\mathbf R}={\mathbf 0}$ to orbital $\alpha$ at site ${\mathbf s}_{\alpha}$ in the unit cell located at ${\mathbf R}$. \subsection{Addition of SOC and Zeeman energy} We add on-site SOC to the home-cell terms since the generated WFs are close to the atomic orbitals that we project onto. Considering the spin degrees of freedom, the number of WFs (basis set functions) is now 80. The SOC Hamiltonian becomes ${\cal H}_{\small SOC} = \lambda \mathbf{L}\cdot {\mathbf{\sigma}}$, where $\lambda$ is the SOC parameter, ${\mathbf L}$ is the orbital angular momentum, and ${\mathbf {\sigma}}$ is the vector of Pauli spin matrices.
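As an illustration of the Bloch sum in Eq.~(\ref{eq:Hab}), the following minimal Python sketch assembles a Bloch Hamiltonian from a hypothetical two-orbital set of real-space hoppings (toy values, not the actual 40-orbital WFTB parameters) and checks that the phase convention including the orbital positions ${\mathbf s}_{\alpha}$ yields a Hermitian matrix:

```python
import numpy as np

# Toy illustration of the Bloch sum H_0(k): phases include the orbital
# positions s_alpha, s_beta ("atomic gauge").  The two-orbital hoppings
# below are made up for illustration; the actual WFTB model has 40 orbitals
# with hoppings extracted by Wannier90.
a1, a2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # toy 2D lattice
s = [np.array([0.0, 0.0]), np.array([0.5, 0.5])]      # orbital sites

# hoppings t[((n1, n2), alpha, beta)]; Hermiticity of H_0(k) requires
# t(-R, beta, alpha) = t(R, alpha, beta)^*
hops = {
    ((0, 0), 0, 0): 1.0, ((0, 0), 1, 1): -1.0,
    ((0, 0), 0, 1): 0.3, ((0, 0), 1, 0): 0.3,
    ((1, 0), 0, 0): 0.2, ((-1, 0), 0, 0): 0.2,
}

def bloch_h(k):
    """Bloch Hamiltonian H_0(k) from the real-space hoppings above."""
    H = np.zeros((2, 2), dtype=complex)
    for (n1, n2), a, b in hops:
        R = n1 * a1 + n2 * a2
        H[a, b] += hops[((n1, n2), a, b)] * np.exp(
            -1j * np.dot(k, R + s[a] - s[b]))
    return H

H = bloch_h(np.array([0.3, 0.7]))
assert np.allclose(H, H.conj().T)   # Hermitian for a valid hopping set
```

In the actual model, the hoppings $t_{\alpha\beta}({\mathbf R})$ are taken from the {\sc Wannier90} output, and the same Bloch sum produces the $40\times 40$ (with spin, $80\times 80$) matrix at each $k$ point.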
We find that with $\lambda_{\small \rm{Zr}}=-0.12$~eV and $\lambda_{\small \rm{Te}}=0.60$~eV, the WFTB-calculated band structure agrees well with the VASP-calculated band structure, especially near the $\Gamma$ point in the vicinity of the Fermi level. We add the {\bf B} field as a Zeeman term only and do not include the Peierls phase in the hopping parameters $t_{\alpha \beta}$ because Landau levels are ignored in our calculations. Then the Zeeman interaction reads ${\cal H}_{Z} = g \mu_{\mathrm B} \mathbf{S} \cdot \mathbf{B}$, where $g$ is the effective electronic $g$ factor, $\mu_B$ is the Bohr magneton, and ${\mathbf S}$ is the spin angular momentum. The WFTB model has the following Hamiltonian: \begin{eqnarray} {\cal H} &=& {\cal H}_0 + {\cal H}_{\small SOC} + {\cal H}_Z. \label{eq:Htot} \end{eqnarray} Hereafter we consider a Zeeman interaction energy of 10~meV unless specified otherwise. Experimental data on ZrTe$_5$ indicate that the $g$ factor is highly anisotropic. The $g$ factor when the {\bf B} field is along the $b$ axis ($g_y$) is 21.3 \cite{YLiu2016,RYChen_PRL2015}, whereas the $g$ factor for the $a$ axis ($g_x$) is about 3.19 \cite{YLiu2016}. The $g$ factor for the $c$ axis ($g_z$) has not been reported. Therefore, for a {\bf B} field along the $b$ axis, the Zeeman energy of 10~meV corresponds to a {\bf B} field of about 16~T, considering that the Zeeman energy can be expressed as $\sqrt{g_x^2 B_x^2 + g_y^2 B_y^2 + g_z^2 B_z^2} \frac{\mu_B}{2}$. As long as the Zeeman energy is greater than the band gap, the topological phase transitions presented in this work can be realized. For the strained case, the band gap of 2.2~meV implies a minimum required {\bf B} field of 3.6~T when the {\bf B} field is applied along the $b$ axis.
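The quoted field scales follow from a back-of-the-envelope inversion of the Zeeman-energy expression for a field along a single crystal axis (an illustrative check, not part of the WFTB calculation):

```python
# Back-of-the-envelope check of the quoted field scales, using the reported
# anisotropic g factor g_y ~ 21.3 for B || b.  For a field along one crystal
# axis i, the Zeeman energy is E_Z = g_i * mu_B * B / 2.
MU_B = 5.7883818060e-5  # Bohr magneton in eV/T

def field_for_zeeman(e_z_ev, g):
    """Field B (tesla) that produces Zeeman energy e_z_ev (eV) for g factor g."""
    return 2.0 * e_z_ev / (g * MU_B)

g_y = 21.3
print(round(field_for_zeeman(0.010, g_y), 1))   # 16.2 -> "about 16 T" for 10 meV
print(round(field_for_zeeman(0.0022, g_y), 1))  # 3.6  -> minimum field to exceed the 2.2 meV gap
```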
\subsection{Comparison of WFTB-calculated to DFT-calculated band structure with and without SOC}\label{sec3:WFTBSOC} \begin{figure}[htb] \begin{center} \includegraphics[width=0.4 \textwidth]{Comp_bands_withoutSOC.eps} \hspace{0.5truecm} \includegraphics[width=0.4 \textwidth]{Comp_bands_wSOC.eps} \caption[a figure]{(a) Comparison between the VASP-calculated (black) and the WFTB-calculated (red) band structures for strained ZrTe$_5$ without SOC in the primitive unit cell. (b) Likewise with SOC included. The dashed lines indicate the Fermi levels. No {\bf B} field is included here.} \label{fig:bulk} \end{center} \end{figure} In the absence of the Zeeman term, we compare the DFT-calculated band structure of bulk strained ZrTe$_5$ to the WFTB-calculated result with and without SOC, as shown in Fig.~\ref{fig:bulk}. Without SOC, the valence and conduction bands meet linearly at a Dirac point along the $\overline{\Gamma Y}$ direction or $\pm y$ axis. The WFTB-calculated band structure agrees well with the DFT result up to about $\pm$1.0 eV from the Fermi level. With SOC, a small band gap of 2.2 meV opens up at $\Gamma$ which is also reliably captured by the WFTB model. We calculate the 3D topological indices ($\nu_0$; $\nu_1$,$\nu_2$,$\nu_3$) \cite{LiangFu2007} of strained ZrTe$_5$ using the DFT-calculated wave function. For the reciprocal vector ${\mathbf G}=\nu_1{\mathbf b}_1 + \nu_2{\mathbf b}_2 + \nu_3{\mathbf b}_3$, we find that ($\nu_0$; $\nu_1$,$\nu_2$,$\nu_3$)=(1;110). Since $\nu_0=1$, strained ZrTe$_5$ is a strong TI. Each band is at least doubly degenerate due to the inversion and time-reversal symmetries. In the $k_z=\frac{\pi}{2}$ plane, the four time-reversal invariant $k$ points are fourfold degenerate due to the additional mirror symmetry $M_{ab}$. 
\section{Zeeman-splitting induced topological phases from WFTB model}\label{sec4} In the presence of {\bf B} field, we diagonalize the $80 \times 80$ Hamiltonian matrix, Eq.~(\ref{eq:Htot}), with the same $k$ points as the DFT calculation. We examine the topological properties and evolution of the nodal structure as a function of {\bf B}-field direction for a fixed Zeeman energy of 10~meV with an isotropic $g$ factor ($g=2.0$) for simplicity. We consider cases in which the {\bf B} field is applied along the $a$, $b$, and $c$ axes as well as in the $ab$, $bc$, and $ac$ planes. The energy window of interest is $[-0.12,+0.05]$~eV relative to the Fermi level $E_{\rm F}$, considering that the bulk ZrTe$_5$ samples studied in Ref.~\cite{Liang2018} are slightly hole-doped. Within this energy window, the following three types of gapless crossings are in principle possible: crossings between the bottom two conduction bands, crossings between the bottom conduction and top valence bands, and crossings between the top two valence bands. However, we do not find crossings between the bottom two conduction bands for any {\bf B}-field direction. We focus on the latter two types of crossing only. \subsection{Magnetic field along the $a$ axis: Nodal-ring semimetal}\label{sec4:aaxis} Figure~\ref{fig:Ba-nodal}(a) shows the WFTB-calculated band structure along the $Y-\Gamma-Z$ direction near $\Gamma$ in the vicinity of the Fermi level, when the {\bf B} field is applied along the $a$ axis. The gapless points are found from crossings between the top valence and bottom conduction bands in the $k_b$-$k_c$ plane, and they form a ring, as shown in Fig.~\ref{fig:Ba-nodal}(b). Since the two bands meet with opposite slopes, this is a type-I nodal ring. Note that with the {\bf B} field, the relevant remaining symmetries are inversion symmetry, $C_{2a}$, and $M_{bc}$. From the eigenvectors of the WFTB model, we confirm that the two crossing bands have opposite $M_{bc}$ mirror eigenvalues.
The intercepts with the $k_b$ and $k_c$ axes are (0, $\pm$0.003072, 0) and (0, 0, $\pm$0.000673)~$2\pi\cdot$\AA$^{-1}$, respectively. The gapless ring is not an equi-energy curve, and no other gapless points are found within the energy window of interest. \begin{figure}[htb] \begin{center} \includegraphics[width=0.2 \textwidth]{Fig3a_Sep02_2019.eps} \hspace{0.2truecm} \includegraphics[width=0.25 \textwidth]{Fig3b.eps} \caption[a figure]{(a) WFTB-calculated band structure relative to the Fermi level $E_{\rm F}$ along the $Y-{\Gamma}-Z$ direction when the {\bf B} field aligns with the $a$ axis. (b) The corresponding gapless nodal ring in the $k_b$-$k_c$ plane with the energy gap in color scale.} \label{fig:Ba-nodal} \end{center} \end{figure} In order to identify the topological nature of the gapless ring, we compute the Berry phase ${\varphi}_{\rm B}$ around a closed circle ${\cal C}$ interlocking the gapless ring. The Berry phase is defined as a sum of line integrals of the Berry connection of all occupied bands $n$, ${\mathbf A}_n({\mathbf k})$, over a closed path ${\cal C}$ in $k$ space: \begin{equation} \varphi_{\rm B} = \sum_{n=1}^{\rm{occ}} \oint_{{\cal C}} d{\bf k} \cdot {\mathbf A}_n({\mathbf k}), \end{equation} where ${\mathbf A}_n({\mathbf k})=i \langle u_{n{\mathbf k}}|{\mathbf \nabla_{\mathbf k}}u_{n{\mathbf k}}\rangle$. Here $u_{n{\mathbf k}}$ is the periodic part of the Bloch state. We find that the Berry phase is $\pi$. Thus, the ring of the gapless points is indeed a topological nodal ring. \subsection{Magnetic field along the $b$ axis: Nodal-ring semimetal}\label{sec4:baxis} Figure~\ref{fig:Bb-nodal}(a) shows the calculated band structure along the $S-\Gamma-Z$ direction when the {\bf B} field aligns with the $b$ axis. The bottom conduction and top valence bands meet near the Fermi level along the $\Gamma-Z$ and $\Gamma-X$ directions (not shown), whereas a small gap opens up along the $\Gamma-S$ direction.
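In practice we evaluate the Berry phase on a discretized loop as a Wilson-loop product of overlaps. The Python sketch below illustrates this on a minimal two-band crossing (a toy stand-in for the 80-band WFTB Hamiltonian): a loop encircling the node yields a quantized phase of $\pi$, just as a loop interlocking the nodal ring does:

```python
import numpy as np

# Discrete Wilson-loop estimate of the Berry phase on a toy two-band
# crossing H(k) = kx*sigma_x + ky*sigma_y (not the WFTB model).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def lower_state(kx, ky):
    """Occupied (lower-band) eigenvector at (kx, ky)."""
    return np.linalg.eigh(kx * sx + ky * sy)[1][:, 0]

def berry_phase(radius=0.1, n=400):
    """Berry phase around a circle of given radius centered on the node."""
    ang = np.linspace(0, 2 * np.pi, n, endpoint=False)
    states = [lower_state(radius * np.cos(t), radius * np.sin(t)) for t in ang]
    prod = 1.0 + 0j
    for i in range(n):
        prod *= np.vdot(states[i], states[(i + 1) % n])
    return -np.angle(prod)   # gauge invariant for a closed loop, mod 2*pi

phi = berry_phase()
assert abs(abs(phi) - np.pi) < 1e-6   # quantized Berry phase of pi
```

The product of overlaps is gauge invariant for a closed loop, so the arbitrary phases returned by the diagonalization drop out.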
Similar to Sec.~\ref{sec4:aaxis}, a type-I ring of gapless points is found in the $k_a$-$k_c$ plane near $\Gamma$ in the vicinity of the Fermi level [Fig.~\ref{fig:Bb-nodal}(b)]. The gapless points intercept the $k_a$ and $k_c$ axes at ($\pm$0.000743, 0, 0) and (0, 0, $\pm$0.000840)~$2\pi\cdot$\AA$^{-1}$. We expect that the gapless crossings are allowed because the two crossing bands have opposite $M_{ac}$ glide mirror eigenvalues. We find that the Berry phase is $\pi$; the gapless ring is a topological nodal ring. No other gapless crossings are found in the energy window of interest. \begin{figure}[htb] \begin{center} \includegraphics[width=0.24 \textwidth]{Fig4a_Sep02_2019.eps} \hspace{0.2truecm} \includegraphics[width=0.21 \textwidth]{Fig4b.eps} \caption[a figure]{(a) WFTB-calculated band structure along the $S-\Gamma-Z$ directions when {\bf B} field is parallel to the $b$ axis. (b) The corresponding gapless nodal ring in the $k_a$-$k_c$ plane.} \label{fig:Bb-nodal} \end{center} \end{figure} \subsection{Magnetic field along the $c$ axis: Weyl or type-II nodal-ring semimetal}\label{sec4:caxis} When the {\bf B} field aligns with the $c$ axis, the valence and conduction bands meet with opposite slope along the $c$-axis at (0, 0, $\pm$0.000624)~2$\pi$$\cdot$\AA$^{-1}$ in the vicinity of the Fermi level as shown in Fig.~\ref{fig:Bc-weyl}(a). We evaluate the topological charge $\chi_n$ associated with the gapless points by computing the Berry curvature ${\mathbf \Omega}_n({\mathbf k})$ using the method discussed in {\sc WannierTools} \cite{WTOOLS} and Ref.~\cite{Villanova2018}. 
The Berry curvature can be calculated as \cite{XWang2007} \begin{widetext} \begin{equation} \epsilon_{\alpha\beta\gamma}\Omega_{n,\gamma}({\bf k}) = -2 {\rm Im} \sum_{m \neq n} \frac{\langle \phi_n({\bf k}) |{\cal H}_{\alpha}|\phi_m({\bf k}) \rangle \langle \phi_m({\bf k}) |{\cal H}_{\beta}|\phi_n({\bf k}) \rangle}{({\cal E}_m({\bf k}) - {\cal E}_n({\bf k}))^2}, \label{eq:omega} \end{equation} \end{widetext} where ${\cal H}_{\alpha}\equiv \partial {\cal H} \slash \partial k_{\alpha}$. Here $|\phi_n({\bf k}) \rangle$ and ${\cal E}_n({\bf k})$ are the $n$-th eigenvector and eigenvalue of ${\cal H}({\bf k})$ [Eq.~(\ref{eq:Htot})], and $\epsilon_{\alpha\beta\gamma}$ is the Levi-Civita tensor without a sum over $\gamma$. The topological charge $\chi_n$ of each gapless point (arising from a crossing of band $n$ and band $n+1$) is then calculated by enclosing it in a small sphere ${\cal S}$, \begin{equation} \chi_n = \frac{1}{2\pi}\oint_{{\cal S}} \sum_{l=1}^{n} dS \ \hat{\mathbf{n}} \cdot \mathbf{\Omega}_l(\mathbf{k}), \label{eq:chi} \end{equation} where $\hat{\mathbf{n}}$ is a unit vector normal to ${\cal S}$. We find that the topological charges associated with the two gapless points are $\chi_n=\mp 1$, respectively, and so they are type-I Weyl points. The two Weyl points are related by inversion symmetry. In addition, we find that the top two valence bands touch in the vicinity of the Fermi level, as shown in Fig.~\ref{fig:Bc-weyl}(b) and (c). The gapless points form a nodal ring in the $k_a$-$k_b$ plane [Fig.~\ref{fig:Bc-weyl}(d)]. There are two interesting features of this ring. First, the two crossing bands disperse with slopes of the same sign near the gapless points, and so the ring is referred to as a type-II nodal ring. Note that the nodal rings discussed earlier in Secs.~\ref{sec4:aaxis} and \ref{sec4:baxis} are of type-I. Second, the type-II nodal ring extends to the neighboring BZ, forming a closed thin cigar shape.
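The flux integral in Eq.~(\ref{eq:chi}) can be evaluated on a discretized sphere with gauge-invariant link variables. The sketch below (a toy stand-in, using the minimal Weyl Hamiltonian $H={\bf k}\cdot{\bf \sigma}$ rather than the WFTB bands) illustrates the procedure and recovers a unit topological charge:

```python
import numpy as np

# Link-variable (Fukui-Hatsugai-Suzuki style) estimate of the topological
# charge: Berry-curvature flux of the lower band through a small sphere
# around a node.  Toy Weyl Hamiltonian H = k . sigma, not the WFTB model.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)

def lower_band(k):
    return np.linalg.eigh(k[0]*sx + k[1]*sy + k[2]*sz)[1][:, 0]

def weyl_charge(r=0.05, n=60):
    """Flux of the lower-band Berry curvature through a sphere of radius r."""
    th = np.linspace(1e-4, np.pi - 1e-4, n)   # exclude the poles
    ph = np.linspace(0.0, 2*np.pi, n)
    u = [[lower_band(r*np.array([np.sin(t)*np.cos(p),
                                 np.sin(t)*np.sin(p),
                                 np.cos(t)])) for p in ph] for t in th]
    flux = 0.0
    for i in range(n - 1):
        for j in range(n - 1):
            # gauge-invariant phase around one (theta, phi) plaquette
            w = (np.vdot(u[i][j], u[i+1][j]) * np.vdot(u[i+1][j], u[i+1][j+1])
                 * np.vdot(u[i+1][j+1], u[i][j+1]) * np.vdot(u[i][j+1], u[i][j]))
            flux += np.angle(w)
    return flux / (2*np.pi)

chi = weyl_charge()
assert abs(abs(chi) - 1.0) < 1e-2   # unit topological charge
```

For the full $80\times 80$ model the same quantity is obtained with {\sc WannierTools}, as noted above.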
The size of the type-II nodal ring is much larger than the type-I nodal rings discussed in Secs.\ref{sec4:aaxis} and \ref{sec4:baxis}. We evaluate the Berry phase around the circle interlocking the ring, finding that it is indeed $\pi$; the ring is topologically protected by $M_{ab}$ symmetry. \begin{figure}[htb] \begin{center} \includegraphics[width=0.47\textwidth]{Fig5_paneled.eps} \caption[Nodal lines]{(a)-(c) Band structures when {\bf B} field is parallel to the $c$ axis. For (c), $k_y$ is fixed to be 0.034100$~$2${\pi}$$\cdot$\AA$^{-1}$. (d) Type-II nodal ring in the $k_a$-$k_b$ plane with the gap size in color scale. Here the yellow region indicates the whole first BZ.} \label{fig:Bc-weyl} \end{center} \end{figure} \subsection{Magnetic field in the $ab$ plane}\label{sec4:abplane} \begin{figure*}[htb] \begin{center} \includegraphics[width=0.7\textwidth]{paneled3.eps} \caption[Bab]{(a)-(g) Schematic evolution of the nodal ring and Weyl points as a function of the angle between {\bf B} field and the $a$ axis, $\phi$, (h) WFTB-calculated Berry curvature in the $k_a$-$k_b$ plane at $\phi=45^{\circ}$, and (i) WFTB-calculated evolution of the nodal structure as a function of $\phi$, when the {\bf B} field is in the $ab$ plane. In (a), (d), and (g), the blue plane indicates the mirror-symmetry plane where the topological nodal ring resides. In (b), (c), (e), and (f), the red and blue filled circles correspond to Weyl points with topological charge $\chi$ of $+1$ and $-1$, respectively. In (h), the topological charge of the Weyl points is denoted, and the energy gap is shown in color scale. In (i), the color scale indicates the value of $\phi$.} \label{fig:Bab} \end{center} \end{figure*} Figure~\ref{fig:Bab}(a)-(g) schematically shows the evolution of the type-I nodal ring and Weyl points as the {\bf B} field rotates in the $ab$ plane. 
When the {\bf B} field is slightly rotated from the $b$ axis in the $ab$ plane, the WFTB model shows that the type-I nodal ring in the $k_a$-$k_c$ plane is abruptly gapped out (due to broken mirror symmetry) everywhere but at two gapless points, transforming into a pair of type-I Weyl nodes ($\chi_n={\pm}1$) lying in the $k_a$-$k_b$ plane. As the angle $\phi$ between the {\bf B} field and the $a$ axis further decreases, the Weyl points initially close to the $k_a$ axis evolve toward the $k_b$ axis in the $k_a$-$k_b$ plane. Then when the {\bf B} field aligns with the $a$ axis ($\phi=0$), the pair of Weyl points suddenly transforms into a nodal ring in the $k_b$-$k_c$ plane (with the restoration of a mirror symmetry). As the {\bf B} field continues to rotate clockwise beyond the positive $a$ axis ($\phi < 0$), the nodal ring transforms into a pair of Weyl points whose chiralities are now exchanged. For example, Fig.~\ref{fig:Bab}(h) shows the calculated Berry curvature with a pair of Weyl nodes obtained from the WFTB model at ${\phi}=45^{\circ}$. Figure~\ref{fig:Bab}(i) summarizes the calculated evolution of the nodal ring and Weyl points as a function of $\phi$ for 0 $\le$ $\phi$ $\le$ $90^{\circ}$, where the intercepts with the $k_a$ and $k_b$ axes become part of the nodal rings (not drawn) when $\phi=90^{\circ}$ and $\phi=0$, respectively. Contrary to common belief, our result demonstrates that there is another way to annihilate Weyl points other than bringing a pair of Weyl points with opposite chirality to the same $k$ point. This possibility was earlier discussed within effective models in the case of Dirac semimetals in the presence of {\bf B} field~\cite{Burkov2018}. Furthermore, our finding indicates that the chirality of the Weyl nodes can be exchanged by reversing the $b$ component of the {\bf B} field.
\subsection{Magnetic field in the $ac$ or $bc$ plane}\label{sec4:bcplane} \begin{figure}[htb] \begin{center} \includegraphics[width=0.47\textwidth]{Fig7_paneled.eps} \caption[Nodal lines]{(a)-(b) WFTB-calculated evolution of the type-I and type-II Weyl nodes, respectively, as a function of $\theta$ (color scale) when the {\bf B} field rotates in the $bc$ plane. The type-I Weyl nodes arise from the crossings of the conduction and valence bands, whereas the type-II Weyl nodes arise from the two valence bands. Here $\phi$ is the angle between {\bf B} field and the $a$ axis. In (a) the Weyl points are located at (0, $\pm$0.000540, $\mp$0.000757)~$2\pi{\cdot}$\AA$^{-1}$~for $\theta=45^{\circ}$. (c)-(d) WFTB-calculated band structures near the crossings of the conduction (CB1) and valence bands (VB2) and of the two valence bands (VB1 and VB2) for the {\bf B} field in the $bc$ plane with $\theta$=10$^{\circ}$.} \label{fig:Bbc} \end{center} \end{figure} When the {\bf B} field is rotated from the $c$ axis in the $bc$ plane, we find that the pair of type-I Weyl nodes with ${\chi}_n=\pm 1$ (arising from the conduction and valence bands) move somewhat away from the $k_c$ axis and return to the axis as the angle $\theta$ approaches $90^{\circ}$, where $\theta$ is the angle between the {\bf B} field and the $c$ axis. See Fig.~\ref{fig:Bbc}(a). Then when the {\bf B} field aligns with the $b$ axis, the type-I Weyl points [Fig.~\ref{fig:Bbc}(c)] abruptly transform into the type-I nodal ring in the $k_a$-$k_c$ plane, as discussed earlier. For $\theta > 90^{\circ}$, a pair of Weyl nodes reappear with the reversed topological charges compared to those in the case of ${\theta} < 90^{\circ}$, similarly to Sec.~\ref{sec4:abplane}. The transformation of the Weyl nodes into a nodal ring is related to the mirror anomaly~\cite{Burkov2018}, which is discussed later in Sec.~\ref{sec5:burkov}. 
On the other hand, the type-II nodal ring arising from the top two valence bands is now completely gapped out except for two type-II gapless points [Fig.~\ref{fig:Bbc}(d)] when the {\bf B} field is rotated from the $c$ axis in the $bc$ plane. For example, the type-II Weyl nodes occur at (0, $\pm$0.002140, $\pm$0.000025)~$2\pi{\cdot}$\AA$^{-1}$~at $\theta=10^{\circ}$. We confirm that the type-II gapless points have topological charge $\chi_n=\mp 1$, using the method discussed in Ref.~\cite{Soluyanov2015}. The type-II Weyl nodes with opposite chirality are brought closer to each other as $\theta$ increases. They are eventually annihilated when $\theta$ approaches about 40$^{\circ}$ [Fig.~\ref{fig:Bbc}(b)]. When the {\bf B} field is rotated from the $c$ axis in the $ac$ plane, we find that the pair of type-I Weyl nodes with $\chi_n=\pm 1$ (arising from the conduction and valence bands) remain almost along the $k_c$ axis with only slight changes in their locations. Then when the {\bf B} field is parallel to the $a$ axis, the Weyl nodes transform into a nodal ring. In contrast to the case with the {\bf B} field in the $bc$ plane, there are no crossings from the top two valence bands in this case. \subsection{Summary of topological phases from WFTB model}\label{sec4:summary} Table~\ref{tab:summary} summarizes the topological phases found from the WFTB model. In the next section, we discuss the linearized $k \cdot p$ model and the topological phases predicted from the $k \cdot p$ model. We also compare the findings from the WFTB model and those from the $k \cdot p$ model. \begin{table*}[htb] \centering \caption{Zeeman-splitting driven topological phases in 3D ZrTe$_5$ as a function of the {\bf B}-field orientation based on the WFTB model and the linearized $k \cdot p$ model \cite{RYChen_PRL2015} (discussed in Sec.~\ref{sec5}), considering the energy window of [$-$0.12, 0.05] eV relative to the Fermi level. 
The second column corresponds to the nodal structure from the crossings of the bottom conduction and the top valence bands, and the third column to that of the top two valence bands in the WFTB model. In the case of the $k \cdot p$ model, the top two valence bands never meet. The topological phase for the {\bf B} field within the $ab$, $bc$, or $ac$ plane (marked by $*$) excludes the cases in which the {\bf B} field coincides with the $a$, $b$, or $c$ axis. In the case of WFTB (VB-VB), the type-II Weyl nodes (marked by $^{\dag}$) are formed only for some angles ($< 40^{\circ}$) before they meet and annihilate at the same $k$ point, as in Fig.~\ref{fig:Bbc}(b).} \begin{tabular}{|c|c|c|c|} \hline \hline Direction {\textbackslash} Model & WFTB (CB-VB) & WFTB (VB-VB) & $k \cdot p$ (CB-VB) \\ \hline {\bf B}$\parallel${\bf a} & 1 type-I nodal ring & - & 1 type-I nodal ring \\ {\bf B}$\parallel${\bf b} & 1 type-I nodal ring & - & 1 type-I nodal ring \\ {\bf B}$\parallel${\bf c} & 2 type-I Weyl nodes & 1 type-II nodal ring & 2 type-I Weyl nodes \\ {\bf B} in $ab$ plane* & 2 type-I Weyl nodes & - & 1 type-I nodal ring \\ {\bf B} in $bc$ plane* & 2 type-I Weyl nodes & 2 type-II Weyl nodes$^{\dag}$ & 2 type-I Weyl nodes \\ {\bf B} in $ac$ plane* & 2 type-I Weyl nodes & - & 2 type-I Weyl nodes \\ \hline \hline \end{tabular} \label{tab:summary} \end{table*} \section{Comparison with linearized $k \cdot p$ model}\label{sec5} \subsection{Lowest-order $k \cdot p$ model}\label{sec5:kdotp} The lowest-order $k \cdot p$ Hamiltonian ${\cal H}_{\mathrm{kp}}({\bf k},{\bf B})$~\cite{RYChen_PRL2015} can be obtained by keeping only the linear terms in ${\bf k}$ that satisfy the symmetries of bulk ZrTe$_5$.
The Hamiltonian ${\cal H}_{\mathrm{kp}}({\bf k},{\bf B})$ expanded near the $\Gamma$ point reads \begin{eqnarray} {\cal H}_{\mathrm{kp}}({\bf k},{\bf B}) &=& {\cal H}_{\mathrm{kp},0}({\bf k}) + {\cal H}_{\mathrm{kp},{\mathrm Z}} \label{eq:Hkp} \\ {\cal H}_{\mathrm{kp},0}({\bf k}) &=& m \tau^z + v_x k_x \tau^x \sigma^y + v_y k_y \tau^x \sigma^x + v_z k_z \tau^y \\ {\cal H}_{\mathrm{kp},{\mathrm Z}} &=& \frac{1}{2} g \mu_B {\bf{\sigma}} \cdot {\bf B}, \end{eqnarray} where $\tau^{x,y,z}$ and $\sigma^{x,y,z}$ are orbital (conduction and valence bands) and spin Pauli matrices. Here $v_{x,y,z}$ and $m$ are Fermi velocities and mass (or half of the bulk band gap), respectively. The $x$, $y$, and $z$ coordinates correspond to the crystal $a$, $b$, and $c$ axes. ${\cal H}_{\mathrm{kp},{\mathrm Z}}$ is the Zeeman interaction where we assume the same isotropic $g$-factors for the conduction and valence bands for simplicity. The symmetry group of ZrTe$_5$ is generated by two mirror reflections, $M_{ab}$, $M_{bc}$, inversion and time-reversal symmetries, which are represented by $- \tau^{z} \cdot i \sigma^{z}$, $i \sigma^{x}$, $\tau^{z}$, and ${\cal K} \cdot i\sigma^{y}$, respectively (${\cal K}$ is complex conjugation). Equation~(\ref{eq:Hkp}) can be diagonalized for an arbitrary {\bf B} field with energy eigenvalues given by \begin{widetext} \begin{equation} \epsilon_{rs} ({\bf k},{\bf B}) = r \sqrt{m^2 + \left( \frac{g \mu_{B}}{2} \right)^2 B_0^2 + K^2 + s \sqrt{m^2 B_0^2 + (v_z k_z)^2 B_0^2 + A_{\perp}^2 } g \mu_{B}}, \label{eq:Ekp} \end{equation} \end{widetext} where $r,s=\pm$, $B_0^2= B_x^2 + B_y^2 + B_z^2$, $K^2 = (v_x k_x)^2+(v_y k_y)^2+(v_z k_z)^2 $, and $A_{\perp}^2 = (v_x k_x B_y + v_y k_y B_x)^2$. In Ref.~\cite{RYChen_PRL2015} only three {\bf B} field directions ($a$, $b$, and $c$ axes) are considered. The energy eigenvalues, Eq.~(\ref{eq:Ekp}), agree with those in Ref.~\cite{RYChen_PRL2015}. 
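As a sanity check, Eq.~(\ref{eq:Ekp}) can be verified against direct diagonalization of the $4\times4$ Hamiltonian, Eq.~(\ref{eq:Hkp}). The sketch below uses arbitrary $O(1)$ parameter values (not fitted to ZrTe$_5$), with the orbital ($\tau$) factor first in the Kronecker products:

```python
import numpy as np

# Check of the analytic eigenvalues eps_rs(k, B) against direct
# diagonalization of the 4x4 k.p Hamiltonian.  Parameters are arbitrary.
s0 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)

def H_kp(k, B, m=0.3, v=(1.0, 0.8, 0.6), g_muB=0.5):
    """m*tau_z + vx kx tau_x sig_y + vy ky tau_x sig_x + vz kz tau_y + Zeeman."""
    vx, vy, vz = v
    H0 = (m * np.kron(sz, s0) + vx * k[0] * np.kron(sx, sy)
          + vy * k[1] * np.kron(sx, sx) + vz * k[2] * np.kron(sy, s0))
    HZ = 0.5 * g_muB * np.kron(s0, B[0]*sx + B[1]*sy + B[2]*sz)
    return H0 + HZ

def eps_analytic(k, B, m=0.3, v=(1.0, 0.8, 0.6), g_muB=0.5):
    """Closed-form spectrum eps_rs, returned in ascending order."""
    vx, vy, vz = v
    B0sq = np.dot(B, B)
    Ksq = (vx*k[0])**2 + (vy*k[1])**2 + (vz*k[2])**2
    Aperp_sq = (vx*k[0]*B[1] + vy*k[1]*B[0])**2
    root = np.sqrt(m**2 * B0sq + (vz*k[2])**2 * B0sq + Aperp_sq)
    return sorted(r * np.sqrt(m**2 + (0.5*g_muB)**2 * B0sq + Ksq
                              + s * g_muB * root)
                  for r in (+1, -1) for s in (+1, -1))

k = np.array([0.2, -0.1, 0.3]); B = np.array([0.4, 0.7, -0.2])
assert np.allclose(np.linalg.eigvalsh(H_kp(k, B)), eps_analytic(k, B), atol=1e-9)
```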
We find that the conduction and valence bands can cross if the Zeeman splitting energy is greater than the band gap, i.e., $(\frac{g \mu_B B_0}{2})^2 > m^2$, whereas the two conduction (valence) bands cannot cross each other for any {\bf B}-field direction. The resulting nodal structure is summarized in Table~\ref{tab:summary}. \subsection{Comparison between WFTB and lowest-order $k \cdot p$ models}\label{sec5:kdotpcompare} The nodal structures calculated using the WFTB and lowest-order $k \cdot p$ models qualitatively agree with each other in most cases, as listed in Table~\ref{tab:summary}, although the positions of the Weyl points or nodal rings may quantitatively differ from each other. Using the lowest-order $k \cdot p$ model, the previous study \cite{RYChen_PRL2015} reported the nodal structure only when the {\bf B} field aligns with the crystal axes. {\it Qualitative} discrepancy between the WFTB model and the lowest-order $k \cdot p$ model occurs in two cases: (i) when the top two valence bands meet each other for the {\bf B} field along the $c$ axis and in the $bc$ plane and (ii) when the {\bf B} field lies in the $ab$ plane (though not along the $a$ or $b$ axes). The first discrepancy might arise from the fact that Zr $d$ orbitals contribute to the top two valence bands by about 20\% of the total electron density according to our DFT calculations, whereas the lowest-order $k \cdot p$ model \cite{RYChen_PRL2015} was constructed based on Te $p$ orbitals only. The second discrepancy arises from the extra symmetry imposed on the lowest-order $k \cdot p$ model by the truncation of higher-order terms. \begin{figure}[htb] \begin{center} \includegraphics[width=0.3\textwidth]{kp_figure.eps} \caption[Bab]{Calculated gapless nodal points in the $k_{\parallel}$-$k_c$ plane when the {\bf B} field is tilted by $45^{\circ}$ from the $a$ axis in the $ab$ plane, using the lowest-order $k \cdot p$ model, where $k_{\parallel}$ is the direction along the $k_a=k_b$ line.
The nodal ring is marked in red. All parameter values are set to unity for simplicity.} \label{fig:kp} \end{center} \end{figure} When the {\bf B} field aligns with either the $a$ or $b$ axis, a nodal ring appears in the corresponding mirror plane. Interestingly, in the lowest-order $k \cdot p$ model, we find that this nodal ring persists as long as the {\bf B} field lies in the $ab$ plane, even though there is {\it no} mirror symmetry that protects the nodal ring when the {\bf B} field is away from the $a$ or $b$ axis. For example, Fig.~\ref{fig:kp} shows the calculated nodal ring using the lowest-order $k \cdot p$ model when the angle between the {\bf B} field and the $a$ axis is $45^{\circ}$. Compare this figure to the nodal structure obtained from the WFTB model [Fig.~\ref{fig:Bab}(h)] for the same {\bf B}-field direction. Our study shows that a continuous $U(1)$ symmetry is present in the lowest-order $k \cdot p$ model, which rotates both the band structure and the {\bf B} field together about the $c$ axis. This symmetry is represented by \begin{equation} U(\theta) {\cal H}_{\mathrm{kp}}({\bf k},{\bf B}) U^{\dagger}(\theta) = {\cal H}_{\mathrm{kp}}(R(\theta) {\bf k},R^{-1}(\theta) {\bf B}) \end{equation} where \begin{equation} U(\theta) = \begin{bmatrix} e^{-i\theta/2} & 0 & 0 & 0 \\ 0 & e^{i\theta/2} & 0 & 0 \\ 0 & 0 & e^{-i\theta/2} & 0 \\ 0 & 0 & 0 & e^{i\theta/2} \end{bmatrix}, \end{equation} and \begin{equation} R(\theta) = \begin{bmatrix} \cos{\theta} & \sin{\theta} & 0 \\ -\sin{\theta} & \cos{\theta} & 0 \\ 0 & 0 & 1 \end{bmatrix}. \end{equation} For simplicity, we assume an isotropic Fermi velocity and an isotropic $g$ factor, but a similar result holds in the general case. However, higher-order terms in the $k \cdot p$ model would break the $U(1)$ symmetry, and they would gap out the nodal ring unless there is mirror symmetry. Neither the WFTB model nor the crystal structure of ZrTe$_5$ has such a $U(1)$ symmetry.
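This $U(1)$ identity is easy to confirm numerically. The sketch below (isotropic velocity and $g$ factor, arbitrary parameter values) checks $U(\theta) {\cal H}_{\mathrm{kp}}({\bf k},{\bf B}) U^{\dagger}(\theta) = {\cal H}_{\mathrm{kp}}(R(\theta){\bf k}, R^{-1}(\theta){\bf B})$ at a generic $({\bf k},{\bf B},\theta)$:

```python
import numpy as np

# Numerical check of the U(1) symmetry of the lowest-order k.p model that
# co-rotates k and B about the c axis (isotropic v and g assumed, as in the
# text).  Parameter values are arbitrary.
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)

def H_kp(k, B, m=0.3, v=1.0, g_muB=0.5):
    return (m*np.kron(sz, s0) + v*k[0]*np.kron(sx, sy)
            + v*k[1]*np.kron(sx, sx) + v*k[2]*np.kron(sy, s0)
            + 0.5*g_muB*np.kron(s0, B[0]*sx + B[1]*sy + B[2]*sz))

def R(t):
    """Rotation matrix R(theta) as defined in the text."""
    return np.array([[np.cos(t), np.sin(t), 0],
                     [-np.sin(t), np.cos(t), 0],
                     [0, 0, 1]])

t = 0.37
U = np.kron(s0, np.diag([np.exp(-1j*t/2), np.exp(1j*t/2)]))  # U(theta)
k = np.array([0.2, -0.1, 0.3]); B = np.array([0.4, 0.7, -0.2])
lhs = U @ H_kp(k, B) @ U.conj().T
rhs = H_kp(R(t) @ k, R(-t) @ B)
assert np.allclose(lhs, rhs)
```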
\subsection{Mirror anomaly}\label{sec5:burkov} Recently, Burkov \cite{Burkov2018} pointed out, based on the linearized Dirac Hamiltonian, an additional quantum anomaly, referred to as the mirror anomaly, inherent in Dirac semimetals with mirror symmetry, independent of their type or origin, be it topological \cite{BJYang2014}, nonsymmorphic \cite{SMYoung_PRL2012}, or accidental. In a Dirac semimetal, the chirality operator $\gamma^5=i{\gamma^0}{\gamma^1}{\gamma^2}{\gamma^3}$ (which distinguishes the two Weyl-fermion components of opposite chirality) \cite{Burkov2018,Burkov2017} commutes with only one of the spin components. When the {\bf B} field rotates from this spin direction to the perpendicular axis with which mirror symmetry is present, the Weyl points abruptly transform into a nodal ring protected by the mirror symmetry. Furthermore, in the Dirac Hamiltonian, the positions of the Weyl nodes do not change with the rotation angle until they become the nodal ring. As a consequence, the intrinsic AHC was predicted to show a singular behavior as a function of the {\bf B} field direction. In the lowest-order $k \cdot p$ model for ZrTe$_5$, Eq.~(\ref{eq:Hkp}), the spin component that commutes with the chirality operator is the $c$ or $z$ component. Therefore, in the $k \cdot p$ model, the mirror anomaly dictates the abrupt transformation of a pair of Weyl points into a nodal ring, as well as a singular intrinsic out-of-plane AHC, when the {\bf B} field rotates from the $c$ axis to the $a$ or $b$ axis. The difference between the WFTB model and the $k \cdot p$ model in this context is that the positions of the Weyl points noticeably change with the ${\bf B}$-field orientation when the field is in the $bc$ plane in the WFTB model. \section{Intrinsic Anomalous Hall effect}\label{sec6} When Weyl points are present at the Fermi level, the material can manifest a large AHC despite the point-like Fermi surface, which serves as an important experimental signature for Weyl semimetals.
In general, broken time-reversal symmetry along with a finite-volume Fermi surface may give rise to a nonzero AHC whether Weyl points are present or not. \subsection{Numerical calculation of AHC}\label{sec6:ahc} We numerically compute the AHC $\sigma_{ac}$ of ZrTe$_5$ under an external {\bf B} field based on our WFTB model, as a function of the chemical potential as well as of the direction of the {\bf B} field. In the next two subsections, we separately present our results for the cases of an isotropic and an anisotropic $g$ factor. We consider $\sigma_{ac}$ because the $ac$ plane is perpendicular to the stacking direction, which is experimentally the most relevant plane \cite{RYChen_PRL2015,Liang2018}. We focus on the \emph{intrinsic} part of the AHC \cite{Di_Xiao2010,Nagaosa2010}, which depends only on the Berry curvature: \begin{eqnarray} \sigma_{ac} &=& \frac{e^2}{\hbar}\sum_{n=1}^{\rm{occ}} \int_{\rm{BZ}} \frac{d^3 k}{(2\pi)^3} f({\cal E}_n({\bf k})-\mu) \Omega_{n,b}({\bf k}), \label{eq:ahc} \\ &=& \frac{e^2}{h} \int_{k_b} \frac{dk_b}{2\pi} C(k_b), \label{eq:ahc-2} \end{eqnarray} where $f({\cal E}_n({\bf k})-\mu)$ is the Fermi-Dirac distribution with chemical potential $\mu$, and $\Omega_{n,b}({\bf k})$ is the $b$ component of the Berry curvature coming from the $n$-th band. Here $C(k_b)$ is the Chern number (considering all occupied bands) calculated at a given $k_b$ plane. We assume zero temperature. To evaluate the 3D integral over the whole first BZ in Eq.~(\ref{eq:ahc}), we perform 2D integrals in the ${k_a}$-${k_c}$ plane ($\sigma_{ac,k_b}^{\rm{2D}}=C(k_b)\frac{e^2}{h}$) at fixed $k_b$ planes and integrate the 2D integrals along the $k_b$ direction. In this calculation we separate the first BZ into two regions, one near the $\Gamma$ point and one away from it, and use a finer (coarser) $k$-mesh for the former (latter) region.
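For reference, the plane-resolved Chern number $C(k_b)$ entering Eq.~(\ref{eq:ahc-2}) can be evaluated on a discrete $k$-mesh with the standard link-variable method of Fukui, Hatsugai, and Suzuki. The sketch below is a minimal generic implementation, illustrated on a two-band Qi-Wu-Zhang-type lattice model rather than on our WFTB Hamiltonian:

```python
import numpy as np

def chern_number(hk, n=60):
    """Chern number of the lowest band of a 2-band Bloch Hamiltonian
    hk(kx, ky), via the Fukui-Hatsugai link-variable method on an n x n mesh."""
    ks = np.linspace(-np.pi, np.pi, n, endpoint=False)
    u = np.empty((n, n, 2), dtype=complex)   # lowest-band eigenvectors
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            w, v = np.linalg.eigh(hk(kx, ky))
            u[i, j] = v[:, 0]
    F = 0.0
    for i in range(n):
        for j in range(n):
            u1, u2 = u[i, j], u[(i+1) % n, j]
            u3, u4 = u[(i+1) % n, (j+1) % n], u[i, (j+1) % n]
            # U(1) link product around one plaquette; its phase is the
            # lattice field strength, gauge-invariant by construction
            prod = (np.vdot(u1, u2) * np.vdot(u2, u3) *
                    np.vdot(u3, u4) * np.vdot(u4, u1))
            F += np.angle(prod)
    return F / (2*np.pi)

# Qi-Wu-Zhang-type test Hamiltonian: topological for 0 < |u| < 2
sx = np.array([[0, 1], [1, 0]]); sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]])
def qwz(u):
    return lambda kx, ky: (np.sin(kx)*sx + np.sin(ky)*sy
                           + (u + np.cos(kx) + np.cos(ky))*sz)

print(round(chern_number(qwz(-1.0))))  # |C| = 1 in the topological phase
print(round(chern_number(qwz(-3.0))))  # C = 0 in the trivial phase
```

For the WFTB model the same routine is applied, at each fixed $k_b$, with the link products taken over all occupied bands.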
When the Fermi surface is composed of a pair of isolated Weyl points with topological charge $\pm \chi$, $|\sigma_{ac}|$ is known to be $\frac{e^2}{h} \frac{2k_b^{\rm{WP}}}{2\pi} |\chi|$, where $2k_b^{\rm{WP}}$ is the separation between the two Weyl points projected onto the $b$ axis. This is because each 2D plane parallel to the $ac$ plane which lies between the two Weyl points contributes a Chern number of $\chi$ or $-\chi$ to the integral in Eq.~(\ref{eq:ahc-2}), whereas the planes which do not lie between the two Weyl points have zero Chern number. In this case, the intrinsic AHC is simply proportional to the separation between the Weyl points along the $b$ axis. However, if the Fermi surface has a nonzero volume, there is no such simple expression for $\sigma_{ac}$ and the AHC must be numerically computed. Equation~(\ref{eq:ahc}) implies that $\sigma_{ac}$ is strictly zero if the {\bf B} field is parallel to the $ac$ plane due to the $C_{2b} {\cal T}$ symmetry. The $C_{2b} {\cal T}$ symmetry maps the Berry curvature component $\Omega_{b}(k_a,k_b,k_c)$ into $-\Omega_{b}(k_a,-k_b,k_c)$, making the integral in Eq.~(\ref{eq:ahc}) vanish. Therefore, the nonzero AHC observed in Ref.~\cite{Liang2018} for in-plane {\bf B} fields (i.e., parallel to the $ac$ plane) must originate from the non-intrinsic part of the AHC and/or nonlinear effects. \subsection{Calculated AHC in the case of isotropic $g$ factor} \begin{figure*}[htb] \begin{center} \includegraphics[width=0.35 \textwidth]{Fig9a_new.eps} \hspace{0.5truecm} \includegraphics[width=0.35 \textwidth]{Fig9b_new.eps} \vspace{0.5truecm} \includegraphics[width=0.45\textwidth]{Fig9c_new.eps} \caption{WFTB-calculated AHC $\sigma_{ac}$ as a function of chemical potential $\mu$ and tilting angles $\phi$ and $\theta$ when the {\bf B} field is parallel to (a) the $ab$ plane or (b) the $bc$ plane, in the case of isotropic $g$ factor.
Here $\phi$ is the angle between the {\bf B} field and the $a$ axis, and $\theta$ is the angle between the {\bf B} field and the $c$ axis. The Fermi level is set to $\mu=0$. (c)-(h) Calculated $\sigma_{ac}$ vs $\phi$ and $\theta$ at three chemical potential values, $\mu_1$ ($-$6 meV), $\mu_2$ (4 meV), and $\mu_3$ (14 meV). The $\mu_2$ value is close to the type-I Weyl point energy. The inset in (b) shows a zoom-in near the $\mu_2$ value.} \label{fig:AHC-1} \end{center} \end{figure*} \begin{figure}[htb] \begin{center} \includegraphics[width=0.23 \textwidth]{Fig10a.eps} \hspace{0.2truecm} \includegraphics[width=0.23 \textwidth]{Fig10b.eps} \vspace{0.5truecm} \includegraphics[width=0.23 \textwidth]{Fig10c.eps} \hspace{0.2truecm} \includegraphics[width=0.23 \textwidth]{Fig10d.eps} \caption{(a) WFTB-calculated band structure along the Weyl-point separation direction and (b) $\sigma_{ac,k_b}^{2D}$ vs $k_b$ for three different chemical potential values, when the {\bf B} field is tilted from the $a$ axis by 60$^{\circ}$ in the $ab$ plane. (c)-(d) Calculated $\sigma_{ac,k_b}^{2D}$ vs $k_b$ for different chemical potential values, when the {\bf B} field is tilted from the $c$ axis by 10$^{\circ}$ or 40$^{\circ}$ in the $bc$ plane, respectively. Here $\mu_5=-3.5$~meV corresponds to the energy of the small positive AHC peak right below the Fermi level for $\theta$=5, 10, and 20$^{\circ}$. The type-II Weyl-point energy is $-$4.5~meV. In (a)-(d) we consider the isotropic $g$ factor.} \label{fig:AHC-2} \end{center} \end{figure} Figure~\ref{fig:AHC-1}(a) shows the numerically calculated $\sigma_{ac}$ as a function of the chemical potential when the {\bf B} field rotates in the $ab$ plane, in the case of isotropic $g$ factor. First of all, we find that the intrinsic $\sigma_{ac}$ becomes strictly zero, independently of the chemical potential, when the {\bf B} field exactly aligns with the $a$ axis, due to the symmetry argument provided earlier.
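As a quick unit check, the isolated-Weyl-pair expression $|\sigma_{ac}|=\frac{e^2}{h}\frac{2k_b^{\rm{WP}}}{2\pi}$ quoted in the previous subsection can be converted into conventional units; the Weyl-point separation used below is purely illustrative and is not a value extracted from our calculations:

```python
# sigma = (e^2/h) * (2 k_b^WP / 2 pi): AHC of an isolated Weyl pair,
# with 2 k_b^WP the Weyl-node separation projected onto the b axis.
import math

E = 1.602176634e-19   # elementary charge (C)
H = 6.62607015e-34    # Planck constant (J s)

def weyl_pair_ahc(separation_inv_angstrom):
    """AHC in S/cm for a Weyl pair with separation 2*k_b^WP (in 1/Angstrom);
    the separation value passed in is a hypothetical illustration."""
    k = separation_inv_angstrom * 1e10        # -> 1/m
    sigma_S_per_m = (E**2 / H) * k / (2 * math.pi)
    return sigma_S_per_m / 100.0              # -> S/cm

# A hypothetical separation of 0.01 1/Angstrom gives a few S/cm:
print(f"{weyl_pair_ahc(0.01):.2f} S/cm")
```

This sets the scale of the sharp Weyl-pair peaks discussed below: the peak height tracks the field-direction dependence of $2k_b^{\rm{WP}}$.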
Let us first discuss features at chemical potential $\mu=4$~meV (referred to as $\mu_2$), which is close to the type-I Weyl point energy. See Fig.~\ref{fig:AHC-2}(a). At this chemical potential, a sharp, prominent (negative) peak appears for all angles except for 0$^{\circ}$ and 90$^{\circ}$. The peak height varies with $\phi$, as shown in Fig.~\ref{fig:AHC-1}(d). In order to provide more insight, we also calculate $\sigma_{ac,k_b}^{\rm{2D}}$ at different $k_b$ planes, finding that they are quantized as either 0 or $-$1 in units of $\frac{e^2}{h}$. This result explains both the peak height $\sim \frac{e^2}{h} \frac{2k_b^{\rm{WP}}}{2\pi}$ and its angular evolution. The abrupt increase in the peak height is attributed to the large separation of the Weyl points along the $b$ axis as the angle increases from zero. The peak height, however, goes to zero smoothly as the angle approaches 90$^{\circ}$ because of the smooth change of the Weyl-point separation. See Fig.~\ref{fig:Bab}(i). In addition to the sharp peak, smoothly rising negative and positive peaks appear near $-$6 meV ($\mu_1$) and +14 meV ($\mu_3$) for large angles ($\ge$~60$^{\circ}$). See Figs.~\ref{fig:AHC-1}(a), (c), and (e). Note that there are no other Weyl nodes beyond the pair discussed earlier. The $\sigma_{ac,k_b}^{\rm{2D}}$ values for $\mu_1$ and $\mu_3$ are not quantized in units of $\frac{e^2}{h}$ [Fig.~\ref{fig:AHC-2}(b)]. However, they contribute to the $\sigma_{ac}$ value, Eq.~(\ref{eq:ahc}), via avoided level crossings, as shown in Fig.~\ref{fig:AHC-2}(a). Our result is not unusual. For example, in bcc Fe, Co, and Ni, a very large Berry curvature was found in regions where avoided level crossings occur, and it resulted in a large AHC \cite{XWang2007}. When the {\bf B} field rotates from the $c$ axis in the $bc$ plane, the overall features of $\sigma_{ac}$ [Figs.~\ref{fig:AHC-1}(b), (f)-(h)] are similar to those of the previous case, though with some differences.
Let us first discuss $\sigma_{ac}$ at chemical potential $\mu_2$. At this chemical potential, the height of the negative peak can be up to an order of magnitude smaller than in the previous case, as shown in the inset of Fig.~\ref{fig:AHC-1}(b) and in Fig.~\ref{fig:AHC-1}(g). The $\sigma_{ac,k_b}^{\rm{2D}}$ values are quantized as either 0 or $-$1 in units of $\frac{e^2}{h}$. For example, Figs.~\ref{fig:AHC-2}(c) and (d) show such quantization for $\theta=10$ and 40$^{\circ}$. The observed peak height of $\frac{e^2}{h} \frac{2k_b^{\rm{WP}}}{2\pi}$ also corroborates the contributions of the type-I Weyl points, which evolve with the angle as illustrated in Fig.~\ref{fig:Bbc}(a). Next, for $0 < \mu < 8$ meV, the flat $\sigma_{ac}$ region arises from contributions of small nonzero $\sigma_{ac,k_b}^{\rm{2D}}$ values near $\Gamma$. See the upward triangles in Figs.~\ref{fig:AHC-2}(c) and (d). Last, at $\mu=-3.5$~meV a small positive peak appears only for small angles ($<$ 40$^{\circ}$). This peak is associated with the type-II Weyl nodes arising from the crossings of the two valence bands. See Fig.~\ref{fig:Bbc}(d). At higher angles, the type-II Weyl points with opposite chirality are annihilated, and the $\sigma_{ac}$ peak vanishes accordingly. \subsection{Calculated AHC in the case of anisotropic $g$ factor} \begin{figure*}[htb] \begin{center} \includegraphics[width=0.35 \textwidth]{Fig11a_new.eps} \hspace{0.5truecm} \includegraphics[width=0.35 \textwidth]{Fig11b_new.eps} \vspace{0.5truecm} \includegraphics[width=0.45\textwidth]{Fig11c_new.eps} \caption{WFTB-calculated AHC $\sigma_{ac}$ as a function of $\mu$, $\phi$ and $\theta$ when the {\bf B} field is parallel to (a) the $ab$ plane or (b) the $bc$ plane, in the case of {\it anisotropic} $g$ factor. (c)-(h) Calculated $\sigma_{ac}$ vs $\phi$ and $\theta$ at $\mu_1$ ($-$6 meV), $\mu_2$ (4 meV), and $\mu_3$ (14 meV). The $\mu_2$ value is close to the type-I Weyl point energy.
The inset in (b) shows a zoom-in near the $\mu_2$ value.} \label{fig:AHC-3} \end{center} \end{figure*} Figure~\ref{fig:AHC-3}(a) shows our calculated $\sigma_{ac}$ as a function of $\mu$ and $\phi$, using the following anisotropic $g$ factors: $g_x$=3.19 \cite{YLiu2016}, $g_y$=21.3 \cite{RYChen_PRL2015,YLiu2016}, and $g_z$=2.0. In this case, the Zeeman energy remains fixed at 10~meV, i.e., $\sqrt{g_x^2 B_x^2 + g_y^2 B_y^2 + g_z^2 B_z^2}\frac{\mu_B}{2}$=10~meV. When the {\bf B} field rotates from the $a$ axis in the $ab$ plane, the height of the sharp AHC peak at $\mu_2$ abruptly increases with the angle and then immediately starts to decrease. See Fig.~\ref{fig:AHC-3}(d). On the other hand, the height of the smooth peaks at $\mu_1$ and $\mu_3$ sharply increases with the angle and then saturates already at small angles. See Figs.~\ref{fig:AHC-3}(c) and (e). These features can be explained using the Weyl-point positions and avoided level crossings, similarly to the case of isotropic $g$ factor. A similar trend appears when the {\bf B} field is tilted in the $bc$ plane [Figs.~\ref{fig:AHC-3}(b), (f)-(h)]. One small difference is shown in the inset of Fig.~\ref{fig:AHC-3}(b): the small AHC peak at $\mu_2$ vanishes for very large angles, $80^{\circ} \le \theta < 90^{\circ}$, because the $b$ component of the Weyl point position becomes zero. Overall, the angular dependence of $\sigma_{ac}$ with the anisotropic $g$ factor qualitatively differs from that with the isotropic $g$ factor in both the $ab$ and $bc$ planes. This is attributed to a much larger contribution of the $b$ component of the {\bf B} field for a given angle. For the same reason, with the anisotropic $g$ factor, the type-II Weyl nodes are not formed except for extremely small angles away from the $c$ axis. Note that the type-II Weyl nodes with opposite chirality are annihilated at $\theta \ge$~40$^{\circ}$ in the case of isotropic $g$ factor.
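The fixed-Zeeman-energy constraint above sets the field magnitude required for each direction. A minimal numerical sketch (assuming the mapping of the $x$, $y$, $z$ components onto the $a$, $b$, $c$ axes; the printed magnitudes merely illustrate the constraint for the quoted $g$ factors):

```python
import math

MU_B = 5.7883818060e-5  # Bohr magneton (eV/T)
GX, GY, GZ = 3.19, 21.3, 2.0
E_ZEEMAN = 10e-3        # fixed Zeeman energy (eV)

def field_magnitude(nx, ny, nz):
    """|B| (tesla) along direction (nx, ny, nz) such that
    sqrt(sum_i g_i^2 B_i^2) * mu_B / 2 equals the fixed Zeeman energy."""
    norm = math.sqrt(nx*nx + ny*ny + nz*nz)
    nx, ny, nz = nx/norm, ny/norm, nz/norm
    g_eff = math.sqrt((GX*nx)**2 + (GY*ny)**2 + (GZ*nz)**2)
    return 2 * E_ZEEMAN / (g_eff * MU_B)

# The much larger g_y means a far weaker field suffices along b:
print(f"B || a: {field_magnitude(1, 0, 0):.1f} T")
print(f"B || b: {field_magnitude(0, 1, 0):.1f} T")
print(f"B || c: {field_magnitude(0, 0, 1):.1f} T")
```

The dominance of $g_y$ is what makes the $b$ component of the Zeeman coupling overwhelm the others at a given tilt angle.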
\section{Conclusion}\label{sec7} In summary, we develop a WFTB model for 3D ZrTe$_5$ from first-principles calculations, considering both Zr $d$ and Te $p$ orbitals. Based on the WFTB model, we investigate Zeeman-splitting-induced topological phases and the evolution of the topological nodal structures as a function of the orientation of the {\bf B} field (beyond the crystal axes). We find an abrupt transformation of a nodal ring into a pair of Weyl nodes as the {\bf B} field is rotated away from either the crystal $a$ or $b$ axis. For some {\bf B} field directions, type-II nodal structures are identified from crossings of the valence bands. Comparing the calculated topological phases with those from the linearized $k \cdot p$ model, we find that the latter model does {\it not} capture the correct topological phases when the {\bf B} field is rotated in the $ab$ or $bc$ plane. We also numerically compute the intrinsic part of the AHC, $\sigma_{ac}$, as a function of the chemical potential, when the {\bf B} field is tilted within the $ab$ or $bc$ plane. The calculated results can be compared to experimental data once the experimental anomalous Hall resistivity $\rho_{ac}^{\rm{AHE}}$ is properly converted into $\sigma_{ac}$, which requires knowledge of the longitudinal resistivity. Our findings may also provide insight into Zeeman-splitting-induced topological phases and their consequences in other Dirac semimetals with mirror symmetries. \begin{acknowledgments} Y.C. was supported by the Virginia Tech ICTAS Fellowship. The computational support was provided by San Diego Supercomputer Center (SDSC) under DMR060009N and VT Advanced Research Computing (ARC). \end{acknowledgments}
\section{Introduction} Competition is ubiquitous in nature, playing a fundamental role in the regulation of biodiversity. It is also a major driving force behind evolutionary change through natural selection. The simplest competition models, inspired by the pioneering work of Lotka and Volterra, and May and Leonard \cite{1920PNAS....6..410L,1926Natur.118..558V,May-Leonard}, consider the dynamics of two or three species subject to interspecific predation (or selection), mobility and reproduction interactions. Despite their simplicity, these models (see \cite{2014-Szolnoki-JRSI-11-0735,2018JPhA...51f3001D} for recent reviews) incorporate some of the main ingredients responsible for the observed dynamics of many biological systems, and are able to reproduce some of the dynamical features of specific biological systems with a limited number of species \cite{lizards,2002-Kerr-N-418-171,bacteria,Reichenbach-N-448-1046}. More complex competition models, involving more species \cite{Szabo2008,Hawick2011,Hawick_2011,Avelino-PRE-86-031119,Avelino-PRE-86-036112} and/or additional interactions \cite{Peltomaki2008,2010-Wang-PRE-81-046113,2014-Cianci-PA-410-66,2010-Yang-C-20-2,2017-Souza-Filho-PRE-95-062411}, have also been investigated in recent years, revealing a plethora of complex dynamical spatial structures \cite{2010-Ni-PRE-82-066211,Avelino-PLA-378-393,Avelino-PRE-89-042710,2017-Avelino-PLA-381-1014,2018-Avelino-EPL-121-48003}, diverse scaling regimes \cite{Avelino-PRE-86-036112,PhysRevE.96.012147} and phase transitions \cite{PhysRevE.76.051921, 2001-Szabo-PRE-63-061904, 2004-Szabo-PRE-69-031911, 2004-Szolnoki-PRE-70-037102, 2007-Perc-PRE-75-052102, 2008-Szabo-PRE-77-011906, 2011-Szolnoki-PRE-84-046106, 2013-Vukov-PRE-88-022123, 2018-Bazeia-EPL-124-68001}. In some of these competition models species coexistence may last for an arbitrary amount of time, while in others it is transient.
Coexistence-promoting mechanisms, responsible for maintaining coexistence over long periods of time, are usually associated with the ability of the species to increase (decrease) their population in response to negative (positive) perturbations to their typical abundances \cite{siepielski_mcpeek_2010}. Among these, density-dependent mortality \cite{gross_edwards_feudel_2009,metz_sousa_valencia_2010} has been claimed to have a positive impact on species coexistence (see also \cite{holt_1985,10.1086/323113,mittelbach_hall_dorn_garcia_steiner_wojdak_2004} for a discussion of the impact of density-independent mortality). Here, we investigate a sub-class of spatial stochastic May-Leonard models characterized by mutual predation interactions of equal strength between any two individuals of different species. In their standard version, these models give rise to a network of one-species domains whose dynamics is curvature driven, with the characteristic size of the domains growing proportionally to $t^{1/2}$ \cite{Avelino-PRE-86-031119} --- $t$ being the physical time. However, in practice, this growth is limited by the size of the simulation boxes, thus resulting in a limited period of coexistence. The main aim of this letter is to determine the impact on population dynamics of death by starvation, in which an individual dies after a given number of successive unsuccessful predation attempts. We shall start by introducing our set of models in the following section. We then study the impact of death by starvation on the dynamics of initially flat and circular interfaces between spatial domains occupied by individuals of competing species, as well as the two-dimensional dynamics of population networks starting from random initial conditions.
We will show that, under certain conditions, death by starvation prevents the endless growth of the characteristic length scale of the network of one-species spatial domains, acting as a coexistence-promoting mechanism. \section{Models \label{sec2}} In this letter we shall investigate the dynamics of May-Leonard models with mutual predation interactions of equal strength between any two individuals of different species. To this end, we shall perform square lattice simulations with periodic boundary conditions in which each one of its ${\mathcal N}$ sites may be either empty or occupied by a single individual. The species are labelled by the number $i$ (or $j$), with $i,j=1,...,N$ --- in this letter we shall only consider models with two or three species ($N=2$ or $N=3$). Empty sites shall be denoted by $\otimes$. The number of individuals of the species $i$ and the number of empty sites will be denoted by $I_i$ and $I_\otimes$, respectively --- the density of individuals of the species $i$ and the density of empty sites shall be defined by $\rho_i=I_i/{\mathcal N}$ and $\rho_{\otimes} = I_\otimes/{\mathcal N}$, respectively. The possible interactions are predation \begin{equation} i\ j \to i\ \otimes\,, \nonumber \end{equation} mobility \begin{equation} i\ \odot \to \odot\ i\,, \nonumber \end{equation} and reproduction \begin{equation} i\ \otimes \to ii\,, \nonumber \end{equation} where $\odot$ represents either an individual of any species or an empty space. Mobility, reproduction and predation interactions occur with probabilities $m$, $r$ and $p$, respectively (the same for all species). For the sake of definiteness, we shall take $m=0.5$ and $m+p+r=1$ in all the simulations. At each simulation step, the algorithm randomly picks an occupied site to be the active one, randomly selects one of its four adjacent neighbour sites to be the passive one, and randomly chooses an interaction to be executed by the individual at the active position. 
These three steps are repeated until a possible interaction is selected. If predation is selected, the interaction cannot be executed when the passive site is empty or when the passive and active individuals belong to the same species. On the other hand, if reproduction is selected, the interaction only takes place if the passive site is empty. Whenever the selected interaction is not executed, the active individual is said to have carried out an unsuccessful interaction attempt. A generation time (our time unit) is defined as the time necessary for $\mathcal{N}$ successive (and successful) interactions to be completed. The non-standard ingredient in our simulations is the death of an individual after a given number ${\mathcal N}_u$ of successive unsuccessful predation attempts --- in our model, the most recent number of successive unsuccessful predation attempts of its progenitor is passed on to every newborn individual at the time of birth. This means that the ability of a newborn to survive unsuccessful predation attempts is strongly dependent on the strength of its progenitor at the time of birth (the strength being defined as ${\mathcal N}_u$ minus the latest number of unsuccessful predation attempts). \begin{figure}[!htb] \centering \includegraphics{fig1} \caption{Evolution of the width $W$ of an initially flat interface obtained from $1000$ realizations considering ${\mathcal N}_u=\infty$ (upper panel) and ${\mathcal N}_u=25$ (bottom panel).
Notice that the much more significant growth of the width of the interfaces for ${\mathcal N}_u=25$ compared to the standard ${\mathcal N}_u=\infty$ case may also be confirmed in the snapshots, obtained for single realizations of the ${\mathcal N}_u=\infty$ and ${\mathcal N}_u=25$ models, shown in the inset panels.} \label{fig1} \end{figure} \section{Dynamics of initially flat/circular interfaces \label{sec3}} In this section we study the impact that a finite value of ${\mathcal N}_u$ has on the dynamics of spatial stochastic two-species May-Leonard models with mutual interspecific predation interactions of equal strength. To this end, a large number of spatial stochastic numerical simulations have been performed with the following parameters: $m=0.5$, $r=0.3$, $p=0.2$, and ${\mathcal N}_u=25$ (any individual dies after 25 successive unsuccessful predation attempts) or ${\mathcal N}_u=\infty$ (standard case, no deaths by starvation). We shall first consider the dynamics of initially flat and circular interfaces, before studying the dynamics of population networks with random initial conditions in the following section. \begin{figure}[!htb] \centering \includegraphics{fig2} \caption{Evolution of the width $W_*$ of an initially flat interface obtained from $1000$ realizations considering ${\mathcal N}_u=\infty$ (upper panel) and ${\mathcal N}_u=25$ (bottom panel). Notice not only that $W_*$ is much less sensitive to the box size than $W$, but also that the growth of $W_*$ with time observed in the case with ${\mathcal N}_u=25$ is absent for ${\mathcal N}_u=\infty$.} \label{fig2} \end{figure} \subsection{Initially flat interface} Here we consider the dynamics of an initially flat interface separating the left and right halves of the lattice, which are initially fully occupied by individuals of the red (1) and blue (2) species, respectively. Our simulations are performed on a $N_x \times N_y$ lattice.
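The stochastic update rule of Sec.~\ref{sec2}, including the starvation counter, can be sketched as follows (a minimal single-attempt update on a periodic square lattice; the data layout and names are illustrative and do not correspond to any published code, and at least one occupied site is assumed):

```python
import random

EMPTY = 0
M, R, P = 0.5, 0.3, 0.2          # mobility, reproduction, predation

def update(lattice, hunger, n_u=25):
    """One (possibly unsuccessful) interaction attempt; returns True on success.
    lattice[x][y] holds 0 (empty) or a species label 1..N; hunger[x][y]
    counts successive unsuccessful predation attempts of the occupant."""
    n = len(lattice)
    # pick a random occupied site as the active individual
    while True:
        x, y = random.randrange(n), random.randrange(n)
        if lattice[x][y] != EMPTY:
            break
    # pick one of the four neighbours (periodic boundaries) as the passive site
    dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    px, py = (x + dx) % n, (y + dy) % n
    action = random.choices(("move", "reproduce", "predate"), (M, R, P))[0]
    if action == "move":
        lattice[x][y], lattice[px][py] = lattice[px][py], lattice[x][y]
        hunger[x][y], hunger[px][py] = hunger[px][py], hunger[x][y]
        return True
    if action == "reproduce" and lattice[px][py] == EMPTY:
        lattice[px][py] = lattice[x][y]
        hunger[px][py] = hunger[x][y]   # newborn inherits progenitor's counter
        return True
    if action == "predate":
        if lattice[px][py] not in (EMPTY, lattice[x][y]):
            lattice[px][py] = EMPTY     # successful predation resets the counter
            hunger[x][y] = 0
            return True
        hunger[x][y] += 1               # unsuccessful predation attempt
        if hunger[x][y] >= n_u:
            lattice[x][y] = EMPTY       # death by starvation
    return False
```

A generation then corresponds to repeating this update until ${\mathcal N}$ calls have returned success.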
The position $k_W[l]$ of the interface in each row $l$ may be found as the value that minimizes the sum \begin{equation} F(k_W[l]) = \sum_{k=1}^{N_x} \left(S_{kl}-ST[k-k_W[l]]\right)^2\, \end{equation} for each value of $l$. Here, $ST[x]$ is the step function, defined as $ST=1$ for $x<0$, $ST=0$ for $x=0$, and $ST=-1$ for $x>0$, and $S_{kl}=0$, $S_{kl} = +1$, or $S_{kl} = -1$ depending on whether the site of coordinates $(k,l)$ is empty, occupied by an individual of the red (1) species, or occupied by an individual of the blue (2) species. We shall follow refs. \cite{2017-Brown-PRE-96-012147, 2013-Roman-PRE-87-032148, 2012-Roman-JSMTE-2012-p07014}, and define the interface width as \begin{equation} W(t)=\sqrt{\frac{1}{N_y} \sum_{l=1}^{N_y} \left(k_W[l] - \langle k_W \rangle\right)^2}\,, \end{equation} where \begin{equation} \langle k_W \rangle = \frac{1}{N_y} \sum_{l=1}^{N_y} k_W[l] \,. \end{equation} \begin{figure}[!htb] \centering \includegraphics{fig3} \caption{Evolution of the average density $\rho_1$ of individuals of the red species ($1$) obtained using $1000$ realizations of the evolution of an initially circular domain containing individuals from that species surrounded by an outer domain containing individuals of the blue species ($2$), considering ${\mathcal N}_u= \infty$ (upper panel) and ${\mathcal N}_u=25$ (bottom panel). The solid red line shows the average value of $\rho_1$ as a function of the number of generations $t$, while the shaded region represents the sample standard deviation. Notice that the collapse of the circular domain, which takes place for ${\mathcal N}_u=\infty$, is not observed for ${\mathcal N}_u=25$.
Instead, the inset panels displaying snapshots of a single realization taken at three different times show that for ${\mathcal N}_u=25$ there is a significant departure from circular symmetry and that, after an initial shrinking stage, the domain may grow and split into separate subdomains.} \label{fig3} \end{figure} Figure \ref{fig1} illustrates the evolution of the width of an initially flat interface obtained from $1000$ realizations considering ${\mathcal N}_u=\infty$ (upper panel) and ${\mathcal N}_u=25$ (bottom panel). Notice that, after an initial transient stage, the growth of the width of the interfaces for ${\mathcal N}_u=25$ is much faster than in the standard ${\mathcal N}_u=\infty$ case --- in the latter case, the evolution approaches a scaling regime with $W \propto t^{0.18}$ at large $t$. This much faster growth may also be confirmed in the snapshots, obtained for single realizations of the ${\mathcal N}_u=\infty$ and ${\mathcal N}_u=25$ models, shown in the inset panels. The inset panels show the development of complex dynamical structures along the interfaces for ${\mathcal N}_u=25$, leading to rougher interfaces compared to the standard ${\mathcal N}_u=\infty$ case. It is also clear from the inset panels that for ${\mathcal N}_u=25$ death by starvation results in a relatively low constant average density of individuals away from the interfaces --- this density being set by the equilibrium between the mortality and reproduction rates. This is the key property of models with finite ${\mathcal N}_u$, which is responsible for the growth of the interface thickness and roughness with time observed in Fig. \ref{fig1}. Figure \ref{fig2} shows the average evolution of the width of an initially flat interface with time obtained from $1000$ realizations with ${\mathcal N}_u=\infty$ (upper panel) and ${\mathcal N}_u=25$ (bottom panel), considering an alternative definition of interface width.
In this case, the interface width $W_*$ is defined as the interval of $k$ for which the abundance of the two species is below $68\%$. Fig. \ref{fig2} shows not only that $W_*$ is much less sensitive to the box size than $W$, but also that the growth of $W_*$ with time observed in the ${\mathcal N}_u=25$ case is absent for ${\mathcal N}_u=\infty$. For ${\mathcal N}_u=25$ the evolution approaches a scaling regime with $W_* \propto t^{0.3}$ at large $t$. Although the two definitions of interface width are physically distinct --- $W_*$ being much less sensitive than $W$ to small-wavelength fluctuations which do not introduce large modifications to the interface profile at each row --- both evolve very differently in the ${\mathcal N}_u=25$ and ${\mathcal N}_u=\infty$ cases, thus capturing the impact of death by starvation on interface dynamics. \subsection{Initially circular interface} Let us now consider the dynamics of an initially circular interface. Figure \ref{fig3} illustrates the evolution of the average density $\rho_1$ of individuals of the red species ($1$) obtained using $1000$ realizations of the evolution of an initially circular domain containing individuals of that species surrounded by an outer domain containing individuals of the blue species ($2$), considering ${\mathcal N}_u=\infty$ (upper panel) and ${\mathcal N}_u=25$ (bottom panel). The solid red line shows the average value of $\rho_1$ as a function of the number of generations $t$, while the shaded region represents the sample standard deviation. The upper panel of Fig. \ref{fig3} shows that if ${\mathcal N}_u = \infty$ the initially circular domain always collapses. In this case, a standard curvature-dominated regime is recovered, with the domain wall area being roughly proportional to $t_c-t$, for $t \le t_c$ ($t_c$ being the time of collapse). This explains the approximately linear dependence of the average density of the inner species (1) on time. The bottom panel of Fig.
\ref{fig3} shows that, contrary to what happens for ${\mathcal N}_u = \infty$, in the ${\mathcal N}_u = 25$ case initially circular domains do not collapse despite the existence of an initial shrinking stage. Instead, the inset panels, displaying snapshots of a single realization taken at three different times, show that for ${\mathcal N}_u=25$ there is a significant departure from circular symmetry and that, after an initial shrinking stage, the initially circular domain may grow and split into separate subdomains. \begin{figure}[!htb] \centering \includegraphics{fig4} \caption{Evolution of the population network in the two species model for ${\mathcal N}_u=\infty$ (upper panels) and ${\mathcal N}_u=25$ (bottom panels). The snapshots of a $1000^2$ simulation of the two species model were taken after $1000$ (left panel), $4000$ (middle panel), and $16000$ generations (right panel). Notice that the growth of single-species domains, which takes place for ${\mathcal N}_u=\infty$, is not observed in the ${\mathcal N}_u=25$ case.} \label{fig4} \end{figure} \begin{figure}[!htb] \centering \includegraphics{fig5} \caption{The same as in Fig. \ref{fig4} but for the three species model.} \label{fig5} \end{figure} \section{Network Simulations \label{sec4}} In this section we shall consider the results of spatial stochastic numerical simulations with random initial conditions in two spatial dimensions. At the beginning of the simulation each site is either occupied by an individual of any of the $N=2,3$ species or left empty with a uniform discrete probability of $1/(N+1)$ (unless stated otherwise, the following parameter values are assumed: $m=0.5$, $r=0.3$, $p=0.2$). The results of $1000^2$ simulations of the two and three species models for ${\mathcal N}_u=\infty$ (upper panels) and ${\mathcal N}_u=25$ (bottom panels) are illustrated in Figs. \ref{fig4} and \ref{fig5}, respectively.
The snapshots were taken after $1000$ (left panels), $4000$ (middle panels), and $16000$ generations (right panels). \begin{figure}[!htb] \centering \includegraphics{fig6} \caption{Evolution of $\langle \rho_\otimes^{-1} \rangle$ with time ($\rho_\otimes^{-1}$ being proportional to the characteristic length $L$ of the network). Notice that for ${\mathcal N}_u=\infty$, after an initial transient regime, $\langle \rho_\otimes^{-1} \rangle$ grows proportionally to $t^{1/2}$. On the other hand, for ${\mathcal N}_u=25$ the network attains a regime in which $\langle \rho_\otimes \rangle \sim {\rm const}$, indicating that the average characteristic length scale of the network becomes roughly constant in time. In both cases the simulations run for $50000$ generations on a $5000^2$ grid. The results were averaged over $25$ simulations.} \label{fig6} \end{figure} In the absence of death by starvation (${\mathcal N}_u=\infty$) there are almost no empty sites deep inside the domains. Empty sites are created only when predation occurs on the interface between competing domains. Empty sites can move around as a result of mobility interactions, but are eventually filled as a result of reproduction. In the ${\mathcal N}_u=\infty$ case, the dynamics is curvature dominated, with the velocity of the interfaces being roughly proportional to their curvature. This has been shown to lead to a population network evolution whose characteristic lengthscale $L$ is roughly proportional to $t^{1/2}$ \cite{Avelino-PRE-86-031119}. This growth of the characteristic scale of the network can be observed in the upper panels of Figs. \ref{fig4} and \ref{fig5}, for simulations with two and three species respectively. Eventually, for ${\mathcal N}_u=\infty$ the size of the domains becomes of the order of the box size and the coexistence is lost (as illustrated in Fig. \ref{fig7} considering a simulation on a smaller grid). On the other hand, the bottom panels of Figs. 
\ref{fig4} and \ref{fig5} show that, in the presence of a mortality rate associated with insufficient predation (${\mathcal N}_u=25$), the average density of individuals has a peak in the interface regions, decreasing towards the interior of the domains and reaching an asymptotic value determined by the equilibrium between death and reproduction. This is responsible for the dynamical behaviour of initially flat and circular interfaces found in the previous sections, and it is the reason why the characteristic scale of the network does not change significantly from $t=1000$ to $t=16000$, as observed in the bottom panels of Figs. \ref{fig4} and \ref{fig5}. As discussed before, for ${\mathcal N}_u=\infty$ the empty sites are mainly concentrated on the borders of competing domains. The thickness of the interfaces of empty sites is roughly constant, which implies that the total interface length $L_T$ is roughly proportional to the number of empty spaces $I_\otimes$. On the other hand, the number of domains is roughly proportional to the ratio between the total area $A$ and the average domain area $L^2$, where $L$ is the characteristic length scale of the network. It is also proportional to the ratio between the total interface length $L_T$ inside the simulation box and the average domain perimeter (which is proportional to $L$). Therefore, $A/L^2 \propto L_T /L$, thus implying that $L \propto A/L_T \propto {\mathcal N}/I_\otimes = \rho_\otimes^{-1}$ \cite{Avelino-PRE-86-031119}. Hence, for ${\mathcal N}_u=\infty$ the characteristic length scale of the network is inversely proportional to the number of empty spaces. \begin{figure}[!htb] \centering \includegraphics{fig7} \caption{The phase space evolution for a single realization of the standard (${\mathcal N}_u=\infty$) and starvation (${\mathcal N}_u=25$) models with two and three species (top and bottom panels, respectively). 
Although the initial conditions are the same for both standard and starvation models, long lasting coexistence only occurs in the starvation model. In both cases the simulations run for 10000 generations on a $250^2$ grid.} \label{fig7} \end{figure} Figure \ref{fig6} depicts the evolution of $\langle \rho_\otimes^{-1} \rangle$ (the angle brackets represent an ensemble average) with time for ${\mathcal N}_u=\infty$ (solid green line) and ${\mathcal N}_u=25$ (solid magenta line) --- $\rho_\otimes^{-1}$ being proportional to the characteristic length $L$ of the network. In both cases the simulations run for $50000$ generations on a $5000^2$ grid. The results were averaged over $25$ simulations. Figure \ref{fig6} shows that in the absence of death by starvation (${\mathcal N}_u=\infty$), and for $t \gtrsim 50$, the characteristic length scale of the network grows with time as $\langle L \rangle \propto \langle \rho_\otimes^{-1} \rangle \propto t^{1/2}$. On the other hand, in the ${\mathcal N}_u=25$ case the network attains a regime in which $\langle \rho_\otimes \rangle \sim \rm const$, indicating that the average characteristic length scale $\langle L \rangle$ of the network becomes roughly constant in time for $t \gtrsim 200$. Figure \ref{fig7} shows the phase space evolution for a single realization of the standard (${\mathcal N}_u=\infty$) and starvation (${\mathcal N}_u=25$) models with two and three species (top and bottom panels, respectively). The initial conditions are the same for both standard and starvation models and, in both cases, the simulations run for 10000 generations on a $250^2$ grid. Figure \ref{fig7} confirms that long lasting coexistence only occurs in the starvation model (${\mathcal N}_u=25$). 
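Given the relation $L \propto {\mathcal N}/I_\otimes = \rho_\otimes^{-1}$ derived above, the characteristic length of a simulated network can be estimated, up to a constant factor, directly from the empty-site count. A minimal sketch with a toy configuration for which the answer is known; the function name and test lattice are ours:

```python
import numpy as np

def characteristic_length(lattice, empty=0):
    """Estimate L (up to a constant factor) as rho_empty^{-1} = N_sites / I_empty,
    valid when the empty sites trace the interfaces between competing domains."""
    n_empty = np.count_nonzero(lattice == empty)
    if n_empty == 0:
        return np.inf  # no interfaces left: a single-species box
    return lattice.size / n_empty

# Toy check: two domains of width ~50 separated by a 2-site-wide empty strip.
lat = np.ones((100, 100), dtype=int)
lat[:, 50:] = 2
lat[:, 49:51] = 0
L_est = characteristic_length(lat)   # 10000 / 200 = 50.0
```

The estimate agrees with the actual domain width of the toy configuration, as expected from the proportionality argument.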
\begin{figure}[!htb] \centering \includegraphics{fig8} \caption{Comparison between the values of the density of individuals $\rho_D$ deep inside the domains obtained using the analytical approximation given by Eq. (\ref{rhod}) (dashed black line) and those obtained from the numerical results (blue dots connected with a solid line) for various values of $p/r$.} \label{fig8} \end{figure} \begin{figure}[!htb] \centering \includegraphics{fig9} \caption{Probability of extinction of at least one species as a function of $p/r$ (assuming that $m=0.5$ and $p+r+m=1$). The results were obtained from an average of $1000$ $200^2$ simulations with two species ($N=2$) starting from random initial conditions.} \label{fig9} \end{figure} Far from the borders of sufficiently large spatial domains occupied by a single species, individuals die at each predation attempt, independently of the value of ${\mathcal N}_u$. On the other hand, an individual is born at each successful reproduction attempt (an unsuccessful reproduction attempt will occur whenever the passive is an occupied site). Assuming that the probability of the passive being an empty site is equal to $1-\rho_D$, where $\rho_D$ is the density of individuals deep inside the domains, the equilibrium condition may be written as \begin{equation} p=r(1-\rho_{D})\,, \end{equation} or, equivalently, \begin{equation} \rho_{D}=1-\frac{p}{r}\,. \label{rhod} \end{equation} According to Eq. (\ref{rhod}) equilibrium is only possible if $p<r$; if $p>r$ then $\rho_D=0$. In practice, the probability of a site far from the borders being occupied is not independent of the distribution of individuals in neighbouring sites, and Eq. (\ref{rhod}) is only approximately valid. Fig. \ref{fig8} shows the comparison between the values of $\rho_D$ obtained using the analytical approximation given by Eq. 
(\ref{rhod}) (dashed black line) with the corresponding numerical results (blue dots connected with a solid line) for various values of $p/r$ (assuming that $m=0.5$ and $p+r+m=1$). Fig. \ref{fig8} shows that the analytical fit is almost perfect for small values of $p/r$ ($p/r < 0.5$), but that the numerical results deviate from the analytical prediction for larger values of $p/r$. In particular, the maximum value of $p/r$ consistent with $\rho_D \neq 0$ ($p/r \sim 0.75$) is slightly smaller than the analytical prediction ($p/r = 1$). Fig. \ref{fig9} shows the probability of extinction $P_{\rm ext}$ of at least one species as a function of $p/r$ for ${\mathcal N}_u=2$ (green dots connected with a solid line) and ${\mathcal N}_u=25$ (magenta squares connected with a solid line). The results were obtained from an average of $1000$ $200^2$ simulations with two species ($N=2$) starting from random initial conditions, again assuming that $m=0.5$ and $p+r+m=1$ (we verified that the probability profiles for $N=3$ are almost identical to those obtained for $N=2$). Fig. \ref{fig9} shows that death by starvation contributes to the preservation of coexistence, provided that $p/r$ is sufficiently low. The similarity of the probability profiles obtained with ${\mathcal N}_u=2$ and ${\mathcal N}_u=25$ (except for a small shift in $p/r$) indicates that the main qualitative results obtained in this paper do not depend on the specific choice of ${\mathcal N}_u$. For fixed values of $m$, $r$ and $p$, the asymptotic characteristic length scale of the network is larger for larger values of ${\mathcal N}_u$ (note that for ${\mathcal N}_u=\infty$ the value of $L$ would keep growing indefinitely). Therefore, $L$ becomes larger for ${\mathcal N}_u=25$ than for ${\mathcal N}_u=2$. This is the reason why the interval of $p/r$ allowing for coexistence is broader for ${\mathcal N}_u=2$ than for ${\mathcal N}_u=25$ in simulations performed in a finite box. 
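The equilibrium condition of Eq. (\ref{rhod}) corresponds to the fixed point of the mean-field balance $\dot\rho = r\rho(1-\rho) - p\rho$; this continuous-time balance is our extrapolation of the argument in the text, and the step size, duration, and initial condition below are illustrative. A quick Euler integration recovers $\rho_D = 1 - p/r$ for $p<r$ and $\rho_D = 0$ otherwise:

```python
def rho_deep(p, r, rho0=0.5, dt=0.01, steps=50000):
    """Euler-integrate the mean-field balance d(rho)/dt = r*rho*(1-rho) - p*rho.
    Fixed point: rho = 1 - p/r for p < r, and rho = 0 for p >= r."""
    rho = rho0
    for _ in range(steps):
        rho += dt * (r * rho * (1.0 - rho) - p * rho)
    return rho

# Default simulation parameters, r = 0.3 and p = 0.2: rho_D = 1 - 2/3 = 1/3.
rho_eq = rho_deep(0.2, 0.3)
```

For $p > r$ the same integration drives $\rho$ to zero, matching the statement that no equilibrium with $\rho_D \neq 0$ exists in that regime.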
\section{CONCLUSIONS \label{conclusion}} In this letter we have investigated the dynamics of spatial stochastic May-Leonard models in two spatial dimensions, considering the death of individuals by starvation after a given number of successive unsuccessful predation attempts as a new ingredient. We have considered models with mutual predation interactions between any two individuals of different species which, in their standard version, generally lead to the loss of coexistence in a finite time. By considering numerical simulations with two and three species, as well as analytical arguments, we have demonstrated that death by starvation can play an important role in the dynamics of population networks. In particular, we have shown that, if the reproduction rate is sufficiently high, death by starvation may be responsible for the preservation of coexistence. \acknowledgments P.P.A. acknowledges the support by FEDER --- Fundo Europeu de Desenvolvimento Regional funds through the COMPETE 2020 --- Operational Programme for Competitiveness and Internationalisation (POCI), and by Portuguese funds through FCT --- Funda\c c\~ao para a Ci\^encia e a Tecnologia --- in the framework of the project POCI-01-0145-FEDER-031938 and the FCT grant UID/FIS/04434/2013. B.F.O. thanks Funda\c c\~ao Arauc\'aria, Fapern, FCT and INCT-FCx (CNPq/FAPESP) for financial and computational support.
\section{Magnetic Reconnection in Collisionless and Collisional Fluids} A magnetic field embedded in a perfectly conducting fluid preserves its topology for all time (Parker 1979). Although ionized astrophysical objects, like stars and galactic disks, are almost perfectly conducting, they show indications of changes in topology, ``magnetic reconnection'', on dynamical time scales (Parker 1970, Lovelace 1976, Priest \& Forbes 2002). Reconnection can be observed directly in the solar corona (Innes et al. 1997, Yokoyama \& Shibata 1995, Masuda et al. 1994), but can also be inferred from the existence of large scale dynamo activity inside stellar interiors (Parker 1993, Ossendrijver 2003). Solar flares (Sturrock 1966) and $\gamma$-ray bursts (Fox et al. 2005, Galama et al. 1998) are usually associated with magnetic reconnection. Previous work has concentrated on showing how reconnection can be rapid in plasmas with very small collisional rates (Shay et al. 1998, Drake 2001, Drake et al. 2006, Daughton et al. 2006), which substantially constrains astrophysical applications of the corresponding reconnection models. We note that if magnetic reconnection is slow in some astrophysical environments, this automatically means that the results of present day numerical simulations, in which the reconnection is inevitably fast due to numerical diffusivity, do not correctly represent magnetic field dynamics in these environments. If, for instance, reconnection were slow in collisional media, this would entail the conclusion that the entire crop of interstellar, protostellar and stellar MHD calculations is astrophysically irrelevant. Here we present numerical evidence, based on three dimensional simulations, that reconnection in a turbulent fluid occurs at a speed comparable to the rms velocity of the turbulence, regardless of either the value of the resistivity or the degree of collisionality. 
In particular, this is true for turbulent pressures much weaker than the magnetic field pressure so that the magnetic field lines are only slightly bent by the turbulence. These results are consistent with the proposal by Lazarian \& Vishniac (1999, henceforth LV99) that reconnection is controlled by the stochastic diffusion of magnetic field lines, which produces a broad outflow of plasma from the reconnection zone. This work implies that reconnection in a turbulent fluid typically takes place in approximately a single eddy turnover time, with broad implications for dynamo activity (Parker 1970, 1993, Stix 2000) and particle acceleration throughout the universe (de Gouveia dal Pino \& Lazarian 2003, 2005, Lazarian 2005, Drake et al. 2006). Astrophysical plasmas are often highly ionized and highly magnetized (Parker 1970). The evolution of the magnetic field in a highly conducting fluid can be described by a simple version of the induction equation \begin{equation} \frac{\partial \vec{B}}{\partial t} = \nabla \times \left( \vec{v} \times \vec{B} - \eta \nabla \times \vec{B} \right) , \end{equation} where $\vec{B}$ is the magnetic field, $\vec{v}$ is the velocity field, and $\eta$ is the resistivity coefficient. Under most circumstances this is adequate for discussing the evolution of magnetic field in an astrophysical plasma. When the dissipative term on the right hand side is small, as is implied by simple dimensional estimates, the magnetic flux through any fluid element is constant in time and the field topology is an invariant of motion. On the other hand, reconnection is observed in the solar corona and chromosphere (Innes et al. 1997, Yokoyama \& Shibata 1995, Masuda et al. 
1994, Ciaravella \& Raymond 2008), its presence is required to explain dynamo action in stars and galactic disks (Parker 1970, 1993), and the violent relaxation of magnetic fields following a change in topology is a prime candidate for the acceleration of high energy particles (de Gouveia Dal Pino \& Lazarian 2003, henceforth GL03, 2005, Lazarian 2005, Drake et al. 2006, Lazarian \& Opher 2009, Drake et al. 2010) in the universe. Quantitative general estimates for the speed of reconnection start with two adjacent volumes with different large scale magnetic field directions (Sweet 1958, Parker 1957). The speed of reconnection, i.e. the speed at which inflowing magnetic field is annihilated by ohmic dissipation, is roughly $\eta/\Delta$, where $\Delta$ is the width of the transition zone (see Figure 1). Since the entrained plasma follows the local field lines, and exits through the edges of the current sheet at roughly the Alfven speed, $V_A\equiv B/(4\pi \rho)^{1/2}$, mass conservation implies that the resulting reconnection speed is a tiny fraction of the Alfven speed, $V_{rec}\sim V_A\,\Delta/L$, where $L$ is the length of the current sheet. When the current sheet is long and the reconnection speed is slow this is referred to as Sweet-Parker reconnection. Observations require a speed close to $V_A$, so this expression implies that $L\sim \Delta$, i.e. that the magnetic field lines reconnect in an ``X point''. The first model with a stable X point was proposed by Petschek (1964). In this case the reconnection speed may have little or no dependence on the resistivity. The X point configuration is known to be unstable to collapse into a sheet in the MHD regime (see Biskamp 1996), but in a collisionless plasma it can be maintained through coupling to a dispersive plasma mode (Sturrock 1966). This leads to fast reconnection, but with important limitations. 
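For concreteness, the Sweet-Parker speed above can be rewritten as $V_{rec} \simeq V_A S^{-1/2}$, where $S = L V_A/\eta$ is the Lundquist number. A sketch with order-of-magnitude coronal numbers; the numerical values are illustrative assumptions of ours, not taken from the text:

```python
def sweet_parker_speed(v_alfven, length, eta):
    """Sweet-Parker reconnection speed: V_rec = V_A / sqrt(S), with S = L*V_A/eta."""
    lundquist = length * v_alfven / eta
    return v_alfven / lundquist**0.5

# Illustrative coronal numbers: V_A ~ 1e6 m/s, L ~ 1e7 m, eta ~ 1 m^2/s.
v_rec = sweet_parker_speed(1e6, 1e7, 1.0)   # S ~ 1e13, so V_rec/V_A ~ 3e-7
```

Even for these rough numbers the predicted rate is some seven orders of magnitude below $V_A$, which is why Sweet-Parker reconnection cannot account for observed flare time scales.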
This process has a limited astrophysical applicability as it cannot be important for most phases of the interstellar medium (see Draine \& Lazarian 1998 for a list of the idealized phases), let alone dense plasmas, such as stellar interiors and the denser parts of accretion disks. In addition, it can only work if the magnetic fields are not wound around each other, producing a saddle shaped current sheet. In that case the energy required to open up an X point is prohibitive. The saddle current sheet is generic for non-parallel flux tubes trying to pass through each other. If such a passage is seriously constrained, magnetized, highly conducting astrophysical fluids should behave more like jello than like normal fluids. Finally, the traditional reconnection setup does not include ubiquitous astrophysical turbulence\footnote{The setups where instabilities play an important role include those of Shimizu et al. (2009a,b). For sufficiently high resolution, those setups are expected to demonstrate turbulence. Turbulence initiation is also expected in the presence of plasmoid ejection (Shibata \& Tanuma 2001). Numerical viscosity constrains our ability to sustain turbulence via reconnection, however.} (see Armstrong, Rickett \& Spangler 1994, Elmegreen \& Scalo 2004, McKee \& Ostriker 2007, Haverkorn \& Lazarian 2009, Chepurnov \& Lazarian 2010). Fortunately, accounting for turbulence provides another way of accelerating reconnection. Indeed, an alternative approach is to consider ways to decouple the width of the plasma outflow region from $\Delta$. The plasma is constrained to move along magnetic field lines, but not necessarily in the direction of the mean magnetic field. In a turbulent medium the two are decoupled, and fluid elements that have some small initial separation will be separated by a large eddy scale or more after moving the length of the current sheet. 
As long as this separation is larger than the width of the current sheet, the result will not depend on $\eta$. \begin{figure}[!t] \begin{center} \includegraphics[width=1.0 \columnwidth]{fig1.eps} \caption{{\it Upper plot}: Sweet-Parker model of reconnection. The outflow is limited by a thin slot $\Delta$, which is determined by Ohmic diffusivity. The other scale is an astrophysical scale $L\gg \Delta$. {\it Middle plot}: Reconnection of a weakly stochastic magnetic field according to LV99. The model accounts for the stochasticity of magnetic field lines; the outflow is limited by the diffusion of magnetic field lines, which depends on field line stochasticity. {\it Lower plot}: An individual small scale reconnection region. The reconnection over small patches of magnetic field determines the local reconnection rate. The global reconnection rate is substantially larger as many independent patches come together. From Lazarian et al. 2004.} \label{fig_rec} \end{center} \end{figure} In LV99 we introduced a model that included the effects of magnetic field line wandering (see Figure 1). The model relies on the nature of three-dimensional magnetic field wandering in turbulence. This nature is different in three and two dimensions, which provides the major difference between the LV99 model and the earlier attempts to solve the problem of magnetic reconnection by appealing to turbulence (Matthaeus \& Lamkin 1985). The effects of compressibility and heating, which were thought to be important in the earlier studies (Matthaeus \& Lamkin 1985, 1986), do not play a role in the LV99 model either. The model is applicable to any weakly perturbed magnetized fluid, irrespective of the degree to which the plasma is collisional or collisionless (cf. Shay et al. 1998). Two effects are most important for understanding the nature of reconnection in LV99. 
First of all, in three dimensions bundles of magnetic field lines can enter the reconnection region and reconnect there independently (see Figure~1), in contrast to the two-dimensional picture, where in Sweet-Parker reconnection the process is artificially constrained. Secondly, the nature of magnetic field stochasticity, and therefore of magnetic field wandering (which determines the outflow thickness, as illustrated in Figure~1), is very different in 2D and in the real 3D world (LV99). In other words, by removing the artificial constraints on the dimensionality of the reconnection region and on the magnetic field being absolutely straight, LV99 explores real-world astrophysical reconnection. Our calculations in LV99 showed that the resulting reconnection rate is limited only by the width of the outflow region. This proposal, called ``stochastic reconnection'', leads to reconnection speeds close to the turbulent velocity in the fluid. More precisely, assuming isotropically driven turbulence characterized by an injection scale, $l$, smaller than the current sheet length, we find \begin{equation} V_{rec}\approx \frac{u_l^2}{V_A}\left(l/L\right)^{1/2}\approx u_{turb}\left(l/L\right)^{1/2}, \label{recon1} \end{equation} where $u_l$ is the velocity at the driving scale and $u_{turb}$ is the velocity of the largest eddies of the strong turbulent cascade. Note that here ``strong'' means only that the eddies decay through nonlinear interactions in an eddy turnover time (see more discussion in LV99). All the motions are weak in the sense that the magnetic field lines are only weakly perturbed. It is useful to rewrite this in terms of the power injection rate $P$. As the perturbations on the injection scale of turbulence are assumed to have velocities $u_l<V_A$, the turbulence is weak at large scales. 
Therefore, the relation between the power and the injection velocities is different from the usual Kolmogorov estimate; namely, in the case of weak turbulence $P\sim u_l^4/(lV_A)$ (LV99). Thus we get \begin{equation} V_{rec}\approx \left(\frac{P}{LV_A}\right)^{1/2} l, \label{recon2} \end{equation} where $l$ is the length of the turbulent eddies parallel to the large scale magnetic field lines as well as the injection scale. The reconnection velocity given by equation (\ref{recon2}) does not depend on resistivity or plasma effects. Therefore, for a sufficiently high level of turbulence, we expect both collisionless and collisional fluids to reconnect at the same rate. \section{Testing of the Lazarian \& Vishniac (1999) Model} \begin{figure*} \center \includegraphics[width=0.3\textwidth]{fig2a.ps} \includegraphics[width=0.3\textwidth]{fig2b.ps} \caption{{\it Left panel}: Current intensity and magnetic field configuration during stochastic reconnection. We show a slice through the middle of the computational box in the xy plane after twelve dynamical times for a typical run. The shared component of the field is perpendicular to the page. The intensity and direction of the magnetic field is represented by the length and direction of the arrows. The color bar gives the intensity of the current. The reversal in $B_x$ is confined to the vicinity of y=0 but the current sheet is strongly disordered, with features that extend far from the zone of reversal. {\it Right panel}: Representation of the magnetic field in the reconnection zone with textures. \label{fig:top_turb}} \end{figure*} Here we describe the results of a series of three dimensional numerical simulations aimed at adding turbulence to the simplest reconnection scenario and testing equation (\ref{recon2}). We take two regions with strongly differing magnetic fields lying next to one another. 
The simulations are periodic in the direction of the shared field (the z axis) and are open in the reversed direction (the x axis). The external gas pressure is uniform and the magnetic fields at the top and bottom of the box are taken to be the specified external fields plus small perturbations to allow for outgoing waves. The grid size in the simulations varied from $256\times512\times256$ to $512\times1024\times512$, so that the top and bottom of the box are far away from the current sheet and the region of driven turbulence around it. At the sides of the box where outflow is expected, the derivatives of the dynamical variables are set to zero. A complete description of the numerical methodology can be found in Kowal et al. (2009). All our simulations are allowed to evolve for seven Alfven crossing times without turbulent forcing. During this time they develop the expected Sweet-Parker current sheet configuration with slow reconnection. Subsequently we turn on isotropic turbulent forcing inside a volume centered in the midplane (in the xz plane) of the simulation box and extending outwards by a quarter of the box size. The turbulence reaches its full amplitude around eight crossing times and is stationary thereafter. In Figure 2 we see the current density on an xy slice of the computational box once the turbulence is well developed. As expected, we see that the narrow stationary current sheet characteristic of Sweet-Parker reconnection is replaced by a chaotic structure, with numerous narrow peaks in the current density. Clearly the presence of turbulence has a dramatic impact on the structure of the reconnection zone. In addition, we see numerous faint features indicating weak reconnection between adjacent turbulent eddies. \begin{figure} \center \includegraphics[width=0.9\columnwidth]{fig3b.eps} \caption{Reconnection speed versus input power for the driven turbulence. 
We show the reconnection speed, defined by equation (4), plotted against the input power for an injection wavenumber equal to 8 (i.e. a wavelength equal to one eighth of the box size) and a uniform resistivity $\eta_u$. The dashed line is a fit to the predicted dependence of $P^{1/2}$ (see eq. (3)). The horizontal line shows the laminar reconnection rates for each of the simulations before the turbulent forcing started. Here the uncertainty in the time averages is indicated by the size of the symbols and the variances are shown by the error bars. \label{pow_dep}} \end{figure} The speed of reconnection in three dimensions can be hard to define without explicit evaluation of the magnetic field topology. However, in this simple case we can define it as the rate at which the $x$ component of the magnetic field disappears. More precisely, we consider a yz slice of the simulation, passing through the center of the box. The rate of change of the area integral of $|B_x|$ is its flux across the boundaries of the box minus the rate at which flux is annihilated through reconnection (see more discussion in Kowal et al. 2009) \begin{equation} \partial_t\left(\int|B_x|dzdy\right)=\oint {\rm sign}(B_x)\,\vec{E}\cdot d\vec{l}-2V_{rec}B_{x,ext}L_z \label{measure} \end{equation} where the electric field is $\vec{E}=\vec{v}\times \vec{B} -\eta \vec{j}$, $B_{x,ext}$ is the absolute value of $B_x$ far from the current sheet and $L_z$ is the width of the box in the $z$ direction. This follows from the induction equation under the assumption that the turbulence is too weak to lead to local field reversals and that the stresses at the boundaries are too weak to produce significant field bending there. In other words, fields in the $x$ direction are advected through the top and bottom of the box, and disappear only through reconnection. Since we have assumed periodic boundary conditions in the $z$ direction the boundary integral on the right hand side is only taken over the top and bottom of the box. 
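Neglecting the boundary flux term, eq. (\ref{measure}) yields a simple estimator of $V_{rec}$ from the decay of $\int |B_x|\,dz\,dy$ between two snapshots of the mid-plane slice. A numpy sketch with a synthetic field; the function name and the test field are ours:

```python
import numpy as np

def reconnection_speed(bx_t0, bx_t1, dt, dy, dz, bx_ext, lz):
    """V_rec from eq. (4) with the boundary flux term neglected:
    d/dt int |B_x| dz dy = -2 * V_rec * B_ext * L_z."""
    dflux = (np.sum(np.abs(bx_t1)) - np.sum(np.abs(bx_t0))) * dy * dz
    return -dflux / (dt * 2.0 * bx_ext * lz)

# Synthetic check: a uniform |B_x| = 1 slice losing 1% of its flux per unit time.
ny, nz = 64, 64
bx0 = np.ones((ny, nz))
bx1 = 0.99 * bx0
v_rec = reconnection_speed(bx0, bx1, dt=1.0, dy=1.0, dz=1.0, bx_ext=1.0, lz=float(nz))
```

For the synthetic slice the estimator returns $0.32$ in code units, i.e. the lost flux divided by $2 B_{x,ext} L_z \, dt$, as eq. (4) prescribes.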
By design this definition includes contributions to the reconnection speed from contracting loops, where Ohmic reconnection has occurred elsewhere in the box and $|B_x|$ decreases as the end of a reconnected loop is pulled through the plane of integration. It is worth noting that this estimate is roughly consistent with simply measuring the average influx of magnetic field lines through the top and bottom of the computational box and equating the mean inflow velocity with the reconnection speed. Following equation (\ref{measure}) we can evaluate the reconnection speed for varying strengths and scales of turbulence and varying resistivity. In Figure~\ref{pow_dep} we see the results for varying amounts of input power, for fixed resistivity and injection scale, as well as for the case of no turbulence at all. The line drawn through the simulation points is for the predicted scaling with the square root of the input power. The agreement between equation (\ref{recon2}) and Figure~\ref{pow_dep} is encouraging, but does not address the most important aspect of stochastic reconnection, i.e. its insensitivity to $\eta$. \begin{figure} \center \includegraphics[width=0.9\columnwidth]{fig4.eps} \caption{Reconnection speed versus resistivity. We show the reconnection speed plotted against the uniform resistivity of the simulation for an injection wavenumber of 8 and an injected power of one. We include both the laminar reconnection speeds, using the hollow symbols, fit to the expected dependence of $\eta_u^{1/2}$, and the stochastic reconnection speeds, using the filled symbols. As before the symbol sizes indicate the uncertainty in the average reconnection speeds and the error bars indicate the variance. We included simulations with large, $B_z=1$, and small, $B_z=0.1$, guide fields. \label{ueta_dep}} \end{figure} In Figure~\ref{ueta_dep} we plot the results for fixed input power and scale, while varying the background resistivity. 
In this case $\eta$ is taken to be uniform, except near the edges of the computational grid where it falls to zero over five grid points. This was done to eliminate edge effects for large values of the resistivity. We see from Figure~\ref{ueta_dep} that the points for laminar reconnection scale as $\sqrt{\eta}$, the expected scaling for Sweet-Parker reconnection. In contrast, the points for reconnection in a turbulent medium do not depend on the resistivity at all. In summary, we have tested the model of stochastic reconnection in a simple geometry meant to approximate the circumstances of generic magnetic reconnection in the universe. Our results are consistent with the mechanism described by LV99. The implication is that turbulent fluids in the universe, including the interstellar medium, the convection zones of stars, and accretion disks, have reconnection speeds close to the local turbulent velocity, regardless of the local value of resistivity. Magnetic fields in turbulent fluids can change their topology on a dynamical time scale. In Kowal et al. (2009) we also studied the dependence of the reconnection on the anomalous resistivity, which increases the effective resistivity for high current densities. The anomalous resistivity can be used as a proxy for plasma effects, e.g. collisionless effects, in reconnection. While it enhances the local speed of individual reconnection events, the results in Kowal et al. (2009) testify that the total reconnection rate does not change. Any numerical study has to address the issue of possible numerical effects on the results. We show the dependence of the reconnection rate on the numerical resolution in Figure~\ref{fig:reso_dep}. The reconnection rate increases with increasing resolution, which testifies that the fast reconnection is not due to numerical effects. Indeed, higher numerical reconnection is expected for lower resolution simulations. 
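As a consistency check, eqs. (\ref{recon1}) and (\ref{recon2}) should agree once the weak-turbulence relation $P \sim u_l^4/(l V_A)$ quoted above is substituted. A quick numerical verification, with values in arbitrary units of our choosing:

```python
def v_rec_from_velocity(u_l, v_a, l, L):
    """Eq. (2): V_rec ~ (u_l^2 / V_A) * (l/L)^(1/2)."""
    return (u_l**2 / v_a) * (l / L)**0.5

def v_rec_from_power(P, v_a, l, L):
    """Eq. (3): V_rec ~ (P / (L * V_A))^(1/2) * l."""
    return (P / (L * v_a))**0.5 * l

# Arbitrary sub-Alfvenic driving: u_l < V_A, injection scale l < sheet length L.
u_l, v_a, l, L = 0.1, 1.0, 0.125, 1.0
P = u_l**4 / (l * v_a)      # weak-turbulence injection power, P ~ u_l^4 / (l V_A)
```

Substituting $P = u_l^4/(l V_A)$ into eq. (3) gives $(u_l^4/(l V_A L V_A))^{1/2}\, l = (u_l^2/V_A)(l/L)^{1/2}$, so the two functions return the same value to floating-point precision.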
\begin{figure} \center \includegraphics[width=0.9\columnwidth]{fig5.eps} \caption{Dependence of the reconnection rate on the numerical resolution. If the fast reconnection were due to as yet unclear numerical effects on small scales, we would expect to see an increase of the reconnection rate with decreasing numerical box size. If anything, the actual dependence of the reconnection rate on the box size shows the opposite trend. \label{fig:reso_dep}} \end{figure} Finally, a few words are in order regarding our turbulence driving. We drive our turbulence solenoidally to minimize the effects of compression, which does not play a role in the LV99 model. The turbulence driven in the volume around the reconnection layer corresponds to the case of astrophysical turbulence, which is also volume-driven. On the contrary, turbulence driven at the box boundaries would produce spatially inhomogeneous imbalanced turbulence for which we do not have analytical predictions (see discussion of such turbulence in Beresnyak \& Lazarian 2009). We stress that it is not the sheer size of our numerical simulations, but the correspondence of the observed scalings to those predicted in LV99, that allows us to claim that 3D reconnection is fast in the presence of turbulence. \section{Acceleration of Cosmic Rays} \begin{figure}[!t] \includegraphics[width=\columnwidth]{fig6.eps} \caption{ Cosmic rays spiral about a reconnected magnetic field line and bounce back at points A and B. The reconnected regions move towards each other with the reconnection velocity $V_R$. The advection of cosmic rays entrained on magnetic field lines happens at the outflow velocity, which is in most cases of the order of $V_A$. Bouncing at points A and B happens because of either the streaming instability induced by energetic particles or magnetic turbulence in the reconnection region. 
In reality, the outflow region gets filled in by the oppositely moving tubes of reconnected flux which collide only to repeat on a smaller scale the pattern of the larger scale reconnection. From Lazarian (2005).} \label{fig_recon} \end{figure} In what follows we discuss the first order Fermi acceleration which arises from volume-filling reconnection\footnote{We would like to stress that Figure 1 exemplifies only the first moment of reconnection when the fluxes are just brought together. As the reconnection develops, the volume of thickness $\Delta$ gets filled with the reconnected 3D flux ropes moving in the opposite directions.}. LV99 presented such a model of reconnection, and observations of the Solar magnetic field reconnection support the volume-filled idea (Ciaravella \& Raymond 2008). Figure~\ref{fig_recon} exemplifies the simplest realization of the acceleration within the reconnection region expected within the LV99 model. As a particle bounces back and forth between converging magnetic fluxes, it gains energy through the first order Fermi acceleration described in de Gouveia dal Pino \& Lazarian (2003, 2005, henceforth GL05) (see also Lazarian 2005). To derive the energy spectrum of particles one can use the routine way of dealing with the first order Fermi acceleration in shocks (see Longair 1992). Consider the process of acceleration of $M_0$ particles with the initial energy $E_0$. If a particle gets energy $\beta E_0$ after a collision, its energy after $m$ collisions is $\beta^m E_0$. At the same time, if the probability of a particle remaining within the accelerating region is $P$, after $m$ collisions the number of particles becomes $P^m M_0$.
Thus $\ln (M/M_0)/\ln(E/E_0)=\ln P/\ln\beta$ and \begin{equation} \frac{M}{M_0}=\left(\frac{E}{E_0}\right)^{\ln P/\ln\beta} \end{equation} For the stationary state of accelerated particles, the number $M$ is the number of particles having energy equal to or larger than $E$, as some of these particles are not lost and are accelerated further. Therefore: \begin{equation} N(E)dE=const\times E^{-1+(\ln P/\ln\beta)} dE \label{NE} \end{equation} To determine $P$ and $\beta$ consider the following process. The particles from the upper reconnection region see the lower reconnection region moving toward them with the velocity $2V_{R}$ (see Figure~\ref{fig_recon}). If a particle from the upper region enters at an angle $\theta$ into the lower region, the expected energy gain of the particle is $\delta E/E=2V_{R}\cos\theta/c$. For an isotropic distribution of particles the probability of entering at angle $\theta$ is $p(\theta)d\theta=2\sin\theta\cos\theta\, d\theta$, and therefore the average energy gain per crossing of the reconnection region is \begin{equation} \langle \delta E/E \rangle =\frac{2V_{R}}{c}\int^{\pi/2}_{0} 2\cos^2\theta \sin\theta\, d\theta=\frac{4}{3}\frac{V_{R}}{c} \end{equation} An acceleration cycle is completed when the particles return to the upper reconnection region. Being in the lower reconnection region, the particles see the upper reconnection region approaching with the same relative speed $2V_{R}$, so the second crossing provides the same average gain. As a result, the full cycle provides the energy increase $\langle \delta E/E \rangle_{cycle}=8/3(V_{R}/c)$ and \begin{equation} \beta=E/E_0=1+8/3(V_{R}/c) \label{beta} \end{equation} Consider the case of $V_{diff}\ll V_R$. The total number of particles crossing the boundaries of the upper and lower fluxes per unit area and unit time is $2\times 1/4 (n c)$, where $n$ is the number density of particles. With our assumption that the particles are advected out of the reconnection region with the magnetized plasma outflow, the loss of the energetic particles is $2\times V_{R}n$.
Therefore the fraction of energetic particles lost in a cycle is $V_{R} n/[1/4(nc)]=4V_{R}/c$ and \begin{equation} P=1-4V_{R}/c. \label{P} \end{equation} Combining Eqs.~(\ref{NE}), (\ref{beta}) and (\ref{P}) one gets \begin{equation} N(E)dE=const_1 E^{-5/2}dE, \label{-5/2} \end{equation} which is the spectrum of accelerated energetic particles for the case when the back-reaction is negligible (see also GL05)\footnote{The obtained spectral index is similar to the one of Galactic cosmic rays.}. The first order acceleration of particles entrained on the contracting magnetic loop can be understood from the Liouville theorem. As the magnetic tubes contract, a regular increase of the particle energies is expected. The requirement for the process to proceed efficiently is to keep the accelerated particles within the contracting magnetic loop. This introduces limitations on the particle diffusivities perpendicular to the magnetic field direction. The subtlety of this point is related to the fact that while in the first order Fermi acceleration in shocks magnetic compression is important, the acceleration via the LV99 reconnection process is applicable to incompressible fluids. Thus, unlike in shocks, it is not the entire volume that shrinks for the acceleration, but only the volume of the magnetic flux tube. Thus high perpendicular diffusion of particles may decouple them from the magnetic field. Indeed, it is easy to see that while the particles within a magnetic flux rope depicted in Figure~\ref{fig_recon} bounce back and forth between the converging mirrors and get accelerated, if these particles leave the flux rope fast, they may start bouncing between the magnetic fields of different flux ropes, which may sometimes decrease their energy. Thus it is important that the particle diffusivities parallel and perpendicular to the magnetic field stay different.
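As a cross-check of the derivation above, the short script below (an illustration we add here, not part of the original analysis) numerically averages the per-crossing gain $\delta E/E = 2(V_R/c)\cos\theta$ over the entry-angle distribution $p(\theta)d\theta = 2\sin\theta\cos\theta\, d\theta$, and evaluates the differential spectral index $-1+\ln P/\ln\beta$ for small $V_R/c$.

```python
import math

def mean_gain_per_crossing(eps, n=20000):
    """Average of dE/E = 2*eps*cos(theta) over p(theta) = 2 sin cos,
    done with the midpoint rule on [0, pi/2]; eps = V_R / c."""
    d = (math.pi / 2) / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * d
        total += 2.0 * eps * math.cos(t) * 2.0 * math.sin(t) * math.cos(t) * d
    return total  # -> (4/3) * eps

def differential_index(eps):
    """Slope of N(E): -1 + ln P / ln beta, with beta = 1 + 8/3 eps
    (gain per cycle) and P = 1 - 4 eps (fraction retained per cycle)."""
    beta = 1.0 + 8.0 * eps / 3.0
    P = 1.0 - 4.0 * eps
    return -1.0 + math.log(P) / math.log(beta)

print(mean_gain_per_crossing(0.01))  # ~ 0.01333
print(differential_index(1e-4))      # ~ -2.50
```

For $V_R \ll c$, $\ln P/\ln\beta \to -3/2$, reproducing the $E^{-5/2}$ spectrum of Eq.~(\ref{-5/2}).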
Particle anisotropy, which arises from particles preferentially gaining momentum parallel to the magnetic field during the acceleration, may also be important. \section{Simulations of the Acceleration of Cosmic Rays by Reconnection} In the numerical studies of the cosmic ray acceleration we use data cubes obtained from the models of the weakly stochastic magnetic reconnection described in \S 2. For a given snapshot we obtain a full configuration of the plasma flow variables (density and velocity) and magnetic field. We inject test particles in such an environment and integrate their trajectories, solving the equation of motion for relativistic charged particles \begin{equation} \frac{d}{d t} \left( \gamma m \vec{u} \right) = q \left( \vec{E} + \vec{u} \times \vec{B} \right) , \end{equation} where $\vec{u}$ is the particle velocity, $\gamma \equiv \left( 1 - u^2 / c^2 \right)^{-1/2}$ is the Lorentz factor, $m$ and $q$ are the particle mass and electric charge, respectively, and $c$ is the speed of light. The study of the magnetic reconnection is done using the magnetohydrodynamic fluid approximation, thus we do not specify the electric field $\vec{E}$ explicitly. Nevertheless, the electric field is generated by either the flow of magnetized plasma or the resistivity effect and can be obtained from Ohm's law \begin{equation} \vec{E} = - \vec{v} \times \vec{B} + \eta \vec{j} , \end{equation} where $\vec{v}$ is the plasma velocity and $\vec{j} \equiv \nabla \times \vec{B}$ is the current density. In our studies we are not interested in the acceleration by the electric field resulting from the resistivity effects, thus we neglect the last term. After incorporating Ohm's law, the equation of motion can be rewritten as \begin{equation} \frac{d}{d t} \left( \gamma m \vec{u} \right) = q \left[ \left( \vec{u} - \vec{v} \right) \times \vec{B} \right] .
\label{eq:trajectory} \end{equation} In our simulations we do not include particle energy losses, so a particle can gain or lose energy only through the interaction with the moving magnetized plasma. For the sake of simplicity, we assume the speed of light to be 20 times larger than the Alfv\'en speed $V_A$, which places the plasma in the nonrelativistic regime, and the mean density is assumed to be 1 atomic mass unit per cubic centimeter, which is motivated by the interstellar medium density. We integrate equation~(\ref{eq:trajectory}) for 10,000 particles with randomly chosen initial positions in the domain and directions of motion. In Figure~\ref{fig:energy} we show the particle energy evolution averaged over all integrated particles for two cases. In the upper plot we use the plasma field topology obtained from the weakly stochastic magnetic reconnection models; in the lower plot we use the field topology taken from the turbulence studies. The gray area shows the particle energy dispersion over the group of particles. \begin{figure}[ht] \center \includegraphics[width=0.8\columnwidth]{fig7a.eps} \includegraphics[width=0.8\columnwidth]{fig7b.eps} \caption{Particle energy evolution averaged over 10,000 particles with the initial energy $E_0=10^5$ MeV and random initial positions and directions. In the upper plot we show results for the weakly stochastic turbulence environment and in the lower plot for the turbulent environment without magnetic reconnection. In both cases we assume $c = 20 V_A$ and $\langle \rho \rangle = 1$ u/cm$^{3}$. \label{fig:energy}} \end{figure} In the case of the reconnection model, the expected exponential acceleration is observed until a time of about 100 hours. Later on, the physical limitations of the computational domain result in a different growth rate corresponding to $E\sim t^{1.49}$. In the case of turbulence without large scale magnetic reconnection the growth of energy is slower, $E\sim t^{0.73}$.
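Equation~(\ref{eq:trajectory}) can be integrated with any standard ODE scheme. The sketch below is our own minimal illustration (in dimensionless units with $q/m=1$; the function names are ours, and this is not the production integrator used for the runs above): a fourth-order Runge-Kutta step advances the normalized momentum $\vec{p}=\gamma\vec{u}$. For a static plasma ($\vec{v}=0$) in a uniform field the particle simply gyrates and its speed is conserved, which is a useful sanity check on the integrator.

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def push(u0, v, B, dt, steps, c=1.0):
    """Integrate d(gamma u)/dt = (u - v) x B (q/m = 1) with RK4,
    working with the normalized momentum p = gamma * u."""
    def u_of(p):
        g = math.sqrt(1.0 + sum(x * x for x in p) / c**2)
        return tuple(x / g for x in p)
    def rhs(p):
        u = u_of(p)
        return cross(tuple(u[i] - v[i] for i in range(3)), B)
    g0 = 1.0 / math.sqrt(1.0 - sum(x * x for x in u0) / c**2)
    p = tuple(g0 * x for x in u0)
    for _ in range(steps):
        k1 = rhs(p)
        k2 = rhs(tuple(p[i] + 0.5 * dt * k1[i] for i in range(3)))
        k3 = rhs(tuple(p[i] + 0.5 * dt * k2[i] for i in range(3)))
        k4 = rhs(tuple(p[i] + dt * k3[i] for i in range(3)))
        p = tuple(p[i] + dt * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) / 6.0
                  for i in range(3))
    return u_of(p)

# Sanity check: static plasma (v = 0), uniform B -> pure gyration,
# so |u| must stay constant.
u = push((0.1, 0.0, 0.0), (0.0, 0.0, 0.0), (0.0, 0.0, 1.0),
         dt=0.01, steps=1000)
print(math.sqrt(sum(x * x for x in u)))  # ~ 0.1
```

With a nonzero plasma velocity $\vec{v}$, the effective motional electric field $-\vec{v}\times\vec{B}$ enters through the $(\vec{u}-\vec{v})$ term, and the particle can gain or lose energy, as in the simulations described above.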
The slower growth in the pure-turbulence case testifies that the presence of reconnection makes the acceleration more efficient. The numerical confirmation of the first order acceleration in the reconnection regions is presented in our forthcoming paper. \begin{figure}[ht] \center \includegraphics[width=0.8\columnwidth]{fig8a.eps} \includegraphics[width=0.8\columnwidth]{fig8b.eps} \caption{Particle spectrum evolution for 10,000 particles with the uniform initial energy distribution $E_0=10^5-10^6$ MeV and random initial positions and directions. In the upper plot we show results for the weakly stochastic turbulence environment and in the lower plot for the turbulent environment without magnetic reconnection. In both cases we assume $c = 20 V_A$ and $\langle \rho \rangle = 1$ u/cm$^{3}$. \label{fig:spectrum}} \end{figure} In Figure~\ref{fig:spectrum} we show the evolution of the particle energy distribution. The initially uniform distribution of particle energies, ranging from $10^5$ to $10^6$~MeV, evolves faster to higher energies if reconnection is present. In this case, the final distribution is log-normal, becoming more peaked over time with a decreasing dispersion of energies in logarithmic scale. On the contrary, in the case of pure turbulence, the energy distribution, after evolving to the log-normal shape, preserves its dispersion over time. As magnetic reconnection is a ubiquitous process, particle acceleration within reconnection should be widespread. Therefore, while accepting the preliminary character of the results above, we are engaged in more extensive studies of acceleration via reconnection. \section{Explanation of the Anomalous Cosmic Ray Origin} The energetic particle acceleration in turbulent reconnection can preaccelerate particles to intermediate energies, helping to solve the problem of particle injection into shocks. It can also act as the principal process of acceleration.
Below we present the case where we believe that the latter takes place. Since the crossing of the termination shock (TS) by Voyager 1 (V1) in late 2004 and by Voyager 2 (V2) in mid 2007 it became clear that several paradigms needed to be revised. Among them was the acceleration of particles. Prior to the encounter of the termination shock by V1, the prevailing view was that anomalous cosmic rays (ACRs) were accelerated at the TS by diffusive shock acceleration (DSA) to energies of 1-300 MeV/nuc (e.g., Jokipii \& Giacalone, 1998; Cummings \& Stone, 1998). However, with the crossing of the TS by V1, the energy spectrum of ACRs did not unroll to the expected source shape: a power-law at lower energies with a roll off at higher energies. After 2004, both the V1 spectrum in the heliosheath and the V2 spectrum upstream of the TS continued to evolve toward the expected source shape. To explain this paradox several models were proposed. Among them, McComas \& Schwadron (2006) suggested that at a blunt shock the acceleration site for higher energy ACRs would be at the flanks of the TS, where the injection efficiency would be higher for DSA and connection times of the magnetic field lines to the shock would be longer, allowing acceleration to higher energies. Fisk et al. (2006) on the other hand suggested that stochastic acceleration in the turbulent heliosheath would continue to accelerate ACRs and that the high-energy source region would thus be beyond the TS. Other works, such as Jokipii (2006) and Florinski \& Zank (2006), try to explain the deficit of ACRs based on a dynamic termination shock. Jokipii (2006) pointed out that a shock in motion on time scales of the acceleration time of the ACRs, days to months, would cause the spectrum to differ from the expected DSA shape. Florinski \& Zank (2006) calculated the effect of the interaction of Magnetic Interacting Regions (MIRs) with the Termination Shock on the ACR spectral shape.
They show that there is a prolonged period of depressed intensity in mid-energies from a single MIR. Other recent works have included stochastic acceleration, as well as other effects (Moraal et al., 2006, 2007; Zhang, 2006; Langner \& Potgieter, 2006; Ferreira et al., 2007). It became clear after the crossing of the TS by V2 that these models would require adjustments. The observations by V2 indicate, for example, that a transient did not cause the modulation shape of the V2 spectrum at the time of its TS crossing. When both spacecraft were in the heliosheath in late 2007, the radial gradient in the 13-19 MeV/nuc ions did not appear to be caused by a transient. The 60-74 MeV/nuc ions have no gradient, so no north-south or longitudinal asymmetry is observed in the ACR intensities at the higher energies. In Lazarian \& Opher (2009, LO09) we propose an alternative model, which places the source of ACRs in the heliosheath, and we appeal to magnetic reconnection as a process that can accelerate particles. LO09 explained the origin of the magnetic field reversals that induce magnetic reconnection in the heliosheath and at the heliopause. Indeed, it is well known that the magnetic field in the heliosphere changes polarity and induces reconnection. For instance, as the Sun rotates, the magnetic field twists into a Parker spiral (Parker 1958) with magnetic fields separated by a current sheet (see Schatten 1971). Changes of the magnetic field are also expected due to the Solar cycle activity. The question now is in what part of the heliosheath we expect to see reversals. The structure of the magnetic field in the solar wind is complex. The solar magnetic field lines near the termination shock are azimuthal and form a spiral (see Figure \ref{fig_anomal}). We expect the reconnection and the corresponding energetic particle acceleration to happen in the part of the heliosheath closer to the heliopause.
This explains why the Voyagers do not see the signatures of anomalous cosmic ray acceleration as they pass the termination shock. Appealing to their model of collisionless reconnection, Drake et al. (2010) provided a similar explanation of the origin of the anomalous cosmic rays. \begin{figure}[ht] \center \includegraphics[width=0.7\columnwidth]{fig9a.eps} \includegraphics[width=0.7\columnwidth]{fig9b.eps} \caption{{\it Upper plot}. Global view of the interaction of the solar wind with the interstellar wind. The spiral solar magnetic field (shown in dark dashed lines) is shown being deflected at the heliopause. The heliopause itself is being deflected by the interstellar magnetic field (figure adapted from S. Suess 2006). {\it Lower plot}. A meridional view of the boundary sectors of the heliospheric current sheet and how the opposite sectors get tighter closer to the heliopause. The thickness of the outflow regions in the reconnection region depends on the level of turbulence. From LO09. \label{fig_anomal}} \end{figure} \section{Convergence with other Models of Reconnection and Acceleration} Since the introduction of the LV99 model, more traditional approaches to reconnection have changed considerably. At the time of its introduction, the models competing with LV99 were modifications of single X-point collisionless reconnection. Those models had a point-wise localized reconnection region and inevitably prescribed an opening of the reconnection region up to scales comparable to $L$ (see Figure~1). Such reconnection is difficult to realize in astrophysical conditions in the presence of random forcing, which would with high probability collapse the opening of the reconnection layer. Single X-point reconnection was also rejected by observations of solar flares (Ciaravella \& Raymond 2008). Modern models of collisionless reconnection resemble the original LV99 model in a number of respects.
For instance, they discuss, similarly to LV99, volume-filled reconnection, although one may still wonder how this volume filling is achieved in the presence of a single reconnection layer (see Drake et al. 2006). While the authors still talk about islands produced in the reconnection, in three dimensions these islands are expected to evolve into contracting 3D loops or ropes (Daughton et al. 2008), which is similar to what is depicted in Figure~\ref{fig_recon}. Thus we do not expect to see a cardinal difference between the first order Fermi processes of acceleration described in GL05 and later in Drake et al. (2006). This suggests that the backreaction of the particles calculated in Drake et al. (2006) considering the firehose instability may be employed as a part of the acceleration process described in GL05. The departure from the idea of regular reconnection and the introduction of magnetic stochasticity is also obvious in a number of recent papers appealing to the tearing mode instability\footnote{The idea of appealing to the tearing mode as a means of enhancing the reconnection speed can be traced back to Strauss (1988), Waelbroeck (1989) and Shibata \& Tanuma (2001). LV99 showed that the linear growth of tearing modes is insufficient to obtain fast reconnection. The new attack on the problem assumes that the non-linear growth of the islands due to merging provides growth rates at the large scales that are larger than the direct growth of the tearing modes at those scales. This situation, in which the non-linear growth is faster than the linear one, is rather unusual and requires further investigation.} as the process enhancing reconnection (Loureiro et al. 2009, Bhattacharjee et al. 2009). The 3D loops that should arise as a result of this process should be able to accelerate energetic particles via the process described in GL05.
As tearing modes can happen in a collisional fluid, this may potentially open another channel of reconnection in such fluids. The limitation of this process is that the tearing mode reconnection should not be too fast, as this would present problems with explaining the accumulation of the flux prior to the flare. At the same time, the idea of tearing reconnection does not have a natural valve for regulating the reconnection speed, contrary to the LV99 model, where the degree of reconnection is determined by the level of turbulence. Thus the periods of slow reconnection in the LV99 model are ensured by the low level of turbulence prior to the flare. We believe that tearing reconnection can act to destabilize the Sweet-Parker reconnection layer, inducing turbulence. We note, however, that in most astrophysical situations one has to deal with the {\it pre-existing} turbulence, which is the consequence of the high Reynolds number of the fluid. Such turbulence may modify or suppress instabilities, including the tearing mode instability. We claim that it, by itself, induces fast reconnection. We may also argue that even if the astrophysical fluid is initially laminar, the thick outflow from the reconnection region caused by tearing is expected to become turbulent. We suspect that this may be the cause of the reconnection explosions reported recently (Lapenta 2008, Bettarini \& Lapenta 2009). \section{Summary} The successful testing of the LV99 model of fast reconnection opens avenues for the search of implications of that scheme. One of the implications of the model is the first order Fermi acceleration of energetic particles in the reconnection layer. As reconnection processes are expected to be ubiquitous in astrophysics, we expect the acceleration in reconnection layers to be also ubiquitous. Our simulations of the energetic particle acceleration in the reconnection layer provide results consistent with the first order Fermi acceleration.
The origin of the anomalous cosmic rays may be related to the mechanism of particle acceleration via reconnection. {\bf Acknowledgments}. A.L. acknowledges NSF grants AST 0808118 and ATM 0648699, as well as the support of the NSF Center for Magnetic Self-Organization. \bibliographystyle{elsarticle-harv}
\section{Introduction} \noindent The mass of any physical object is thought to be an entity which quantifies its amount of matter and energy. However, conceptually, it is defined as inertial and gravitational (passive and active) mass. The inertial mass is a measure of an object's resistance to a change of its state of motion due to an applied force. On the other hand, the passive gravitational mass measures the strength of an object's interaction with the gravitational field, while the active one is a measure of the strength of the gravitational field due to a particular object. However, Einstein's principle of equivalence asserts the equality of the gravitational $(m_g)$ and inertial mass $(m_i)$ of a body. In fact, by now all experiments have failed to detect any difference between them \cite{Weinberg, Hartlee}. More generally, such equivalence between the inertial and gravitational mass is included, for instance, in the weak equivalence principle (WEP), which confirms the universality of free fall such that all bodies in a given gravitational field and at the same space-time point would undergo the same acceleration. Further, the strong equivalence principle (SEP) is a generalization in the sense that it governs all the effects of the gravitational interaction on all physical systems and holds for all the \emph{laws of nature}. In fact, the WEP replaces the \emph{laws of nature} (which might be the case for SEP) by the \emph{laws of motion of freely falling bodies} \cite{Weinberg}. Independently of the equivalence of masses, however, the origin of mass itself is another conceptual problem of physics to solve, and the mechanism through which particles acquire mass is indeed an important subject from the point of view of the basic constituents of nature and the interactions among them.
In this context, the mass generation is identified with the symmetry breakdown of the Lagrangian corresponding to a particular theory, where the mass comes as a consequence of the symmetry loss and features of the self-interactions in the standard model (SM), which is perhaps the most celebrated theory of modern elementary particle physics. The SM is a fully relativistic quantum field theory, and it has been incredibly successful in describing the electromagnetic, weak and strong interactions between the basic constituents (quarks and leptons) with the symmetry group $ G_{SM} \equiv SU(3)_{C} \otimes SU(2)_{W} \otimes U(1)_{Y}$ down to distances as small as $10^{-16}\,\mathrm{cm}$. In the SM, the interaction of the constituents of matter with the Higgs field allows all the particles to have different masses \cite{Kan93}. The mass of the particles obtained via the so-called Higgs Mechanism is then proportional to the vacuum expectation value (VEV) of the Higgs field, so that mass can be given in terms of the parameters of the Higgs potential. However, in GR it is still not possible to explain the origin of mass from the curved space-time, and the mass is used as a parameter with the equivalence $m_g \equiv m_i \equiv M$. The SM, therefore, provides a unique way to explain the acquisition of mass by basic constituents of matter via the Higgs Mechanism \cite{Peskin}. Moreover, the cosmological consequences of mass production still demand further explanation, and a unified theory in this context is still lacking. However, some physical approaches to completely solving the issue within gauge-gravitation theories, Supersymmetry (SuSy), Supergravity (SuGra) and Loop Quantum Gravity (LQG) are really promising. It is noteworthy that the appearance of mass in the SM by virtue of the Higgs Mechanism comes in a natural way and co-exists peacefully with various known processes of the physical world as predicted by the SM itself.
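The statement that the mass follows from the parameters of the Higgs potential can be illustrated with the textbook potential $V(\phi)=-\frac{1}{2}\mu^2\phi^2+\frac{1}{4}\lambda\phi^4$, whose broken vacuum sits at $v=\sqrt{\mu^2/\lambda}$ with scalar mass squared $V''(v)=2\mu^2$. The sketch below (our own illustration with arbitrary parameter values, for demonstration only) verifies this numerically.

```python
import math

def V(phi, mu2=1.0, lam=0.5):
    """Illustrative Higgs-type potential V = -mu2/2 phi^2 + lam/4 phi^4
    (parameter values are arbitrary, chosen only for demonstration)."""
    return -0.5 * mu2 * phi**2 + 0.25 * lam * phi**4

def minimum(mu2=1.0, lam=0.5):
    """VEV of the broken vacuum: dV/dphi = 0 -> phi = sqrt(mu2/lam)."""
    return math.sqrt(mu2 / lam)

def mass2(mu2=1.0, lam=0.5, h=1e-5):
    """Scalar mass^2 = V''(v), via a central finite difference;
    analytically this equals 2 * mu2."""
    v = minimum(mu2, lam)
    return (V(v + h, mu2, lam) - 2 * V(v, mu2, lam) + V(v - h, mu2, lam)) / h**2

print(minimum())  # ~ 1.41421
print(mass2())    # ~ 2.0
```

Both the VEV and the scalar mass are thus fixed entirely by the two potential parameters $\mu^2$ and $\lambda$, which is the sense in which the Higgs Mechanism expresses particle masses in terms of the Higgs potential.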
Unfortunately, some of the basic aspects of the Higgs Mechanism are still unknown, as the Higgs particle is still not an experimental reality. However, the developments in SuSy (which emphasizes a symmetry between fermions and bosons) lead to the possibility of cancelling unphysical quadratic divergences in the theory and provide an answer to the hierarchy problem between the electro-weak (EW) ($\sim 10^2$ GeV) and Planck ($\sim 10^{19}$ GeV) scales \cite{Kan93}. Therefore, the supersymmetric version of the SM may play an important role in stabilizing the hierarchy against quantum corrections, and in the minimal supersymmetric SM (MSSM) with radiative EW symmetry breaking, the stability of the Higgs field leads to mass generation around the EW scale. Remarkably enough, the problem of mass generation and its explanation is still a very important subject which needs to be explored in view of the various developments in modern physics, and it is certainly not a closed chapter for further discussions. In fact, the Higgs Mechanism, along with the search for the Higgs particle at higher and higher energies, has narrowed down the scope for other theories in this regard, and it has become a natural tendency to find an appropriate answer to understand the mass generation \cite{Kan93}.\\ In the present article, the problems associated with the mass generation are revisited from the perspectives of the different well known mass-containing Lagrangians. The Higgs Mechanism in view of the SM is summarized with the current experimental status. The various phenomenological aspects related to the Higgs Mechanism (in view of the different unification schemes of the fundamental interactions) are reviewed. The gravitational-like interactions and the possibility of a scenario without interacting Higgs particles, which put some constraints on the Higgs Mechanism, are also discussed by virtue of the Higgs field gravity.
The impact of the Higgs scenario on the physical world is concluded along with its possible future prospects. \section{The mass generation and different symmetry breaking modes} \noindent In order to discuss the mass generation mechanism within the notions of analytical mechanics, let us first write Hamilton's principle of stationary (or least) action in the following form, \begin{equation} \delta \int \, {\cal L} \, \sqrt{-g} \, d^4x \, \equiv \, 0. \end{equation} In fact, there are two possible ways to introduce the mass term in a particular theory: \\ \noindent (i) With an additional term $({\cal L}_m)$ containing the mass in the general Lagrangian ${\cal L}$.\\ \noindent(ii) With spontaneous symmetry breaking (SSB) via an extra term ${\cal L}_H \equiv {\cal L}_5$ having a $5^{th}$ symmetry-breaking force. \\ \noindent It is possible to obtain the well-known equations (viz the Schr\"odinger, Klein-Gordon and Dirac equations) in non-relativistic as well as in relativistic quantum mechanics (RQM) with the aforesaid first choice. The Lagrangians from which these equations can be derived actually contain the mass terms and have the following form in the natural system of units, \begin{equation} {\cal L}_{SE}= -\frac{1}{2M}\, \psi^*_{,k} \psi_{,k} +\frac{i}{2}(\psi^*\, \psi_{,t}- \psi \, \psi^*_{,t})- V(\psi \psi^*) ,\label{Sch} \end{equation} \begin{equation} {\cal L}_{KG}= \frac{1}{2}\, \left(\psi^*_{,\mu} \psi^{,\mu}- M^2 \psi^* \psi \right),\label{KG} \end{equation} \begin{equation} {\cal L}_{D}= \frac{i}{2} \,( \bar{\psi} \gamma^\mu\, \psi_{,\mu}- \, \bar{\psi}_{,\mu} \gamma^\mu\, \psi )\, - M \, \bar{\psi}\psi, \label{Dir} \end{equation} where $M$ denotes the mass and ${,\mu}\equiv \partial _\mu $, while $\gamma^\mu$ represent the well-known Dirac matrices. The problem with this choice, however, lies in the well tested fact of parity violation (viz the parity violation observed in $\beta$-decay in Wu-like experiments in the electro-weak interactions).
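As a quick consistency check (standard textbook material, added here for completeness), varying the Lagrangian (\ref{KG}) with respect to $\psi^*$ through the Euler-Lagrange equation shows how the mass enters the field equation as an explicit parameter:

```latex
\begin{equation*}
\partial_\mu \frac{\partial {\cal L}_{KG}}{\partial \psi^{*}_{,\mu}}
 - \frac{\partial {\cal L}_{KG}}{\partial \psi^{*}}
 = \partial_\mu\!\left(\tfrac{1}{2}\,\psi^{,\mu}\right)
   + \tfrac{1}{2}\,M^{2}\psi
 = 0
 \quad\Longrightarrow\quad
 \left(\Box + M^{2}\right)\psi = 0 ,
 \qquad \Box \equiv \partial_\mu \partial^\mu .
\end{equation*}
```

The same variation applied to (\ref{Dir}) gives the Dirac equation $(i\gamma^\mu \partial_\mu - M)\psi = 0$; in both cases $M$ is inserted by hand rather than generated dynamically, which is precisely option (i) above.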
Such parity violation cannot be accommodated by adding mass by hand because in equations (\ref{Sch}-\ref{Dir}) the left- and right-handed particles all couple in the same way to vector-bosons in order to preserve the gauge invariance \cite{Kan93}. Moreover, a massive propagator (which gives the probability amplitude for a particle to travel from one place to another in a given time, or to travel with a certain energy and momentum, in this case for massive virtual particles) does not lose its longitudinal term, nor does it transform into a transversal, massless one in the limit $M \rightarrow 0$. As a consequence of this, most closed Feynman graphs diverge, which makes the theory non-renormalizable. It is therefore necessary to have a theory satisfying the requirement of renormalizability, which can be achieved by spontaneous symmetry breakdown, for which the existence of an extra scalar field (the Higgs field) is needed to make the theory (e.g. the SM) mathematically consistent \cite{Kan93,Vel77}. The characteristics of different symmetry-breaking modes may therefore be defined from the point of view of parity violation and renormalization. For parity violation and well behaved propagators, the best choice is the symmetry breaking where the mass is produced as a consequence of the \emph{loss} of symmetry and self-interactions. The requirement for such a breakdown of symmetry also demands another gauge invariant (i.e. current-conserving) term in the Lagrangian, which is identified with the interaction of the spontaneous production of mass. The symmetry breakdown can proceed through different modes, depending on the properties of the ground state. In different quantum field theories, the ground state is the vacuum state, and it is therefore important to check the response of the vacuum state to the symmetry breaking.
As such, there are three main modes \cite{Gui91}, as given below: \\ \,\, \, (i) The \emph{Wigner}-Weyl mode, \\ \vspace{0.08cm} (ii) The Nambu-\emph{Goldstone} mode,\\ \vspace{0.08cm} (iii) The Higgs mode.\vspace{0.2cm} \\In these processes, the symmetry group $G$ breaks down to a rest-symmetry group $\tilde{G}$ (i.e. $G\rightarrow\tilde{G})$ with $\tilde{G}=\bigcap_{r=1}^n \tilde{G}_r$, where $n>1$ holds for more than one breaking process. For instance, in the SM the breaking $SU(3)_C \otimes SU(2)_W\otimes U(1)_Y \rightarrow SU(3)_C \otimes U(1)_{em}$ takes place, while for the grand unified theory (GUT) under $SU(5)$, the breaking $SU(5)\rightarrow SU(3)_C\otimes SU(2)_W\otimes U(1)_Y$ also takes place at about $10^{15}$ GeV. However, another interesting example of symmetry-breaking comes from the fundamental asymmetry between space and time which is found in the signature of the relativistic metric \cite{Wet05} and is mostly given in an \emph{ad hoc} manner. Such an asymmetry may be generated as a property of the ground state following a symmetry breakdown in the universe, from which the structure of quantum field theory and the gravitational field equations would be derivable.\\ In particular, the \emph{Wigner}-Weyl mode is the most usual symmetry-breaking mode in quantum mechanics (QM), with a real invariant vacuum which can be identified with the classical one as follows, \begin{equation} U|0> \, = \, |0>. \end{equation} The \emph{Wigner}-Weyl mode is related to the existence of degeneracy among particles in the multiplets, the violation of which enforces an explicit symmetry breakdown in the Hamiltonian $H$. Such a situation appears in the Zeeman effect, where the turning-on of external fields causes the breakdown of the rotational symmetry. One more example of the \emph{Wigner}-Weyl mechanism may be seen in the breaking of $SU(3)_C$ to $SU(2)_{W}$ due to the effect of hypercharges, which further breaks to $U(1)$ because of the Coulomb interaction.
However, the $U(1)$-symmetry remains unbroken because of the current-conservation law \cite{Gui91}. Further, in both the Nambu-\emph{Goldstone} and Higgs modes, the symmetry is actually not lost but camouflaged and hidden behind the mass generation. The two modes differ from each other in their gauge symmetry, while both of them are characterized by a vacuum defined as follows, \begin{equation} U|0\rangle \, \neq \, |0\rangle. \end{equation} It is worth mentioning that the Nambu-\emph{Goldstone} mode works \emph{globally} while the Higgs mode acts \emph{locally} in view of gauge invariance. The main difference between them is that in the Nambu-\emph{Goldstone} Mechanism both massive (Higgs) and massless (Goldstone) particles (generally bosons) appear, while in the Higgs Mechanism only the massive particles are present, and the mass acquisition of the gauge bosons is at the cost of the Goldstone particles, which are gauged away unitarily. The degrees of freedom of the massless particles, however, do not disappear from the physical spectrum of the theory. In a general sense, the gauge fields absorb the Goldstone bosons and become massive, while the Goldstone bosons themselves become the third state of polarization of the massive vector bosons. This interpretation is analogous to the \emph{Gupta-Bleuler} Mechanism where, in the quantization of a massless field $A^\mu$, the temporal degree of freedom of $A^\mu$ (i.e. $A^0$) cancels against the longitudinal component ${\bf p}\cdot{\bf A}/|{\bf p}|$, so that only the transversal components of ${\bf A}$ (orthogonal to ${\bf p}$) survive \cite{Gui91}. In the Higgs Mechanism, the Goldstone mode cancels the time-like components of the gauge fields in such a way that the three space-like components remain intact and $A_\mu$ behaves like a massive vector boson. 
Mass generation of this sort can best be identified with the Meissner effect in conventional superconductivity, and the SSB may therefore also be applicable in explaining such a mechanism in non-relativistic theories \cite{Gre95}. However, an analogy between the Higgs Mechanism and the Meissner effect may be explained in terms of the Yukawa-Wick interpretation of the Higgs Mechanism, where the Goldstone bosons in the unitary gauge vanish because of the existence of long-ranged forces, while their short-ranged behavior may be transcribed by Yukawa's theory for massive fields. The condensed electron pairs (the Cooper pairs) in the ground state of a superconductor may then be identified with a Higgs field which leads to the magnetic-flux expulsion with a finite range given by the penetration depth, which is basically the reciprocal of the effective mass acquired by the photons. For instance, in a Scalar-Tensor Theory (STT) of gravitation with symmetry breaking, which is derivable as the simplest Higgs-curvature coupled theory \cite{Hil87, Bij95} and is based on the analogous properties of the Higgs field and gravitation \cite{Deh91, Bij94}, the Higgs field of the theory has a finite range which is the inverse of its Higgs field mass \cite{Bez07a}. There, analogies between the Higgs field for the Schwarzschild metric and the London equations for the Meissner effect also exist. In general, the coupling between superconductivity and the Higgs field is of a more profound nature, since it helps in modern contributions to understand the SM, especially in the context of dual QCD, where a Higgs field with magnetic charge leads to the Meissner effect of color-electric flux, which provides a unique way to understand the quark-confinement mechanism in the background of magnetic-charge condensation \cite{Mand, Hn1}. 
\section{The Higgs mechanism and unitary gauge} \noindent The mass generation through an interaction with a non-empty vacuum can be traced back to the $\sigma$-model proposed by Schwinger, where the $\sigma$ and $\varphi_i$ ($i=1, \ldots, 3$) lead to the appearance of three massive and one massless vector bosons. In view of physical economy, however, the $\sigma$-model compares unfavorably to the Higgs Mechanism, which demands the appearance of only one scalar field $\phi$ \cite{Hig64}. In fact, the scalar multiplet in the SM belongs to a doublet representation of the gauge group in the following form, \begin{equation} \phi={{\phi ^{+}} \choose {\phi ^{0}}}, \end{equation} which is defined with a non-trivial vacuum state having the characteristics of the symmetry breaking of the gauge group $G$ to the rest-symmetry of the isotropy group $\tilde{G}$. The complex field $\phi^0$ can further be re-written in terms of real fields (i.e. $\phi^0=(\tilde{\sigma}+ i\chi)/\sqrt{2}$). With the spontaneous breakdown of the gauge symmetry, the minimal value of the energy density $u$ is taken at the ground-state value $\phi_0=v$ with $\langle\tilde{\sigma}\rangle=v$. The $\tilde{\sigma}$- and $\chi$-fields may then be identified with the Higgs particles and the Goldstone bosons, respectively. The symmetry of the Lagrangian is broken when the particles fall from their false vacuum (with $\phi=0$) into the real one ($\phi=v$). For such an SSB, less energy is required to generate a new particle (i.e. the Higgs particle) with the associated features of self-interaction. With the Higgs bosons being neutral particles, the photons are not able to \emph{see} them and remain massless in electrodynamics, while the neutral $Z$-bosons couple to the Higgs bosons via a Weinberg mixture with the charged $W$-boson fields.\\ The Higgs mode, in fact, does not need to violate parity, while this indeed occurs in the $\sigma$-model \cite{Kan93}. 
Nevertheless, this violation is given in the SM through the isospin scalar field $\phi$ as a doublet in its iso-vectorial form instead of only an iso-scalar with $\phi=vN$, where the unitary vector $N$ in isospin space satisfies $N^\dagger N=1$. On the other hand, the right-handed fermionic multiplets are only iso-scalars, while the left-handed ones are iso-doublets (for up and down states). The multiplets acquire mass through the components of $N$, which in a unification model (viz. $SU(5)$ for instance) is matrix-valued for the first symmetry breaking. The mass of the states is determined by the VEV $v$ and an arbitrary coupling constant $g$. However, the parity violation appears naturally through the gauging of the group with the help of the non-canonical Pauli $\sigma$-operators \cite{Deh95,Gei97}. Moreover, in LQG such parity violation is described by matching the \emph{Immirzi} parameter (which measures the size of the quantum of area in Planck units) with the black-hole entropy \cite{Ran06}.\\ In general, the simplest way to generate the spontaneous breakdown of symmetry is to have a Lagrangian with the Higgs potential $V(\phi)$ and a gauge-transformed field $\phi'_a= U\phi_a$ with $U=e^{i\tilde{\lambda}^a \tau_a}=e^{i\chi_a}$, in the following form, \begin{equation} {\cal L}_H= {\cal L}(\phi)= \frac{1}{2}\, \phi^\dagger_{;\nu}\, \phi^{;\nu} - V(\phi) = {\cal L}(\phi'),\label{L} \end{equation} where $;\mu \equiv D_\mu$ denotes the covariant derivative and the potential $V(\phi)$ has the following form, \begin{align} V(\phi)= \frac{\mu^2}{2} \,\phi^\dagger \phi + \frac{\lambda}{4!} \, (\phi^\dagger \phi)^2 \,\,,\label{Higgspot} \end{align} where $\mu^2<0$ and $\lambda>0$. Such theories are called $\phi^4$-theories. The last term in the potential (\ref{Higgspot}) is not bilinear, and it is crucial for the apparent symmetry breakdown. The Lagrangian given by equation (\ref{L}) is invariant under the spatial inversion (i.e. 
$\phi \rightarrow -\phi$) with the features of tachyonic condensation (i.e. a condensate for an imaginary mass with $\mu^2 <0$). Such conditions are needed to stay within the Higgs mode, which otherwise becomes a Wigner mode with a classical vacuum, whose self-interactions fail to produce the necessary Higgs Mechanism at the relatively low energies of the present-day universe. However, with the possibility of the tachyonic condensation, the ground state $\phi_0$ becomes doubly degenerate and $\phi=0$ is a maximum of the energy density $u$. The minimum energy is then given by the non-vanishing Higgs ground-state value (i.e. $v\neq 0$) in the following form: \vspace{-0.2cm} $$ u_0= u(\phi_0)= -\frac{3}{2} \, \frac{\mu^4}{\lambda} \equiv u_{min}, $$ \vspace{-0.5cm} \begin{equation} \phi_0^{(+)}= \sqrt{-\frac{6\mu^2}{\lambda}} \, e^{i\alpha} \equiv \tilde{v}= v\, e^{i \alpha} \neq 0. \end{equation} For a pure scalar case, $v$ is to be chosen between $\phi_0^{(-)}$ and $\phi_0^{(+)}$. In technical jargon, the circle of localized minima satisfying the minimality condition of $u$ is popularly known as a \emph{Mexican hat}; regions with different $\phi_0$-values are called \emph{topological defects}, while those changing between the values $\phi=v\leftrightarrow -v$ are termed \emph{interface domains}. Further, it is also possible to make the choice $\alpha=0$ without imposing any restriction on the system, since this does not demand any kind of physical change. However, this choice does not allow the mass to go through phase transitions without changing its vacuum value. Therefore, even if the Lagrangian is invariant under phase transitions, it must suffer the loss of invariance explicitly through its ground state, and the particles that fall into this state interact with the Higgs bosons and slow down. In particular, in view of Special Relativity (SR), massless particles travel with the speed of light $c$, while massive ones travel with a speed smaller than $c$. 
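As a consistency check (treating $\phi$ as a single real field for simplicity), the quoted minimum follows directly from the potential (\ref{Higgspot}): the extremal condition $$ \frac{\partial u}{\partial \phi}= \mu^2 \phi + \frac{\lambda}{6}\, \phi^3 =0 $$ yields $v^2=-6\mu^2/\lambda$, and re-inserting this value gives $$ u_{min}= \frac{\mu^2}{2}\, v^2 + \frac{\lambda}{4!}\, v^4 = -\frac{3\mu^4}{\lambda}+ \frac{3\mu^4}{2\lambda} = -\frac{3}{2}\, \frac{\mu^4}{\lambda}, $$ in agreement with the value of $u_0$ quoted above.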
So the mass generation of the particles may be interpreted in relation to their interaction with the Higgs field. In fact, the energy of the system is small, and $\phi$ lies near the minimum of the energy. It is therefore possible to expand the scalar field around its minimal state in terms of its excited value $\hat{\phi}$ in the following form: \begin{equation} \phi=v+\hat{\phi}. \end{equation} The Lagrangian (\ref{L}) may now be given in iso-scalar form (up to an irrelevant constant term) as follows, \begin{equation} {\cal L}(\hat{\phi}) = \frac{1}{2}\,\hat{\phi}^\dagger_{,\nu}\, \hat{\phi}^{,\nu} - \frac{M_{H}^2}{2} \, \hat{\phi}^2 - \frac{\lambda}{3!} \, v \, \hat{\phi}^3 - \frac{\lambda}{4!} \, \hat{\phi}^4 \, \neq \,{\cal L}(-\hat{\phi}).\label{Lexc} \end{equation} The first term in the Lagrangian (\ref{Lexc}) corresponds to the kinetic energy of the Higgs field, while the second one represents the mass term (i.e. $ M_{H}^2 \equiv -2\mu^2$) of the Higgs field. In fact, due to the presence of the cubic term in the excited field (i.e. $\hat{\phi}^3$) in the Lagrangian (\ref{Lexc}), the symmetry is broken, since the Lagrangian (\ref{Lexc}) is no longer invariant under the inversion $\hat{\phi}\rightarrow -\hat{\phi}$. However, the Lagrangian in the iso-vectorial form may be re-written as \begin{widetext} \begin{equation} {\cal L}(\hat{\phi}) = \frac{1}{2} \, \hat{\phi}^{\dagger a}\,_{,\nu} \, \, \hat{\phi}_a\,^{,\nu} + \frac{1}{2} \, g^2 A_{\mu a} \,^b\, \phi_0^{\dagger a}\, A^{\mu}\,_b \,^c \, \phi_{0 c} - \frac{\lambda}{4!}\,(\phi_0 ^{\dagger a} \, \hat{\phi}_a + \hat{\phi}^{\dagger a} \, \phi_{0 a})^2 ,\label{Lexc-vec} \end{equation} \end{widetext} where $\phi_a$ is a scalar quantity in spin space. The Lagrangian in iso-vectorial form is invariant under local transformations, for which one needs to define the covariant derivative (different for left- and right-handed states because of the couplings, which gives the parity violation). 
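The quoted mass $M_H^2 \equiv -2\mu^2$ may be verified explicitly (again for a single real field): the curvature of the potential (\ref{Higgspot}) at the minimum reads $$ \left. \frac{\partial^2 V}{\partial \phi^2}\right|_{\phi=v} = \mu^2+ \frac{\lambda}{2}\, v^2 = \mu^2 - 3\mu^2 = -2\mu^2 >0 \quad (\text{since } \mu^2<0), $$ while the quartic term $\frac{\lambda}{4!}(v+\hat{\phi})^4$ supplies the cubic coefficient $\frac{\lambda}{4!}\cdot 4v= \frac{\lambda}{3!}\, v$ of the $\hat{\phi}^3$-term in (\ref{Lexc}).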
As such, the mass term of the excited Lagrangian with the excited Higgs scalar field leads to the mass term of the gauge bosons as \begin{equation} M_{A} = \frac{1}{2}\, g^2 \, A_{\mu a}\,^b \phi_0^{\dagger a} A^\mu\,_b \,^c \phi_{0c} \sim ({\cal M}^2)^{ij}A_{\mu i}A^{\mu}\,_j, \label{A-M2} \end{equation} where the mass term for the gauge boson $A_\mu$ comes from the covariant derivative, and the mass-square matrix (operator), which is symmetric and real, is given below in the natural system of units: \begin{equation} ({\cal M}^2)^{ij}= 4\pi g^2 \phi_0^\dagger \tau^{(i}\tau^{j)} \phi_0 = 2\pi g^2 v^2 (c^{ij} \underline{1} + N^\dagger d^{ij}\,_k \tau^k N).\label{M2} \end{equation} Such a broken phase of symmetry cannot be reached by perturbative expansion techniques from the normal vacuum. The SSB may therefore be thought of as a phase transition which is manifestly non-perturbative. However, the Higgs field does not give mass to the neutrinos in the usual form of the SM. In equation (\ref{M2}), $\tau^i=\tau^{i\, \dagger}$ are the generators of the gauge group, and they satisfy the following properties for the rest and broken phases of the symmetry, respectively: \begin{equation} \left. \begin{array}{c} \tau^j_a\, ^b N_b=0 \\ \tau^j_a\, ^b N_b \neq 0 \end{array} \right\} \,. \end{equation} The diagonal components of the mass-square matrix are positive definite and correspond to the masses of the gauge bosons coupled to the scalar field $\phi$. The masses corresponding to the Higgs scalar multiplet are then $M_{H}^2 = - 2\mu^2$ and $M_{G} = 0$, where the massless Goldstone boson ($M_G$) belongs to the Nambu-Goldstone mode because of the global symmetry breakdown and carries the quantum number of the broken generator. 
Further, with the conditions of a conserved current corresponding to an exact symmetry of the Lagrangian, the non-invariant vacuum obeys $\chi_a|0\rangle \neq 0$ (where $\chi_a$ is the Goldstone boson field), while Lorentz invariance implies that $\tilde{\lambda}^a\tau_a=\chi_a$ is valid for at least one $a$. Moreover, there must be a state $|m\rangle\in {\cal H}$ with $\langle m|\chi_a|0\rangle \neq 0$ for a massless spin-$0$ particle. Nevertheless, in nature such massless scalar spin-$0$ particles do not seem to exist, although the Goldstone bosons would give rise to long-ranged forces in classical physics and generate new effects in various scattering and decay processes in nature. The possible non-relativistic long-range forces arising from the existence of massless Goldstone particles are spin-dependent, and it is quite difficult to observe them directly. However, in principle, the $\gamma_5$-couplings along with CP-violation would change to scalar interactions, which may then subsequently lead to spin-independent long-range forces \cite{Moh86}. The existence of such Goldstone bosons may also affect astrophysical considerations through some sort of new mechanism for the energy loss in stars. Furthermore, the excited Higgs field differs from the ground state by a local transformation that can be gauged away through an inverse unitary transformation $U^{-1}$. Such unitary transformations contain the Goldstone fields $\tilde{\lambda}$ (as the generators of symmetry) in the following form, \begin{equation} U=e^{i\tilde{\lambda}^a\tau_a}=e^{i\chi_a}, \end{equation} and it is possible to gauge the Goldstone bosons out of the theory by the following unitary gauge transformations, \begin{equation} \phi=\frac{\rho}{v} \, U \phi_0=\rho UN, \end{equation} and consequently we have, \begin{equation} \phi \rightarrow U^{-1}\phi =\rho \, (U^{-1}U) N=\rho\, N;~~~~ \psi \rightarrow U^{-1}\psi, \end{equation} with $\rho^2=\phi^\dagger \phi$ and $\phi=v(1+\varphi)=v \,\zeta$. 
The absence of the Goldstone bosons is then mathematically permitted, which indicates that Goldstone's rule of massless particles in the broken phase of symmetry is only valid for the global gauge, while the unitary gauge considered here is a local one. In the SM, the gauge fixing for the leptonic multiplet is given by $N=(0,1)^T$ with $SU(2)_ W\otimes U(1)_Y$, as follows for the electroweak interactions, \begin{equation} \left. \begin{array}{c} \psi_{fL}= {{\nu_f} \choose {e_f}}_L\\ \psi_{fR}=e_{fR} \end{array} \right\} \,,\label{psi} \end{equation} where the parity is defined by the projection operators $(1\pm \gamma^5)$. The $+$ and $-$ signatures denote the left ($L$)- and right ($R$)-handedness of the particles, respectively. In equation (\ref{psi}), $f$ represents the family of leptons, i.e. $f=$ ($e$, $\mu$, $\tau$). For instance, the masses of the first-generation leptons (i.e. the electron and its corresponding neutrino) are given as follows, \begin{equation} \left. \begin{array}{c} M_e= G \, v\\ M_\nu =0\label{me} \end{array} \right\} \,, \end{equation} while the masses of the gauge bosons are defined as below, \begin{equation} \left. \begin{array}{l} M_W=\sqrt{\pi} g_2 v= \left[\frac{\pi \alpha}{\sqrt 2\,G_F \sin^2 \vartheta_w}\right]^{\frac{1}{2}} = \frac{37.3\,\text{GeV}}{\sin\vartheta_w}\\ \\ \quad M_Z=\frac{M_W}{c_w} > \, M_W \end{array} \right\} \,, \end{equation} where $G_F \simeq 1.166 \times 10^{-5}\,$GeV$^{-2}$ and $\alpha \simeq 1/137$ are the Fermi constant and the Sommerfeld fine-structure constant, respectively. However, the Weinberg term $c_w = \cos \vartheta_w$ (for the mixing of $Z^0$ with the $W^\pm$ and $A$) may be defined in terms of the coupling constant of the hypercharge in the following form, \begin{equation} g_2 \sin \vartheta_w = g_1 \cos \vartheta_w = e. \end{equation} The measurements of the Weinberg mixing angle ($\vartheta_w$) within the SM lead to the following approximate values, \begin{equation} \left. 
\begin{array}{c} \sin^2 \vartheta_w \cong 0.23 \\ \cos^2 \vartheta_w \cong 0.77 \end{array} \right\} \,. \end{equation} The Weinberg mixing angle not only relates the masses of the $W^{\pm}$ and $Z^0$ bosons, but also gives a relationship among the electromagnetic ($e$), charged weak ($g_2$) and neutral ($g_1$) couplings, and ultimately leads to the following approximate mass values for the $W$ and $Z$ bosons, \begin{equation} \left. \begin{array}{c} M_W \cong 78 \, \text{GeV} \\ M_Z \cong 89 \, \text{GeV} \end{array} \right\} \,. \end{equation} However, the baryonic matter field is given in terms of doublets where $f$ denotes the different generations of quarks (while in QCD it counts the flavor, with the color triplet of $SU(3)_C$). This interaction is given by a new enlargement term in the Lagrangian (which is necessary to generate the mass of the fermions via the Higgs Mechanism) as below, $$ {\cal L}(\phi, \psi)= - G_f \,(\bar{\psi}^A \phi^{\dagger a}\hat{x}\psi_{aA}+ \bar{\psi}^{aA} \hat{x}^\dagger \phi_a \psi_A) $$ \begin{equation} \equiv - M_f(\bar{\psi}^AN^{\dagger a} \psi_{aA}+ \bar{\psi}^{aA}N_a \psi_A). \end{equation} The propagator for the exchanged boson (i.e. the Higgs boson) in the Higgs interaction of two fermions turns out, in the lowest order of the amplitude, to be the same as that derived from a Yukawa potential (i.e. a screened Coulomb potential). The propagator or Green function of such a Klein-Gordon equation of a massive particle is itself enough to demonstrate that the Higgs interaction is of Yukawa type. In fact, the scalar field ($\phi_a$) couples to the fermions ($\psi_A$) through the Yukawa matrix $\hat{x}$, and the mass of the fermions may then be given as $M_f=G_f \, v$; it is worth noticing that equation (\ref{me}) is a special case of this. 
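The gauge-boson mass values quoted above can be checked numerically at tree level: with $G_F \simeq 1.166 \times 10^{-5}\,\text{GeV}^{-2}$, $\alpha \simeq 1/137$ and $\sin^2 \vartheta_w \simeq 0.23$, one obtains $$ M_W = \left[\frac{\pi \alpha}{\sqrt{2}\, G_F \sin^2 \vartheta_w}\right]^{\frac{1}{2}} \simeq \frac{37.3}{\sqrt{0.23}}\, \text{GeV} \simeq 78\, \text{GeV}, \qquad M_Z= \frac{M_W}{\cos \vartheta_w} \simeq \frac{78}{\sqrt{0.77}}\, \text{GeV} \simeq 89\, \text{GeV}. $$ (Radiative corrections shift these tree-level estimates towards the measured values $M_W \simeq 80\,\text{GeV}$ and $M_Z \simeq 91\,\text{GeV}$.)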
Such Higgs couplings to the fermions are model-dependent, although their form is often constrained by discrete symmetries imposed in order to avoid tree-level flavor-changing neutral currents mediated by the Higgs exchange. However, to have a more accurate picture, the quantum mechanical radiative corrections are to be added in order to obtain an effective potential $V_{eff}(\phi)$. Since the coupling also depends on the effective mass of the field, the $\lambda \, \mu^2 \phi^2$ and $\lambda^2 \phi^4$ terms from a vacuum-energy contribution are caused by vacuum fluctuations of the $\phi$-field and must be incorporated into the system to obtain a correct physical description. Furthermore, there are additional quantum gravitational contributions and a temperature dependence, so that $V_{eff}(\phi) \rightarrow V_{eff}(\phi, T) \sim V_{eff}(\phi)+ M^2(\phi) \,T^2- T^4$. As a consequence, symmetry must be restored at high energies (or temperatures), especially in the primordial universe \cite{Cer96}, in contrast to the present state of the universe. The symmetry breakdown through the cooling of the universe after the Big Bang, in turn, provokes the appearance of the four well-known elementary interactions. In this context, though, it is an open question whether there is more than one Higgs particle; at least two of them would be necessary in the usual unifying theories for SSB to occur, so that the gauge bosons become massive at different energy scales, as in GUT. Moreover, there is the symmetry breakdown of parity, too (which may be understood in terms of \emph{axions}) \cite{Pec77, Wei78}, the mechanism of which was claimed to have been demonstrated experimentally in 2006 \cite{Zav06}. \section{Some phenomenological aspects} \noindent Though the SM explains, and has foreseen, many aspects of nature proven by well-tested experiments, there is still a problem of special relevance, popularly known as the {\it hierarchy} problem. 
The EW breaking scale related to the Higgs mass comes out too high in the SM, and this seems quite unnatural to many physicists. The problem apparently has no solution within the SM. If it can be solved, it may signify the non-elementarity of the Higgs fields. Indeed, if they are elementary, then there must be a symmetry protecting such fields from large radiative corrections to their masses \cite{Kan97}. In order to do so, the first choice is to take the Higgs field as a composite structure containing only an effective field (in the way one can explain superconductivity as following the Higgs Mechanism), and it then seems indeed possible to construct a renormalizable SM without a fundamental Higgs scalar field \cite{Gie03}. The second choice is to supersymmetrize the SM, which makes it possible to cancel the unphysical quadratic divergences in the theory; in this way, it is possible to provide an answer to the {\it hierarchy} problem between the EW and Planck mass scales. The supersymmetric version of the SM may, therefore, be an important tool to stabilize the {\it hierarchy} against quantum corrections. As such, in the MSSM with radiative EW symmetry breaking, the stability of the Higgs potential leads to the mass generation around the EW scale. In such SuSy versions one uses two doublets for the Higgs field as follows, \begin{equation} \Phi_1={{\phi ^{0*}_1} \choose {\phi ^{-}_1}} ; \,\, \, \Phi_2={{\phi^{+}_2} \choose {\phi ^{0}_2}}\,, \end{equation} to generate the masses of the up- and down-type fermions. 
The most general potential \cite{Kan93} for this purpose is of the following form, \begin{widetext} $$ V ( \Phi_1\, \Phi_2) = M_{11}^2 \Phi^\dagger_1 \Phi_1+ M_{22}^2 \Phi^\dagger_2 \Phi_2- [M_{12}^2 \Phi^\dagger_1 \Phi_2] + \frac{1}{2}\lambda_1(\Phi_1^\dagger \Phi_1)^2 + \frac{1}{2}\lambda_2(\Phi_2^\dagger \Phi_2)^2 $$ \begin{equation} + \lambda_3 (\Phi_1^\dagger \Phi_1)(\Phi_2^\dagger \Phi_2) + \lambda_4 (\Phi_1^\dagger \Phi_2)(\Phi_2^\dagger \Phi_1) + \frac{1}{2}\lambda_5(\Phi_1^\dagger \Phi_2)^2 + [\lambda_6(\Phi_1^\dagger \Phi_1)+ \lambda_7(\Phi_2^\dagger \Phi_2)]\,\Phi_1^\dagger \Phi_2 + h.c.\, , \end{equation} \end{widetext} where the $\lambda_6$- and $\lambda_7$-terms are often dropped, since they (like $M_{12}$) are forbidden by the following discrete symmetry: \begin{equation} \Phi_1 \rightarrow -\Phi_1 \, \Rightarrow \, M_{12}=0. \end{equation} In view of the above-mentioned potential, the scalar fields develop non-degenerate VEVs (with $M_{ij}^2$ having at least one negative eigenvalue). The minimization of the potential leads to \begin{equation} \langle\Phi_1\rangle= {{0} \choose {v_1}}; \,\,\, \langle\Phi_2\rangle= {{0} \choose {v_2}}, \end{equation} which in turn defines, \begin{equation} v^2= v_1^2+ v_2^2=\frac{4M_W^2}{g^2}=(246\,\text{GeV})^2;~~~~ \tan\beta= \frac{v_2}{v_1}\,. \end{equation} In this scenario, there are eight degrees of freedom in total, including three Goldstone bosons ($G^{\pm}, G^0$), which are absorbed by the $W^\pm$ and $Z^0$ bosons. The remaining physical Higgs particles are two CP-even scalars ($h^0$ and $H^0$ with $M_{h^0} <M_{H^0}$), one CP-odd scalar ($A^0$) and a charged Higgs pair ($H^\pm$). In fact, two of the neutral fields come from the real parts $\Re e\,\phi_1^0$ and $\Re e\,\phi_2^0$, and the third actually belongs to the imaginary part of a linear combination of $\Phi_1$ and $\Phi_2$ \cite{Moh86}. However, the mass parameters $M_{11}$ and $M_{22}$ can be eliminated by minimizing the scalar potential. 
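The numerical value $v \simeq 246\,\text{GeV}$ quoted above follows from the Fermi constant alone: combining the tree-level relation $G_F/\sqrt{2}= g^2/(8 M_W^2)$ with $v^2=4M_W^2/g^2$ gives $$ v^2= \frac{4 M_W^2}{g^2}= \frac{1}{\sqrt{2}\, G_F} \simeq \frac{1}{\sqrt{2}\times 1.166\times 10^{-5}\,\text{GeV}^{-2}} \simeq (246\,\text{GeV})^2, $$ independently of the ratio $\tan\beta = v_2/v_1$.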
The resulting squared masses for the CP-odd and charged Higgs states are then given as follows: \begin{alignat}{1} M_{A^0}^2=& \frac{2\, M_{12}^2}{\sin 2 \beta} - \frac{1}{2} v^2(2 \lambda_5+ \lambda_6 \cot\beta+ \lambda_7 \tan\beta),\\ M_{H^\pm}^2=& M_{A^0}^2+ \frac{1}{2}v^2(\lambda_5- \lambda_4). \end{alignat} SuSy, therefore, couples the fermions and bosons in such a way that the scalar masses have two sources for their quadratic divergences, one from a scalar loop, which comes with a positive sign, and another from a fermion loop with a negative sign. The radiative corrections to the scalar masses can be controlled by canceling the contributions of the particles against those of their SuSy partners (i.e. s-particles), which enter the spectrum because of the breakdown of SuSy. Since SuSy is not exact, such a cancelation may not be complete, as the Higgs mass receives a contribution from the correction, which is limited by the extent of the SuSy breaking. However, within the structure of this model, the quantum loop corrections induce the symmetry breaking in a natural way, and they may be helpful in solving some other conceptual problems of the SM \cite{Kan93}. Since the appearance of the top quark at an extremely high energy scale (in comparison to that of all other quark flavors) could not be explained within the well-established notions of the SM, it might be understood as a consequence of an unknown substructure of the theory. Therefore, the SM and the MSSM would be only an effective field theory with another gauge force which is strong at the $SU(2) \otimes U(1)$ breaking scale. On the other hand, in technicolor (TC) theory, the Higgs particles are not believed to be fundamental, and the introduction of technifermions (a type of pre-quarks or preons) represents the quarks as composite particles with a new symmetry which is spontaneously broken when the technifermions develop a dynamical mass (independently of any external or fundamental scalar fields). 
The developments of TC theory, however, have not yet been able to rule such scalar fields out completely, so that an extended version of it (i.e. ETC) would be required \cite{Kan93}. Furthermore, another interesting fact is the heavy mass of the top quark, which leads to some models claiming that such a mass may only be generated dynamically via so-called top condensation \cite{Kan97}, where the Higgs particles might have a phenomenological presence as $t\bar{t}$ bound states of top quarks \cite{Kan93, Bij94}. Further, if one thinks about the existence of Higgs particles in terms of the developments in LQG, their existence can be questioned, because in LQG the particles are derived through a preon-inspired (Helon) model as excitations of the discrete space-time. The Helon model does not offer a \emph{preon} for the Higgs (as a \emph{braid} of space-time), so that it lacks one until the model indeed finds an extension with a Higgs \emph{braid} \cite{Bil06}. On the other hand, the phenomenological nature of the Higgs (which might also explain other problems, such as the impossibility of measuring the gravitational constant $G$ exactly \cite{Fae-Deh05}) might also be a consequence of a more profound coupling, as with gravitation. In view of the fact that the Higgs particles couple gravitationally within the SM \cite{Bij95,Deh91,Deh90}, the consequences are discussed in detail in the next section. \section{The Gravitational-like interactions and Higgs mechanism} \noindent The general relativistic models with a scalar field coupled to the tensor field of GR are conformally equivalent to multi-dimensional models, and, using Jordan's isomorphy theorem, the projective spaces (as in Kaluza-Klein theory) may be reduced to the usual Riemannian $4$-dimensional spaces \cite{Fau01}. 
Such a scalar field in the metric first appeared in Jordan's theory \cite{Jor55} and manifests itself as a \emph{dilaton}, \emph{radion} or \emph{gravi-scalar} in different cases, each of which in fact corresponds to a scalar field added to GR in a particular model \cite{Cot97}. The gravitational constant $G$ is then replaced by the reciprocal of the average value of a scalar field, through which the strength of gravity can be varied (thus breaking the SEP), as was first introduced by Brans and Dicke \cite{Bra61} by coupling a scalar field to the curvature scalar in the Lagrangian $\cal{L}$.\\ However, a more general covariant theory of gravitation can accommodate a massive scalar field in addition to the massless tensor field \cite{OHan72, Ach73}, so that a generalized version of the Jordan-Brans-Dicke (JBD) theory with massive scalar fields can be derived \cite{Fuj74}. It is worth mentioning that Zee was the first to incorporate the concept of SSB in the STT of gravitation \cite{Zee79}. It represents a special case of the so-called \emph{Bergmann-Wagoner} class of STTs \cite{Berg68, Bro01}. The latter is more general than the JBD class alone (where $\omega=\mathrm{const}$ and $\Lambda(\phi)=0$) because of the dependence of the coupling term $\omega$ on the scalar field and because of an appearing cosmological function. In Zee's approach, the function $\Lambda(\phi)$ depends on a symmetry-breaking potential $V(\phi)$, and it is therefore quite reasonable to consider a coupling of the Higgs particles, with those particles which acquire mass through them, by a short-ranged gravitational-like interaction within the SM \cite{Bij95, Deh91,Deh90}. Such a model is compatible with Einstein's ideas on the Mach principle \cite{Ein13}.\\ The simplest Higgs field model beyond the SM consists of a single particle which interacts only with the Higgs sector of the SM. 
With the fundamental gauge-invariant building block $\phi^\dagger \phi$, the simplest coupling of a particle to the Higgs field may be defined as $\breve{\lambda}\, \phi^\dagger \phi\, X$, where $X$ is a scalar field. The Higgs field develops a VEV and, after shifting it, the vertex leads to a mixing between the scalar and the Higgs field, which may give rise to new effects that do not involve the scalar explicitly \cite{Hil87}. The $X$-field may not be considered as fundamental, but an effective description of an underlying dynamical mechanism is possible through its connection to the technicolor theories \cite{Hil87} (i.e. alternatively to a connection between the gravity and Higgs sectors). In fact, both the graviton and the Higgs particles possess some universal characteristics, and such a commonality leads to a relation between the Higgs sector and gravity which is popularly termed Higgs-field gravity \cite{Deh91}. Further, there may be a similarity between $X$ and the hypothetical graviton, since both are singlets under the gauge group \cite{Bij95}. They have no coupling to ordinary matter, and their observation is therefore experimentally constrained. One can even argue for their absence from the theory, because they can have a bare mass term which can be made of the order of the Planck mass, and that makes these fields invisible. However, one can assume that all the masses, including the Planck mass, are given by SSB processes in nature. In this case, there is a {\it hierarchy} of mass scales $M_P\gg v$. With these similarities, $X$ can be considered to be essentially the graviton and may be identified with the curvature scalar ${\cal R}$ \cite{Bij95}. 
Moreover, this possibility may be used to address the naturalness problem, especially since other candidates such as top-quark condensation or technicolor have not functioned well so far, and supersymmetry doubles the spectrum of elementary particles, replacing Bose (Fermi) degrees of freedom with Fermi (Bose) degrees of freedom, with all supersymmetric particles by now beyond experimental reach. However, the cut-off of the theory at which the Higgs mass is expected may not be so large, and may be of the order of the weak scale \cite{Bij94}. The Higgs particles therefore seem to couple naturally to gravitation, and a STT of gravitation with a general form of the Higgs field for symmetry breaking can indeed be derived within the SM \cite{Deh92, Deh93}. Moreover, the Higgs field may be explained as a phenomenological appearance of the polarization of the vacuum, since it leads to a cosmological term which may be identified in terms of the Higgs potential with a functional coupling parameter $G$ \cite{Deh92,Deh93}. In such STTs the scalar field $\phi$ may behave similarly to a cosmon \cite{Bij94}. The Higgs field may, therefore, also contribute on cosmological scales as a part of the Cold Dark Matter (CDM), because of the functional nature of the coupling $G$ and of the self-interacting DM (SIDM) \cite{Ges92}-\cite{Bez07b}. However, the unification of gravitation with the SM and GUT using the Higgs field may also explain inflation, allow for baryogenesis and solve the flatness problem. The scalar fields and the Higgs Mechanism lead to various inflationary models in cosmology, where the cosmological constant produces the inflationary expansion of the universe \cite{Per98}-\cite{Sch79}. Within the original, \emph{old} inflation \cite{Gut81}, the scalar field should tunnel from its false vacuum to the minimal value $v$, while, on the other hand, in \emph{new} inflation it rolls slowly from $\phi \ll v$ to $v$ and then oscillates near it. 
In the case of \emph{chaotic} inflation, on the other hand, the rolling-over proceeds from $\phi \gg v$ to $v$ \cite{Cer96}. The new inflation can lead, in a symmetry-broken STT, to a deflation epoch before the expansion, and then fine-tuning is needed for the universe not to collapse into a singularity again. However, for chaotic inflation, which seems to be the most natural form of inflation not only within theories of induced gravity with Higgs fields but in general models \cite{Lin05} as well, a singularity at the beginning of time is not needed, owing to a breaking of the Hawking-Penrose energy condition \cite{Pen65}-\cite{Haw68}. In fact, this is possible for sufficiently large negative pressures, which can arise as a consequence of Yukawa-type interactions \cite{Lib69} that might play an important role in the early stages of the universe \cite{Deh75}. Therefore, chaotic inflation is in general preluded not by the singularity of a Big Bang but by a so-called Big Bounce, which has its signatures within LQG \cite{Ash06}. In particular, in a theory of induced gravity with Higgs mechanism \cite{Cer95b}, after inflation, the Higgs potential might decay into baryons and leptons, with the oscillations of the Higgs field possibly interpreted in terms of SIDM in the form of Higgs particles. The actual interactions of Higgs particles are only given by their coupling to the particles within SM, and the field equation for the Higgs field \cite{Deh92} can be given as follows, \begin{align} \phi_{;\mu}\,^{;\mu} +\mu^2\phi+ \frac{\lambda}{6}(\phi^\dagger \phi) \, \phi= -2G \, (\,\bar{\psi}_{R}\hat{x} \psi_L) , \end{align} where $\hat{x}$ is the Yukawa coupling operator which represents the coupling of the Higgs field to the fermions, and the subscripts $L$ and $R$ refer to the left- and right-handed fermionic states of $\psi$, respectively.
In SM the sources of the Higgs field are the particles that acquire mass through it, and the Lagrangian for the case of a coupling of the Higgs field to space-time curvature through the Ricci scalar $({\cal R})$ \cite{Deh92} is given in the following form, \begin{alignat}{1} {\cal L}= [\, \frac{\breve{\alpha}}{16\pi}\, \phi^\dagger \phi\, {\cal R} + \frac{1}{2}\, \phi^\dagger_{;\mu} \, \phi^{;\mu}- V(\phi)\, ] + {\cal L}_M \, , \label{LHSTT} \end{alignat} where the dimensionless gravitational coupling parameter $\breve{\alpha}$ can be interpreted as a remnant of a very strong interaction which is given by the ratio of Planck's mass to boson mass as $\breve{\alpha} \simeq (M_P/M_B)^2\gg 1$ \cite{Bij95}. On the other hand, $\breve{\alpha}$ is coupled to the gravitational strength $G$, on which the redefined scalar field mass of the model (i.e. $M_H$) depends. This mass is expected to be around a $10^{-17}$ part of the value needed in SM and $10^{-4}$ of the one in GUT under SU(5). It is even possible for the Higgs particles to decouple from the rest of the universe and interact only gravitationally. That is the case if the same $\phi$ has a coupling to ${\cal R}$ in ${\cal L}_M$ for mass generation of the gauge bosons \cite{Deh93}. Within this scalar-tensor theory, the Higgs field equations with coupling to ${\cal R}$ and $\phi$ ($\sim$ SM) and only to ${\cal R} $ ($\sim$ GUT), respectively, are given below: \begin{alignat}{1} \phi^{;\mu}\,_{;\mu}- \frac{ \breve{\alpha}}{8\pi} \, {\cal R} \, \phi + \mu^2\phi+ \frac{\lambda}{6}\, (\phi^\dagger \phi) \, \phi&= -2G\, (\bar{\psi}_R \hat{x} \psi_L) \,,\label{kphi}\\ \phi^{;\mu}\,_{;\mu}- \frac{ \breve{\alpha}}{8\pi} \, {\cal R} \, \phi + \mu^2\phi+ \frac{\lambda}{6} \, (\phi^\dagger \phi) \, \phi & = 0.
\end{alignat} The field equation (\ref{kphi}) after the symmetry breakdown (with the excited Higgs field which satisfies $(1+\varphi)^2= 1+\xi $) acquires the following form, \begin{equation} \xi^{,\mu}\,_{;\mu}+ M^{*2}\xi=\frac{1}{1+4\pi/3\breve{\alpha}}\, \frac{8\pi \tilde{G}}{3} \, [\,T - \sqrt{1+\xi} \,\, \bar{\psi}\hat{m}\psi \,],\label{trace} \end{equation} where $\tilde{G}=({\breve{\alpha} v^2})^{-1}$ is related to Newton's constant. The Higgs field mass is given by \begin{equation} M^{*2}= l^{-2}= \left[ { \frac{16 \pi \tilde{G} ( \mu^4 / \lambda ) }{1 + \frac {4 \pi }{3 \breve{\alpha} } } } \right] = \left[\frac{\frac{4\pi}{9\breve{\alpha}}\lambda v^2}{1+ \frac{4\pi}{9\breve{\alpha}} }\right] , \label{M} \end{equation} which indicates that the Higgs field possesses a finite range defined by the length scale $l$. $T$ is the trace of the energy-stress-tensor $T_{\mu \nu}$ given in the following form, \begin{equation} T^{\mu \nu} = \frac{i}{2} \, \bar{\psi}\gamma^{(\mu}_{L,R}\psi^{;\nu)}+ h.c.- \frac{1}{4\pi} \, (F^{\mu}\,_\lambda^a F^{\nu \lambda}_a - F^a_{\alpha \beta}F_a^{\alpha \beta}g^{\mu \nu})\,. \label{FT} \end{equation} The field-strength tensor in equation (\ref{FT}) is defined as $F_{\mu \nu}=-ig^{-1}\, [{\cal D}_\mu,{\cal D}_\nu]$ where ${\cal D_\mu}$ is the covariant derivative. However, the generalized Dirac matrices $\gamma^\mu=h_a^\mu \gamma^a$ in equation (\ref{FT}) satisfy the following relation, \begin{align} \gamma^\mu \gamma^\nu + \gamma^\nu \gamma^\mu=2 \, g^{\mu \nu} \, \underline{1}\,. \end{align} The trace of the energy-stress tensor as mentioned in equation (\ref{trace}) is then given as follows, \begin{align} T=\frac{i}{2}\bar{\psi}\gamma^\mu_{L,R}\psi_{;\mu}+ h.c.= \sqrt{1+\xi} \, \,\bar{\psi}\hat{m}\psi. 
\end{align} However, in view of the coupling of $\phi$ with the matter Lagrangian, the energy-stress tensor and the source of Higgs particles cancel each other's contributions, such that Higgs particles can no longer be generated and interact only through the gravitational channel. \section{Search for Higgs boson and constraints} \noindent The search for the Higgs boson is a premier goal for high energy physicists, as the SM without the Higgs boson (or at least the Higgs Mechanism) is not manifestly consistent with nature. It is often said that the Higgs boson is the only missing piece of SM, since the top quark is already an experimental reality. The Higgs boson has not yet been produced in particle accelerators, although its practical role in explaining mass is not questioned and has been corroborated in many ways. However, the fundamental character of the Higgs boson still demands more explanation and, in the absence of experimental evidence, it remains in the category of objects yet to be discovered. From the point of view of the search for Higgs particles, SuSy models are leading candidates (although no supersymmetric particles have been discovered so far), while TC models do not contain Higgs particles at all, and some gravitational theories are interpreted as containing Higgs particles that interact only gravitationally. The MSSM, having the particle spectrum of SM along with the corresponding superpartners and two Higgs doublets in order to produce mass, is consistent with supersymmetry while avoiding the gauge anomalies due to the fermionic superpartners of bosons, thereby constraining the search for the Higgs mass. Using renormalization group techniques, MSSM Higgs masses have been calculated, and two specific bounds on the lightest Higgs mass can be given: one for the case when the top-squark mixing is almost negligible and one when it is maximal.
The assumptions $M_T = 175$ GeV and $M_{\overline T} = 1$ TeV lead to $M_{h^0} \lesssim 112$ GeV when the mixing is negligible, while maximal mixing produces the larger upper bound $M_{h^0} \lesssim 125$ GeV. On the other hand, within the MSSM with explicit CP violation, measurements of the thallium electric dipole moment (EDM) are used to constrain the CP phases \cite{Lee07}. The present experimental constraints suggest that the lightest Higgs mass (say $M_{H_1}$) has to lie in the range 7 GeV $\lesssim M_{H_1}\lesssim 7.5$ GeV ($\tan \beta \backsimeq 3$), or $\lesssim 10$ GeV ($3\lesssim \tan \beta \lesssim 5$), assuming mild cancellations in the thallium EDM. In a scenario with explicit CP violation in the MSSM, the lightest Higgs boson can be very light ($M_{H_1}\lesssim 10$ GeV), with the other two neutral Higgs bosons significantly heavier ($M_{H_2,H_3}\gtrsim 100$ GeV). Here, CP is explicitly broken at the loop level, and the three neutral MSSM Higgs mass eigenstates no longer have definite CP parities, due to a CP-violating mixing between the scalar and pseudo-scalar neutral Higgs bosons. The lightest Higgs boson is mostly CP odd, and its production at the Large Electron-Positron Collider (LEP) is highly suppressed. The second-lightest Higgs ($H_2$) at $\sim 110$ GeV dominantly decays into $H_1$, which then decays into two $b$ quarks and $\tau$ leptons. This leads to a decay mode containing 6 jets in the final state \cite{Lee07}, which was recovered with only low efficiency at LEP2.\\ In the search for the Higgs boson, the mass of the Higgs particles is a key constraint; searches started in the late 1980s with LEP1 and, without knowing all parameters, it was nearly impossible to know at which energy scale to search. In the MSSM bound quoted above, maximal mixing corresponds to an off-diagonal squark squared-mass that produces the largest value of $M_{h^0}$, with extremely large splitting between the top-squark mass eigenstates.
On the other hand, weak-scale SuSy predicts $M_h \lesssim 130$ GeV, roughly in accordance with a possible Higgs mass of the order of $114$ GeV, for which CERN presented possibly positive results in September 2000 (this was achieved after delaying the shut-down of LEP and reducing the number of collisions for the Higgs search, in order to gain additional energy and operate beyond the original capacities of the collider). However, the experiments were forced to stop for further improvement of the accelerator (towards the new Large Hadron Collider (LHC)), and such results could not be reproduced by other laboratory groups. Further, the updates presented in 2001 lessened the confidence, and a more thorough analysis reduced the statistical significance of the data to almost nothing. However, with $M_T$ known, at least one more parameter of the theory is fixed, i.e. $\tan \beta ={v_2} / {v_1}\thickapprox 1$ for all SuSy energy scales \cite{Kan93}. Here, $v_i$, $i=1,2$, are the ground state values of the two Higgs doublets needed for SuSy. In fact, the search for the Higgs boson uses different possible decay processes; at LEP in particular, electron-positron collisions produce $WW$, $ZZ$ and $\gamma \gamma$ pairs in most cases, as given in the following form \cite{Wel03}, \begin{equation} \left. \begin{array}{c} e^+e^-\rightarrow W^+W^-\\ e^+e^-\rightarrow ZZ\\ e^+e^-\rightarrow W^+W^-\gamma\\ e^+e^-\rightarrow \gamma \gamma \end{array} \right\} \,. \end{equation} There are also other possible channels, where hadrons or heavier lepton pairs are seen in $e^+e^-$ collisions, as given below, \begin{equation} \left. \begin{array}{c} e^+e^-\rightarrow e^+e^- q\bar{q}\\ e^+e^-\rightarrow q\bar{q}(\gamma)\\ e^+e^-\rightarrow \mu^+\mu^- \end{array} \right\} \,. \end{equation} Nevertheless, in experiments searching for Higgs particles, it is important to separate out the $HZ$ channel (i.e.
$e^+e^- \rightarrow HZ$), and for this one has to pick out the $H$ and $Z$ decay products against the background of all other decay channels, although the cross-section for the $HZ$ channel is very small with respect to the hadronic ones, and becomes smaller for larger Higgs masses. In particular, $ZZ$ production is an irreducible background to $ZH$ production. The $Z$ bosons can decay into $W$ bosons or, from an excited state, into other $Z$ and Higgs bosons $H$. In turn, $H$ is expected to decay into four jets, with 60\% probability in the form of heavy hadrons, \begin{equation} \left. \begin{array}{c} H \rightarrow b\bar{b} \\ Z\rightarrow q\bar{q} \end{array} \right\} \,. \end{equation} There is a missing-energy channel with 18\% probability, while a leptonic channel exists with 6\% probability, \begin{equation} \left. \begin{array}{c} H \rightarrow b\bar{b} \\ Z\rightarrow l^+l^- \end{array} \right\} \,. \end{equation} Moreover, another channel, with 9\% probability, is the $\tau$ channel, \begin{equation} \left. \begin{array}{c} H \rightarrow b\bar{b}(\tau^+ \tau^-) \\ Z\rightarrow \tau^+ \tau^-(q\bar{q}) \end{array} \right\} \,. \end{equation} Thus, experiments search for $Z$ boson events accompanied by a pair of bottom quarks which, all together, have enough energy to come from a very heavy object (the Higgs boson candidate). The total number of events in all decay channels is then compared against the total number expected from theory, and the measured particle energy is monitored against the machine performance at all times, to make sure that changes in the accelerator energy and collision rate do not affect the interpretation. The cross-section for the $HZ$ channel is meager and depends on the mass of the Higgs particles. For a Higgs mass of $M_H=110$ GeV, the cross-section for $e^+e^-\rightarrow HZ$ decays is already smaller than for $ZZ$ ones, and decreases for higher masses.
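As a quick consistency check (our own arithmetic, using only the percentages quoted above for the listed $HZ$ event topologies):

```python
# Fractions of HZ events quoted in the text for the listed topologies.
channels = {
    "four jets (H -> bb, Z -> qq)":       0.60,
    "missing energy":                     0.18,
    "leptonic (H -> bb, Z -> l+ l-)":     0.06,
    "tau channel (H -> bb / Z -> tautau)": 0.09,
}
total = sum(channels.values())
print(f"listed topologies cover {total:.0%} of HZ events")  # -> 93%
```

The remaining few percent correspond to topologies not enumerated in the text.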
For energies greater than $110$ GeV, the cross-section of $e^+ e^- \rightarrow e^+ e^- q \bar{q}$ decays is the largest, on the scale of $10^4$ pb and increasing, while $e^+ e^- \rightarrow q \bar{q} (\gamma)$ is the next one, around $10^3 - 10^2$ pb and increasing \cite{Wel03}. The cross-sections of the decays into muons $\mu^+ \mu^- (\gamma)$ and into $\gamma$-rays are almost equal. The decay channel into weakons, $e^+ e^- \rightarrow W^+ W^-$, rises rapidly at energies above around $140$ GeV and is almost constant above around $170$ GeV, with a cross-section somewhat larger than for the $\gamma \gamma$ channel, on a cross-section scale of $10$ pb. The decay channel into $ZZ$ is much weaker but perceptibly above zero, around $1$ pb for energies above the scale of $180$ GeV; the same holds for the weaker $W^+W^-\gamma$ decay channel. The $HZ$ channel is expected to be even weaker than the last ones, and perceptibly different from zero only above around $220$ GeV, on a cross-section scale of $10^{-1}$ pb. For the decay channels to be analyzed, it is important to detect the $Z$ decays. OPAL also detects $Z\rightarrow e^+e^-$ events. The electron-pair events have low multiplicity, and electrons are identified by a track in the central detector and a large energy deposit in the electromagnetic calorimeter, $E/p = 1$. The $Z\rightarrow \mu^+\mu^-$ events are also analyzed in L3, where the muons penetrate the entire detector and deposit only a small amount of energy in the calorimeters. L3 emphasizes lepton and photon identification, with a precise BGO crystal ECAL and a large muon spectrometer. All sub-detectors reside inside a $r=6$ m solenoid with a magnetic field $B=0.5$ T. The $Z\rightarrow \tau^+\tau^-$ events are also detected by the DELPHI collaboration. Tau-lepton decays are dominated by 1 and 3 charged tracks, with or without neutrals, missing neutrino(s) and back-to-back very narrow \emph{jets}. For them, DELPHI has an extra particle-ID detector known as RICH.
However, to detect heavy hadrons, it is important to recall that they decay weakly, sometimes into leptons, with long lifetimes and characteristic masses and event shapes. For instance, $b$ and $c$ hadrons decay into leptons about $20\%$ of the time, with high momentum $p$. The electrons are then identified by ionization in the tracking chambers, while the muons are matched between the central tracker and the muon chambers. Moreover, the leptons tag the charge of the decaying hadron, as in $e^+e^-\rightarrow Z\rightarrow b\bar{b}$ in L3. However, at the LHC a Higgs mass of up to twice the $Z$ boson mass might be measured. The production mode is based on partonic processes, as at the Tevatron, and the greatest rate should come from gluon fusion to form a Higgs particle ($gg\rightarrow H$) via an intermediate top-quark loop, where the gluons produce a virtual top-quark pair which couples to the Higgs particle. Furthermore, alternatives are the channels with hadronic jets, with a richer kinematic structure of the events, which should allow refined cuts increasing the signal-to-background ratio. The latter channels are quark-gluon scattering ($q(\bar{q})g\rightarrow q\bar{q}H$) and quark-antiquark annihilation ($q\bar{q}\rightarrow gH$), both dominated by loop-induced processes involving effective $ggH$ and $ggHZ$ couplings \cite{Bre04}. Nevertheless, there is still the possibility of more decay channels, and generalizations of SM (for example supersymmetric models) demand the existence of more possible decays with supersymmetric particles (such as through squark loops). Such a generalization might be needed with respect to the problems that are not solvable within SM and seem to be settled within the supersymmetric version of SM (i.e. MSSM). However, experimental evidence for supersymmetric particles is important to sustain the physical reality of the theory, as well as to clarify the reason for such heavy masses (if these particles really do exist).
In fact, the self-consistency of SM up to the GUT scale of about $10^{16}$ GeV requires a Higgs mass with the upper and lower bounds (2003) as given below \cite{Wel03}, \begin{align} 130\, \text{GeV}\lesssim M_H\lesssim 190\,\text{GeV}, \end{align} which was corrected in 2004 (after precise measurements of the top quark mass) to the following value, \begin{align} M_H\lesssim 250\,\text{GeV}\label{bound}. \end{align} Higher values of the Higgs boson mass make the theory non-perturbative, while too low values make the vacuum unstable \cite{Wel03}. From the experimental point of view, and within the standard models, Higgs masses $M_H < 114$ GeV are excluded at (at least) the 95\% confidence level. Fermilab also announced an estimate of 117 GeV for the Higgs mass in June 2004. However, the EW data strongly prefer a light Higgs boson and, according to the precision fit of all data, the most likely value of the Higgs mass should be slightly below the limit set by the direct searches at LEP2, while the upper limit for the Higgs mass lies around $220$ GeV at the 95\% confidence level \cite{D0Col}, which is given by equation (\ref{bound}) after making all the relevant corrections. Moreover, it is quite interesting to note an existing theoretical prediction of a Higgs mass around 170 GeV \cite{Cha06}, which comes from an effective unified theory of SM based on noncommutative geometry, with neutrino mixing, coupled to gravity. The Higgs boson mass is basically unrestricted, but there are even indications from lattice calculations that the simplest version of the EW interaction is inconsistent unless $M_H \lesssim 700$ GeV \cite{Wel03}. With a mass higher than $800$ GeV, the Higgs bosons would be strongly interacting, so that many new signals would appear in the Higgs boson scenario. There is also some evidence from indirect searches for the Higgs scalar in Ultra High Energy (UHE) cosmic-ray interactions \cite{udg}.
{\it Where do the Higgs particles appear?} is of course an open question, but an important puzzle to solve in elementary particle physics, and its answer may involve new conjectures still to come in modern physics. \section{Concluding remarks} \noindent The SM of modern elementary particle physics provides a concise and accurate description of all fundamental interactions except gravitation. The answer to the fundamental question of what allows the elementary particles to become heavy is now addressed in terms of the Higgs boson in SM, which is quite unlike either a matter or a force particle. The Higgs Mechanism is therefore a powerful tool of modern particle physics, which makes the models mathematically consistent and able to explain the nature of fundamental interactions in a manifest way. The bosons and fermions are believed to gain mass through a phase transition via the Higgs Mechanism. In this way, the particles can be connected with experiments, and a theoretical explanation may be given of how the mass generation takes place. Nevertheless, the Higgs particles, belonging to the Higgs field, are still not an experimental reality and need to be observed to make any model complete. In the same way, the SM might need to be generalized consistently in view of the different hues of unification schemes and other models, viz. GUT, SuSy, TC, STT (with or without the Higgs Mechanism). One more possible answer to the problem comes from LQG, which seems to explain the general nature of all particles in space-time.\\ The Higgs particles interact in a gravitative and Yukawa form, but their nature is still not completely understood. Their fundamental existence is not a fact until and unless they are observed in high energy experiments, such that SSB could finally be accepted as the natural process of the mass generation mechanism.
On the other hand, the Higgs particles may turn out to couple only gravitationally, with a possibility to be generated in the accelerators in the form of SIDM. The search for Higgs particles is a very important task in physics, and it is believed that their mass will be within reach of the future generation of high energy experiments (especially those scheduled to start at the LHC in the near future). Put simply, the Higgs bosons are believed to have a mass between $130$ GeV and $250$ GeV, and the current experimental status is that they are heavier than $114$ GeV. The search for the Higgs boson is still a matter of speculation in the absence of clear experimental evidence, and the detection of the Higgs particle as a real observable particle in the future will be a momentous occasion ({\it eureka moment}) in the world of elementary particle physics, certifying the basic ideas of SSB for the mass generation in the universe. \\ \\ {\bf Acknowledgments}: The authors are grateful to Professor H. Dehnen for his kind motivation and help in substantially improving the manuscript, and also thank him for his warm hospitality during their stay at the Fachbereich Physik, Universit\"at Konstanz, Germany. The authors would like to sincerely thank Professor F. Steiner and R. Lamon, Universit\"at Ulm, Germany, for various interesting and stimulating discussions and useful comments. \newpage
\section{Introduction} The propagation of coherent light through scattering media yields random wavefields with typical intensity structures called optical speckles. The control of light distribution inside and through complex media by wavefront modulation of the impinging beam is of critical importance for applications ranging from bio-imaging~\cite{Yang_NP_15} to telecommunications~\cite{Padgett_OL_12}, for instance by multiplexing information with orbital angular momentum~\cite{Li_LSA_19}. Information transmission through diffusers is typically characterized in terms of field and intensity correlations~\cite{Akkermans_Montambaux}. For diffusers exhibiting so-called ``memory effect'' correlations, important invariants were identified under specific spatial (tilt and shift) transformations~\cite{Stone_PRL_88,Feng_PRL_88,Judkewitz_NP_15,Vellekoop_optica_17}. Additionally, regardless of the wavefront of the impinging beam, critical points in random wavefields exhibit many topological correlations~\cite{Freund_1001_correlations}, which thus demand the development of specific tools to be analyzed. Optical vortices are especially important critical points, since they are centered on singular phase points coinciding with nodal points of the intensity. They spontaneously appear in random wavefields~\cite{Berry_PRSLA_74}, and thereby allow efficient super-resolution microscopy~\cite{Pascucci_PRL_16,Pascucci_ArXiv_17}. The present work aims at exploring the possibility to manipulate topological correlations between critical points in random wavefields under symmetry control and spiral wavefront modulation, in a Fourier plane of the impinging beam. Critical points are characterized by their topological charge and their Poincaré number~\cite{Dennis_thesis}. They may typically be controlled by applying phase or amplitude masks in a Fourier plane.
Any smooth and regular transform of the wavefield (either in phase or amplitude) induces changes preserving both the topological charge and the Poincaré number~\cite{Freund_1001_correlations,Nye_PRSLA_88}. Noteworthily, these conservation rules account for the topological stability of isolated vortices of charge $1$ in speckles, since the creation or annihilation of vortices can only occur in pairs~\cite{Nye_PRSLA_88, Pascucci_PRL_16}. As opposed to smooth phase transforms, the addition of a spiral phase mask in a Fourier plane is a \emph{singular transform} and results in a change of the total orbital angular momentum~\cite{Larkin_JOSA_01,Padgett_PO_09}. Recently, considering correlations between the spatial distributions of critical points in a speckle under such spiral phase transforms~\cite{Gateau_PRL_17}, we observed a strong interplay between intensity maxima and optical vortices. More precisely, the obtained results suggested that the topological charges of these critical points were all incremented by applying a $+1$ spiral phase mask in the Fourier plane. The impossibility to spontaneously get $+2$-charged vortices (unstable and thus unlikely in random light structures~\cite{Freund_OC_93}) resulted in the observation of a partial cyclic permutation of the three populations of critical points (namely, maxima and $\pm 1$-charged vortices). Furthermore, as a third kind of possible transform, it was observed that the orbital angular momentum may not be conserved when using amplitude masks with a high degree of symmetry~\cite{Cordoba_PRL_05,Xie_OL_12}. As a result, optical vortices can be created using simple amplitude masks~\cite{Visser_PRA_09,Cheng_AO_14}. This property proved to be of interest for imaging applications, to reveal symmetries of an imaged object~\cite{Boyd_LSA_17,Willner_OL_17,Chen_OL_16}, and for topological charge measurements~\cite{Chavez-Cerda_PRL_10}, especially in astronomy~\cite{Beijersbergen_PRL_08}.
Here, combining spiral phase transforms of order $n$ with star-like amplitude masks having discrete point group symmetries $D_3$ and $D_4$, we study experimentally the topological correlations between intensity maxima and optical vortices in speckles. A new co-localization criterion is proposed, inspired by statistical mechanics. Although random wavefields do not possess any symmetry, such a combination allows us to strengthen periodicity and even to control the period of the cyclic permutation. Noteworthy, for an amplitude mask of symmetry $D_4$, a phase saddle point appears as a complementary critical point to complete a cycle of period 4. A transposition between vortices of charge -1 and vortices of charge +1 is also revealed when adding a 2-charged spiral phase mask. \section{Experimental procedure} The experimental procedure consisted in modulating a random phase pattern in a Fourier plane with an amplitude mask and a spiral phase mask. Here, spiral phase masks of order $n$, ${\rm SP}_{n}(\theta) = e^{i.n.\theta}$ (in polar coordinates), were applied for $n \in \llbracket -6;6 \rrbracket$. As amplitude masks, three Binary Amplitude (${\rm BA}$) masks were used: a disk and two periodic angular slits with a point group symmetry $D_3$ and $D_4$. They are defined by the following angular transmission function (in polar coordinates): \begin{equation} {\rm BA}^{\rm N}(\theta) = \begin{cases} 1, & \text{ if $\lvert \theta-(k-\frac{1}{2}).\frac{2.\pi}{\rm N} \rvert\ < \frac{\pi}{32}$ with $k \in \llbracket 1;\rm N \rrbracket$ }.\\ 0, & \text{otherwise}. \end{cases} \label{eq:apert} \end{equation} for $N\in\left\{3,4\right\}$. By convention, ${\rm BA}^{\infty}$ defines the disk-shaped aperture (obtained for $\rm N \geqslant 32$). For $\rm N < 32$, the orbital angular momentum content (or spiral spectrum~\cite{Torner2005}) of the ${\rm BA}^{\rm N}$ aperture exhibits discrete harmonics for the spiral modes $n= \rm{p.N}$ with ${\rm p}\in \mathbb{Z}$. 
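To make Eq.~(\ref{eq:apert}) concrete, a minimal numerical sketch of the ${\rm BA}^{\rm N}$ masks could read as follows (our own implementation; the grid size and pixel radius are illustrative, and the disk convention ${\rm BA}^{\infty}$ is triggered for ${\rm N} \geqslant 32$ as in the text):

```python
import numpy as np

def ba_mask(N, size=512, radius=170, slit_half_width=np.pi / 32):
    """Binary amplitude mask BA^N of Eq. (1): N angular slits of half-width
    pi/32 inside a disk; BA^infinity (plain disk) is used for N >= 32."""
    y, x = np.indices((size, size)) - size / 2
    r = np.hypot(x, y)
    if N >= 32:                          # disk-shaped aperture BA^infinity
        return (r < radius).astype(float)
    theta = np.arctan2(y, x) % (2 * np.pi)
    # slit k is centred on (k - 1/2) * 2*pi/N, for k = 1..N
    centres = (np.arange(1, N + 1) - 0.5) * 2 * np.pi / N
    diff = theta[..., None] - centres
    ang = np.min(np.abs((diff + np.pi) % (2 * np.pi) - np.pi), axis=-1)
    return ((r < radius) & (ang < slit_half_width)).astype(float)
```

The open area of ${\rm BA}^{\rm N}$ is then a fraction ${\rm N}/32$ of the disk area, consistent with ${\rm N}$ slits of full angular width $\pi/16$.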
Given the width of the angular slits, and provided that $N$ = 3 or 4, the aperture ${\rm BA}^{N}$ can be considered as invariant under the addition of ${\rm SP}_{n}$ when $n = \pm \rm N$ or $\pm 2.\rm N$. Such an invariance in the Fourier plane is thus necessarily associated with a periodic transform of the speckle pattern in real space. \begin{figure}[htb] \centering \fbox{\includegraphics[width=\linewidth]{Figure_103}} \caption{Experimental setup (a) used to measure the intensity and the phase of speckle patterns corresponding to the different binary amplitude masks. A spatial light modulator (SLM) is illuminated with a collimated laser beam at 635 nm. The phase is measured by phase stepping interferometry. Both the reference and the signal wavefronts are imprinted on the SLM, in addition to a blazed grating, which allows sending undiffracted light to a beam block (BB) and the first order diffracted beam to a camera (Cam.). A stack of eight images was then sequentially recorded while phase shifting the reference beam. The measured intensity $I_{0}$ maps (top, green colorscale) and phase $\Phi_{0}$ maps (bottom, gray colorscale) are presented for a circular aperture (${\rm BA}^{\infty}$) (b), periodic angular-slits with a point group symmetry $D_3$ (${\rm BA}^{3}$) (c), and periodic angular-slits with a point group symmetry $D_4$ (${\rm BA}^{4}$) (d). Miniatures of the ${\rm BA}$ masks are displayed for illustration.} \label{fig:principle} \end{figure} The experimental configuration is detailed in Fig.~\ref{fig:principle} [See Supplement 1, Section 1 for further details on the experimental methods]. A spatial light modulator (SLM) (LCOS, X$10468$, Hamamatsu) was illuminated with a collimated laser beam at $635~{\rm nm}$ and Fourier conjugated to a camera ($768\times 1024$ pixels, pixel size: $4.65\times 4.65~\mu {\rm m}^2$) with a converging lens.
The phase $\Phi_{n}$ and amplitude $A_{n}$ of the modulated (${\rm SP}_{n}$ mask) random wave were measured at the camera plane by phase stepping interferometry~\cite{Creath_2005}. To do so, the SLM ($792\times 600$ SLM pixels, pixel size: $20\times 20~\mu {\rm m}^2$) was split into two parts to generate both the modulated random wave (or signal wave) on one side and a reference wave on the other side (see Fig.~\ref{fig:principle}a). The signal wave was generated by simultaneously adding the scattering random phase pattern, the spiral phase modulation ${\rm SP}_{n}$ and the amplitude mask ${\rm BA}^{N}$. Adding a blazed grating achieved spatial separation of the imprinted signal wavefront from undiffracted light (the latter being sent to a beam-block). The signal speckle intensity $I_{n}$ could be measured directly by removing the contribution of the reference beam. For phase-stepping interferometry, an additional Fresnel lens was added to the reference beam in order to cover the camera surface. The latter spherical contribution, as well as the relative phase-tilt between the signal and the reference beams, were removed in a numerical post-processing step. A stack of eight images was sequentially recorded by phase shifting the reference beam by $2 \pi/8$ phase-steps. All BA masks had the same radius of $r=170$ pixels at the SLM, thus yielding the same speckle grain size on the camera plane: $\lambda/(2{\rm NA}) = 70~\mu {\rm m}$ (Full Width at Half Maximum, FWHM), where $\lambda$ is the wavelength and $\rm{NA}$ $\simeq r/f \simeq 4.53\times 10^{-3}$ the numerical aperture of illumination (with $f=750~{\rm mm}$ the focal length of the lens L in Fig.~\ref{fig:principle}a). The speckle grain size thus covered $15$ camera pixels and ensured a fine sampling of the speckle patterns. Hereafter, all distances and spatial densities are expressed taking $\lambda/(2{\rm NA})$ as the length unit.
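The phase-stepping reconstruction can be illustrated by the generic $K$-bucket synchronous-detection formula below (our own sketch; the authors' actual post-processing may differ in details such as the removal of the spherical and tilt contributions):

```python
import numpy as np

def phase_from_steps(frames):
    """Recover the signal phase from K phase-shifted interferograms
    I_k = A + B*cos(phi - 2*pi*k/K), k = 0..K-1 (synchronous detection)."""
    frames = np.asarray(frames, dtype=float)
    K = frames.shape[0]
    steps = 2 * np.pi * np.arange(K) / K
    s = np.tensordot(np.sin(steps), frames, axes=1)  # ~ (K/2) * B * sin(phi)
    c = np.tensordot(np.cos(steps), frames, axes=1)  # ~ (K/2) * B * cos(phi)
    return np.arctan2(s, c)                          # phase modulo 2*pi
```

With $K=8$ steps of $2\pi/8$, as in the experiment, the modulated phase $\Phi_n$ is recovered modulo $2\pi$ at each camera pixel.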
In Fig.~\ref{fig:principle}(b-d), an illustration of the speckle intensity and phase maps obtained for the three different geometries of BA masks is shown. In the following study, intensity maps $I_{n}$ and phase maps $\Phi_{n}$ were measured for all three ${\rm BA}^{N}$ masks ($N\in \{ 3,4,\infty \}$) and for each ${\rm SP}_{n}$ mask ($n \in \llbracket -6;6 \rrbracket$). For comparison, the intensity map $I_{\rm rand}$ and the phase map $\Phi_{\rm rand}$ obtained for a non-correlated scattering random pattern were acquired for all the ${\rm BA}^{N}$ masks independently. \section{Statistical analysis of the topological correlation between critical points} \subsection{Studied critical points} Since the field at the camera is linearly polarized, the optical fields are studied here as scalar fields. The locations of the main critical points of the experimental intensity and phase maps were measured [See Supplement 1, Section 1.C for details on the detection of critical points] and their statistical correlation distances were analyzed. Importantly, for a given BA mask, adding ${\rm SP}_n$ masks preserves all statistical properties of the speckle patterns, such as the number-density of critical points. Phase saddle-points of $\Phi_{n}$ are notated $S_{n}^{p}$ and vortices of charge $\pm 1$: $V_{n}^{\pm}$. Vortices of charge higher than $1$ do not appear in Gaussian random wavefields~\cite{Freund_OC_93}. Maxima and saddle-points of $I_{n}$ are notated $M_{n}$ and $S_{n}^{I}$, respectively. Non-zero minima and phase extrema have not been considered here, since they have significantly lower densities~\cite{Freund_PLA_95}. All the notations are summarized in Table~\ref{tab:critical-points}. The measured average number-densities of the critical points are presented in Table~\ref{tab:density-points}. The density of the critical points of type $X$ ($X =$ $V^{\pm}$, $M$, $S^{p}$ or $S^{I}$) is notated $\rho(X)$.
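The detection procedure itself is described in Supplement 1, Section 1.C. As a hypothetical illustration of one standard approach (not necessarily the one used for the published maps), vortices of charge $\pm 1$ can be located by computing the winding number of the measured phase around each pixel plaquette:

```python
import numpy as np

def wrap(a):
    """Wrap phase differences to the interval [-pi, pi)."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def find_vortices(phase):
    """Locate phase vortices of charge +/-1 in a 2D phase map by summing
    wrapped phase differences around each 2x2 pixel plaquette; a loop sum
    of +/-2*pi signals an enclosed vortex.  Returns an integer charge map
    of shape (H-1, W-1).  Assumes the phase is well sampled (less than pi
    variation per pixel away from the singular cores)."""
    d1 = wrap(phase[:-1, 1:] - phase[:-1, :-1])   # along the top edge
    d2 = wrap(phase[1:, 1:] - phase[:-1, 1:])     # down the right edge
    d3 = wrap(phase[1:, :-1] - phase[1:, 1:])     # along the bottom edge
    d4 = wrap(phase[:-1, :-1] - phase[1:, :-1])   # up the left edge
    return np.rint((d1 + d2 + d3 + d4) / (2 * np.pi)).astype(int)
```

Maxima and saddle points of $I_n$ and $\Phi_n$ can be found analogously from local comparisons of neighboring pixels.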
As expected, $\rho(V^{-})$ and $\rho(V^{+})$ are equal~\cite{Freund_1001_correlations}, and $\rho(X)$ depends both on the type of critical point and the BA mask. \begin{table}[htbp] \centering \caption{\bf Notations for the main critical points} \begin{tabular}{cccc} \hline \bf Phase & Maxima & Saddle & Vortices (charge $\pm 1$) \\ & - & $S^{p}$ & $V^{-}$ and $V^{+}$ \\ \hline \bf Intensity & Maxima & Saddle & Zeros \\ & $M$ & $S^{I}$ & $V^{-}$ and $V^{+}$ \\ \hline \end{tabular} \label{tab:critical-points} \end{table} \begin{table}[htbp] \centering \caption{\bf Measured average number density of critical points (length unit: $\lambda/(2{\rm NA})$). The average number of $V^{-}$ is 660.85 for the circular aperture (${\rm BA}^{\infty}$). } \begin{tabular}{ccccc} \hline BA mask & $V^{-}$ (or $V^{+}$) & $M$ & $S^{p}$ & $S^{I}$ \\ \hline ${\rm BA}^{\infty}$ & $0.19$ & $0.32$ & $0.36$ & $0.65$ \\ ${\rm BA}^{3}$ & $0.20$ & $0.39$ & $0.36$ & $0.79$ \\ ${\rm BA}^{4}$ & $0.19$ & $0.33$ & $0.40$ & $0.70$ \\ \hline \end{tabular} \label{tab:density-points} \end{table} \subsection{Statistical tools for the analysis of topological correlations} \begin{figure}[h] \centering \fbox{\includegraphics[width=\linewidth]{Figure_206}} \caption{Statistical analysis of the separation distances between one set of critical points (here $V_0^-$) and the closest point of another set (notated $X$). The distance between $V_0^-$ and the closest $X$ is notated {\it d}$\left(V_0^-, X\right)$. The Radial Probability Density Functions (RPDF) of the nearest neighbor (a) and the corresponding Weighted Median Distance (WMD) (b) are shown. Radial Distribution Functions (RDF) of the nearest neighbor (c) and the corresponding Weighted Median Normalized Distance (WMND) (d) provide a statistical toolbox to study the spatial correlation between pairs of critical points.
The results were derived from experimental measurements of $I_n$ and $\Phi_{n}$ obtained for the amplitude mask ${\rm BA}^{\infty}$.} \label{fig:defnormalized} \end{figure} To study the statistical transformations of critical points quantitatively, dedicated tools are needed. What we describe as a transformation of critical points by the addition of spiral phase masks refers to the mean nearest-neighbor distances between populations of critical points, and calls for a discrimination parameter. Two statistical tools are therefore described below: the radial distribution function (RDF) and the Weighted Median Normalized Distance (WMND). In our previous study~\cite{Gateau_PRL_17}, correlations between critical points could be characterized by computing the radial probability density function (RPDF) of the nearest-neighbor distance. Fig.~\ref{fig:defnormalized}a presents RPDFs of the distance {\it d}$\left(V_0^-, X\right)$ in the case of ${\rm BA}^{\infty}$. We define {\it d}$\left(Y,X\right)$ as the distance between a $Y$-point and the closest $X$-point. ${\rm RPDF}(r)$ corresponds to the probability of finding the closest $X$-point at a distance $r$ from a $Y$-point, per unit area ($\int_0^\infty {\rm RPDF}(r)\,2\pi r\,{\rm d}r=1$). One drawback associated with the use of the RPDF is that it may suggest paradoxes if improperly interpreted. Considering {\it d}$\left(V_0^-, V_2^+\right)$ and {\it d}$\left(V_0^-, S_{\rm rand}^I\right)$, it seems that $V_0^-$ correlates both with $V_2^+$ and $S_{\rm rand}^I$ since both RPDFs reach high values at zero distances. While a correlation is expected in the former case (due to topological charge incrementation), no correlation is expected for the latter, which involves two independent sets of random points.
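As a minimal sketch (our own naming and binning, assuming lists of detected point coordinates in the $\lambda/(2{\rm NA})$ length unit), the nearest-neighbor RPDF can be estimated as:

```python
import numpy as np

def nn_rpdf(points_y, points_x, r_max, n_bins=50):
    """Radial probability density function (RPDF) of the nearest-neighbour
    distance d(Y, X): the probability per unit area of finding the closest
    X-point at a distance r from a Y-point, binned so that
    sum_i RPDF(r_i) * 2*pi*r_i * dr equals the fraction of Y-points whose
    nearest X-neighbour lies within r_max."""
    py = np.asarray(points_y, dtype=float)
    px = np.asarray(points_x, dtype=float)
    # nearest-neighbour distances by brute force (fine for ~1e3 points)
    d = np.sqrt(((py[:, None, :] - px[None, :, :]) ** 2).sum(-1)).min(axis=1)
    edges = np.linspace(0.0, r_max, n_bins + 1)
    counts, _ = np.histogram(d, bins=edges)
    r = 0.5 * (edges[:-1] + edges[1:])
    dr = edges[1] - edges[0]
    return r, counts / (len(py) * 2.0 * np.pi * r * dr)
```

For larger point sets, the brute-force distance matrix would typically be replaced by a spatial tree search; the normalization is unchanged.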
The reason why the amplitude of the RPDF of {\it d}$\left(V_0^-, S_{\rm rand}^I\right)$ is higher than the one of {\it d}$\left(V_0^-, V_2^+\right)$ at zero distances is simply due to the $\sim 3$-times higher spatial density of intensity saddle points $S_{\rm rand}^I$ as compared to vortices $V_2^+$ (see Table~\ref{tab:density-points}): the probability to find a saddle point at close distance is thus larger. To quantitatively characterize topological correlations, we thus need to normalize RPDFs by the number densities $\rho(X)$. Our first statistical tool, the radial distribution function (RDF) -- well known in statistical mechanics~\cite{chandler1987introduction} -- was extended here to nearest neighbors by normalizing the RPDF of {\it d}$\left(V_0^-, X\right)$ by $\rho(X)$, and the distances {\it d}$\left(V_0^-, X\right)$ by the mean X-interpoint half-distance $\left(2\sqrt{\rho(X)}\right)^{-1}$ ~\cite{Tournus2011}. Fig.~\ref{fig:defnormalized}c shows the RDF of the same data as in Fig.~\ref{fig:defnormalized}a. As a result, all the RDFs of {\it d}$\left(V_0^-, X_{\rm rand}\right)\cdot 2\sqrt{\rho(X)}$ are superimposed for every $X_{\rm rand}$, and the spatial correlation between $V_0^{-}$ and $V_2^+$ clearly appears. To obtain a single binary parameter discriminating the spatial correlation between $V_0^-$ and $X$, we further define the Weighted Median Normalized Distance (WMND) as a second statistical tool: the WMND$\left(V_0^-, X\right)$ is the $50\%$ weighted percentile of {\it d}$\left(V_0^-, X\right)\cdot 2\sqrt{\rho(X)}$ with weights corresponding to the RDF values. A ${\rm WMND}\left(V_0^-, X\right)$ around $0.5$ means that no spatial correlation exists between $V_0^-$ and $X$, while ${\rm WMND}<0.5$ and ${\rm WMND}>0.5$ mean an attraction and a repulsion, respectively. A zero WMND value means perfect correlation while WMND=1 means perfect anti-correlation.
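The discretization below is a sketch of how such a WMND can be computed from the nearest-neighbor distances; the exact binning and weighting used for the published curves may differ in detail:

```python
import numpy as np

def wmnd(d, rho_x, n_bins=100):
    """Weighted Median Normalized Distance (WMND) between two sets of
    critical points.  `d` holds the nearest-neighbour distances d(Y, X),
    which are normalised by the mean X-interpoint half-distance
    1/(2*sqrt(rho_x)).  Each distance bin is weighted by its RDF value
    (bin count divided by the ring circumference 2*pi*u, up to a constant
    factor) and the 50% weighted percentile is returned: ~0.5 means no
    spatial correlation, <0.5 attraction, >0.5 repulsion."""
    u = np.asarray(d, dtype=float) * 2.0 * np.sqrt(rho_x)
    edges = np.linspace(0.0, u.max() + 1e-12, n_bins + 1)
    counts, _ = np.histogram(u, bins=edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    weights = counts / (2.0 * np.pi * centers)   # RDF up to a constant
    cum = np.cumsum(weights)
    return centers[np.searchsorted(cum, 0.5 * cum[-1])]
```

For two independent Poisson point sets this estimator returns a value close to 0.5, while co-localized sets drive it towards zero.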
Fig.~\ref{fig:defnormalized}d presents the WMND$\left(V_0^-, X\right)$ for all the critical points considered in this study and for $n\in \llbracket 0;6 \rrbracket$. For comparison, the Weighted Median Distance (WMD) associated with the RPDFs -- defined as the $50\%$ weighted percentile of {\it d}$\left(V_0^-, X\right)$ with weights corresponding to the RPDF values -- is also computed and displayed in Fig.~\ref{fig:defnormalized}b. To validate this tool, taking ${\rm BA}^\infty$ as an illustrative example, we notice that WMND$\left(V_0^-, X_{\rm rand}\right)$ is around 0.5 for all the $X_{\rm rand}$, as expected. By comparison, WMD$\left(V_0^-, S^I_{\rm rand}\right)=0.35$, which misleadingly suggests correlations, as discussed above. Moreover, for $n>3$, the RDFs of {\it d}$\left(V_0^-, X_n\right)$ are observed to match the RDFs of {\it d}$\left(V_0^-, X_{\rm rand}\right)$: no noticeable spatial correlation is obtained for $n>3$. As expected again, the WMND$\left(V_0^-, X_{n}\right)$ are around 0.5 for $n>3$. Conversely, we get WMD$\left(V_0^-, S^I_{n}\right)< 0.38$, which would falsely suggest correlations. All these observations validate the WMND as a parameter to assess the spatial correlation between pairs of critical points in a speckle pattern. \section{Topological correlations between critical points for the different amplitude masks} Fig.~\ref{fig:multiplesp} presents the WMND($\rm{Y}_0$, $X_n$) for all the critical points ($\rm{Y}_0$ and $X_n$) screened in this study, for ${\rm SP}_{n}$ masks with $n \in \llbracket -6;6 \rrbracket$ and for the three considered BA apertures. The WMND was verified to be around $0.5$ for all the amplitude masks and all the pairs ($\rm{Y}_0$,$X_{\rm{rand}}$), for which there is obviously no spatial correlation.
For the sake of readability, in the following, we only discuss the interplay between critical points when adding positively charged $\rm SP$ masks, but symmetrical behaviours are observed for negatively charged $\rm SP$ masks (Fig.~\ref{fig:multiplesp}). \begin{figure}[htb] \centering \fbox{\includegraphics[width=\linewidth]{Figure_303}} \caption{Weighted Median Normalized Distance (WMND) for all possible pairs of critical points screened here and for the addition of spiral phase masks with charges up to $n = \pm 6$. The WMND values were computed from experimental measurements of $I_n$ and $\Phi_{n}$. } \label{fig:multiplesp} \end{figure} For the aperture ${\rm BA}^{\infty}$, the WMND reveals several noticeable features (reported in Table~\ref{tab:transformation}). First, as expected from our previous study~\cite{Gateau_PRL_17}, we notice some spatial correlations for the triplets $(V_{m-1}^-, M_m , V_{m+1}^+)$. Because $\rho(V_0^-)<\rho(M_1)$, a one-to-one transformation is impossible between vortices and maxima. Although $V_0^-$ and $V_2^+$ have the same number density, we also notice that WMND$(V_0^-,V_2^+)$ is significantly different from $0$, indicating that the rate of the macroscopic transformation from $V_0^-$ to $V_2^+$ is below 1. In agreement with the 3-point cyclic permutation algebra observed in~\cite{Gateau_PRL_17}, a weak attraction is found for the pair $(V_0^{+}, V_{+1}^{-})$, corresponding to the topological-charge equation $1+1=-1$. However, as an alternative transformation for $V_0^+$, a similar attraction is now also observed for $(V_0^{+}, S^p_{+1})$. $V_0^+$ is thus subject to a bifurcation between $V_{+1}^-$ and $S^p_{+1}$, which implies two different mechanisms. As a first possibility, some $V_0^{+}$ transform into $S^p_{+1}$. This transformation inspires the following interpretation: when adding a ${\rm SP}_{+1}$ phase mask to an isolated Laguerre-Gaussian beam with topological charge $+1$, a $+2$-charged vortex is obtained.
Under weak perturbation, this $+2$ vortex splits into two $+1$ vortices, accompanied by the creation of both an intensity saddle point and a phase saddle point in between~\cite{Nye_PRSLA_88}. The creation of this pair of saddle points is governed by the Poincaré number conservation. Within this model, a $V_0^+$ vortex is expected to co-localize with both $S_{+1}^I$ and $S_{+1}^p$ and to anti-correlate with the two $V_1^+$ that split away. Although no noticeable spatial attraction was found for $(V_0^+,S_{+1}^I)$, a co-localization is observed for $(V_0^{+},S^p_{+1})$ and a weak repulsion is observed for the pair $(V_0^{+}, V_{+1}^{+})$, consistent with this interpretation. In speckles, where $+2$-charged vortices cannot be encountered since they are unstable~\cite{Freund_OC_93}, the weak perturbation approximation cannot be fully valid, potentially accounting for the remaining discrepancy between experimental observations and the proposed model. As a second possible transformation, the more surprising attraction of the pairs $(V_0^{+}, V_{+1}^{-})$ is observed, which calls for another mechanism. Since no such transformation can be imagined from isotropic $V_0^{+}$, it may only be interpreted by a mechanism dominated by strong perturbations. The statistically uniform mesh created by vortices and maxima in speckles~\cite{Longuet_Higgins_JOSA_60}, together with strong correlations observed for pairs $(M_0,V_{+1}^{+})$ and $(V_0^{-},M_1)$, seems to constrain $V_0^{+}$ to co-localize with $V_{+1}^{-}$. This transformation would deserve further analytical investigation, but we anticipate that the creation mechanism of $V_{+1}^{-}$ from $V_0^{+}$ can only be a many-body problem, involving the field structure (maxima, phase saddles and vortices) surrounding the initial $V_0^{+}$ of interest.
When adding a $\rm{SP}_{+2}$ mask for ${\rm BA}^\infty$, $V_0^+$ is not observed to significantly co-localize with any remarkable critical point (see Fig.~\ref{fig:multiplesp} and Table~\ref{tab:transformation}), whereas two possible transformations might have been expected for $V_0^+$. On the one hand, from the 3-point cyclic permutation, we could expect that $V_0^{+}$ would transform into $M_{+2}$. On the other hand, since in Table~\ref{tab:transformation}, maxima and phase saddle-points are noted to be simply exchanged (see pairs $(M_0,S^p_{+ 2})$ and $(S^p_0,M_{+ 2})$), a similar symmetrical exchange between $-1$ and $+1$ vortices could be expected, yielding a transformation of $V_0^{+}$ into $V_2^-$ (as $V_0^{-}$ is transformed into $V_2^+$). However, no such correlation is observed either for the pair ($V_0^{+}$,$M_2$) or for ($V_0^{+}$,$V_2^-$). Conversely, these correlations appear when applying amplitude masks ${\rm BA}^3$ and ${\rm BA}^4$, respectively, as detailed in the following. For $|n|>3$, no significant spatial correlation with the addition of $\rm{SP}_{n}$ is observed for ${\rm BA}^{\infty}$. This aperture has a circular symmetry. Therefore, its spiral spectrum contains only the fundamental spiral mode $n=0$, and is not invariant under the addition of any SP mask. All the described topological correlations associated with ${\rm BA}^\infty$ are summarized in Table~\ref{tab:transformation}. \begin{table}[htbp] \centering \caption{\bf Macroscopic transformations observed for the critical points $\rm Y_0$ with the amplitude mask ${\rm BA}^{\infty}$. The transformation rates are below 1.
} \begin{tabular}{ccccc} \hline \bf Critical point $\rm Y_0$ & & \bf Adding ${\rm SP_{+1}}$ & & \bf Adding ${\rm SP_{+2}}$ \\ \hline $V^{-}_0$ & $\rightarrow$ & $M_1$ & $\rightarrow$ & $V^{+}_2$ \\ \hline $M_0$ & $\rightarrow$ & $V^{+}_1$ & $\rightarrow$ & $S^p_2$ \\ \hline $V^{+}_0$ & $\rightarrow$ & $V^{-}_{1} + S^p_{1}$ & $\rightarrow$ & $\emptyset$ \\ \hline $S^{p}_0$ & $\rightarrow$ & $V^{-}_1$ & $\rightarrow$ & $M_2$ \\ \hline \hline \bf Adding ${\rm SP_{-2}}$ & & \bf Adding ${\rm SP_{-1}}$ & & \bf Critical point $\rm Y_0$ \\ \hline $\emptyset$ & $\leftarrow$ & $V^{+}_{-1} + S^p_{-1}$ & $\leftarrow$ & $V^{-}_0$ \\ \hline $S^p_{-2}$ & $\leftarrow$ & $V^{-}_{-1}$ & $\leftarrow$ & $M_0$ \\ \hline $V^-_{-2}$ & $\leftarrow$ & $M_{-1}$ & $\leftarrow$ & $V^{+}_0$ \\ \hline $M_{-2}$ & $\leftarrow$ & $V^{+}_{-1}$ & $\leftarrow$ & $S^{p}_0$ \\ \hline \end{tabular} \label{tab:transformation} \end{table} As a possible solution to strengthen the 3-point cyclic permutation, we used the ${\rm BA}^3$ amplitude mask, making the Fourier plane almost invariant with respect to the addition of $\rm SP_{\pm 3k}$ (so long as $3k$, with $k$ integer, remains small enough). Here, four main observations can be noted. First, as expected, the pairs $\left(\rm{Y}_0, \rm{Y}_{\pm 3}\right)$ and $\left(\rm{Y}_0, \rm{Y}_{\pm 6}\right)$ are observed to have a WMND very close to zero, indicating that the macroscopic transformation rate is close to $1$ for all these pairs. Second, we observe that the cycle of period 3 reinforces the spatial correlations of the triplet $(V_{m-1}^-, M_m , V_{m+1}^+)$ and even extends it to the $3^{\rm rd}$ and $6^{\rm th}$ spiral harmonics. Third, no noticeable correlation is observed between $V_{0}^{+}$ and phase saddle-points $S^p_{+ 1}$ (although an anti-correlation is obtained between $V_{0}^{+}$ and $V_{+1}^{+}$), contrary to the case of ${\rm BA}^{\infty}$.
The periodicity of $3$ induces a strong correlation for the pairs $(V_0^{+}, V_{+1}^{-})$, and establishes a cyclic permutation of three populations of critical points $V^-$, $M$ and $V^+$. Fourth, as a consequence, the pairs $(V_0^{+},M_{2})$ also exhibit strong spatial correlations, contrary to the case of ${\rm BA}^\infty$. In these two latter permutations, it must be recalled that not all intensity maxima $M$ may transform into vortices, because of the difference in spatial densities (Table~\ref{tab:density-points}) of these two populations of critical points~\cite{Gateau_PRL_17}. Next, we constrained the period to be equal to $4$ by using the ${\rm BA}^{4}$ mask. In this case, the WMND$\left(\rm{Y}_0, \rm{Y}_{\pm 4}\right)$ are close to zero (transformation rate close to 1). As expected, this periodicity enhances the spatial correlation for the quadruplet $( V_{m-1}^-, M_{m} , V_{m+1}^+ , S^p_{m+2})$ and extends it to their $4^{\rm th}$ spiral harmonics. Furthermore, in Fig.~\ref{fig:multiplesp}, strong correlations are observed for the pairs $(V_0^-,M_1)$, $(M_0,V_{+1}^+)$ and $(S_0^p,V_{+1}^-)$. However, $V_0^{+}$ is still observed to bifurcate between $V_{+1}^{-}$ and $S_{+1}^{p}$ with the same likelihood, similarly to the ${\rm BA}^{\infty}$ case. Therefore, the cyclic permutation of four populations of critical points $S^p$, $V^-$, $M$ and $V^+$ is not clearly established for $\rm BA^4$ (in contrast to the permutation obtained for $\rm BA^3$). When considering the addition of $\pm 2$-charged spiral masks for ${\rm BA}^{4}$, a transposition (2-cycle permutation) between $V^+$ and $V^-$ is clearly obtained (with a transposition rate below 1). A strong spatial correlation, reinforced as compared to ${\rm BA}^\infty$, is also clearly observed between $M$ and $S^p$ by addition of $\rm{SP}_{\pm 2k}$.
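The \emph{Dirac-comb}-like spiral spectrum invoked for ${\rm BA}^{3}$ and ${\rm BA}^{4}$ can be checked on an idealized star mask. The sketch below (an illustrative model, not the exact experimental mask) decomposes a mask with $N$ equally spaced angular slits into spiral harmonics and exhibits the period-$N$ comb with near-equal low-order amplitudes:

```python
import numpy as np

def spiral_spectrum(n_branches, slit_width, n_theta=6144, n_max=12):
    """Spiral (angular Fourier) spectrum c_n of a star-like binary mask
    with `n_branches` equally spaced angular slits of width `slit_width`
    (radians): c_n = (1/2pi) * integral mask(theta) exp(-i n theta) dtheta.
    `n_theta` should be a multiple of `n_branches` so the sampled mask is
    exactly periodic.  For a D_N mask, c_n vanishes unless n % N == 0."""
    period = n_theta // n_branches            # samples per angular period
    slit_samples = int(round(slit_width * n_theta / (2 * np.pi)))
    j = np.arange(n_theta)
    mask = ((j % period) < slit_samples).astype(float)
    theta = 2 * np.pi * j / n_theta
    ns = np.arange(-n_max, n_max + 1)
    c = np.array([np.mean(mask * np.exp(-1j * n * theta)) for n in ns])
    return ns, c
```

For narrow slits the surviving harmonics $c_{kN}$ follow a slowly decaying sinc envelope, which is why the low-order comb amplitudes are nearly equal.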
In summary, the results displayed in Fig.~\ref{fig:multiplesp} reveal the fundamental topological transformations of critical points in a speckle with the addition of $\rm SP$ masks for a single-spiral-mode aperture (${\rm BA}^{\infty}$), and demonstrate the possible modification of topological transformations by the addition of BA masks with dihedral symmetry [See Supplement 1, Section 2.B for numerical confirmation of the experimental results and Section 2.C for details on transformations induced by $\rm BA$ masks with dihedral symmetries of orders higher than 4]. For the sake of simplicity, we chose here star-like amplitude masks with a dihedral symmetry which comprise spiral harmonics with equal amplitudes (at small enough spiral mode number) [See Supplement 1, Section 2.D for details on the influence of the width of the star branches]. \section{Wavefield control in the vicinity of critical points} On a local scale, the transformations of critical points shown in Fig.~\ref{fig:multiplesp} arise from the convolution in the imaging plane of the scattered field with the point spread function (PSF) associated with the combined amplitude and spiral phase masks. Controlling the transformation of a critical point ${\rm Y}_0$ of the complex wavefield $A_0$, by adding a $\rm SP_n$ mask $(n \in \mathbb{Z}^* )$ in a Fourier plane, requires that the PSF associated with the combination of the $\rm SP_n$ and $\rm BA$ masks has a significant amplitude in the coherence area surrounding the critical point, or in other words: the area where the randomness of the speckle pattern has a limited influence as compared to the control by the incident wavefield. As a definition for the coherence area, we use the one proposed by Freund~\cite{Freund_94}: $C_{area}= (\rho(V^+)+\rho(V^-))^{-1}/2$. This definition avoids issues related to the shape of the aperture~\cite{Freund_94} encountered when considering the area of the intensity autocorrelation peak~\cite{Ochoa_83}.
In our case, for all three $\rm BA$ masks, the coherence length was measured to be $C_{length} = \sqrt{C_{area}} \simeq \lambda/(2{\rm NA})$. Experimentally, the PSFs can be obtained (Fig.~\ref{fig:intercorr}a) by computing the intensity cross-correlations of the measured speckle pattern $I_0$ and the measured speckle patterns $I_n$ associated with $\rm SP_n$ masks ($n \in \llbracket 0;6 \rrbracket$) and for the three $\rm BA$ masks. The mean values of $I_n$ were subtracted before computing the cross-correlations. The intensity cross-correlations xcorr($I_0$,$I_n$) are identical to the PSF of the combined $\rm BA$ and $\rm SP_n$ masks. The centered spot of the autocorrelation xcorr($I_0$,$I_0$) illustrates the spatial extent of the coherence area, and has the same dimension for all three $\rm BA$ masks since they have the same radial aperture. For ${\rm BA}^{\infty}$, we observe that xcorr($I_0$,$I_n$) has a circular symmetry with the highest values distributed on a ring whose radius (marked with a green line) increases with $n$ (Fig.~\ref{fig:intercorr}b), as observed for simple Laguerre-Gaussian beams~\cite{Grier2003}. Interestingly, for $n>3$, we observe not only that the ring radius is larger than twice $C_{length}$, but also that its amplitude decreases to below $1/10$ of the auto-correlation peak value (Fig.~\ref{fig:intercorr}c). As a consequence, the transformation of critical points by applying $\rm SP_n$ masks is inefficient (or ``unlikely'') and dominated by the surrounding random field. For this reason, no spatial correlation between pairs of critical points $({\rm Y}_0,X_n)$ could be found for $n>3$. For $\rm BA^3$ and $\rm BA^4$, the cross-correlation patterns xcorr($I_0$,$I_n$) have dihedral symmetries $D_3$ and $D_4$ and a periodicity of $\rm N$ = 3 and 4, respectively.
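The mean-subtracted intensity cross-correlations can be computed efficiently in Fourier space; the sketch below assumes periodic boundary conditions, which is adequate when the correlation support is much smaller than the image (names are illustrative):

```python
import numpy as np

def xcorr(a, b):
    """Mean-subtracted cross-correlation of two equally shaped 2D
    intensity patterns, computed via FFT (circular boundary conditions),
    with the zero-shift term moved to the centre of the output array."""
    fa = np.fft.fft2(a - a.mean())
    fb = np.fft.fft2(b - b.mean())
    return np.fft.fftshift(np.fft.ifft2(fa * np.conj(fb)).real)
```

The autocorrelation xcorr($I_0$,$I_0$) then peaks at the centre of the array, and a rigid shift of the pattern displaces the cross-correlation peak by the opposite offset.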
In both cases, the radial distance of the strongest peak remains below $1.4\,C_{length}$, and its amplitude always remains above $1/3$ of the auto-correlation maximum value (Fig.~\ref{fig:intercorr}c). The addition of the $\rm SP_n$ mask thus allows controlling the field inside the coherence area surrounding critical points ${\rm Y}_0$, even for $n>3$. \begin{figure}[htb] \centering \fbox{\includegraphics[width=\linewidth]{Figure_406}} \caption{Cross-correlation (a) of the speckle patterns $I_0$ and $I_n$ with $n\in \llbracket 0;6 \rrbracket$. The cross-correlations illustrate the spatial distribution of the intensity in the point spread function associated with the combined amplitude and spiral phase masks. The colorscale was set from the autocorrelation xcorr($I_0$,$I_0$) and kept for the cross-correlations xcorr($I_0$,$I_n$). An $\rm N$-periodicity is observed for the aperture ${\rm BA}^{\rm N}$ with $\rm N$ = 3 or 4. Green circles mark the radial distance to the strongest peak. Radial distance (b) and normalized amplitude of the strongest peak (c) as a function of the charge of the spiral phase mask. The radial distance keeps increasing for the circular aperture ${\rm BA}^{\infty}$ while it remains close to or below $\lambda/(2{\rm NA})$ (the coherence length of the speckle) even for ${\rm{SP}_n}$ with $n\geqslant 3$ for the aperture ${\rm BA}^{\rm N}$ and $\rm N$ = 3 or 4. The amplitude decreases for ${\rm BA}^{\infty}$ while it remains above 0.3 for ${\rm BA}^{3}$ and ${\rm BA}^{4}$. } \label{fig:intercorr} \end{figure} \section{Conclusion} The critical points that naturally appear in a random wavefield can be transformed by the addition of a spiral phase mask in a Fourier plane. Here, we studied these transformations experimentally by imprinting spiral phase masks with a charge $n\in \llbracket -6;6 \rrbracket$ onto a laser beam impinging on a randomly scattering surface.
In addition, these phase masks were combined with star-like amplitude masks with dihedral symmetries $D_3$ and $D_4$ in order to better control critical point transformations. For a simple disk-shaped aperture carrying a single spiral mode $n=0$, we experimentally demonstrated the topological correlation existing between the critical points of the initial wavefield $A_0$, and the corresponding spiral transformed field $A_n$. A partial transformation of vortices $V_0^{-}$ into maxima $M_{+1}$ was observed as well as a transformation of maxima $M_{0}$ into vortices $V_{+1}^{+}$. Vortices $V_0^{+}$ were observed to either correlate with phase saddle points $S_{+1}^{p}$ or with vortices of opposite sign $V_{+1}^{-}$. For this statistical bifurcation, two transformation interpretations were suggested, calling for further analytical studies. No simple topological correlation was found between the critical points of the wavefields $A_0$ and $A_n$ for $|n|>3$. This result could be explained by the weak influence of spiral phase masks with a charge higher than $2$ in the coherence area surrounding the critical points. Furthermore, adding centered binary amplitude masks with dihedral symmetry $D_3$ or $D_4$ and \emph{Dirac-comb}-like spiral spectra (of period $3$ and $4$), we demonstrated that it is possible to deeply modify the topological correlation between critical points. The observed changes arise from the introduction of a periodicity in the transformation between the critical points. We could thereby extend the correlation to spiral phase masks with charges higher than 3, and reinforce some spatial correlations intrinsically present with a circular-aperture symmetry. For the amplitude mask with a $D_3$-symmetry, a cyclic permutation between negatively charged vortices $V^-$, maxima $M$, and positively charged vortices $V^+$, is observed.
For the amplitude mask with a $D_4$-symmetry, phase saddle points participate as complementary points to complete the $4$-periodic cycle. Considering the addition of 2-charged spiral masks, transpositions between $V^-$ and $V^+$, and between $M$ and $S^p$, were also revealed for $D_4$-symmetry. The enhancement of the spatial correlation between the critical points of the wavefields $A_0$ and $A_n$ (compared to ${\rm BA}^{\infty}$) could be explained by the strong influence of the spiral phase mask in the coherence area surrounding each critical point, when the binary amplitude masks are added. Here, cyclic permutations were controlled using binary amplitude masks enforcing periodicity. Interestingly, our study may extend to other amplitude masks with a dihedral symmetry, such as polygonal~\cite{Xie_OL_12} and triangular apertures~\cite{Chavez-Cerda_PRL_10}, whose interactions with vortex beams were studied in free space. For N-gons, the $\rm N^{th}$ spiral harmonics have a much lower amplitude than the fundamental spiral mode $(n=0)$. As a result, the spatial correlations between critical points for $|n|>3$ are weaker than for ${\rm BA}^{\rm N}$, and vanish with the increasing charge of the $\rm SP$ mask [See Supplement 1, Section 2.E]. In a nutshell, we showed here that it is possible to manipulate the topological correlation between critical points and to control the transformation of critical points in random wavefields by combining amplitude masks and spiral phase transforms. Topological manipulation of critical points in random wavefields is of high importance to understand and control light propagation through scattering and complex media. The statistical study of correlations between permuted critical points provides a new tool to analyse seemingly information-less and random intensity patterns and thus to transmit information through complex media.
\section*{Funding Information} This work was partially funded by the French Agence Nationale pour la Recherche (NEOCASTIP ANR-CE09-0015-01, SpeckleSTED ANR-18-CE42-0008-01).
Second, simulation outputs must be defined in such a way that results are easily comparable. In addition to snapshots or videos of the evolution of the microstructure itself, the evolution of overall metrics such as the total energy of the system or the volume fraction of each phase should be quantified. Finally, the problems should test a simple, targeted aspect of either the numerical implementation or the physics. For example, simple physics could be used while complicated domain or boundary conditions are tested, or coupled physics could be tested on a simple domain. Numerical aspects that must be challenged include solver algorithms, mesh geometry, boundary conditions, and time integration. Benchmark problems could be especially useful when examining multiphysics coupling, including such behaviors as diffusion, linear elasticity, fluid flow, anisotropic interfacial energy, and polarization. In this paper, we present a first set of community-driven benchmark problems for numerical implementations of phase field models and describe the efforts of NIST and CHiMaD to date. The first problem in this set focuses on the diffusion of a solute and phase separation; the second adds a coupled non-conserved order parameter. We discuss our choice of model formulations, parameterizations, and initial conditions so that these considerations may be kept in mind while developing additional benchmark problems. Furthermore, we demonstrate the utility of benchmark problems by comparing simulation results obtained using two different time adaptivity algorithms. We also briefly review lessons learned from the first CHiMaD ``Hackathon,'' an event in which different phase field codes within the community were challenged against model problems. Finally, we discuss the development of additional formulations for the future, and encourage community involvement in the entire process of problem design, development, and reporting of results. 
\section{Model formulations \label{sec:Model-formulations}} In phase field models, field variables are evolved using dynamics derived from generalized forces. The field variable is often termed the ``order parameter,'' and we adopt that terminology here. Most commonly, the time evolution is governed by dissipative dynamics, in which the total free energy of the system decreases monotonically with time (i.e., entropy increases at fixed temperature). The order parameter may be locally conserved or non-conserved depending on what physical quantity or property the order parameter represents, and its dynamics are defined by the response of the system to a generalized force defined by the variation in the free energy. Kinetic coefficients, such as mobility or diffusivity, control how the order parameter responds to the force. An example of a conserved order parameter is the concentration of solute in a matrix, while ferroelectric polarization is an example of a non-conserved order parameter. The first problem in this benchmark set models spinodal decomposition via conserved dynamics, while the second models Ostwald ripening via coupled conserved/non-conserved dynamics. In this way, we focus on a single, fundamental aspect of physics (i.e., diffusion and phase separation) in the first problem, and then increase the model complexity in the second problem. We discuss the motivation for each model formulation and the choice of initial conditions, boundary conditions, and computational domains. The problems were formulated to be effectively two-dimensional so that the essential physical behavior is modeled without making the test problems unreasonably large or computationally demanding. \subsection{Spinodal decomposition \label{sub:Spinodal-decomposition}} Spinodal decomposition is one of the oldest problems in the phase field canon, and its formulation in terms of continuum fields goes back to the seminal works by Cahn and Hilliard \cite{cahn1961spinodal}. 
The Cahn-Hilliard equation thus predates the name ``phase field'' in this context, but the term has subsequently been adopted by the community. While spinodal decomposition may be one of the simplest problems to model, it is highly relevant, as a large number of phase field models include the diffusion of a solute within a matrix. Furthermore, precipitation and growth may also be modeled with the same formulation if the appropriate initial conditions are chosen. For the benchmark problem, we select a simple formulation that is numerically tractable so that results may be obtained quickly and interpreted easily, testing the essential physics while minimizing model complexity and the chance to introduce coding errors. \subsubsection{Free energy and dynamics}\label{sssec:spinodal_free_energy} For this benchmark problem of spinodal decomposition in a binary system, a single order parameter, $c$, is evolved, which describes the atomic fraction of solute. The free energy of the system, $F$, is expressed as \cite{cahn1961spinodal} \begin{equation} F=\int_{V}\left(f_{chem}\left(c\right)+\frac{\kappa}{2}|\nabla c|^{2}\right)dV,\label{eq:F_spinodal} \end{equation} where $f_{chem}$ is the chemical free energy density and $\kappa$ is the gradient energy coefficient. For this problem, we choose $f_{chem}$ to have a simple polynomial form, \begin{equation} f_{chem}\left(c\right)=\varrho_{s}\left(c-c_{\alpha}\right)^{2}\left(c_{\beta}-c\right)^{2},\label{eq:fchem_spinodal} \end{equation} such that $f_{chem}$ is a symmetric double-well with minima at $c_{\alpha}$ and $c_{\beta}$, and $\varrho_{s}$ controls the height of the double-well barrier. Because $f_{chem}$ is symmetric (Fig.\ \ref{fig:energy_p1}), $c_{\alpha}$ and $c_{\beta}$ correspond exactly with the equilibrium atomic fractions of the $\alpha$ and $\beta$ phases. 
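The double-well form of Eq.~\ref{eq:fchem_spinodal} is simple enough to verify numerically. The following Python sketch (illustrative only, not part of the benchmark specification) uses the parameter values chosen for this problem ($\varrho_{s}=5$, $c_{\alpha}=0.3$, $c_{\beta}=0.7$, $\kappa=2$) to confirm that the minima sit exactly at the equilibrium atomic fractions and that the barrier height at $c=0.5$ is $\varrho_{s}\left[(c_{\beta}-c_{\alpha})/2\right]^{4}=0.008$:

```python
# Sanity check of the spinodal-decomposition double-well free energy density,
# f_chem(c) = rho_s * (c - c_alpha)^2 * (c_beta - c)^2,
# with the parameter values specified in this benchmark.
rho_s, c_alpha, c_beta, kappa = 5.0, 0.3, 0.7, 2.0

def f_chem(c):
    """Symmetric double-well chemical free energy density."""
    return rho_s * (c - c_alpha) ** 2 * (c_beta - c) ** 2

# The minima lie exactly at the equilibrium atomic fractions ...
assert f_chem(c_alpha) == 0.0 and f_chem(c_beta) == 0.0
# ... and the barrier height at the symmetric point c = 0.5 is
# rho_s * ((c_beta - c_alpha)/2)**4 = 0.008.
assert abs(f_chem(0.5) - 0.008) < 1e-14

# Diffuse interface width quoted in the text: l = 7.071 * sqrt(kappa/rho_s).
l = 7.071 * (kappa / rho_s) ** 0.5
print(round(l, 2))  # ≈ 4.47
```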
Because $c$ must obey a continuity equation -- the total amount of $c$ is conserved -- the evolution of $c$ is given by the Cahn-Hilliard equation \cite{cahn1961spinodal}, which is derived from an Onsager force-flux relationship \cite{balluffi2005kinetics}: \begin{equation} \frac{\partial c}{\partial t}=\nabla\cdot\Bigg\{M\nabla\left(\frac{\partial f_{chem}}{\partial c}-\kappa\nabla^{2}c\right)\Bigg\} \label{eq_fullCH_p1} \end{equation} where $M$ is the mobility of the solute. For simplicity, both the mobility and the interfacial energy are isotropic. We choose $c_{\alpha}=0.3$, $c_{\beta}=0.7$, $\varrho_{s}=5$, $M=5$, and $\kappa=2$. Because the interfacial energy, diffuse interface width, and free energy parameterization are coupled, we obtain a diffuse interface width of $l=7.071 \sqrt{\kappa/\varrho_s}=4.47$ units over which $c$ varies as $0.348<c<0.652$, and an interfacial energy $\sigma=0.01508\sqrt{\kappa \varrho_s}$ \cite{cahn1958free}. \begin{center} \begin{figure} \begin{centering} \subfloat[\label{fig:energy_p1}]{\begin{centering} \includegraphics[scale=0.95]{fig1a_2col} \par\end{centering} } \subfloat[\label{fig:energy_p2}]{\begin{centering} \includegraphics[scale=0.65]{fig1b_2col} \par\end{centering} } \par\end{centering} \caption{The free energy density surfaces for a) the spinodal decomposition problem, and b) the Ostwald ripening problem for $(c,\ \eta)$ (not shown: $(\eta_i, \eta_j)$ surface). The free energy density surfaces are defined for all real values of $c$ (both problems) and $\eta_i$ (Ostwald ripening problem) and not only over the intervals of interest, necessitating care in choosing initial conditions (Section \ref{sub:choices}). 
\label{fig:free energy surfaces}} \end{figure} \par\end{center} \subsection{Ostwald ripening \label{sub:Ostwald-ripening}} The second benchmark problem examines Ostwald ripening in a system with an ordered phase and a disordered phase; an example of this phenomenon in a real materials system is the growth and coarsening of $\gamma'$ precipitates in a $\gamma$ matrix in nickel-based superalloys \cite{pollock2006nickel,zhu2002linking}. This system is somewhat more complicated than that presented in Section \ref{sub:Spinodal-decomposition}, in that the microstructural evolution is driven by coupled conserved/non-conserved dynamics. However, the formulation is a simple extension of that in the previous section (note that we neglect elastic energy, an important factor in $\gamma$/$\gamma'$ evolution). \subsubsection{Free energy and dynamics}\label{sssec:ostwald_free_energy} The atomic fraction of solute is again specified by the conserved variable $c$, while the phase is indicated by a structural order parameter, $\eta$. The structural order parameter is non-conserved and is a phenomenological phase descriptor, such that the $\alpha$ phase is indicated by $\eta=0$, while the $\beta$ phase is indicated by $\eta=1$. If multiple energetically equivalent orientation variants exist (for example, due to crystallographic symmetry considerations or ordered and disordered phases), the model may include $p$ structural order parameters, $\eta_{i}$ ($i=1,\ldots,p$), one for each orientation variant. We include a nontrivial number of order parameters by setting $p=4$, a value commonly used in superalloy models; this will stress the numerical solver while not making the problem intractable. 
In this benchmark problem, the free energy of the system is based on the formulation presented in Ref.\ \cite{zhu2004three} and is expressed as \begin{equation} F=\int_{V}\left(f_{chem}\left(c,\eta_{1},...\eta_{p}\right)+\frac{\kappa_{c}}{2}|\nabla c|^{2}+\sum_{i=1}^{p}\frac{\kappa_{\eta}}{2}|\nabla\eta_{i}|^{2}\right)dV\label{eq:p2_F} \end{equation} where $\kappa_{c}$ and $\kappa_{\eta}$ are the gradient energy coefficients for $c$ and $\eta_{i}$, respectively. While the model in Ref.\ \cite{zhu2004three} follows the Kim-Kim-Suzuki (KKS) formulation for interfacial energy \cite{kim1999phase}, we use the Wheeler-Boettinger-McFadden (WBM) \cite{wheeler1992phase} formulation for simplicity. In the KKS model, the interface is treated as an equilibrium mixture of two phases with fixed compositions such that an arbitrary diffuse interface width may be specified for a given interfacial energy. In the WBM model, interfacial energy and interfacial width are linked with the concentration, such that very high resolution across the interface may be required to incorporate accurate interfacial energies. The formulation for $f_{chem}$ in Ref.\ \cite{zhu2004three} is adapted for our benchmark problem as \begin{equation} f_{chem}\left(c,\eta_{1},...\eta_{p}\right)=f^{\alpha}\left(c\right)\left[1-h\left(\eta_{1},...\eta_{p}\right)\right]+f^{\beta}\left(c\right)h\left(\eta_{1},...\eta_{p}\right)+wg\left(\eta_{1},...\eta_{p}\right),\label{eq:p2_fchem} \end{equation} where $f^{\alpha}$ and $f^{\beta}$ are the chemical free energy densities of the $\alpha$ and $\beta$ phases, respectively, $h\left(\eta_{1},...\eta_{p}\right)$ is an interpolation function, and $g\left(\eta_{1},...\eta_{p}\right)$ is a double-well function. The function $h$ increases monotonically between $h(0)=0$ and $h(1)=1$, while the function $g$ has minima at $g(0)=0$ and $g(1)=0$. The height of the double well barrier is controlled by $w$. 
We choose the simple formulation \begin{equation} f^{\alpha}\left(c\right)=\varrho^{2}\left(c-c_{\alpha}\right)^{2}\label{eq:f_alpha} \end{equation} \begin{equation} f^{\beta}\left(c\right)=\varrho^{2}\left(c_{\beta}-c\right)^{2}\label{eq:f_beta} \end{equation} \begin{equation} h\left(\eta_{1},...\eta_{p}\right)=\sum_{i=1}^{p}\eta_{i}^{3}\left(6\eta_{i}^{2}-15\eta_{i}+10\right)\label{eq:h} \end{equation} \begin{equation} g\left(\eta_{1},...\eta_{p}\right)=\sum_{i=1}^{p}\left[\eta_{i}^{2}\left(1-\eta_{i}\right)^{2}\right]+\alpha\sum_{i=1}^{p}\sum_{j\neq i}^{p}\eta_{i}^{2}\eta_{j}^{2},\label{eq:g} \end{equation} where $f^{\alpha}$ and $f^{\beta}$ have minima at $c_{\alpha}$ and $c_{\beta}$, $\varrho^{2}$ controls the curvature of the free energies, and $\alpha$ controls the energy penalty incurred by the overlap of multiple non-zero $\eta_{i}$ values at the same point. Because the energy values of the minima are the same (Fig.\ \ref{fig:energy_p2}), $c_{\alpha}$ and $c_{\beta}$ correspond exactly with the equilibrium atomic fractions of the $\alpha$ and $\beta$ phases. The time evolution of $c$ is again governed by the Cahn-Hilliard equation \cite{cahn1961spinodal,elliott1989second}, \begin{equation} \frac{\partial c}{\partial t}=\nabla\cdot\Bigg\{M\nabla\left(\frac{\partial f_{chem}}{\partial c}-\kappa_{c}\nabla^{2}c\right)\Bigg\}. \label{eq:full_CH_p2} \end{equation} The Allen-Cahn equation \cite{allen1979microscopic}, which is based on gradient flow, governs the evolution of $\eta_{i}$, \begin{equation} \frac{\partial\eta_{i}}{\partial t}=-L\left[\frac{\delta F}{\delta \eta_{i}}\right]=-L\left(\frac{\partial f_{chem}}{\partial\eta_{i}}-\kappa_{\eta}\nabla^{2}\eta_{i}\right),\label{eq:p2_AC} \end{equation} where $L$ is the kinetic coefficient of $\eta_{i}$. We choose $M=5$ and $L=5$ so that the transformation is diffusion-controlled, and as in Section \ref{sub:Spinodal-decomposition}, the kinetic coefficients and gradient energy coefficients are isotropic. 
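The interpolation and multi-well functions above are easily checked numerically. The Python sketch below (illustrative only; it assumes the benchmark values $p=4$ and $\alpha=5$) confirms that $h$ interpolates monotonically between the bulk values and that $g$ vanishes both in the bulk $\alpha$ phase and in a single-variant $\beta$ phase:

```python
# Illustrative check of the interpolation function h and multi-well function g
# used in the Ostwald-ripening benchmark (p = 4 order parameters, alpha = 5).
p, alpha = 4, 5.0

def h(etas):
    """h = sum_i eta_i^3 (6 eta_i^2 - 15 eta_i + 10)."""
    return sum(e ** 3 * (6 * e ** 2 - 15 * e + 10) for e in etas)

def g(etas):
    """Multi-well with an energy penalty for overlapping variants."""
    well = sum(e ** 2 * (1 - e) ** 2 for e in etas)
    cross = sum(etas[i] ** 2 * etas[j] ** 2
                for i in range(p) for j in range(p) if j != i)
    return well + alpha * cross

# Bulk alpha phase (all eta_i = 0) and a single beta variant (one eta_i = 1):
assert h([0, 0, 0, 0]) == 0 and h([1.0, 0, 0, 0]) == 1.0
assert g([0, 0, 0, 0]) == 0 and g([1.0, 0, 0, 0]) == 0
# Within a single variant, h is monotonic between the wells:
vals = [h([x / 100, 0, 0, 0]) for x in range(101)]
assert all(a <= b for a, b in zip(vals, vals[1:]))
```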
In addition, we again choose $c_{\alpha}=0.3$ and $c_{\beta}=0.7$, and further specify $\kappa_{c}=\kappa_{\eta}=3$, $\varrho=\sqrt{2}$, $w=1$, and $\alpha=5$. For these values, the diffuse interface between $0.1<\eta<0.9$ has a width of 4.2 units. \subsection{Reasons for choices of models and parameters \label{sub:choices}} The two benchmark problems presented here are simplified formulations designed to focus on fundamental aspects common to almost every phase field model: the diffusion of solute (Section \ref{sub:Spinodal-decomposition}) and the coupling of composition with a structural order parameter (Section \ref{sub:Ostwald-ripening}). All of the model parameters chosen here are within a few orders of magnitude of unity, improving numerical performance. In addition, the structural order parameter in the second model is phenomenological and varies within the interval of {[}0, 1{]}. This interval is chosen because multiphysics coupling that relies on the phase of the material is often incorporated by way of a structural order parameter (e.g., misfit strain of a precipitate phase with respect to a matrix phase). Several trade-offs were considered between the free energy formulations and the initial conditions. The free energy could be chosen to be realistic, for example by using the CALPHAD method, or to be more simplistic while still representing the main physics and being numerically tractable. The CALPHAD method is a semi-empirical approach to formulating free energies of mixing using known thermodynamic data and equilibrium phase diagrams \cite{saunders1998calphad, lukas2007computational}. While CALPHAD free energies are extremely useful, their functional form generally contains natural logarithms, which pose several numerical and mathematical challenges for incorporation into phase field models. Therefore, simple polynomial free energy density formulations are chosen because they are numerically tractable and straightforward to implement. 
Ideally, the energy formulation should be robust such that the system will tend to the equilibrium values of the phases no matter the initial condition. This behavior may be ensured by fixing the global minimum of the free energy density within the interval of interest. However, many formulations do not exhibit these global minima, including the one in Section \ref{sub:Ostwald-ripening} (Fig.\ \ref{fig:free energy surfaces}b). In addition, the characteristics of the free energy density surface are sensitive to the parameterization of the model. For example, the local minima present at $\eta=0$, $c=0.3$ and $\eta=1,$ $c=0.7$ become shallower as $w$ decreases. For certain values of $w$ and $\alpha$ (e.g., $w=0.1$ and $\alpha=1$), the lowest energy occurs when all of the structural order parameters assume a value of approximately 0.9 in the $\beta$ phase. This behavior is due to the $\eta_{i}^{2}\eta_{j}^{2}$ term in $g$. Finally, transient solute depletion in the $\alpha$ phase, which may cause $c$ to decrease below 0, may occur during the first several time steps of the simulation as the system quickly relaxes from its initial conditions. Furthermore, Gibbs-Thomson-induced composition shift of the $\beta$ phase may result in a composition greater than 1. Both behaviors are non-physical if $c$ is the atomic fraction of solute, but can occur within the formulations in this paper because the free energy function is defined even for non-physical solute concentrations. To avoid these issues, the compositions of the $\alpha$ and $\beta$ phases are chosen as intermediate values within the atomic fraction interval, and the initial conditions presented in Sec. \ref{sub:ICs} are formulated such that the system will not exit the interval of $0\leq c \leq 1$ and $0\leq \eta \leq 1$. 
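The parameter sensitivity described above can be confirmed directly. The Python sketch below (illustrative only; it reuses the Ostwald ripening functional forms with $\varrho=\sqrt{2}$, $c_{\alpha}=0.3$, $c_{\beta}=0.7$) evaluates $f_{chem}$ at $c=c_{\beta}$, comparing a single variant at $\eta=1$ against the pathological state with all four order parameters at 0.9: for $w=0.1$ and $\alpha=1$ the overlapping state is lower in energy, while the benchmark values $w=1$ and $\alpha=5$ restore the intended ordering:

```python
# Checks the parameter sensitivity of the Ostwald-ripening free energy:
# for small w and alpha, a state with ALL variants near eta = 0.9 undercuts
# the intended single-variant beta state. Functional forms follow the
# Ostwald-ripening formulation in the text.
rho2, c_a, c_b, p = 2.0, 0.3, 0.7, 4   # rho^2 = 2, i.e. rho = sqrt(2)

def f_chem(c, etas, w, alpha):
    f_alpha = rho2 * (c - c_a) ** 2
    f_beta = rho2 * (c_b - c) ** 2
    h = sum(e ** 3 * (6 * e ** 2 - 15 * e + 10) for e in etas)
    g = (sum(e ** 2 * (1 - e) ** 2 for e in etas)
         + alpha * sum(etas[i] ** 2 * etas[j] ** 2
                       for i in range(p) for j in range(p) if j != i))
    return f_alpha * (1 - h) + f_beta * h + w * g

single = [1.0, 0.0, 0.0, 0.0]   # intended beta state: one variant fully on
overlap = [0.9] * 4             # pathological state: all variants near 0.9

# Weak penalty (w = 0.1, alpha = 1): the overlapping state is LOWER in energy.
assert f_chem(c_b, overlap, w=0.1, alpha=1.0) < f_chem(c_b, single, w=0.1, alpha=1.0)
# Benchmark values (w = 1, alpha = 5): the single-variant state wins.
assert f_chem(c_b, overlap, w=1.0, alpha=5.0) > f_chem(c_b, single, w=1.0, alpha=5.0)
```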
\subsection{Initial conditions, boundary conditions, and domain geometries \label{sub:ICs}} Several important factors were considered in determining the initial conditions and computational domains of the benchmark problems. First, the initial conditions for spinodal decomposition and precipitation simulations are typically created with a pseudorandom number generator. However, the initial conditions must be repeatable from implementation to implementation in a benchmark problem, precluding the use of pseudorandom number generation. Therefore, we choose trigonometric functions to provide smoothly varying, relatively disordered fields that are implementation-independent. Furthermore, the average composition and the amplitude and width of the fluctuations must be chosen such that phase separation will occur, as opposed to the formation of a uniformly under- or supersaturated $\alpha$ phase. Finally, the computational domain sizes and shapes are chosen to stress the software implementation, because a wide variety of numerical methods are currently in use. The domain sizes and interface resolution requirements must be large enough that runtime should be improved by parallel computing and mesh and time adaptivity, yet not so large as to require significant resources on a high-performance computing cluster. We also anticipate that the use of non-rectilinear domains will become commonplace as new applications of the phase field method are investigated, such as nano-fabricated structures and cracking. Several phase field investigations (e.g., Refs. \cite{funkhouser2014dynamics, welland2015miscibility}) have already been performed with spherical domains. Several boundary conditions, initial conditions, and computational domain geometries are used to challenge different aspects of the numerical solver implementation. 
For both benchmark problems, we test four combinations that are increasingly difficult to solve: two with square computational domains with side lengths of 200 units, one with a computational domain in the shape of a ``T'', with a total height of 120 units, a total width of 100 units, and horizontal and vertical section widths of 20 units (Fig.\ \ref{fig:p1_domains_IC}), and one in which the computational domain is the surface of a sphere with a radius of $r=100$ units. While most codes readily handle rectilinear domains, a spherical domain may pose problems, such as having the solution restricted to a two-dimensional curved surface. The coordinate systems and origins are given in Fig.\ \ref{fig:p1_domains_IC}. Periodic boundary conditions are applied to one square domain, while no-flux boundaries are applied to the other square domain and the ``T''-shaped domain. Periodic boundary conditions are commonly used with rectangular or rectangular prism domains to simulate an infinite material, while no-flux boundary conditions may be used to simulate an isolated piece of material or a mirror plane. As the computational domain is compact for the spherical surface, no boundary conditions are specified for it. Note that the same initial conditions are used for the square computational domains with no-flux and periodic boundary conditions (Sections \ref{sub:spinodal_ICs} and \ref{sub:Ostwald_ICs}), such that when periodic boundary conditions are applied, there is a discontinuity in the initial condition at the domain boundaries. \subsubsection{Spinodal decomposition \label{sub:spinodal_ICs}} The initial conditions for the first benchmark problem are chosen such that the average value of $c$ over the computational domain is approximately $0.5$. 
The initial value of $c$ for the square and ``T'' computational domains is specified by \begin{eqnarray} c\left(x,y\right) & = & c_{0}+\epsilon\left[\cos\left(0.105x\right)\cos\left(0.11y\right)+\left[\cos\left(0.13x\right)\cos\left(0.087y\right)\right]^{2}\right.\nonumber \\ & & \left.+\cos\left(0.025x-0.15y\right)\cos\left(0.07x-0.02y\right)\right],\label{eq:p1_c_init} \end{eqnarray} where $c_{0}=0.5$ and $\epsilon=0.01$. In addition, the initial value of $c$ for the spherical computational domain is specified by \begin{eqnarray} c\left(\theta,\phi\right) & = & c_{0}+\epsilon_{sphere}\left[\cos\left(8\theta\right)\cos\left(15\phi\right)+\left(\cos\left(12\theta\right)\cos\left(10\phi\right)\right)^{2}\right.\nonumber \\ & & +\left.\cos\left(2.5\theta-1.5\phi\right)\cos\left(7\theta-2\phi\right)\right],\label{eq:p1_c_init_sp} \end{eqnarray} where $\epsilon_{sphere}=0.05$, and $\theta$ and $\phi$ are the polar and azimuthal angles, respectively, in a spherical coordinate system. These angles are translated into a Cartesian system as $\theta=\cos^{-1}\left(z/r\right)$ and $\phi=\tan^{-1}\left(y/x\right)$, with the branch of the inverse tangent chosen according to the quadrant of $(x,y)$. The initial conditions specified by Eqs.\ \ref{eq:p1_c_init} and \ref{eq:p1_c_init_sp} are shown in Fig.\ \ref{fig:p1_domains_IC}. \begin{figure} \begin{centering} \subfloat[]{\begin{centering} \includegraphics[scale=0.15]{fig2a_2col} \par\end{centering} } \subfloat[]{\begin{centering} \includegraphics[scale=0.15]{fig2b_2col} \par\end{centering} } \subfloat[]{\begin{centering} \includegraphics[scale=0.15]{fig2c_2col} \par\end{centering} } \par\end{centering} \caption{The computational domains and initial conditions for the spinodal decomposition benchmark problem. The origin for the coordinate system of the sphere is at its center. 
\label{fig:p1_domains_IC}} \end{figure} \subsubsection{Ostwald ripening \label{sub:Ostwald_ICs}} The initial conditions for the Ostwald ripening problem are qualitatively similar to those given for the spinodal decomposition problem, but the magnitude of the fluctuations around $c=0.5$ is greater, and fluctuations between [0,1] are applied to the structural order parameter fields. The initial condition for $c$ is again given by Eq.\ \ref{eq:p1_c_init} for the square and ``T'' domains, with $c_{0}=0.5$ and $\epsilon=0.05$, while it is given by Eq.\ \ref{eq:p1_c_init_sp} for the spherical domain, with $c_{0}=0.5$ and $\epsilon_{sphere}=0.05$. The initial condition for $\eta_{i}$ in the square and ``T'' domains is given as \begin{eqnarray} \eta_{i}\left(x,y\right) & = & \epsilon_{\eta}\left\{ \cos\left(\left(0.01i\right)x-4\right)\cos\left(\left(0.007+0.01i\right)y\right)\right.\nonumber \\ & & +\cos\left(\left(0.11+0.01i\right)x\right)\cos\left(\left(0.11+0.01i\right)y\right)\nonumber \\ & & +\psi\left[\cos\left(\left(0.046+0.001i\right)x+\left(0.0405+0.001i\right)y\right)\right.\nonumber \\ & & \left.\left.\cos\left(\left(0.031+0.001i\right)x-\left(0.004+0.001i\right)y\right)\right]^{2}\right\} ^{2},\label{eq:p2_n_init} \end{eqnarray} where $\epsilon_{\eta}=0.1$ and $\psi=1.5$, while for the spherical domain, it is given as \begin{eqnarray} \eta_{i}\left(\theta,\phi\right) & = & \epsilon_{\eta}^{sphere}\left\{ \cos\left(i\theta-4\right)\cos\left(\left(0.7+i\right)\phi\right)\right.\nonumber \\ & & +\cos\left(\left(11+i\right)\theta\right)\cos\left(\left(11+i\right)\phi\right)\nonumber \\ & & +\psi\left[\cos\left(\left(4.6+0.1i\right)\theta+\left(4.05+0.1i\right)\phi\right)\right.\nonumber \\ & & \left.\left.\cos\left(\left(3.1+0.1i\right)\theta-\left(0.4+0.1i\right)\phi\right)\right]^{2}\right\} ^{2}\label{eq:p2_nsphere} \end{eqnarray} where $\epsilon_{\eta}^{sphere}=0.1$, and the index $i=1,\ldots,4$ enumerates the order parameters corresponding to the different phase 
variants. The initial conditions for the Ostwald ripening simulations are shown in Fig.~\ref{fig:p2_domains_ICs} for $c$, $\eta_{1}$, and $\eta_{2}$. \begin{figure} \begin{centering} \subfloat[]{\begin{centering} \includegraphics[scale=0.15]{fig3a_2col} \par\end{centering} } \subfloat[]{\begin{centering} \includegraphics[scale=0.15]{fig3b_2col} \par\end{centering} } \subfloat[]{\begin{centering} \includegraphics[scale=0.15]{fig3c_2col} \par\end{centering} } \par\end{centering} \begin{centering} \subfloat[]{\begin{centering} \includegraphics[scale=0.2]{fig3d_2col} \par\end{centering} } \subfloat[]{\begin{centering} \includegraphics[scale=0.2]{fig3e_2col} \par\end{centering} } \subfloat[]{\begin{centering} \includegraphics[scale=0.2]{fig3f_2col} \par\end{centering} } \par\end{centering} \begin{centering} \subfloat[]{\begin{centering} \includegraphics[scale=0.2]{fig3g_2col} \par\end{centering} } \subfloat[]{\begin{centering} \includegraphics[scale=0.2]{fig3h_2col} \par\end{centering} } \subfloat[]{\begin{centering} \includegraphics[scale=0.2]{fig3i_2col} \par\end{centering} } \par\end{centering} \caption{The computational domains and initial conditions for the Ostwald ripening benchmark problem. Top row: Initial conditions for the atomic fraction. Middle row: Initial conditions for $\eta_{1}$. Bottom row: Initial conditions for $\eta_{2}$. Not shown: initial conditions for $\eta_{3}$ and $\eta_{4}$. \label{fig:p2_domains_ICs}} \end{figure} \section{Numerical methods} To provide example solutions to the benchmark problems, the MOOSE computational framework is used. MOOSE \cite{gaston2014continuous,gaston2015physics} is an open-source finite element framework and is the basis for several other phase field applications, including Marmot \cite{tonks2012object} and Hyrax \cite{jokisaari2015general,jokisaari2016nucleation}. 
To avoid computationally expensive fourth-order derivative operators, the Cahn-Hilliard equation is split into the two second-order equations \cite{elliott1989second,tonks2012object}, given by \begin{equation} \frac{\partial c}{\partial t}=\nabla\cdot\left(M\nabla\mu\right)\label{eq:CH_mu} \end{equation} and \begin{equation} \mu=\frac{\partial f_{chem}}{\partial c}-\kappa\nabla^{2}c,\label{eq:mu} \end{equation} where $f_{chem}$ and $\kappa$ are given in Section \ref{sssec:spinodal_free_energy} and Section \ref{sssec:ostwald_free_energy} for the spinodal decomposition and Ostwald ripening problems, respectively. The square computational domains are meshed with square, four-node quadrilateral elements by the mesh generator within MOOSE, while the ``T''-shaped and spherical domains are meshed with triangular three-node elements using CUBIT \cite{CUBIT}. Linear Lagrange shape functions are employed for $c$, $\eta_{i}$, and $\mu$. For computational efficiency, the system of nonlinear equations is solved with the full Newton method for the first problem, and the preconditioned Jacobian-Free Newton-Krylov (PJFNK) method for the second problem. The second backward differentiation formula (BDF2) \cite{iserles2009first} time integration scheme is applied in all cases. The simulations are run with a nonlinear relative tolerance of $1\times10^{-8}$ and a nonlinear absolute tolerance of $1\times10^{-11}$. To improve computational efficiency, adaptive meshing and adaptive time stepping are used. Each simulation is performed twice, once with the aggressive ``SolutionTimeAdaptive'' time stepper designed to finish the simulation as rapidly as possible \cite{tonks2012object}, and once with the more conservative ``IterationAdaptive'' time stepper, which attempts to maintain a constant number of nonlinear iterations and a fixed ratio of nonlinear to linear iterations. We choose a target of five nonlinear iterations, plus or minus one, and a linear/nonlinear iteration ratio of 100. 
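The split form lends itself to a compact illustration. The following Python sketch is not the implicit finite element scheme described above; it is a deliberately crude explicit finite-difference realization of Eqs.~\ref{eq:CH_mu} and \ref{eq:mu} on the periodic square variant of the spinodal decomposition benchmark (the fixed time step and step count are arbitrary illustrative choices). Even this simple scheme reproduces the two properties used as comparison metrics below: the mean composition is conserved to round-off, and the total free energy decreases.

```python
import numpy as np

# Explicit finite-difference sketch of the SPLIT Cahn-Hilliard form
# (dc/dt = div(M grad mu), mu = df/dc - kappa lap c) on the periodic
# 200 x 200 spinodal benchmark domain. Illustrative only: the time step
# is fixed and tiny; production codes use implicit, adaptive schemes.
M, kappa, rho_s, c_a, c_b = 5.0, 2.0, 5.0, 0.3, 0.7
N, dx, dt, steps = 200, 1.0, 1e-3, 400

x = np.arange(N) * dx
X, Y = np.meshgrid(x, x, indexing="ij")

# Benchmark initial condition (deterministic trigonometric "noise").
eps = 0.01
c = (0.5 + eps * (np.cos(0.105 * X) * np.cos(0.11 * Y)
                  + (np.cos(0.13 * X) * np.cos(0.087 * Y)) ** 2
                  + np.cos(0.025 * X - 0.15 * Y) * np.cos(0.07 * X - 0.02 * Y)))

def lap(u):
    """Second-order 5-point Laplacian with periodic wrap-around."""
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0)
            + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / dx ** 2

def dfdc(c):
    """df_chem/dc for the double-well f = rho_s (c-c_a)^2 (c_b-c)^2."""
    return 2 * rho_s * (c - c_a) * (c_b - c) * ((c_b - c) - (c - c_a))

def total_F(c):
    """Discrete total free energy: bulk term plus gradient energy."""
    gx = (np.roll(c, -1, 0) - np.roll(c, 1, 0)) / (2 * dx)
    gy = (np.roll(c, -1, 1) - np.roll(c, 1, 1)) / (2 * dx)
    bulk = rho_s * (c - c_a) ** 2 * (c_b - c) ** 2
    return np.sum(bulk + 0.5 * kappa * (gx ** 2 + gy ** 2)) * dx ** 2

mean0, F0 = c.mean(), total_F(c)
for _ in range(steps):
    mu = dfdc(c) - kappa * lap(c)        # chemical potential (split equation)
    c = c + dt * M * lap(mu)             # conserved (Cahn-Hilliard) update

assert abs(c.mean() - mean0) < 1e-10     # mass is conserved to round-off
assert total_F(c) < F0                   # free energy decreases
```

The tiny step size reflects the severe explicit stability restriction of the biharmonic operator, roughly $\Delta t \lesssim \Delta x^{4}/(M\kappa)$; implicit integration with adaptive stepping, as used for the reference solutions, avoids this limit.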
Both time adaptivity algorithms allow the time step size to increase by at most 5\% per step. In addition, gradient jump indicators \cite{kirk2006libmesh} for $c$ and $\mu$ are used to determine mesh adaptivity, and the diffuse interface width spans at least five elements in all simulations. \section{Results and discussion} In this section, we present lessons learned from the first Hackathon, the results of the two benchmark problems simulated with two different time adaptivity algorithms, and the needs that should be addressed with future benchmark problems. As discussed in the Introduction, many different software implementations exist for phase field models, including bespoke software developed in-house. While several phase field codes are designed to be applied to multiple types of problems, the possible multiphysics couplings are so varied that it may be impossible to develop a single phase field modeling framework to suit all phase field modeling needs. Benchmark problems will help the phase field community in assessing the accuracy and performance of individual software implementations. Several lessons were learned from the first Hackathon hosted by CHiMaD, influencing the current benchmark problems as well as our design considerations for future problems. The Hackathon is a twenty-four-hour event in which teams of two participants each use their phase field software of choice to simulate a specified set of phase field problems with whatever computational resources are available to them, including over the Internet. The goal of the Hackathon is to understand how different numerical implementations handle a set of phase field model problems of increasing difficulty with respect to accuracy and speed. We found that the original problem statements needed additional specifications for participants to successfully run the simulations without guesses or assumptions. 
Furthermore, the free energy functional, which was chosen from the literature, did not produce the phase compositions that were indicated. Finally, we needed standardized outputs for direct, quantitative comparison of the results. For these benchmark problems, we choose the total free energy of the system and microstructural snapshots as the metrics to compare simulation results. Because we use time adaptivity, we choose several synchronization times ($t=$ 1, 5, 10, 20, 100, 200, 500, 1000, 2000, 3000, and 10000) so that simulation results obtained from the different time steppers may be directly compared at given times. Figures \ref{fig:p1_energy_evolution} and \ref{fig:p2_energy_evolution} show the total free energy of the different simulations of spinodal decomposition and Ostwald ripening, respectively. In all cases, the total free energy decreases rapidly, then asymptotically approaches the local energy minimum of the system (which varies given the initial and boundary conditions). While the starting and final free energies are the same for each set of simulations (e.g., spinodal decomposition in the square domain with no-flux boundary conditions), the evolution of the energy is affected by the choice of the adaptive time stepper. In the spinodal decomposition problem, the differences in energy as a result of the different time steppers are small for the square computational domains, but are more significant for the spherical and T-shaped computational domains. As shown in Fig.\ \ref{fig:p1_microstructure}, which presents microstructure snapshots for spinodal decomposition at $t=200$, $t=2000$, and the end of the simulation, obvious microstructural differences are discernible. Small variations early in the simulations strongly affect the microstructural evolution at later times, even in some cases affecting the final lowest-energy structure, as seen for the T-shaped computational domain. 
\begin{figure} \begin{centering} \subfloat[]{\begin{centering} \includegraphics[scale=0.9]{fig4a_2col} \par\end{centering} }\subfloat[]{\begin{centering} \includegraphics[scale=0.9]{fig4b_2col} \par\end{centering} } \par\end{centering} \begin{centering} \subfloat[]{\begin{centering} \includegraphics[scale=0.9]{fig4c_2col} \par\end{centering} }\subfloat[]{\begin{centering} \includegraphics[scale=0.9]{fig4d_2col} \par\end{centering} } \par\end{centering} \caption{The total free energy evolution of the different variations of the spinodal decomposition benchmark problem simulated with two different time steppers, for (a) the square computational domain with no-flux boundary conditions, (b) the square computational domain with periodic boundary conditions, (c) the T-shaped computational domain, and (d) the spherical surface domain. ``IA'' indicates the conservative ``IterationAdaptive'' time stepper within MOOSE, and ``STA'' indicates the aggressive ``SolutionTimeAdaptive'' time stepper. \label{fig:p1_energy_evolution}} \end{figure} \begin{figure} \begin{centering} \subfloat[]{\begin{centering} \includegraphics[scale=0.9]{fig5a_2col} \par\end{centering} }\subfloat[]{\begin{centering} \includegraphics[scale=0.9]{fig5b_2col} \par\end{centering} } \par\end{centering} \begin{centering} \subfloat[]{\begin{centering} \includegraphics[scale=0.9]{fig5c_2col} \par\end{centering} }\subfloat[]{\begin{centering} \includegraphics[scale=0.9]{fig5d_2col} \par\end{centering} } \par\end{centering} \caption{The total free energy evolution of the different variations of the Ostwald ripening benchmark problem simulated with two different time steppers, for (a) the square computational domain with no-flux boundary conditions, (b) the square computational domain with periodic boundary conditions, (c) the T-shaped computational domain, and (d) the spherical surface domain. 
``IA'' indicates the conservative ``IterationAdaptive'' time stepper within MOOSE, and ``STA'' indicates the aggressive ``SolutionTimeAdaptive'' time stepper. \label{fig:p2_energy_evolution}} \end{figure} \begin{figure} \begin{centering} \subfloat[]{\begin{centering} \includegraphics[scale=0.135]{fig6a_2col} \par\end{centering} }\subfloat[]{\begin{centering} \includegraphics[scale=0.18]{fig6b_2col} \par\end{centering} }\subfloat[]{\begin{centering} \includegraphics[scale=0.18]{fig6c_2col} \par\end{centering} } \par\end{centering} \begin{centering} \subfloat[]{\begin{centering} \includegraphics[scale=0.18]{fig6d_2col} \par\end{centering} }\subfloat[]{\begin{centering} \includegraphics[scale=0.18]{fig6e_2col} \par\end{centering} }\subfloat[]{\begin{centering} \includegraphics[scale=0.18]{fig6f_2col} \par\end{centering} } \par\end{centering} \begin{centering} \subfloat[]{\begin{centering} \includegraphics[scale=0.115]{fig6g_2col} \par\end{centering} }\subfloat[]{\begin{centering} \includegraphics[scale=0.15]{fig6h_2col} \par\end{centering} }\subfloat[]{\begin{centering} \includegraphics[scale=0.15]{fig6i_2col} \par\end{centering} } \par\end{centering} \begin{centering} \subfloat[]{\begin{centering} \includegraphics[scale=0.15]{fig6j_2col} \par\end{centering} }\subfloat[]{\begin{centering} \includegraphics[scale=0.15]{fig6k_2col} \par\end{centering} }\subfloat[]{\begin{centering} \includegraphics[scale=0.15]{fig6l_2col} \par\end{centering} } \par\end{centering} \caption{Snapshots of the microstructure evolution for spinodal decomposition simulated with different time steppers. (a-c), (g-i): conservative time stepper; (d-f), (j-l): aggressive time stepper. (a), (d), (g), (j): $t=200$; (b), (e), (h), (k): $t=2000$; (c), (f), (i), (l): end time. 
Note the clearly visible differences in microstructure between the two time steppers in (b) and (e), and (h) and (k).\label{fig:p1_microstructure}} \end{figure} An example microstructure for the Ostwald ripening problem is shown in Fig.\ \ref{fig:p2_structure}, illustrating the solute and structural order parameter fields at $t=20$ for no-flux boundary conditions. The effect of the choice of time stepper is less evident in the free energy evolution (Fig.\ \ref{fig:p2_energy_evolution}), but in some cases, coarsening kinetics are impacted. Figure \ref{fig:coarsening_difference} shows parallel snapshots of the microstructure when the conservative and aggressive time steppers are applied to a simulation with periodic boundary conditions. In both simulations, a smaller particle in the center of the computational domain is shrinking; however, the particle has completely dissolved by $t=4111$ when the conservative time stepper is used (Fig.\ \ref{fig:p2_IA}), while it has not quite disappeared by $t=4131$ when the aggressive time stepper is used (Fig.\ \ref{fig:p2_STA}). As illustrated in Fig.\ \ref{fig:p2_energy_evolution}, the shrinkage of the central particle and the concomitant coarsening of the surrounding particles is slower when the simulation is performed using the aggressive time stepper, likely because a particle shrinks faster as its radius decreases. The conservative time stepper naturally cuts the time step size as the rate of particle shrinkage increases, while the aggressive time stepper typically tends to increase (or at least maintain) the time step size until the solver is unable to converge to a solution. In multiple instances, particle dissolution is delayed, particularly in the later stages of the simulations. This likely occurs because, at later stages of the simulation, there is a greater disparity in the radii of the shrinking and growing particles.
In the case of late-stage particle shrinkage and dissolution, then, the aggressive time stepper may choose a time step size that is inappropriate for the physics of the system, increasing the error in the simulation. These results highlight the fact that adaptive time steppers must be carefully assessed and chosen to minimize their impact on the simulated microstructural evolution. \begin{figure} \begin{centering} \subfloat[]{\begin{centering} \includegraphics[scale=0.19]{fig7a_2col} \par\end{centering} }\subfloat[]{\begin{centering} \includegraphics[scale=0.19]{fig7b_2col} \par\end{centering} }\par\end{centering} \begin{centering} \subfloat[]{\begin{centering} \includegraphics[scale=0.19]{fig7c_2col} \par\end{centering} }\subfloat[]{\begin{centering} \includegraphics[scale=0.19]{fig7d_2col} \par\end{centering} }\par\end{centering} \begin{centering} \subfloat[]{\begin{centering} \includegraphics[scale=0.19]{fig7e_2col} \par\end{centering} } \par\end{centering} \caption{The solute (a) and structural order parameter (b--e) fields at $t=20$ for the Ostwald ripening problem simulated with no-flux boundary conditions (no appreciable difference in results is observed between the conservative and aggressive time steppers). Second-phase particles of differing $\eta_i$ in contact with each other do not coalesce. \label{fig:p2_structure}} \end{figure} \begin{figure} \begin{centering} \subfloat[]{\begin{centering} \includegraphics[scale=0.19]{fig8a_2col}\label{fig:p2_IA} \par\end{centering} }\subfloat[]{\begin{centering} \includegraphics[scale=0.25]{fig8b_2col}\label{fig:p2_STA} \par\end{centering} } \par\end{centering} \caption{A comparison of the composition fields of Ostwald ripening simulations performed with periodic boundary conditions illustrating the effect of the choice of time stepper on coarsening behavior. 
A shrinking particle has (a) completely dissolved by $t=4111$ when the simulation is performed with the conservative time stepper, while the particle has not yet completely dissolved by (b) $t=4131$ when the simulation is performed with the aggressive time stepper. \label{fig:coarsening_difference}} \end{figure} For these benchmark problems, we are interested in the microstructural evolution all the way to the lowest energy state, although for other problems, evolving to equilibrium or a local energy minimum may be unrealistic or even uninteresting. While the evolution of the total system energy provides important information to assess simulation results, we find that it is difficult to determine a proper simulation exit condition. We originally tried a relative differential norm from one time step to the next with a tolerance of $5\times 10^{-8}$, but found that the simulations would sometimes exit significantly before equilibrium was reached. Therefore, we ran the simulations without exit parameters and relied on human intervention. We found that once a simulation has visibly reached equilibrium (e.g., planar interfaces), the system free energy continues to decrease slowly, presumably due to equilibration of very small solute gradients. Eventually the free energy stops evolving to six or seven significant figures, at which point we chose to end the simulations. In some simulations, however, the value of the total free energy fluctuates in the sixth or seventh significant digit, indicating that the solver has reached its limits in terms of numerical accuracy, and the simulations were again ended. To determine a more useful criterion for ending these simulations, we calculate the rate of change of the volume-averaged free energy density, $\frac{1}{V}\frac{dF}{dt}$, an example of which is plotted in Fig.\ \ref{fig:p1_dfdt} for both benchmark problems simulated with the conservative time stepper.
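In practice, this quantity can be estimated by finite differences from the recorded free-energy time series. A minimal Python sketch of the estimate and the corresponding halting check is given below; the function names are illustrative and not part of any particular code.

```python
def avg_energy_rate(times, energies, volume):
    """Backward-difference estimate of (1/V) dF/dt from a recorded
    time series of the total free energy F(t)."""
    return [(energies[i] - energies[i - 1])
            / (times[i] - times[i - 1]) / volume
            for i in range(1, len(times))]

def reached_final_state(times, energies, volume, tol=1e-14):
    """Suggest halting once |(1/V) dF/dt| falls below tol
    (dimensionless units; an illustrative default threshold)."""
    rates = avg_energy_rate(times, energies, volume)
    return bool(rates) and abs(rates[-1]) < tol
```

Checking the magnitude of the last computed rate against a tolerance avoids ending a simulation while the free energy is still evolving slowly, while also avoiding the descent into numerical noise.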
The rate of change allows for a more direct comparison of the evolution within the different computational domains. While the total free energies of the different simulations vary by several orders of magnitude because of their differing computational domain size, $\frac{1}{V}\frac{dF}{dt}$ is similar, as shown in Fig.\ \ref{fig:p1_dfdt}. The rate of change varies by about ten orders of magnitude throughout the course of the simulation, highlighting the need for accurate adaptive time stepping algorithms when studying long-term microstructural evolution. As shown for the T-shaped spinodal decomposition simulation in Fig.\ \ref{fig:p1_dfdt_b}, the numerical noise in the free energy (and thus $\frac{1}{V}\frac{dF}{dt}$) that can occur when the system stops evolving is evident. As an example, we find that a value of $\frac{1}{V}\frac{dF}{dt}=1\times10^{-14}$ in dimensionless units generally appears to be sufficient for these benchmark problems to indicate that the final configuration has been achieved while avoiding the descent into numerical noise. For other systems, the free energy and the rate of change of the average free energy density may need to be normalized to reasonable values for evaluation purposes. \begin{figure} \begin{centering} \subfloat[]{\begin{centering} \includegraphics[scale=0.7]{fig9a_2col}\label{fig:p1_dfdt_a} \par\end{centering} } \subfloat[]{\begin{centering} \includegraphics[scale=0.7]{fig9b_2col}\label{fig:p1_dfdt_p2IA} \par\end{centering} } \subfloat[]{\begin{centering} \includegraphics[scale=0.7]{fig9c_2col}\label{fig:p1_dfdt_b} \par\end{centering} }\par\end{centering} \caption{The calculated $\frac{1}{V}\frac{dF}{dt}$ for the benchmark problems simulated with the conservative time stepper, which allows for a direct comparison of the evolution within the different computational domains: (a) spinodal decomposition, (b) Ostwald ripening.
(c) The calculated $\frac{1}{V}\frac{dF}{dt}$ for the T-shaped spinodal decomposition simulation as the simulation approaches equilibrium. Note the numerical noise in the free energy of the system as it stops evolving. ``NF'' and ``PBC'' indicate square computational domains with no-flux boundary and periodic boundary conditions, respectively, and ``T'' and ``Sphere'' indicate the T-shaped and spherical surface computational domains.} \label{fig:p1_dfdt} \end{figure} The proposed benchmark problems presented in this paper model only a small subset of physics that have been incorporated into phase field formulations. Future benchmark problems should test additional key aspects, such as anisotropic linear elasticity with inhomogeneous moduli, anisotropic diffusivities and interfacial energies, solidification, and CALPHAD-based thermodynamics. In addition, benchmark problems may benefit from being formulated with a parameter that controls the numerical difficulty of the problem, where possible. The solvers and the interpretation of the problem statement may be verified for the ``easy'' problem, while the software may be stress-tested when the parameter value makes the problem ``difficult.'' Furthermore, perturbation studies, in which the initial conditions or problem parameters are slightly varied, may also be useful in determining how much simulation results differ as a result of numerical solvers versus any inherent instability in the problem. Finally, while the quantities of interest in each benchmark problem may vary (e.g., volume fraction of solidified material, polarization field in a ferroelectric material), we propose that the total free energy evolution is a standard, quantitative output that may be used for every problem. Additional relevant comparison metrics should be identified and utilized on a per-problem basis. 
Community discussion and feedback are essential for the development of relevant, useful problem sets, and we urge individual researchers to contribute. \section{Conclusion} In this paper, we propose two benchmark problems for numerical implementations of phase field models that capture essential physical behavior present in a vast majority of models: solute diffusion and second-phase growth and coarsening. The model formulations are simplified to make the tests easier to implement within different software; however, the governing physics are captured. Furthermore, the initial conditions are formulated such that they are implementation-independent, yet still disordered, similar to initial conditions often used in the literature. Multiple computational domains and boundary conditions are given both to challenge the numerical implementations and to address future needs of phase field applications. We also discuss the need to produce tractable output, i.e., data formats that allow simulation results from different implementations to be directly compared, and identify the total free energy evolution as a metric that should be used for every problem. We demonstrate the utility of the benchmark problems by studying the effect of different time steppers on the microstructural evolution: small variations between the simulations at earlier times become amplified at later times. Given the deviation in our own results, we note that variations in results between different implementations do not necessarily imply an incorrect implementation. We also describe the use of the normalized rate of change of the total free energy to halt the simulations appropriately. Finally, by design, the problems presented in this paper test only a small subset of the physics often incorporated into phase field models.
Further benchmark problems are needed to model additional physics, such as linear elasticity, anisotropic diffusion and interfacial energies, solidification, and other phenomena. Ultimately, numerical benchmark problems should allow the validation of models with standard experimental data sets by ensuring that the differences in simulation results are not merely due to variations in numerical implementations. For standard benchmark problems to become successful, the community must provide feedback about and input into the currently proposed problems and future problems. These standard benchmark problems are hosted on the NIST website, \url{https://pages.nist.gov/chimad-phase-field/}, along with the simulation data for the models presented here for download and comparison. The website will also serve as a repository for community-submitted results. We encourage the community to contact the authors directly or via the website for additional discussion. \section*{Acknowledgments} The work by A.M.J., P.W.V., and O.G.H. was performed under financial assistance award 70NANB14H012 from U.S. Department of Commerce, National Institute of Standards and Technology as part of the Center for Hierarchical Material Design (CHiMaD). We gratefully acknowledge the computing resources provided on Blues and Fission, high-performance computing clusters operated by the Laboratory Computing Resource Center at Argonne National Laboratory and the High Performance Computing Center at Idaho National Laboratory, respectively. Finally, A.M.J. thanks J.R. Jokisaari for constructive writing feedback. \bibliographystyle{model1-num-names}
\section{Introduction} {Radio galaxies constitute the parent population of blazars, with low-power radio galaxies thought to be misaligned BL Lacertae objects (BL Lacs), and higher-power sources thought to be associated with flat spectrum radio quasars \citep[FSRQs; e.g.,][]{urr95}. In general, the accretion power determines the radiative properties of the direct emission of the accreting matter (i.e., the thermal continuum of the accretion disk and circumnuclear dust), and is reflected in the presence of high-excitation emission lines in the source spectrum \citep{hin79}. In particular, the ``low-excitation radio galaxies'' (LERGs) are considered to be characterized by lower accretion rates (below $\sim 1\%$ in the Eddington units) and radiatively inefficient accretion flows, while ``high-excitation radio galaxies'' (HERGs) are believed to represent high-accretion rate sources with standard (optically-thick, geometrically-thin) accretion disks. The jet power, on the other hand, which in general scales with the total radio luminosity, was proposed to be related uniquely to the large-scale morphologies of radio galaxies \citep{fan74}, with low- and high-power jets forming Fanaroff-Riley (FR) type I and type II structures, respectively. The ``FSRQs/FR\,IIs/HERGs vs. BL Lacs/FR\,Is/LERGs'' unification scenario is not without its caveats, however, as a number of BL Lacs have been found to be associated with FR\,II-like jets and lobes, while some FSRQs display FR\,I large-scale morphologies \citep[e.g.,][]{blu01,hey07,lan08,chi09,kha10}. 
Also, many FR II-type radio galaxies are classified as LERGs, while some FR Is are known to be hosted by high-excitation nuclei \citep[e.g.,][]{hard06,hard07,hard09,but10,gen13,min14}.} {Studying the core emission of radio galaxies in the aforementioned context of unification schemes for active galactic nuclei (AGN) can be challenging, however, due to the inevitable contributions from relativistic jets, host galaxies, accretion disks, and disk coronae, any one of which may dominate the observed radiative output of a source in different frequency ranges. Hence, a detailed multifrequency analysis is needed to robustly disentangle the various emission components in a number of objects, before drawing any definite conclusions regarding the corresponding jet and accretion luminosities. We note that although} the extended, $\geq$\,kpc-scale jets have been resolved in the X-rays and optical for a number of sources\footnote{See {\tt http://hea-www.harvard.edu/XJET/} and {\tt http://astro.fit.edu/jets/} for continually updated lists of large-scale jets resolved in X-rays and optical, respectively.}, sub-kpc scale structures cannot be imaged at frequencies higher than radio, with a few exceptions \citep[see][]{harr09,goo10,wor10,mey13}. For this reason, even for the brightest radio galaxies such as Cen\,A and NGC\,1275, the origin of the observed optical and X-ray core fluxes is still an open issue \citep[e.g.][]{yam13}. Often, the jet origin of the unresolved core emission is claimed based solely on the modeling of broad-band spectral energy distributions (SEDs; see, e.g., \citealt{chi03} and \citealt{fos05} for the case of NGC\,6251), and therefore the alternative possibilities, such as disk/corona emission, cannot be ruled out. In the X-ray domain there are three pieces of evidence that can help to distinguish between thermal disk/corona emission and non-thermal radiation from an unresolved jet: variability, the Fe-K line, and {--- to a lesser extent ---} the spectral slope.
Non-thermal jet emission is expected to be variable on shorter timescales and with greater magnitude than variations in an accretion disk/corona, so fast variability would be an indication of a jet origin, although a lack of such variability would not rule out a jet origin. {Variability constraints were used, for example,} in the {\em Suzaku} study of Cen\,A by \citet{fuk11b}, who claimed a dominant jet contribution at energies above 100\,keV. \citet{yam13}, on the other hand, concluded that the X-ray core emission from NGC\,1275 is dominated by accreting matter due to a large equivalent width (70--120\,eV) of the detected Fe-K line. Good constraints on Fe-K lines can be achieved by the {\em Suzaku} or {\em XMM-Newton} telescopes, which provide quality X-ray spectra with high signal-to-noise ratios. Obviously, it is possible that in many or even the majority of cases the observed X-ray core emission is a combination of disk/corona and jet contributions, as suggested by the fact that the X-ray spectra of non-jetted AGN are typically harder than those of jetted sources \citep[e.g.,][]{per02}; we note, however, that the spectral slopes of X-ray continua alone do not provide conclusive evidence in this context {\citep[see the related discussion in][]{hard99}.} Optical line emission can also be used as an additional constraint to diagnose the disk emission in the objects we study. {During the last decade, rapid developments in $\gamma$-ray astronomy have opened a new window for studying the unification of jetted AGN.} Blazars are generally bright in the $\gamma$-ray band {\citep{hart99,ack11} } due to relativistic beaming of the jet emission, which is not expected to be as significant for misaligned radio galaxies.
Still, recent observations by the {\em Fermi} Large Area Telescope (LAT) in the high-energy (HE; $0.1-100$\,GeV) band, as well as by the atmospheric Cherenkov telescopes in the very high-energy (VHE; $>0.1$\,TeV) band, have revealed that radio galaxies are high-energy emitters. In particular, {\em Fermi}-LAT has detected 11 radio galaxies with 15 months of sky survey data \citep{abd10b}, including M\,87 \citep{abd09b}, Cen\,A \citep{abd10a}, and NGC\,1275 \citep{abd09a}. These three objects are also the only established TeV emitting radio galaxies \citep[][respectively]{aha06,aha09,ale12}, taking into account that the VHE-detected IC\,310 \citep{ale10} is now proposed to be re-classified as a BL Lac object \citep{kad12}. Here, we report the {\em Suzaku} \citep{mit07} X-ray study of eight nearby (redshifts $z < 0.06$) $\gamma$-ray emitting radio galaxies which are included in the 15-month {\em Fermi}-LAT `misaligned AGN' list \citep{abd10b}. All these radio galaxies are of the FR\,I type, with the exception of 3C\,111 and NGC\,6251 which display classical FR\,II and intermediate FR\,I/FR\,II large-scale radio morphologies, respectively \citep[e.g.,][]{gli04,sam04,gra12}. The remaining three radio galaxies from the misaligned AGN sample are all distant ($z>0.25$) so we do not explore them here. In Section 2 we present our original {\em Suzaku} data analysis for M\,87, PKS\,0625$-$354, and 3C\,78; {\em Suzaku} results for the other sources we discuss, 3C\,111, 3C\,120, Cen\,A, NGC\,1275, and NGC\,6251, are quoted from the literature. In Section 3 we also present the analysis of 5 years of {\em Fermi}-LAT data for two particularly intriguing sources, PKS\,0625$-$354\ and 3C\,78. In Section 4 we discuss the origin of the X-ray emission detected with {\em Suzaku} from unresolved cores of eight analyzed radio galaxies. 
There we also present for the first time the broad-band SED modeling of PKS\,0625$-$354\ and 3C\,78, whose X-ray emission appears to originate from the non-thermal jet and which have not been previously modeled; the modeling is then compared with the analogous modeling of Cen\,A, NGC\,1275, M\,87, and NGC\,6251 presented previously in the literature. Our main conclusions are summarized in Section 5. \section{ {\em Suzaku} Observations } \subsection{Data Reduction} {\em Suzaku} is an X-ray observatory which contains two instruments: the X-ray Imaging Spectrometer \citep[XIS;][]{koy07} and the Hard X-ray Detector \citep[HXD;][]{tak07,kok07}. The former consists of four CCD cameras. One CCD was lost in November 2006, and thus most of the observational results shown in this paper are based on data from the three remaining CCD cameras. The latter consists of PIN photo-diodes and GSO scintillators, surrounded by active shields of BGO scintillators. All the available {\em Suzaku} data for the GeV emitting FR I radio galaxies are summarized in Table\,\ref{obslog}. Results for some of the targets have already been published, as indicated in the table; the {\em Suzaku} results for M\,87, PKS\,0625$-$354, and 3C\,78\ are presented here for the first time. The Fe-K line equivalent widths and the X-ray luminosities of all the objects provided in Table\,\ref{obslog} and discussed in detail below in this paper are estimated by analyzing the archival {\em Suzaku} data, including the results of \citet{fuk11a}. All the observations were performed in the XIS $5 \times 5$ or $3 \times 3$ modes, and with the normal mode of the HXD. We utilized data processed with version 2.0--2.7 of the pipeline {\em Suzaku} software, and performed the standard data reduction: a pointing difference of $<1.5^{\circ}$, an elevation angle of $>5^{\circ}$ from the Earth rim, a geomagnetic cut-off rigidity (COR) of $>$6 GV, and $>256$\,s elapsed after passage through the South Atlantic Anomaly (SAA).
Further selections were applied: an Earth elevation angle of $>20^{\circ}$ for the XIS, and a cut-off rigidity of $>$8 GV together with a time elapsed since the SAA (T\_SAA\_HXD) of $>500$\,s for the HXD. The XIS response matrices were created with {\tt xisrmfgen} and {\tt xissimarfgen} \citep{ish07}. The XIS detector background spectra were extracted at 4--6 arcmin from the target object and then subtracted. We utilized the HXD responses provided by the HXD team. The ``tuned'' ({\tt LCFIT}) HXD background files \citep{fuk09} were used, and the good time interval (GTI) was determined by taking the logical ``AND'' of GTIs among XIS data, HXD data, and HXD background data. For the XIS and HXD-PIN, the Cosmic X-ray Background (CXB) was added to the background spectrum, assuming the flux and spectra in \citet{bol87}, although it was negligible for the HXD-GSO. \begin{table}[!t] {\scriptsize \begin{center} \caption{Summary of {\em Suzaku} observations of eight LAT-detected FR\,I radio galaxies} \label{obslog} \vspace{0.2cm} \begin{tabular}{cccccc} \hline \hline Source & Redshift & ObsID & Date & Exposure$^{\star}$ & References$^{\ddagger}$ \\ \hline 3C\,78/NGC\,1218 & 0.029 & 706013010 & 2011-08-20 & 97\,ks & this study \\ 3C\,84/NGC\,1275 & 0.018 & ---$^{\ast}$ & 2006--2011 & ---$^{\ast}$ & Y13 \\ 3C\,111 & 0.049 & 7050400[1-3]0 & 2010-09-02,09,14 & 170\,ks & T11 \\ 3C\,120 & 0.033 & 7000010[1-4]0 & 2006-02,03$^{\ast\ast}$ & 147\,ks & K07 \\ PKS\,0625$-$354\ & 0.055 & 706014010 & 2011-11-03 & 110\,ks & this study \\ M\,87/3C\,274 & 0.004 & 801038010 & 2006-11-29 & 98\,ks & this study \\ Cen\,A/NGC\,5128 & ---$^{\dagger}$ & 100005010, & 2005-08-19, & 94\,ks & M07 \\ & & 7040180[1-3]0 & 2009-07-20/08-05/08-14 & 118\,ks & F11b \\ NGC\,6251 & 0.024 & 705039010 & 2010-12-02 & 87\,ks & E11\\ \hline \end{tabular} \end{center} $^{\star}$ XIS-0 exposure.\\ $^{\ast}$ Twelve pointings.\\ $^{\ast\ast}$ Four pointings.\\ $^{\dagger}$ The assumed distance is 3.8\,Mpc.\\ $^{\ddagger}$
References: \citet{yam13,tom11,kat07,mar07,fuk11b,eva11}. } \end{table} For the XIS and PIN detectors, the energy ranges of 0.45--10\,keV and 17--50\,keV, respectively, were used in the fitting. In addition, we ignored the 1.75--1.88\,keV energy interval in the XIS spectra to avoid response uncertainties. The X-ray spectra were binned for minimum-$\chi^2$ spectral fitting {so that each spectral bin contains more than 20 photons}. XIS photons were accumulated within 4 arcmin of the galaxy center, with the XIS-0 and -3 data co-added for PKS\,0625$-$354\ and 3C\,78. For M\,87, the XIS-2 CCD was utilized in the same way as above. Since M\,87 is embedded in the bright extended X-ray emission of the Virgo intracluster medium, we took the integration radius to be 1 arcmin, extracted the background spectrum from the 1.5--2.5 arcmin ring, and ignored the HXD-PIN data. Since the GSO signal was not significant in any of the analyzed objects, and the resulting upper limits above 40\,keV are not constraining, below we do not discuss the GSO data analysis results. The relative normalization between the XIS-F and XIS-B was left free to vary, while that between {the XIS and PIN was fixed\footnote{\tt http://www.astro.isas.jaxa.jp/suzaku/analysis/hxd/gsoarf2/} to 1.17.} \subsection{Results} M\,87, PKS\,0625$-$354, and 3C\,78\ are clearly detected with the XIS below 10\,keV; PKS\,0625$-$354\ is also detected with HXD-PIN above 10\,keV. First, the obtained {\em Suzaku} X-ray spectra were fitted with a single power-law model (PL) multiplied by Galactic absorption, with the column densities fixed to the corresponding values provided by \citet{kal05}. The spectra of all three objects showed some residuals in the soft X-ray band. These residuals could be due to the thermal emission from the hot interstellar or intracluster medium, and/or absorption columns in excess of the Galactic values.
Thus, one or two {\tt apec} thermal plasma models were included in the second step of the fitting procedure, and the absorption column densities were left free to vary. In the cases of PKS\,0625$-$354\ and 3C\,78, the metal abundance of {\tt apec} was fixed to 0.3 solar, which is typical for galaxy groups \citep[e.g.,][]{fuk04}. In the case of M\,87, good photon statistics allowed us to leave the metal abundance free, but because the temperature structure of the M\,87 core region is complex \citep[e.g.,][]{mat02}, even a two-temperature {\tt apec} model could not fit the soft X-ray continuum well. Since the detailed investigation of the M\,87 temperature structure is beyond the scope of this paper, we simply applied a two-temperature {\tt apec} model to the 0.7--10\,keV range, constraining the main thermal plasma parameters; then the resulting {\tt apec} model parameter values were fixed during an {\tt apec} + PL model fit to the 3--10\,keV range. Inclusion of the additional thermal components improved the fits, which are summarized in Table\,\ref{xfit} and presented in Figure\,\ref{xspec}.
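The absorption-corrected luminosities listed in Table\,\ref{xfit} follow from the fitted model fluxes and source distances via $L = 4\pi d^{2} F$. As a rough illustration (not the exact procedure used here), the conversion for a low-redshift source can be sketched in Python, assuming a simple Hubble-law distance $d \simeq cz/H_{0}$ with an assumed $H_{0} = 70$\,km\,s$^{-1}$\,Mpc$^{-1}$ and a hypothetical example flux:

```python
import math

C_KM_S = 2.998e5      # speed of light [km/s]
H0 = 70.0             # assumed Hubble constant [km/s/Mpc]
MPC_TO_CM = 3.086e24  # 1 Mpc in cm

def xray_luminosity(flux_cgs, z):
    """Convert an observed, absorption-corrected flux [erg/cm^2/s] into
    a luminosity [erg/s], using the low-z approximation d ~ c*z/H0."""
    d_cm = (C_KM_S * z / H0) * MPC_TO_CM
    return 4.0 * math.pi * d_cm ** 2 * flux_cgs

# Hypothetical example: a source at z = 0.055 with F(2-10 keV) = 1e-11
L = xray_luminosity(1e-11, 0.055)
```

For the nearest source, Cen\,A, a direct distance of 3.8\,Mpc is adopted instead (Table\,\ref{obslog}).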
\begin{table}[!t] {\scriptsize \begin{center} \caption{Summary of {\em Suzaku} data spectral fitting} \label{xfit} \vspace{0.2cm} \begin{tabular}{ccccccccc} \hline \hline Source & $N_{\rm H}$ & $kT$ & $Z$ & $L_{\rm 0.5-10\,keV}$ & $\Gamma_{\rm X}$ & $L_{\rm 2-10\,keV}$ & EW & $\chi^2$/d.o.f \\ & $10^{20}$\,cm$^{-2}$ & keV & $Z_{\odot}$ & $10^{42}$\,erg\,s$^{-1}$ & & $10^{42}$\,erg\,s$^{-1}$ & eV & \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) \\ \hline M\,87 & 10$\pm$6 (1.9) & 1.79$\pm$0.02 & 1.4$\pm$0.8 & 13 & 2.42$\pm$0.03 & 0.7 & $<13$ & 674/572 \\ & & 2.28$\pm$0.03 & 1.4$\pm$0.2 \\ PKS\,0625$-$354\ & 9$\pm$1 (6.36) & 0.24$\pm$0.02 & 0.3$^{(f)}$ & 9.6 & 2.25$\pm$0.02 & 49 & $<7$ & 640/486 \\ 3C\,78\ & 14$\pm$2 (9.51) & 0.29$\pm$0.04 & 0.3$^{(f)}$ & 1.0 & 2.32$\pm$0.04 & 2.0 & $<75$ & 572/567 \\ & & 1.07$\pm$0.06 & 0.3$^{(f)}$ \\ NGC\,6251 & & & & & 1.82$_{-0.05}^{+0.04}$ & 2.86 & $<66$ & \\ 3C\,111 & & & & & 1.65$\pm$0.02 & 259 & 40$\pm9$ & \\ 3C\,120 & & & & & 1.75$_{-0.02}^{+0.03}$ & 100 & 50$\pm10$ & \\ NGC\,1275 & & & & & 1.73$\pm0.03$ & 7.7 & 75$\pm7$ & \\ Cen\,A & & & & & 1.73$\pm0.03$ & 10 & 76$\pm3$ & \\ \hline \end{tabular} \end{center} (1) Source. (2) Absorption column density for {\tt phabs} model; the values in the parentheses are the Galactic values $N_{\rm H,\,Gal}$ from \citet{kal05}. (3) Temperature in the ${\tt apec}$ fits. For M\,87 and 3C\,78, two temperatures are shown in the two {\tt apec} model. (4) Abundance in the {\tt apec} fit (fixed in the cases of PKS\,0625$-$354\ and 3C\,78). (5) Absorption-corrected ${\tt apec}$ luminosity. (6) Photon index in the PL fit; {values for bottom five objects are quoted from the corresponding references in Table\,\ref{obslog}}. (7) Absorption-corrected PL luminosity. (8) Equivalent width of Fe-K line. (9) Goodness of the fit (in the case of M\,87 the provided $\chi^2$/d.o.f value corresponds to the PL fit to the 3--10\,keV range; see section 2.2 for details). 
Values for the bottom five objects in columns 7, 8, and 9 were derived originally by us.} \end{table} The plasma temperatures and luminosities derived for PKS\,0625$-$354\ and 3C\,78\ are consistent with those of interstellar and intracluster media found in elliptical galaxies and galaxy groups \citep[e.g.,][]{bir04,fuk06,die07}. We also checked the archival {\em Chandra} X-ray data for PKS\,0625$-$354\ and confirmed that the X-ray appearance of the source is almost point-like, with a faint extended halo due to the host elliptical/poor galaxy cluster Abell\,3392. The temperatures of the two {\tt apec} components derived for M\,87 are consistent with those obtained before by \citet{mat02} based on the analysis of {\em XMM-Newton} data; the two distinct thermal components in this case are due to the projection of cool-core and hot-periphery cluster emission at the cluster center. \begin{figure}[!t] \begin{center} \includegraphics[scale=0.32]{fig1a.eps} \includegraphics[scale=0.32]{fig1b.eps} \includegraphics[scale=0.32]{fig1c.eps} \vspace{0.5cm} \caption{{\em Suzaku} spectra of M\,87, PKS\,0625$-$354, and 3C\,78. The black, red, and green symbols are XIS-F, XIS-B, and HXD-PIN spectra, respectively. The solid line represents the best-fit total model, while the dashed and dotted lines are the {\tt apec} and power-law model components, respectively. We include two {\tt apec} models for M\,87 and 3C\,78. The bottom panels show the residuals in units of $\sigma$.} \label{xspec} \end{center} \end{figure} The absorption column densities $N_{\rm H}$ for the three analyzed radio galaxies are slightly larger than the corresponding Galactic values of \citet{kal05}; see Table\,\ref{xfit}. This might be due to the spectral curvature in the soft X-ray band, but we cannot rule out uncertainties in the $N_{\rm H,\,Gal}$ database, {the dependence on the spectral modeling of the thermal emission}, and also intrinsic absorption {by the interstellar medium in the host galaxies}.
The derived power-law photon indices $\Gamma_{\rm X}$, distributed within a narrow range of 2.22--2.45, are somewhat steeper than, but not extraordinary for, the coronal emission of Seyfert galaxies. They are also consistent with the X-ray photon indices of high-peaked BL Lac objects \citep[e.g.,][]{don05,aje09}, i.e., the aligned counterparts to FR\,I radio galaxies, where the X-rays have a jet origin. Fluorescence Fe-K lines are common features in AGNs dominated by disk emission. However, none of the objects analyzed here shows significant fluorescence Fe-K lines, except for the ionized Fe-K lines from the hot plasma around the M\,87 core. We obtained upper limits on the equivalent widths (EWs) of the narrow Fe-K lines at the rest-frame energy of 6.4\,keV (see Table\,\ref{xfit}); these are particularly low in the cases of PKS\,0625$-$354\ and M\,87. \begin{figure}[!t] \begin{center} \includegraphics[scale=0.32]{fig2a.eps} \includegraphics[scale=0.32]{fig2b.eps} \vspace{0.5cm} \caption{{\em Suzaku} X-ray light curves of PKS\,0625$-$354\ (left) and 3C\,78\ (right) in the 0.45--8\,keV band. The size of the time bins is 4000\,s. The red data points are XIS-B, and the others are XIS-F. } \label{xlc} \end{center} \end{figure} \citet{gli08} reported on the 2005 {\em XMM-Newton} data analysis for PKS\,0625$-$354. They found a power-law X-ray component with a photon index of 2.52$_{-0.03}^{+0.02}$ and a flux of $2.6 \times 10^{-12}$ erg\,cm$^{-2}$\,s$^{-1}$; the EW of the Fe-K line was constrained to be $<182$\,eV. \citet{tru99} reported on {\em BeppoSAX} observations of PKS\,0625$-$354\ and 3C\,78\ from 1996--1997; assuming a photon index of 2.3 for both sources, they derived the 1--10\,keV luminosities of the power-law components as $1.8 \times 10^{43}$\,erg\,s$^{-1}$ and $1.5 \times 10^{42}$\,erg\,s$^{-1}$, respectively.
Our {\em Suzaku} observations therefore reveal a flatter-spectrum and brighter (by a factor of 2--3) X-ray emission of PKS\,0625$-$354\ when compared with the previous epochs. For 3C\,78, the X-ray flux is almost the same as that in 1997, and we constrain the power-law photon index and Fe-line EW for the first time. The Fe-K line EWs are in general much more strongly constrained in this study than ever before. We also investigated the X-ray time variability of the analyzed radio galaxies during the {\em Suzaku} observations; see Figure\,\ref{xlc}. No statistically significant variability was found in the 0.45--8\,keV range for 3C\,78, although this is most likely due only to the very low photon statistics. PKS\,0625$-$354, on the other hand, showed a small amount of variability with a time scale of 1--2 days and an amplitude of $\sim 10\%$. {Significant X-ray variability of M\,87 detected in the acquired {\em Suzaku} data will be discussed \citep[and compared with ongoing {\em Chandra} monitoring; see][]{harr09} in a forthcoming paper.} \section{ {\em Fermi}-LAT Observations} The {\em Fermi}-LAT is a pair-conversion telescope with a field of view covering about 20\% of the sky, sensitive from 20\,MeV to over 300\,GeV \citep{atw09}. Since our results indicate that the X-ray spectra of PKS\,0625$-$354\ and 3C\,78\ are dominated by jet emission, we analyzed five years of LAT data for those two radio galaxies. As mentioned in Section\,1, the SED modeling of M\,87 was performed by \citet{abd09b}. \subsection{Data Analysis and Localization} We analyzed the LAT {\tt P7REP} data from 2008 August 4 to 2013 August 4, corresponding to mission elapsed time (MET) 239557420 to 397353600. Source class ({\tt evclass=2}) events were selected with a zenith angle cut of $<$100$^{\circ}$, and a rocking angle cut of 52$^{\circ}$. For the analysis, LAT Science Tools version v9r32p5 was utilized with the {\tt P7REP\_SOURCE\_V15} Instrument Response Functions (IRFs).
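As a quick arithmetic consistency check (illustrative only; Fermi MET is counted in seconds from the 2001 January 1 epoch), the quoted MET interval indeed corresponds to the stated five-year span:

```python
# Fermi MET: seconds since 2001-01-01 00:00:00 UTC
MET_START, MET_STOP = 239557420, 397353600
span_days = (MET_STOP - MET_START) / 86400.0
span_years = span_days / 365.25
# span_years is about 5.0, matching 2008 August 4 -- 2013 August 4
```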
Both radio galaxies are clearly visible in the 0.2 to 300\,GeV LAT counts maps. We obtained a localization of the $\gamma$-ray source associated with each galaxy with the {\tt gtfindsrc} task. The resulting 95\% confidence localization errors are $r_{95}$= 0.042$^{\circ}$, centered at (RA, DEC) = (96.785$^{\circ}$, $-35$.488$^{\circ}$), for PKS\,0625$-$354\ (NED: 96.778$^{\circ}$, $-35$.488$^{\circ}$), and $r_{95}$= 0.089$^{\circ}$, centered at (47.145$^{\circ}$, 4.130$^{\circ}$), for 3C\,78\ (NED: 47.109$^{\circ}$, 4.111$^{\circ}$). These localized positions are consistent within 0.007$^{\circ}$ and 0.046$^{\circ}$ with the centers {of the two targets, respectively}. \subsection{Results} \begin{table}[!b] {\scriptsize \begin{center} \caption{Summary of the {\em Fermi}-LAT data spectral fitting} \label{gfit} \vspace{0.2cm} \begin{tabular}{cccccc} \hline \hline Source & $\Gamma_{\rm HE}$ & $F_{\rm 0.1-100\,GeV}$ & TS & GBMN & IBMN \\ (1) & (2) & (3) & (4) & (5) & (6) \\ \hline PKS\,0625$-$354\ & 1.72$\pm$0.06 & 6.7$\times10^{-9}$ & 403.2 & 1.06$\pm$0.02 & 1.38$\pm$0.03 \\ 3C\,78\ & 2.01$\pm$0.16 & 4.9$\times10^{-9}$ & 61.3 & 1.04$\pm$0.01 & 0.95$\pm$0.03 \\ \hline \end{tabular} \end{center} (1) Source. (2) HE $\gamma$-ray photon index. (3) Photon flux in units of ph\,cm$^{-2}$\,s$^{-1}$. (4) Test Statistic (TS) of the detection. (5) Galactic background model normalization. (6) Isotropic background model normalization. } \end{table} We extracted the data within a 12$\times$12\,deg$^2$ rectangular region centered on each object. Binned likelihood fitting was performed with the {\tt gtlike} tool. The field background point sources within 14.5$^{\circ}$ from each source, listed in the LAT 2-year catalog \citep{nol12}, were included, and their spectra were assumed to be power-laws with the photon indices fixed to the catalog values.
The standard LAT Galactic emission model was used ({\tt gll\_iem\_v05.fits}), and the isotropic diffuse gamma-ray background and the instrumental residual background were represented as a uniform background ({\tt iso\_source\_v05.txt})\footnote{These background models are available at the FSSC:\\ {\tt http://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html}}. A likelihood analysis was performed with the energy information binned logarithmically in 30 bins in the 0.2--300\,GeV band, and the spatial information binned with a 0.15$\times$0.15\,deg$^2$ bin size. For the Galactic and isotropic emission models the normalizations were left free. The spectra of both the analyzed radio galaxies were modeled as power-laws. \begin{figure}[!t] \begin{center} \includegraphics[scale=0.32]{fig3a.eps} \includegraphics[scale=0.32]{fig3b.eps} \vspace{0.25cm} \caption{{\em Fermi}-LAT $\gamma$-ray spectra of PKS\,0625$-$354\ (left) and 3C\,78\ (right). The data points with only the lower error bar represent 90\% confidence level flux upper limits.} \label{gspec} \end{center} \end{figure} Table\,\ref{gfit} summarizes the {\em Fermi}-LAT data fitting results. The HE $\gamma$-ray photon indices $\Gamma_{\rm HE}$ (evaluated for the 0.2--300\,GeV interval) are within the `standard' blazar range (1.3--3.0) \citep{abd10d}, and we note that PKS\,0625$-$354\ and 3C\,78\ have the hardest LAT spectra among the entire 15-month `misaligned AGN' sample \citep{abd10b}. The values of $\Gamma_{\rm HE}$ and photon fluxes $F_{\rm 0.1-100\,GeV}$ provided here are in agreement with those given in the 15-month catalog. In order to obtain model-independent spectra in the 0.2--300\,GeV range for our two sources, we performed the {\tt gtlike} spectral analysis for several independent energy bins, which were spaced logarithmically. Nine energy bins were analyzed for PKS\,0625$-$354, and six energy bins for 3C\,78. In each energy bin, we fixed the power-law photon index to 2.0.
Figure\,\ref{gspec} shows the resulting spectra, where we do not detect signals below 1\,GeV ($TS<5$) for 3C\,78. Interestingly, the $\gamma$-ray detection is significant up to 100\,GeV for both objects. In addition, the analysis indicates a break in the $\gamma$-ray spectrum of PKS\,0625$-$354. We therefore applied the broken power-law model with {\tt gtlike}, and found that the fit improved by $2 \Delta \log L = 401.4$, corresponding to $\sim 20\sigma$\footnote{TS=$2 \Delta \log L$ is distributed as $\chi^2$ for one degree of freedom.}; the resulting photon indices were then derived as $1.69 \pm 0.07$ and $4.97 \pm 1.53$ below and above the break energy of $64 \pm 23$\,GeV, respectively. \begin{figure}[!t] \begin{center} \includegraphics[scale=0.32]{fig4a.eps} \includegraphics[scale=0.32]{fig4b.eps} \vspace{0.25cm} \caption{{\em Fermi}-LAT 5-year light curves of PKS\,0625$-$354\ (left) and 3C\,78\ (right) in the 0.2--300\,GeV range.} \label{glc} \end{center} \end{figure} In order to investigate the $\gamma$-ray variability of the two analyzed radio galaxies, we binned the LAT data into 30- or 90-day bins for PKS\,0625$-$354, and 1-year bins for 3C\,78. The {\tt gtlike} analysis was performed for each time bin in the same way as the 5-year analysis of the 0.2--300\,GeV band. Figure\,\ref{glc} shows the resulting HE $\gamma$-ray light curves. No significant variability can be claimed for 3C\,78, but PKS\,0625$-$354\ displayed a rather pronounced flare during the second year of the LAT operation. Keeping in mind the hardening in the HE $\gamma$-ray spectrum of NGC\,1275 detected during the flaring state by \citet{kat10}, we split the LAT data for PKS\,0625$-$354\ into two 2.5-year-long intervals, and performed the {\tt gtlike} analysis for each epoch separately, but we did not find any significant spectral evolution.
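The conversion between the likelihood-ratio TS values quoted above and Gaussian-equivalent significances is, for one degree of freedom, simply $\sigma \approx \sqrt{\rm TS}$ (Wilks' theorem); a purely illustrative one-line check:

```python
import math

def ts_to_sigma(ts):
    """Gaussian-equivalent significance for a chi^2_1-distributed TS."""
    return math.sqrt(ts)

# broken power-law improvement for PKS 0625-354: 2*Delta log L = 401.4,
# i.e. roughly the ~20 sigma quoted in the text
```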
\section{Discussion} \subsection{Origin of the X-ray Emission} In this subsection, we summarize {\em Suzaku} X-ray studies of the GeV-emitting FR\,I radio galaxies. Together with the new results on M\,87, PKS\,0625$-$354, and 3C\,78\ reported in this paper, we refer to {\em Suzaku} results from publications listed in Table\,\ref{obslog}. For most of the objects in our sample the X-ray spectra are quite similar to those of {radio-quiet (non-jetted)} Seyfert galaxies, and only in a few cases do the power-law photon indices seem somewhat steeper than those typically derived for Seyferts \citep[$\Gamma_{\rm X} \sim 1.5$--$2.1$; e.g.,][]{per02}. Therefore, the key feature in distinguishing between the disk/corona versus jet origin for the observed X-ray emission is a narrow fluorescence line of neutral Fe-K. This line, commonly observed in Seyfert galaxies {accreting at moderate and higher rates with $>10^{-2} L_{\rm edd}$, where $L_{\rm edd}$ is the Eddington luminosity}, is believed to originate from the Compton-thick {dusty} torus, which subtends a large solid angle as seen from the accretion disk, as a result of the reflection of the coronal nuclear X-ray emission. The Fe-K line width and slow variability support the torus origin of a narrow Fe-K line \citep{fuk11a}. In the case where the X-ray emission is dominated by non-thermal jet radiation, one should not expect a strong Fe-K line, since the jet emission is beamed away from the disk, and so jet photons are not likely to be reflected by the torus. {At the same time, sources accreting at particularly low rates {with $<10^{-2} L_{\rm edd}$} may in principle lack large amounts of circumnuclear dust or prominent coronal components, and as such may be rather weak Fe-K line emitters.
Among our sample of targets, the four HERGs (Cen\,A, NGC\,1275, 3C\,120, and 3C\,111) reveal clear Fe-K lines, while the objects classified as LERGs --- namely M\,87, PKS\,0625$-$354, 3C\,78, and NGC\,6251 --- do not.} \begin{figure}[!t] \begin{center} \vspace{-1cm} \includegraphics[scale=0.4]{fig5.eps} \vspace{0.5cm} \caption{Fe-K line EW plotted against the X-ray luminosity for our sample of radio galaxies (red circles) and Seyfert galaxies (black triangles) analyzed by \citet{fuk11a}. The data points with only the lower error bar represent upper limits.} \label{fek} \end{center} \end{figure} Figure\,\ref{fek} shows the Fe-K EW plotted versus the X-ray luminosity {$L_{\rm X}$ (spanning a wide range from $\lesssim 10^{41}$\,erg\,s$^{-1}$ up to $\sim 10^{46}$\,erg\,s$^{-1}$) obtained with {\em Suzaku} for Seyfert galaxies, together with our sample of radio galaxies. The Fe-K line-detected radio galaxies are found in the same region of the EW--$L_{\rm X}$ plane as Seyfert galaxies.} Thus, their X-ray emission is likely dominated by disk emission. Other X-ray properties, such as time variability and the relative X-ray flux in the SED, also support the disk origin for these sources \citep{fuk11b,yam13,kat11}. However, Cen\,A is characterized by an excess in hard X-rays (above 100\,keV) that smoothly connects to the GeV continuum of the source \citep{fuk11b}. This may imply that for this source the X-ray radiation consists of both thermal disk/corona at low energies and non-thermal jet emission at higher energies. \begin{figure}[!t] \begin{center} \vspace{-1cm} \includegraphics[scale=0.35]{fig6.eps} \vspace{0.5cm} \caption{Relation between the X-ray luminosity and [O III] 5007\AA\ luminosity for our sample of radio galaxies (filled circles; NED data base) and Seyfert galaxies \citep[empty triangles; data from][]{mul94,win10}.} \label{o3} \end{center} \end{figure} {\em Suzaku} puts strong upper limits on the Fe-K line EWs for M\,87, PKS\,0625$-$354, and 3C\,78.
All of these limits are significantly lower than the Fe-K EWs measured for Seyfert galaxies {with comparable X-ray luminosities}, suggesting that the X-ray emission of these three sources is most likely of jet origin. This, in fact, constitutes {the first strong \emph{indication}} for compact non-thermal jet emission in the X-ray band for PKS\,0625$-$354\ and 3C\,78. Note that there is already some evidence that the X-ray emission from the M\,87 core is dominated by the jet, albeit different jet regions seem to be pronounced at different activity levels of the source \citep{harr09,harr11}. {One should emphasize, however, that the non-detection of the Fe-K line in a source spectrum does not robustly prove the dominance (or even the presence) of a jet component.} Optical [O III] line emission is also a meaningful indicator of pronounced disk emission, since this line is emitted by the extended gas photoionized by strong disk UV emission. Figure\,\ref{o3} shows a plot of the X-ray luminosity versus [O III] luminosity. {The plot reveals some (weak) hints of a $L_{\rm X} - L_{\rm [O\ III]}$ correlation in the case of Seyferts, but it is not obvious for the analyzed radio galaxies. In particular, the three HERGs in the sample (NGC\,1275, 3C\,111, and 3C\,120) seem to be located in the same region as Seyferts, thus obeying the correlation, while the LERGs seem to be located significantly off the track. This is in agreement with the results of \citet{hard09} and \citet{min14}, who showed that the $L_{\rm X} - L_{\rm [O\ III]}$ correlation persists for HERGs, and is not followed by the LERGs in general. This finding can be considered as further support for the scenario} of the disk/corona emission dominating the X-ray spectra in ``Seyfert-like'' sources NGC\,1275, 3C\,111, and 3C\,120, and the jet emission dominating the X-ray output of outliers like PKS\,0625$-$354.
However, a caveat for this conclusion is that luminosity-luminosity correlations in flux-limited samples may not be real, but only induced by selection effects. We have summarized the evidence for the disk/corona versus jet origin for our sample, as discussed in this section, in Table \ref{diskjetsummary}. \begin{table}[!t] {\scriptsize \begin{center} \caption{Summary of evidence for disk/corona versus jet origin for X-ray emission} \label{diskjetsummary} \vspace{0.2cm} \begin{tabular}{cccccc} \hline \hline Source & Fe-K line & X-ray spectral index & X-ray variability & [O III] line & Type [ref.]\\ \hline 3C\,78\ & jet & jet & inconclusive & jet & LERG [B10]\\ 3C\,84 & disk/corona & inconclusive & inconclusive & disk/corona & HERG/LERG$^{\dagger}$\\ 3C\,111 & disk/corona & inconclusive & inconclusive & disk/corona & HERG$^{\ddagger}$ [E00]\\ 3C\,120 & disk/corona & inconclusive & inconclusive & disk/corona & HERG$^{\ddagger}$ [E00]\\ PKS\,0625$-$354\ & jet & jet & inconclusive & jet & LERG [M14]\\ M\,87 & jet & jet & jet & jet & LERG [G13]\\ Cen\,A & disk/corona & inconclusive & jet & inconclusive & HERG [E04]\\ NGC\,6251 & jet & inconclusive & inconclusive & jet & LERG [E11]\\ \hline \end{tabular} \end{center} Refs: [B10] \citet{but10}; [E00] \citet{era00}; [M14] \citet{min14}; [G13] \citet{gen13}; [E04] \citet{eva04}; [E11] \citet{eva11}.\\ $^{\dagger}$ 3C\,84 is diversely classified in the literature; see, e.g., \citet{hard09,but10,gen13}. $^{\ddagger}$ 3C\,111 and 3C\,120 are archetype examples of Broad-Line Radio Galaxies (BLRGs). } \end{table} \subsection{X-ray/$\gamma$-ray Connection} The GeV $\gamma$-ray emission from radio galaxies could originate from the pc/sub-pc scale jet, where the likely mechanism is synchrotron self-Compton (SSC) or external Compton (EC) scattering of the dust torus or broad-line region emission. It could also originate from EC scattering of CMB photons by electrons in the {100\,kpc-scale jets} or the radio lobes, as established for Cen\,A (Abdo et al. 2010b).
{Somewhat tentative detections of the lobes' $\gamma$-ray emission have also been reported for the intermediate FR\,I/FR\,II sources NGC\,6251 \citep{tak12} and Cen\,B \citep{kat13}, based on the spatial offset between the best-fit position of the $\gamma$-ray source and the position of the radio core, or the extension of the $\gamma$-ray source aligned to the large-scale radio structure, respectively.} The $\gamma$-ray emission could not be localized {precisely, or potentially resolved, for 3C\,78\ or PKS\,0625$-$354, due to the large position errors and the combination of relatively large source distances (when compared with those of Cen\,A or Cen\,B) and relatively small sizes of their FR\,I-type radio structures. The variability of PKS\,0625$-$354, however, makes the pc-scale origin much more likely for this source, and we favor this interpretation for 3C\,78\ as well.} This also allows us to make a connection between the $\gamma$-ray and X-ray emission from PKS\,0625$-$354\ and 3C\,78, which was established as likely of jet origin in the previous section. {We therefore model} the broadband SEDs of these two sources in the framework of a standard `misaligned blazar' scenario. We combined the new X-ray and $\gamma$-ray data presented above with archival radio, optical, and X-ray data from the NASA Extragalactic Database (NED), {\em XMM-Newton} optical monitor (OM) data from \citet{gli08}, and {\em Hubble Space Telescope} data from \citet{chi02}. The resulting SEDs are presented in Figure\,\ref{sed}. {It seems fairly clear, looking at the figure, that the nonthermal synchrotron peak in both targets lies between the core optical and X-ray emission, i.e., between $10^{15}$ and $10^{17}$\,Hz, and we elaborate more on this point further below. } \begin{figure}[!t] \begin{center} \vspace{0.5cm} \includegraphics[scale=0.32]{fig7a.eps} \hspace{1cm} \includegraphics[scale=0.32]{fig7b.eps} \caption{SEDs of PKS\,0625$-$354\ ({left}) and 3C\,78\ ({right}).
Black circles indicate the {\em Suzaku} X-ray and {\em Fermi}-LAT $\gamma$-ray data presented in this paper, green diamonds are archival data from NED, and red squares are the {\em XMM-Newton} OM data for PKS\,0625$-$354\ \citep{gli08} and core {\em HST} data for 3C\,78\ \citep{chi02}. The thick curves denote the synchrotron/SSC model fits with two different variability timescales, as given in the legend. The solid curves include $\gamma\gamma$ absorption with the EBL model of \citet{fin10}, while the dashed curves do not. The thin blue curves are the elliptical galaxy template from \citet{sil98}, adjusted to the redshifts of the sources.} \label{sed} \end{center} \end{figure} We fit the \citet{gli08} and \citet{chi02} optical, {\em Suzaku} X-ray, and LAT $\gamma$-ray data {assuming they originate as non-thermal synchrotron/SSC from a relativistic jet} with the one-zone synchrotron/SSC model from \citet{fin08}. The resulting model curves are shown in Figure\,\ref{sed} and the model parameters are listed in Table\,\ref{sedfitpara}. See \citet{fin08} for a description of the model parameters and other model details. {In our modeling we do not include a component from the disk/corona, consistent with our results from Section\,4.1. We also did not fit the radio data,} as this is likely to be from a superposition of self-absorbed jet components unrelated to the rest of the SED \citep{kon81}, and as such should be considered as upper limits in our one-zone synchrotron/SSC modeling. 
{The near infrared/integrated optical segments of the broadband spectra are clearly dominated by host galaxies, and therefore in our modeling we added a template of a giant elliptical from \citet{sil98}, adjusted to the redshifts of the analyzed sources; this template reproduces the optical data well.} We assumed a relatively large jet angle to the line of sight ($\theta$ in Table\,\ref{sedfitpara}), consistent with the sources being misaligned BL Lacs, and used two variability timescales to test the robustness with respect to this poorly constrained parameter. {The models with two different variability timescales are given in Table \ref{sedfitpara}.} Also listed in Table\,\ref{sedfitpara} are the results of one-zone synchrotron/SSC models applied to reproduce several other LAT-detected FR\,I radio galaxies from the literature. Model parameters for PKS\,0625$-$354\ and 3C\,78\ are consistent with those for other radio galaxies that have been modeled previously, as shown in the table. The parameters $\Gamma$ and $\delta$ are lower than typically found in models of BL Lacs. We note that the black hole mass in PKS\,0625$-$354\ is estimated to be $10^{9.2} \, M_{\odot}$ \citep{bet03}, and in 3C\,78\ as $10^{8.7} \, M_{\odot}$ \citep{rin05}; these are typical values for radio galaxies \citep[$10^{8.1-9.5} \, M_{\odot}$;][]{mcl04} and BL Lac objects \citep[$10^{7.9-9.2} \, M_{\odot}$;][]{bar03}. \begin{figure}[!t] \begin{center} \vspace{-1cm} \includegraphics[scale=0.4]{fig8.eps} \vspace{0.5cm} \caption{Relation between synchrotron peak frequencies and peak luminosities of PKS\,0625$-$354\ and 3C\,78, together with other sources from our sample of radio galaxies (red {circles}).
For a comparison, radio galaxies, BL Lacs, and FSRQs from \citet{mey11} are also plotted (black circles, triangles, and crosses, respectively).} \label{fplp} \end{center} \end{figure} One major difference is that the models for PKS\,0625$-$354, 3C\,78, and NGC\,6251 \citep{mig11} have a $\gamma_{brk}$ higher by a factor of 10 compared to the other radio galaxies in the table. The larger $\gamma_{brk}$ leads to higher peak synchrotron frequencies and lower electron jet powers compared to magnetic jet powers. For Cen\,A, M\,87, and NGC\,1275, the models result in approximate equipartition between the magnetic field and electron jet powers. The higher $\gamma_{brk}$ parameters for PKS\,0625$-$354\ and 3C\,78\ are in turn mainly the result of their harder $\gamma$-ray and softer X-ray spectra. Cen\,A, NGC\,6251, M\,87, and NGC\,1275 have soft LAT spectra ($\Gamma_{\rm HE}>2.1$), while the LAT spectra for PKS\,0625$-$354\ and 3C\,78\ are harder ($\Gamma_{\rm HE}<2.1$), although NGC\,1275 is a borderline case. As we have already noted above, PKS\,0625$-$354\ and 3C\,78\ were the hardest sources in the {\em Fermi}-LAT `misaligned AGN' list of \citet{abd10c}. The X-ray spectra for Cen\,A \citep{abd10a}, M\,87 \citep{abd09b}, NGC\,1275 \citep{abd09a}, and NGC\,6251 \citep{mig11} are hard ($\Gamma_{\rm X}<2$), indicating that they originate from the SSC component, although M\,87 likely has some synchrotron contribution as well \citep{abd09b}. Assuming a jet origin, the soft X-ray spectra ($\Gamma_{\rm X}>2$) for PKS\,0625$-$354\ and 3C\,78\ indicate that the X-rays originate from synchrotron emission, implying a high peak synchrotron frequency and $\gamma_{brk}$. \citet{mey11} proposed a scenario where low-power jets (BL Lacs and FR\,I radio galaxies) have longitudinal bulk Lorentz factor gradients.
In this scenario, when viewing more aligned jets one observes the faster portion of the outflow, resulting in high-synchrotron-peaked sources, while for progressively more misaligned sources one sees progressively slower portions of the jet and progressively lower synchrotron peak frequencies. \citeauthor{mey11} also argued that such gradients are absent in high-power jets (FSRQs and FR\,II radio galaxies). In Figure\,\ref{fplp}, we plot the synchrotron peak luminosity ($L_{peak}$) versus the synchrotron peak frequency ($\nu_{peak}$) for the sample analyzed by \citeauthor{mey11} (see Figure\,4 therein), along with the results of model fits from the literature and from this paper for $\gamma$-ray bright radio galaxies. {Error bars on $\nu_{peak}$ and $L_{peak}$ were found from visual inspection of the SEDs.} We do not include NGC\,1275 here, since its synchrotron peak is poorly constrained \citep{abd09a}. The sources PKS\,0625$-$354\ and 3C\,78\ are found to have relatively high values of $\nu_{peak}$, not expected in the framework of the scenario of \citet{mey11}. This seems to disfavor, to some extent, their model, which states that high-peaked sources are only the most aligned jets. It should be emphasized here that the values of $\nu_{peak}$ and $L_{peak}$ from \citet{mey11} were found from polynomial fits to radio, optical, and X-ray data, while values derived or adopted by us for $\gamma$-ray bright radio galaxies are found from a synchrotron/SSC model fit. Our model fits are more physically motivated, but also come with additional assumptions. From the point of view of the Lorentz factor gradient scenario, our models are probably preferred, since this scenario assumes synchrotron/SSC emission. 3C\,78\ is included in the sample of \citet{mey11}, but they obtained significantly lower values for $\nu_{peak}$ than we did (see their Table 3).
We believe this is for two reasons: (i) their phenomenological model fits the radio data, while our synchrotron models do not; and (ii) the inclusion of the hard LAT spectra requires high values of $\gamma_{brk}$, which result in high values of $\nu_{peak}$. The latter indicates that LAT observations can be important for modeling the synchrotron portion of a radio-loud AGN, even though the $\gamma$-rays are not directly produced by synchrotron emission. Finally, we note that there is some ambiguity as to whether PKS\,0625$-$354\ is a BL Lac object or a radio galaxy. The optical spectrum of PKS\,0625$-$354\ resembles that of a BL Lac \citep{wil04}, although its radio morphology resembles an FR\,I radio galaxy \citep{ojh10}. PKS\,0625$-$354\ possesses a relatively bright unresolved core \citep{gov00}, as does 3C\,78\ \citep{chi02}, and all the LAT-detected radio galaxies in Table\,\ref{sedfitpara}. They probably all have intermediate jet alignments, with $\theta$ in the range $\sim10\arcdeg$ to $30\arcdeg$.
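For orientation when reading Table\,\ref{sedfitpara}, recall the standard kinematic relation between the bulk Lorentz factor $\Gamma$, viewing angle $\theta$, and Doppler factor, $\delta = [\Gamma(1-\beta\cos\theta)]^{-1}$. A purely illustrative sketch (note that in some of the quoted models $\delta$ and $\Gamma$ are independent fit parameters, so not every column of the table need obey this relation exactly):

```python
import math

def doppler(gamma, theta_deg):
    """Doppler factor delta = 1 / (Gamma * (1 - beta * cos(theta)))."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return 1.0 / (gamma * (1.0 - beta * math.cos(math.radians(theta_deg))))

# e.g. Gamma = 7.0 at theta = 30 deg (the Cen A row) gives delta close to 1.0,
# and Gamma = 2.3 at theta = 10 deg (the M 87 row) gives delta close to 3.9
```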
\begin{table}[!t] {\scriptsize \begin{center} \caption{SED model parameters of radio galaxies} \label{sedfitpara} \vspace{0.2cm} \begin{tabular}{c|cc|cc|c|c|c|c} \hline \hline & \multicolumn{2}{c}{PKS\,0625$-$354} \vline & \multicolumn{2}{c}{3C\,78} \vline & Cen\,A & M\,87 & NGC\,1275 & NGC\,6251 \\ \hline $\Gamma$ & 5.8 & 5.7 & 2.93 & 5.75 & 7.0 & 2.3 & 1.8 & 2.4 \\ $\delta$ & 5.8 & 5.8 & 2.92 & 5.75 & 1.0 & 3.9 & 2.5 & 2.4 \\ $\theta$ [deg] & 10 & 19 & 20 & 20 & 30 & 10 & 25 & 25 \\ $B$ [G] & 0.82 & 0.11 & 0.77 & 0.02 & 6.2 & 0.055 & 0.05 & 0.04 \\ $t_v$ [Ms] & $0.1$ & $1$ & $0.1$ & $1$ & $0.1$ & $1.2$ & $30$ & $1.7$ \\ $R_b$ [$10^{16}$\,cm] & $1.6$ & $16$ & $0.85$ & $17$ & $0.3$ & $1.4$ & $200$ & $12$ \\ \hline $p_1$ & 2.5 & 2.5 & 2.7 & 2.7 & 1.8 & 1.6 & 2.1 & 2.75 \\ $p_2$ & 3.5 & 3.5 & 3.7 & 3.7 & 4.3 & 3.6 & 3.1 & 4.0 \\ $\gamma_{min}$ & $6\times10^3$ & $6\times10^3$ & $1\times10^3$ & $1\times10^4$ & $3\times10^2$ & 1 & $8\times10^2$ & 250 \\ $\gamma_{max}$ & $2\times10^6$ & $6\times10^6$ & $2\times10^7$ & $2\times10^7$ & $1\times10^8$ & $1\times10^7$ & $4\times10^5$ & $4.4\times10^5$ \\ $\gamma_{brk}$ & $2.9\times10^4$ & $4.6\times10^4$ & $7.3\times10^4$ & $1.4\times10^5$ & $8\times10^2$ & $4\times10^3$ & $9.6\times10^2$ & $2.0\times10^4$ \\ \hline $P_{j,B}$ [$10^{42}$\,erg s$^{-1}$] & $43$ & $740$ & $0.3$ & $2.5$ & $65$ & $0.02$ & $230$ & $0.4$ \\ $P_{j,e}$ [$10^{42}$\,erg s$^{-1}$] & $2$ & $10$ & $0.6$ & $13$ & $31$ & $7$ & $120$ & $160$ \\ \hline \end{tabular} \end{center} The model parameters are as follows: $\Gamma$ is the bulk Lorentz factor, $\delta$ is the Doppler factor, $\theta$ is the jet angle, $B$ is the magnetic field, $t_v$ is the variability timescale, and $R_b$ is the comoving blob size scale, $p_1$ and $p_2$ are the low-energy and high-energy electron spectral indices, respectively, $\gamma_{min}$, $\gamma_{max}$, and $\gamma_{brk}$ are the minimum, maximum, and break electron Lorentz factors, respectively, and $P_{j,B}$ and $P_{j,e}$ are 
the jet powers in magnetic field and electrons, respectively. References: \citet[][for Cen\,A]{abd10a}, \citet[][for M\,87]{abd09a}, \citet[][for NGC\,1275]{abd10b}, \citet[][for NGC\,6251]{mig11}. } \end{table} \section{Conclusions} We have presented {\em Suzaku} results for nearby {\em Fermi}-LAT detected low-power radio galaxies, three of which are analyzed here for the first time. Based on the Fe-K and X-ray spectral slope, X-ray variability, and [O III] line strength, we argued for the jet origin of the observed X-ray emission in PKS\,0625$-$354\ and 3C\,78. {This conclusion is in agreement with the optical spectral classification of both AGN as ``low-excitation radio galaxies''.} We have modeled the broadband SEDs of these two objects including the most recent HE $\gamma$-ray spectra resulting from the analysis of the 5-year accumulation of the LAT data. We found that the bulk Lorentz factors of both sources are typical of those found from modeling the SEDs of FR\,I radio galaxies, and lower than typically found for BL Lac objects. The peak synchrotron frequencies for PKS\,0625$-$354\ and 3C\,78\ are unusually high for radio galaxies, due to their unusually soft X-ray spectra and unusually hard LAT spectra. This seems at odds with the scenario outlined by \citet{mey11}, where high synchrotron peaked objects are the most aligned, and progressively less aligned objects have lower synchrotron peaks. Further studies of PKS\,0625$-$354\ with very long baseline interferometry will help to clarify this issue (\citealt{mul13}, Truestedt et al.\ in preparation). \acknowledgments The authors thank {the anonymous referee for helpful comments that improved the paper,} and the {\em Suzaku} and {\em Fermi} {teams} for the operation, calibration, and data processing. Y.~F. was supported by JSPS KAKENHI Grant Numbers 2400000401 and 2424401400. \L .~S. was supported by Polish NSC grant DEC-2012/04/A/ST9/00083. 
The \textit{Fermi} LAT Collaboration acknowledges generous ongoing support from a number of agencies and institutes that have supported both the development and the operation of the LAT as well as scientific data analysis. These include the National Aeronautics and Space Administration and the Department of Energy in the United States, the Commissariat \`a l'Energie Atomique and the Centre National de la Recherche Scientifique / Institut National de Physique Nucl\'eaire et de Physique des Particules in France, the Agenzia Spaziale Italiana and the Istituto Nazionale di Fisica Nucleare in Italy, the Ministry of Education, Culture, Sports, Science and Technology (MEXT), High Energy Accelerator Research Organization (KEK) and Japan Aerospace Exploration Agency (JAXA) in Japan, and the K.~A.~Wallenberg Foundation, the Swedish Research Council and the Swedish National Space Board in Sweden. Additional support for science analysis during the operations phase is gratefully acknowledged from the Istituto Nazionale di Astrofisica in Italy and the Centre National d'\'Etudes Spatiales in France. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
\section{Moments of Spin Structure Functions} \vspace{-0.3 cm} Polarized DIS has provided a testing ground for the study of the strong force. Moments of spin structure functions (SSF), among them the Bjorken sum, have played an important r\^ole in this study. The n-th (Cornwall-Norton) moment of a SSF is the integral of $x^n g_{1,2}(x,Q^2)$ over $x$. Moments are especially useful because sum rules relate them to other quantities. Such sum rules for $\Gamma_1$, the first moment of $g_1$, are the Ellis-Jaffe~\cite{EJSR} and the Bjorken sum rules~\cite{Bjorken}, derived at large $Q^2$, and the related Gerasimov-Drell-Hearn (GDH) sum rule~\cite{GDH} at $Q^2=0$. The first moment of $g_2$, $\Gamma_2$, is given by the Burkhardt-Cottingham (BC) sum rule~\cite{BC}. Sum rules can also be derived for higher moments, e.g., spin polarizability or $d_2$ sum rules. \\ These relations are useful in many ways: they check the theory on which the rule is based (e.g. QCD for the Bjorken sum rule); they test hypotheses used in the sum rule derivation (e.g. the Ellis-Jaffe sum rules); or they check calculations such as chiral perturbation theory ($\chi pt$), lattice QCD or the Operator Product Expansion (OPE). If a sum rule rests on solid ground or is well tested, it can be used to extract quantities otherwise hard to measure (e.g. generalized spin polarizabilities). Because $\Gamma_{1,2}$ are calculable at any $Q^2$ using either $\chi pt$, lattice QCD or the OPE, they are particularly suited to study the transition between the hadronic and partonic descriptions of the strong force. Measurements in the transition region (intermediate $Q^2$) have recently been made at Jefferson Lab (JLab). \section{Measurements at Jefferson Lab} \vspace{-0.3 cm} At moderate $Q^2$, resonances saturate the moments. JLab's accelerator delivers a CW electron beam with a maximum energy of up to 6 GeV. This makes JLab well suited to measure moments up to $Q^2$ of a few GeV$^2$. 
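For reference, the first moment of $g_1$ and the limit imposed on the proton-neutron difference by the Bjorken sum rule~\cite{Bjorken} can be written, at leading twist and first order in $\alpha_s$, as
\[ \Gamma_1(Q^2)=\int_0^1 g_1(x,Q^2)\,dx~~;~~ \Gamma_1^{p-n}\stackrel{Q^2\to\infty}{\longrightarrow}\frac{1}{6}\left|\frac{g_A}{g_V}\right|\left(1-\frac{\alpha_s(Q^2)}{\pi}+\ldots\right), \]
where $g_A/g_V$ is the nucleon axial coupling measured in neutron $\beta$-decay.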
The beam current can reach 200 $\mu$A with a polarization now reaching 85\%, although at the time of the experiments reported here it was typically 70\%. The beam is sent simultaneously to three halls (A, B and C), all of them equipped with polarized targets. In this talk, we report on results from halls A and B.\\ Hall A~\cite{HallA nim} contains a polarized $^3$He gaseous target and two high-resolution spectrometers (HRS) with 6 mSr acceptance. The target can be polarized longitudinally or transversely at typically 40\% polarization with 10-15 $\mu$A of beam. The target's $\sim10$ atm. of $^3$He gives a luminosity greater than $10^{36}$cm$^{-2}$s$^{-1}$. The Hall B~\cite{HallB nim} luminosity is typically 5$\times 10^{33}$cm$^{-2}$s$^{-1}$, but this is compensated by the large acceptance (about 2.5$\pi$) of the CLAS spectrometer. Cryogenic polarized targets (NH$_3$ and ND$_3$) are well suited for the low beam currents ($\sim$nA) utilized in Hall B. The target is longitudinally polarized with average 75\% (NH$_3$) and 40\% (ND$_3$) polarizations. Both halls can cover the large region of $Q^2$ and $x$ needed to extract moments at various $Q^2$, either because of the large CLAS acceptance (Hall B) or because of the large luminosity, which allows data to be gathered quickly at various beam energies and HRS settings (Hall A).\\ I report here on the Hall A E94010~\cite{E94010} and Hall B EG1 experiments. EG1 was split into two runs: EG1a (1998), whose results are published~\cite{eg1a}, and EG1b (2000), which is still being analyzed. SSF are extracted differently in halls A and B. In Hall A, \emph{absolute} cross-section asymmetries $\Delta \sigma^{\| (\bot)}$ were measured for longitudinal (transverse) target spin orientations. $g_1$ and $g_2$ are linear combinations of these $\Delta \sigma$ and are extracted without external input. Furthermore, unpolarized contributions, e.g. target cell windows or the (mostly) unpolarized protons in the $^3$He nucleus, cancel out. 
The \emph{relative} longitudinal asymmetry $A_{\|}$ is measured in Hall B. Models for $F_1$, $g_2$ and $R=\sigma _L / \sigma _T$ are then used to extract $g_1$. $F_1$ and $R$ are constrained at low $Q^2$ by recent Hall C data~\cite{E94110}. $g_2$ is estimated using models (resonance region) or its leading-twist part $g_2^{ww}$ (DIS domain). The unmeasured low-$x$ part of the moment is estimated using a parametrization developed by the EG1 collaboration, while the E94010 group used a Regge-type fit of DIS data~\cite{Bianchi}. \begin{figure} \includegraphics[height=.48\textheight]{fig1.eps} \caption{First moments $\Gamma_1^{p}$, $\Gamma_1^{n}$, $\Gamma_1^{d}$ and the Bjorken sum $\Gamma_1^{p-n}$. The elastic contribution is excluded.} \end{figure} Results on $\Gamma_1^{p}$, $\Gamma_1^{n}$ and $\Gamma_1^{d}$ are shown in Fig. 1, together with $\chi pt$ calculations~\cite{meissner chipt,ji chipt}, models~\cite{Burkert and Ioffe,Soffer} and the leading-twist OPE prediction. HERMES~\cite{HERMES} and SLAC~\cite{e143} results are also shown. The halls A and B data, reanalyzed at matched $Q^2$ points and with a consistent low-$x$ estimate~\cite{Bianchi}, were used to form the Bjorken sum $\Gamma_1^{p-n}$~\cite{deur}. The preliminary $\Gamma_1^{p-n}$ from EG1b is also shown. $\Gamma_1^{p-n}$ is a unique quantity for studying the parton-hadron transition because its non-singlet structure makes it easier to handle for $\chi pt$, lattice QCD and the OPE. These data form, for both nucleons, an accurate mapping at intermediate $Q^2$ that connects to SLAC, HERMES and CERN DIS data. The $\chi pt$ calculations disagree with the data above $Q^2=0.2$ GeV$^2$, while models based on different physics reproduce the data equally well. The twist-2 description also works well down to low $Q^2$, indicating an overall suppressed higher-twist r\^ole. 
Indeed, in OPE analysis results~\cite{deur,osipenko,ZEM}, the twist-4 and twist-6 coefficients are either small or cancel each other at $Q^2=1$ GeV$^2$.\\ The availability of transverse data in Hall A allows us to form $\Gamma_2^n$ and thereby check the BC sum rule $(\Gamma_2=0)$ on the neutron (Fig. 2). The sum rule is based on dispersion relations and is $Q^2$-invariant. A striking feature is the almost perfect cancellation between elastic and resonance contributions, leading to the verification of the sum rule. \begin{figure} \includegraphics[height=.29\textheight]{fig2.eps} \caption{Moments $\Gamma_2^{n}$ and $\overline{d}_2^n$ (left), and generalized spin polarizabilities $\gamma_0$ and $\delta_{LT}$ (right)} \end{figure} Other sum rules link SSF moments to the generalized spin polarizabilities $\gamma_0$ and $\delta_{LT}$: \begin{eqnarray} \gamma_{0}=\frac{4e^{2}M^{2}}{\pi Q^{6}} \int_{0}^{1^-}x^{2}(g_{1}-\frac{4M^{2}}{Q^{2}}x^2g_{2})dx~~;~~ \delta_{LT}=\frac{4e^{2}M^{2}}{\pi Q^{6}} \int_{0}^{1^-}x^{2}(g_{1}+g_{2})dx \nonumber \end{eqnarray} Results from Hall A can be seen in Fig. 2~\cite{E94010-3}. $\delta_{LT}$ is interesting because the r\^ole of the $\Delta_{1232}$ is suppressed. Hence $\delta_{LT}$ is easier to access with $\chi pt$. However, calculations and data disagree for both $\gamma_0$ and $\delta_{LT}$. The MAID model~\cite{MAID}, however, reproduces the data well. Another higher moment that can be formed is $d_2^n$, the integral of $x^{2}(g_{2} - g_{2}^{ww})$, where $g_2^{ww}$ is the leading-twist part of $g_2$. Thus $d_2$ is sensitive to twist 3 and higher. The measured $\overline{d}_2^n$ (the bar indicates the exclusion of $x=1$) trends toward the lattice QCD results, although larger-$Q^2$ data are necessary to establish a possible agreement. \section{Summary and Perspectives} \vspace{-0.3 cm} The hadron-parton transition region is covered by JLab data on the SSF moments. 
These moments can be calculated at any $Q^2$, thus providing a ground for studying the link between hadronic and partonic descriptions of the strong force. An OPE analysis reveals that in this domain, higher-twist effects are small. The BC sum rule was tested on the neutron and found to be valid. Data and sum rules were used to extract the neutron generalized spin polarizabilities. These disagree with the present $\chi pt$ calculations. Further data from Hall A E01-012~\cite{e01012}, Hall B EG1b, and Hall C RSS~\cite{RSS} will be available shortly in the resonance region. New data at very low $Q^2$ have been taken on the neutron in Hall A~\cite{e97110} and will be gathered in early 2006 for the proton in Hall B~\cite{e03006}. The 12 GeV upgrade of JLab will allow us to access both larger and lower $x$. This will allow for more precise measurements of the moments, in particular by addressing the low-$x$ issue. \begin{theacknowledgments} \vspace{-0.3 cm} \footnotesize{ This work is supported by the U.S. Department of Energy (DOE) and the U.S. National Science Foundation. The Southeastern Universities Research Association operates the Thomas Jefferson National Accelerator Facility for the DOE under contract DE-AC05-84ER40150.} \end{theacknowledgments} \bibliographystyle{aipproc}
\section{Problem statement and results} Resistance to antibiotics in bacteria is a central problem in medicine. One method for mitigating its consequences is to rotate the use of antibiotics. A natural question arises: what is the optimal sequence of antibiotics one should apply? In this work, we follow the model proposed by Mira \latin{et al.}~\cite{mira2014rational}. Suppose we have a population of bacteria, all of the same unmutated genotype (called the wild type). We are given a set of antibiotics. An antibiotic mutates each bacterium of a given type to another type with some known probability. The problem is to find a sequence of antibiotics (called a treatment plan) that maximizes the fraction of bacteria returning to the wild type. This is equivalent to minimizing the fraction of antibiotic-induced mutations in the bacterial population. Here is a concrete description of the model. Let $S$ be a set of $d$ states, where each state is a bacterial genotype. A $d\times d$ matrix $T=(t_{ij})$ is a \defn{transition matrix} if $t_{ij}\in[0,1]$ and each row sums to~$1$. Let $\mathcal{T}$ be a finite set consisting of $K$ transition matrices, each corresponding to the effect of a given antibiotic on the genotypes of the bacterial population. That is, $t_{ij}$ is the probability that a bacterium of genotype $i$ will have genotype $j$ after being treated with the corresponding antibiotic. The standard $(d-1)$-dimensional simplex $\Delta^{d-1}$ is given by $\Delta^{d-1}=\{(x_1,x_2,\dotsc,x_d)\in\mathbb{R}^d\mid\text{$\sum_{i=1}^dx_i=1$ and $x_i\geq0$ for all $i$}\}$. Let $\sigma=(1,0,\ldots,0)\in \Delta^{d-1}$ be the starting state of the population, where all bacteria are of the first state (the wild type). Let $N\in\mathbb{N}$ be the length of the treatment plan. 
The \defn{time machine $\mathcal{TM}(d, K, N)$} is the solution to the following optimization problem: \begin{align} \mbox{ maximize} \hspace{1em} &\s T_1 T_2\dotsm T_N\s^\top \nonumber \\ \mbox{ subject to} \hspace{1em} & T_1, \dotsc, T_N \in \mathcal{T}. \label{eqn:tm} \end{align} The term \emph{time machine}, coined by Mira \latin{et al.}, alludes to the idea of reversing unnecessary mutations caused by antibiotic treatments and returning the population to the wild type. They proposed and solved a specific numerical instance of the time machine for $16$ genotypes of the TEM $\beta$-lactamase, with a specific model of mutation, and a set of up to six antibiotics. Their algorithm is to enumerate all possible treatment plans of a given length with the given set of antibiotics and output the optimal one. With $K$ antibiotics, there are $K^N$ treatment plans of length~$N$, and thus direct enumeration is not an efficient algorithm. Mira \latin{et al.}\ raised the question of whether an efficient algorithm exists. In this paper, we show that the general problem is NP-hard. \begin{thm}\label{thm:tm} The time machine optimization problem in \textup{(\ref{eqn:tm})} is NP-hard. \end{thm} We first consider an easier decision problem: for a given threshold $\a\in[0,1]$, decide whether there exists a treatment plan of length $N$ such that \begin{gather} \s T_1 T_2\dotsm T_N\s^\top \geq \a \nonumber \\ \mbox{ subject to } T_1, \dotsc, T_N \in \mathcal{T}. \label{eqn:tm.decision} \end{gather} Clearly, if one can solve (\ref{eqn:tm}) in polynomial time, then one can also solve (\ref{eqn:tm.decision}) in polynomial time. Our main result states that the latter problem is NP-hard, and thus Theorem~\ref{thm:tm} follows as a corollary. \begin{thm} The time machine decision problem in \textup{(\ref{eqn:tm.decision})} is NP-hard. \end{thm} Our proof relies on reducing $3$-SAT to a special instance of (\ref{eqn:tm.decision}) where $\a=1$. 
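Direct enumeration is simple to implement; the sketch below (our own illustration, assuming NumPy; the two toy "antibiotics" are not from~\cite{mira2014rational}) scores every plan of length $N$:

```python
import itertools
import numpy as np

def time_machine(matrices, N):
    """Brute-force the time machine: score all K**N treatment plans and
    return the best fraction of the population returned to the wild type
    (state 0), together with an optimal plan."""
    d = matrices[0].shape[0]
    sigma = np.zeros(d)
    sigma[0] = 1.0                        # population starts at the wild type
    best_val, best_plan = -1.0, None
    for plan in itertools.product(range(len(matrices)), repeat=N):
        state = sigma
        for i in plan:
            state = state @ matrices[i]   # apply one antibiotic
        if state[0] > best_val:           # sigma T_1 ... T_N sigma^T
            best_val, best_plan = state[0], plan
    return best_val, best_plan

# Two toy antibiotics on d = 2 genotypes: one swaps the genotypes,
# the other scrambles them uniformly.
T_swap = np.array([[0.0, 1.0], [1.0, 0.0]])
T_mix = np.array([[0.5, 0.5], [0.5, 0.5]])
val, plan = time_machine([T_swap, T_mix], N=2)
print(val, plan)  # 1.0 (0, 0): swapping twice restores the wild type
```

For the $16$-genotype instance of Mira \latin{et al.}\ with six antibiotics, the loop body runs $6^N$ times, which is exactly the exponential cost that motivates asking for a faster algorithm.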
Since $3$-SAT is NP-hard \cite{karp1972reducibility}, we have that (\ref{eqn:tm.decision}) is NP-hard as well. The reduction construction is described in Section~\ref{sec:3sat.reduction}. The proof that this is a correct reduction is in Section~\ref{sec:3sat.proof}. \subsection{Related literature} The time machine may also arise in large-scale robotics control, where one has a large number of robots with $d$ possible states, and each $T_i$ is an instruction which changes each robot's state independently at random. In general, it can be viewed as a Markov decision problem with infinitely many states. Indeed, if there were only one bacterium instead of a population, one may apply an antibiotic $T_i$, check the new state of the bacterium, and repeat. In this case, the time machine is a Markov decision process on $d$ states, which can be solved by dynamic programming. When there is a population of bacteria, the population state is a point in the standard $(d-1)$-dimensional simplex $\Delta^{d-1}$, which is an infinite set. There is a large literature on Markov decision processes, see \cite{feinberg2002handbook} and references therein. However, we are unaware of any existing results on the time machine, or the equivalent of Theorem~\ref{thm:tm}. Closely related to the time machine is the partially observable Markov decision process \cite{dutech2013partially}, where the underlying system is assumed to be in one of the unobservable $d$ states, and $s \in \Delta^{d-1}$ reflects the uncertainty over the true state. It would be interesting to see if techniques from this setup can yield polynomial-time approximations to the time machine. \subsection{Open problems} Many interesting questions remain on the antibiotics time machine. We mention two examples. The last question was raised by Joe Kileel. \begin{enumerate} \item For what classes of antibiotics $\mathcal{T}$ is the problem solvable in polynomial time? Are there biologically relevant instances? 
\item When does the limit $\lim_{N \to \infty}\max_{T_i \in \mathcal T}\s T_1 T_2\dotsm T_N\s^\top$ exist? If so, how fast is the convergence? In this case, for large $N$, can one produce an approximate time machine in polynomial time? \end{enumerate} \section{Reduction construction} \label{sec:3sat.reduction} Let $X=\{x_1,\dotsc,x_n\}$ be a set of Boolean variables taking Boolean values $\{+,-\}$. A triple $(\varepsilon_ix_i,\varepsilon_jx_j,\varepsilon_kx_k)$ with $\varepsilon_i,\varepsilon_j,\varepsilon_k\in\{+,-\}$ is called a \defn{clause}. Let $\mathcal{C}=\{c_1,\dotsc,c_m\}$ be a set of clauses. Write $(x_i = v_i)_i$ to mean $x_i = v_i$ for $i = 1,\dotsc, n$. An \defn{assignment} $(x_i=v_i)_i$ \defn{satisfies} a clause $c=(\varepsilon_ix_i,\varepsilon_jx_j,\varepsilon_kx_k)$ if $(\varepsilon_i,\varepsilon_j,\varepsilon_k)\neq(v_i,v_j,v_k)$. Without loss of generality, assume each $x\in X$ actually occurs (negated or not) somewhere. The $3$-SAT problem is: given a set of variables and clauses, decide whether there exists an assignment that satisfies all clauses. In this section, we reduce a $3$-SAT instance to an instance of (\ref{eqn:tm.decision}). Let $N=m+2$ and $\a=1$. Create $d=3n+m+3$ states and $K=7m+2$ transition matrices. Create the start state~$\sigma$, a ``death'' state~$\d$, and a ``tally'' state~$\state{f}$. For each variable $x\in X$, create states $\mathbf{x},\mathbf{x}^-,\mathbf{x}^+$. For each clause $c=(\varepsilon_ix_i,\varepsilon_jx_j,\varepsilon_kx_k)\in\mathcal{C}$, create state~$\c$. We identify each state with its characteristic (row) vector and define each transition matrix by its action on these basis vectors and extend linearly. 
\renewcommand{\S}{S} For each choice of $(v_i,v_j,v_k)\in\{+,-\}^3\smallsetminus\{(\varepsilon_i,\varepsilon_j,\varepsilon_k)\}$, create a transition matrix $T=T_c^{v_i,v_j,v_k}$ as follows: \[ \mathbf{z} T=\begin{cases} \d & \mathbf{z}=\sigma \\ \state{f} & \mathbf{z}=\c \\ \mathbf{x}_\l^{v_\l} & \mathbf{z}=\mathbf{x}_\l,\ \l\in\{i,j,k\} \\ \d & \mathbf{z}=\mathbf{x}_\l^{-v_\l},\ \l\in\{i,j,k\} \\ \mathbf{z} & \text{otherwise.} \end{cases} \] Define a starting matrix $\S$ by \[ \mathbf{z} \S=\begin{cases} p\left(\sum_{x\in X}\mathbf{x} + \sum_{c\in\mathcal{C}}\c\right) & \mathbf{z}=\sigma \\ \d & \text{otherwise,} \end{cases} \] where $p=1/(n+m)$. Lastly, define a final matrix $\mathbf{F}$ by \[ \mathbf{z} \mathbf{F}=\begin{cases} \sigma & \mathbf{z}\in\left\{\mathbf{x}_i^{v}: 1\le i\le n,\ v\in\{+,-\}\right\} \\ \sigma & \mathbf{z}=\state{f} \\ \d & \text{otherwise.} \end{cases}\] Let $\mathcal{T}$ consist of $\S$, $\mathbf{F}$, and the $7m$ transition matrices $T_c^{v_i,v_j,v_k}$ defined above. This concludes the reduction construction. \section{Proof of Main Theorem} \label{sec:3sat.proof} We shall prove that with the setup in Section~\ref{sec:3sat.reduction}, the time machine decision problem (\ref{eqn:tm.decision}) solves the $3$-SAT problem with the given clauses. Since the latter is NP-hard, this implies that (\ref{eqn:tm.decision}) is NP-hard as well. We refer to the fraction of the bacterial population in a given state as its \defn{weight}. First, suppose $(x_i=v_i)_i$ is a satisfying assignment. Apply $\S$ to~$\sigma$. For each $c=(\varepsilon_ix_i,\varepsilon_jx_j,\varepsilon_kx_k)\in\mathcal{C}$, apply~$T_c^{v_i,v_j,v_k}$, which exists as $c$ is satisfied. Finally, apply~$\mathbf{F}$. \begin{lem} The sequence of matrices given above sends all weights back to~$\sigma$. That is, it is a solution of \textup{(\ref{eqn:tm.decision})} for $\alpha = 1$. 
\end{lem} \begin{proof} For each $x\in X$, the matrix $\S$ sends some weight to~$\mathbf{x}$. The weight is moved to $\mathbf{x}^{v_i}$ the first time $\pm x$ occurs in a clause~$c$, and is moved back to $\sigma$ by $\mathbf{F}$ at the end. For each $c\in\mathcal{C}$, the matrix $\S$ sends some weight to~$\c$. This weight is moved to $\state{f}$ by $T_c^{v_i,v_j,v_k}$, and is moved back to $\sigma$ by $\mathbf{F}$ at the end. Therefore all the weights return to~$\sigma$, as desired. \end{proof} Conversely, suppose there is a sequence $T_1,T_2,\dotsc,T_N$ of transition matrices so that $$\s T_1 T_2\dotsm T_N\s^\top=1.$$ The aim is to extract a satisfying assignment. Consider the process of applying the transition matrices $T_i$ to $\sigma$ sequentially in $N$ steps. Note that any weight at $\d$ stays at $\d$ forever. As such, to achieve full weight at~$\sigma$ after $N$ steps, the state $\d$ cannot receive weight at any point in the process.\footnote{Intuitively, $\d$ is a ``death'' state, where bacteria go to die, never to be recovered to the wild type.} Consequently, it is clear that the first matrix to apply has to be~$\S$, as we have $\sigma T=\d$ for $T\in\mathcal{T}\smallsetminus\{\S\}$. The only way for $\sigma$ to gain weight is to apply~$\mathbf{F}$. Prior to applying $\mathbf{F}$ for the first time, all weights must be supported on the $\mathbf{x}_i^\pm$ and~$\state{f}$, so as to avoid losing any weight to the death state~$\d$. In particular, for each clause $c=(\varepsilon_ix_i,\varepsilon_jx_j,\varepsilon_kx_k)\in\mathcal{C}$, state~$\c$ must no longer carry weight at this point. That is, after $\c$ receives some weight by~$\S$ in the first step, it must subsequently lose the weight, which can only be achieved by applying an associated matrix $T_c^{v_i,v_j,v_k}$ for some choice of $(v_i,v_j,v_k)$. This takes (at least) one step for each clause. 
Since the sequence of matrices is of length precisely $N=m+2$, we conclude that the sequence starts with $\S$, finishes with $\mathbf{F}$, and contains precisely one matrix corresponding to each clause. When $\S$ is applied, the full weight at $\sigma$ is split into $n+m$ \defn{packets}, each of weight $p=1/(n+m)$. Since no other matrices split weights, we may consider the remaining process as a discrete system moving each packet as a unit. We already saw that the $m$ packets associated to the clauses are moved to~$\state{f}$ during the middle $m$ steps. It remains to analyze the remaining $n$ packets associated to the variables. To avoid moving any weight to the death state~$\d$, the packet at $\mathbf{x}_i$ can only be moved to $\mathbf{x}_i^+$ or $\mathbf{x}_i^-$. Once this happens, the packet can only be moved again by $\mathbf{F}$ at the last step. So at the penultimate step, there is a packet on $\mathbf{x}_i^{\v_i}$ for exactly one choice of~$\v_i$ for each~$i$. The following lemma finishes the proof. \begin{lem} For each $i$, let $\v_i$ be such that $\mathbf{x}_i^{\v_i}$ has nonzero weight after $m+1$ steps, \latin{i.e.}, $$\sigma T_1 T_2\dotsm T_{N-1}(\mathbf{x}_i^{\v_i})^\top>0.$$ Then $(x_i=\v_i)_i$ is a satisfying assignment. \end{lem} Indeed, consider a clause $c=(\varepsilon_ix_i,\varepsilon_jx_j,\varepsilon_kx_k)\in\mathcal{C}$. We know that (exactly) one associated transition matrix $T_c^{v_i,v_j,v_k}$ is used. Suppose, towards a contradiction, that $\v_\l\neq v_\l$ for some $\l\in\{i,j,k\}$. After applying $T_c^{v_i,v_j,v_k}$, the packet corresponding to $x_\l$ is at $\mathbf{x}_\l^{v_\l}$ or~$\d$, with no way of moving to $\mathbf{x}_\l^{\v_\l}$, a contradiction. So $(\v_i,\v_j,\v_k)=(v_i,v_j,v_k)\neq(\varepsilon_i,\varepsilon_j,\varepsilon_k)$ by construction, implying that clause~$c$ is satisfied. \qed This shows that a $3$-SAT instance has a solution if and only if the associated time machine can attain a threshold of~$1$. 
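As a sanity check of the construction, the sketch below (our own illustration, assuming NumPy; the state indexing is ours) builds $\mathcal{T}$ for a toy instance with $n=3$ variables and the single clause $c=(+x_1,+x_2,+x_3)$, which is satisfiable, and verifies by exhaustive search over all $9^3$ plans of length $N=3$ that the time machine attains the threshold $1$:

```python
import itertools
import numpy as np

# Toy 3-SAT instance: variables x1, x2, x3 and the single clause
# c = (+x1, +x2, +x3), i.e. every assignment except (+,+,+) satisfies it.
n, m = 3, 1
N = m + 2
d = 3 * n + m + 3
sigma, delta, f = 0, 1, 2                # start, death, tally states
x = lambda l: 3 + 3 * l                  # state x_l
xv = lambda l, v: 3 + 3 * l + (2 if v > 0 else 1)   # states x_l^+ / x_l^-
c0 = 3 + 3 * n                           # clause state

mats = []
# Seven matrices T_c^{v} for each v != (+,+,+).
for v in itertools.product([+1, -1], repeat=3):
    if v == (+1, +1, +1):
        continue
    T = np.eye(d)
    T[sigma] = 0; T[sigma, delta] = 1    # sigma -> death
    T[c0] = 0;    T[c0, f] = 1           # clause state -> tally
    for l in range(3):
        T[x(l)] = 0;         T[x(l), xv(l, v[l])] = 1    # x_l -> x_l^{v_l}
        T[xv(l, -v[l])] = 0; T[xv(l, -v[l]), delta] = 1  # x_l^{-v_l} -> death
    mats.append(T)
# Starting matrix S: sigma spreads uniformly over variable and clause states.
S = np.zeros((d, d)); S[:, delta] = 1
S[sigma] = 0
for l in range(n):
    S[sigma, x(l)] = 1.0 / (n + m)
S[sigma, c0] = 1.0 / (n + m)
mats.append(S)
# Final matrix F: the x_l^{+/-} and tally states return to sigma.
F = np.zeros((d, d)); F[:, delta] = 1
for l in range(n):
    for v in (+1, -1):
        F[xv(l, v)] = 0; F[xv(l, v), sigma] = 1
F[f] = 0; F[f, sigma] = 1
mats.append(F)

start = np.zeros(d); start[sigma] = 1.0
best = max((start @ mats[a] @ mats[b] @ mats[c])[sigma]
           for a, b, c in itertools.product(range(len(mats)), repeat=3))
print(best)  # 1.0: the clause is satisfiable
```

The winning plans have the predicted shape: $\S$ first, one matrix $T_c^{v_1,v_2,v_3}$ with $(v_1,v_2,v_3)\neq(+,+,+)$ in the middle, and $\mathbf{F}$ last.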
We therefore conclude that the time machine decision problem is NP-hard, as desired. \subsection*{Acknowledgements} The authors wish to thank Bernd Sturmfels and Joe Kileel for stimulating discussions. The first-named author is supported by an award from the Simons Foundation (\#197982 to The University of Texas at Austin). The second-named author is supported by NSF RTG grant NSF/DMS-1148634. \bibliographystyle{alpha}
\section{Introduction} Device-to-device (D2D) communication is envisioned to be an integral part of fifth generation (5G) cellular networks \cite{asadi2014survey}. Despite its potential advantages, the large-scale implementation of D2D communication has been delayed mainly due to spectrum scarcity in the sub-6 GHz band, which leads to severe multi-user interference (MUI) \cite{tehrani2014device}. Utilizing the abundant unlicensed bandwidth in the millimeter wave (mmWave) band is seen as a promising candidate for addressing the impediments of D2D communication \cite{qiao2015enabling}. Radio propagation in the mmWave band encounters several obstacles such as severe path-loss and sensitivity to blockage \cite{rappaport2013millimeter}. The small wavelength of mmWave signals, however, facilitates the implementation of large directional and high-gain antenna arrays on D2D devices, which helps to compensate for the additional path-loss \cite{wei2014key}. This, in turn, introduces a new challenge to D2D communication, as achieving the maximum directivity gain in a highly directional mmWave system requires the transmitter and receiver to be precisely aligned. In the implementation of directional antennas, the angle-of-arrival (AoA), which represents the angle of incidence of the incoming signal power, is acquired to enable D2D devices to steer and align their antenna bore-sight toward the desired signal. In practice, the AoA estimation is not completely accurate due to multiple sources of error such as antenna configuration perturbations and the mobility of devices \cite{li2006outage}. Any error in estimating the AoA leads to beam alignment error, which subsequently may cause significant array gain variation or even signal outage at the receiver. Hence, it is crucial to analyze the impact of beam alignment error on the performance of the mmWave D2D network. 
Nevertheless, a majority of research work in the area of directional mmWave band communication assumes perfect beam alignment \cite{bai2015coverage,thornburg2016performance,bahadori2018device}. We note several works that explicitly account for alignment error in directional wireless networks \cite{li2006outage,yang2016analysis,wildman2014joint,thornburg2015ergodic}. The authors in \cite{li2006outage} investigated the impact of an erroneous uniform linear array beamformer on the network outage probability and reported a degradation in the network's performance in the presence of beamforming error. A stochastic geometry framework is used in \cite{wildman2014joint} to capture the effect of beam misdirection using the flat-top antenna model. The loss in capacity and signal power due to misalignment in a mmWave band directional communication network is quantified in \cite{thornburg2015ergodic} and \cite{yang2016analysis}, respectively. However, most of the existing analyses are either performed in the sub-6 GHz band or fail to consider the sensitivity of mmWave band communication to blockage. For analytical tractability, some works adopted the simplified flat-top antenna model, which fails to provide an accurate model for network assessment. Others merely assumed small alignment errors for tractability, which is not a valid assumption for mmWave D2D networks due to the users' mobility. We are aware of no work that considers the impact of erroneous beam alignment while taking into account all of the mentioned gaps simultaneously. In this paper, we analyze the impact of inaccurate AoA estimation on the coverage probability of a directional mmWave D2D network, where the network elements are modeled as a homogeneous Poisson point process (PPP). The directional antenna is approximated by adopting the cosine antenna model, which, compared to the simplified flat-top model, provides a better approximation for the antenna pattern. 
The erroneous beam alignment due to AoA estimation is characterized by uniform and Gaussian distributions. We have used tools from stochastic geometry to derive the distribution of the received signal and interference, which is used to quantify the network performance. The analytical results are shown to be accurate through numerical evaluation. Lastly, in order to assess the impact of alignment error, the performance of the D2D network with beam misalignment is compared to that with perfect beam alignment. Simulation results show that network performance in terms of coverage probability can be affected significantly by beam misalignment. \setlength\belowcaptionskip{-2.45ex} \begin{figure} \centering \includegraphics[width=7.5cm,height=4.5cm, trim=1.5cm 8.6cm 1.5cm 9.0cm, clip]{networkC.pdf} \caption{A sample realization of the described network model, where blue rectangles represent the building blockages.} \label{fig:network} \end{figure} The remainder of this paper is organized as follows. The system model and performance measure are described in Section \ref{sec:system model}. The beam misalignment, along with the directional antenna gain distribution in the presence of beam alignment error, is characterized in Section \ref{sec:antModel}. The coverage probability of the mmWave D2D network, as well as the impact of antenna misalignment, is then derived in Section \ref{sec:coverageProb} using stochastic geometry. Numerical results are presented in Section \ref{sec:result} and finally, conclusions are drawn in Section \ref{sec:Conclusion}. \section{System model}\label{sec:system model} We consider a directional D2D network in the mmWave band in which D2D transmitters are spatially distributed according to a homogeneous Poisson point process (PPP), denoted as $\mathbf{\Phi}=\{x_i\}$ with density $\lambda$, where $x_i \in \mathbb{R}^2$ denotes the location of the $i$-th D2D transmitter. 
Rectangular building blockages of random size are also distributed according to another independent PPP. Figure \ref{fig:network} shows a sample realization of the network. Using the mechanism proposed in \cite{bahadori2018device}, each D2D transmitter is assumed to have a corresponding LOS receiver in its coverage area and at least one packet ready for transmission. Without loss of generality, we consider that each D2D receiver has a single receive antenna and its corresponding D2D transmitter is equipped with an array of antennas which allows directional transmission, as depicted in Figure \ref{fig:systemModel}. Since we have no prior information about the D2D transmitter's antenna bore-sight angle, denoted by $\varphi_i$, it is modeled as a uniform random variable, $\varphi\sim \mathcal{U}(-\pi,\pi)$. Sidestepping the problem of power control, all D2D transmitters transmit at a constant power $P_D$. Each communication link experiences i.i.d.\ small-scale Rayleigh fading. Hence, the received signal power can be modeled as an exponential random variable with parameter 1. Moreover, no prior coordination among devices for interference mitigation is assumed. Here, we use the signal-to-interference-plus-noise ratio (SINR) coverage probability as a metric to assess the performance of the network. The coverage probability is defined as the probability that the received SINR is higher than a predefined threshold $\gamma$, i.e., $p_{\text{c}} (\gamma)={\mathbb{P}}[\text{SINR}\geq\gamma]$. The performance metric is obtained for a \emph{typical} D2D transmitter-receiver pair with the receiver located at the origin $(0,0)\in \mathbb{R}^2$, while the result holds for any generic D2D pair, based on Slivnyak's theorem \cite{baccelli2010stochastic}. 
The SINR for the typical receiver can be written as \begin{equation} \text{SINR}=\frac{P_D h_0 G_0(\theta) C d_0^{-\alpha}}{\sigma^2+I}\label{eq:SINR}, \end{equation} where $h_0$ represents the corresponding transmitter's channel gain. The directional antenna gain is parameterized by $G_0(\theta)$, where $\theta$ represents the antenna angle, $C$ symbolizes the path-loss intercept and ${\alpha}$ is the path-loss exponent. The distance between the typical receiver and its transmitter at $x_0$ is denoted by $d_0=\|x_0\|$, and $\|\cdot\|$ represents the Euclidean distance. Finally, $\sigma^2$ and $I$ represent the noise power and aggregate interference, respectively. \begin{figure} \centering \includegraphics[width=.6\columnwidth]{systemModelFinall.PNG} \caption{A sample realization of the D2D network with directional transmitter and omni-directional receiver.} \setlength{\abovecaptionskip}{-3cm} \label{fig:systemModel} \end{figure} \textbf{Blockage model-} In order to model the blockage effect in the mmWave band, we use the line-of-sight (LOS) ball model \cite{bai2015coverage}, in which the actual shape of the LOS region around each D2D receiver is approximated as a fixed-sized ball with radius $R$. Since NLOS transmission in the mmWave band suffers from high attenuation, only the interference from LOS D2D transmitters is considered and NLOS transmissions are neglected \cite{rappaport2013millimeter}. Based on the LOS ball model definition, all LOS D2D transmitters near the typical D2D receiver are located inside a disk of radius $R$ centered at the origin. We note that, by the thinning theorem of the PPP \cite{baccelli2010stochastic}, LOS D2D transmitters form a PPP, denoted by $\mathbf{\Phi}_L$, with density $\lambda_L$.
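As a quick sanity check of the LOS ball abstraction (ours, not part of the paper), the following Python sketch samples a homogeneous PPP in a square window and counts the points that land inside the LOS ball around the origin; by the thinning theorem the count should average $\lambda_L \pi R^2 \approx 14.1$ for the density and radius used later in the simulations (here $\lambda_L=\lambda$ is assumed, i.e., every transmitter in the ball is taken to be LOS, and the window half-width is an arbitrary choice):

```python
import math
import random

random.seed(9)

LAM = 50e-6   # transmitter density per m^2 (50 per km^2)
R = 300.0     # LOS ball radius in metres
W = 1000.0    # half-width of the simulation window in metres (assumed)

def sample_poisson(mean):
    """Knuth's method; adequate for moderate means."""
    limit, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def los_count():
    # Homogeneous PPP in the window, thinned to the LOS ball around the origin.
    n = sample_poisson(LAM * (2 * W) ** 2)
    pts = ((random.uniform(-W, W), random.uniform(-W, W)) for _ in range(n))
    return sum(math.hypot(x, y) <= R for x, y in pts)

trials = 2000
avg = sum(los_count() for _ in range(trials)) / trials
print(avg, LAM * math.pi * R * R)  # empirical mean vs. lambda_L * pi * R^2
```

The two printed numbers agree to within Monte Carlo noise, which is the property the interference analysis below relies on.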
The aggregate interference received by the typical D2D receiver can be defined as \begin{equation} I = \sum_{i \in \mathbf{\Phi}_L} P_D h_i G_i(\beta_i) C \|x_i\|^{-\alpha}\label{eq:Inter}, \end{equation} where $h_i$ denotes the channel gain of the $i$-th LOS D2D transmitter located at $x_i \in \mathbf{\Phi}_L$. The antenna gain of the $i$-th D2D transmitter is characterized by $G_i(\beta_i)$, in which $\beta_i=|\varphi_i-\theta_{i,0}|$, and $\theta_{i,0}$ is the angle between the $i$-th transmitter and the typical D2D receiver, as shown in Figure \ref{fig:systemModel}. Note that the antenna angle of the interferers, denoted by $\beta_i$, is independent of the location of points in the PPP $\mathbf{\Phi}_L$, and can be considered uniformly distributed since $\varphi_i$ has a uniform distribution \cite{wildman2014joint}. \section{Directional Antenna and Alignment Error}\label{sec:antModel} The directional antenna pattern is modeled using the cosine function. The cosine model provides a better approximation of the antenna main-lobe \cite{yu2017coverage}, compared to the flat-top model which is widely used in the literature \cite{bai2015coverage,bai2014analysis,wildman2014joint}. The antenna gain can be defined as \begin{align}\allowdisplaybreaks G(\theta)=& \begin{cases} G_m\cos^2(\frac{\tau \theta}{2})& \hspace{2mm} |\theta|\leq \frac{\pi}{\tau} \\ 0 & \hspace{2mm} \text{otherwise}, \end{cases}\label{eq:pattern} \end{align} where $G_m$ represents the maximum gain, and $\tau$ controls the spread of the antenna beam. $\theta$ symbolizes the antenna angle relative to the antenna's bore-sight angle, denoted by $\varphi$, as illustrated in Figure \ref{fig:Angles}. We assume that each D2D user is able to find the direction of its intended peer using the AoA spectrum \cite{bahadori2018device}, and to steer its antenna bore-sight toward the direction of its receiver with a simple rotation around its location, so as to transmit with the maximum gain.
Under the assumption of perfect alignment, each transmitter determines the direction of its receiver accurately. However, accurate beam alignment is not a practical assumption. Any error in estimating the AoA will cause the antenna array to point away from the intended receiver, reducing the received power of the desired signal. \textbf{Beam alignment error-} Each D2D transmitter determines the AoA (orientation) of its receiver with an additive error, denoted by $\varepsilon_i$. The alignment error is measured relative to the transmitter's antenna bore-sight angle and characterizes the angle between the actual and estimated bearing of the receiver, as shown in Figure \ref{fig:Angles}. In this work, the AoA estimation error is characterized by two different random variables, namely, uniform and normal distributions. The uniform distribution is used to model the scenario where our knowledge about the estimation error is limited to the error bounds, and we have no prior information on the error magnitude. Therefore, we assume that all error values between the minimum and maximum occur with equal likelihood. On the other hand, the normal distribution is used to model the scenario where we know that some values of the error are more probable (the ones near the mean value) than others. Note that according to the central limit theorem, the normalized sum of mutually independent random variables with finite variance is well-approximated by a normal distribution. Hence, the beam alignment error, which stems from multiple independent sources of uncertainty in the system, can be modeled by a normal distribution. \begin{figure} \centering \includegraphics[width=.57\columnwidth]{anglesEdited.PNG} \caption{A directional D2D transmitter and its corresponding pair. The red arrow depicts the direction of the antenna's bore-sight angle, $\varphi$.
The D2D transmitter determines the receiver's direction with an error of $\varepsilon$, and the antenna gain in the presence of error is $G(\varepsilon)$. } \setlength{\abovecaptionskip}{-3cm} \label{fig:Angles} \end{figure} \begin{lemma}\label{lem:lemma1} Given that the AoA estimation error is distributed uniformly, as $\varepsilon \sim \mathcal{U}(-\varepsilon_0, \varepsilon_0)$ with zero mean and $| \varepsilon_0 |< \pi$, the probability density function (pdf) of the D2D transmitter's antenna gain is \begin{align}\allowdisplaybreaks &f_{G}(g)=\frac{1}{\tau \varepsilon_0 \sqrt{g}\sqrt{G_m-g}}\label{eq:pdf}, \end{align} where \begin{align}\allowdisplaybreaks &\text{for} \hspace{2mm} |\varepsilon_0|<\frac{\pi}{\tau}, \hspace{4mm} g \in \left[\kappa G_m ,G_m\right]\nonumber\\ &\text{for} \hspace{2mm} \frac{\pi}{\tau}\leq |\varepsilon_0|<\pi , \hspace{4mm} g \in \left[0 ,G_m\right]\nonumber \end{align} and $\kappa=\cos^2(\frac{\tau \varepsilon_0}{2})$. \end{lemma} \vspace{2mm} \begin{IEEEproof} As illustrated in Figure \ref{fig:Angles}, the alignment error reduces the antenna gain to $G(\varepsilon)$, which is less than the antenna maximum gain, $G(\theta=0)=G_m$.
In order to characterize the antenna gain distribution, the cumulative distribution function (CDF) of the antenna gain can be derived as \begin{align}\allowdisplaybreaks F_{G}(g)&={\mathbb{P}}\left[G(\varepsilon)\leq g\right] \nonumber\\ &={\mathbb{P}}\left[ -\varepsilon_0\leq \varepsilon \leq -G^{-1}(g) \right]+{\mathbb{P}}\left[ G^{-1}(g) \leq \varepsilon \leq \varepsilon_0\right]\nonumber\\ &= \int^{-G^{-1}(g)}_{-\varepsilon_0} \frac{1}{2\varepsilon_0}\text{d}\varepsilon + \int_{G^{-1}(g)}^{\varepsilon_0} \frac{1}{2\varepsilon_0}\text{d}\varepsilon\nonumber\\ &= 1-\frac{2}{\varepsilon_0\tau}\arccos \left(\sqrt{\frac{g}{G_m}}\right)\nonumber\\ F_G(g)&= \begin{cases} 0 & g < 0\\ 1-\frac{\pi}{\tau \varepsilon_0} & g = 0 \\ 1-\frac{2}{\varepsilon_0\tau}\arccos \left(\sqrt{\frac{g}{G_m}}\right) & 0 < g \leq G_m\\ 1 & g > G_m \label{eq:CDFCosine} \end{cases} \end{align} where $G^{-1}(g)=\frac{2}{\tau}\arccos (\sqrt{\frac{g}{G_m}})$. The pdf in (\ref{eq:pdf}) is derived by taking the derivative of the CDF in (\ref{eq:CDFCosine}). \end{IEEEproof} \vspace{2mm} \begin{lemma}\label{lem:lemma2} Given that the AoA estimation error is modeled as a truncated Gaussian distribution, $\varepsilon \sim N_t (0,s^2, -\varepsilon_0, \varepsilon_0)$ with $| \varepsilon_0 |< \pi$, the probability density function (pdf) of the D2D transmitter's antenna gain is \begin{align}\allowdisplaybreaks &f_{G}(g)=\frac{\sqrt{\zeta}\exp\left(-\zeta\arccos^2(\sqrt{\frac{g}{G_m}})\right)}{\text{erf}\left(\frac{\varepsilon_0}{\sqrt{2s^2}}\right) \sqrt{\pi}\sqrt{g}\sqrt{G_m-g}},\label{eq:pdfGauss} \end{align} where \begin{align}\allowdisplaybreaks &\text{for} \hspace{2mm} |\varepsilon_0|<\frac{\pi}{\tau}, \hspace{4mm} g \in \left[\kappa G_m ,G_m\right]\nonumber\\ &\text{for} \hspace{2mm} \frac{\pi}{\tau}\leq |\varepsilon_0|<\pi , \hspace{4mm} g \in \left[0 ,G_m\right]\nonumber \end{align} and $\zeta=\frac{2}{\tau^2s^2}$.
\end{lemma} \vspace{2mm} \begin{IEEEproof} Following the same procedure as in the proof of Lemma \ref{lem:lemma1}, the antenna gain CDF with Gaussian-distributed misalignment can be written as \begin{align}\allowdisplaybreaks F_{G}(g)&={\mathbb{P}}\left[G(\varepsilon)\leq g\right] \nonumber\\ &=2\int_{G^{-1}(g)}^{\varepsilon_0} \frac{\exp(\frac{-\varepsilon^2}{2s^2})}{\sqrt{2\pi s^2}.\text{erf}\left(\frac{\varepsilon_0}{\sqrt{2s^2}}\right)}\text{d}\varepsilon\nonumber\\ &= 1-\frac{\text{erf}\left(\sqrt{\zeta}\arccos(\sqrt{\frac{g}{G_m}})\right)}{\text{erf}\left(\frac{\varepsilon_0}{\sqrt{2s^2} }\right)}\nonumber \\ F_G(g)&= \begin{cases} 0 & g < 0\\ 1-\frac{\text{erf}\left(\frac{\pi\sqrt{\zeta}}{2}\right)}{\text{erf}\left(\frac{\varepsilon_0}{\sqrt{2s^2}}\right)} & g = 0 \\ 1-\frac{\text{erf}\left(\sqrt{\zeta}\arccos(\sqrt{\frac{g}{G_m}})\right)}{\text{erf}\left(\frac{\varepsilon_0}{\sqrt{2s^2} }\right)} & 0 < g \leq G_m\\ 1 & g > G_m \label{eq:CDFGuass} \end{cases} \end{align} \end{IEEEproof} It is worth noting that the beam alignment error does not change the distribution of the interference, as the bore-sight angles of the interferers are distributed uniformly and independently of the desired transmitter's signal. We will justify the accuracy of this assumption through simulations in Section \ref{sec:result}.
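Lemma \ref{lem:lemma1} is easy to check numerically. The following Python sketch (ours, not part of the paper; the values of $G_m$, $\tau$ and $\varepsilon_0$ are the illustrative ones used later in the results section) samples the uniform alignment error, pushes it through the cosine pattern (\ref{eq:pattern}), and compares the empirical CDF with (\ref{eq:CDFCosine}); a check of Lemma \ref{lem:lemma2} is analogous, with truncated-Gaussian sampling:

```python
import math
import random

random.seed(0)

G_M = 10.0              # maximum antenna gain (linear)
TAU = 3                 # main-lobe spread
EPS0 = 0.4 * math.pi    # uniform error bound (eps0 > pi/tau, so gain can reach 0)

def gain(eps):
    # Cosine antenna pattern of eq. (pattern)
    return G_M * math.cos(TAU * eps / 2.0) ** 2 if abs(eps) <= math.pi / TAU else 0.0

def cdf_lemma1(g):
    # F_G(g) from Lemma 1 for 0 < g <= G_M
    return 1.0 - (2.0 / (TAU * EPS0)) * math.acos(math.sqrt(g / G_M))

n = 200_000
samples = [gain(random.uniform(-EPS0, EPS0)) for _ in range(n)]
for g in (2.0, 5.0, 8.0):
    empirical = sum(s <= g for s in samples) / n
    print(f"g = {g}: empirical {empirical:.4f}  vs  Lemma 1 {cdf_lemma1(g):.4f}")
```

The empirical and analytical CDFs agree to within sampling noise at each test point.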
\section{Coverage Probability}\label{sec:coverageProb} Using equations (\ref{eq:SINR}) and (\ref{eq:Inter}), and the antenna gain pdf in (\ref{eq:pdf}) and (\ref{eq:pdfGauss}), the SINR coverage probability for the typical receiver can be written as \begin{align}\allowdisplaybreaks p_\text{c}(\gamma) &= {\mathbb{P}}\left[\text{SINR}\geq \gamma\right]\nonumber\\ &={\mathbb{P}}\left[\frac{h_0 G_0 (\varepsilon) d_0^{-\alpha}}{\sigma_n^2+I_n}\geq\gamma\right]\nonumber\\ &={\mathbb{P}} \left[ h_0 \geq \frac{\gamma(\sigma_n^2+I_n)}{G_0 (\varepsilon) d_0^{-\alpha}}\right]\nonumber\\ &=\int {\mathbb{P}} \left[ h_0 \geq \frac{\gamma(\sigma_n^2+I_n)}{g_0 d_0^{-\alpha}}\big|G_0(\varepsilon)=g_0\right]f_{G_0}(g_0)\text{d}g_0 \nonumber\\ &= \int {\mathbb{E}}_{I_n}\left[e^{-\rho I_n}\right] e^{-\rho\sigma_n^2}f_{G_0}(g_0)\text{d}g_0\label{eq:SINRCovProb}, \end{align} where $\sigma_n^2=\frac{\sigma^2}{P_D C}$ and $I_n=\frac{I}{P_D C}$ denote the normalized noise power and normalized aggregate interference, respectively. 
Notice that ${\mathbb{E}}[e^{-\rho I_n}]$ represents the Laplace transform of $I_n$ evaluated at $\rho=\frac{\gamma d_0^{\alpha}}{g_0}$ and can be written as \begin{align}\allowdisplaybreaks \mathcal{L}_{I_n}(\rho) &= {\mathbb{E}}_{I_n}\left[e^{-\rho I_n}\right]\nonumber\\ &={\mathbb{E}}_{\mathbf{\Phi}_L}\left[e^{-\rho \sum_{i \in \mathbf{\Phi}_L} h_i G_i(\beta_i) \|x_i\|^{-\alpha}}\right]\nonumber\\ &={\mathbb{E}}_{\mathbf{\Phi}_L}\left[\prod_{i \in \mathbf{\Phi}_L}{\mathbb{E}}_{h,G} \left[e^{-\rho h G \|x_i\|^{-\alpha}}\right]\right]\nonumber\\ &\overset{(a)}=e^{-2\pi \lambda_L {\mathbb{E}}_{h,G}\left[\int_{0}^{R}\left(1-e^{-\rho h G r^{-\alpha}}\right)r\text{d}r\right]}\nonumber\\ &=e^{-2\pi\lambda_L \chi(\rho)}\label{eq:Laplace}, \end{align} where \begin{align}\allowdisplaybreaks \chi(\rho)&= \frac{R^2}{2\tau}\left(1-{}_2F_1\left(-\delta,\frac{1}{2};1-\delta;-\rho G_m R^{-\alpha}\right)\right)\nonumber\\ &\mathrel{\phantom{=}}-\frac{\Gamma(-\delta)\delta}{2\pi \rho^{-\delta}} \Gamma(1+\delta) \vartheta, \nonumber \end{align} where (a) follows from the probability generating functional of the PPP, together with mapping the 2D Poisson point process $\mathbf{\Phi}_L$ onto ${\mathbb{R}}^+$ by letting $\|x_i\|=r_i$ be the distances of the points of $\mathbf{\Phi}_L$ from the typical receiver. Here, $\delta=\frac{2}{\alpha}$, $\Gamma(z)=\int_{0}^{\infty}x^{z-1}e^{-x}\text{d}x$ represents the Gamma function, and ${}_2F_1(\cdot)$ is the Gauss hypergeometric function. Finally, $\vartheta=\int_{0}^{\frac{\pi}{\tau}}G_m^{\delta}\cos^{2\delta}(\frac{\tau\theta}{2})\text{d}\theta$.
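The constant $\vartheta$ enters through the gain moment of a uniformly oriented interferer, ${\mathbb{E}}[G^{\delta}]=\vartheta/\pi$. A small numerical sanity check of this identity (our sketch, not from the paper, using the illustrative values $G_m=10$, $\tau=3$, $\alpha=2.1$):

```python
import math
import random

random.seed(5)

G_M, TAU = 10.0, 3
DELTA = 2.0 / 2.1   # delta = 2 / alpha with alpha = 2.1

def gain(beta):
    # Cosine antenna pattern; zero outside the main lobe
    return G_M * math.cos(TAU * beta / 2.0) ** 2 if abs(beta) <= math.pi / TAU else 0.0

# Monte Carlo estimate of E[G(beta)^delta] for beta ~ U(-pi, pi)
n = 400_000
mc = sum(gain(random.uniform(-math.pi, math.pi)) ** DELTA for _ in range(n)) / n

# vartheta / pi via midpoint quadrature of the defining integral
steps = 20_000
a = math.pi / TAU
quad = sum(
    G_M ** DELTA * math.cos(TAU * (i + 0.5) * (a / steps) / 2.0) ** (2 * DELTA)
    for i in range(steps)
) * (a / steps) / math.pi

print(f"Monte Carlo E[G^delta] = {mc:.4f}, vartheta/pi = {quad:.4f}")
```

The two estimates coincide to within Monte Carlo noise, as expected from the derivation above.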
\begin{IEEEproof} Part of the Laplace transform of $I_n$, denoted by $\chi(\rho)$, can be written as \begingroup\makeatletter\def\f@size{8}\check@mathfonts \begin{align}\allowdisplaybreaks\label{eq:proof} &{\mathbb{E}}_{h,G}\left[\int_{0}^{R}\left(1-e^{-\rho h G r^{-\alpha}}\right)r\text{d}r\right]\nonumber\\ &=\frac{R^2}{2}-{\mathbb{E}}_{h,G}\left[\frac{\delta}{2}(\rho hG)^{\delta}\Gamma(-\delta,\rho hGR^{-\alpha})\right]\nonumber\\ &\overset{(a)}=\frac{R^2}{2}-\frac{\delta }{2}{\mathbb{E}}_{h,G}\left[ \frac{\Gamma(-\delta)}{(\rho hG)^{-\delta}}- \sum_{k=0}^{\infty}\frac{(-\rho hGR^{-\alpha})^k R ^2} {k!(k-\delta)}\right]\nonumber \\ &=\frac{R^2}{2}-\frac{\Gamma(-\delta)\delta}{2\rho ^{-\delta}}{\mathbb{E}}[h^{\delta}G^{\delta}]+\frac{\delta R^2}{2}\sum_{k=0}^{\infty}\frac{(-\rho R^{-\alpha})^k}{k!(k-\delta)}{\mathbb{E}}_{h,G}[h^kG^k]\nonumber\\ &\overset{(b)}=-\frac{\Gamma(-\delta)\delta}{2\pi \rho^{-\delta}} \Gamma(1+\delta)\int_{0}^{\frac{\pi}{\tau}}G_m^{\delta}\cos^{2\delta}(\frac{\tau \beta}{2})\text{d}\beta\nonumber\\ &\mathrel{\phantom{=}}+\frac{\delta R^2} {2\pi} \sum_{k=1}^{\infty}\frac{(-\rho R^{-\alpha})^k}{k!(k-\delta)}\Gamma(k+1) \int_{0}^{\frac{\pi}{\tau}}G_m^k\cos^{2k}( \frac{\tau \beta}{2})\text{d}\beta\nonumber\\ &=-\frac{\Gamma(-\delta)\delta}{2\pi \rho^{-\delta}} \Gamma(1+\delta) \vartheta\nonumber+\frac{\delta R^2}{2\tau} \sum_{k=1}^{\infty} \frac{(-\rho G_m R^{-\alpha}) ^k \Gamma(k+1)} {k!(k-\delta)} \frac{(2k)!}{4^k(k!)^2} \nonumber\\ &=-\frac{\Gamma(-\delta)\delta}{2\pi \rho^{-\delta}} \Gamma(1+\delta) \vartheta\nonumber+\frac{\delta R^2} {2\tau}\sum_{k=1}^{\infty}\frac{(-\rho G_m R^{-\alpha})^k}{k!(k-\delta)}\frac{\Gamma(k+\frac{1}{2})}{\sqrt{\pi}}\nonumber\\ &=-\frac{\Gamma(-\delta)\delta}{2\pi \rho^{-\delta}} \Gamma(1+\delta) \vartheta\nonumber+\frac{R^2} {2\tau}\left(1- {}_2F_1\left(-\delta,\frac{1}{2};1-\delta;-\rho G_m R^{-\alpha}\right)\right) \end{align}\endgroup where $\Gamma(a,z)=\int_{z}^{\infty}t^{a-1}e^{-t}dt$ is the upper incomplete Gamma
function. Step (a) follows from the series expansion of $\Gamma(-\delta,\rho hGR^{-\alpha})$. Step (b) follows from the exponential distribution of the channel gain $h$, for which ${\mathbb{E}}[h^k]=\Gamma(k+1)$. \end{IEEEproof} \begin{table} \caption{Simulation Parameters} \centering \begin{tabular}{|c|c|c|} \hline Parameter&Notation& Value \\ \hline\hline Antenna gain & $G_m$ & $10$ (dBi) \\ Mainlobe spread & $\tau$& $1$,$2$,$3$,$4$\\ Bore-sight angle & $\varphi_i$ & $\varphi \sim \mathcal{U}(-\pi,\pi)$\\ Density of PPP & $\lambda$ & $50$ ($\text{km}^{-2}$)\\ Radius of LOS ball & $R$ & $300$ (m)\\ D2D TX power& $P_D$ & $1$ (watt)\\ Path-loss exponent & $\alpha$& $2.1$ \\ Path-loss intercept & $C$& $-62$ (dB) \\ Bandwidth & $B$ & $1$ (GHz)\\ Carrier frequency& $f$ & $28$ (GHz)\\ Noise power & $\sigma^2$ & $-174+10 \log_{10}B+10$ {\footnotesize(dBm)} \\ \hline \end{tabular}\label{params} \end{table} \section{Numerical Results and Discussions}\label{sec:result} In this section, we evaluate the performance of the mmWave D2D network using the expressions obtained for the coverage probability in (\ref{eq:SINRCovProb}) and (\ref{eq:Laplace}). The impact of antenna misalignment on the network performance, due to inaccurate AoA estimation, is captured using the antenna gain distributions in the presence of error, given in equations (\ref{eq:pdf}) and (\ref{eq:pdfGauss}). Moreover, to validate our analytical results, we simulated a network similar to the one discussed in Section \ref{sec:system model}. For our simulations, we consider an area of size $10$~km $\times$ $10$~km, which is, given the transmit power of the D2D devices, large enough to avoid boundary effects. D2D transmitters, along with rectangular blockages of various sizes, are distributed in the area according to PPPs. In addition, we assume that all the transmitters use a constant power for transmission. Table \ref{params} summarizes the simulation parameters.
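The simulation loop can be sketched in a few lines of Python (our minimal version, not the authors' code, based on the parameters of Table \ref{params}; the pair distance $d_0=50$ m is an assumption, perfect alignment is used for the typical link, and interferers are generated only inside the LOS ball):

```python
import math
import random

random.seed(42)

P_D = 1.0                      # transmit power (W)
C = 10.0 ** (-62.0 / 10.0)     # path-loss intercept (-62 dB, linear)
G_M, TAU, ALPHA = 10.0, 3, 2.1
LAM, R = 50e-6, 300.0          # density per m^2, LOS ball radius (m)
D0 = 50.0                      # assumed typical-pair distance (m)
SIGMA2 = 10.0 ** ((-174.0 + 90.0 + 10.0 - 30.0) / 10.0)  # noise power (W)

def sample_poisson(mean):
    limit, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def gain(beta):
    return G_M * math.cos(TAU * beta / 2.0) ** 2 if abs(beta) <= math.pi / TAU else 0.0

def sinr_sample():
    # Perfect alignment assumed for the typical link: G_0 = G_M
    signal = P_D * random.expovariate(1.0) * G_M * C * D0 ** (-ALPHA)
    interference = 0.0
    for _ in range(sample_poisson(LAM * math.pi * R * R)):
        r = max(R * math.sqrt(random.random()), 1.0)  # uniform in LOS ball, 1 m exclusion
        interference += (P_D * random.expovariate(1.0)
                         * gain(random.uniform(-math.pi, math.pi)) * C * r ** (-ALPHA))
    return signal / (SIGMA2 + interference)

samples = [sinr_sample() for _ in range(20_000)]
for gamma_db in (-5.0, 0.0, 5.0):
    gamma = 10.0 ** (gamma_db / 10.0)
    p_c = sum(s >= gamma for s in samples) / len(samples)
    print(f"p_c({gamma_db:+.0f} dB) = {p_c:.3f}")
```

Misaligned scenarios are obtained by replacing $G_M$ in the signal term with the gain sampled through the error distributions of Section \ref{sec:antModel}.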
To mitigate the effect of noisy data, we used Monte Carlo simulation with $10,000$ iterations and averaged the results. In the following figures, simulation results are represented by the ``$+$'' symbol. \begin{figure} \centering \includegraphics[width=.75\columnwidth, trim=4cm 8.5cm 4.0cm 9.1cm, clip]{CovEpsilonMagC.pdf} \caption{mmWave D2D network's SINR coverage probability vs. SINR threshold with $\tau = 3$ and $s^2=1$.} \label{fig:Coverage_error} \end{figure} Figures \ref{fig:Coverage_error} and \ref{fig:Cov-sigma} show the SINR coverage probability of the directional D2D network in the mmWave band as a function of the SINR threshold, for three different scenarios, namely, perfect beam alignment, erroneous alignment with a Gaussian error distribution, and erroneous alignment with a uniform error distribution. Figure \ref{fig:Coverage_error} demonstrates the impact of the error magnitude on the network coverage probability, for two different error magnitudes, i.e., $\varepsilon_0 = 0.2\pi$ and $\varepsilon_0 = 0.4 \pi$. It can be seen that the network coverage probability decreases as the error magnitude increases. Moreover, when the ratio of the error to the antenna beam spread exceeds one, $\frac{\varepsilon_0}{\pi/\tau}>1$, the chance of transmission with zero antenna gain increases, which degrades the coverage probability significantly. This evaluation indicates that a large alignment error has a significant impact on network performance, and should not be neglected as in \cite{yang2016analysis, thornburg2015ergodic}. Moreover, it is shown that the analytical results match the simulations with negligible gaps, which indicates the accuracy of equation (\ref{eq:SINRCovProb}). In Figure \ref{fig:Cov-sigma}, the impact of the variance of the normally distributed error on the network's performance is investigated. As the variance of the error increases, the coverage probability decreases. This is mainly due to the higher chance of a non-zero transmission gain at smaller variance.
Larger values of the variance lead to higher variation in the transmission gain, which eventually causes degradation of the coverage probability. Moreover, the graph shows that the Gaussian-distributed error with $s^2=9$ almost matches the uniformly distributed error. This is intuitive, as a truncated Gaussian distribution with large variance resembles the uniform distribution. \begin{figure} \centering \includegraphics[width=.75\columnwidth, trim=4.1cm 8.5cm 4.0cm 9.1cm, clip]{covDiffsigmaC.pdf} \caption{mmWave D2D network's SINR coverage probability vs. SINR threshold with $\tau = 3$ and $\varepsilon_0 = 0.4 \pi$.} \label{fig:Cov-sigma} \end{figure} \begin{figure} \centering \includegraphics[width=.8\columnwidth, trim=3.9cm 8.5cm 4.0cm 9.1cm, clip]{CDFgainC.pdf} \caption{CDF of the antenna gain of the typical D2D transmitter in the directional mmWave D2D network with $\tau =3 $.} \label{fig:antGain} \end{figure} Figure \ref{fig:antGain} shows the CDF of the D2D transmitters' antenna gain with Gaussian and uniform error distributions, for different error magnitudes, i.e., $\varepsilon_0=0.2\pi$ and $\varepsilon_0=0.4\pi$. This simulation investigates the accuracy of Lemmas \ref{lem:lemma1} and \ref{lem:lemma2}. It can be seen that for $\frac{\varepsilon_0}{\pi/\tau}<1$, the antenna gain ranges from $\kappa G_m$ to $G_m$, while for $\frac{\varepsilon_0}{\pi/\tau}>1$ it ranges from $0$ to $G_m$. Figure \ref{fig:interf} shows the CDF of the aggregate interference in a directional D2D network with and without alignment error. Since the two curves approximately match, our assumption that the alignment error does not change the interference distribution is accurate.
\begin{figure} \centering \includegraphics[width=.8\columnwidth, trim=3.9cm 8.5cm 4.2cm 9.1cm, clip]{interferenceCDFC.pdf} \caption{CDF of cumulative interference in a directional mmWave D2D network with $\tau=3$, $\varepsilon_0=0.4\pi$ and $\lambda=300$ users per $km^2$.} \label{fig:interf} \end{figure} \begin{figure}[h] \centering \includegraphics[width=.8\columnwidth, trim=4.1cm 8.5cm 4.2cm 9.1cm, clip]{distanceC.pdf} \caption{mmWave D2D network's SINR coverage probability vs. D2D pair's distance with $\gamma = -5 $ (dB).} \label{fig:distance} \end{figure} Figure \ref{fig:distance} shows the SINR coverage probability of the D2D network as a function of the distance between D2D transmitter-receiver pairs, for an SINR threshold $\gamma = -5$ dB. It is shown that increasing the distance between D2D pairs degrades the performance of the D2D network both with and without misalignment. However, in the presence of misalignment, increasing the distance degrades the network performance even further, due to the decrease in the transmitter antenna gain. \section{Conclusion and Future Work}\label{sec:Conclusion} In this paper, we proposed a mathematical framework to analyze the impact of AoA estimation error on the performance of a mmWave D2D network. Based on the prior information available about the error, the AoA estimation error is modeled using normal and uniform distributions. We have used stochastic geometry to provide a complete framework to analyze the D2D network performance in the presence of error in terms of the received SINR coverage probability, for which analytical formulas are derived. Simulation results show that the coverage of the network with erroneous beam alignment can be degraded by about $35\%$ compared to the one with perfect beam alignment. Moreover, our simulations validate the analytical results discussed in the paper.
Considering the significant impact of beam alignment error on the network performance, proposing a mechanism that corrects and compensates for the beam alignment error using a feedback loop is a promising future direction. \medskip \bibliographystyle{ieeetr}
\section{Introduction} \label{s1} In the preceding paper \cite{GSU96}, the problem of finding ground state energies and configurations for a Frenkel-Kontorova model in a periodic potential formed by parabolic segments of identical (positive) curvature, was reduced to that of minimizing a certain convex function over a finite simplex. While various aspects of the corresponding phase diagram in the $N=2$ case can be worked out in a relatively straightforward manner \cite{gri 90a,dl95,klt95}, minimizing the convex function for larger values of $N$ represents a non-trivial problem in numerical analysis. The basic reason is that the convex function of interest does not have continuous derivatives, and in the case of an irrational winding number, it possesses a dense set of singularities. Hence standard gradient methods run into difficulties. The approach we employ is based upon the concepts of subdifferential and subgradient from the theory of convex functions \cite{rocka}, as explained in Sec.~\ref{s2}. The algorithm itself, described in detail in Sec.~\ref{s3}, is motivated by a physical model involving quasiparticles, Sec.~\ref{s3a}. An essential part of the procedure is a standard linear programming procedure which we have simplified and adapted to the problem at hand, App.~A, so as to speed it up substantially. As a result, ground states for $N$ on the order of 100 are readily calculated, and larger values of $N$ are accessible with, of course, a longer running time; see Sec.~\ref{s3d}. 
\section{Minimization Using Subdifferentials} \label{s2} As in part I, with some minor changes in notation, we assume the energy per particle can be written in the form \begin{equation} \epsilon = \epsilon_0 + \epsilon_1, \label{e2.1} \end{equation} where \begin{equation} \epsilon_0=\sum_{j<k} \Delta t_j \Delta t_k {\cal G}(\zeta_{kj}), \label{e2.2} \end{equation} \begin{equation} \epsilon_1=\bar{\bf h}\cdot{\mbox{\boldmath $\psi$}}=-\bar{\mbox{\boldmath $\eta$}}\cdot{\mbox{\boldmath $\zeta$}}= -\sum_{i=0}^{N-1}\bar\eta_i\zeta_i, \label{e2.3} \end{equation} with $\bar h_0$ equal to zero, \begin{equation} \zeta_{kj}=-\zeta_{jk}=\zeta_k - \zeta_j = \sum_{i=j+1}^k \psi_i, \label{e2.4} \end{equation} and \begin{equation} {\cal G}(\psi) = {\cal G}(-\psi) = {\cal G}(1-\psi) = \sum_{\nu=-\infty}^\infty w g({\psi + \nu \over w}). \label{e2.5} \end{equation} The boldface letters denote $N$-component vectors, for example, \begin{equation} {\mbox{\boldmath $\zeta$}}= (\zeta_0, \zeta_1, \ldots \zeta_{N-1}). \label{e2.6} \end{equation} A constant term independent of {\mbox{\boldmath $\zeta$}}\ has been omitted from (\ref{e2.2}), and bars have been added to {\bf h}\ and {\mbox{\boldmath $\eta$}}\ in (\ref{e2.3}) to distinguish them from quantities which we shall define later. Note that \begin{equation} \sum_{i=0}^{N-1} \bar\eta_i = 0 \label{e2.7} \end{equation} because $\bar\eta_i=\bar h_{i+1}-\bar h_i$, and $\bar h_j$ is periodic in $j$, with period $N$. Our task is to find the {\mbox{\boldmath $\zeta$}}, or equivalently {\mbox{\boldmath $\psi$}}, which minimize $\epsilon$ for a given $\bar{\mbox{\boldmath $\eta$}}$, or equivalently $\bar{\bf h}$. The function ${\cal G}(\psi)$ is convex on the interval $0\leq\psi\leq 1$. 
If its derivative \begin{equation} {\cal B}(\psi)=d{\cal G}/d\psi=-{\cal B}(-\psi)={\cal B}(1+\psi) \label{e2.8} \end{equation} were a continuous function, the minimum would satisfy the equation: \begin{equation} \bar{\mbox{\boldmath $\eta$}}={\mbox{\boldmath $\eta$}} \label{e2.9} \end{equation} obtained by differentiating (\ref{e2.1}) , where the components of {\mbox{\boldmath $\eta$}}\ are given by \begin{equation} \eta_k=\sum_{j(\neq k)} \eta_{kj}, \label{e2.10} \end{equation} \begin{equation} \eta_{kj}=\Delta t_k \Delta t_j \, \beta_{kj} = - \eta_{jk}, \label{e2.11} \end{equation} \begin{equation} \beta_{kj} = {\cal B}(\zeta_{kj}) = - \beta_{jk}. \label{e2.12} \end{equation} But in fact ${\cal B}(\psi)$ has lots of discontinuities, see Fig.~\ref{fg1}, and therefore we need some way to interpret the (formal) solution (\ref{e2.9}) in this case. For this purpose it is convenient to employ the concepts of subgradient and subdifferential as defined by Rockafellar \cite{rocka}. Suppose that $F({\mbox{\boldmath $\zeta$}})$ is a real-valued function (which need not be convex) defined on some domain in ${\Bbb R}^N$. We shall say that {\mbox{\boldmath $\eta$}}\ is a {\it subgradient} of $F$ at the point ${\mbox{\boldmath $\zeta$}}=\bar{\mbox{\boldmath $\zeta$}}$ provided \begin{equation} F({\mbox{\boldmath $\zeta$}}) \geq F(\bar{\mbox{\boldmath $\zeta$}})+{\mbox{\boldmath $\eta$}}\cdot ({\mbox{\boldmath $\zeta$}}-\bar{\mbox{\boldmath $\zeta$}}) \label{e2.13} \end{equation} for all {\mbox{\boldmath $\zeta$}}\ where $F$ is defined. The collection of all {\mbox{\boldmath $\eta$}}\ values for which this inequality is satisfied for a given $\bar{\mbox{\boldmath $\zeta$}}$ is easily shown to be a convex subset of ${\Bbb R}^N$, and is called the {\it subdifferential} of $F$ at $\bar{\mbox{\boldmath $\zeta$}}$, denoted by $\partial F({\mbox{\boldmath $\zeta$}})$. 
Given this definition, it is easy to show that $\epsilon$ in (\ref{e2.1}) has a minimum at ${\mbox{\boldmath $\zeta$}}=\bar{\mbox{\boldmath $\zeta$}}$ if and only if $\bar{\mbox{\boldmath $\eta$}}$ is an element of $\partial\epsilon_0(\bar{\mbox{\boldmath $\zeta$}})$, the subdifferential of $\epsilon_0$ at $\bar{\mbox{\boldmath $\zeta$}}$. In addition, the subdifferential of a sum ($F_1+F_2+\cdots$) of convex functions is the sum of the subdifferentials of the individual functions \cite{rockab}, understood as sums of sets of ${\Bbb R}^N$; thus: \begin{equation} A + B = \{{\mbox{\boldmath $\zeta$}} + {\mbox{\boldmath $\zeta$}}' : {\mbox{\boldmath $\zeta$}}\in A, {\mbox{\boldmath $\zeta$}}'\in B\}, \label{e2.14} \end{equation} together with its obvious generalization to the sum of three or more sets. These observations provide the key for interpreting equations (\ref{e2.10}) to (\ref{e2.12}). At some point $\psi'$ where ${\cal B}(\psi)$ is discontinuous, the subdifferential of ${\cal G}(\psi)$, for $\psi$ in the range $0\leq\psi\leq 1$, consists of all points $\beta$ lying in the interval \begin{equation} {\cal B}_-(\psi')\leq \beta \leq {\cal B}_+(\psi') \label{e2.15} \end{equation} where ${\cal B}_-(\psi')$ and ${\cal B}_+(\psi')$ are the left and right derivatives of ${\cal G}(\psi)$ at $\psi'$, that is, the bottom and top of the discontinuity in the graph of ${\cal B}$. More generally, for $k > j$ we interpret $\beta_{kj}$ in (\ref{e2.12}) as any point in the interval \begin{equation} {\cal B}_-(\zeta_{kj})\leq \beta_{kj} \leq {\cal B}_+(\zeta_{kj}), \label{e2.16} \end{equation} where the lower limit is set equal to $-\infty$ if $\zeta_{kj}=0$, and the upper limit is $+\infty$ if $\zeta_{kj}=1$, as a consequence of the constraints \begin{equation} \zeta_0 \leq \zeta_1 \leq \zeta_2 \leq \ldots \leq \zeta_{N-1} \leq \zeta_0+1. \label{e2.17} \end{equation} For $j < k$, we define $\beta_{jk}=-\beta_{kj}$. 
Of course, if ${\cal B}(\psi)$ is continuous at $\psi=\zeta_{kj}$, then $\beta_{kj}$ is the single point ${\cal B}(\zeta_{kj})$. Consequently, $\partial\epsilon_0({\mbox{\boldmath $\zeta$}})$ is simply the collection of all ${\mbox{\boldmath $\eta$}}$ obtained using (\ref{e2.10}), where for each $k > j$, $\beta_{kj}$ in (\ref{e2.12}) is allowed to vary over the interval (\ref{e2.16}), and for $j < k$, $\beta_{jk}=-\beta_{kj}$. Notice that this means that $\eta_{jk}+\eta_{kj}$ is zero, and therefore \begin{equation} \sum_{i=0}^{N-1} \eta_i = 0 \label{e2.18} \end{equation} for any ${\mbox{\boldmath $\eta$}}$ in the subdifferential $\partial\epsilon_0({\mbox{\boldmath $\zeta$}})$, which corresponds to (\ref{e2.7}). Note that the $\beta_{kj}$ are allowed to vary {\it independently}, aside from the restriction $\beta_{jk}=-\beta_{kj}$. As there are thus $N(N-1)/2$ independent variables, some of which may be constant because they do not correspond to discontinuities of ${\cal B}$, $\partial\epsilon_0({\mbox{\boldmath $\zeta$}})$ is, in general, a fairly complicated polyhedron, of dimension less than or equal to $N-1$, the dimension of the space in ${\Bbb R}^N$ satisfying the constraint (\ref{e2.18}). In the case $N=3$, the subdifferentials $\partial\epsilon_0({\mbox{\boldmath $\zeta$}})$ are closed sets, either hexagons, lines, or points, depending upon the value of ${\mbox{\boldmath $\zeta$}}$. Some of the lines and hexagons extend to infinity. A hexagon occurs provided \begin{equation} \zeta_{10} = \mu_1 w - \nu_1,\ \ \zeta_{20} = \mu_2 w - \nu_2, \label{e2.19} \end{equation} where $\mu_1$, $\mu_2$, $\nu_1$, and $\nu_2$ are integers, in which case $\zeta_{10}$ and $\zeta_{20}$, as well as $\zeta_{21}=\zeta_{20}-\zeta_{10}$, are at discontinuities of ${\cal B}$. 
A line occurs when ${\cal B}$ is discontinuous at one of the three values $\zeta_{10}$, $\zeta_{20}$, or $\zeta_{21}$, but not at the other two, and $\partial\epsilon_0({\mbox{\boldmath $\zeta$}})$ is a point if ${\cal B}$ is continuous at all three values. If $w$ is irrational, the discontinuities of ${\cal B}(\psi)$ are a dense set in $\psi$, and consequently the subdifferentials of $\epsilon_0$ for different ${\mbox{\boldmath $\zeta$}}$ have no points in common. If $w$ is rational, adjacent hexagons overlap at their common edges and vertices, and each edge and each vertex is itself a subdifferential of $\epsilon_0$ for a range of ${\mbox{\boldmath $\zeta$}}$ values. For both rational and irrational $w$, the hexagons cover the entire plane satisfying the constraint (\ref{e2.18}), with the exception, when $w$ is irrational, of a set of zero measure. A similar comment applies to larger values of $N$, and therefore in numerical studies it suffices to consider the $N-1$ dimensional polyhedra obtained when every $\zeta_{kj}$ falls on a discontinuity of ${\cal B}$. It is sometimes helpful to think of the collection of subdifferentials $\partial\epsilon_0({\mbox{\boldmath $\zeta$}})$ as ${\mbox{\boldmath $\zeta$}}$ varies as generated by placing a set of $N(N-1)/2$ points on the graph of ${\cal B}(\psi)$ at positions $(\zeta_{kj},\beta_{kj})$. Note that the $\zeta_{kj}$ cannot be varied independently, as they are determined by a set of $N-1$ parameters, see (\ref{e2.4}). However, each of the $N(N-1)/2$ points on the graph can be moved independently in the vertical direction, as long as it is on a discontinuity of ${\cal B}$, to form the collection of $\beta_{kj}$ values which generate the subdifferential for a fixed ${\mbox{\boldmath $\zeta$}}$. 
\section{Numerical Procedure} \label{s3} \subsection{Introduction} \label{s3a} The problem of finding the ${\mbox{\boldmath $\zeta$}}$ which minimizes $\epsilon$, (\ref{e2.1}), for a given ${\mbox{\boldmath $\eta$}}$ is equivalent, as noted in Sec.~\ref{s2} above, to finding the ${\mbox{\boldmath $\zeta$}}$ such that $\bar{\mbox{\boldmath $\eta$}}$ falls in the subdifferential $\partial\epsilon_0({\mbox{\boldmath $\zeta$}})$. Furthermore, the $N-1$ dimensional polyhedra which arise when all the $\zeta_{kj}$ fall at the discontinuities of ${\cal B}(\psi)$ fill up the relevant $N-1$ dimensional hyperplane (\ref{e2.18}), except for a set of measure zero, and as we assume that ${\mbox{\boldmath $\eta$}}$ is only specified with some limited numerical precision, we can in practice limit ourselves to a consideration of such polyhedra. The general idea of the algorithm is as follows. Starting from some ${\mbox{\boldmath $\zeta$}}$ with all $\zeta_{kj}$ at discontinuities of ${\cal B}$, test whether the target $\bar{\mbox{\boldmath $\eta$}}$ lies inside $\partial\epsilon_0({\mbox{\boldmath $\zeta$}})$. If it does, the problem has been solved. If it does not, use the information obtained from the test in order to choose a new {\mbox{\boldmath $\zeta$}}\ closer to the desired value, and repeat the test. The test itself, steps 3 and 5 in the algorithm as summarized below, involves a linear optimization procedure with an execution time which (typically) varies as $N^3$, which is relatively expensive when $N$ is large. Consequently, the test is preceded in our algorithm by various steps whose aim is to provide, with a relatively small number of operations, a value of ${\mbox{\boldmath $\zeta$}}$ close to the final solution.
To begin with, we replace the actual ${\cal B}(\psi)$ with an approximate, piecewise constant function ${\cal B}^*(\psi)$ which has discontinuities at the points \begin{equation} \psi=\mu w - \nu, \label{e3.1} \end{equation} where $\mu$ and $\nu$ are integers, and \begin{equation} |\mu| \leq M \label{e3.2} \end{equation} for some finite bound $M$, which can be increased later if necessary. This is a sensible procedure, because the size of the discontinuities decreases exponentially with $|\mu|$. Between two successive discontinuities $\psi'$ and $\psi''$, the function ${\cal B}^*$ is defined to be a constant lying halfway between ${\cal B}_+(\psi')$ and ${\cal B}_-(\psi'')$, see Fig.~\ref{fg1}. Consequently, the discontinuities of ${\cal B}^*$ are somewhat larger than those of the exact ${\cal B}$, and (\ref{e2.16}) is replaced by: \begin{equation} {\cal B}^*_-(\zeta_{kj})\leq \beta_{kj} \leq {\cal B}^*_+(\zeta_{kj}). \label{e3.3} \end{equation} Note that if \begin{equation} w=p/q \label{e3.4} \end{equation} is a rational number, with $p$ and $q$ relatively prime positive integers, the discontinuities of ${\cal B}$, (\ref{e3.1}), are the points \begin{equation} \psi = s/q, \label{e3.5} \end{equation} where $s$ is any integer (and may have factors in common with $q$). The definition of ${\cal B}^*$ is the same as before, though it should be noted that the discontinuity interval of ${\cal B}$ at a point (\ref{e3.5}) is made up of contributions from an infinite number of discontinuities from derivatives of terms on the right side of (\ref{e2.5}). Of course, if $M$ in (\ref{e3.2}) is equal to $q-1$ (or larger), ${\cal B}^*$ and ${\cal B}$ are identical, and step 5 can be eliminated from the algorithm described below.
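The grid of discontinuities used to build ${\cal B}^*$ is easy to tabulate. The following Python sketch (our own helper, not from the paper's program) collects the points (\ref{e3.1}) with $|\mu|\leq M$, reduced modulo 1 to the interval $[0,1)$:

```python
def bstar_breakpoints(w, M):
    """Discontinuities psi = mu*w - nu (mod 1) of B* for |mu| <= M.

    For irrational w this yields 2*M + 1 distinct points in [0, 1);
    for rational w = p/q they collapse onto the grid s/q of (3.5).
    Between successive points B* is the constant midway between the
    one-sided limits of B (not modeled here).  Rounding merges
    floating-point duplicates in the rational case."""
    pts = {round((mu * w) % 1.0, 12) for mu in range(-M, M + 1)}
    return sorted(pts)
```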
In order to motivate the initial steps in the algorithm, it is helpful to think of $\zeta_0, \zeta_1,\ldots$ as representing the positions of a set of $N$ quasiparticles located on a circle of unit circumference, Fig.~\ref{fg2}, and subjected to two kinds of forces, corresponding to $\epsilon_0$ and $\epsilon_1$, thought of as potential energies. In this picture, $\bar\eta_k$ represents an external force exerted on particle $k$, and $\eta_{kj}$, (\ref{e2.11}), the force which particle $k$ exerts on particle $j$. The minimization condition (\ref{e2.9}) can then be interpreted as stating that, for every $k$, the external force exerted on particle $k$ is equal to the sum $\eta_k$ of the forces which it exerts on the other particles, which is the same as saying that the net force on particle $k$ is zero. The pair force $\eta_{kj}$ is constant as long as $\zeta_{kj}$ is not at a discontinuity of ${\cal B}^*$, while if it is at such a discontinuity, it can take any value, see (\ref{e2.11}), corresponding to $\beta_{kj}$ in the range (\ref{e3.3}). In addition, there is a hard core interaction which prevents two quasiparticles from passing through each other, and ensures that the inequalities (\ref{e2.17}) are satisfied. Note that there can very well be solutions to the minimization problem in which some of these inequalities are equalities. If, for example, $\zeta_2=\zeta_3$, then $\beta_{32}$ can be very large and negative, see the remarks following (\ref{e2.16}), corresponding to the fact that the hard core allows particle 3 to exert a very large (negative) force on particle 2. The initial steps of the algorithm consist of a number of horizontal and vertical shifts; the terminology comes from the picture of points on the graph of ${\cal B}$, Sec.~\ref{s2}. 
A {\it horizontal shift} is a change in the positions of the quasiparticles, and thus the $\zeta_{kj}$, with the $\beta_{kj}$ (and thus the $\eta_{kj}$) held fixed, while a {\it vertical shift} is a change in the set of $\beta_{kj}$ values with the quasiparticle positions, and thus the $\zeta_{kj}$, held fixed. In addition, we shall make use of the concept of a {\it cluster}, which means a collection of quasiparticles, with their labels belonging to an index set $J$ containing $|J|$ members, with the property that the collection is connected by a set of ``pair bonds'' $(kj)$, $k$ and $j$ members of $J$, with $\zeta_{kj}$ at one of the discontinuities of ${\cal B}^*$. For example, if $\zeta_{21}$ and $\zeta_{42}$ fall on discontinuities of ${\cal B}^*$, then it is possible, but not necessary, to define a cluster $J=\{1,2,4\}$ of $|J|=3$ quasiparticles, as in Fig.~\ref{fg2}. We shall always think of the entire collection of quasiparticles as divided up among a set of mutually disjoint clusters, where a quasiparticle which does not belong to a larger cluster constitutes its own cluster containing only one element. If all the quasiparticles belong to a single cluster of $|J|=N$ elements, we shall call this a {\it complete cluster}. In any horizontal shift, the clusters are moved rigidly, in the sense that $\zeta_{jk}$ does not change if $j$ and $k$ belong to the same cluster. (This must obviously be the case whenever the cluster is linked together by bonds for which the $\beta_{kj}$ fall in the interiors of the corresponding discontinuity intervals (\ref{e3.3}).) Conversely, a vertical shift is always applied to a single cluster. \subsection{Summary of the Algorithm} \label{s3b} The algorithm for finding a minimum consists of the following steps, details of which are given below in Sec.~\ref{s3c}. 0. 
Initialization: Choose an initial approximate ${\cal B}^*$ by, for example, setting $M=4$ in (\ref{e3.2}), and some initial values for the $\zeta_i$ satisfying (\ref{e2.17}), with a set of clusters specified (e.g., each quasiparticle might belong to its own cluster). 1. Horizontal shift I: Calculate a ``velocity'' for each cluster, and use this to carry out a horizontal shift until for the first time some $\zeta_{kj}$ for $j$ and $k$ belonging to different clusters reaches a discontinuity of ${\cal B}^*$, in which case we shall say that these two clusters have ``collided'' to form a temporary combined cluster. Go to step 2. 2. Vertical shift: Carry out a vertical shift on the temporary combined cluster following the prescription given in (\ref{e3.8}) below and in the remarks which follow, and apply the test which is described there. If the result of the test is negative, the combined cluster is rejected, the collection of clusters is defined to be the same as it was before the collision, and the algorithm returns to step 1. If the result of the test is positive, the temporary combined cluster becomes permanent, and is considered part of the collection of clusters for the next step in the algorithm. If this cluster is complete, go to step 3; if not, return to step 1. 3. Linear optimization I: With all the quasiparticles in a single cluster, apply linear optimization, as discussed in Sec.~\ref{s3c} below, to produce a vertical shift which maximizes a non-negative parameter $\lambda$. If $\lambda\geq 1$, then the target $\bar{\mbox{\boldmath $\eta$}}$ is inside the polyhedron $\partial\epsilon_0({\mbox{\boldmath $\zeta$}})$ associated with the current ${\mbox{\boldmath $\zeta$}}$ in the approximation in which ${\cal B}$ has been replaced by ${\cal B}^*$; go to step 5. If $\lambda < 1$, then $\bar{\mbox{\boldmath $\eta$}}$ is not inside the polyhedron; go to step 4. 4. 
Horizontal shift II: Use the $\beta_{kj}$ resulting from the linear optimization in 3 to divide the collection of quasiparticles into two clusters, which undergo a horizontal shift relative to each other until they collide (as in step 1) to form a new, combined cluster which is a complete cluster. Return to step 3. 5. Linear optimization II: Repeat the linear optimization of step 3, but with each $\beta_{kj}$ now restricted to the corresponding {\it exact} interval (\ref{e2.16}). If, however, $\zeta_{kj}$ falls at a point where ${\cal B}^*$, unlike ${\cal B}$, has no discontinuity, then the corresponding $\beta_{kj}$ is placed at the center of the corresponding discontinuity of ${\cal B}$, and is treated as a constant, not a variable, during the linear optimization. If the optimization yields $\lambda\geq 1$, the current ${\mbox{\boldmath $\zeta$}}$ is the desired solution to the minimization problem, and the algorithm stops. If $\lambda$ is less than 1, ${\cal B}^*$ is replaced by another approximation to ${\cal B}$ constructed in the same way, but using a larger value of $M$ in (\ref{e3.2}). The current ${\mbox{\boldmath $\zeta$}}$ values are changed by very small amounts so that none of the $\zeta_{kj}$ fall at discontinuities of the new ${\cal B}^*$, and the algorithm returns to step 1. \subsection{Details of the Algorithm} \label{s3c} The explanations given below are numbered in the same way as the steps in the preceding summary. 1. Given a set of $\eta_{kj}$ values, the net force on the $k$'th quasiparticle is \begin{equation} \bar\eta_k - \eta_k = \bar\eta_k - \sum_{j(\neq k)} \eta_{kj}. \label{e3.6} \end{equation} Were the force given by a continuous function, it would be possible to find the energy minimum by assigning to each quasiparticle a velocity proportional to the force acting on it, and then solving the resulting dynamics.
What is actually done in the algorithm is to assign to each cluster a velocity given by the total force acting on all the quasiparticles in the cluster divided by the number of particles in the cluster, which is the average force per particle: \begin{equation} v_J= (1/|J|) \sum_{k\in J} (\bar\eta_k - \eta_k). \label{e3.7} \end{equation} It is only the relative cluster velocities which are of interest in determining the horizontal shift; adding the same constant to every $v_J$ will make no difference, and one can arrange (for example) that the cluster containing $\zeta_0$ remains fixed. The clusters are then shifted by amounts proportional to their respective velocities until the first ``collision'' occurs, in the sense that $\zeta_{kj}$ for $k$ in one cluster and $j$ in another reaches a discontinuity of ${\cal B}^*$. 2. Once the temporary, combined cluster $J_c$ has been formed, a vertical shift is applied to the $\eta_{kj}$ for $k$ and $j$ in $J_c$. This is done by first calculating a preliminary value for the shift of the $\eta_{kj}$ or $\beta_{kj}$ values from the formula \begin{equation} \delta\eta_{kj} = \Delta t_k \Delta t_j \, \delta\beta_{kj} = [(\bar\eta_k - \eta_k) - (\bar\eta_j - \eta_j)] / |J_c|. \label{e3.8} \end{equation} The motivation for this choice is the following. If all the quasiparticles were in a single cluster, $|J_c|=N$, the change (\ref{e3.8}) would result in a new set of pair forces \begin{equation} \eta'_{kj} = \Delta t_k \Delta t_j \, \beta'_{kj} = \eta_{kj} +\delta\eta_{kj} \label{e3.9} \end{equation} with the property that \begin{equation} \bar\eta_k=\eta'_k = \sum_{j(\neq k)}\eta'_{kj}, \label{e3.10} \end{equation} that is, one would have solved the minimization problem. 
With $|J_c| < N$, the result would, instead, be to make the difference $\bar\eta_k-\eta'_k$, the sum of the forces acting on quasiparticle $k$, independent of $k$ for all $k$ in $J_c$, and to minimize \begin{equation} \sum_{k\in J_c} (\bar\eta_k-\eta'_k)^2 \label{e3.11} \end{equation} as much as is possible by changing only the pair interactions $\eta_{kj}$ inside the cluster $J_c$. However, formula (\ref{e3.8}) does not take account of the possibility that the $\beta'_{kj}$ in (\ref{e3.9}) might lie outside the interval (\ref{e3.3}) determined by ${\cal B}^*$. When this is the case, the new $\beta'_{kj}$ is placed at whichever end of the discontinuity interval lies closest to the value given by (\ref{e3.9}). The test for rejecting or retaining the combined cluster $J_c$ is then the following. If for each of the pairs $k$ and $j$ for which $k$ belongs to one of the clusters involved in the collision and $j$ to the other, the new $\beta'_{kj}$ is at one of the ends of the interval (\ref{e3.3}), the combined cluster $J_c$ is rejected, whereas if at least one of these values falls in the interior of the corresponding interval, $J_c$ is accepted. Note that whether the cluster $J_c$ is accepted or rejected, the new values of $\beta'_{kj}$, and thus the corresponding $\eta'_{kj}$, produced in the vertical shift are retained when going on to the next step of the algorithm, which is either step 1 or, in the case in which $J_c$ is accepted and $|J_c|=N$, step 3. The algorithm would still function correctly if a combined cluster were never rejected. However, this would mean having to apply linear optimization, step 3, more often, and would result in a slower computation. The process of allowing clusters to move relative to each other past discontinuities which represent relatively small changes compared to the large forces representing a situation ``far from equilibrium'' helps to achieve a better preliminary value of ${\mbox{\boldmath $\zeta$}}$ before going on to step 3.
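For illustration, the cluster bookkeeping of Sec.~\ref{s3a} and the shifts of steps 1 and 2 can be sketched in Python as follows (a simplification of ours, not the paper's program: we take $\Delta t_k\equiv 1$, so that $\beta_{kj}$ and $\eta_{kj}$ coincide, and the intervals (\ref{e3.3}) are passed in as explicit bounds):

```python
def clusters_from_bonds(n, bonds):
    """Connected components of the bond graph: bonds (k, j) are pairs
    whose zeta_kj sits on a discontinuity of B*; an unbonded particle
    is its own cluster."""
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for k, j in bonds:
        parent[find(k)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

def cluster_velocities(net_force, clusters):
    """Velocities (3.7): average net force bar_eta_k - eta_k per
    particle in each cluster, shifted so that the cluster containing
    particle 0 stays fixed (only relative velocities matter)."""
    v = [sum(net_force[k] for k in J) / len(J) for J in clusters]
    v0 = next(v[i] for i, J in enumerate(clusters) if 0 in J)
    return [vi - v0 for vi in v]

def vertical_shift(eta, net_force, size, cross_pairs, lo, hi):
    """Vertical shift (3.8) on a combined cluster of `size` particles:
    each trial value is clamped to its interval [lo, hi]; the cluster
    is accepted iff some cross-collision pair ends strictly interior."""
    new_eta = {}
    for (k, j) in eta:
        trial = eta[(k, j)] + (net_force[k] - net_force[j]) / size
        new_eta[(k, j)] = min(max(trial, lo[(k, j)]), hi[(k, j)])
    accept = any(lo[p] < new_eta[p] < hi[p] for p in cross_pairs)
    return new_eta, accept
```

For instance, bonds $(2,1)$ and $(4,2)$ produce the cluster $\{1,2,4\}$ of the example above, and a pair force pushed past an interval end is clamped there, triggering the rejection test.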
3, 4. The linear optimization step is basically a test to see whether the target $\bar{\mbox{\boldmath $\eta$}}$ lies inside the polyhedron representing $\partial\epsilon_0({\mbox{\boldmath $\zeta$}})$ in the approximation in which ${\cal B}$ is replaced by ${\cal B}^*$. The idea is to begin at a particular vertex ${\mbox{\boldmath $\eta$}}^c$ of the polyhedron, Fig.~\ref{fg3}, and draw a straight line from ${\mbox{\boldmath $\eta$}}^c$ to the target $\bar{\mbox{\boldmath $\eta$}}$. Points along this line are of the form \begin{equation} {\mbox{\boldmath $\eta$}} ={\mbox{\boldmath $\eta$}}^c + \lambda (\bar{\mbox{\boldmath $\eta$}} - {\mbox{\boldmath $\eta$}}^c), \label{e3.12} \end{equation} where $\lambda$ is a number between 0 and 1. The linear optimization procedure, the details of which are given in the appendix, determines the largest value of $\lambda$ for which a point of the form (\ref{e3.12}) lies inside or on the boundary of the polyhedron. If this value is less than 1, as in Fig.~\ref{fg3}, the target lies outside the polyhedron, and the point (\ref{e3.12}) determined by the maximum value of $\lambda$ specifies a facet of the polyhedron lying in the direction of the target, as viewed from the starting vertex ${\mbox{\boldmath $\eta$}}^c$. This facet is generated by letting ($N-2$) of the $\beta_{kj}$ vary over their entire discontinuity intervals, while the remaining $\beta_{kj}$ are fixed either at the top or at the bottom of their discontinuity intervals. The $\zeta_{kj}$ corresponding to the former are ``rigid'' in the sense that they cannot be altered by a horizontal shift (which, by definition, must leave the $\beta_{kj}$ unchanged), and one can identify two clusters of quasiparticles, each one connected by such rigid bonds. Once these two clusters have been identified, they can be shifted relative to each other, in a direction which is obvious, until they collide at the first discontinuity of ${\cal B}^*$. 
This collision results in a new, complete cluster, and the corresponding $\partial\epsilon_0({\mbox{\boldmath $\zeta$}})$ is a polyhedron adjacent to the one considered earlier, and shares with it the facet which was identified in the previous linear optimization step. Note that this new cluster is accepted without carrying out the test used in step 2 of the algorithm. Also, in the unlikely event that the maximum $\lambda$ corresponds to the intersection of two or more facets of the polyhedron, the actual optimization algorithm described in the appendix will, in effect, ``choose'' one of these facets, and thus the polyhedron adjacent to it in the direction of the target $\bar{\mbox{\boldmath $\eta$}}$. 5. If the optimization carried out in step 3 yields $\lambda \geq 1$, the target $\bar{\mbox{\boldmath $\eta$}}$ lies inside the polyhedron generated by the discontinuities of ${\cal B}^*$, but it may or may not lie inside the corresponding polyhedron generated by restricting the $\beta_{kj}$ to lie in the exact interval (\ref{e2.16}) for the corresponding discontinuity. To test whether this is the case, one repeats the linear optimization technique of step 3, but now starting with each $\beta_{kj}$ at its {\it exact} maximum possible value, and constrained to be greater than or equal to its {\it exact} minimum possible value, with the exception of those $\beta_{kj}$ for which $\zeta_{kj}$ does not fall at a discontinuity of ${\cal B}^*$, which are assigned fixed values at the center of the appropriate (exact) discontinuity intervals of ${\cal B}$. One could, of course, make {\it all} of the $\beta_{kj}$ variable during the linear optimization process.
However, as the discontinuity intervals in ${\cal B}$ which are not in ${\cal B}^*$ are, by construction, relatively small, the main effect of using a larger set of variables would be to slow down the linear optimization without much hope of actually finding a solution with $\lambda \geq 1$ when the restricted search yields one with $\lambda < 1$. Note that if such a restriction does result in overlooking a $\lambda \geq 1$ solution, this solution, or one equivalent to it, will nevertheless be found later when the number of discontinuities of ${\cal B}^*$ is increased. If the maximum value of $\lambda$ obtained by using linear optimization with the exact discontinuity intervals of ${\cal B}$ is still greater than or equal to 1, then the current ${\mbox{\boldmath $\zeta$}}$, and thus the current set of $\zeta_{kj}$, represents an actual solution to the minimization problem, and this does not depend upon the approximations used in constructing ${\cal B}^*$, because the resulting $\beta_{kj}$ values all fall within the range where ${\cal B}^*$ is identical to ${\cal B}$. If, on the other hand, one finds that $\lambda$ is less than 1, this means that the current ${\mbox{\boldmath $\zeta$}}$ is not a solution to the true minimization problem; instead, it is as good as one can do using the approximate ${\cal B}^*$. To do better, it is necessary to increase the number of discontinuities. Our procedure at this point is to throw away all the information associated with the $\beta_{kj}$ values obtained in the immediately preceding step of linear optimization, and simply start over again at step 1 using the current ${\mbox{\boldmath $\zeta$}}$. One might be able to improve the algorithm in this respect, but since the initial steps of the algorithm are relatively fast, it does not seem likely that one would obtain a significant increase in speed. 
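For $N=3$ (and $\Delta t_k\equiv 1$) the polyhedron is the hexagon of Sec.~\ref{s2}, and the $\lambda$ maximization of steps 3 and 5 can be illustrated without the general machinery of the appendix. Working in the coordinates $(\eta_1,\eta_2)$, with $\eta_0=-\eta_1-\eta_2$, the hexagon is the Minkowski sum of three segments generated by $\beta_{10}$, $\beta_{20}$ and $\beta_{21}$. The sketch below (our own simplification: the ray (\ref{e3.12}) starts from the center of the hexagon rather than from a vertex) finds the largest $\lambda$ by intersecting the ray with the six supporting half-planes; the target is inside precisely when $\lambda\geq 1$:

```python
def lambda_max(intervals, target):
    """Largest lambda such that c + lambda*(target - c) stays inside
    the N = 3 hexagon in the (eta_1, eta_2) plane, where c is the
    hexagon's center and `intervals` lists the discontinuity intervals
    (l, u) of beta_10, beta_20, beta_21."""
    # eta_1 = b10 - b21, eta_2 = b20 + b21
    gens = [(1.0, 0.0), (0.0, 1.0), (-1.0, 1.0)]
    c = [sum(0.5 * (l + u) * g[i] for (l, u), g in zip(intervals, gens))
         for i in range(2)]
    d = (target[0] - c[0], target[1] - c[1])
    lam = float('inf')
    for gx, gy in gens:
        # hexagon edges are parallel to the generators, so the edge
        # normals are the perpendiculars of the generators
        for n in ((gy, -gx), (-gy, gx)):
            slack = sum(0.5 * (u - l) * abs(n[0] * g[0] + n[1] * g[1])
                        for (l, u), g in zip(intervals, gens))
            nd = n[0] * d[0] + n[1] * d[1]
            if nd > 1e-15:  # the ray can exit through this face
                lam = min(lam, slack / nd)
    return lam
```

Each half-plane reads $\mathbf{n}\cdot(\mathbf{x}-\mathbf{c})\leq$ slack, so along the ray $\lambda\,\mathbf{n}\cdot\mathbf{d}\leq$ slack; the minimum over the outgoing faces gives the exit value of $\lambda$.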
\subsection{Implementation and Performance} \label{s3d} The program we constructed to implement the algorithm described above was tested in the following way. We chose a winding number equal to the inverse golden mean (0.618\dots) and a value of $\kappa$, see \cite{GSU96}, of 0.6, resulting in a set of 60 pairs ($\psi$ and $1-\psi$) of discontinuities of ${\cal B}$ in the interval $0\leq\psi\leq 1$ with a magnitude greater than a resolution of $10^{-20}$. (Some tests used alternative values for $\kappa$, 0.1, 0.5, and 1.0, for which there are 146, 66, and 47 pairs of discontinuities, respectively, exceeding this resolution.) The parabolas were assumed to be equally spaced, with $\Delta t_l$ independent of $l$. Then we employed the following ``inverse strategy''. With $N$ fixed, random values of $\zeta_{j}$ lying on the full set of discontinuities of ${\cal B}$ were chosen, subject to the constraints (\ref{e2.17}), and values $\eta_{kj}$ inside the discontinuity intervals were also chosen randomly, thus defining---see (\ref{e2.10}), (\ref{e2.9}), and (\ref{e2.3})---$\bar{\mbox{\boldmath $\eta$}}$ for a model of $N$ parabolas with the solution to its energy minimization problem already known. The algorithm was then applied to this model starting at the initialization step 0, with a (different) random collection of $\zeta_{i}$, and a choice of $M$ (the number of pairs of discontinuities in the approximate ${\cal B}^*$) to search for the correct solution. Note that the running time increases linearly with $M$. But since the size of the discontinuities in ${\cal B}^*$ decreases as $M$ increases, the probability that a random choice of $\bar{\mbox{\boldmath $\eta$}}$ will actually require a larger value of $M$ goes to zero exponentially with increasing $M$. We found that using an initial value of $M=2$ when $N$ is small saves a lot of running time, and in almost all cases $M=10$ was sufficient to find a ground state with $N$ up to 100.
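The inverse strategy is easy to reproduce in a simplified setting (our own sketch, with $\Delta t_k\equiv 1$ and a single placeholder interval $[lo,hi]$ standing in for the actual discontinuity intervals of ${\cal B}$): draw each $\beta_{kj}=-\beta_{jk}$ inside its interval and define the target forces $\bar\eta_k=\sum_j \eta_{kj}$, so that a minimizer is known by construction; the constraint (\ref{e2.18}) then holds automatically:

```python
import random

def make_test_instance(n, lo=-1.0, hi=1.0, seed=0):
    """Inverse strategy (Delta t = 1): draw each beta_kj = -beta_jk
    uniformly in its interval [lo, hi] and return the target forces
    bar_eta_k = sum_j eta_kj, so the minimizer is known by
    construction.  The interval endpoints are placeholders for the
    actual discontinuity intervals of B."""
    rng = random.Random(seed)
    beta = {}
    for k in range(n):
        for j in range(k):
            beta[(k, j)] = rng.uniform(lo, hi)
            beta[(j, k)] = -beta[(k, j)]
    bar_eta = [sum(beta[(k, j)] for j in range(n) if j != k)
               for k in range(n)]
    return beta, bar_eta
```

By antisymmetry the components of the generated target sum to zero, in agreement with (\ref{e2.18}).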
To determine the $N$ dependence of the running time, we generated and timed 10 distinct potentials for each $N$ in an increasing series up to $N=115$. Using an HP 9000 model 735 workstation with 64 MB of RAM and a CPU with 20 MFLOPs, the average time in seconds required to find a solution was approximately $10^{-5}N^4$, or a quarter of an hour for $N=100$. The time required for linear optimization varies as $N^3$, but as $N$ becomes larger, ``quasiparticle dynamics'' takes up a larger fraction of the time: 60--70\% for $N=100$. The algorithm found the correct ground state in particular cases for $N$ as large as 200. \section*{ Acknowledgments } We would like to thank S. Aubry, P. Delaly, A. Hamm, R. S. MacKay and P. Rujan for useful discussions. This work was supported by the Deutsche Forschungsgemeinschaft (Bonn, Germany).
\section{Introduction} The issue of management of uncertainty for robust system operation is of interest in a large family of complex networked systems. Examples include power, thermal and communication networks, which arise in several instances such as more electric aircraft, integrated building systems and sensor networks. Such systems typically involve a large number of heterogeneous, connected components, whose dynamics is affected by a possibly equally large number of uncertain parameters and disturbances. Uncertainty Quantification (UQ) methods provide means of calculating the probability distribution of system outputs, given the probability distribution of input parameters. Outputs of interest could include, for example, latency in communication networks, power quality and stability of power networks, and energy usage in thermal networks. Standard UQ methods either exhibit poor convergence rates, as does Monte Carlo (MC) \cite{MCS}, or suffer from the curse of dimensionality (in parameter space), as do Quasi Monte Carlo (QMC) \cite{nets,lattice}, generalized Polynomial Chaos (gPC) \cite{gPC} and the associated Probabilistic Collocation Method (PCM) \cite{xiu}, and become practically infeasible when applied to the network as a whole. Improving these techniques to alleviate the curse of dimensionality is an active area of current research (see \cite{cursedim} and references therein): notable methods include the sparse-grid collocation method \cite{sparse,MEPCM} and ANOVA decomposition \cite{Anova} for sensitivity analysis and dimensional reduction of the uncertain parametric space. However, none of these extensions exploits the underlying structure and dynamics of the networked systems. In fact, many networks of interest (e.g., power, thermal and communication networks) are often composed of weakly interacting subsystems.
As a result, it is plausible to simplify and accelerate the simulation, analysis and uncertainty propagation in such systems by suitably decomposing them. For example, the authors in \cite{gp1,gp2} used graph decomposition to facilitate stability and robustness analysis of large-scale interconnected dynamical systems. Mezic et al. \cite{igorcdc} used graph decomposition in conjunction with Perron-Frobenius operator theory to simplify the invariant measure computation and uncertainty quantification for a particular class of networks. While these approaches exploit the underlying structure of the system, they do not take advantage of the weakly coupled dynamics of the subsystems. In this paper, we propose an iterative UQ approach that exploits the weak interactions among subsystems in a networked system to overcome the dimensionality curse associated with traditional UQ methods. We refer to this approach as Probabilistic Waveform Relaxation (PWR), and propose both intrusive and non-intrusive forms of PWR. PWR relies on integrating graph decomposition techniques and the waveform relaxation scheme with gPC and PCM. Graph decomposition to identify weakly interacting subsystems can be realized by spectral graph theoretic techniques \cite{Tutorial},\cite{Chung}. Waveform relaxation (WR) \cite{wave}, a parallelizable iterative method, on the other hand, exploits this decomposition and evolves each subsystem forward in time independently but coupled with the other subsystems through their solutions from the previous iteration. In the intrusive PWR, the subsystems obtained from decomposing the original system are used to impose a decomposition on the system obtained by Galerkin projection based on the gPC expansion. Further, the weak interactions are used to discard terms which are expected to be insignificant in the gPC expansion, leading to what we call an Approximate Galerkin Projected (AGP) system.
We then propose to apply WR to the decomposed AGP system to accelerate the UQ computation. In the non-intrusive form of PWR, rather than deriving the AGP system, one works directly with the subsystems obtained from decomposing the original system. At each waveform relaxation iteration we propose to apply PCM at the subsystem level, and use gPC to propagate the uncertainty among the subsystems. Since the UQ methods are applied to relatively simpler subsystems which typically involve only a few parameters, this renders a scalable non-intrusive iterative approach to UQ. We prove convergence of the PWR approach under very general conditions. Note that spectral graph decomposition can be done completely in a distributed fashion using a recently developed wave equation based clustering method \cite{ref:wave}. Moreover, one can further exploit timescale separation in the system to accelerate WR using an adaptive form of WR \cite{AWR}. PWR, when combined with wave equation based distributed clustering and adaptive WR, can lead to a highly scalable and computationally efficient approach to UQ in complex networks. This paper is organized in six sections. In section \ref{UQ} we give the mathematical preliminaries for setting up the UQ problem for networked dynamical systems, and present an overview of the gPC and PCM techniques. In section \ref{nd} we discuss graph decomposition and waveform relaxation methods, which form the basic ingredients of PWR. Here we also describe adaptive WR and the wave equation based distributed graph decomposition technique. We introduce the intrusive and non-intrusive PWR in section \ref{scaleUQ} through a simple example, and then describe these methods in a more general setting. We also prove convergence of PWR, and analyze the scalability of the method. In section \ref{examples} we illustrate the intrusive and non-intrusive PWR on several examples.
Finally, in section \ref{conc} we summarize the main results of this paper, and present some future research directions. \section{Uncertainty Quantification in Networked Systems}\label{UQ} Consider a nonlinear system described by a system of random differential equations \begin{eqnarray} \dot{x}_1&=&f_1(\mathbf{x},\mathbf{\xi}_1,t),\notag\\ \vdots\notag\\ \dot{x}_n&=&f_n(\mathbf{x},\mathbf{\xi}_n,t),\label{complexsys} \end{eqnarray} where $\mathbf{f}=(f_1,f_2,\cdots,f_n)\in\mathbb{R}^n$ is a smooth vector field, $\mathbf{x}=(x_1,x_2,\cdots,x_n)\in \mathbb{R}^n$ are the state variables, and $\mathbf{\xi}_i\in \mathbb{R}^{p_i}$ is a vector of random variables affecting the $i$-th system. Let $\mathbf{\xi}=(\mathbf{\xi}_1^T,\cdots,\mathbf{\xi}_n^T)^T\in \mathbb{R}^p$ be the $p=\sum_{i=1}^n p_i$ dimensional random vector of uncertain parameters affecting the complete system. The solution to the initial value problem $\mathbf{x}(t_0)=\mathbf{x}_0$ will be denoted by $\mathbf{x}(t;\mathbf{\xi})$, where for brevity we have suppressed the dependence of the solution on the initial time $t_0$ and the initial condition $\mathbf{x}_0$. We shall assume that the system (\ref{complexsys}) is Lipschitz, \begin{equation}\label{LipF} ||\mathbf{f}(\mathbf{x}_1,\xi,t)-\mathbf{f}(\mathbf{x}_2,\xi,t)||\leq L(\xi)||\mathbf{x}_1-\mathbf{x}_2||, \end{equation} where the Lipschitz constant $L(\xi)$ depends on the random parameter vector and $||\cdot||$ is the Euclidean norm. We will assume that $\sup_{\mathbf{\xi}\in \mathbb{R}^p} L(\xi)=L<\infty$. Let us also define a set of quantities \begin{equation}\label{observe1} \mathbf{z}=(z_1,z_2,\cdots,z_d)=G(\mathbf{x})=(g_1(\mathbf{x}),\cdots,g_d(\mathbf{x})), \end{equation} as observables or quantities of interest. The goal is to numerically establish the effect of the input uncertainty of $\mathbf{\xi}$ on the output observables $\mathbf{z}$.
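As a toy instance of (\ref{complexsys}) with $n=p=1$ (entirely our own example, not from this paper): for $\dot{x}=-\xi x$, $x(0)=1$, with $\xi$ uniform on $[-1,1]$, the observable $z=x(1;\xi)=e^{-\xi}$ has exact mean $\sinh(1)$. The plain Monte Carlo estimate below is the slowly converging baseline against which the methods discussed here are measured:

```python
import math
import random

def mc_mean(n_samples, seed=0):
    """Monte Carlo estimate of E[x(1; xi)] for dx/dt = -xi*x, x(0) = 1,
    xi ~ U(-1, 1); here x(1; xi) = exp(-xi) is known in closed form,
    so no ODE solver is needed.  The exact mean is sinh(1)."""
    rng = random.Random(seed)
    return sum(math.exp(-rng.uniform(-1.0, 1.0))
               for _ in range(n_samples)) / n_samples
```

With $10^4$ samples the error is typically of order $10^{-2}$, reflecting the $O(N^{-1/2})$ convergence rate of MC mentioned above.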
Naturally, the solution for system (\ref{complexsys}) and the observables (\ref{observe1}) are functions of the same set of random variables $\mathbf{\xi}$, i.e., \begin{equation}\label{observe2} \mathbf{x}=\mathbf{x}(t;\mathbf{\xi}),\qquad \mathbf{z}=\mathbf{z}(t,\xi)=G(\mathbf{x}). \end{equation} In what follows we will adopt a probabilistic framework and model $\mathbf{\xi}=(\xi_1,\xi_2,\cdots,\xi_p)$ as a $p$-variate random vector with independent components in the probability space $(\Omega,\mathcal{A},\mathcal{P})$, whose event space is $\Omega$ and which is equipped with the $\sigma$-algebra $\mathcal{A}$ and probability measure $\mathcal{P}$. Throughout this paper, we will assume that the parameters $\Sigma=\{\xi_1,\xi_2,\cdots,\xi_p\}$ are mutually independent of each other. Let $\rho_i:\Gamma_i\rightarrow \mathbb{R}^+$ be the probability density of the random variable $\xi_i(\omega)$, with $\Gamma_i=\xi_i(\Omega)\subset \mathbb{R}$ being its image. Then, the joint probability density of any random parameter subset $\Lambda=\{\xi_{i_1},\xi_{i_2},\cdots,\xi_{i_m}\}\subset \Sigma$ is given by \begin{equation}\label{jointpdfsub} \mathbf{\rho}_{\Lambda}(\xi_{i_1},\cdots,\xi_{i_m})=\prod_{j=1}^{|\Lambda|}\rho_{i_j}(\xi_{i_j}),\quad \forall (\xi_{i_1},\cdots,\xi_{i_m})\in \Gamma_{\Lambda}, \end{equation} with support \begin{equation}\label{jointspace} \Gamma_{\Lambda}=\prod_{j=1}^{|\Lambda|}\Gamma_{i_j}\subset \mathbb{R}^{|\Lambda|}, \end{equation} where $|\cdot|$ denotes the cardinality of the set. Without loss of generality we will assume that $\Gamma_i=[-1\quad 1],i=1,\cdots,p$. \begin{remark} While throughout the paper we will work with ODEs (\ref{complexsys}) with parametric uncertainty, the PWR framework developed in this paper can be naturally extended to deal with 1) systems of differential algebraic equations (DAEs), and 2) time varying uncertainty. Both these extensions will be illustrated through examples in section \ref{examples}.
\end{remark} \subsection{Uncertainty Quantification Methods} In this section, we describe two interrelated UQ approaches: generalized polynomial chaos (gPC) and the probabilistic collocation method (PCM). The gPC is an intrusive approach which requires explicit access to the system equations (\ref{complexsys}), while PCM is a related sampling-based, non-intrusive way of implementing gPC, which treats the system (\ref{complexsys}) as a black box. \subsubsection{Generalized Polynomial Chaos}\label{gPC} In the finite-dimensional random space $\Gamma_{\Sigma}$ defined in (\ref{jointspace}), the gPC expansion seeks to approximate a random process via orthogonal polynomials of random variables. Let us define the one-dimensional orthogonal polynomial space associated with each random variable $\xi_k, k=1,\cdots,p$ as \begin{equation}\label{polyspacegenloc} W^{k,d_k}\equiv\{v:\Gamma_k\rightarrow \mathbb{R}:v\in\mbox{span}\{\psi_i(\xi_k)\}_{i=0}^{d_k}\}, \end{equation} where $\{\psi_i(\xi_k)\}_{i=0}^{d_k}$ denotes the orthonormal polynomial basis from the so-called Wiener-Askey polynomial chaos \cite{gPC}. The Askey scheme of polynomials contains various classes of orthogonal polynomials such that their associated weighting functions coincide with the probability density functions of the underlying random variables.
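For a uniform density on $[-1,1]$, the Wiener-Askey basis consists of (normalized) Legendre polynomials. The short sketch below (our own illustration, using NumPy's Legendre utilities) builds the orthonormal basis $\psi_n(\xi)=\sqrt{2n+1}\,P_n(\xi)$ and verifies the orthonormality $\langle\psi_i,\psi_j\rangle=\delta_{ij}$ with respect to the density $1/2$ by Gauss quadrature:

```python
import numpy as np
from numpy.polynomial.legendre import legval, leggauss

def psi(n, x):
    # orthonormal Legendre polynomial w.r.t. the uniform density 1/2 on [-1, 1]
    c = np.zeros(n + 1)
    c[n] = 1.0
    return np.sqrt(2 * n + 1) * legval(x, c)

nodes, weights = leggauss(10)      # exact for polynomial integrands up to degree 19
G = np.array([[np.sum(weights * 0.5 * psi(i, nodes) * psi(j, nodes))
               for j in range(4)] for i in range(4)])
print(np.round(G, 12))             # Gram matrix: should be the identity
```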
The corresponding multivariate orthogonal polynomial space in $\Gamma_{\Sigma}$ is defined as \begin{equation}\label{polyspacegen} W^{\Sigma,P}\equiv \bigotimes_{\mathbf{d}\in \mathbb{P}}W^{i,d_i}, \end{equation} where the tensor product is over all possible combinations of the multi-index $\mathbf{d}=(d_1,d_2,\cdots,d_{|\Sigma|}) \in \mathbb{N}^{|\Sigma|}$ in the set $\mathbb{P}$, \begin{equation}\label{Pdef} \mathbb{P}=\{\mathbf{d}\in \mathbb{N}^{|\Sigma|} : |\mathbf{d}|=\sum_{i=1}^{|\Sigma|} d_i\leq \overline{P} \quad \mbox{and} \quad d_i\leq P_i\} \end{equation} and $P=(P_1,\cdots,P_{|\Sigma|})^T\in \mathbb{N}^{|\Sigma|}$ is a vector of integers which restricts the maximum order of expansion in the $i$-th variable $\xi_i$ to be $P_i$, and $\overline{P}=\max_{i}P_i$. Thus, $W^{\Sigma,P}$ is the space of $|\Sigma|$-variate orthonormal polynomials of total degree at most $\overline{P}$, with additional constraints on the individual degrees of the polynomials, and its basis functions $\Psi_i^{\Sigma,P}(\mathbf{\xi})$ satisfy \begin{equation}\label{orthogen} \int_{\Gamma_{\Sigma}}\Psi_i^{\Sigma,P}(\mathbf{\xi})\Psi_j^{\Sigma,P}(\mathbf{\xi})\mathbf{\rho}_{\Sigma}(\xi)d\mathbf{\xi}=\delta_{ij}, \end{equation} for all $1\leq i,j\leq N_{\Sigma}=\mbox{dim}(W^{\Sigma,P})$. Note that in standard gPC all expansion orders are taken to be identical, i.e. $P_1=P_2=\cdots=P_{|\Sigma|}=\overline{P}$, so that $\mbox{dim}(W^{\Sigma,P})=\frac{(\overline{P}+|\Sigma|)!}{\overline{P}!|\Sigma|!}$. We will however take advantage of an adaptive expansion, a notion which will be fully developed in section \ref{approxPWR}. The major advantage of applying the gPC is that a random differential equation can be transformed into a system of deterministic equations.
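The dimension formula above is the source of the curse of dimensionality on the input side: $N_\Sigma$ grows combinatorially with the number of random variables. A two-line check (the expansion orders are hypothetical, chosen only for illustration):

```python
from math import comb

def gpc_dim(p, P):
    # dim W = (P + p)! / (P! p!) for an isotropic total-degree-P expansion in p variables
    return comb(P + p, p)

for p in (3, 10, 20):
    print(p, gpc_dim(p, 3))   # basis size for a cubic expansion
```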
A typical approach is to employ a stochastic Galerkin projection, in which all the state variables are expanded in the polynomial chaos basis with corresponding modal coefficients ($a_{ik}(t)$), as \begin{equation}\label{expx} x_k(t,\mathbf{\xi})\approx x_k^{\Sigma,P}(t,\mathbf{\xi})=\sum_{i=1}^{N_{\Sigma}} a_{ik}(t)\Psi_i^{\Sigma,P}(\mathbf{\xi}),\quad k=1,\cdots,n, \end{equation} where the sum has been truncated to a finite order. Substituting these expansions into Eq. (\ref{complexsys}), and using the orthogonality property of polynomial chaos (\ref{orthogen}), we obtain for $k=1,\cdots,n,\quad j=1,\cdots,N_{\Sigma}$, \begin{equation}\label{galproj} \dot{a}_{jk}=F_{jk}(\mathbf{a},t), \end{equation} a system of deterministic ODEs describing the evolution of the modal coefficients, with initial conditions \begin{equation}\label{galprojinit} a_{jk}(0)=\int_{\Gamma_{\Sigma}}x_k(0,\mathbf{\xi})\Psi_j^{\Sigma,P}(\mathbf{\xi})\mathbf{\rho}_{\Sigma}(\mathbf{\xi})d\mathbf{\xi}, \end{equation} and $\mathbf{a}=(a_{11},\cdots,a_{N_{\Sigma}1},\cdots,a_{1n},\cdots,a_{N_{\Sigma}n})^T$, \begin{equation}\label{Fdef} F_{jk}(\mathbf{a},t)=\int_{\Gamma_{\Sigma}}f_k(\mathbf{x}^{\Sigma,P}(t,\mathbf{\xi}),\mathbf{\xi}_k,t)\Psi_j^{\Sigma,P}(\mathbf{\xi})\mathbf{\rho}_{\Sigma}(\mathbf{\xi})d\mathbf{\xi}, \end{equation} with $\mathbf{x}^{\Sigma,P}(t,\mathbf{\xi})=(x_1^{\Sigma,P}(t,\mathbf{\xi}),\cdots,x_n^{\Sigma,P}(t,\mathbf{\xi}))$. This system can be solved with any numerical method dealing with initial-value problems, e.g., the Runge-Kutta method.
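To make the projection concrete, the sketch below carries out stochastic Galerkin for the scalar test problem $\dot{x}=-\xi x$, $x(0)=1$, with $\xi$ uniform on $[-1,1]$ (a hypothetical example of ours, not one of the paper's systems). The Galerkin system (\ref{galproj}) reduces to $\dot{a}_j=-\sum_i \langle \xi\,\psi_i\psi_j\rangle a_i$, which is integrated with classical RK4; the zeroth mode recovers the exact mean $\sinh(T)/T$:

```python
import numpy as np
from numpy.polynomial.legendre import legval, leggauss

N = 8                                   # retained modes psi_0, ..., psi_{N-1}
nodes, weights = leggauss(24)

def psi(n, x):
    c = np.zeros(n + 1)
    c[n] = 1.0
    return np.sqrt(2 * n + 1) * legval(x, c)

Psi = np.array([psi(n, nodes) for n in range(N)])      # N x Q evaluation table
# Galerkin coupling matrix M[j, i] = <xi * psi_i * psi_j> (uniform density 1/2)
M = (Psi * (0.5 * weights * nodes)) @ Psi.T

a = np.zeros(N)
a[0] = 1.0                              # x(0) = 1 excites only the mean mode

def rhs(a):
    return -M @ a                       # modal ODEs: da_j/dt = -sum_i M[j,i] a_i

dt, T = 1e-3, 1.0
for _ in range(int(T / dt)):            # classical RK4 time stepping
    k1 = rhs(a); k2 = rhs(a + 0.5 * dt * k1)
    k3 = rhs(a + 0.5 * dt * k2); k4 = rhs(a + dt * k3)
    a += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

mean_gpc = a[0]                         # psi_0 = 1, so a_0(t) is the mean
print(mean_gpc, np.sinh(T) / T)
```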
Similarly, the observables can be expanded in the gPC basis as \begin{equation}\label{expz} z_k(t,\mathbf{\xi})\approx z_k^{\Sigma,P}(t,\mathbf{\xi})=\sum_{i=1}^{N_{\Sigma}} b_{ik}(t)\Psi_i^{\Sigma,P}(\mathbf{\xi}), \end{equation} where \begin{eqnarray} b_{jk}(t)&=&\int_{\Gamma_{\Sigma}}z_k(t,\mathbf{\xi})\Psi_j^{\Sigma,P}(\mathbf{\xi})\mathbf{\rho}_{\Sigma}(\mathbf{\xi})d\mathbf{\xi}\label{z1approx} \end{eqnarray} for $k=1,\cdots,d$. Hence, once the solution to the system (\ref{galproj}) has been obtained, the coefficients $b_{jk}$ can be approximated as \begin{equation} b_{jk}\approx \int_{\Gamma_{\Sigma}}g_k(\mathbf{x}^{\Sigma,P}(t,\mathbf{\xi}))\Psi_j^{\Sigma,P}(\mathbf{\xi})\mathbf{\rho}_{\Sigma}(\mathbf{\xi})d\mathbf{\xi}. \end{equation} Such Galerkin procedures have been used extensively in the literature. In many instances, however, Galerkin projection may not be possible because direct access to the system equations (\ref{complexsys}) is unavailable. In many other instances such intrusive methods are not feasible even when the system equations are available, because of the cost of deriving and implementing the Galerkin system within the available computational tools. To circumvent this difficulty, the probabilistic collocation method has been developed. \subsubsection{Probabilistic Collocation Method}\label{PCM} PCM is a non-intrusive, sampling-based approach for implementing gPC. Instead of projecting each state variable onto the polynomial chaos basis, the collocation approach evaluates the integrals of the form (\ref{z1approx}) by evaluating the integrand at the roots of the appropriate basis polynomials \cite{xiu}.
Given a 1D probability density function $\rho_j(\xi_j)$, PCM, based on a Gauss quadrature rule, approximates the integral of a function $g$ with respect to the density $\rho_j(\xi_j)$ as follows: \begin{equation}\label{1D} \int_{-1}^1g(\xi_j)\rho_j(\xi_j)d\xi_j\approx\mathcal{U}_{l_j}[g]=\sum_{k=1}^{m_{l_j}}w_{l_jk}g(r_{l_j k}), \quad j=1,\cdots,p, \end{equation} where $r_{l_j k}\in C_{l_j}$ are the Gauss collocation points with associated weights $w_{l_jk}$, $l_j$ is the accuracy level of the quadrature formula, and $m_{l_j}$ is the number of quadrature points corresponding to this accuracy level. Building on the 1D quadrature formula, the full grid PCM leads to the following cubature rule, \begin{eqnarray} &&\int_{-1}^1\int_{-1}^1\cdots \int_{-1}^1 g(\xi_1,\cdots,\xi_p)\mathbf{\rho}_{\Sigma}(\mathbf{\xi})d\mathbf{\xi}\notag\\ &\approx& \mathcal{I}(l_1,\cdots,l_p,p)[g]=(\mathcal{U}_{l_1}\otimes\mathcal{U}_{l_2}\cdots\mathcal{U}_{l_p})[g]\notag\\ &=&\sum_{j_1=1}^{m_{l_1}}\cdots \sum_{j_p=1}^{m_{l_p}}w_{\mathbf{l}\mathbf{j}}g(\mathbf{r}_{\mathbf{l}\mathbf{j}}),\label{ndint} \end{eqnarray} where $\mathbf{r}_{\mathbf{l}\mathbf{j}}=(r_{l_1j_1},\cdots,r_{l_pj_p})$, with $\mathbf{l}=(l_1,\cdots,l_p)$, $\mathbf{j}=(j_1,\cdots,j_p)$ and $w_{\mathbf{l}\mathbf{j}}=w_{l_1j_1}\cdots w_{l_pj_p}$. To compute $\mathcal{I}(\mathbf{l},p)$ we need to evaluate the function on the full collocation grid $\mathcal{C}(\mathbf{l},p)$, which is given by the tensor product of the 1D grids \begin{equation}\label{fullgird} \mathcal{C}(\mathbf{l},p)=C_{l_1}\times\cdots \times C_{l_p}, \end{equation} with the total number of collocation points being $Q=\prod_{j=1}^p m_{l_j}$. In this framework, therefore, for any $t$, the approximations to the modal coefficients $a_{jk}(t)$ (see Eq. (\ref{expx})) and $b_{jk}(t)$ (see Eq.
(\ref{expz})) can be obtained as \begin{eqnarray} a_{jk}(t)&=&\int_{\Gamma_{\Sigma}}x_k^{\Sigma,P}(t,\mathbf{\xi})\Psi_j^{\Sigma,P}(\mathbf{\xi})\mathbf{\rho}_{\Sigma}(\mathbf{\xi})d\mathbf{\xi}\notag\\ &\approx&\sum_{j_1=1}^{m_{l_1}}\cdots \sum_{j_p=1}^{m_{l_p}}w_{\mathbf{l}\mathbf{j}}\Psi_j^{\Sigma,P}(\mathbf{r}_{\mathbf{l}\mathbf{j}})x_k(t,\mathbf{r}_{\mathbf{l}\mathbf{j}}), \label{z4approx} \end{eqnarray} with a similar expression for $b_{jk}(t)$. Note that to compute the summations arising in (\ref{z4approx}), the solution $\mathbf{x}(t,\mathbf{r}_{\mathbf{l}\mathbf{j}})$ of the system (\ref{complexsys}) is required at each collocation point $\mathbf{r}_{\mathbf{l}\mathbf{j}}$ in the full collocation grid $\mathcal{C}(\mathbf{l},p)$. The simplicity of the collocation framework lies in the fact that it only requires repeated runs of a deterministic solver, without the explicit projection step of gPC. If we choose the same number of collocation points in each dimension, i.e. $m_{l_1}=m_{l_2}=\cdots=m_{l_p}\equiv l$, the total number of points is $Q=l^p$, and the computational cost increases rather steeply with the number of uncertain parameters $p$. Hence, for large systems ($n \gg 1$) with a large number of uncertain parameters ($p\gg1$), PCM becomes computationally intensive. As discussed in the introduction, alleviating this curse of dimensionality is an active area of current research \cite{cursedim}. In this paper we propose a new uncertainty quantification approach which exploits the underlying network structure and dynamics to overcome the dimensionality curse associated with PCM. The key methodologies for accomplishing this are graph decomposition and waveform relaxation, which are discussed in the subsequent sections.
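For the same scalar test problem as before ($\dot{x}=-\xi x$, our own illustration), the non-intrusive recipe is: run the deterministic solver once per Gauss collocation point, then recover the chaos coefficients from the quadrature rule (\ref{z4approx}). The last loop prints the full-tensor-grid size $Q=l^p$, which is what makes the plain method intractable for many parameters:

```python
import numpy as np
from numpy.polynomial.legendre import legval, leggauss

def solve_deterministic(xi, t):
    # black-box deterministic solver for dx/dt = -xi*x, x(0) = 1
    return np.exp(-xi * t)

def psi(n, x):
    c = np.zeros(n + 1)
    c[n] = 1.0
    return np.sqrt(2 * n + 1) * legval(x, c)

nodes, weights = leggauss(9)            # 9-point 1D collocation grid
T = 1.0
x_at_nodes = solve_deterministic(nodes, T)   # one solver run per node

# a_j = <x psi_j>, approximated by the quadrature rule (uniform density 1/2)
coeffs = np.array([np.sum(0.5 * weights * x_at_nodes * psi(j, nodes))
                   for j in range(6)])
print(coeffs[0], np.sinh(T) / T)        # collocation mean vs. exact mean

# the full tensor grid Q = l**p explodes with the number of parameters p
for p in (2, 5, 10):
    print(p, 9 ** p)
```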
\section{Graph Decomposition and Waveform Relaxation}\label{nd} \subsection{Waveform Relaxation}\label{wave} In this section we describe the basic mathematical concept of the Waveform Relaxation (WR) method for iteratively solving the system of differential equations of the form (\ref{complexsys}). For the purposes of the discussion here, we fix the parameters $\mathbf{\xi}$ in the system (\ref{complexsys}) at their nominal mean values. The general structure of a WR algorithm for analyzing system (\ref{complexsys}) over a given time interval $[0, T]$ consists of two major steps: the \emph{assignment-partitioning process} and the \emph{relaxation process} \cite{waveform1,wave}. \emph{Assignment-partitioning process}: Let $\mathcal{N}=\{1,\cdots,n\}$ be the set of state indices, and $\mathcal{C}_i,i=1,\cdots,m$ be a partition of $\mathcal{N}$ such that \begin{equation}\label{part} \mathcal{N}=\bigcup_{i=1}^m\mathcal{C}_i,\qquad \mathcal{C}_i\bigcap \mathcal{C}_j=\emptyset,\forall i\neq j. \end{equation} We shall represent by $\mathcal{D}:\mathcal{N}\rightarrow \mathcal{M}\equiv\{1,2,\cdots,m\}$ the map which assigns a state index to its partition index, i.e. $\mathcal{D}(i)=j$, where $j$ is such that $i\in \mathcal{C}_j$. Without loss of generality, we can rewrite Eq.
\ref{complexsys} after the assignment-partitioning process as: \begin{eqnarray} \dot{\mathbf{y}}_1 &=&\mathbf{F}_1(\mathbf{y}_1,\mathbf{d}_1(t),\Lambda_1,t)\notag \\ \vdots\notag\\ \dot{\mathbf{y}}_m &=&\mathbf{F}_m(\mathbf{y}_m,\mathbf{d}_m(t),\Lambda_m,t),\label{decomp} \end{eqnarray} where, for each $i=1,\cdots,m$, \begin{equation}\label{Fidef} \mathbf{F}_i\equiv (f_{j^i_1},\cdots,f_{j^i_{M_i}})^T, \end{equation} \begin{equation}\label{ydef} \mathbf{y}_i\equiv(x_{j^i_1},\cdots,x_{j^i_{M_i}})^T, \end{equation} with initial condition \begin{equation}\label{icfory} \mathbf{y}_{0i}\equiv(x_{0j^i_1},\cdots,x_{0j^i_{M_i}})^T, \end{equation} and \begin{equation}\label{lamdef} \Lambda_i\equiv (\mathbf{\xi}_{j^i_1},\cdots,\mathbf{\xi}_{j^i_{M_i}})^T, \end{equation} are the subvectors assigned to the $i$-th partitioned subsystem, such that $ j^i_k\in \mathcal{C}_i,\quad k=1,\cdots,M_i=|\mathcal{C}_i|$, and \begin{equation}\label{ddef} \mathbf{d}_i(t)\equiv(\mathbf{y}_{j_1}^T,\cdots,\mathbf{y}_{j_{N_i}}^T)^T, \end{equation} is a decoupling vector, with $j_k\in \mathcal{M}_i$ and $k=1,\cdots,N_i=|\mathcal{M}_i|$. Here, $\mathcal{M}_i$ is the set of indices of the partitions (or subsystems) with which the $i$-th partition (or subsystem) interacts, and is given by \begin{equation}\label{neigh} \mathcal{M}_i=\mathcal{M}\setminus \Im(\mathcal{D}(\mathcal{N}_i^c)), \end{equation} where $\mathcal{N}_i^c=\{j\in\mathcal{N}:\frac{\partial \mathbf{F}_i}{\partial x_j}=\mathbf{0}\}$ and $\Im(\cdot)$ denotes the image of the map $\mathcal{D}$. \emph{Relaxation Process}: The relaxation process is an iterative procedure with the following steps: \begin{itemize} \item Step 1: (Initialization of the relaxation process) Set $I=1$ and guess an initial waveform $\{\mathbf{y}_i^0(t): t\in [0,T]\}$ such that $\mathbf{y}^0_i(0)=\mathbf{y}_{0i}$, $i=1,\cdots,m$.
\item Step 2: (Analyzing the decomposed system at the $I$-th WR iteration) For each $i=1,\cdots,m$, set \begin{equation}\label{dk1} \mathbf{d}_i^I(t)\equiv((\mathbf{y}_{j_1}^{I-1})^T,\cdots,(\mathbf{y}_{j_{N_i}}^{I-1})^T)^T, \end{equation} and solve the subsystem \begin{equation}\label{dk2} \dot{\mathbf{y}}^I_i=\mathbf{F}_i(\mathbf{y}^I_i,\mathbf{d}_i^I(t),\Lambda_i,t), \end{equation} over the interval $[0,T]$ with initial condition $\mathbf{y}^I_i(0)=\mathbf{y}_{0i}$, to obtain $\{\mathbf{y}^I(t) : t \in [0,T]\}$. \item Step 3: Set $I = I + 1$ and go to step 2 until satisfactory convergence is achieved. \end{itemize} The general conditions for convergence of WR for a system of differential algebraic equations (DAEs) can be found in \cite{waveform1,AWR}. Here, we recall a result from \cite{AWR}, specializing it to a system of differential equations. \begin{proposition}Convergence of WR for ODEs (see \cite{waveform1} for proof):\label{WRConv} Given that the system (\ref{complexsys}) is Lipschitz (condition~\ref{LipF}), then for any initial piecewise continuous waveform $\{\mathbf{y}_i^0(t): t\in [0,T]\}$ such that $\mathbf{y}^0_i(0)=\mathbf{y}_{0i}$ (see definition (\ref{icfory})), $i=1,\cdots,m$, the WR algorithm converges to the solution of (\ref{complexsys}) with initial condition $\mathbf{x}_0$. \end{proposition} A more intuitive analysis of the error at each waveform iteration is described in \cite{AWR}. Let $\bar{\mathbf{y}}$ be the exact solution of the differential equation (\ref{decomp}) and define $E_I$ to be the error of the $I$-th iterate, that is, \begin{equation} E_I(t)= \mathbf{y}^I(t) - \bar{\mathbf{y}}(t). \end{equation} As shown in \cite{AWR}, the error $|E_{I}|$ on the interval $[0, T]$ is bounded as follows \begin{equation} \label{BoundsTheo} |E_I(t)| \leq \frac{C^I \eta^I T^I}{I!} |E_0(t)|, \end{equation} with $ C = e^{\mu T}$. Here $C$ and $\eta$ are related to the Lipschitz constants of the waveform relaxation operator~\cite{AWR}.
It is important to note here that the $I!$ in the denominator dominates the error. Thus, with enough iterations one can make the error fall below any desired threshold. It is also evident from Eq.~\ref{BoundsTheo} that the error of standard waveform relaxation crucially depends on $ T $. The longer the time interval, the greater the number of iterations needed to bound the error below a desired tolerance. Based on this observation, a novel ``adaptive'' version of waveform relaxation has been developed in~\cite{AWR}, which we refer to as adaptive waveform relaxation (AWR). The idea here is to perform waveform relaxation in ``windows'' of time that are picked so as to reduce $I$ in Eq.~\ref{BoundsTheo}. Specifically, one can pick small time intervals for computation when the solution to (\ref{complexsys}) changes significantly (implying $E_{0}(t)$ is large) and pick large intervals when the solution changes little (consequently $E_{0}(t)$ is small). The solution from one time interval is extrapolated to the next using a standard extrapolation formula~\cite{Stoer}, and the initial error is estimated using \begin{equation} \label{E0form} \tilde{E}_{I+1, 0}(t) = \frac{\phi^{(l)}(x^{I+1}(\xi), x^{I+1}(\xi))}{(l+1)!} \, \omega(t), \end{equation} where \begin{equation} \label{omegaform} \omega(t) = (t - t_0)(t - t_1) \hdots (t - t_l). \end{equation} Here $t_{0}, t_{1},\hdots,t_{l}$ are the points through which one passes the extrapolating polynomial~\cite{AWR}, and $ \phi^{(l)} = \frac{d^l\phi}{dt^l}$ is the $ l $-th derivative of the waveform relaxation operator $ \phi $ with respect to $ t $ (see~\cite{Stoer,AWR}). The algorithm to compute the length of the time windows is as follows. \emph{Adaptive Waveform Relaxation}: To compute the window length $ \Delta T_{I+1} $, execute the following steps: \begin{enumerate} \item Set $ \Delta T_{I+1} = 2 \Delta T_I $ and $ \delta = \frac{1}{20} \Delta T_I $.
\item Evaluate $ \tilde{E}_{I+1, 0}(T_I + \Delta T_{I+1}) $ using Eqn.~\ref{E0form} to estimate $ \norm{E_{I+1, 0}} $ and compute $ \norm{\hat{E}_{I+1, r}} $ with the aid of the following equation, \begin{equation} \label{Test} \norm{\hat{E}_{I+1, r}} = \frac{ \left(e^{\mu \Delta T_{I+1}} \eta \Delta T_{I+1}\right)^r}{r!} \norm{E_{I+1, 0}}. \end{equation} \item If $ \norm{\hat{E}_{I+1, r}} > \varepsilon $ and $ \Delta T_{I+1} > \frac{1}{2} \Delta T_I $, set $ \Delta T_{I+1} = \Delta T_{I+1} - \delta $ and repeat step~2. \end{enumerate} We define the minimal window length to be $ \Delta T_{I+1} = \frac{1}{50} T $. The above procedure gives a sequence of time intervals $ [0, T_1] $, $ [T_1, T_2] $, $ \dots $, $ [T_{\nu-1}, T_{\nu}] $, where $ T_{\nu} = T $, on which WR is performed (as described in the relaxation process) with an initial ``guess'' waveform provided by an extrapolation of the solution on the previous interval~\cite{AWR}. AWR is found to accelerate simulations by orders of magnitude over traditional WR~\cite{AWR}. In this work, we propose to use AWR for simulating the set of differential equations that arises from intrusive polynomial chaos. As mentioned before, the curse of dimensionality gives rise to a combinatorial number of equations~\cite{gPC}, making AWR particularly attractive. While the convergence of WR or AWR is guaranteed irrespective of how the system is decomposed in the assignment-partitioning step, the rate of convergence depends on the decomposition~\cite{AWR}. For a given nonlinear system, determining a decomposition that leads to an optimal rate of AWR convergence is an NP-complete problem~\cite{AWR}. Ideally, to minimize the number of iterations required for convergence, one would like to place strongly interacting equations/variables on a single processor, with weak interactions between the variables or equations on different processors.
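The relaxation process is easy to see on a toy pair of weakly coupled linear ODEs (our own example; forward Euler and all parameter values are illustrative assumptions, not the paper's). Each Jacobi-style sweep solves one subsystem against the other's previous-iterate waveform, and the error relative to the exact coupled solution contracts roughly like $(\epsilon T)^I/I!$ until it hits the discretization floor:

```python
import numpy as np

eps, T, m = 0.2, 2.0, 2001              # weak coupling, horizon, time-grid size
t = np.linspace(0.0, T, m)
dt = t[1] - t[0]
x0 = np.array([1.0, 2.0])

def relax(d, y0):
    # forward-Euler solve of y' = -y + eps*d(t) against a frozen waveform d
    y = np.empty(m)
    y[0] = y0
    for k in range(m - 1):
        y[k + 1] = y[k] + dt * (-y[k] + eps * d[k])
    return y

# reference solution of the coupled pair via its normal modes x1 +/- x2
sm = (x0[0] + x0[1]) * np.exp(-(1 - eps) * t)
df = (x0[0] - x0[1]) * np.exp(-(1 + eps) * t)
exact = np.vstack(((sm + df) / 2, (sm - df) / 2))

# Jacobi-type WR: each subsystem sees the other's previous-iterate waveform
w = np.vstack((np.full(m, x0[0]), np.full(m, x0[1])))   # constant initial guess
errs = []
for _ in range(8):
    w = np.vstack((relax(w[1], x0[0]), relax(w[0], x0[1])))
    errs.append(np.max(np.abs(w - exact)))
print(errs)
```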
We show in~\cite{AWR} that spectral clustering~\cite{Tutorial}, along with horizontal-vertical decomposition~\cite{igorcdc}, is a good heuristic for decomposing systems for fast convergence in WR and AWR. For this task, we now discuss a novel decentralized spectral clustering approach~\cite{ref:wave} that, when coupled with AWR~\cite{AWR}, provides a powerful tool for simulating large dynamical systems. \subsection{Graph Decomposition}\label{specclus} The problem of partitioning the system of equations (\ref{complexsys}) into subsystems, based on how they interact or are coupled to each other, can be formulated as a graph decomposition problem. Given the set of states $x_1,\cdots,x_n$ and some notion of dependence $\bar{w}_{ij}\ge 0,i=1,\cdots,n,j=1,\cdots,n$ between pairs of states, an undirected graph $G=(V,E)$ can be constructed. The vertex set~$V = \{1,\dots,n\}$ of this graph represents the states $x_i$, and the edge set is $E\subseteq V\times V$, where a weight~$\bar{w}_{ij} \geq 0$ is associated with each edge $(i,j)\in E$, and $\overline{W}=[\bar{w}_{ij}]$ is the $n\times n$ weighted adjacency matrix of~$G$. In order to quantify the coupling strength $\bar{w}_{ij}$ between nodes or states, we propose to use \begin{equation}\label{similiarity} \bar{w}_{ij}=\frac{1}{2}[|\overline{J}_{ij}|+|\overline{J}_{ji}|], \end{equation} where $\overline{J}=[\frac{1}{T_s}\int_{t_0}^{t_0+T_s}J_{ij}(\mathbf{x}(t;\mathbf{\xi}_m),\mathbf{\xi}_m,t)dt]$ is the time average of the Jacobian \begin{equation}\label{Jac} J(\mathbf{x},\mathbf{\xi},t)=\left[\frac{\partial f_i(\mathbf{x}(t;\mathbf{\xi}),\mathbf{\xi},t)}{\partial x_j}\right], \end{equation} computed along the solution $\mathbf{x}(t;\mathbf{\xi})$ of the system (\ref{complexsys}) for the nominal parameter values $\mathbf{\xi}_m$. The use of the system Jacobian for horizontal-vertical graph decomposition can also be found in \cite{igorcdc}.
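A minimal sketch of the coupling weights (\ref{similiarity}): the four-state system below (entirely hypothetical, with two strongly coupled pairs linked by a weak gain $K$) is integrated along its nominal trajectory, the Jacobian is approximated by finite differences and time-averaged, and the symmetrized weights clearly separate strong edges from weak ones:

```python
import numpy as np

K = 0.05                                 # weak inter-pair coupling gain
def f(x):
    # two strongly coupled pairs (x0, x1) and (x2, x3), weakly linked via x1-x2
    return np.array([-x[0] + np.sin(x[1]),
                     -x[1] + np.sin(x[0]) + K * x[2],
                     -x[2] + np.sin(x[3]) + K * x[1],
                     -x[3] + np.sin(x[2])])

def jacobian_fd(x, h=1e-6):
    # forward-difference approximation of J_ij = d f_i / d x_j
    n = x.size
    J = np.empty((n, n))
    fx = f(x)
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (f(x + e) - fx) / h
    return J

# time-average the Jacobian along the nominal trajectory (forward Euler)
x = np.array([1.0, -0.5, 0.8, 0.3])
dt, steps = 0.01, 500
Jbar = np.zeros((4, 4))
for _ in range(steps):
    Jbar += jacobian_fd(x)
    x += dt * f(x)
Jbar /= steps

W = 0.5 * (np.abs(Jbar) + np.abs(Jbar.T))   # symmetrized coupling strengths
np.fill_diagonal(W, 0.0)
print(np.round(W, 3))                       # intra-pair weights dwarf the K links
```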
We now discuss spectral clustering (see~\cite{Tutorial}), a popular graph decomposition/clustering approach that allows one to partition an undirected graph given its adjacency matrix $\overline{W}$. In this method, a (normalized) graph Laplacian is first constructed as follows~\cite{Chung,Fiedler,Fiedler2}, \begin{align} L_{ij} = \begin{cases} 1 & \mbox{if}\: i = j\\ -\bar{w}_{ij}/\sum_{\ell=1}^n \bar{w}_{i\ell} & \mbox{if}\: (i,j) \in E\\ 0 & \mbox{otherwise}\,, \end{cases} \label{eq:ldef} \end{align} or equivalently as $L= I-D^{-1}\overline{W}$, where $D$ is the diagonal matrix of the row sums of $\overline{W}$. The clustering assignment/decomposition is then obtained by computing the eigenvectors/eigenvalues of $L$. In particular, one uses the signs of the components of the second and higher eigenvectors to partition the nodes of the graph into clusters~\cite{Tutorial}. Traditionally, one can use standard matrix algorithms for the eigenvector/eigenvalue computation~\cite{GolubVanLoan96}. However, as the size of the dynamical system or network (and thus the corresponding adjacency matrix) increases, the execution of these standard algorithms becomes infeasible on monolithic computing devices. To address this issue, distributed eigenvalue/eigenvector computation methods have been developed, see for example \cite{KempeMcSherry08}. In~\cite{ref:wave}, a wave equation based distributed algorithm has been developed to partition large graphs, which computes the partitions without constructing the entire adjacency matrix $\overline{W}$ of the graph~\cite{Tutorial}. In this method one ``hears'' the clusters in the graph by computing the frequencies (using the Fast Fourier Transform (FFT) locally at each node) at which the graph ``resonates''. In particular, one can show that these ``resonant frequencies'' are related to the eigenvalues of the graph Laplacian $L$ (\ref{eq:ldef}), and the coefficients of the FFT expansion are the components of the eigenvectors.
In fact, the algorithm is provably equivalent to standard spectral clustering~\cite{Tutorial}; for details see~\cite{ref:wave}. The steps of the wave equation based clustering algorithm are as follows. One starts by writing the local update equation at node $i$ based on the discretized wave equation, \begin{align} u_{i}(t) = 2u_{i}(t-1) - u_{i}(t-2) - c^2\displaystyle\sum_{j\in\mathcal{N}_i}L_{ij} u_{j}(t-1)\, \label{onenodewave} \end{align} where $u_{i}(t)$ is the value of $u$ at node $i$ at time $t$ and $L_{ij}$ are the local entries of the graph Laplacian. At each node $i$, $u_{i}(0) = u_{i}(-1)$ is set to a random number on the interval $\left[0,1\right]$. One then updates the value of $u_{i}$ using Eqn. (\ref{onenodewave}) until $t=T_{max}$ (for a discussion on how to pick $T_{max}$ see~\cite{ref:wave}). Note that $u_{i}(t)$ is a scalar quantity and one only needs nearest neighbor information in Eqn. (\ref{onenodewave}) to compute it. One then performs a local FFT on $\left[u_{i}(1),\dots,u_{i}(T_{max})\right]$ and assigns the coefficients of the peaks of the FFT to $v_{ij}$. Here the frequency of the $j$-th peak is related to $\lambda_{j}$, the $j$-th eigenvalue of the graph Laplacian $L$, and $v_{ij}$ is the $i$-th component of the $j$-th eigenvector. In~\cite{ref:wave}, it has been shown that for wave speed $c<\sqrt{2}$ in Eqn. (\ref{onenodewave}), the above wave equation based iterative algorithm is stable and converges. Moreover, the algorithm converges in $O(\sqrt{\tau})$ steps, where $\tau$ is the mixing time of the Markov chain~\cite{ref:wave} associated with the graph $G$. The competing state-of-the-art algorithm~\cite{KempeMcSherry08} converges in $O(\tau(\log n)^{2})$ steps. For large graphs or datasets, $O(\sqrt{\tau})$ convergence is shown to provide orders of magnitude improvement over algorithms that converge in $O(\tau(\log n)^{2})$. For a detailed analysis and derivations related to the algorithm, we refer the reader to~\cite{ref:wave}.
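The sketch below (a hypothetical six-node graph made of two triangles joined by one bridge edge; all parameters are ours, and the dense eigensolve is used only to locate the resonant FFT bin in this small demo) simulates the update (\ref{onenodewave}) with $c=1<\sqrt{2}$. The resonance for eigenvalue $\lambda$ sits at $\omega=\arccos(1-c^2\lambda/2)$, and comparing each node's FFT coefficient at that frequency against node $0$'s (a phase-invariant product) recovers the Fiedler sign pattern:

```python
import numpy as np

# two triangles {0,1,2} and {3,4,5} joined by the single bridge edge (2, 3)
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
n = 6
Wadj = np.zeros((n, n))
for i, j in edges:
    Wadj[i, j] = Wadj[j, i] = 1.0
D = np.diag(Wadj.sum(axis=1))
L = np.eye(n) - np.linalg.inv(D) @ Wadj      # normalized Laplacian I - D^{-1} W

# reference second eigenvalue (used here only to locate the resonant FFT bin)
lam = np.sort(np.linalg.eigvals(L).real)
lam2 = lam[1]

# wave-equation iteration u(t) = 2u(t-1) - u(t-2) - c^2 L u(t-1), with c < sqrt(2)
rng = np.random.default_rng(7)
c, Tmax = 1.0, 8192
u_prev = rng.uniform(0.0, 1.0, n)            # u(0) = u(-1), random in [0, 1]
u = u_prev.copy()
hist = np.empty((Tmax, n))
for k in range(Tmax):
    u, u_prev = 2 * u - u_prev - c**2 * (L @ u), u
    hist[k] = u

F = np.fft.rfft(hist, axis=0)                # one local FFT per node
# resonance for eigenvalue lam sits at omega = arccos(1 - c^2 * lam / 2)
k2 = int(round(np.arccos(1 - c**2 * lam2 / 2) * Tmax / (2 * np.pi)))
# phase-invariant comparison with node 0 recovers the Fiedler sign pattern
signs = np.sign((F[k2] * np.conj(F[k2, 0])).real)
print(signs)                                 # expect {0,1,2} vs {3,4,5}
```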
\section{Scalable Uncertainty Quantification Approach}\label{scaleUQ} \begin{figure}[hbt] \begin{center} \includegraphics[scale=0.5]{schematic}\\ \caption{Schematic of intrusive (left) and non-intrusive (right) PWR.}\label{schematicPWR} \end{center} \end{figure} In this section we discuss how gPC and PCM can be integrated with the WR scheme, extending it to a probabilistic setting. As mentioned earlier, we refer to this iterative UQ approach as PWR. Figure \ref{schematicPWR} shows a schematic of the PWR framework. In intrusive PWR, the subsystems obtained from decomposing the original system are used to impose a decomposition on the system obtained by Galerkin projection based on the gPC expansion. Further, the weak interactions are used to discard terms which are expected to be insignificant in the gPC expansion, leading to what we call an Approximate Galerkin Projected (AGP) system. We then propose to apply standard or adaptive WR on the decomposed AGP system to accelerate the UQ computation. In the non-intrusive form of PWR, rather than deriving the AGP system, one works directly with the subsystems obtained from decomposing the original system. At each waveform relaxation iteration we propose to apply PCM at the subsystem level, and use gPC to propagate the uncertainty among the subsystems. Note that unlike intrusive PWR (where deterministic decoupling vectors, or deterministic waveforms, are exchanged), in non-intrusive PWR, stochastic decoupling vectors, or probabilistic waveforms represented in the gPC basis, are exchanged between subsystems at each iteration. We first describe the key technical ideas behind intrusive and non-intrusive PWR through an illustration on a simple example in section \ref{sec:mainidea}. These notions are fully generalized later in sections \ref{sec:GPdecomp}-\ref{waveprob}.
We also prove the convergence of the PWR approach (in section \ref{sec:convPWR}), and in section \ref{sec:gainPWR} discuss the computational gain it offers over the standard application of gPC and PCM. \subsection{Main Ideas of PWR}\label{sec:mainidea} We illustrate the proposed PWR framework through an example of parametric uncertainty in a simple system of the form (\ref{complexsys}). Consider the following coupled oscillator system: \begin{eqnarray} \dot{x}_1&=&f_1(x_1,x_2,\omega_1,t)=\omega_1+K_{12} \sin(x_1-x_2),\notag\\ \dot{x}_2&=&f_2(x_1,x_2,x_3,\omega_2,t)\notag\\ &=& \omega_2+K_{21}\sin(x_1-x_2)+K_{23}\sin(x_3-x_2),\label{simpsys}\\ \dot{x}_3&=&f_3(x_2,x_3,\omega_3,t)=\omega_3+K_{32} \sin(x_2-x_3).\notag \end{eqnarray} Here, $\omega_i$ is the uncertain angular frequency of the $i^{th}$ ($i=1,2,3$) oscillator. We assume that the parameters $\omega_i$ are mutually independent, each having probability density $\rho_i=\rho_i(\omega_i)$. The coupling matrix \begin{equation}\label{Kdef} K=\left( \begin{array}{ccc} 0 & K_{12} & 0 \\ K_{21} & 0 & K_{23} \\ 0 & K_{32} & 0 \\ \end{array} \right) \end{equation} is assumed deterministic with $K_{ij}=\mathcal{O}(\epsilon),\epsilon \ll 1$, so that the three oscillators in (\ref{simpsys}) interact weakly with each other, i.e. subsystem $2$ weakly affects subsystems $1$ and $3$, and vice versa. \subsubsection{Approximate Galerkin projection for the simple example}\label{sec:agpsimpeg} In standard gPC, the states $x_i$ are expanded in a polynomial chaos basis as \begin{equation}\label{expx1} x_i^{\Sigma,P}(t,\mathbf{\omega}) = \sum_{j=1}^{N_{\Sigma}} a_{ji}(t)\Psi_j^{\Sigma,P}(\mathbf{\omega}),\quad i=1,2,3 \end{equation} where $\Psi_j^{\Sigma,P}\in W^{\Sigma,P}$, the polynomial chaos space formed over $\Sigma=\{\omega_1,\omega_2,\omega_3\}$, and $P=(P_1,P_2,P_3)$ determines the expansion order (see section \ref{gPC} for details).
Note that in this expansion (\ref{expx1}), the system states are expanded in terms of all the random variables $\mathbf{\omega}$ affecting the entire system. From the structure of the system (\ref{simpsys}) it is clear that the $1^{st}$ subsystem is directly affected by the parameter $\omega_1$ and indirectly by the parameter $\omega_2$ through the state $x_2$; we neglect the second order effect of $\omega_3$ on $x_1$. A similar statement holds true for subsystem $3$, while subsystem $2$ will be weakly influenced by $\omega_1$ and $\omega_3$ through the states $x_1$ and $x_3$, respectively. This structure can be used to simplify the Galerkin projection as follows. For $x_1$ we consider the gPC expansion over $\Sigma_1=\Lambda_1\bigcup\Lambda_1^c$, \begin{equation} x_1^{\Sigma_1,P_1}(t,\omega_1,\omega_2) = \sum_{j=1}^{N_{\Sigma_1}} \overline{a}_{j1}(t)\Psi_j^{\Sigma_1,P_1}(\omega_1,\omega_2), \end{equation} where \begin{equation} \Lambda_1=\{\omega_1\},\qquad \Lambda_1^c=\{\omega_2\} \end{equation} and $P_1=(P_{11},P_{12})$. Note that since $\omega_2$ only weakly affects $x_1$, the order of expansion $P_{12}$ can be chosen to be smaller than $P_{11}$. Similarly, one can consider the simplification $x_3^{\Sigma_3,P_3}(t,\omega_2,\omega_3)$. For $x_2$, following similar steps, we define \begin{equation} \Lambda_2=\{\omega_2\},\qquad \Lambda_2^c=\{\omega_1,\omega_3\} \end{equation} and $P_2=(P_{21},P_{22},P_{23})$. By a similar argument, one will choose $P_{21},P_{23}$ much smaller than $P_{22}$. We also introduce the following two projections associated with the state $x_2$: \begin{equation}\label{proj11} \mathcal{P}^{2,i}(x_2^{\Sigma_2,P_2}) = \sum_{j=1}^{N_{\Sigma_i}} \left\langle x_2^{\Sigma_2,P_2},\Psi_{j}^{\Sigma_i,P_i} \right\rangle \Psi_{j}^{\Sigma_i,P_i}, \end{equation} where $i=1,3$ and $\langle \cdot,\cdot \rangle$ is the appropriate inner product on $W^{\Sigma,P}$ (see section \ref{approxPWR} for details).
With these expansions, and using the standard Galerkin projection, we obtain the following system of deterministic equations \begin{equation}\label{agp1} \dot{\overline{\mathbf{a}}}=\overline{\mathbf{F}}(\overline{\mathbf{a}},t), \end{equation} with appropriate initial conditions, where \begin{eqnarray}\label{Fbar1} \overline{F}_{j1}(\overline{\mathbf{a}}) &=& \int_{\Gamma_{\Sigma}}f_1(x_1^{\Sigma_1,P_1},\mathcal{P}^{2,1} (x_2^{\Sigma_2,P_2}),\omega_1,t)\Psi_j^{\Sigma_1,P_1}(\mathbf{\omega}) \mathbf{\rho}(\mathbf{\omega})d\mathbf{\omega},\notag \\ \overline{F}_{j2}(\overline{\mathbf{a}})&=&\int_{\Gamma_{\Sigma}}f_2(x_1^{\Sigma_1,P_1},x_2^{\Sigma_2,P_2},x_3^{\Sigma_3,P_3},\omega_2,t)\Psi_j^{\Sigma_2,P_2}(\mathbf{\omega}) \mathbf{\rho}(\mathbf{\omega})d\mathbf{\omega},\notag\\ \overline{F}_{j3}(\overline{\mathbf{a}})&=&\int_{\Gamma_{\Sigma}}f_3(\mathcal{P}^{2,3}(x_2^{\Sigma_2,P_2}),x_3^{\Sigma_3,P_3},\omega_3,t)\Psi_j^{\Sigma_3,P_3}(\mathbf{\omega}) \mathbf{\rho}(\mathbf{\omega})d\mathbf{\omega},\notag \end{eqnarray} and $\overline{\mathbf{a}} = (\overline{\mathbf{a}}_1, \overline{\mathbf{a}}_2, \overline{\mathbf{a}}_3)^T$, with $\overline{\mathbf{a}}_i = (\overline{a}_{1i},\cdots,\overline{a}_{N_{\Sigma_{i}}i})$, and $\overline{\mathbf{F}} = (\overline{\mathbf{F}}_1, \overline{\mathbf{F}}_2, \overline{\mathbf{F}}_3)^T$ with $\overline{\mathbf{F}}_i=(\overline{F}_{1i},\cdots,\overline{F}_{N_{\Sigma_{i}}i})$. We will refer to (\ref{agp1}) as the approximate Galerkin projected (AGP) system. The notion of AGP in a more general setting is described in section \ref{approxPWR}.
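To see why the AGP pays off, one can simply count unknowns. Under assumed expansion orders (hypothetical: order $5$ in each subsystem's own parameter and first order in the weakly coupled ones, with total degree at most $5$), the full Galerkin system for (\ref{simpsys}) carries $3N_\Sigma$ modal ODEs, while the three AGP subsystems together carry far fewer:

```python
from itertools import product

def dim_aniso(caps, total):
    # number of multi-indices d with d_i <= caps_i and sum(d) <= total
    return sum(1 for d in product(*(range(c + 1) for c in caps))
               if sum(d) <= total)

full = 3 * dim_aniso((5, 5, 5), 5)      # 3 states, isotropic order-5 chaos in 3 vars
agp = dim_aniso((5, 1), 5) \
    + dim_aniso((1, 5, 1), 5) \
    + dim_aniso((1, 5), 5)              # per-subsystem anisotropic bases
print(full, agp)
```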
\subsubsection{Intrusive PWR illustrated on the simple example}\label{sec:intpwrsimpeg} In intrusive PWR, after performing the AGP explicitly, the system (\ref{agp1}) is decomposed as \begin{equation}\label{apsdecomp1} \dot{\overline{a}}_{ji}=\overline{F}_{ji}(\overline{\mathbf{a}}_i,\overline{\mathbf{d}}_i,t), \end{equation} where $\overline{\mathbf{d}}_1 = \mathcal{P}^{2,1}(\overline{\mathbf{a}}_2)$, $\overline{\mathbf{d}}_2 = (\overline{\mathbf{a}}_1, \overline{\mathbf{a}}_3)$ and $\overline{\mathbf{d}}_3=\mathcal{P}^{2,3}(\overline{\mathbf{a}}_2)$ are the decoupling vectors (here we overload the notation $\mathcal{P}^{2,i}(\overline{\mathbf{a}}_2)$ to mean the coefficients in the expansion (\ref{proj11})). Note that the decomposition of the system (\ref{agp1}) is based on the decomposition of the original system (\ref{simpsys}). Adaptive or standard WR, described in section \ref{wave}, can then be applied to solve the decomposed system (\ref{apsdecomp1}) iteratively. Since the system (\ref{apsdecomp1}) is deterministic, deterministic waveforms, or deterministic decoupling vectors $\overline{\mathbf{d}}_i,i=1,2,3$, are exchanged in each WR iteration (see Figure \ref{schematicPWR} for an illustration). \subsubsection{Non-Intrusive PWR illustrated on the simple example}\label{sec:nonintpwrsimpeg} In the non-intrusive form of PWR, rather than deriving the AGP, one works directly with the subsystems obtained from decomposing the original system. The main idea here is to apply PCM at the subsystem level at each PWR iteration, use gPC to represent the probabilistic waveforms, and iterate among the subsystems using these waveforms.
Recall that in the standard PCM approach (\ref{PCM}), the coefficients $\overline{a}_{mi}(t)$ are obtained by calculating the integral \begin{eqnarray} \overline{a}_{mi}(t)&=&\int x_{i}^{\Lambda_i,P_i}(t,\mathbf{\omega})\Psi_m^{\Lambda_i,P_i}(\mathbf{\omega})\mathbf{\rho}_{\Lambda_i}(\mathbf{\omega})d\mathbf{\omega}.\label{z4approx1} \end{eqnarray} The integral above is typically calculated by using a quadrature formula and repeatedly solving the $i^{th}$ subsystem over an appropriate collocation grid $\mathcal{C}^i(\Sigma_i)=\mathcal{C}^i(\Lambda_i)\times\mathcal{C}^i(\Lambda_i^c)$, where $\mathcal{C}^i(\Lambda_i)$ is the collocation grid corresponding to the parameters $\Lambda_i$ (with $l_s$ grid points for each random parameter in $\Lambda_i$), and $\mathcal{C}^i(\Lambda_i^c)$ is the collocation grid corresponding to the parameters $\Lambda^c_i$ (with $l_c$ grid points for each random parameter in $\Lambda^c_i$). Since the behavior of the $i^{th}$ subsystem is only weakly affected by the parameters $\Lambda^c_i$, we can take a sparser grid in the $\Lambda^c_i$ dimensions, i.e. $l_c<l_s$, just as we took a lower order expansion for these random variables in section \ref{sec:agpsimpeg}. Below we outline the key steps in non-intrusive PWR: \begin{itemize} \item Step 1 (Initialization of the relaxation process with no coupling effect): Set $I=1$, guess an initial waveform $x_i^0(t)$ consistent with the initial condition. Set $\mathbf{d}_{1}^1=x^0_{2},\quad \mathbf{d}_{2}^1=(x^0_{1},x^0_3),\quad \mathbf{d}_{3}^1=x^0_2,$ and solve \begin{equation} \dot{x}_i^1=f_i(x_i^1,\mathbf{d}_{i}^1(t),\omega_i,t), \end{equation} with the initial condition $x^1_i(0)=x_i^0(0)$ on a collocation grid $\mathcal{C}^i(\Lambda_i)$. Determine the gPC expansion $x_{i}^{\Lambda_i,P_i,1}(t,\cdot)$ by computing the expansion coefficients from the quadrature formula (\ref{z4approx1}).
\item Step 2 (Initialization of the relaxation process, incorporating the first level of coupling effect): Set $I =2$ and let $\mathbf{d}_{1}^2=x_{2}^{\Lambda_2,P_2,1},\quad \mathbf{d}_{2}^2=(x_{1}^{\Lambda_1,P_1,1},x_{3}^{\Lambda_3,P_3,1}),\quad \mathbf{d}_{3}^2=x_{2}^{\Lambda_2,P_2,1}$ be the \emph{stochastic decoupling vectors}. Solve \begin{equation} \dot{x}_i^2=f_i(x_i^2,\mathbf{d}_{i}^2(t,\cdot),\omega_i,t), \end{equation} over a collocation grid $\mathcal{C}^i(\Sigma_i)$ to obtain $x_i^{\Sigma_i,P_i,2}(t,\cdot)$. From now on we shall denote the solution of the $i^{th}$ subsystem at the $I^{th}$ iteration by $x_{i}^{\Sigma_i,P_i,I}$. \item Step 3 (Analyzing the decomposed system at the $I$-th iteration): Set the decoupling vectors $\mathbf{d}_{1}^I=\mathcal{P}^{2,1}(x_{2}^{\Sigma_2,P_2,I-1}),\quad \mathbf{d}_{2}^I=(x_{1}^{\Sigma_1,P_1,I-1},x_{3}^{\Sigma_3,P_3,I-1})$, $\mathbf{d}_{3}^I=\mathcal{P}^{2,3}(x_{2}^{\Sigma_2,P_2,I-1})$ and solve \begin{equation} \dot{x}_i^I=f_i(x_i^I,\mathbf{d}_{i}^I(t,\cdot),\omega_i,t), \end{equation} over a collocation grid $\mathcal{C}^i(\Sigma_i)$ to obtain the expansion $x_i^{\Sigma_i,P_i,I}(t,\cdot)$. \item Step 4 (Iteration): Set $I = I + 1$ and go to Step 3 until satisfactory convergence has been achieved. \end{itemize} Note that in the above non-intrusive PWR, the decoupling vectors are stochastic, so at each iteration \emph{probabilistic waveforms} are exchanged between subsystems (see Figure \ref{schematicPWR} for an illustration). In the forthcoming sections we generalize the intrusive and non-intrusive PWR introduced above. \subsection{Decomposition of Galerkin Projected System}\label{sec:GPdecomp} We begin by revisiting the complete Galerkin system (\ref{galproj}). To apply WR, recall that the first step is the assignment-partitioning (see section \ref{wave}). There are two possible approaches for partitioning the complete Galerkin system.
One can first split the original dynamical system (\ref{complexsys}), and then use this decomposition to partition the complete Galerkin projection (\ref{galproj}) by assigning the modal coefficients in (\ref{expx}) for each state to the cluster to which that state is assigned when decomposing system (\ref{complexsys}). As previously explained in section \ref{specclus}, the partitioning is performed by representing the dynamical system (\ref{complexsys}) as a graph with the symmetrized time averaged Jacobian (\ref{similiarity}) as the weighted adjacency matrix. One can then apply the wave equation based decentralized clustering algorithm outlined in section \ref{specclus}. Alternatively, one can perform this decomposition directly on the complete Galerkin projection (\ref{galproj}). Let the symmetrized time averaged Jacobian for the system (\ref{galproj}) be \begin{equation}\label{similiarity_gpc} \tilde{w}_{ij}=\frac{1}{2}[|\tilde{J}_{ij}|+|\tilde{J}_{ji}|], \end{equation} where $\tilde{J}=[\frac{1}{T_s}\int_{t_0}^{t_0+T_s}J^{'}_{ij}(a(t),t)dt]$ is the time average of the Jacobian \begin{equation}\label{Jac_gpc} J^{'}(\mathbf{a},t)=\left[\frac{\partial F_{ik}(\mathbf{a}(t))}{\partial a_{jk}}\right], \end{equation} computed along the solution $\mathbf{a}(t)$ of the system (\ref{galproj}). This gives \begin{equation}\label{Jac_int} J^{'}(\mathbf{a},t)= \int_{\Gamma_{\Sigma}}\frac{\partial f_k(\mathbf{x}^{\Sigma,P}(\mathbf{\xi},t),\mathbf{\xi}_k,t)}{\partial a_{jk}}\Psi_i^{\Sigma,P}(\mathbf{\xi})\mathbf{w}_{\Sigma}(\mathbf{\xi})d\mathbf{\xi}. \end{equation} Taylor expanding $f_k(\mathbf{x}(\mathbf{\xi},t),\mathbf{\xi}_k,t)$ locally gives \begin{equation}\label{Jac_approx} J^{'}(\mathbf{a},t)\approx J(\mathbf{a},t). \end{equation} Thus, one expects clustering performed on the original system (\ref{complexsys}) to yield results similar to those obtained from the complete Galerkin system (\ref{galproj}).
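The graph-based partitioning step can be sketched as follows. The weight matrix below is a hypothetical stand-in for the symmetrized time-averaged Jacobian, and we use the standard spectral (Fiedler vector) bipartition rather than the decentralized wave equation variant referenced above:

```python
import numpy as np

# Hypothetical symmetrized time-averaged Jacobian weights: two
# 3-state clusters with strong internal and weak cross coupling.
W = np.array([
    [0.0, 1.0, 1.0, 0.0, 0.0, 0.1],
    [1.0, 0.0, 1.0, 0.0, 0.0, 0.0],
    [1.0, 1.0, 0.0, 0.1, 0.0, 0.0],
    [0.0, 0.0, 0.1, 0.0, 1.0, 1.0],
    [0.0, 0.0, 0.0, 1.0, 0.0, 1.0],
    [0.1, 0.0, 0.0, 1.0, 1.0, 0.0],
])

# Graph Laplacian L = D - W; the sign pattern of the eigenvector of
# the second-smallest eigenvalue (the Fiedler vector) yields a
# two-way partition into weakly interacting subsystems.
L = np.diag(W.sum(axis=1)) - W
eigvals, eigvecs = np.linalg.eigh(L)  # eigenvalues in ascending order
fiedler = eigvecs[:, 1]
clusters = (fiedler > 0).astype(int)
print(clusters)  # states 0-2 land in one subsystem, 3-5 in the other
```

For more than two subsystems one would recurse, or use the first spectral gap in the Laplacian spectrum to choose the number of clusters, as done for the multi-zone example later in the paper.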
Since the dimensionality of system (\ref{complexsys}) is much lower than that of system (\ref{galproj}), the first decomposition is computationally less challenging than the second. In this work, we will use the original system to determine the decomposition and use that to impose the partition of the Galerkin projection. Given the decomposition of system (\ref{galproj}), one can use WR or its adaptive form to simulate the system in a parallel fashion. However, before doing this one can further exploit the weak interaction between subsystems to reduce the dimensionality of the complete Galerkin system, as described in section \ref{sec:agpsimpeg}. We next describe this approximate Galerkin projection in a more general setting. \subsection{Approximate Galerkin Projection and Intrusive Probabilistic Waveform Relaxation}\label{approxPWR} Recall that in the gPC expansion (\ref{expx}), all the system states are expanded in terms of the random variables affecting the entire system. However, the $i$-th subsystem in the decomposition (see Eq. (\ref{decomp})) is directly affected by the parameters $\Lambda_i$ (see definition (\ref{lamdef})) and indirectly by other parameters through the decoupling vector (see definition (\ref{ddef})). We shall denote by \begin{equation}\label{complement} \Lambda^c_i=\bigcup_{j\in\mathcal{N}_i}\Lambda_{j}, \end{equation} the set of parameters which indirectly affect the $i$-th subsystem through the immediate neighbor interactions, and by \begin{equation}\label{sigmadef} \Sigma_i=\Lambda_i\cup\Lambda_i^c, \end{equation} the set of parameters that directly and indirectly (through nearest neighbor interaction) affect the $i$-th subsystem. Under the hypothesis that the $i$-th subsystem is dynamically weakly coupled with its nearest neighbors, uncertainty in the parameters $\Lambda^c_i$ will only weakly influence the states of the $i$-th subsystem through the decoupling vector, while the uncertainty in the parameters $\Sigma\setminus \Sigma_i$ can be neglected.
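For a concrete instance of these definitions, the sets $\Lambda_i^c$ and $\Sigma_i$ follow mechanically from the parameter assignment and the neighbor lists; the three-subsystem chain below mirrors the simple example, with hypothetical parameter names:

```python
# Hypothetical 3-subsystem chain 1 -- 2 -- 3, as in the simple
# example: each subsystem i owns its local parameter set Lambda_i.
Lambda = {1: {"w1"}, 2: {"w2"}, 3: {"w3"}}
neighbors = {1: {2}, 2: {1, 3}, 3: {2}}

# Lambda_i^c = union of Lambda_j over the neighbors j of i;
# Sigma_i = Lambda_i union Lambda_i^c.
Lambda_c = {i: set().union(*(Lambda[j] for j in neighbors[i]))
            for i in Lambda}
Sigma = {i: Lambda[i] | Lambda_c[i] for i in Lambda}

print(Sigma[1])  # {'w1', 'w2'}: subsystem 1 never sees w3
```

Note that $\Sigma_1$ excludes the parameter of subsystem 3, which is exactly why the truncated expansion over $\Sigma_i$ is so much smaller than the full expansion over $\Sigma$.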
To capture this effect, consider a $P$-variate space (analogous to the $P$-variate space introduced in section \ref{gPC}) \begin{equation}\label{polyspacegensub} W^{\Lambda,P}\equiv \bigotimes_{|\mathbf{d}|\in \mathbb{P}}W^{k_i,d_{k_i}}, \end{equation} formed over any random parameter subset $\Lambda=\{\xi_{i_1},\xi_{i_2},\cdots,\xi_{i_n}\}\subset \Sigma$. We shall denote the basis elements of $W^{\Lambda,P}$ by $\Psi_{i}^{\Lambda,P},i=1,\cdots,N_{\Lambda}$, where $N_{\Lambda}=\mbox{dim}(W^{\Lambda,P})$. Note that for any $\Lambda_1\subset\Lambda_2\subset \Sigma$, \begin{equation}\label{sub} W^{\emptyset}\subset W^{\Lambda_1,P_1}\subset W^{\Lambda_2,P_2}\subset W^{\Sigma,P}, \end{equation} where $W^{\emptyset}=\{0\}$ is the $P$-variate space corresponding to the empty set. Also, recall that $P_1,P_2$ are vectors which control the expansion order in the gPC expansion. The inner product on $W^{\Sigma,P}$ induces an inner product on $W^{\Lambda,P}$ as follows \begin{equation}\label{inducedip} <X_1(\mathbf{\xi}),X_2(\mathbf{\xi})>_{\Lambda}=\int_{\Gamma_{\Lambda}}X_1(\mathbf{\xi}) X_2(\mathbf{\xi}) \mathbf{\rho}_{\Lambda}(\mathbf{\xi})d\mathbf{\xi}, \end{equation} for any $X_1(\mathbf{\xi}),X_2(\mathbf{\xi})\in W^{\Lambda,P}$. Using this inner product, we introduce a projection operator \begin{equation}\label{Projdef} Pr_{\Lambda_1,P_1}^{\Lambda_2,P_2}:W^{\Lambda_2,P_2}\rightarrow W^{\Lambda_1,P_1}, \end{equation} such that for any $X(\mathbf{\xi})\in W^{\Lambda_2,P_2}$ \begin{equation}\label{proj} Pr_{\Lambda_1,P_1}^{\Lambda_2,P_2}(X)(\mathbf{\xi})=\sum_{i=1}^{N_{\Lambda_1}}<X,\Psi_{i}^{\Lambda_1,P_1}>_{\Lambda_1}\Psi_{i}^{\Lambda_1,P_1}(\mathbf{\xi}).
\end{equation} With respect to the given decomposition $\mathcal{D}$ imposed on the system (see section \ref{wave}), we define a projection operator $\mathcal{P}^{i,j}$ indexed by the subsystem $i$ and the state $x_j$: \begin{equation} \mathcal{P}^{i,j}\equiv \begin{cases} Pr_{\Sigma_i,P_i}^{\Sigma,P}, & \mbox{if} \quad \mathcal{D}(j)=i,\\ Pr_{\Lambda_{\mathcal{D}(j)}\bigcup(\Lambda_{\mathcal{D}(j)}^c\bigcap \Lambda_{i}),P_i}^{\Sigma,P}, & \mbox{if} \quad \mathcal{D}(j)\neq i,\ \mathcal{N}_{i}\bigcap \mathcal{N}_{\mathcal{D}(j)}\neq\emptyset, \\ Pr_{\emptyset}^{\Sigma,P}, & \mbox{if} \quad \mathcal{D}(j)\neq i,\ \mathcal{N}_{i}\bigcap \mathcal{N}_{\mathcal{D}(j)}=\emptyset. \end{cases}\notag \end{equation} \begin{remark}\label{adaptiveexp} For any subsystem $i$, since the parameters $\Lambda_i^c$ affect it only weakly, we can adaptively select the components of the vector $P_i=(P_{i1},\cdots,P_{i,|\Sigma_i|})$ so that a lower order expansion is used in the components corresponding to the random variables in $\Lambda_i^c$. \end{remark} For any subsystem $i$ and a vector $\mathbf{x}^{\Sigma,P}(\xi,t)=(x_{k_1}^{\Sigma,P}(\xi,t),\cdots,x_{k_n}^{\Sigma,P}(\xi,t))$ (where $x_{i}^{\Sigma,P}(\xi,t)$ is the standard gPC expansion (\ref{expx})), we associate a vector valued projection operator as follows \begin{equation}\label{vecP} \mathcal{P}^{i}(\mathbf{x}^{\Sigma,P})=(\mathcal{P}^{i,k_1}(x_{k_1}^{\Sigma,P}),\cdots,\mathcal{P}^{i,k_n}(x_{k_n}^{\Sigma,P})). \end{equation} In terms of these operators, for any state $x_k$, an approximate Galerkin projected equation is defined as \begin{equation}\label{kth} \frac{d}{dt}\mathcal{P}^{i,k}[x_k^{\Sigma,P}(\xi,t)]=f_k(\mathcal{P}^{i}(\mathbf{x}^{\Sigma,P}(\mathbf{\xi},t)),\mathbf{\xi}_k,t), \end{equation} where $i=\mathcal{D}(k)$ is the index of the subsystem to which the state $k$ belongs.
More precisely, the above system can be expressed as \begin{equation}\label{galproj1} \dot{\overline{a}}^i_{jk}=\overline{F}_{jk}^i(\overline{\mathbf{a}},t), \end{equation} where $\overline{\mathbf{a}}=(\overline{a}^{\mathcal{D}(1)}_{11},\cdots,\overline{a}^{\mathcal{D}(1)}_{N_{\Sigma_{\mathcal{D}(1)}},1},\cdots,\overline{a}^{\mathcal{D}(n)}_{1n},\cdots,\overline{a}^{\mathcal{D}(n)}_{N_{\Sigma_{\mathcal{D}(n)}},n})^T$, \begin{equation}\label{Fbar} \overline{F}_{jk}^i=\int_{\Gamma_{\Sigma_i}}f_k(\mathcal{P}^{i}(\mathbf{x}^{\Sigma,P}(\mathbf{\xi},t)),\mathbf{\xi}_k,t)\Psi_j^{\Sigma_i,P_i}(\mathbf{\xi}) \mathbf{\rho}_{\Sigma_i}(\mathbf{\xi})d\mathbf{\xi}, \end{equation} and $j=1,\cdots,N_{\Sigma_i}$, $k=1,\cdots,n$ with $i=\mathcal{D}(k)$. Let \begin{equation} \overline{\mathbf{F}}=(\overline{F}^{\mathcal{D}(1)}_{11},\cdots,\overline{F}^{\mathcal{D}(1)}_{N_{\Sigma_{\mathcal{D}(1)}},1},\cdots,\overline{F}^{\mathcal{D}(n)}_{1n},\cdots,\overline{F}^{\mathcal{D}(n)}_{N_{\Sigma_{\mathcal{D}(n)}},n})^T,\nonumber \end{equation} then the system (\ref{galproj1}) can be written compactly as \begin{equation}\label{aps} \dot{\overline{\mathbf{a}}}=\overline{\mathbf{F}}(\overline{\mathbf{a}},t), \end{equation} with appropriate initial conditions (see expression (\ref{galprojinit})), and will be referred to as the approximate Galerkin projected (AGP) system. Using this generalization of the AGP system, it is straightforward to generalize the intrusive PWR introduced in section \ref{sec:intpwrsimpeg}. \subsubsection{Intrusive PWR Algorithm}\label{compPWR} In intrusive PWR, one applies WR to the AGP system.
As per the discussion in section \ref{sec:GPdecomp}, the decomposition $\mathcal{D}$ of the original system (\ref{complexsys}) is used to impose a decomposition on the system (\ref{aps}), leading to \begin{equation}\label{apsdecomp} \dot{\overline{\mathbf{a}}}_i=\overline{\mathbf{F}}_i(\overline{\mathbf{a}}_i,\overline{\mathbf{d}}_i,t), \end{equation} for $i=1,\cdots,m$, where \begin{equation} \overline{\mathbf{a}}_i=(\overline{a}^{i}_{1k_1},\cdots,\overline{a}^{i}_{N_{\Sigma_{i}},k_1},\cdots,\overline{a}^{i}_{1k_{|\mathcal{C}_i|}},\cdots,\overline{a}^{i}_{N_{\Sigma_{i}},k_{|\mathcal{C}_i|}})^T, \end{equation} \begin{equation} \overline{\mathbf{F}}_i=(\overline{F}^{i}_{1k_1},\cdots,\overline{F}^{i}_{N_{\Sigma_{i}},k_1},\cdots,\overline{F}^{i}_{1k_{|\mathcal{C}_i|}},\cdots,\overline{F}^{i}_{N_{\Sigma_{i}},k_{|\mathcal{C}_i|}})^T, \end{equation} $k_j\in\mathcal{C}_i$, and $\overline{\mathbf{d}}_i=(\overline{\mathbf{a}}_{j_1}^T,\cdots,\overline{\mathbf{a}}_{j_{N_i}}^T)^T$ is the decoupling vector (recall the notation from section \ref{wave}). One then follows the procedure for waveform relaxation or its adaptive version, as described in section \ref{wave}. Adaptive WR can lead to a significant improvement in the convergence of WR, as demonstrated in \cite{AWR}, and will be illustrated in section \ref{examples}. As discussed in section \ref{gPC}, the projection step (\ref{kth}) can be very tedious and in some cases not possible. Hence, applying waveform relaxation directly to the system (\ref{apsdecomp}) may not be practical in many instances. In the next section, we describe an alternative non-intrusive approach using probabilistic collocation, which does not require performing the projection step (\ref{kth}) explicitly.
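The waveform relaxation loop that intrusive PWR applies to the decomposed coefficient system can be sketched on a toy deterministic pair of scalar "subsystems"; the coupling constant, time window, and forward Euler subsystem solver below are all hypothetical choices made for illustration:

```python
import numpy as np

# Toy decomposed system: da1/dt = -a1 + eps*d1, da2/dt = -a2 + eps*d2,
# with decoupling vectors d1 = a2 and d2 = a1 (Gauss-Jacobi exchange).
eps, T, dt = 0.3, 2.0, 1e-3
t = np.arange(0.0, T + dt, dt)

def solve_subsystem(a0, d):
    # Forward Euler solve of da/dt = -a + eps*d(t), with the waveform
    # d frozen at its values from the previous WR iteration.
    a = np.empty_like(t)
    a[0] = a0
    for k in range(len(t) - 1):
        a[k + 1] = a[k] + dt * (-a[k] + eps * d[k])
    return a

# Initial waveform guess: constant at the initial conditions.
a1 = np.full_like(t, 1.0)
a2 = np.full_like(t, 2.0)
for it in range(25):
    a1_new = solve_subsystem(1.0, a2)   # both solves are independent,
    a2_new = solve_subsystem(2.0, a1)   # hence trivially parallelizable
    delta = max(np.max(np.abs(a1_new - a1)), np.max(np.abs(a2_new - a2)))
    a1, a2 = a1_new, a2_new
    if delta < 1e-10:
        break

print(it, delta)  # converges in a handful of iterations for weak coupling
```

The contraction gets slower as `eps` grows, which is the deterministic analogue of the coupling-strength effect observed in the examples section.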
\subsection{Non-Intrusive Probabilistic Waveform Relaxation}\label{waveprob} In terms of the projection operator (\ref{vecP}), we can rewrite each subsystem in (\ref{decomp}) as \begin{eqnarray}\label{decompparam} \dot{\mathbf{y}}_i &=&\mathbf{F}_i(\mathbf{y}_i,\mathcal{P}^i(\mathbf{d}_{i}(t,\cdot)),\Lambda_i,t), \end{eqnarray} where $\mathbf{d}_{i}(t,\cdot)$ is the \emph{stochastic decoupling vector} or \emph{probabilistic waveform}, \begin{equation}\label{decoupstoc} \mathbf{d}_{i}(t,\cdot)=((\mathbf{y}_{j_1}^{\Sigma_{j_1},P_{j_1}})^T,\cdots,(\mathbf{y}_{j_{N_i}}^{\Sigma_{j_{N_i}},P_{j_{N_i}}})^T)^T, \end{equation} where we have explicitly indicated the dependence on the parameters (see definition (\ref{ddef})). Here, for any $i=1,\cdots,m$, $\mathbf{y}^{\Sigma_i,P_i}=(x_{j^i_1}^{\Sigma_i,P_i},\cdots,x_{j^i_{M_i}}^{\Sigma_i,P_i})^T$, with \begin{equation}\label{expnapprox} x^{\Sigma_{i},P_i}_{j^i_k}(\mathbf{\xi},t)=\sum_{m=1}^{N_{\Sigma_i}} \overline{a}_{mj^i_k}(t) \Psi_m^{\Sigma_i,P_i}(\mathbf{\xi})=\mathcal{P}^{i,j^i_k}(x_{j^i_k}^{\Sigma,P}). \end{equation} By definition, the coefficients $\overline{a}_{mj^i_k}(t)$ in the above expansion satisfy the system (\ref{galproj1}). These coefficients can be obtained from the quadrature formula (\ref{z4approx}), by repeatedly solving the system (\ref{decompparam}) over an appropriate collocation grid \begin{equation}\label{fullgirdgen} \mathcal{C}(\mathbf{l},n_i+n_i^c)=\mathcal{C}(\mathbf{o},n_i)\times\mathcal{C}(\mathbf{m},n_i^c), \end{equation} where $\mathbf{l}=(\mathbf{o},\mathbf{m})$, $\mathcal{C}(\mathbf{o},n_i)=C_{o_1}^1\times\cdots \times C_{o_{n_i}}^1$ is the collocation grid corresponding to the parameters $\Lambda_i$, with $n_i=|\Lambda_i|$ and $\mathbf{o}=(o_1,\cdots,o_{n_i})$, and $\mathcal{C}(\mathbf{m},n_i^c)=C_{m_1}^1\times\cdots \times C_{m_{n_i^c}}^1$ is the collocation grid corresponding to the parameters $\Lambda_i^c$, with $n_i^c=|\Lambda_i^c|$ and $\mathbf{m}=(m_1,\cdots,m_{n_i^c})$.
For simplicity we take $o_{1}=\cdots=o_{n_i}=l_s$ and $m_{1}=\cdots=m_{n_i^c}=l_c$ for $i=1,\cdots,m$. Since the behavior of the $i$-th subsystem is only weakly affected by the parameters $\Lambda^c_i$ through the decoupling vector, consistent with remark \ref{adaptiveexp} we can take \begin{equation}\label{maxcond} l_c<l_s, \end{equation} leading to an adaptive collocation grid for each subsystem. With this, we are ready to generalize the non-intrusive PWR approach introduced in section \ref{sec:nonintpwrsimpeg}. \subsubsection{Non-Intrusive PWR Algorithm}\label{PWR} \begin{itemize} \item Step 1: Apply graph decomposition (see section \ref{nd} for details) to identify weakly interacting subsystems in the system (\ref{complexsys}). \item Step 2 (Assignment-partitioning process): Partition (\ref{complexsys}) into the $m$ subsystems (obtained in Step 1), leading to the system of equations given by (\ref{decomp}). Obtain $\Lambda_i$, $\Lambda_i^c$ and $\Sigma_i$ for each subsystem, $i=1,\cdots,m$. Choose the parameters $l_{si},l_{ci},P_i$. \item Step 3 (Initialization of the relaxation process with no coupling effect): Set $I=1$ and guess an initial waveform $\{\mathbf{y}_i^0(t): t\in [0,T_s]\}$ for each $i=1,\cdots,m$ consistent with the initial condition (see Step 1 of the relaxation process described in section \ref{wave}). Set \begin{equation}\label{ydk0} \mathbf{d}_{i}^1(t)=(\mathbf{y}^0_{j_1}(t),\cdots,\mathbf{y}^0_{j_{N_i}}(t)), \end{equation} $i=1,\cdots,m$, and solve for $\{\mathbf{y}_i^{1}(t), t \in [0,T_s]\}$ using \begin{equation} \dot{\mathbf{y}}_i^1=\mathbf{F}^i(\mathbf{y}_i^1,\mathbf{d}_{i}^1(t),\Lambda_i,t), \end{equation} with the initial condition $\mathbf{y}^1_i(0)=\mathbf{y}_i^0(0)$ on a collocation grid $\mathcal{C}(\mathbf{o},n_i)$. Determine the gPC expansion $\mathbf{y}_{i}^{\Lambda_i,P_i,1}(t,\cdot)$ over the $P$-variate polynomial space $W^{\Lambda_i,P_i}$ by computing the expansion coefficients from the quadrature formula (\ref{z4approx}).
\item Step 4 (Initialization of the relaxation process, incorporating the first level of coupling effect): Set $I =2$ and for each $i=1,\cdots,m$, set \begin{equation}\label{ydk1} \mathbf{d}_{i}^2(t,\cdot)=(\mathbf{y}_{j_1}^{\Lambda_{j_1}, P_{j_1},1}(t,\cdot),\cdots,\mathbf{y}_{j_{N_i}}^{\Lambda_{j_{N_i}},P_{j_{N_i}},1}(t,\cdot)), \end{equation} and solve for $\{\mathbf{y}_i^{2}(t), t \in [0,T_s]\}$ from \begin{equation} \dot{\mathbf{y}}_i^2=\mathbf{F}^i(\mathbf{y}_i^2,\mathbf{d}_{i}^2(t,\cdot),\Lambda_i,t), \end{equation} with the initial condition $\mathbf{y}^2_i(0)=\mathbf{y}^0_i(0)$, over a collocation grid $\mathcal{C}(\mathbf{l},n_i+n_i^c)$. Obtain the expansion $\mathbf{y}_i^{\Sigma_i,P_i,2}(t,\cdot)$ using (\ref{expnapprox}). From now on we shall denote the solution vector of the $i$-th subsystem at the $I$-th iteration by $\mathbf{y}_{i}^{\Sigma_i,P_i,I}$. \item Step 5 (Analyzing the decomposed system at the $I$-th iteration): For each $i=1,\cdots,m$, set \begin{equation}\label{ydk2} \mathbf{d}_{i}^I=(\mathbf{y}_{j_1}^{\Sigma_{j_1},P_{j_1},(I-1)},\cdots,\mathbf{y}_{j_{N_i}}^{\Sigma_{j_{N_i}},P_{j_{N_i}},(I-1)}), \end{equation} and solve for $\{\mathbf{y}_i^I(t) : t \in [0,T_s]\}$ from \begin{equation} \dot{\mathbf{y}}_i^I=\mathbf{F}^i(\mathbf{y}_i^I,\mathcal{P}^{i}(\mathbf{d}_{i}^I(t,\cdot)),\Lambda_i,t), \end{equation} with the initial condition $\mathbf{y}^I_i(0)=\mathbf{y}^0_i(0)$, over a collocation grid $\mathcal{C}(\mathbf{l},n_i+n_i^c)$. Obtain the expansion $\mathbf{y}_i^{\Sigma_i,P_i,I}(t,\cdot)$ using the expansion (\ref{expnapprox}). \item Step 6 (Iteration): Set $I = I + 1$ and go to Step 5 until satisfactory convergence has been achieved. \end{itemize} \subsection{Convergence of PWR} \label{sec:convPWR} Below we prove that the iterative PWR approach converges. The proof is based on showing that the AGP system is Lipschitz whenever the original system is Lipschitz (see condition (\ref{LipF})), and then invoking the standard WR convergence result (\ref{WRConv}).
\begin{proposition}[Convergence of PWR] The intrusive and non-intrusive PWR algorithms described in sections \ref{compPWR} and \ref{PWR}, respectively, converge. \end{proposition} \emph{Proof:} We prove the result for intrusive PWR. Since the non-intrusive PWR algorithm solves the same AGP system (\ref{aps}), only by different means, the convergence result holds for non-intrusive PWR as well. Consider the AGP system (\ref{aps}) and let \begin{equation} \mathbf{a}_1=(\overline{a}^{1\mathcal{D}(1)}_{11},\cdots,\overline{a}^{1\mathcal{D}(1)}_{N_{\Sigma_{\mathcal{D}(1)}},1},\cdots,\overline{a}^{1\mathcal{D}(n)}_{1n},\cdots,\overline{a}^{1\mathcal{D}(n)}_{N_{\Sigma_{\mathcal{D}(n)}},n})^T, \end{equation} and \begin{equation} \mathbf{a}_2=(\overline{a}^{2\mathcal{D}(1)}_{11},\cdots,\overline{a}^{2\mathcal{D}(1)}_{N_{\Sigma_{\mathcal{D}(1)}},1},\cdots,\overline{a}^{2\mathcal{D}(n)}_{1n},\cdots,\overline{a}^{2\mathcal{D}(n)}_{N_{\Sigma_{\mathcal{D}(n)}},n})^T. \end{equation} For a given $k=1,\cdots,n$, let $i=\mathcal{D}(k)$; then \begin{equation} \mathcal{P}^{i,k}(x_k^{l,\Sigma,P}(t,\mathbf{\xi}))=\sum_{j=1}^{N_{\Sigma_i}} \overline{a}_{jk}^{li}(t)\Psi_j^{\Sigma_i,P}(\mathbf{\xi}), \end{equation} and $\mathcal{P}^{i}(\mathbf{x}^{l,\Sigma,P})=(\mathcal{P}^{i,1}(x_1^{l,\Sigma,P}),\cdots,\mathcal{P}^{i,n}(x_n^{l,\Sigma,P}))$, for $l=1,2$, where to simplify notation we have dropped the subscripts on the $P$ vectors.
Then for each $k=1,\cdots,n$, $i=\mathcal{D}(k)$, $j=1,\cdots,N_{\Sigma_{i}}$, \begin{eqnarray} &&|\overline{F}^i_{jk}(\mathbf{a}_2)-\overline{F}^i_{jk}(\mathbf{a}_1)|\notag\\ &=&|\int_{\Gamma_{\Sigma}}(f_k(\mathcal{P}^{i}(\mathbf{x}^{\Sigma,P,2}),\xi,t)-f_k(\mathcal{P}^{i}(\mathbf{x}^{\Sigma,P,1}),\xi,t))\notag\\ &&\times \Psi_j^{\Sigma_i,P}(\mathbf{\xi})w_{\Sigma}(\mathbf{\xi})d\mathbf{\xi}|\notag\\ &\leq&\int_{\Gamma_{\Sigma}}L(\xi)(\sum_{m=1}^n\sum_{p=1}^{N_{\Sigma_{\mathcal{D}(m)}}}|(a_{pm}^{1,\mathcal{D}(m)}-a_{pm}^{2,\mathcal{D}(m)})\Psi_p^{\Sigma_{\mathcal{D}(m)},P}|)\notag\\ &&\times|\Psi_j^{\Sigma_i,P}(\mathbf{\xi})|w_{\Sigma}(\mathbf{\xi})d\mathbf{\xi}\notag\\ &\leq&\sum_{m=1}^n\sum_{p=1}^{N_{\Sigma_{\mathcal{D}(m)}}}L_{pj}^{i\mathcal{D}(m)}|a_{pm}^{1,\mathcal{D}(m)}-a_{pm}^{2,\mathcal{D}(m)}|, \end{eqnarray} where \begin{eqnarray} L_{pj}^{iq}&=& \int_{\Gamma_{\Sigma}}L(\xi)|\Psi_p^{\Sigma_{q},P}(\xi)||\Psi_j^{\Sigma_i,P}(\xi)|w_{\Sigma}(\mathbf{\xi})d\mathbf{\xi}. \end{eqnarray} For a given $i=1,\cdots,n$ and $j=1,\cdots,N_{\Sigma_{\mathcal{D}(i)}}$, let \begin{equation} L^g_{ij}=[L_{j1}^{\mathcal{D}(i)\mathcal{D}(1)} \cdots L_{jN_{\Sigma_{\mathcal{D}(1)}}}^{\mathcal{D}(i)\mathcal{D}(1)} \cdots L_{j1}^{\mathcal{D}(i)\mathcal{D}(n)} \cdots L_{jN_{\Sigma_{\mathcal{D}(n)}}}^{\mathcal{D}(i)\mathcal{D}(n)}],\nonumber \end{equation} and define \begin{equation} \quad L^g_i=\left[ \begin{array}{c} L_{i1}^g \\ \vdots \\ L_{iN_{\Sigma_{\mathcal{D}(i)}}}^g\\ \end{array} \right], L^g=\left[\begin{array}{c} L^g_1\\ \vdots \\ L^g_n \end{array}\right], \end{equation} then \begin{eqnarray}\label{LipH} ||\overline{\mathbf{F}}(\mathbf{a}_2)-\overline{\mathbf{F}}(\mathbf{a}_1)||&\leq&\overline{L}||\mathbf{a}_2-\mathbf{a}_1||, \end{eqnarray} where $\overline{L}=||L^g||$ is a matrix norm of $L^g$. Hence, the system (\ref{aps}) is Lipschitz.
Thus, given that the original system (\ref{complexsys}) is Lipschitz (condition (\ref{LipF})), the approximate system (\ref{aps}) is also Lipschitz, as shown above (\ref{LipH}). Hence, by proposition \ref{WRConv} (an adaptation of Theorem 5.3 in \cite{waveform1}), we conclude that PWR converges. The question of how the decomposition of the system and the choice of the PWR algorithm parameters $P,l_s,l_c$ influence: 1) the rate of convergence of PWR, and 2) the approximation error (due to the truncation introduced in the AGP system (\ref{aps}), the use of the adaptive collocation grid, i.e. condition (\ref{maxcond}), and the computation of the modal coefficients by the quadrature formula), needs to be investigated further. \subsection{Scalability of PWR}\label{sec:gainPWR} The scalability of non-intrusive PWR relative to full grid PCM is shown in Figure \ref{fig2}, where the ratio $\mathcal{R}_F/\mathcal{R}_I$ indicates the computational gain over the standard full grid approach applied to the system (\ref{complexsys}) as a whole. Here $\mathcal{R}_F=l^{p}$ is the number of deterministic runs of the complete system (\ref{complexsys}), which comprises $m$ subsystems with $p_i,i=1,\cdots,m$ uncertain parameters each, such that $p=\sum_{i=1}^mp_i$, and $l$ denotes the level of the full grid. Similarly, $\mathcal{R}_I=1+\sum_{i=1}^ml_s^{p_i}+I_{\max}\left(\sum_{i=1}^m l_s^{p_i}\prod_{j\neq i}l_c^{p_j}\right)$ is the total computational effort of the PWR algorithm, where $I_{\max}$ is the number of PWR iterations. Clearly, the advantage of PWR becomes evident as the number of subsystems $m$ and the number of parameters in the network increase. Moreover, PWR is inherently parallelizable. \begin{figure}[hbt] \begin{center} \includegraphics[scale=0.3]{scalability}\\ \caption{Scalability of the PWR algorithm, when implemented with full grid collocation as the subsystem level UQ method, with $p_i=5,\forall i=1,\cdots,m$, $l=l_s=5$, $l_c=3$ and $I_{\max}=10,50,100$.
The computational gain of PWR becomes insensitive to $I_{\max}$ as the number of subsystems $m$ increases. }\label{fig2} \end{center} \end{figure} \section{Example Problems}\label{examples} In this section we illustrate intrusive and non-intrusive PWR through several examples of linear and nonlinear networked systems with an increasing number of uncertain parameters. While most examples involve ODEs, we also give an example application of PWR to an algebraic system. This illustrates how in principle one can extend the application of PWR to DAEs, just as the WR approach extends to DAEs \cite{wave}. Through some of the examples we study how the strength of interaction between subsystems affects the convergence rate and the approximation error of PWR. In one of the examples, related to a building model, we also show how time-varying uncertainty can be incorporated into the standard UQ framework by using the Karhunen-Loeve expansion. In all the examples, we compare the solution accuracy of PWR with other UQ approaches (e.g. Monte Carlo and quasi Monte Carlo methods), and wherever appropriate mention the computational gain offered by PWR over the standard application of gPC and PCM. \subsection{Stability Problem} We first consider a simple system with two states $(x_1,x_2)\in \mathbb{R}^2$, \begin{eqnarray}\label{simp} \dot{x}_1&=&ax_1^2+cx_2^2-v_1, \\ \dot{x}_2&=&cx_1^2+bx_2^2-v_2, \end{eqnarray} where $c,v_1,v_2$ are fixed parameters, and $a,b$ are uncertain with Gaussian distribution $\mathcal{G}$ and tolerance $20\%$ (i.e. $\sigma=0.2\mu$, where $\mu$ is the mean and $\sigma$ is the standard deviation of $\mathcal{G}$). The parameter $c$ determines the coupling strength between the two subsystems described by the two equations.
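Because (\ref{simp}) is linear in $(x_1^2,x_2^2)$, its equilibrium and the associated maximum Jacobian eigenvalue can be checked numerically in a few lines. The nominal values below ($a=b=1$, $c=0.1$, $v_1=v_2=1$) are hypothetical, chosen only for illustration, since the text does not list them:

```python
import numpy as np

# Hypothetical nominal parameters (the paper fixes c, v1, v2 but
# does not state their values here).
a, b, c = 1.0, 1.0, 0.1
v1, v2 = 1.0, 1.0

# The equilibrium conditions are linear in (x1^2, x2^2):
#   [a c; c b] [x1^2; x2^2] = [v1; v2]
sq = np.linalg.solve(np.array([[a, c], [c, b]]), np.array([v1, v2]))
x10, x20 = np.sqrt(sq)  # take the positive equilibrium branch

# Jacobian of the right-hand side at the equilibrium and its
# maximum eigenvalue, the stability indicator lambda_m.
J = np.array([[2 * a * x10, 2 * c * x20],
              [2 * c * x10, 2 * b * x20]])
lam_max = np.max(np.linalg.eigvals(J).real)
print(x10, x20, lam_max)
```

In the PWR treatment, a solve of this kind is repeated over the collocation grid in $(a,b)$, with each equation of the algebraic system assigned to its own subsystem.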
The output of interest is the stability of the system, which is determined by $\lambda_{m}$, the maximum eigenvalue of the Jacobian $ J(x_{10},x_{20};a,b,c)=\left( \begin{array}{cc} 2ax_{10} & 2cx_{20} \\ 2cx_{10} & 2bx_{20} \\ \end{array} \right)$, where $(x_{10},x_{20})$ is the equilibrium point satisfying \begin{eqnarray} ax_{10}^2+cx_{20}^2-v_1&=&0, \notag \\ cx_{10}^2+bx_{20}^2-v_2&=&0.\label{simpeq} \end{eqnarray} Figure \ref{case4c} shows the UQ results obtained by applying PWR (with $l_s=5$, $l_c=3$ and $P_1=(5,3),P_2=(3,5)$) to iteratively solve the algebraic system (\ref{simpeq}). We compare with the \emph{true} distribution of $\lambda_{m}$, meaning the more accurate result obtained by solving the complete system (\ref{simpeq}) on a full collocation grid over the parameter space $(a,b)$ with $l_a=5,l_b=5,P=(5,5)$. PWR converges to the true mean and variance, as shown in the left and right panels of Figure \ref{case4c} for two different values of $c$. As the coupling strength $c$ increases (see the right panel in Figure \ref{case4c}), the number of iterations required for convergence increases, as expected. \begin{figure}[hbt] \begin{center} \includegraphics[width=5.7cm]{algebriac_c01} \includegraphics[scale=0.33]{algebriac_c28} \caption{Left panel: Convergence of the mean ($\mu$) and variance ($\sigma$) of $\lambda_{m}$ for $c=0.1$. Right panel: Convergence of the mean and variance of $\lambda_{m}$ for $c=2.8$.
The black line indicates the \emph{true} values.}\label{case4c} \end{center} \end{figure} \subsection{Building Example} For energy consumption computation, a building can be represented in terms of a reduced order thermal network model of the form \cite{zheng2009}, \begin{equation}\label{Buildmodel} \frac{d \mathbf{T}}{dt}=A(\mathbf{u}(t);\xi)\mathbf{T}+B\left( \begin{array}{c} \mathbf{Q}_{e}(t) \\ Q_{i}(t) \\ \end{array} \right), \end{equation} where $\mathbf{T}\in \mathbb{R}^n$ is a vector comprising internal zone air temperatures, and internal and external wall temperatures; $A(\mathbf{u}(t);\xi)$ is a time-dependent matrix with parameters $\xi$; $\mathbf{u}(t)$ is the control input vector (comprising zone supply flow rate and supply temperature); the vector $\mathbf{Q}_e=(T_{amb}(t),Q_{s}(t))^T$ represents the external load disturbances (outside air temperature and solar radiation); and $Q_i$ is the internal (occupant) load disturbance. We consider the problem of computing the uncertainty in building energy consumption due to uncertainty in building thermal properties and uncertain disturbance loads. These uncertainties can be categorized into: (i) static parametric uncertainty, which includes parameters such as wall thermal conductivity and thermal capacitance, heat transfer coefficient, window thermal resistance, etc.; and (ii) time-varying uncertainties, which include the external and internal load disturbances. Recall that the traditional UQ approaches, and PWR which builds on them, can only deal with parametric uncertainty. To account for time-varying uncertain processes, we employ the Karhunen-Loeve (KL) expansion \cite{KL}. The KL expansion represents a second order stochastic process as a series of deterministic functions weighted by uncorrelated random variables. In this manner, both parametric and time-varying uncertainties can be treated in terms of random variables. We next demonstrate both the intrusive and non-intrusive PWR methods.
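A discrete KL truncation can be sketched as follows: eigendecompose the covariance matrix of the process on a time grid and keep the leading modes. The squared-exponential kernel and its parameters below are hypothetical placeholders for the external-load covariance kernel used later:

```python
import numpy as np

# Time grid and a hypothetical squared-exponential covariance
# kernel K(t, s) = sigma^2 * exp(-(t - s)^2 / Tc^2).
t = np.linspace(0.0, 1.0, 200)
sigma, Tc = 0.1, 0.1
K = sigma**2 * np.exp(-np.subtract.outer(t, t)**2 / Tc**2)

# KL modes: eigenpairs of the covariance matrix (the discrete
# analogue of the KL integral eigenproblem); sort descending.
eigvals, eigvecs = np.linalg.eigh(K)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]

# Truncate: keep enough modes to capture 99% of the variance, so the
# process is represented by a small number of random variables.
r = int(np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), 0.99)) + 1
print(r)  # a handful of modes suffice for a smooth kernel

# A sample path: X(t) = sum_k sqrt(lam_k) * phi_k(t) * Z_k, Z_k ~ N(0,1)
Z = np.random.default_rng(0).standard_normal(r)
X = eigvecs[:, :r] @ (np.sqrt(np.maximum(eigvals[:r], 0.0)) * Z)
```

The $r$ retained random variables $Z_k$ then enter the UQ machinery on the same footing as the static uncertain parameters.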
\subsubsection{Two Zone Example} \begin{figure}[htb] \begin{center} \includegraphics[scale=0.4]{Two_Zone_Small} \end{center} \caption{Diagram of the two zone thermal model of a building. $T_{amb}(t)=293$K in this example.} \label{Fig:two_zone} \end{figure} We first consider a simplified two zone building model, as shown in Fig.~\ref{Fig:two_zone}. Here the state $\mathbf{T}$ is a $10$ dimensional vector comprising the internal wall temperatures and the internal zone air temperatures, where we have assumed that the outer wall surfaces are held at the ambient temperature. We also assume that the ambient temperature and solar load are deterministic fixed quantities and that there is no internal occupant load. Thus, in computing the uncertainty in the zone temperatures, we only consider parametric uncertainty. Specifically, we assume that the heat transfer coefficient and the thermal conductivity of the walls in each zone have standard deviations of $10\%$ around their nominal values of $3.16\, W/m^{2}/K$ and $4.65\, W/m/K$, respectively. Thus, locally each zone is affected by two uncertain parameters, with the heat transfer coefficient being a common (i.e. shared) parameter and the thermal conductivity being the other. Using the complete Galerkin projection with $P_i=(2,2,2),i=1,2$, gives rise to a $60$ state ODE model. To apply WR/AWR to this system, we first identify the weakly interacting states. By construction the two zones weakly affect each other, which is identified by spectral clustering~\cite{Tutorial} (or wave equation based clustering~\cite{ref:wave}) applied to the system (\ref{Buildmodel}). This decomposition is imposed on the complete Galerkin system, as explained in section \ref{compPWR}. As expected, we found that if one applies spectral clustering to the complete Galerkin system instead, one recovers the same decomposition.
\begin{figure}[htb] \begin{center} \subfigure[]{\includegraphics[scale=0.35]{CompletevsTruncationgpc1}} \subfigure[]{\includegraphics[scale=0.5]{CompletevsTruncationgpcDiff}} \end{center} \caption{(a) Comparison of Monte Carlo, complete Galerkin projection and approximate Galerkin projection. (b) Normalized error in waveform relaxation as a function of iteration count with increasing coupling. Complete Galerkin and approximate Galerkin are shown. The approximate Galerkin system is found to have greater error as a function of iteration number.} \label{Fig:Completevstrunc} \end{figure} Treating $1000$ Monte Carlo samples as the truth, we compare the results of simulating the full Galerkin projected system using both standard waveform relaxation~\cite{waveform1} and adaptive waveform relaxation~\cite{AWR} in Fig.~\ref{Fig:Completevstrunc}a). AWR provides a speed-up by a factor of $\approx 12$. In Fig.~\ref{Fig:Completevstrunc}a), one can visually see that the complete Galerkin projection predicts the same temperature variation over $8$ hours as Monte Carlo based methods. As explained before, one can further exploit the weak interaction between the two zones to reduce the overall number of equations in the Galerkin projection. To construct the AGP system, we reduce the order of expansion for the random parameters indirectly affecting each zone so that $P_1=(2,2,1)$ and $P_2=(1,2,2)$. With this the number of equations in the Galerkin projection reduces from $60$ to $50$. The resulting solution is shown in Fig.~\ref{Fig:Completevstrunc}a). We see that the error starts to grow as time increases. However, over $8$ hours the maximum error in the room temperatures is $5\times10^{-2}K$. Thus, despite reducing the computational effort, one can still get a fairly accurate answer. 
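The waveform relaxation iteration used above can be illustrated on a toy problem. The following sketch (ours, not the building model; the coupling $\epsilon$ and all parameter values are illustrative) applies Gauss-Jacobi WR to two weakly coupled scalar subsystems and checks convergence against the exact solution:

```python
import numpy as np

# Toy Gauss-Jacobi waveform relaxation (WR) on two weakly coupled
# scalar subsystems:  x1' = -x1 + eps*x2,  x2' = -x2 + eps*x1.
# Each WR sweep integrates one subsystem over the whole time window,
# treating the other subsystem's waveform from the previous iterate
# as a known forcing term.
eps, T, dt = 0.1, 5.0, 1e-3
t = np.arange(0.0, T + dt, dt)
x0 = np.array([1.0, 2.0])

def integrate(forcing, xinit):
    """Forward-Euler solve of x' = -x + forcing(t) on the grid."""
    x = np.empty_like(t)
    x[0] = xinit
    for k in range(len(t) - 1):
        x[k + 1] = x[k] + dt * (-x[k] + forcing[k])
    return x

# Initial waveform guess: hold the initial condition constant in time.
w1 = np.full_like(t, x0[0])
w2 = np.full_like(t, x0[1])
for _ in range(8):  # WR sweeps
    w1, w2 = integrate(eps * w2, x0[0]), integrate(eps * w1, x0[1])

# Exact solution via the decoupled sum/difference coordinates.
s = (x0[0] + x0[1]) * np.exp((-1 + eps) * t)
d = (x0[0] - x0[1]) * np.exp((-1 - eps) * t)
x1_exact = 0.5 * (s + d)

err = np.max(np.abs(w1 - x1_exact))
print(f"max error after 8 WR sweeps: {err:.2e}")
```

The weaker the coupling $\epsilon$, the fewer sweeps are needed, mirroring the dependence of the iteration count on coupling strength seen in Fig.~\ref{Fig:Completevstrunc}b).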
Figure~\ref{Fig:Completevstrunc}b) shows the effect of coupling (which is the reciprocal of the coefficient of thermal conductivity of the internal wall) on the errors introduced in the complete and approximate Galerkin projections. As expected, the approximate Galerkin projection has higher error (given by $E_{T}(t)$) than the complete Galerkin projection (given by $E_{C}(t)$). Moreover, this error is more pronounced at low iteration numbers. From the figure, it is also clear that as the coupling increases, the number of iterations required for obtaining the same solution accuracy increases. For further discussion on the relationship between the coupling and the number of iterations, see~\cite{AWR}. \subsubsection{Multi Zone Example} In this section, we consider a larger $6$ zone building thermal network model with $68$ states. This model admits a decomposition into $23$ subsystems, as revealed by the spectral graph approach (see figure \ref{Fig:spectraldecomposition}b). This decomposition is consistent with the three different time scales (associated with external and internal wall temperatures, and internal zone temperatures) present in the system, as shown by the three bands in figure \ref{Fig:spectraldecomposition}a). \begin{figure}[htb] \begin{center} \subfigure[]{\includegraphics[scale=0.4]{eigvalues_sys}} \subfigure[]{\includegraphics[scale=0.4]{eigvalues_lap}} \end{center} \caption{(a) The three bands of eigenvalues of the time averaged $A(t;\xi)$ for nominal parameter values. (b) First spectral gap in the graph Laplacian revealing $23$ subsystems in the network model.} \label{Fig:spectraldecomposition} \end{figure} \begin{figure}[hbt] \begin{center} \subfigure[]{\includegraphics[scale=0.4]{externalLoad_RST}} \subfigure[]{\includegraphics[scale=0.32]{rst}} \end{center} \caption{a) Covariance kernel (\ref{covgauss1}) for the external load with $T_c=0.1$ and $\sigma=0.1$. 
b) Covariance kernel (\ref{covgauss2}) for the internal load with $t_1=t_2=0.3$, $a=20$, $\sigma=0.1$.}\label{covplots} \end{figure} \begin{figure}[htb] \begin{center} \includegraphics[scale=0.5]{building_UQ} \end{center} \caption{Histogram of the building energy computation for two iterations in PWR. Also shown is the corresponding histogram obtained by QMC for comparison.} \label{Fig:buildPWR} \end{figure} Next we demonstrate the non-intrusive PWR approach to compute uncertainty in energy consumption due to both parametric uncertainty and time varying uncertain loads. As described earlier, we use the KL expansion to transform time varying uncertainty into parametric form. \paragraph{KL Expansion \cite{KL}:} Let $\{X_t=X(\xi,t),t\in[a,b]\}$ be a second-order stochastic process, continuous in quadratic mean, with covariance function $R(t,s)$. If $\{\phi_n(t)\}$ are the eigenfunctions of the integral operator with kernel $R(\cdot,\cdot)$ and $\{\lambda_n\}$ the corresponding eigenvalues, i.e. \begin{equation}\label{kernel} \int_{a}^bR(t,s)\phi_n(s)ds=\lambda_n\phi_n(t),\qquad t\in[a,b] \end{equation} then, \begin{equation}\label{KL} X(\xi,t)=\overline{X}(t)+\lim_{N\rightarrow\infty}\sum_{n=1}^N\sqrt{\lambda_n}a_n(\xi)\phi_n(t),\qquad \mbox{uniformly for } t\in[a,b] \end{equation} where, $\overline{X}(t)$ is the mean of the process and the limit is taken in the quadratic mean sense. The random coefficients $\{a_n\}$ satisfy \begin{equation}\label{coeff} a_n(\xi)=\frac{1}{\sqrt{\lambda_n}}\int_{a}^b(X(\xi,t)-\overline{X}(t))\phi_n(t)dt \end{equation} and are uncorrelated: $E[a_ma_n]=\delta_{mn}$. The basis functions also satisfy the orthogonality property \begin{equation}\label{ucor} \int_{a}^b\phi_m(t)\phi_n(t)dt=\delta_{mn}, \end{equation} and the kernel admits an expansion of the form \begin{equation}\label{KLkernle} R(s,t)=\lim_{N\rightarrow\infty}\sum_{n=1}^N\lambda_n\phi_n(t)\phi_n(s). 
\end{equation} Generally, an analytical solution to the eigenvalue problem (\ref{kernel}), also known as a Fredholm equation of the second kind, is not available. Several numerical techniques have been proposed; we used the expansion method described in \cite{expnKL}. For applying the KL expansion to the building problem, we assume that the stochastic disturbances $(T_{amb}(t),Q_{s}(t),Q_{int}(t))$ are Gaussian processes. This guarantees that the random variables $a_n$ in the KL expansion are independent Gaussian random variables with zero mean \cite{expnKL}. Let the joint distribution of a nonstationary Gaussian process be \begin{equation}\label{jtgauss} f(X_t,X_s)=\frac{1}{2\pi\sigma(s)\sigma(t)\sqrt{1-\rho^2(t,s)}}e^{-\frac{1}{2(1-\rho^2(t,s))}\left(\frac{x_t^2}{\sigma^2(t)}+\frac{x_s^2}{\sigma^2(s)}-\frac{2\rho(t,s)x_sx_t}{\sigma(t)\sigma(s)}\right)} \end{equation} where, $\rho(t,s)$ is the correlation coefficient and is related to the covariance kernel as \begin{equation}\label{covgauss} R(t,s)=\rho(t,s)\sigma(t)\sigma(s). \end{equation} We assumed the processes $T_{amb}(t),Q_{s}(t)$ to have a stationary exponential correlation function \begin{equation}\label{covgauss1} R(t,s)=\sigma^2e^{-\frac{|t-s|}{T_c}}, \end{equation} with a constant variance $\sigma^2$ and a constant correlation time scale $T_c$. For the internal occupancy load $Q_{int}(t)$ we constructed $R(t,s)$ as follows. For a typical office building, we know that the occupancy load is negligible, with low variance, during the early and later parts of the day. On the other hand, during peak hours in the middle of the day the occupant load can show significantly high variability. 
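As a concrete illustration of solving (\ref{kernel}) numerically, the sketch below applies a simple Nystrom-type quadrature discretization to the exponential kernel (\ref{covgauss1}). This is only a stand-in for the Legendre expansion method of \cite{expnKL} used here; the parameter values are taken from figure \ref{covplots}, and the captured-variance fraction depends on these choices:

```python
import numpy as np

# Nystrom-type discretization of the Fredholm eigenvalue problem
# for R(t,s) = sigma^2 exp(-|t-s|/Tc) on [0,1].  The eigenvalues of
# the (symmetric) weighted kernel matrix approximate the KL
# eigenvalues lambda_n; their sum equals the integrated variance.
sigma, Tc, N = 0.1, 0.1, 400
t = (np.arange(N) + 0.5) / N          # midpoint quadrature nodes
w = 1.0 / N                            # uniform quadrature weights
R = sigma**2 * np.exp(-np.abs(t[:, None] - t[None, :]) / Tc)

lam = np.linalg.eigvalsh(w * R)[::-1]  # eigenvalues, descending

# Fraction of total variance captured by the leading modes.
captured = np.cumsum(lam) / np.sum(lam)
print("leading eigenvalues:", lam[:4])
print("variance captured by 6 modes: %.3f" % captured[5])
```

The decay of the eigenvalues with mode number is what allows the time varying disturbance to be replaced by a small number of random variables in the PWR computation.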
To capture this effect we divided the normalized time domain $[0,1]=[0,t_1]\cup(t_1,t_2)\cup(t_2,1)$ and obtained the desired variation by choosing (in expression \ref{covgauss}) \begin{equation}\label{covgauss2} \sigma(t)=\sigma\left(\tanh(a(t-t_1))-\tanh(a(t-t_2))\right)/2,\quad \rho(t,s)=e^{-\frac{|t-s|}{T_c(s,t)}}, \end{equation} with the correlation time scale \begin{eqnarray}\label{Tc} T_c(s,t)&=&(1-\tanh(a(t-t_1)))(1-\tanh(a(s-t_1)))/4\notag\\ & &+(1+\tanh(a(t-t_2)))(1+\tanh(a(s-t_2)))/4, \end{eqnarray} where the parameter $a$ controls the slope of the $\tanh$ function. Figure \ref{covplots} shows the covariance kernels for the external $(T_{amb}(t),Q_{s}(t))$ and internal $Q_{int}(t)$ loads. For the choice of parameters indicated in figure \ref{covplots}, we found, using the expansion method \cite{expnKL} with Legendre polynomials as the basis functions, that KL expansions up to order $3$ and up to order $6$ can capture more than $90\%$ of the total variance, for the internal and external loads, respectively. In the UQ computation, we considered the effect of $14$ random variables comprising the external wall thermal resistance in the $6$ zones, the first dominant random variable obtained in the KL representation of the internal load (for each zone), and the first two dominant random variables obtained in the KL expansion for the solar load. Figure \ref{Fig:buildPWR} shows the non-intrusive PWR results on the decomposed network model. As is evident, the iterations converge rapidly in two steps with a distribution close to that obtained from QMC (using a 25000-sample Sobol sequence) applied to the $68$-state thermal network model (\ref{Buildmodel}) as a whole. \begin{figure}[hbt] \begin{center} \includegraphics[scale=0.3]{oscillator} \includegraphics[scale=0.3]{lambda80}\\ \caption{Left panel shows a network of $N=80$ phase-only oscillators. 
Right panel shows the spectral gap in the eigenvalues of the normalized graph Laplacian, which reveals that there are $40$ weakly interacting subsystems.}\label{fig1} \end{center} \end{figure} \begin{figure}[hbt] \begin{center} \includegraphics[scale=0.4]{pwr_qmc}\\ \caption{Convergence of the mean of the magnitude $R(t)$ and phase $\phi(t)$, and the respective histograms at $t=0.5$.}\label{fig2} \end{center} \end{figure} \subsection{Coupled Oscillators} Finally, we consider a coupled phase-only oscillator system governed by the nonlinear equations \begin{equation}\label{ocslliator} \dot{x}_i=\omega_i+\sum_{j=1}^N K_{ij} \sin(x_j-x_i),\qquad i=1,\cdots,N, \end{equation} where, $N=80$ is the number of oscillators, $\omega_i,i=1,\cdots,N$ are the angular frequencies of the oscillators and $K=[K_{ij}]$ is the coupling matrix. The frequency $\omega_i$ of every alternate oscillator, i.e. $i=1,3,\cdots,79$, is assumed to be uncertain with a Gaussian distribution with $20\%$ tolerance (i.e. a total of $p=40$ uncertain parameters); all the other parameters are assumed to take a fixed mean value. We are interested in the distribution of the synchronization parameters, the magnitude $R(t)$ and phase $\phi(t)$, defined by $R(t)e^{i\phi(t)}=\frac{1}{N}\sum_{j=1}^N e^{ix_j(t)}$. Figure \ref{fig1} shows the topology of the network of oscillators (left panel), along with the eigenvalue spectrum of the graph Laplacian (right panel). The spectral gap at $40$ implies $40$ weakly interacting subsystems in the network. Figure \ref{fig2} shows UQ results obtained by application of non-intrusive PWR to the decomposed system with $l_s=5$, $l_c=2$. We make a comparison with QMC, in which the complete system (\ref{ocslliator}) is solved at $25,000$ Sobol points \cite{Kuo1}. Remarkably, PWR converges in $4-5$ iterations, giving very similar results to those of QMC. It would be infeasible to use full grid collocation for the network as a whole, since even with the lowest level of collocation grid, i.e. 
$l=2$ for each parameter, the number of samples required becomes $\mathbf{R}_F=2^{40}\approx 1.1\times 10^{12}$. \section{Conclusion and Future Work}\label{conc} In this paper we have proposed an uncertainty quantification approach which exploits the underlying dynamics and structure of the system. Specifically, we considered a class of networked systems whose subsystems are dynamically weakly coupled to each other. We showed how these weak interactions can be exploited to overcome the dimensionality curse associated with traditional UQ methods. By integrating graph decomposition and waveform relaxation with the generalized polynomial chaos and probabilistic collocation frameworks, we proposed an iterative UQ approach which we called \emph{probabilistic waveform relaxation}. We developed both intrusive and non-intrusive forms of PWR. We proved that this iterative scheme converges and illustrated it on several examples with promising results. Several questions need to be further investigated; these include how the choice of parameters associated with the PWR algorithm affects its rate of convergence and the approximation error. In order to exploit multiple time scales that may be present in a system, a multigrid extension \cite{mg} of PWR would be desirable. \section{Acknowledgements} This work was in part supported by DARPA DSO (Dr. Cindy Daniell PM) under AFOSR contract FA9550-07-C-0024 (Dr. Fariba Fahroo PM). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the AFOSR or DARPA. \bibliographystyle{unsrt}
\section{Introduction} \label{sec:Introduction} If two hadrons $A,B$ are linked by $A \to B \pi$, then necessarily hadronic pairs of $AB$ or $A\bar{B}$ have the potential to feel a force from $\pi$ exchange. This force will be attractive in at least some channels. Long ago\cite{torn,ericson} the idea of $\pi$ exchange between flavored mesons, in particular charm, was suggested as a source of potential ``deusons"\cite{torn}. Using the deuteron binding as normalization, the attractive force between the $J^P=0^-$ charmed $D$ and its $J^P=1^-$ counterpart $D^*$ was calculated for the $D\bar D^* + c.c.$ S-wave combination with total $J^{PC}=1^{++}$, and the results compared with the enigmatic charmonium state $X(3872)$\cite{pdg08,torn,classics,fcthomas}. In all these examples, as in the more traditional case of the nucleon, where the $NN\pi$ coupling is the source of an attractive force that helps to form the deuteron, the exchanged $\pi$ was emitted and absorbed in a relative P-wave with respect to the hadrons. In such cases, the binding energies that result are ${\cal O}(1-10)$MeV; this in particular has encouraged interest in the $X(3872)$ which is within errors degenerate with the $D^0D^{*0}$ threshold. It has recently been pointed out\cite{cdprl} that the exchange of a $\pi$ in S-wave, between pairs of hadrons that are themselves in relative S-wave, leads to deeply bound states between those hadrons. Instead of binding of a few MeV, as in the cases considered historically, there is now the potential for binding on the scale of ${\cal O}(100)$MeV, leading to a rich spectroscopy of states that are far from the di-hadron channels that create them. We shall argue that examples of such a spectroscopy appear to be manifested among charmonium-like mesons. We organize this paper as follows. First we summarize the general arguments for expecting large binding energies due to S-wave $\pi$ exchange. 
We shall consider a chiral Lagrangian model to illustrate and quantify the phenomenon of energy shifts of $\mathcal{O}(100)$MeV, focusing specifically on the charmonium-like $1^{--}$ isoscalar ($I=0$) channel. In Section \ref{sec:ChiralModel} we investigate the connection between the chiral potential and the decay width of the relevant charmed mesons, first in the heavy quark limit with point particles and subsequently in the non-heavy quark limit and with form-factors. Then, we solve the Schr\"odinger equation and discuss the uncertainties within the model. Detailed results for the charmonium-like $1^{--}, I=0$ channel are presented in Section \ref{sec:Spectroscopy}, along with results for other $J^{PC}$, isospin, and flavor channels. We discuss the limitations of our approximation to the strong interaction in Section \ref{sec:Uncertaintities}, give phenomenological implications and suggest experimental searches in Section \ref{sec:Phenomenlogy}, and finish with conclusions in Section \ref{sec:Conclusions}. \section{Molecules and S-wave $\pi$ exchange} \label{sec:GeneralArguments} Several groups have studied the following meson pairs looking for bound states in total $J^{PC}$ channels due to pion exchange (from here on $D\bar D^*$ etc. will be taken to include the charge conjugate channel): \begin{eqnarray*} D^*(1^-) \to D(0^-) \pi &\textrm{ leading to the deuson } \bar{D}D^*\; X(3872)\ J^{PC}=1^{++}\\ D^*(1^-) \to D^*(1^-) \pi &\textrm{ leading to the deusons } D^*\bar D^* \; J^{PC}=0^{++},1^{+-},2^{++} \end{eqnarray*} These combinations were discussed in \cite{torn}. In all of these examples parity conservation requires that the $\pi$ is emitted in a P-wave; the hadrons involved at the emission vertices have their constituents in a relative $s$-wave (we use S,P to denote the angular momentum between hadrons, and $s,p$ to denote internal angular momentum of the constituents within a hadron). 
In such cases, the $\pi$ emission being in P-wave causes a penalty for small momentum transfer, $\vec q$, which is manifested by the interaction\cite{ericsonwise} \begin{equation} V_P(\vec{q}) = - \frac{g^2}{f_{\pi}^2} \frac{(\vec{\sigma}_i \cdot \vec{q})(\vec{\sigma}_j \cdot \vec{q})} {|\vec{q}|^2 + \mu^2}(\vec{\tau}_i \cdot \vec{\tau}_j) . \label{pwavepi} \end{equation} \noindent where $\mu^2 \equiv m_{\pi}^2 - (m_B-m_A)^2$, $m_{A,B}$ being the masses of the mesons in the process $A \to B\pi$. (For a discussion of this interaction, and its sign, see Eq.\ (20) in ref.~\cite{fcthomas}.) \noindent The resulting potential is $\propto \vec{q}^2$ for low momentum transfer and has been found to give bindings on the scale of a few MeV, which is in part a reflection of the P-wave penalty. There is no such penalty when $\pi$ emission is in S-wave, which is allowed when the hadrons $A,B$ have opposite parities. Examples involving the lightest charmed mesons are $D_1(1^+) \to D^*(1^-) \pi$ and $D_0(0^+) \to D(0^-) \pi$. P-wave $\pi$ exchange carries a $\vec{q}$ penalty at each $\pi A B$ vertex. One might naively anticipate that the transition from a $D$ or $D^*$, with constituents in $s$-wave, to the $D_1$ or $D_0$, with constituents in $p$-wave, would restore a $\vec q$ penalty, leading to small binding effects as in the cases previously considered. However, as we now argue, this need not be the case, and energy shifts of $\mathcal{O}(100)$MeV can arise. There is a long history of data on $\pi$ transitions between hadrons of opposite parity which indicate that the S-wave coupling is significant when $\vec{q} \to 0$. In the charm sector of interest here, the large widths\cite{pdg08} $\Gamma(D_0 \to D\pi) \sim 260 \pm 50$MeV and $\Gamma(D_1(2430) \to D^*\pi) \sim 385 \pm \mathcal{O}(100)$MeV suggest that, even after phase space is taken into account, there is a significant transition amplitude. 
This non-suppression was specifically commented upon in the classic quark model paper of ref.\cite{fkr}. It arises from a derivative operator acting on the internal hadron wave function, which enables an internal $s$ to $p$ transition to occur without suppression even when $\vec q$ vanishes. This can be seen when $\bar{\psi}\gamma_5\psi$ is expanded to $\vec{\sigma}\cdot(\vec{q}-\omega \vec{p}/m)$, where $\vec{p}$ is the internal quark momentum\cite{divgi}. Feynman, {\it et al.}\cite{fkr} argued for this form on general grounds of Galilean invariance. The presence of $\vec{p}$ gives the required derivative operator, and hence the unsuppressed $p \to s$ transition. An unsuppressed transition, when applied to $\pi$ exchange in the $D_1\overline{D^*}$ system (e.g. \cite{liu}), causes the chiral model analogue of Eq.\eqref{pwavepi} in the $I=0, 1^{--}$ channel to become \begin{equation} V_S(\vec{q}) =\frac{h^2}{2f_\pi^2}\frac{(m_{D_1} - m_{D^*})^2} {|\vec{q}|^2 + \mu^2} (\vec{\tau}_i \cdot \vec{\tau}_j) {\cal F}(\vec q)^2 \label{swavepi} \end{equation} where $h/(\sqrt{2}f_{\pi})$ is the $D_1D^*\pi$ coupling constant (up to a phase), $f_\pi=132$ MeV, $\vec{q}$ is the exchanged three-momentum, $\mu^2 \equiv -(m_{D_1}-m_{D^*})^2 + m_\pi^2$ ($\mu^2<0$ for the $D_1D^*$ system), and $(\vec{\tau}_i \cdot \vec{\tau}_j)$ is the usual contraction of Pauli matrices resulting from the exchange of an isovector by two isospin-half particles. ${\cal F}$ is the model dependent form-factor which regulates the potential and would be unity in the chiral model. In the derivation of the potential a static approximation has been made to the pion propagator. The full propagator is $q^2 - m_\pi^2 = (E_A - E_B)^2 - \vec q^2 -m_\pi^2$. Approximating $E_A = m_A$ and $E_B = m_B$, one recovers the form of the propagator presented in the potential, Eq.~\eqref{swavepi}. 
The potential is similar to the one presented in Table 1 of Ref.\ \cite{liu}, who were investigating a $\pi$ exchange model of the $1^{-}, I=1$ $Z^+$(4430). They considered only the $I=1$ channel and omitted the $\vec{\tau}_i \cdot \vec{\tau}_j$ factor (which is unity for $I=1$). We have made this factor explicit as it will become crucial when we study the $I=0$ sector later. The absence of a $\vec{q}^2$ penalty factor is immediately apparent. The scale is now being set by the mass gap squared, which is equal to the timelike component of the momentum transfer vector squared, $q_0^2\rightarrow(m_{D_1}-m_{D^*})^2$ as $|\vec{q}| \rightarrow 0$. This potential and any bound states have immediate implications for a rather rich set of physics. The potential also applies for the $D_0\overline{D}$ system, and the bottomonium and strangeonium analogs of $D_1\overline{D^*}$ and $D_0\overline{D}$, by exchanging the masses with their appropriate counterparts. Note that the potential in Eq.~\eqref{swavepi} has no spin dependence and therefore any results apply equally to the $D_1 \overline{D^*}$ spins coupled to total spin $0$, $1$ or $2$. For example, if an isoscalar $1^{--}$ bound state is found, then we also expect degenerate $0^{--}$ and $2^{--}$ states. Thus on rather general grounds we may anticipate significant energy shifts, $\sim \mathcal{O}(100)$MeV, due to $\pi$ exchange at least in some channels between such hadrons in a relative S-wave. 
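The removal of the $\vec{q}^2$ penalty can be made quantitative with a back-of-the-envelope comparison of the vertex numerators in Eqs.~\eqref{pwavepi} and \eqref{swavepi}, couplings and propagators set aside (a rough sketch; the sample momenta are ours):

```python
# Compare the momentum dependence of the P-wave numerator (~ q^2,
# eq. pwavepi) with the S-wave numerator, which is fixed by the
# D1 - D* mass gap (eq. swavepi).  Units: MeV; couplings and the
# common propagator are omitted, so only the ratio is meaningful.
m_D1, m_Dstar = 2427.0, 2010.27
delta_m = m_D1 - m_Dstar            # ~417 MeV mass gap

for q in (50.0, 100.0, 200.0):
    ratio = delta_m**2 / q**2
    print(f"|q| = {q:5.0f} MeV: S-wave/P-wave numerator ratio ~ {ratio:6.1f}")
```

At the low momentum transfers relevant for binding, the S-wave numerator exceeds the P-wave one by well over an order of magnitude, which is the origin of the deeper binding discussed above.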
Signals may be anticipated below or near threshold in the following channels (in the charmonium analogues, involving charm and anti-charm mesons for either $I =$ 0 or 1, or in exotic states with manifest charm involving two charm mesons): \begin{eqnarray*} D_0(0^+) \to D(0^-) \pi &\textrm{leading to the deusons }\phantom{1}\, D\bar D_0\; J^{PC}=0^{-\pm}\phantom{12345}\\ D_1(1^+) \to D^*(1^-) \pi &\textrm{leading to the deusons } D^*\bar D_1 \; J^{PC}=(0,1,2)^{-\pm}\\ \end{eqnarray*} We also find that it is possible that $L>0$ states could bind, which would lead to more $J^P$ configurations. Pion exchange depends on the presence of $u,d$ flavors; therefore there will be no such effects in the $D_s\overline{D}_s$ analogues. Further, the potential depends only on the quantum numbers of the light quarks. Therefore, there will be effects in the strange and bottom analogues, which can add to the test of such dynamics at different kinematics. The parameter $h$ in Eq.~\eqref{swavepi} is closely connected to the width of the $D_1\to D^*\pi$ decay. Data exist on this decay which constrain the value of $h$ and hence the spectrum of the model. We discuss the extraction of $h$ from the decay width in the next section. \section{The Coupling Constant $h$ } \label{sec:ChiralModel} Being simply related to the $D_1D^*\pi$ coupling constant, $h$ also appears in the chiral formula for the $D_1 \to D^*\pi$ decay width. Eq.~(137) of Ref.\ \cite{casalbuoni97} gives: \begin{equation} \Gamma(D_1^0 \to D^{*+} \pi^-) = \frac{1}{2\pi}\left(\frac{h}{f_{\pi}}\right)^2 (m_{D_{1}} - m_{D^{*}})^3. \label{d1width} \end{equation} which is valid in the heavy quark limit. 
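Inverting Eq.~\eqref{d1width} for $h$ given a measured width is a one-line computation; the sketch below uses the PDG inputs quoted in this section and the isospin factor $\Gamma(D_1^0 \to D^{*+}\pi^-) = \frac{2}{3}\Gamma(D_1^0 \to D^{*}\pi)$:

```python
import math

# Invert the heavy-quark width formula (eq. d1width) for the coupling h:
#   Gamma(D1 -> D*+ pi-) = (1/2pi) (h/f_pi)^2 (m_D1 - m_D*)^3,
# with Gamma(D1 -> D*+ pi-) = (2/3) Gamma_total by isospin.  Units: MeV.
f_pi = 132.0
m_D1, m_Dstar = 2427.0, 2010.27
gamma_total = 384.0                       # PDG D1(2430) total width

gamma_charged = (2.0 / 3.0) * gamma_total
delta_m = m_D1 - m_Dstar
h = f_pi * math.sqrt(2.0 * math.pi * gamma_charged / delta_m**3)
print(f"h (heavy-quark limit) = {h:.2f}")
```

With the central values this reproduces the $h \approx 0.63$ quoted below; the quoted uncertainties follow from propagating the width errors in the same way.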
\begin{table} \begin{tabular}{lccccc} $A\to B\pi$ & $m_A$/MeV & $m_B$/MeV & $\Gamma$/MeV & BF & $|\vec{q}|$/MeV\\ \hline $D_1(2430)\to D^*(2010)\pi$ & 2427 $\pm$ 40 & 2010.27 $\pm$ .17 & 384 $^{+130}_{-110}$ & N/A & 359\\ $D^*_0(2400)^0\to D\pi$ & 2352 $\pm$ 50 & 1869.62 $\pm$ .20 & 261 $\pm$ 50 & N/A & 414\\ $D^*_0(2400)^\pm \to D\pi$ & 2403 $\pm$ 40 & 1864.84 $\pm$ .17 & 283 $\pm$ 40 & N/A & 461\\ $B_1(5721)\to B^*(5325)\pi$ & 5723.4 $\pm$ 2.0 & 5325.1 $\pm$ .5 & N/A & dominant & 360 \\ $K_1(1400)^\pm \to K^*(892)\pi$ & 1403 $\pm$ 7 & 891.66 $\pm$ .26 & 174 $\pm$ 13 & 94 $\pm$ 6\% & 402\\ $K^*_0(1430)^\pm \to K\pi$ & 1425 $\pm$ 50 & 493.677 $\pm$ .016 & 270 $\pm$ 80 & 93 $\pm$ 10\% & 619\\ \end{tabular} \caption{\label{tab:PDGData} Data on low lying mesons of different flavor sectors with opposite parity which exhibit a large width. Values taken from the Particle Data Group\cite{pdg08}. No width data are available for the bottom sector and no branching fractions are given for the charmed sector.} \end{table} In order to extract $h$ using Eq.~\eqref{d1width}, we use the data from the PDG listed in Table~\ref{tab:PDGData}. In the absence of a branching fraction we assume that the total width is saturated by the $D^*\pi$ channels. We are using chiral formulae for the charged $\pi$ width, which may be related to the total $\pi$ decay width by $\Gamma(D_1^0 \to D^{*+} \pi^-) = \frac{2}{3} \Gamma(D_1^0 \to D^{*} \pi)$ \cite{falkluke}. Therefore, we use $m_{\pi^+} = 140$ MeV and $f_\pi = 132$ MeV throughout. For the $D_1\to D^*\pi$ system we have $h = 0.63^{+.16}_{-.13}$. There are theoretical and empirical reasons to suspect that Eq.~\eqref{d1width} may be a poor estimate for $h$ given $\Gamma$. Firstly, in the heavy quark limit assumed for Eq.~\eqref{d1width}, $m_{D_1} = m_{D_0}$, $m_{D^*} = m_D$, and thus $\Gamma ( D_1 \rightarrow D^*\pi) = \Gamma(D_0\rightarrow D\pi)$, as Eq.~\eqref{d1width} applies equally well to the $D_0\rightarrow D\pi$ decay. 
However, these relations do not hold experimentally. Finite mass effects (including mass differences) have been used to derive a correction to Eq.~\eqref{d1width}\cite{casalbuoni97}: \begin{equation} \Gamma(D_1^0 \to D^{*+} \pi^-) = \frac{h^2}{8\pi f_{\pi}^2} \frac{|\vec{q}| m_{D^*}}{m_{D_1}^3} (m_{D_1}^2-m_{D^*}^2)^2 \times \frac{1}{3}\left( 2+\frac{(m_{D_1}^2+m_{D^*}^2)^2}{4m_{D_1}^2m_{D^*}^2} \right) \label{d1widthfs} \end{equation} Using this expression we have $h = 0.80^{+.20}_{-0.17}$. We have mentioned that our analysis of the $D_1\overline{D^*}$ system applies equally well to the $D_0\overline{D}$ system. Indeed, Eq.~\eqref{d1width} applies to both systems with a trivial substitution of the appropriate masses. However, when finite mass effects are included, the chiral model gives a different formula for the decay widths of the $D_1$ and $D_0$ mesons. The formula analogous to Eq.~\eqref{d1widthfs} is\cite{casalbuoni97} \begin{equation} \Gamma(D_0 \to D^+ \pi^-) = \frac{h^2}{8\pi f_{\pi}^2} \frac{|\vec{q}| m_{D}}{m_{D_0}^3} (m_{D_0}^2-m_{D}^2)^2 . \label{d0widthfs} \end{equation} Due to the larger mass gap (and hence the larger $|\vec{q}|$), Eqs.~\eqref{d1widthfs} and \eqref{d0widthfs} imply that $\Gamma(D_0\rightarrow D\pi) \approx 1.5\Gamma(D_1\rightarrow D^*\pi)$. Empirically\cite{pdg08}, $$\Gamma(D_0 \to D\pi) \sim 260 \pm 50\ \textrm{MeV and } \Gamma(D_1(2430) \to D^*\pi) \sim 385 \pm \mathcal{O}(100)\ \textrm{MeV}.$$ Although the uncertainties are large, $\Gamma(D_0 \to D\pi)$ is smaller than $\Gamma(D_1 \to D^{*} \pi)$ even though the phase space is larger, in contrast with the expectations of Eqs.~\eqref{d1widthfs} and~\eqref{d0widthfs}. In general, processes such as $\Gamma(D_1^0 \to D^{*+} \pi^-)$ involve form factors that summarize the penalty for selecting the exclusive process of single $\pi$ emission, which is increasingly improbable at large $|\vec{q}|$ relative to multi-pion, inclusive, channels. 
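The finite-mass expectations can be checked numerically. In the sketch below (ours) the $D_1$ polarization-sum factor is written as $\frac{1}{3}\bigl(2+(m_{D_1}^2+m_{D^*}^2)^2/4m_{D_1}^2m_{D^*}^2\bigr)$, which reproduces the quoted $h \approx 0.80$ from the measured width; with that assumption the width ratio at a common coupling comes out near the quoted factor of $1.5$:

```python
import math

# Finite-mass chiral widths for D1 -> D* pi and D0 -> D pi evaluated
# with a common coupling h, to check the predicted ratio
# Gamma(D0 -> D pi) / Gamma(D1 -> D* pi).  Units: MeV.
f_pi, m_pi = 132.0, 139.57

def q_cm(mA, mB, m):
    """Decay momentum for A -> B + pi."""
    return math.sqrt((mA**2 - (mB + m)**2) * (mA**2 - (mB - m)**2)) / (2 * mA)

def width_D1(h, mA, mB):
    q = q_cm(mA, mB, m_pi)
    pol = (2 + (mA**2 + mB**2)**2 / (4 * mA**2 * mB**2)) / 3.0  # assumed form
    return h**2 / (8 * math.pi * f_pi**2) * q * mB / mA**3 * (mA**2 - mB**2)**2 * pol

def width_D0(h, mA, mB):
    q = q_cm(mA, mB, m_pi)
    return h**2 / (8 * math.pi * f_pi**2) * q * mB / mA**3 * (mA**2 - mB**2)**2

ratio = width_D0(1.0, 2352.0, 1869.62) / width_D1(1.0, 2427.0, 2010.27)
print(f"Gamma(D0->Dpi)/Gamma(D1->D*pi) at equal h: {ratio:.2f}")

# Coupling required to reproduce the charged D1 width of (2/3)*384 MeV.
h_fit = math.sqrt((2.0 / 3.0) * 384.0 / width_D1(1.0, 2427.0, 2010.27))
print(f"h from finite-mass formula: {h_fit:.2f}")
```

The ratio exceeding unity at fixed $h$, against the empirical ordering of the widths, is the quantitative statement of the tension resolved by the form factors below.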
Thus, the assumption that the $D_1D^*\pi$ ($D_0D\pi$) coupling is constant in the chiral model does not take account of the full dynamics at the vertex. The data suggest that we must include the effects of exclusive form factors. The effects of form factors may be modelled by making the replacement $h \rightarrow h\mathcal{F}(|\vec{q}|)$ everywhere. $\mathcal{F}(|\vec{q}|)$ is a smooth, decreasing function such that $\mathcal{F}(|\vec{q}| = 0)=1$. The exact form of $\mathcal{F}$ is model dependent; however, the introduction of a form factor will in general lead to an increased estimate of $h$ and so, naively, to an increased binding energy. As an explicit example, consider the form factor resulting from a dynamical model of $\pi$ emission\cite{cs}: \begin{equation} \mathcal{F}(x) = \left(1-\frac{2}{9}x^2\right)e^{-x^2 / 12} \label{form-factor} \end{equation} with $x \equiv |\vec{q}|/\beta$ and $\beta \sim 0.4$ GeV\cite{cs}. For $D_1^0 \to D^{*+} \pi^-$ one has $x = 0.89$, while for $D_0^0 \to D^+\pi^-$, $x = 1.18$. This plays a significant role in the relative widths, as \begin{equation} \Bigl[\frac{\mathcal{F}(x = 0.89)}{\mathcal{F}(x = 1.18)}\Bigr]^2 = 1.6 \end{equation} which drives the ratio of widths in favour of the $D_1$. In turn, the form-factor also shows that $h$, extracted earlier from the chiral model, is an underestimate. In such a model the more general Eq.~\eqref{d1widthfs} modifies the heavy quark value of $h = 0.63 ^{+.16}_{-.13}$ to $h = 0.80 ^{+.20}_{-.17}$, and the effect of form-factors further increases $h$ to $h = 1.0^{+0.3}_{-0.2}$. Therefore, the inclusion of finite mass corrections and the effects of exclusive form factors has a significant impact on the value of $h$ extracted from experimental decay widths. We emphasise that although the form factor itself is model dependent, the suppression for larger $|\vec{q}|$ is expected in general. 
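The quoted suppression ratio follows directly from Eq.~\eqref{form-factor}; a quick check using the $x$ values given above:

```python
import math

# Form factor of eq. (form-factor): F(x) = (1 - 2x^2/9) exp(-x^2/12),
# x = |q|/beta with beta ~ 0.4 GeV.  Evaluate the suppression ratio
# [F(x_D1)/F(x_D0)]^2 at the quoted x values for D1 -> D* pi (x = 0.89)
# and D0 -> D pi (x = 1.18).
def F(x):
    return (1.0 - 2.0 * x**2 / 9.0) * math.exp(-x**2 / 12.0)

ratio2 = (F(0.89) / F(1.18))**2
print(f"[F(0.89)/F(1.18)]^2 = {ratio2:.2f}")
```

Since $\mathcal{F}(0)=1$, the same replacement $h \rightarrow h\mathcal{F}$ leaves the long-distance part of the potential controlled by the renormalised value of $h$.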
\begin{table} \begin{tabular}{lccc} System \hspace{.2in} & \hspace{.4in} HQ\hspace{.4in} & \hspace{.0in} NHQ \hspace{.4in}&\hspace{.3in} NHQFF\hspace{.4in} \\ \hline $D_1(2430)\to D^*(2010)\pi$& $0.63^{+.16}_{-.13} $ & $0.80^{+.20}_{-0.17}$ & $1.0^{+0.3}_{-0.2}$ \\ $D^*_0(2400)^0\to D\pi$ & $0.41\pm0.06$ & $0.55\pm0.08$ & $0.79\pm 0.11$ \\ $D^*_0(2400)^\pm \to D\pi$ & $0.36\pm0.04$ & $0.50\pm0.05$ & $0.80\pm 0.08$ \\ $K_1(1400)^\pm \to K^*(892)\pi$& $0.30\pm0.02$ & $0.50\pm0.04$ & $0.70\pm 0.05$ \\ $K^*_0(1430)^\pm \to K\pi$ & $0.15\pm0.04$ & $0.47\pm0.13$ & $1.2\pm 0.3$ \end{tabular} \caption{\label{tab:hvalues} Values of $h$ extracted for various systems which may experience S-wave $\pi$ exchange. The adaptation of the equations for the charm system to their appropriate form for flavor analog systems, by making the obvious mass substitutions, is assumed. } \end{table} In summary, from these different determinations we find values ranging from $h \approx 0.5$ to $1.3$: the value of $h$ is highly model and data dependent. We collect these results and present other results for analogous systems in Table\ \ref{tab:hvalues}. The HQ column presents the values of $h$ extracted in the heavy quark limit using Eq.~\eqref{d1width}. The NHQ column is similar but extracts the values in the non-heavy quark limit using Eqs.~\eqref{d1widthfs} and \eqref{d0widthfs}. The NHQFF column presents the extracted $h$ values which would be required to overcome the form-factor suppression, Eq.~\eqref{form-factor}, and to reproduce the correct width in the non-heavy quark limit. In the following section we will present results for a range of $h$ and show that the spectrum is highly sensitive to the value of $h$. \section{Molecular Spectroscopy} \label{sec:Spectroscopy} Previously\cite{cdprl} we performed a variational calculation with the potential in Eq.~\eqref{swavepi} and $h = 0.8 \pm 0.1$ using trial wave functions. 
With this technique we agreed with Ref.\ \cite{liu} that there was no reason to expect an isovector $1^{--}, D_1 \bar{D}^*$ bound state. Additionally, we found deep binding in the isoscalar $D_1 \bar{D}^*$ system. The presence of deep binding in the 1$^{--}$ channel motivated the present study, where we solve the Schr\"{o}dinger equation and analyze the spectroscopy emerging from S-wave $\pi$ exchange binding of the $D_1 \bar{D}^*$ and analogous systems. We solve the Schr\"odinger equation and quantify the bound states from S-wave $\pi$ exchange using a range of $h$ to set the scale. The resulting spectrum contains several potential bound states. The $1^{--}, I=0$ channel includes 1S and 2S states which are consistent with the $Y(4260)$ and $Y(4360)$ structures claimed in $e^+e^-$ annihilation. Results for the charmonium-like exotic $1^{-+}$ and isovector $1^{--}$ channels are also presented. We find the binding energies are highly sensitive both to the value used for $h$ in all channels, and to attempts to model finite size effects in some channels. We first consider the potential from the chiral model involving a pointlike interaction, and then discuss modification of the potential due to finite size effects. \subsection{Point-like pion exchange} \label{sec:results:pointlike} The Fourier transform of Eq.~\eqref{swavepi} gives the $D_1 \bar{D}^*$ $1^{--}$ potential with S-wave $\pi$ exchange in coordinate space. When $\mu^2 \equiv -\tilde{\mu}^2 < 0$ the real part of the potential is: \begin{equation} V_{\rm S}(r) =\frac{h^2 (m_{D_1}-m_{D^*})^2 }{8 \pi f_{\pi}^2} \frac{\cos(\tilde{\mu} r)}{r} (\vec{\tau}_i\cdot\vec{\tau}_j) \label{equ:swavepi_posspace} \end{equation} in agreement with Ref.\ \cite{liu}. We numerically solve the Schr\"{o}dinger equation using this position space potential as described in Ref.\ \cite{fcthomas}. Clearly the results for larger values of $h$, which yield significant binding, will have important finite size corrections. 
Therefore the point particle results can only be considered to give a cursory quantitative examination of the implications of our general argument. However, given the unusual nature of the oscillatory potential, it is beneficial to study the unregulated potential in order to contextualize the effects of a form-factor, which are explored in the next subsection. In Table~\ref{table:isoscalarcharm} we show the binding energies of some of the low-lying isoscalar $D_1 \bar{D}^*$ states in relative S-wave with $C=-$ (the parity obviously depends on the relative orbital angular momentum of the system; since the potential is independent of spin, the binding energies are degenerate across all possible total $J$ combinations). Binding energies are given for a few values of $h \in [0.8, 1.3]$. The binding energies are seen to be very sensitive to the value of $h$. \begin{table}[htp] \begin{tabular}{l|c|c|c|c|c|c|} & \multicolumn{6}{c}{Binding Energy / MeV} \\ State & $h=0.8$ & $h=0.9$ & $h=1.0$ & $h=1.1$ & $h=1.2$ & $h=1.3$ \\ \hline 1S $(0,1,2)^{--}$ & $230$ & $415$ & $680$ & $1000$ & $1500$ & $2100$ \\ 2S & $12$ & $20$ & $29$ & $39$ & $76$ & $210$ \\ 3S & $1.5$ & $3.6$ & $6.7$ & $11$ & $51$ & $65$ \\ \hline \end{tabular} \caption{Binding energies for various isoscalar $D_1 \bar{D}^*$ states in $L=0$ with $C=-$; the binding energies are given for a few values of $h$ in the range identified above.} \label{table:isoscalarcharm} \end{table} If for example $h=1.0$, we find that two or even three S-states may arise, with binding energies $680\ \text{MeV}$ (1S), $29\ \text{MeV}$ (2S) and $7$ MeV (3S). The rms radii, $r_{\rm rms}$, for these states are then approximately $0.2\ \text{fm}$, $3\ \text{fm}$ and $7\ \text{fm}$ respectively. This shows that the ground state is compact on a typical hadronic scale, the 2S is consistent with a canonical molecule, and the 3S is dubious.
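As a cross-check on the scale of these numbers, the point-like S-wave problem can be sketched numerically. The following is an illustrative sketch, not the method of Ref.~\cite{fcthomas}: a finite-difference diagonalization of the radial equation, in which the masses and the choice $f_\pi = 132$ MeV are assumed inputs.

```python
# Illustrative sketch (not the calculation of the text): diagonalize the
# radial S-wave Hamiltonian for the point-like isoscalar 1^{--} potential
#   V(r) = -3 h^2 (m_D1 - m_D*)^2 cos(mu_tilde r) / (8 pi f_pi^2 r),
# where tau_i . tau_j = -3 for I = 0. Masses and f_pi are assumptions.
import numpy as np
from scipy.linalg import eigh_tridiagonal

hbarc = 197.327                                  # MeV fm
m_D1, m_Dst, m_pi = 2427.0, 2010.0, 139.6        # MeV (illustrative)
f_pi, h = 132.0, 1.0                             # MeV; chiral coupling h
mu_red = m_D1 * m_Dst / (m_D1 + m_Dst)           # reduced mass, MeV

delta = m_D1 - m_Dst                             # mass gap, MeV
mu_t = np.sqrt(delta**2 - m_pi**2) / hbarc       # oscillation scale, fm^-1
g = 3.0 * h**2 * delta**2 / (8.0 * np.pi * f_pi**2)  # dimensionless

dr, N = 0.01, 2000                               # grid: r in (0, 20] fm
r = dr * np.arange(1, N + 1)
V = -g * hbarc * np.cos(mu_t * r) / r            # potential in MeV

kin = hbarc**2 / (2.0 * mu_red * dr**2)          # FD kinetic scale, MeV
E = eigh_tridiagonal(2.0 * kin + V, -kin * np.ones(N - 1),
                     select='i', select_range=(0, 2), eigvals_only=True)
print(E)  # three lowest levels in MeV; negative values are bound
```

With $h=1$ this sketch gives a deeply bound 1S level and weakly bound excited levels of the same order as the entries in Table~\ref{table:isoscalarcharm}, though the discretization of the $1/r$ singularity limits the precision of the ground state.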
Using Fig.~\ref{fig:FFEffect}, we can interpret the $r_{\rm rms}$ values obtained for the S-wave states with the point particle potential. The 1S state had an $r_{\rm rms}\approx 0.2$fm, clearly indicating that the state is bound in the first attractive well. In contrast, the 2S state had an $r_{\rm rms}\approx 3$fm, indicating that the particles are bound in the second attractive well of the potential. The 3S state has an $r_{\rm rms}\approx7$fm, suggesting that it is bound by the third attractive well. If we took the potential, Eq.~\eqref{equ:swavepi_posspace}, and applied it to $L>0$ systems unchanged apart from the centrifugal potential, we would find multiple bound states in the P- and D-waves including some with binding energies of $\mathcal{O}(50)$MeV. Firm conclusions regarding the possibility of such states would require a more extensive analysis of the origin of Eq.~\eqref{swavepi} than presented here. The potential energy scales as $h^2$ but the binding energies are much more sensitive to $h$ (ground state binding energies scale roughly as $h^6$). This sensitivity to $h$ may not be unexpected: the oscillatory nature of the potential in position space makes the potential turn repulsive for $r > 0.7$fm, and gives considerable sensitivity to these oscillations even for the short range 1S level, and critically so for the 2S. In a Coulomb potential the binding energies would scale as $h^4$; this further explains the sensitivity noted above in the numerical calculation.
\subsection{Form Factor} \label{sec:results:form-factor}
As noted previously, the form factor has a significant impact on the calculation of $h$ from the decay width. In the previous section we used this ``form-factor-renormalised'' value of $h$, but otherwise continued to treat the potential as if the hadrons were pointlike. Therefore, it is prudent to investigate what effect form factors may have on the analysis of the molecular spectroscopy.
To examine this question we attach dipole form factors to the potential, Eq.~\eqref{swavepi}. Such ideas have been discussed in Refs.~\cite{torn,newheavymesons,liulambda}. Following those ideas, we specifically choose to parametrise the form factor as \begin{equation} \mathcal{F} = \left( \frac{ \Lambda^2 - m_\pi^2}{\Lambda^2 - q^2 } \right) \approx \left( \frac{ \Lambda^2 - m_\pi^2}{\Lambda^2 + \mu^2 - m_\pi^2 + \vec q^2 } \right) \end{equation} and the potential is multiplied by $\mathcal{F}^2$ -- we use the latter expression for $\mathcal{F}$. We have made the same static approximation as in Eq.~\eqref{swavepi}. In nuclear physics dipole form factors have only $\Lambda^2 + \vec q^2$ in the denominator due to the $\pi$ exchange between the nearly degenerate nucleons. In position space, the form factor changes the potential from that in Eq.~\eqref{equ:swavepi_posspace} to: \begin{equation} V_S(r) = \frac{h^2 (m_{D_1}-m_{D^*})^2 }{8 \pi f_{\pi}^2} \left[ \frac{\cos(\tilde{\mu} r)}{r} - \frac{e^{-Xr}}{r} - \frac{(\Lambda^2 - m_{\pi}^2)}{2X} e^{-Xr} \right] \left( \tau_i \cdot \tau_j \right) \label{equ:swavepi_posspace_ff} \end{equation} with $X^2 \equiv \Lambda^2 + \mu^2 - m_{\pi}^2 = \Lambda^2 - (m_{D_1}-m_{D^*})^2$ and where $\mu^2 \equiv -\tilde{\mu}^2 < 0$. We plot this potential for the isoscalar $1^{--}$ channel for various $\Lambda$ in Fig.~\ref{fig:FFEffect} and for a fixed $\Lambda$ and the various $1^-$ channels in Fig.~\ref{fig:PotPlots}. \begin{figure} \includegraphics[scale=0.6]{FFEffect.eps} \caption{The potential, Eq.~\eqref{equ:swavepi_posspace_ff}, plotted against $r$ in the isoscalar $1^{--}$ channel for $h=0.8$ and a variety of $\Lambda$s. The solid line is the point particle result -- in effect $\Lambda \to \infty $; the dashed line is the result for $\Lambda = 1.5$GeV; the dash-dot line is for $\Lambda = 1.0$ GeV; the dash-dot-dot line is for $\Lambda = 0.75$GeV; and the dotted line is for $\Lambda = 0.5$GeV.
\label{fig:FFEffect}} \end{figure} Here $\Lambda$ is a purely phenomenological constant. Although its value should be related to the convolution of the spatial wave functions of the hadrons, its value is fairly arbitrary in practice. In the data-rich nucleon-nucleon sector, dipole form factors have been used in the Bonn nucleon-nucleon potentials. In CD-Bonn one finds values of $\Lambda = 1.3 - 1.7 \text{GeV}$\cite{cdbonn}. However, there is no reason to believe that the value used in nuclear forces should be related to the value most appropriate for use in $\pi$ exchange between charmed mesons. In the literature, other practitioners using dipole form factors in meson exchange molecular models employ values of $\approx$1.2 GeV\cite{torn}, $\approx 1.2$-$2.3$GeV\cite{newheavymesons}, and $\approx 0.4$-$10$GeV\cite{liulambda}. The qualitative effects of this form-factor are made apparent in Fig.~\ref{fig:FFEffect}. Regulating the potential introduces a soft repulsive core instead of a singular attraction at the origin. As $\Lambda$ decreases, the first attractive well in the potential is entirely overwritten by a repulsive core. We present the results for the binding energy as a function of $h$ and $\Lambda$ for the 1S and 2S isoscalar $1^{--}$ $D_1 \bar{D}^*$ states in Figs.~\ref{fig:1SBE} and \ref{fig:2SBE}. The horizontal axes are $1/\Lambda$ so that the origin corresponds to the point-like case. As one can see, the ground state binding energy falls off rapidly with decreasing $\Lambda$. Eventually the ground state binding energy reaches a stable value and remains approximately there for the remaining values of $\Lambda$ considered. This behavior is sharply contrasted by the binding energy of the 2S state. The 2S binding energy is initially insensitive to a decrease in $\Lambda$ before falling steeply and finally becoming insensitive again. This behavior can be understood from the behavior of the potential in Fig.~\ref{fig:FFEffect}.
As $\Lambda$ is decreased from $\infty$, the potential is increasingly regulated. This manifests as overwriting the initial attraction from the potential and eventually replacing it by an entirely repulsive core interaction for $\Lambda \lesssim 800$MeV. Thus we would expect a steep fall off in ground state binding energy as $\Lambda$ is decreased. This expectation is borne out in Fig.~\ref{fig:1SBE}. In contrast, the 2S state is bound primarily by the second attractive well, which is unaffected by decreasing $\Lambda$ as long as $\Lambda \gtrsim 800$MeV. Thus, we would expect the 2S binding energy to be relatively stable against decreasing $\Lambda$, as Fig.~\ref{fig:2SBE} confirms. At some point, which is $h$ dependent, there will no longer be enough attraction in the first attractive well to bind the system, and so the ground state will begin to rely on the second attractive well in order to bind, displacing the 2S state and decreasing its binding energy. When the first attractive well is completely overwritten, both the 1S and 2S states should have a relatively stable binding energy, as the attractive wells (second and third) which bind them are stable against decreased $\Lambda$. Indeed the binding energies of the 1S state decrease slightly as $\Lambda\to 500$MeV, corresponding to the alteration of the second attractive well in Fig.~\ref{fig:FFEffect}. \begin{figure}[htp] \includegraphics[scale=.6]{DvDbar1_CmI0_1S_BE.eps} \caption{Plot of the 1S $1^{--}$ isoscalar binding energy for multiple values of $h$ as the form factor parameter $\Lambda$ is varied.}\label{fig:1SBE} \end{figure} \begin{figure}[htp] \includegraphics[scale=.6]{DvDbar1_CmI0_2S_BE.eps} \caption{Plot of the 2S $1^{--}$ isoscalar binding energy for multiple values of $h$ as the form factor parameter $\Lambda$ is varied.}\label{fig:2SBE} \end{figure} This analysis shows that the molecular spectroscopy is very sensitive to the parameters.
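The overwriting of the first well can be seen directly by evaluating Eq.~\eqref{equ:swavepi_posspace_ff} near the first minimum of the point-like potential. A sketch, with illustrative values assumed for the masses and $f_\pi$:

```python
# Sketch: the regulated isoscalar 1^{--} potential, evaluated at
# r = 0.3 fm (inside the first attractive well of the point-like case)
# for two values of Lambda. All inputs are illustrative assumptions.
import numpy as np

hbarc = 197.327                                  # MeV fm
delta, m_pi, f_pi, h = 417.0, 139.6, 132.0, 1.0  # MeV
mu_t = np.sqrt(delta**2 - m_pi**2) / hbarc       # fm^-1

def V_reg(r, Lam):
    """Regulated potential (MeV) at r (fm) for cutoff Lam (MeV), I=0, C=-."""
    X = np.sqrt(Lam**2 - delta**2) / hbarc       # fm^-1
    c = (Lam**2 - m_pi**2) / hbarc**2            # fm^-2
    bracket = (np.cos(mu_t * r) / r - np.exp(-X * r) / r
               - c / (2.0 * X) * np.exp(-X * r))
    return -3.0 * h**2 * delta**2 / (8.0 * np.pi * f_pi**2) * hbarc * bracket

print(V_reg(0.3, 1500.0))   # Lambda = 1.5 GeV: still attractive
print(V_reg(0.3, 500.0))    # Lambda = 0.5 GeV: repulsive core
print(V_reg(3.2, 500.0))    # second well: only mildly affected
```

This is consistent with Figs.~\ref{fig:1SBE} and \ref{fig:2SBE}: the short-range well that binds the 1S is removed as $\Lambda$ falls, while the outer well that supports the 2S is only mildly affected.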
While a simple abstraction of parameters from existing data supports the idea that a spectroscopy of molecules could arise, one cannot with certainty predict this. However, the result of strong binding appears relatively robust. Indeed our results show that the existence of robust bound states (assuming $h$ is sufficiently large) does not depend on deep attraction at the origin, and that, even in the presence of a strong repulsive core interaction, a bound state should exist with a binding energy largely determined by long-range ($\gtrsim 2$fm) virtual pion effects. If the $Y(4260)$ and $Y(4360)$ are confirmed as genuine signals, then within this simple modelling, their energies are qualitatively consistent with those expected for S-wave binding. Indeed, the differing sensitivities of the 1S and 2S states to $\Lambda$ would allow one to tune the model to reproduce the binding energy of the $Y(4260)$, $174\pm9$ MeV, and of the $Y(4360)$, $76\pm 13$ MeV, assuming the mass of the $D_1$ were exactly 2427 MeV. Since the mass of the $D_1$ affects both binding energies in a systematic way, we cannot simply add its error in quadrature for both to obtain our binding energies with their error. Instead, we study the system for $m_{D_1} = 2427 - 40$MeV, $2427$MeV and $2427 + 40$MeV, requiring binding energies of $134\pm 9$ and $36\pm13$MeV; $174\pm 9$ and $76\pm13$MeV; and $214\pm 9$ and $116\pm13$MeV respectively. We present the ``tune-ability'' of the model in Fig.~\ref{fig:YsFit}. \begin{figure}[htp] \includegraphics[scale=0.6,angle=-90]{YsFit.ps} \caption{Contour plot of the values of $h_{\rm Eff}$ and $\Lambda$. The interior of the boxes corresponds to values which reproduce the binding energies of the $Y(4260)$ and $Y(4360)$ to within errors. The center box is for the experimental $D_1$ mass while the box on the left is for the $D_1$ mass minus its error and the box on the right is the $D_1$ mass plus the error.
The dotted line corresponds to the value $h_{\rm Eff}=1.0+0.3$ from the $D_1$ experimental width.\label{fig:YsFit}} \end{figure} The mass of the $D_1$ affects the potential in two straightforward ways. First, it factors into the calculation of $h$ from the decay widths. Secondly, it helps determine the mass gap which, along with $h$, controls the strength of the potential. Although the value of $h$ and the mass gap depend on the value of the $D_1$ mass, \begin{equation} h_{\rm Eff} = h\frac{m_{D_1} - m_{D^*}}{2427 - m_{D^*}} \end{equation} is unchanged as $m_{D_1}$ varies over its error. This allows us to plot the different mass cases on a single axis. (The mass of the $D^*$ has an insignificant error.) Fig.~\ref{fig:YsFit} was produced by parameterizing the binding energies. We assumed that the 2S binding energy was approximately independent of $\Lambda$ and so could be used to determine $h$. This assumption has been explicitly verified for the values of $h$ and $\Lambda$ considered and is found to be a good approximation. Then the 1S binding energies were parameterized as a quadratic function of $1/\Lambda$ whose coefficients were quadratic functions of $h$. This parameterization reproduced the 1S binding energies over the relevant range of these parameters. The quadratic formula could then be used to extract the range of $\Lambda$ from the $Y(4260)$ binding energy at each $h$. The region of compatibility extends from just below the error bounds on $h_{\rm Eff}$ to slightly above them. $\Lambda$ values are undetermined by experiment; however, the compatible values lie around 1 GeV, which is near values used by other practitioners. Thus a consistent, physically reasonable parameterization of $h$ and $\Lambda$ is possible which permits the identification of the $Y(4260)$ and $Y(4360)$ as the 1S and 2S bound states of the $D_1 \bar{D}^*$ system respectively. In general within the chiral model, $h$ is a function of the experimental $\Gamma (D_1 \to D^* \pi)$.
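The stability of $h_{\rm Eff}$ under variation of $m_{D_1}$ can be made explicit. A sketch of the scaling argument, assuming (as in the heavy-quark expressions) that the width behaves as $\Gamma \propto h^2 (m_{D_1}-m_{D^*})^2$ up to slowly varying phase-space factors:

```latex
% At fixed measured width,
%   Gamma(D_1 -> D* pi) \propto h^2 (m_{D_1}-m_{D^*})^2
% implies h (m_{D_1}-m_{D^*}) ~ const as m_{D_1} varies, so
\begin{equation*}
h_{\rm Eff} = h\,\frac{m_{D_1}-m_{D^*}}{2427\ \mathrm{MeV}-m_{D^*}}
\;\approx\; \mathrm{const},
\end{equation*}
% and the potential strength, which enters only through the combination
% h^2 (m_{D_1}-m_{D^*})^2, is likewise approximately invariant.
```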
If experiment were to show that the width were different than the current values that we have used, the consequent alteration of the molecular binding energies could be considerable. It is here that some major limitations in the robustness of the model lie.
\subsection{Other $D_1$ $D^*$ bound states and flavour exotics} \label{sec:results:othercharm}
The potentials in the $C=\pm$ and isovector/isoscalar channels are related by a simple constant. The potentials of isovector and isoscalar channels are related by a $\tau_i\cdot\tau_j$ factor while the potentials in channels with opposite charge conjugation are related by a relative phase. Therefore, we can use Fig.~\ref{fig:FFEffect} to interpret the binding in all these channels against finite size effects and a possible repulsive core. The robustness of our results in the isoscalar $1^{--}$ channel was discussed previously. \begin{figure} \includegraphics[scale=0.6]{PotentialPlots.eps} \caption{The potential, Eq.~\eqref{equ:swavepi_posspace_ff}, plotted against $r$ in the various $1^-$ channels for $h=1$ and $\Lambda = 1$GeV. The dotted lines are the isovector potentials while the solid lines are the isoscalar potentials. The left panel shows the $C=-$ potentials and the right panel gives the $C=+$ potentials. \label{fig:PotPlots} } \end{figure} In Table~\ref{table:morecharm} we show the binding energies of some of the low-lying isoscalar and isovector $D_1 \bar{D}^*$ states in relative S-wave with $C=\pm$. Binding energies are given for a few values of $h \in [0.8, 1.3]$ and $\Lambda=1$ GeV. Interestingly we find potentially robust binding in all isospin and charge-conjugation states.
\begin{table}[htp] \begin{tabular}{l c|c|c|c|c|c|c|} & & \multicolumn{6}{c}{Binding Energy / MeV} \\ State & Isospin & $h=0.8$ & $h=0.9$ & $h=1.0$ & $h=1.1$ & $h=1.2$ & $h=1.3$ \\ \hline 1S $(0,1,2)^{--}$ & 0 & $12$ & $20$ & $29$ & $60$ & $110$ & $160$ \\ 2S & & $1.6$ & $3.6$ & $23$ & $39$ & $51$ & $65$ \\ 3S & & -- & $0.7$ & $6.7$ & $11$ & $15$ & $21$ \\ \hline 1S $(0,1,2)^{--}$ & 1 & $4.2$ & $8.8$ & $15$ & $21$ & $29$ & $38$ \\ 2S & & $-$ & $-$ & $0.2$ & $0.7$ & $1.5$ & $2.8$ \\ \hline 1S $(0,1,2)^{-+}$ & 0 & $47$ & $67$ & $90$ & $120$ & $150$ & $180$ \\ 2S & & $4.2$ & $8.1$ & $13$ & $19$ & $27$ & $35$ \\ 3S & & $0.5$ & $1.6$ & $3.5$ & $6.1$ & $10$ & $14$ \\ \hline 1S $(0,1,2)^{-+}$ & 1 & $0.1$ & $0.5$ & $1.6$ & $3.4$ & $5.9$ & $8.9$ \\ 2S & & $-$ & $-$ & $-$ & $0.1$ & $0.4$ & $0.9$ \\ \hline \end{tabular} \caption{Binding energies for $D_1 \bar{D}^*$ states with various isospins and charge conjugations; the binding energies are given for a few values of $h$ in the range identified above and $\Lambda = 1$GeV.} \label{table:morecharm} \end{table} We note that the pattern of binding described here is valid for $\Lambda=1$ GeV and the pattern will be altered as $\Lambda$ changes. In particular, the pattern will change if the finite size effects wipe away less of the deep attractive core which binds the isoscalar $1^{--}$ and isovector $1^{-+}$ channels. In general a higher value of $\Lambda$ will lead to (significantly) more deeply bound isoscalar $1^{--}$ and isovector $1^{-+}$ bound states, and slightly less bound isovector $1^{--}$ and isoscalar $1^{-+}$ states. The pattern of relative binding energies between the channels may be understood from Fig.~\ref{fig:PotPlots}. The most deeply bound states occur in the isoscalar $1^{-+}$ channel where the potential is repulsive near the origin but has a deep attractive well (due again to the isospin factor of 3) near 1fm. 
The second most deeply bound states occur in the isoscalar $1^{--}$ channel where the $\tau_i\cdot \tau_j$ term contributes a $-3$ factor and there is a deep attraction near the origin. We can see in Fig.~\ref{fig:PotPlots} that the form-factor has reduced the magnitude of the first dip in the oscillating potential (around 0.3 fm), making it smaller than the first bump (around 1.2 fm). This is why the isoscalar $1^{-+}$ channel has deeper binding than the $1^{--}$ channel. The isovector channels lose the isospin factor of 3, leading to significantly reduced binding in these channels. However, their relative binding is the same: the $1^{--}$ potential retains the deeper attraction near 1 fm whereas the isovector $1^{-+}$ channel loses the attraction around the origin due to form-factor effects. Hence the isovector $1^{-+}$ channel is the least deeply bound when $\Lambda =1$ GeV. Consequently the prediction of bound states in the isovector $1^{-+}$ is the least robust. The situation is very different for the isovector $1^{--}$ and isoscalar $1^{-+}$ channels. In both of these channels the point particle potential is repulsive at the origin and they must bind in the first attractive well, which is $\approx$1fm away from the origin. Therefore we expect these numerical results to be robust against finite size effects and a repulsive core. However, under intense regulation of the potential, the pattern of Fig.~\ref{fig:FFEffect}, in which deep attraction is overwritten by strong repulsion as $\Lambda$ decreases, is reversed in these channels: a strong repulsion is overwritten by deep attraction. Therefore, we conclude that the existence of deep binding in these channels is a very robust result which should be insensitive to strong, short-range dynamics and totally independent of finite size effects of the potential, though both may contribute to deeper binding.
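The sign pattern described above can be verified directly from Eq.~\eqref{equ:swavepi_posspace_ff}. In the sketch below, the charge-conjugation dependence is implemented as an overall relative phase and the isospin dependence as the factor $\tau_i\cdot\tau_j = -3$ ($I=0$) or $+1$ ($I=1$); all parameter values are illustrative assumptions:

```python
# Sketch: sign of the regulated potential in the four 1^- channels at
# Lambda = 1 GeV, h = 1, near the origin (r = 0.3 fm) and in the
# ~1 fm well/bump region (r = 1.2 fm). Inputs are illustrative.
import numpy as np

hbarc = 197.327                                          # MeV fm
delta, m_pi, f_pi, h, Lam = 417.0, 139.6, 132.0, 1.0, 1000.0
mu_t = np.sqrt(delta**2 - m_pi**2) / hbarc
X = np.sqrt(Lam**2 - delta**2) / hbarc
c = (Lam**2 - m_pi**2) / hbarc**2
g0 = h**2 * delta**2 / (8.0 * np.pi * f_pi**2) * hbarc   # MeV fm

def V(r, isospin, C):
    tau = -3.0 if isospin == 0 else 1.0      # tau_i . tau_j
    phase = 1.0 if C == '-' else -1.0        # relative phase for C = +/-
    bracket = (np.cos(mu_t * r) / r - np.exp(-X * r) / r
               - c / (2.0 * X) * np.exp(-X * r))
    return tau * phase * g0 * bracket        # MeV

for iso in (0, 1):
    for C in ('-', '+'):
        print(iso, C, V(0.3, iso, C), V(1.2, iso, C))
```

This reproduces the ordering discussed in the text: at $\Lambda = 1$ GeV the isoscalar $1^{-+}$ and isovector $1^{--}$ channels are repulsive at short range but attractive near 1 fm, while the other two channels show the reverse pattern.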
The ranges of $h$ and $\Lambda$ which reproduce binding energies for the $Y(4260)$ and $Y(4360)$ (see Fig.~\ref{fig:YsFit}) are of particular interest. The case $h=1.3$ (Table~\ref{table:morecharm}) illustrates how it is possible to identify the 1S and 2S $1^{--}$ states as the $Y(4260)$ and $Y(4360)$ respectively. In such a scenario it is possible that a third $1^{--}$ state could occur around 4400 MeV. But of most interest is the prediction of a robust isoscalar exotic $1^{-+}$ bound state in the vicinity of, or even below, the $Y(4260)$. If this exotic state is below the $Y(4260)$ then it may possibly be observed through $Y(4260)\rightarrow Y(4200?) +\gamma$. Table~\ref{table:morecharm} shows binding in both isovector $1^-$ channels. We therefore must reverse our previous concurrence\cite{cdprl} with the conclusions of Ref.~\cite{liu}: when subjected to a more complete analysis, we find that a bound state may exist due to one pion exchange between $D_1\overline{D^*}$ in the isovector $1^-$ channel. We find it interesting to note that the $Z(4430)$ has a mass of $4433 \pm 4$ MeV\cite{z4430}. Therefore if it were a $D_1\overline{D^*}$ molecule, it would have a binding energy of $4\pm 9$ MeV. This binding energy is compatible with a charged partner of the $1^{-+}$ isovector result for the range of $h$ and $\Lambda$ which reproduces the $Y(4260)$ and $Y(4360)$. A more complete analysis than that provided here is necessary to make a definitive identification. However we find the possibility that one pion exchange might provide a consistent description of the $Y(4260)$, $Y(4360)$ and the $Z(4430)$ with physically reasonable parameters encouraging. In addition, we predict doubly charmed ($D_1 D^*$ as opposed to $D_1 \overline{D^*}$) isoscalar and isovector states degenerate with, respectively, the isoscalar and isovector $D_1 \overline{D^*}$ states in $C=-$. We refer to Ref.~\cite{fcthomas} for a discussion of the signs involved.
\subsection{Bottom analogues} \label{sec:results:bottomkaon}
In Table~\ref{table:isoscalarbottomkaon} we present the binding energies of some of the low-lying isoscalar $B_1 \bar{B}^*$ states in relative S-wave with $C=-$, along with the analogous $D_1 \bar{D}^*$ states for comparison. Binding energies are given for a few values of $h \in [0.8, 1.3]$ and $\Lambda = 1$ GeV. \begin{table}[htp] \begin{tabular}{l|c|c|c|c|c|c|} & \multicolumn{6}{c}{Binding Energy / MeV} \\ State & $h=0.8$ & $h=0.9$ & $h=1.0$ & $h=1.1$ & $h=1.2$ & $h=1.3$ \\ \hline 1S $D_1 \bar{D}^*$ $(0,1,2)^{--}$ & $12$ & $20$ & $29$ & $60$ & $110$ & $160$ \\ 2S & $1.6$ & $3.6$ & $23$ & $39$ & $51$ & $65$ \\ 3S & -- & $0.7$ & $6.7$ & $11$ & $15$ & $21$ \\ \hline 1S $B_1 \bar{B}^*$ $(0,1,2)^{--}$ & $56$ & $93$ & $140$ & $190$ & $250$ & $320$ \\ 2S & $20$ & $29$ & $38$ & $49$ & $61$ & $75$ \\ 3S & $6.3$ & $9.8$ & $14$ & $19$ & $24$ & $30$ \\ \hline \end{tabular} \caption{Binding energies for various isoscalar $B_1 \bar{B}^*$ and $D_1 \bar{D}^*$ states in $L=0$ with $C=-$; the binding energies are given for a few values of $h$ in the range identified above.} \label{table:isoscalarbottomkaon} \end{table} The binding energies are generally deeper than in the charmed analogues. This is easily understood: the higher mass of the $B$ mesons results in a lower kinetic energy. In general we predict analogous effects in the $B$ analogs of the charmed system, subject to differences in the width, which is experimentally undetermined for the $B_1$. Similar effects may exist in the $K$ system. However the phenomenology of the $K_1$ is more complex and the heavy quark approximation is certainly inadequate. Together with the constraint on $h$ implied by the width, this prevents us from drawing quantitative conclusions; we only note the qualitative possibility that S-wave pion exchange may produce binding in the $K$ system.
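The mass scaling can be illustrated with a numerical sketch: a finite-difference diagonalization of the radial equation with the regulated potential at $\Lambda = 1$ GeV. The $B_1$ and $B^*$ masses, $f_\pi$, and the carried-over charm value of $h$ are all assumptions here:

```python
# Sketch: ground state of the regulated I=0, C=- potential (Lambda = 1 GeV,
# h = 1) for the D1 D*bar and B1 B*bar systems. The larger reduced mass of
# the bottom system lowers the kinetic term and deepens the binding.
import numpy as np
from scipy.linalg import eigh_tridiagonal

hbarc, m_pi, f_pi, h, Lam = 197.327, 139.6, 132.0, 1.0, 1000.0  # MeV

def ground_state(m1, m2):
    """Lowest eigenvalue (MeV) on an r in (0, 25] fm grid."""
    mu_red = m1 * m2 / (m1 + m2)
    delta = m1 - m2
    mu_t = np.sqrt(delta**2 - m_pi**2) / hbarc
    X = np.sqrt(Lam**2 - delta**2) / hbarc
    c = (Lam**2 - m_pi**2) / hbarc**2
    dr, N = 0.01, 2500
    r = dr * np.arange(1, N + 1)
    bracket = (np.cos(mu_t * r) / r - np.exp(-X * r) / r
               - c / (2.0 * X) * np.exp(-X * r))
    V = -3.0 * h**2 * delta**2 / (8.0 * np.pi * f_pi**2) * hbarc * bracket
    kin = hbarc**2 / (2.0 * mu_red * dr**2)
    return eigh_tridiagonal(2.0 * kin + V, -kin * np.ones(N - 1),
                            select='i', select_range=(0, 0),
                            eigvals_only=True)[0]

E_D = ground_state(2427.0, 2010.0)   # D1 D*bar (masses assumed)
E_B = ground_state(5726.0, 5325.0)   # B1 B*bar (masses assumed)
print(E_D, E_B)                      # bottom system binds more deeply
```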
\section{Discussion} \label{sec:Uncertaintities}
The results for binding energies, and even whether states bind at all, are sensitive to parameters, and also to more complicated (possibly more realistic) modelling of the strong interactions. We have focussed solely on the $t$-channel force from virtual pion exchange, specifically the four-fermion intermediate states in the Fock state. We have therefore taken only the real part of the potential in solving the Schr\"{o}dinger equation, ignoring the imaginary part arising from the exchange of a real, on-shell pion. There are also $s$-channel forces arising from intermediate $c\bar{c}$ excited states. More immediately, in our molecular approach there are intermediate states with a real pion, of the form $D^*\pi \bar{D^*}$. The ability of a virtual exchanged particle to be on-shell introduces an imaginary component to the matrix element and, hence, to the potential. The effect is to make the energy complex: the real part is taken as the binding energy while the imaginary part is interpreted as the width of the state. That the on-shell intermediate state would manifest as a width seems natural, as it represents a direct connection between the bound state and a possible decay channel. The picture is then that the $D_1$ decays into a $D^*\pi$ and the ``would be'' quasi-molecular bound state disintegrates, or even fails to form. Thus, we expect that the on-shell pion contribution will endow any state produced by this mechanism with a width, or that it produces a non-resonant background which may obscure the signal. Within our approximations we find deeply bound meta-stable states. The $D^*\pi \bar{D^*}$ generates widths and background. Whether these states remain visible is then dependent upon the relative importance of neglected forces, such as mixing with $c\bar{c}$ or $D^*\pi \bar{D^*}$.
In general it is difficult to calculate the impact of neglected effects, not least because strong interactions are complicated and we are approximating one particular force as dominant. If the $Y(4260)$ is an example of our states, then its visibility shows that Nature is kind, at least in the $1^{--}$ channel. It has given a width of $\mathcal{O}(100)$MeV and a visibility above background. It could be that this fortune is because a $c\bar{c}$ component drives the production, and the $D_1\bar{D^*}$ rearrangement then drives the $\psi \pi\pi$ signal. Thus the conclusion of this analysis is that while it is {\it possible} that a deep bound molecular spectroscopy with signals visible above a background can arise, {\it it is not mandatory}. However, as we have already noted, the $Y(4260)$ and $Y(4360)$ are consistent with being the first two states observed in such a spectroscopy. The immediate test of this is to seek evidence for these states in $D\bar{D}\pi\pi\pi$. Unless there is some dynamical suppression, such channels must show strength if a $D^*\bar{D_1}$ bound system is present. If this first test is passed, then a search for other transitions and evidence for analogous states in $B^*\bar{B_1}$ would be warranted. In this latter case we note the apparent presence of the anomalous state $\Upsilon(10.88)$\cite{georgehou}. This and other phenomenological implications are the theme of the next section.
\section{Phenomenology} \label{sec:Phenomenlogy}
We have studied the $D^*\bar{D_1}$ molecules and found deeply bound states with $I=0$, which are degenerate for the $(0,1,2)^{--}$ channels. However, the number of potentially deeply bound states is very sensitive to parameters. Typically we anticipate the binding energies of the $I=0$ states to be of the following orders of magnitude: 1S, $\mathcal{O}(10-100)$MeV; 2S, $\mathcal{O}(1-10)$MeV, with an exotic $1^{-+}$ between the 1S and 2S levels.
Further reasons to anticipate a rich spectroscopy are that this S-wave $\pi$ exchange can also occur for $D_0\to D$ and the off-diagonal $D\bar{D_1} \to D_0\bar{D^*}$. The strengths for each of these in the heavy quark limit are identical. In practice there will be model dependent perturbations due to mass shifts and mixings; these are beyond the present paper and only merit study if the general features of our model show up in the data. In general: if S-wave pion exchange forms deeply bound charmed molecules composed of $D^*\bar{D_1}$, $D^*\bar{D_0}$, $D\bar{D_1}$ and $D\bar{D_0}$ (and manifestly charmed analogs), there will be a rich spectroscopy of states in the $3.9 - 4.5$ GeV mass range. These can include states that are superficially charmonium, such as $I=0$, $0^{-+}$ and $1^{--}$, as well as exotic $J^{PC}$: $0^{--}$ and $1^{-+}$. In addition there are also states with charmonium character but $I=1$. States such as $I=0$, $0^{-+}$ and $1^{--}$ may contain $c\bar{c}$ in their Fock states and hence be produced at measurable rates; the other states have no such aid, but may be produced in radiative or strong transitions from higher lying molecules. Manifestly charmed ($D^*D_1$, etc.) states are also expected and are degenerate with the $C=-$ charmonium-like states. The pattern and observability of these will depend on the detailed pattern of the spectroscopy. The states that are most amenable to experimental study are the $I=0$, $1^{--}$. These occur in $D_1D^*$, and can also arise from the off-diagonal S-wave potential for $DD_1 \to D_0 D^*$. Hence there can be a rather rich spectroscopy in the $1^{--}$ sector. As there are apparently several states of varying statistical significance emerging in the data, we shall primarily focus on this channel here. The best established enigmatic structure in the $1^{--}$ sector is the $Y(4260)$, which is seen in $\psi \pi\pi$.
Its typical hadronic width $\Gamma(4260) \sim 90$MeV implies that either $\psi \pi\pi$ is not the dominant decay channel or that 40 years of experience with the OZI rule and strong interactions is wrong. Given the nearness of the $D(L=0)+D(L=1)$ thresholds, which can be accessed in S-wave, rearrangement into $\psi \pi\pi$ at low momentum seems reasonable, and has been invoked as a qualitative explanation of these phenomena\cite{closepage}. Since $D^*$ and $D_0 \to D\pi$, whereas $D_1 \to D \pi\pi$, if the dynamics are associated with the nearby $D\bar{D_1}$ and $D^*\bar{D_0}$ thresholds (for example, the $Y(4260)$ being a $D\bar{D_1}$ molecule, or a hybrid $c\bar{c}$ dynamically attracted towards that threshold), then strength should be seen in the $D \bar{D} \pi\pi$ channels\cite{closepage,closetalk}. However, if the $Y(4260)$ is a $D^*\bar{D_1}$ bound state, then the favored strong decay will be to $D\bar{D} 3 \pi$, in contrast to the aforementioned $D\bar{D_1}$ or $D^*\bar{D_0} \to D\bar{D} 2\pi$. A preliminary report from Belle\cite{belle2pi} sees no evidence for $D^*D\pi$ in the $Y(4260)$ region. This disfavors $D\bar{D_1}$ and potentially also the $D\bar{D}\pi\pi$ channel. Thus, by default, the possibility that the strength is driven by $D\bar{D}\pi\pi\pi$ becomes tantalizing. Thus an immediate consequence of this interpretation is that if the $Y(4260)$ is a $D^*\bar{D_1}$ molecule, there {\it must} be significant coupling of the $Y(4260) \to D\bar{D}\pi\pi\pi$ that could exceed that to $D\bar{D}\pi\pi$. More generally, an unavoidable conclusion of this dynamics is that in the $1^{--}$ sector the $e^+e^- \to D \bar{D} \pi\pi\pi$ channel has significant strength in the region of any $D^*\bar{D_1}$ molecular states. Hence we urge measurement of the relative importance of the channels $e^+e^- \to D \bar{D} \pi\pi\pi$ and $e^+e^- \to D \bar{D} \pi\pi$ (when, in the latter, $D^*\bar{D^*}$ has been removed).
The depth of binding of the ground state with trial wave functions already suggested\cite{cdprl} the tantalizing possibility that a radially excited state could also be bound. Numerical solutions of the Schr\"{o}dinger equation confirmed that this is likely to be the case in the range of models discussed here. The excitation energy for radial excitation of a compact QCD $c\bar{c}$ state is $\mathcal{O}(500)$MeV; it takes less energy, $\mathcal{O}(100)$MeV, to excite the extended molecular system, which has no linearly rising potential. The spatial extent of the molecular 2S system is significantly greater than that of $c\bar{c}$ hadrons. The rearrangement of constituents leading to final states of the form $\psi$ + light mesons then rather naturally suggests that the lower (radial) states convert to $\psi \pi\pi$ ($\psi' \pi\pi$) respectively. In this context it is intriguing that there are states observed with energies and final states that appear to be consistent with this: $Y(4260) \to \psi \pi\pi$ \cite{4260} and the possible higher state $Y(4360) \to \psi' \pi\pi$ \cite{babar1,belle1} are respectively 170 MeV and 70 MeV below the $D^*(2010)\bar{D_1}(2420)$ combined masses of 4430 MeV. Here again, for a $D^*\bar{D_1}$ molecular state, we would expect significant coupling to $D\bar{D}\pi\pi\pi$. If these states were to be established as members of molecular systems, one could tune the model accordingly. Further, this could be an interesting signal for a $D^*\bar{D_1}$ quasi-molecular spectroscopy with transitions among states that could be revealed in, for example, $e^+e^- \to \psi\gamma\gamma\pi\pi$. Indeed, if we identify 1S$(4260)$ and 2S$(4360)$, then we expect the exotic $1^{-+}$ to occur in the vicinity of the $Y(4260)$. Given that lattice QCD finds activity for a hybrid $c\bar{c}$ signal in this channel in this region, one should now actively search for evidence.
A clear signature is that the $1^{-+}$ hybrid will couple to $D\bar{D}\pi\pi$ in either the $D^*\bar{D_0}$ or $D\bar{D_1}$ combinations; looking for the presence of strength in $e^+e^- \to \gamma D\bar{D}\pi\pi$ which does not include $D^*\bar{D^*}$ should thus be a primary endeavor. The absence of such a channel could have far-reaching implications for theory. While our discussion has centered on charmonium, the remarks hold more generally. Since the attraction of the potential depends only on the quantum numbers of the light $q\bar q$, it follows immediately that the flavor of the heavy quarks is irrelevant, at least qualitatively. Hence we expect similar effects to occur in the $b\bar{b}$ and $s\bar{s}$ sectors. It has been noted that the $\Upsilon(10.86)$ GeV state appears to have an anomalous affinity for $\Upsilon \pi\pi$\cite{upsilon}. This state is $\sim 130$ MeV below the $B^*\overline{B}_1$ threshold. In the $\phi\pi\pi$ channel there is an enhancement at 2175 MeV\cite{phi}. This is approximately 125 MeV below the $K^*\overline K_1(1400)$ threshold. This is consistent with the $K^*\bar{K_1}$ spectroscopy; however, as commented earlier, analysis here is less reliable, as the heavy quark approximation fails, and the phenomenology of the $K_1(1270;1400)$ pair is more complicated\cite{barnes,lipkin}. The primary test for this picture is that if the states in the $4$ to $4.5$ GeV region are deeply bound $D^*\bar{D_1}$ spectroscopy, then their decays into charm pairs must show strength in the $D\bar{D}\pi\pi\pi$ channels. The energy dependence of this channel and that of $D\bar{D}\pi\pi$ (with no $D^*\bar{D^*}$) can reveal the mixings between $D^*\bar{D_1}$ and $D\bar{D_1}/D^*\bar{D_0}$ molecular systems. The presence of an exotic $1^{-+}$ state is also expected. \section{Conclusions} \label{sec:Conclusions} In general we find that deeply bound molecules in the $D_1\bar{D^*}$ system should occur as a result of $\pi$ exchange in S-wave, leading to a potentially rich spectroscopy. 
Whether such states are narrow enough to show up above background is a question that experiment may resolve. We note, however, that the emerging data on the $1^{--}$ states known as $Y(4260)$ and $Y(4360)$ are consistent with being examples of these molecular states. The immediate test is to verify if the prominent channels with manifest charm in this mass region are $D\bar{D}\pi\pi\pi$. If this is confirmed, then more detailed studies will be merited, in particular searches for an exotic $1^{-+}$ in the vicinity of 4.2 GeV. This state could be produced via $Y(4260) \to (1^{-+}) + \gamma$, and/or be revealed in $1^{-+} \to \psi \gamma$. Table \ref{table:morecharm} with $h=1.3$ shows a possible spectroscopy consistent with the $Y(4260)$ and $Y(4360)$ as the $1$S and $2$S $1^{--}$ states. In this case, the exotic states expected are $I=0$ $0^{--}$ also at 4260 and 4360 (in both charmonium-like and manifestly charm channels); the isoscalar $1^{-+}$ at 4250 and 4395; and also $I=1$ ``charmonium'' states, including $1^{--}$ at 4390. As long as one picks and chooses which datum one will fit, it is possible to fit it in a molecular model. A reason is that binding energies are very sensitive to parameters that are not well determined elsewhere. Thus a model designed to fit a single state has limited appeal. The more relevant test is whether a group of states share a common heritage, and their production or decay properties reveal the underlying molecular structure. In the particular case here, one can fit the masses and decay widths in tetraquark, hybrid and molecular models. As such the existence of these states does not discriminate among them. However, the pattern of $J^{PC}$ and the decay channels differ. Thus the sharpest tests of their dynamical structure appear to be in the decay branching ratios. 
Hence, for example, the $Y(4260)$ as a $cs\bar{c}\bar{s}$ tetraquark would couple to $D_s\bar{D_s}$; a hybrid or molecule associated with the $D\bar{D_1}$ threshold would be expected to appear in $D\bar{D_1} \to D\bar{D}\pi\pi$; molecules associated with the $D^*\bar{D_1}$ threshold by contrast would have significant strength in the $D\bar{D}\pi\pi\pi$ channels. Thus the decay branching ratios of states seem likely to be sharper indicators of their dynamical nature than simply their masses. If our hypothesis is correct, we expect significant strength in the $e^+e^- \to D\bar{D}\pi\pi\pi$ channels in the 4 to 5 GeV region. Such evidence may already exist among the data sets for $e^+e^-$ annihilation involving ISR at BaBar and Belle. \section*{Acknowledgements} One of us (FEC) thanks Jo Dudek for a question at a Jefferson Lab seminar which stimulated some of this work and T. Burns for discussion. This work is supported by grants from the Science \& Technology Facilities Council (UK), in part by the EU Contract No. MRTN-CT-2006-035482, ``FLAVIAnet'', and in part authored by Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177. The U.S. Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce this manuscript for U.S. Government purposes.
\section{Introduction} The Large High Altitude Air Shower Observatory ( LHAASO )~\citep{LHAASO} project aims to study 40 GeV--1 PeV gamma ray astronomy and 20 TeV--1 EeV cosmic ray physics at Yangbajing ( 4300 m a.s.l. ), Tibet, China, near the AS$_{\gamma}$ and ARGO-YBJ experiments. The Wide Field of View ( FOV ) Cherenkov telescope Array ( WFCTA ), as a component of the LHAASO project, is designed to study the cosmic ray energy spectrum species by species by measuring the energy and $X_{max}$ depth of each air shower. Two WFCTA prototype telescopes have been constructed and placed near the ARGO-YBJ experimental hall. The two telescopes can be operated in both monocular and stereo modes, while coincident observation with the ARGO-YBJ detector is achieved off-line. Each telescope is made up of two main parts, the reflector and the camera. The reflector consists of 20 spherical mirrors with a radius of curvature $R$ of 4740$\pm$20 mm, corresponding to a total area of 4.7 m$^{2}$. The reflecting efficiency of the mirrors is about 82\% for light with wavelength larger than 300 nm. A camera is placed at the focal plane, which is 2305 mm away from the reflector center, to optimize the spot shape of a point-like object. The camera is composed of 256 flat hexagonal photomultiplier tubes ( PMTs ), each of which has a diameter of 40 mm, corresponding to a FOV of about $1^{\circ}\times1^{\circ}$. The PMTs are arranged in 16 columns and 16 rows forming a total FOV of $14^{\circ}\times16^{\circ}$~\citep{He-2007}. The maximum quantum efficiency of the PMTs can reach $30\%$ at 420 nm~\citep{PMT}. The signals of the PMTs are digitized by 50 MHz Flash Analog to Digital Converters ( FADCs ). The whole system is hosted in a shipping container with dimensions of 2.5 m$\times$2.3 m$\times$3 m. The container is mounted on a standard dump-truck frame with a hydraulic lift that allows the container to be tilted to any elevation angle from 0 to 60 degrees. 
The pointing direction of the telescope can be easily changed~\citep{He-2007}. The pointing accuracy and the geometry of the telescopes are crucial for the arrival direction and $X_{max}$ reconstructions. The energy reconstruction relies on the recorded Cherenkov image; however, due to the imperfect physical junctions between PMTs, some Cherenkov photons falling in the joints go unrecorded. In order to improve the accuracy of the energy reconstruction, the optical properties are studied. In this paper, we describe a method to calibrate the geometry and optical properties by using UV bright stars. While monitoring the UV Cherenkov light from air showers, the telescopes are also sensitive to the UV light from stars crossing their FOV. With their well known positions, orders of magnitude more accurate than the required resolution of WFCTA, and their point-like shape, the stars are ideal tools to test the pointing direction of each telescope. The optical properties of the telescopes are also studied using stars. \section{Night sky background and star signal} In addition to recording the Cherenkov light from air showers, the camera records the night sky background light (NSB). When a bright star enters the FOV of the telescope, light from the star is added to the diffused NSB. The changes of the recorded NSB reflect the light sources crossing the FOV of the PMTs and the weather conditions. A star stays about 4 minutes in the FOV of a PMT, and during this time light from the star is added to the diffused NSB. NSB changes due to weather behave differently: they usually last much longer than 4 minutes and affect almost all PMTs at the same time. This enables star signals to be discriminated from weather changes. For each air shower event, the signal of Cherenkov light lasts only a few nanoseconds, while the trigger window lasts $18\,\mu s$, so the telescopes record NSB during most of the trigger window. 
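As a minimal illustration (not the actual analysis code), the two signatures described above, a single PMT brightening for roughly the 4-minute transit time versus a long-lasting change affecting almost all PMTs simultaneously, can be separated with simple array logic; all cut values below are hypothetical:

```python
import numpy as np

def classify_excess(bright, dt=10.0, star_max_s=360.0, weather_frac=0.5):
    """Classify NSB excesses in a boolean map of shape (n_pmts, n_bins),
    where each bin is a 10 s NSB average flagged True when it exceeds the
    diffused baseline.  All cut values are illustrative, not the experiment's.
    """
    frac = bright.mean(axis=0)                 # fraction of bright PMTs per bin
    if np.any(frac >= weather_frac):           # most PMTs bright together
        return "weather"
    durations = bright.sum(axis=1) * dt        # bright time per PMT (seconds)
    if np.any((durations > 0) & (durations <= star_max_s)):
        return "star"                          # localized, about one transit
    return "none"

# A star transit: one PMT bright for 24 consecutive 10 s bins (about 4 min).
star_map = np.zeros((16, 100), dtype=bool)
star_map[5, 30:54] = True

# A weather change: almost all PMTs bright for a long stretch.
weather_map = np.zeros((16, 100), dtype=bool)
weather_map[:, 20:90] = True
```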
In order to reduce the fluctuations, the recorded NSB is averaged every $10\,$s. Fig.~\ref{nsb} shows a typical NSB curve in one night recorded by a PMT. After subtracting the diffused background, peaks due to stars are clearly seen. The peak amplitude of a star light curve in a PMT depends on its UV brightness and on the position of its projection onto the PMT on the camera. In a typical night, many stars can be seen by each PMT. \begin{center} \includegraphics[width=9cm]{eps/bkg_example_119.eps} \figcaption{\label{nsb} A typical NSB curve recorded by a PMT in one night before (upper) and after (bottom) subtracting the diffused NSB. } \end{center} \section{Pointing direction} Stars with well known positions and brightness are used as guides for the telescopes. In our analysis, the TD 1 catalog is used, which has four different wavelength bands, 1565, 1965, 2365 and 2740~\AA, respectively~\citep{TD1}. Since the WFCTA telescopes are sensitive in the near UV band, stars with flux in the 2740~\AA\ band above $1\times10^{-11}\,\mathrm{erg}/\mathrm{cm}^{2}/\mathrm{s}$ are used. The pointing direction of each telescope can be changed through the container which encloses the whole telescope system. Only a very rough pointing direction (about 200$^{\circ}$ in azimuth and 60$^{\circ}$ in elevation) is known from it. Using the rough pointing direction, the time when the brightest star appears in the FOV of the telescope can be found through the brightest PMT. However, due to the large size of the PMT, the position of the PMT cannot accurately give the position of the brightest star on the camera. So the weighted center position ($x_{0}$, $y_{0}$) of the PMT and of its neighbors within $2^{\circ}$ is taken as the position of the star on the camera. 
When the star is in the middle of the camera in the horizontal direction, the star has the same azimuth angle as the telescope, while the elevation angle of the star is equal to the elevation of the telescope plus the angular distance between the position of the star and the center of the camera. The pointing direction of the telescope obtained in this way is more accurate than the one from the container. After getting the pointing direction of the telescope through the brightest star, orphan stars, which have no surrounding stars within $2^{\circ}$, are used to correct the pointing direction of the telescope by an iterative method. The accuracy of the pointing direction can be described by the differences between the positions of stars in the local coordinates and the obtained ones. Fig.~\ref{pointing} shows the distribution of the differences for one telescope. An accuracy better than $0.05^{\circ}$ is obtained in less than 20 minutes with five stars in the FOV. \begin{center} \includegraphics[width=8cm]{eps/delta_a.eps} \includegraphics[width=8cm]{eps/delta_e.eps} \figcaption{\label{pointing} The distributions of the differences between positions of stars and the obtained ones in azimuth (upper) and elevation (bottom). A Gaussian fit is superposed on each distribution.} \end{center} Fig.~\ref{PointingMonitor} shows the pointing of one telescope as a function of time over two observation periods, i.e.\ two months. Both azimuth and elevation are very stable. The large change in elevation was due to lowering and re-elevating the container between the two periods. \begin{center} \includegraphics[width=8cm]{eps/azimuth_day.eps} \includegraphics[width=8cm]{eps/elevation_day.eps} \figcaption{\label{PointingMonitor} The pointing directions of one telescope versus time in two months (Up: azimuth, Bottom: Elevation). Each point represents one day.} \end{center} \section{Camera geometry} The camera geometry calibration is done after the pointing correction of each telescope. 
The calibration includes the following four parameters. The first one ( $P_{1}$ ) is a scaling of the tubes away from the center of the tube cluster. To first order $P_{1}$ corrects for deviations in the radius of curvature of a mirror and for changes in the effective camera-mirror distance due to the treatment of the flat camera as a curved surface. The second parameter ( $P_{2}$ ) describes the rotation angle of the camera around the mirror axis. The last two parameters ( $P_{3}$, $P_{4}$ ) describe the offsets of the position of the entire camera with respect to the mirror axis. After the four-parameter correction, the positions ($x_{c,t}$, $y_{c, t}$) of stars on the camera at time $t$ are modified according to Eq.~(\ref{eq2}) and Eq.~(\ref{eq3}). \begin{equation} \label{eq2} x_{c, t}'=(1+p_{1})(x_{c, t}\cos(p_{2})-y_{c, t}\sin(p_{2}))+p_{3} \end{equation} \begin{equation} \label{eq3} y_{c, t}'=(1+p_{1})(x_{c, t}\sin(p_{2})+y_{c, t}\cos(p_{2}))+p_{4} \end{equation} The $x_{c, t}'$ and $y_{c, t}'$ in Eq.~(\ref{eq2}) and Eq.~(\ref{eq3}) are the stars' coordinates after the geometry correction. The four parameters can be obtained by the least-squares method. The $\chi^{2}$ is shown in Eq.~(\ref{eq1}). \begin{equation} \label{eq1} \chi^{2}=\sum_{star}\sum_{t}\frac{(x_{c,t}'-x_{star,t})^{2}+(y_{c,t}'-y_{star,t})^{2}}{\sigma_{x}^{2}+\sigma_{y}^{2}} \end{equation} In Eq.~(\ref{eq1}), $x_{star, t}$ and $y_{star, t}$ are the stars' exact positions on the camera at time $t$, and $\sigma_{x}$ and $\sigma_{y}$ are the RMS of $x_{c, t}'$ and $y_{c, t}'$, respectively. By minimizing the $\chi^{2}$, the values of $P_{1}$, $P_{2}$, $P_{3}$ and $P_{4}$ are found to be $-1.5\%$, $0.6^{\circ}$, $-0.05^{\circ}$ and $-0.05^{\circ}$. These parameters are used in the detector simulation and data reconstruction. 
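For illustration, the forward correction of Eqs.~(\ref{eq2})--(\ref{eq3}) and the figure of merit of Eq.~(\ref{eq1}) can be coded in a few lines; this is only a sketch with made-up star positions and placeholder resolutions, not the analysis code, and the quoted parameter values are used as the ``true'' parameters:

```python
import numpy as np

def correct(x, y, p1, p2, p3, p4):
    """Forward four-parameter camera correction:
    scale p1, rotation p2 (radians), offsets p3 and p4 (degrees)."""
    c, s = np.cos(p2), np.sin(p2)
    return ((1 + p1) * (x * c - y * s) + p3,
            (1 + p1) * (x * s + y * c) + p4)

def chi2(p, x_cam, y_cam, x_star, y_star, sig_x=0.1, sig_y=0.1):
    """Least-squares figure of merit; the 0.1 deg resolutions are placeholders."""
    xp, yp = correct(x_cam, y_cam, *p)
    return np.sum(((xp - x_star) ** 2 + (yp - y_star) ** 2)
                  / (sig_x ** 2 + sig_y ** 2))

# The fitted values quoted in the text, used here as the "true" parameters.
p_true = (-0.015, np.deg2rad(0.6), -0.05, -0.05)

# Toy star positions on the camera (degrees), not real tracks.
rng = np.random.default_rng(0)
x_cam = rng.uniform(-8.0, 8.0, 50)
y_cam = rng.uniform(-7.0, 7.0, 50)
x_star, y_star = correct(x_cam, y_cam, *p_true)
```

At the true parameters the $\chi^{2}$ vanishes by construction; in practice the four parameters would be obtained by handing this function to a numerical minimizer.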
\section{Spot size and variations of the number of observed photons} Photons from any given direction form a quasi-Gaussian-shaped spot on the camera, which is affected by imperfections of the mirror surfaces. The spot size depends on the angular distance to the optical axis: the larger the angular distance to the optical axis, the larger the spot size and the more it deviates from a Gaussian shape due to the coma of the image. The spot size, as an important parameter that affects the Cherenkov images of air showers, is taken into account in the ray tracing procedure in both data analysis and detector simulation. \begin{center} \includegraphics[width=9cm]{eps/spot_xy.eps} \figcaption{\label{spot_xy} One example of light curves in X (left) and Y (right) directions when a star passes through a PMT's FOV.} \end{center} Bright stars can be considered as perfect point sources. The light from a bright star is collected by the mirrors and projected onto the camera, forming a light spot. The camera records the light spot with poor resolution due to the large pixel size. If a PMT is on the track of a bright star, the PMT gets brighter and brighter as the star approaches, and dimmer and dimmer as the star leaves. If the spot size is much smaller than the track length of the star in the FOV of the PMT, the light curve recorded by the PMT will be rectangular-shaped, with a width approximately equal to the track length; if the spot size is much larger, the light curve will also be rectangular-shaped, with a width equal to the spot size. In our case the spot size is similar to the pixel size, and the light curve becomes quasi-Gaussian-shaped. Fig.~\ref{spot_xy} shows one example. The spot size can be estimated by fitting the light curve accordingly. In order to get rid of the effects of nearby stars, only orphan stars, which have no nearby stars within $2^{\circ}$, are used. Fig.~\ref{spot_fit} shows the variation of the spot size with the angular distance to the mirror axis. 
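A minimal sketch of the width estimate on a synthetic quasi-Gaussian light curve: the text fits the light curve, while here the weighted second moment is used, which recovers the same $\sigma$ for a clean Gaussian; all numbers are toy values:

```python
import numpy as np

def light_curve_sigma(t, amp):
    """Weighted RMS width of a background-subtracted light curve."""
    w = amp / amp.sum()
    mean = np.sum(w * t)
    return np.sqrt(np.sum(w * (t - mean) ** 2))

# Synthetic quasi-Gaussian star light curve, 1 s bins, toy width of 25 s.
t = np.linspace(-120.0, 120.0, 241)
sigma_true = 25.0
amp = np.exp(-0.5 * (t / sigma_true) ** 2)
```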
The spots become larger from the center of the camera to the edge. The spot size obtained from bright stars is used to improve the energy and arrival direction reconstruction. \begin{center} \includegraphics[width=8cm]{eps/spot-distance.eps} \figcaption{\label{spot_fit} The spot size varies with the angular distance to the mirror axis.} \end{center} Photons which fall in the insensitive areas of the camera are never recorded. In the data analysis and detector simulation, the unrecorded photons are taken into account as part of the ray tracing procedure. This is important for the shower energy estimation and is tested using the stars crossing the field of view of the telescopes. When a bright star with a constant flux passes through the FOV of the camera, the number of recorded photons varies due to the different positions of the star on the camera. When the star is near the center of a PMT, most photons fall in the sensitive areas of the camera, while when the star is near the junction of two PMTs, most photons fall in the insensitive areas of the camera; the variations of the recorded photons from the star along its track can therefore be observed, as shown in Fig.~\ref{lightlost}. The variations are also affected by the weather conditions, so data from several clear nights are used to obtain the average behavior. In Fig.~\ref{lightlost} the simulated variations along the track of the star are also shown and are consistent with the observed ones. This demonstrates that the ray tracing simulation correctly copes with the fraction of the unrecorded photons in the insensitive areas of the camera. \begin{center} \includegraphics[width=9cm]{eps/lightlost.eps} \figcaption{\label{lightlost} The black squares show the variations of the observed photons on the track of a star, while the black curve shows the simulated one. 
} \end{center} \section{Conclusions} Bright stars are used as guides to calibrate the pointing direction of each telescope and the geometry of the camera. The pointing accuracy obtained through bright stars is better than $0.05^{\circ}$. The long-term stability of the pointing direction of the telescope is also monitored by bright stars. Moreover, as point sources, the bright stars are also used to study the spot shape. The spot size becomes larger from the center to the edge of the camera. Besides the spot size, the fraction of the unrecorded photons in the insensitive areas between PMTs is compared with the simulated one, and they are consistent with each other. The errors caused by this effect are well understood and under control in the energy reconstruction. The pointing direction and the corrections of the geometry and optics of the telescopes are used in the simulation and data analysis to improve the reconstruction of the energy and arrival direction of the air shower. \acknowledgments{ This work is supported by 100 Talents Programme of The Chinese Academy of Sciences, Knowledge Innovation Program of The Chinese Academy of Sciences ( H85451D0U2 ), National Natural Science Foundation of China ( 10975145 ).} \vspace{-2mm} \centerline{\rule{80mm}{0.1pt}} \vspace{2mm}
\section{Introduction} We continue our recent work on additive problems with prime summands. In \cite{LanguascoZ2012a} we studied the \emph{average} number of representations of an integer as a sum of two primes, whereas in \cite{LanguascoZ2012b} we considered individual integers. In this paper we study a Ces\`aro weighted \emph{explicit} formula for Hardy-Littlewood numbers (integers that can be written as a sum of a prime and a square) and the goal is similar to the one in \cite{LanguascoZ2012f}, that is, we want to obtain an asymptotic formula with the expected main term and one or more terms that depend explicitly on the zeros of the Riemann zeta-function. Letting \begin{equation} \label{r2-def} r_{\textit{HL}}(n) = \sum_{m_1 + m_2^2 = n} \Lambda(m_1), \end{equation} the main result of the paper is the following theorem. \begin{Theorem} \label{CesaroHL-average} Let $N$ be a sufficiently large integer. We have \begin{align*} \sum_{n \le N} r_{\textit{HL}}(n) \frac{(1 - n/N)^k}{\Gamma(k + 1)} &= \frac{\pi^{1 / 2}}2 \frac{N^{3 / 2}}{\Gamma(k + 5 / 2)} - \frac 12 \frac{N}{\Gamma(k + 2)} - \frac{\pi^{1 / 2}}2 \sum_{\rho} \frac{\Gamma(\rho)}{\Gamma(k + 3 / 2 + \rho)} N^{1 / 2 + \rho} \\ &+ \frac 12 \sum_{\rho} \frac{\Gamma(\rho)}{\Gamma(k + 1 + \rho)} N^{\rho} + \frac{N^{3 / 4 - k / 2}}{\pi^{k + 1}} \sum_{\ell \ge 1} \frac{J_{k + 3 / 2} (2 \pi \ell N^{1 / 2})}{\ell^{k + 3 / 2}} \\ &- \frac{N^{1 / 4 - k / 2}}{\pi^k} \sum_{\rho} \Gamma(\rho) \frac{N^{\rho / 2}}{\pi^\rho} \sum_{\ell \ge 1} \frac{J_{k + 1 / 2 + \rho} (2 \pi \ell N^{1 / 2})} {\ell^{k + 1 / 2 + \rho}} + \Odip{k}{1}. \end{align*} for $k > 1$, where $\rho$ runs over the non-trivial zeros of the Riemann zeta-function $\zeta(s)$ and $J_{\nu} (u)$ denotes the Bessel function of complex order $\nu$ and real argument $u$. 
\end{Theorem} Similar averages of arithmetical functions are common in the literature, see, e.g., Chan\-dra\-sekharan-Narasimhan \cite{ChandrasekharanN1961} and Berndt \cite{Berndt1975} who built on earlier classical works (Hardy, Landau, Walfisz and others). In their setting the generalized Dirichlet series associated to the arithmetical function satisfies a suitable functional equation and this leads to an asymptotic formula containing Bessel functions of real order and argument. In our case we have no functional equation, and, as far as we know, it is the first time that Bessel functions with complex order arise in a similar problem. Moreover, from a technical point of view, the estimates of such Bessel functions are harder to perform than the ones already present in the Number Theory literature since the real argument and the complex order are both unbounded while, in previous papers, either the real order or the argument is bounded. The method we will use in this additive problem is based on a formula due to Laplace \cite{Laplace1812}, namely \begin{equation} \label{Laplace-transf} \frac 1{2 \pi i} \int_{(a)} v^{-s} e^v \, \dx v = \frac1{\Gamma(s)}, \end{equation} where $\Re(s) > 0$ and $a > 0$, see, e.g., formula 5.4(1) on page 238 of \cite{ErdelyiMOT1954a}. 
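Formula \eqref{Laplace-transf} is easy to verify numerically by parametrizing the vertical line of integration as $v = a + iu$; the following sketch (a sanity check, not part of the argument) truncates the line at $\vert u \vert = 2000$, which is enough for a few digits at the sample values $s = 5/2$, $a = 1$:

```python
import math
import numpy as np

# Numerical check of (2*pi*i)^{-1} * integral over Re(v) = a of v^{-s} e^v dv
# = 1/Gamma(s), at the sample values s = 5/2, a = 1.  With v = a + i*u the
# factor dv = i du cancels the 1/i, and the integrand decays like |u|^{-5/2},
# so a truncated Riemann sum along the line already gives a few digits.
s, a = 2.5, 1.0
u = np.linspace(-2000.0, 2000.0, 400001)
h = u[1] - u[0]
v = a + 1j * u
lhs = (np.exp(v) * v ** (-s)).sum().real * h / (2.0 * math.pi)
rhs = 1.0 / math.gamma(s)
```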
In the following we will need the general case of \eqref{Laplace-transf} which can be found in de Azevedo Pribitkin \cite{Azevedo2002}, formulae (8) and (9): \begin{equation} \label{Laplace-eq-1} \frac1{2 \pi} \int_{\mathbb{R}} \frac{e^{i D u}}{(a + i u)^s} \, \dx u = \begin{cases} \dfrac{D^{s - 1} e^{- a D}}{\Gamma(s)} & \text{if $D > 0$,} \\ 0 & \text{if $D < 0$,} \end{cases} \end{equation} which is valid for $\sigma = \Re(s) > 0$ and $a \in \mathbb{C}$ with $\Re(a) > 0$, and \begin{equation} \label{Laplace-eq-2} \frac1{2 \pi} \int_{\mathbb{R}} \frac 1{(a + i u)^s} \, \dx u = \begin{cases} 0 & \text{if $\Re(s) > 1$,} \\ 1 / 2 & \text{if $s = 1$,} \end{cases} \end{equation} for $a \in \mathbb{C}$ with $\Re(a) > 0$. Formulae \eqref{Laplace-eq-1}-\eqref{Laplace-eq-2} enable us to write averages of arithmetical functions by means of line integrals as we will see in \S\ref{settings} below. We will also need Bessel functions of complex order $\nu$ and real argument $u$. For their definition and main properties we refer to Watson \cite{Watson1966}. In particular, equation (8) on page 177 gives the Sonine representation: \begin{equation} \label{Bessel-def} J_\nu(u) := \frac{(u / 2)^\nu}{2 \pi i} \int_{(a)} s^{- \nu - 1} e^s e^{- u^2 / 4 s} \, \dx s, \end{equation} where $a > 0$ and $u,\nu \in \mathbb{C}$ with $\Re(\nu) > -1$. We will also use a Poisson integral formula (see eq.~(3) on page 48 of \cite{Watson1966}), i.e., \begin{equation} \label{Poisson-int-rep} J_\nu(u) := \frac{2(u/2)^{\nu}}{\pi^{1/2}\Gamma(\nu+1/2)} \int_{0}^{1} (1-t^2)^{\nu-1/2} \cos (ut)\ \dx t \end{equation} which holds for $\Re (\nu) > -1/2$ and $u\in \mathbb{C}$. An asymptotic estimate we will need is \begin{equation} \label{Lebedev-asymp} J_\nu(u) = \Bigl(\frac{2}{\pi u}\Bigr)^{1/2} \cos \Bigl(u -\frac{\pi \nu}{2} -\frac{\pi}{4}\Bigr) + \Odip{\vert \nu \vert}{u^{-5/2}} \end{equation} which follows from eq.~(1) on page 199 of Watson \cite{Watson1966}. 
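The Poisson representation \eqref{Poisson-int-rep} also gives a quick numerical handle on the leading term of \eqref{Lebedev-asymp}; the sketch below compares the two at the arbitrary sample values $\nu = 3/2$ and $u = 50$:

```python
import math
import numpy as np

def j_poisson(nu, u, n=100001):
    """J_nu(u) from the Poisson integral, valid for Re(nu) > -1/2."""
    t = np.linspace(0.0, 1.0, n)
    f = (1.0 - t ** 2) ** (nu - 0.5) * np.cos(u * t)
    h = t[1] - t[0]
    integral = h * (f.sum() - 0.5 * (f[0] + f[-1]))    # trapezoidal rule
    return (2.0 * (u / 2.0) ** nu
            / (math.sqrt(math.pi) * math.gamma(nu + 0.5)) * integral)

def j_asymptotic(nu, u):
    """Leading term of the large-u asymptotic estimate."""
    return math.sqrt(2.0 / (math.pi * u)) * math.cos(
        u - math.pi * nu / 2.0 - math.pi / 4.0)
```

At these sample values the two expressions agree to well within $10^{-2}$, consistent with the error term being small compared to the leading oscillation.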
As in \cite{LanguascoZ2012f}, we combine this approach with line integrals with the classical methods dealing with infinite sums over primes, exploited by Hardy \& Littlewood (see \cite{HardyL1916} and \cite{HardyL1923}) and by Linnik \cite{Linnik1946}. The main difference here is that the problem naturally involves the modular relation for the complex theta function, see eq.~\eqref{func-eq-theta}; the presence of the Bessel functions in our statement strictly depends on such modularity relation. It is worth mentioning that it is not clear how to get such ``modular'' terms using the finite sums approach for the Hardy-Littlewood function $r_{\textit{HL}}(n)$. We thank A.~Perelli and J.~Pintz for several conversations on this topic. \section{Settings} \label{settings} We need $k> 0$ in this section. Let $z = a + i y$ with $a > 0$, \begin{equation} \label{Stilde-omega-def} \widetilde{S}(z) = \sum_{m \ge 1} \Lambda(m) e^{- m z} \quad \text{and} \quad \omega_2(z) = \sum_{m \ge 1} e^{-m^2 z}. \end{equation} Letting further $\theta(z) = \sum_{m = -\infty}^{+\infty} e^{-m^2 z}$, we notice that $\theta(z) = 1 + 2 \omega_2(z)$ and, recalling the functional equation for $\theta$ (see, e.g., Proposition VI.4.3 of Freitag-Busam \cite[page 340]{FreitagB2009}): \begin{equation} \label{func-eq-theta} \theta(z) = \Bigl( \frac \pi z \Bigr)^{1/2} \theta\Bigl( \frac{\pi^2} z \Bigr), \end{equation} we immediately get \begin{equation} \label{func-eq-omega} \omega_2(z) = \frac 12 \Bigl( \frac \pi z \Bigr)^{1 / 2} - \frac12 + \Bigl( \frac \pi z \Bigr)^{1 / 2} \omega_2 \Bigl( \frac {\pi^2} z \Bigr). 
\end{equation} Recalling \eqref{r2-def}, we can write \[ \widetilde{S}(z) \omega_2(z) = \sum_{m_1 \ge 1} \sum_{m_2 \ge 1} \Lambda(m_1) e^{-(m_1 + m_2^2) z} = \sum_{n \ge 1} r_{\textit{HL}}(n) e^{- n z} \] and, by \eqref{Laplace-eq-1}-\eqref{Laplace-eq-2}, we see that \begin{equation} \label{first-step} \sum_{n \le N} r_{\textit{HL}}(n) \frac{(N - n)^k}{\Gamma(k + 1)} = \sum_{n \ge 1} r_{\textit{HL}}(n) \Bigl( \frac{1}{2 \pi i} \int_{(a)} e^{(N- n)z} z^{- k - 1} \, \dx z \Bigr). \end{equation} Our first goal is to exchange the series with the line integral in \eqref{first-step}. To do so we have to recall that the Prime Number Theorem (PNT) is equivalent, via Lemma~\ref{Linnik-lemma2} below, to the statement \begin{equation*} \widetilde{S}(a) \sim a^{-1} \qquad\text{for $a \to 0+$,} \end{equation*} which is classical: for the proof see for instance Lemma~9 in Hardy \& Littlewood \cite{HardyL1923}. We will also use the inequality \begin{equation} \label{omega-estim} \vert \omega_2(z)\vert \le \omega_2(a) \le \int_{0}^{\infty} e^{-at^{2}} \dx t \le a^{- 1 / 2} \int_{0}^{\infty} e^{-v^{2}} \dx v \ll a^{- 1 / 2} \end{equation} from which we immediately get \begin{align*} \sum_{n \ge 1} \bigl\vert r_{\textit{HL}}(n) e^{- n z} \bigr\vert &= \sum_{n \ge 2} r_{\textit{HL}}(n) e^{- n a} = \widetilde{S}(a) \omega_2(a) \ll a^{- 3 / 2}. 
\end{align*} Taking into account the estimates \begin{equation} \label{z^-1} \vert z \vert^{-1} \asymp \begin{cases} a^{-1} &\text{if $\vert y \vert \le a$,} \\ \vert y \vert^{-1} &\text{if $\vert y \vert \ge a$,} \end{cases} \end{equation} where $f\asymp g$ means $g \ll f \ll g$, and \[ \vert e^{N z} z^{- k - 1}\vert \asymp e^{N a} \begin{cases} a^{- k - 1} &\text{if $\vert y \vert \le a$,} \\ \vert y \vert^{- k - 1} &\text{if $\vert y \vert \ge a$,} \end{cases} \] we have \begin{align*} \int_{(a)} \vert e^{N z} z^{- k - 1}\vert \, \vert \widetilde{S}(z) \omega_2(z) \vert \, \vert \dx z \vert &\ll a^{- 3 / 2} e^{N a} \Bigl( \int_{-a}^a a^{- k - 1} \, \dx y + 2 \int_a^{+\infty} y^{- k - 1} \, \dx y \Bigr) \\ &\ll a^{- 3 / 2} e^{N a} \Bigl( a^{-k} + \frac{a^{-k}}k \Bigr), \end{align*} but the last estimate is valid only if $k > 0$. So, for $k > 0$, we can exchange the line integral with the sum over $n$ in \eqref{first-step} thus getting \begin{equation} \label{main-form-omega} \sum_{n \le N} r_{\textit{HL}}(n) \frac{(N - n)^k}{\Gamma(k + 1)} = \frac 1{2 \pi i} \int_{(a)} e^{N z} z^{- k - 1} \widetilde{S}(z) \omega_2(z) \, \dx z. \end{equation} This is the fundamental relation for the method. \section{Inserting zeros and modularity} We need $k> 1/2$ in this section. The treatment of the integral at the right hand side of \eqref{main-form-omega} requires Lemma \ref{Linnik-lemma2}. Letting $E(a,y)$ be the error term in \eqref{expl-form-err-term-strong}, formula \eqref{main-form-omega} becomes \begin{align*} \sum_{n \le N} r_{\textit{HL}}(n) \frac{(N - n)^k}{\Gamma(k + 1)} &= \frac 1{2 \pi i} \int_{(a)} \Bigl( \frac 1z - \sum_{\rho} z^{-\rho} \Gamma(\rho) \Bigr) \omega_2(z) e^{N z} z^{- k - 1} \, \dx z \\ &\qquad+ \Odi{\int_{(a)} \vert E(a,y)\vert \, \vert e^{N z}\vert \, \vert z\vert ^{- k - 1} \vert \omega_2(z)\vert \, \vert \dx z\vert}. 
\end{align*} Using \eqref{omega-estim}-\eqref{z^-1} and \eqref{expl-form-err-term-strong} we see that the error term is \begin{align*} &\ll a^{- 1 / 2} e^{Na} \Bigl( \int_{-a}^{a} a^{{-k-1/2}} \dx y + \int_{a}^{+\infty} y^{{-k-1/2}} \log^2(y/a)\, \dx y \Bigr) \\ &\ll_k e^{N a} a^{-k} \Bigl( 1+ \int_{1}^{+\infty}v^{{-k-1/2}} \log^2 v \, \dx v \Bigr) \ll_k e^{N a} a^{-k}, \end{align*} provided that $k>1/2$. Choosing $a = 1 / N$, the previous estimate becomes \( \ll_k N^{k}. \) Summing up, for $k> 1/2$, we can write \begin{equation} \label{first-step-hl} \sum_{n \le N} r_{\textit{HL}}(n) \frac{(N - n)^k}{\Gamma(k + 1)} = \frac 1{2 \pi i} \int_{(1 / N)} \Bigl(\frac 1z - \sum_{\rho} z^{-\rho} \Gamma(\rho) \Bigr) \omega_2(z) e^{N z} z^{- k - 1} \, \dx z + \Odip{k}{N^{k}}. \end{equation} We now insert \eqref{func-eq-omega} into \eqref{first-step-hl}, so that the integral on the right-hand side of \eqref{first-step-hl} becomes \begin{align} \notag & \frac 1{2 \pi i} \int_{(1 / N)} \Bigl( \frac 1z - \sum_{\rho} z^{-\rho} \Gamma(\rho) \Bigr) \Bigl( \frac 12 \Bigl( \frac \pi z \Bigr)^{1/2} - \frac12 \Bigr) e^{N z} z^{- k - 1} \, \dx z \\ \notag &\qquad+ \frac 1{2 \pi i} \int_{(1 / N)} \Bigl( \frac \pi z \Bigr)^{1/2} \Bigl( \frac 1z - \sum_{\rho} z^{-\rho} \Gamma(\rho) \Bigr) \omega_2 \Bigl( \frac{\pi^2}z \Bigr) e^{N z} z^{- k - 1} \, \dx z \\ \label{hl-splitting} &= \I_1 + \I_2, \end{align} say. We now proceed to evaluate $\I_1$ and $\I_2$. \section{Evaluation of $\I_1$} We need $k> 1/2$ in this section. By a direct computation we can write that \begin{align*} \I_1 &= \frac 1{4 \pi i} \int_{(1 / N)} \Bigl( \frac{\pi^{1/2}}{z^{1/2}} - 1 \Bigr) e^{N z} z^{- k - 2} \, \dx z - \frac {\pi^{1/2}}{4 \pi i} \int_{(1 / N)} \sum_{\rho} \Gamma(\rho) e^{N z} z^{- k -\rho - 3/2} \, \dx z \\ &\qquad+ \frac 1{4 \pi i} \int_{(1 / N)} \sum_{\rho} \Gamma(\rho) e^{N z} z^{- k - \rho - 1} \, \dx z = \J_1 + \J_2 + \J_3, \end{align*} say. We see now how to evaluate $\J_1$, $\J_2$ and $\J_3$. 
\subsection{Evaluation of $\J_1$} Using the substitution $s=Nz$, by \eqref{Laplace-transf} we immediately have \begin{align} \notag \J_1 &= \frac{\pi^{1/2}}2 N^{k + 3/2} \frac 1{2 \pi i} \int_{(1)} e^s s^{- k - 5 / 2} \, \dx s - \frac 12 N^{k + 1} \frac 1{2 \pi i} \int_{(1)} e^s s^{- k - 2} \, \dx s \\ \label{J1-eval} &= \frac{\pi^{1/2}}2 \frac{N^{k + 3/2}}{\Gamma(k + 5/2)} - \frac 12 \frac{N^{k + 1}}{\Gamma(k + 2)}. \end{align} \subsection{Evaluation of $\J_2$} Exchanging the sum over $\rho$ with the integral (this can be done for $k>0$, see \S\ref{exchange-rho-integral}) and using the substitution $s=Nz$, we have \begin{align} \notag \J_2 &= - \frac{\pi^{1/2}}2 \sum_{\rho} \Gamma(\rho) \frac 1{2 \pi i} \int_{(1 / N)} e^{N z} z^{-k -\rho - 3/2} \, \dx z \\ \notag &= - \frac{\pi^{1/2}}2 \sum_{\rho} \Gamma(\rho) N^{k + \rho + 1/2} \frac 1{2 \pi i} \int_{(1)} e^s s^{-k -\rho - 3/2} \, \dx s \\ \label{J2-eval} &= - \frac{\pi^{1/2}}2 \sum_{\rho} \frac{\Gamma(\rho)}{\Gamma(k + 3 / 2 + \rho)} N^{k + 1 / 2 + \rho}, \end{align} again by \eqref{Laplace-transf}. By the Stirling formula \eqref{Stirling}, we remark that the series in $\J_2$ converges absolutely for $k>-1/2$. \subsection{Evaluation of $\J_3$} Arguing as in \S\ref{exchange-rho-integral} with $-k-1$ which plays the role of $-k-3/2$ there, we see that we can exchange the sum with the integral provided that $k>1/2$. Hence, performing again the usual substitution $s=Nz$, we can write \begin{equation} \label{J3-eval} \J_3 = \frac 12 \sum_{\rho} \Gamma(\rho) N^{k + \rho} \frac 1{2 \pi i} \int_{(1)} e^s s^{- k - 1 - \rho} \, \dx s \\ = \frac 12 \sum_{\rho} \frac{\Gamma(\rho)}{\Gamma(k + 1 + \rho)} N^{k + \rho}. \end{equation} By the Stirling formula \eqref{Stirling}, we remark that the series in $\J_3$ converges absolutely for $k>0$. 
\medskip Summing up, by \eqref{J1-eval}-\eqref{J3-eval} and for $k> 1/2$ we get \begin{align} \notag \I_1 &= \frac{\pi^{1 / 2}}2 \frac{N^{k + 3 / 2}}{\Gamma(k + 5 / 2)} - \frac 12 \frac{N^{k + 1}}{\Gamma(k + 2)} - \frac{\pi^{1 / 2}}2 \sum_{\rho} \frac{\Gamma(\rho)}{\Gamma(k + 3 / 2 + \rho)} N^{k + 1 / 2 + \rho} \\ &\qquad+ \label{final-eval-I1} \frac 12 \sum_{\rho} \frac{\Gamma(\rho)}{\Gamma(k + 1 + \rho)} N^{k + \rho}. \end{align} \section{Evaluation of $\I_2$ and conclusion of the proof of Theorem \ref{CesaroHL-average}} We need $k> 1$ in this section. Using the definition of $\omega_2 (\pi^2/ z )$, see \eqref{Stilde-omega-def}, we have \begin{align} \notag \I_2 &= \frac 1{2 \pi i} \int_{(1 / N)} \Bigl( \frac \pi z \Bigr)^{1/2} \Bigl( \sum_{\ell \ge 1} e^{- \ell^2 \pi^2 / z} \Bigr) e^{N z} z^{- k - 2} \, \dx z \\ & \label{I2-splitting} \qquad - \frac 1{2 \pi i} \int_{(1 / N)} \Bigl( \frac \pi z \Bigr)^{1/2} \Bigl( \sum_{\ell \ge 1} e^{- \ell^2 \pi^2 / z} \Bigr) \Bigl( \sum_{\rho} z^{-\rho} \Gamma(\rho) \Bigr) e^{N z} z^{- k - 1} \, \dx z = \J_4 + \J_5, \end{align} say. We see now how to evaluate $\J_4$ and $\J_5$. \subsection{Evaluation of $\J_4$} By means of the substitution $s = N z$, since the exchange is justified in \S\ref{exchange-ell-integral} for $k> -1/2$, we get \begin{equation*} \J_4 = \pi^{1 / 2} N^{k + 3 / 2} \sum_{\ell \ge 1} \frac 1{2 \pi i} \int_{(1)} e^s e^{- \ell^2 \pi^2 N / s} s^{- k - 5 / 2} \, \dx s. \end{equation*} Setting $u = 2 \pi \ell N^{1/2}$ in \eqref{Bessel-def} we obtain \begin{equation} \label{J-nu} J_\nu \bigl( 2 \pi \ell N^{1/2} \bigr) = \frac{(\pi \ell N^{1/2})^\nu}{2 \pi i} \int_{(a)} e^s e^{- \pi^2 \ell^2 N / s} s^{- \nu -1}\, \dx s, \end{equation} and hence we have \begin{equation} \label{J4-eval} \J_4 = \frac{N^{k / 2 + 3 / 4}}{\pi^{k + 1}} \sum_{\ell \ge 1} \frac{J_{k + 3 / 2} (2 \pi \ell N^{1 / 2})}{\ell^{k + 3 / 2}}. \end{equation} The absolute convergence of the series in $ \J_4$ is studied in \S\ref{sums-abs-conv}. 
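When $k$ is an integer, the order $k + 3/2$ of the Bessel functions in \eqref{J4-eval} is half an odd integer, so $J_{k+3/2}$ reduces to elementary functions; for instance $J_{3/2}(u) = \sqrt{2/(\pi u)}\,\bigl(\sin u / u - \cos u\bigr)$. As a quick numerical aside (not part of the argument), the defining power series of $J_\nu$ can be checked against this closed form:

```python
import math

def bessel_j_series(nu, u, terms=60):
    """Defining power series:
    J_nu(u) = sum_m (-1)^m (u/2)^(nu+2m) / (m! * Gamma(nu+m+1))."""
    total = 0.0
    for m in range(terms):
        total += (-1) ** m * (u / 2) ** (nu + 2 * m) / (
            math.factorial(m) * math.gamma(nu + m + 1))
    return total

def bessel_j_3_2_closed(u):
    """Elementary closed form for the half-odd order 3/2."""
    return math.sqrt(2 / (math.pi * u)) * (math.sin(u) / u - math.cos(u))

for u in (1.0, 5.0, 12.0):
    assert abs(bessel_j_series(1.5, u) - bessel_j_3_2_closed(u)) < 1e-10
```

Agreement to ten decimal places at several values of $u$ gives a convenient sanity check on the closed form.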
\subsection{Evaluation of $\J_5$} With the same substitution used before, since the double exchange between sums and the line integral is justified in \S\ref{exchange-double-sum-ell-rho} for $k>1$, we see that \begin{equation*} \J_5 := - \pi^{1 / 2} \sum_{\rho} \Gamma(\rho) N^{k + 1 / 2 + \rho} \sum_{\ell \ge 1} \Bigl( \frac 1{2 \pi i} \int_{(1)} e^s e^{- \ell^2 \pi^2 N / s} s^{- k - 3 / 2 - \rho} \, \dx s \Bigr). \end{equation*} Using \eqref{J-nu} we get \begin{equation} \J_5 = \label{J5-eval} - \frac{N^{k / 2 + 1 / 4}}{\pi^k} \sum_{\rho} \Gamma(\rho) \frac{N^{\rho / 2}}{\pi^\rho} \sum_{\ell \ge 1} \frac{J_{k + 1 / 2 + \rho} (2 \pi \ell N^{1 / 2})} {\ell^{k + 1 / 2 + \rho}}. \end{equation} In this case the absolute convergence of the series in $\J_{5}$ is more delicate; such a treatment is again described in \S\ref{sums-abs-conv}. \medskip Substituting \eqref{J4-eval}-\eqref{J5-eval} in \eqref{I2-splitting} we have \begin{equation} \I_2 = \frac{N^{k / 2 + 3 / 4}}{\pi^{k + 1}} \sum_{\ell \ge 1} \frac{J_{k + 3 / 2} (2 \pi \ell N^{1 / 2})}{\ell^{k + 3 / 2}} - \label{I2-eval} \frac{N^{k / 2 + 1 / 4}}{\pi^k} \sum_{\rho} \frac{\Gamma(\rho)N^{\rho / 2}}{\pi^\rho} \sum_{\ell \ge 1} \frac{J_{k + 1 / 2 + \rho} (2 \pi \ell N^{1 / 2})} {\ell^{k + 1 / 2 + \rho}}. 
\end{equation} Finally, inserting \eqref{final-eval-I1} and \eqref{I2-eval} into \eqref{hl-splitting} and \eqref{first-step-hl} we finally obtain \begin{align} \notag \sum_{n \le N} r_{\textit{HL}}(n) \frac{(N - n)^k}{\Gamma(k + 1)} &= \frac{\pi^{1 / 2}}2 \frac{N^{k + 3 / 2}}{\Gamma(k + 5 / 2)} - \frac 12 \frac{N^{k + 1}}{\Gamma(k + 2)} - \frac{\pi^{1 / 2}}2 \sum_{\rho} \frac{\Gamma(\rho)}{\Gamma(k + 3 / 2 + \rho)} N^{k + 1 / 2 + \rho} \\ \notag &+ \frac 12 \sum_{\rho} \frac{\Gamma(\rho)}{\Gamma(k + 1 + \rho)} N^{k + \rho} + \frac{N^{k / 2 + 3 / 4}}{\pi^{k + 1}} \sum_{\ell \ge 1} \frac{J_{k + 3 / 2} (2 \pi \ell N^{1 / 2})}{\ell^{k + 3 / 2}} \\ \label{expl-form-HL-bis} &- \frac{N^{k / 2 + 1 / 4}}{\pi^k} \sum_{\rho} \Gamma(\rho) \frac{N^{\rho / 2}}{\pi^\rho} \sum_{\ell \ge 1} \frac{J_{k + 1 / 2 + \rho} (2 \pi \ell N^{1 / 2})} {\ell^{k + 1 / 2 + \rho}} + \Odip{k}{N^{k}}, \end{align} for $k > 1$. Theorem \ref{CesaroHL-average} follows dividing \eqref{expl-form-HL-bis} by $N^{k}$. \section{Lemmas} We recall some basic facts in complex analysis. First, if $z = a + i y$ with $a > 0$, we see that for complex $w$ we have \begin{align*} z^{-w} &= \vert z \vert^{-w} \exp( - i w \arctan(y / a)) \\ &= \vert z \vert^{-\Re(w) - i \Im(w)} \exp( (- i \Re(w) + \Im(w)) \arctan(y / a)) \end{align*} so that \begin{equation} \label{z^w} \vert z^{-w} \vert = \vert z \vert^{-\Re(w)} \exp(\Im(w) \arctan(y / a)). \end{equation} We also recall that, uniformly for $x \in [x_1, x_2]$, with $x_1$ and $x_2$ fixed, and for $|y| \to +\infty$, by the Stirling formula we have \begin{equation} \label{Stirling} \vert \Gamma(x + i y) \vert \sim \sqrt{2 \pi} e^{- \pi |y| / 2} |y|^{x - 1 / 2}, \end{equation} see, e.g., Titchmarsh \cite[\S4.42]{Titchmarsh1988}. We will need the following lemmas from Languasco-Zaccagnini \cite{LanguascoZ2012f}. \begin{Lemma}[See Lemma 1 of \cite{LanguascoZ2012f}] \label{Linnik-lemma2} Let $z = a + iy$, where $a > 0$ and $y \in \mathbb{R}$. 
Then \begin{equation*} \widetilde{S}(z) = \frac{1}{z} - \sum_{\rho}z^{-\rho} \Gamma(\rho) + E(a,y) \end{equation*} where $\rho = \beta + i\gamma$ runs over the non-trivial zeros of $\zeta(s)$ and \begin{equation} \label{expl-form-err-term-strong} E(a,y) \ll \vert z \vert^{1/2} \begin{cases} 1 & \text{if $\vert y \vert \leq a$} \\ 1 +\log^2 (\vert y\vert/a) & \text{if $\vert y \vert > a$.} \end{cases} \end{equation} \end{Lemma} \begin{Lemma}[See Lemma 2 of \cite{LanguascoZ2012f}] \label{series-int-zeros} Let $\rho=\beta + i \gamma$ run over the non-trivial zeros of the Riemann zeta-function and $\alpha > 1$ be a parameter. The series \[ \sum_{\rho \colon \gamma > 0} \gamma^{\beta-1/2} \int_1^{+\infty} \exp\Bigl( - \gamma \arctan\frac 1u \Bigr) \frac{\dx u}{u^{\alpha+\beta}} \] converges provided that $\alpha > 3/2$. For $\alpha \le 3/2$ the series does not converge. The result remains true if we insert in the integral a factor $(\log u)^c$, for any fixed $c \ge 0$. \end{Lemma} \begin{Lemma}[See Lemma 3 of \cite{LanguascoZ2012f}] \label{series-int-zeros-alt-sign} Let $\alpha > 1$, $z=a+iy$, $a\in(0,1)$ and $y\in \mathbb{R}$. Let further $\rho=\beta+i\gamma$ run over the non-trivial zeros of the Riemann zeta-function. We have \[ \sum_{\rho} \vert \gamma\vert ^{\beta-1/2} \int_{\mathbb{Y}_1 \cup \mathbb{Y}_2} \exp\Bigl(\gamma \arctan\frac{y}{a} - \frac\pi2 \vert \gamma \vert\Bigr) \frac{\dx y}{\vert z \vert ^{\alpha+\beta}} \ll_{\alpha} a^{-\alpha}, \] where $\mathbb{Y}_1=\{y\in \mathbb{R}\colon y\gamma \leq 0\}$ and $\mathbb{Y}_2=\{y\in [-a,a] \colon y\gamma > 0\}$. The result remains true if we insert in the integral a factor $(\log (\vert y\vert /a))^c$, for any fixed $c \ge 0$. \end{Lemma} \section{Interchange of the series over zeros with the line integral} \label{exchange-rho-integral} We need $k>0$ in this section. 
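Throughout the following estimates the Stirling bound \eqref{Stirling} is applied to $\Gamma(\rho)$ on vertical strips. On the line $x = 1/2$ it can be spot-checked against the exact reflection-formula identity $\vert \Gamma(1/2 + iy) \vert^2 = \pi / \cosh(\pi y)$; since $x - 1/2 = 0$, the asymptotic there reduces to $\sqrt{2\pi}\, e^{-\pi \vert y \vert / 2}$. A short numerical aside (not needed for the proofs):

```python
import math

def gamma_abs_critical(y):
    """Exact |Gamma(1/2 + iy)| from the reflection formula:
    |Gamma(1/2 + iy)|^2 = pi / cosh(pi * y)."""
    return math.sqrt(math.pi / math.cosh(math.pi * y))

def stirling_approx(y):
    """Right-hand side of the Stirling asymptotic with x = 1/2, where the
    factor |y|^(x - 1/2) disappears: sqrt(2*pi) * exp(-pi*|y|/2)."""
    return math.sqrt(2 * math.pi) * math.exp(-math.pi * abs(y) / 2)

for y in (3.0, 6.0, 10.0):
    ratio = gamma_abs_critical(y) / stirling_approx(y)
    assert abs(ratio - 1) < 1e-6   # already very accurate for moderate |y|
```

The ratio equals $(1 + e^{-2\pi y})^{-1/2}$ exactly, so the relative error decays exponentially in $y$.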
We have to establish the convergence of \begin{equation} \label{conv-integral-J2-J3} \sum_{\rho} \vert \Gamma(\rho)\vert \int_{(1/N)} \vert e^{N z}\vert \vert z\vert ^{- k - 3/2} \vert z^{- \rho}\vert \, \vert \dx z\vert, \end{equation} where, as usual, $\rho=\beta + i \gamma$ runs over the non-trivial zeros of the Riemann zeta-function. By \eqref{z^w} and the Stirling formula \eqref{Stirling}, we are left with estimating \begin{equation} \label{conv-integral-J2-J3-1} \sum_{\rho} \vert \gamma\vert ^{\beta - 1/2} \int_{\mathbb{R}} \exp\Bigl( \gamma \arctan(N y) -\frac \pi2 \vert \gamma\vert \Bigr) \frac{\dx y}{\vert z \vert ^{k + 3/2 +\beta}}. \end{equation} We have just to consider the case $\gamma y >0$, $\vert y \vert > 1/N$ since in the other cases the total contribution is $\ll_k N^{k + 1 }$ by Lemma \ref{series-int-zeros-alt-sign} with $\alpha=k+3/2$ and $a=1/N$. By symmetry, we may assume that $\gamma > 0$. We have that the integral in \eqref{conv-integral-J2-J3-1} is \begin{align*} & \ll \sum_{\rho \colon \gamma > 0} \gamma ^{\beta - 1/2} \int_{1 / N}^{+\infty} \exp\Bigl( - \gamma \arctan\frac 1{N y} \Bigr) \frac{\dx y}{y^{k + 3/2 +\beta}} \\ &= N^{k} \sum_{\rho \colon \gamma > 0} N^{\beta} \gamma ^{\beta - 1/2} \int_1^{+\infty} \exp\Bigl( - \gamma \arctan\frac 1u \Bigr) \frac{\dx u}{u^{k + 3/2 +\beta}}. \end{align*} For $k > 0 $ this is $\ll_k N^{k + 1}$ by Lemma~\ref{series-int-zeros}. This implies that the integrals in \eqref{conv-integral-J2-J3-1} and in \eqref{conv-integral-J2-J3} are both $\ll_k N^{k + 1}$ and hence this exchange step is fully justified. \section{Interchange of the series over $\ell$ with the line integral} \label{exchange-ell-integral} We need $k> - 1/2$ in this section. We have to establish the convergence of \begin{equation} \label{conv-integral-J4} \sum_{\ell \ge 1} \int_{(1/N)} \vert e^{N z}\vert \vert z\vert ^{- k - 5/2} e^{-\pi^2 \ell^2 \Re(1/z)} \, \vert \dx z\vert . 
\end{equation} A trivial computation gives \begin{equation} \label{real-part-estim} \Re(1/z) = \frac{N}{1+N^2y^2} \gg \begin{cases} N & \text{if}\ \vert y \vert \leq 1/N \\ 1/(Ny^2) & \text{if}\ \vert y \vert > 1/N. \end{cases} \end{equation} By \eqref{real-part-estim}, we can write that the quantity in \eqref{conv-integral-J4} is \begin{equation} \label{J4-split} \ll \sum_{\ell \ge 1} \int_{0}^{1/N} \frac{e^{- \ell^2 N}}{\vert z\vert ^{k + 5/2}} \, \dx y + \sum_{\ell \ge 1} \int_{1/N}^{+\infty} \frac{e^{- \ell^2 / (Ny^2)}}{\vert z\vert ^{k + 5/2}} \, \dx y = U_1+U_2, \end{equation} say, since the $\pi^2$ factor in the exponential function is negligible. Using \eqref{omega-estim}-\eqref{z^-1}, we have \begin{equation} \label{U1-estim} U_1 \ll N^{k+3/2} \omega_{2} (N) \ll N^{k+1} \end{equation} and \begin{align} \notag U_2& \ll \sum_{\ell \ge 1} \int_{1/N}^{+\infty} \frac{e^{- \ell^2 / (Ny^2)}}{y^{k + 5/2}} \, \dx y \ll N^{k/2+3/4} \sum_{\ell \ge 1} \frac{1}{\ell^{k+3/2}} \int_{0}^{\ell^2 N} u^{ k/2 - 1/4} e^{- u} \, \dx u \\ \label{U2-estim} &\leq \Gamma \Bigl( \frac{2k+3}{4} \Bigr) N^{k/2+3/4} \sum_{\ell \ge 1} \frac{1}{\ell^{k+3/2}} \ll_{k} N^{k/2+3/4} \end{align} provided that $k>-1/2$, where we used the substitution $u=\ell^2 / (Ny^2)$. Inserting \eqref{U1-estim}-\eqref{U2-estim} into \eqref{J4-split} we get, for $k> - 1/2$, that the quantity in \eqref{conv-integral-J4} is $\ll N^{k+1}$. \section{Interchange of the double series over zeros with the line integral} \label{exchange-double-sum-ell-rho} We need $k>1$ in this section. We first have to establish the convergence of \begin{equation} \label{conv-integral-J5} \sum_{\ell \ge 1} \int_{(1/N)} \vert \sum_{\rho} \Gamma(\rho)z^{- \rho} \vert \vert e^{N z}\vert \vert z\vert ^{- k - 3/2} e^{-\pi^2 \ell^2 \Re(1/z)} \, \vert \dx z\vert . 
\end{equation} Using the PNT and \eqref{expl-form-err-term-strong}, we first remark that \begin{align} \notag \Bigl\vert \sum_{\rho} z^{-\rho} \Gamma(\rho) \Bigr\vert &= \Bigl\vert \widetilde{S}(z) - \frac 1z - E\Bigl(\frac{1}{N},y\Bigr) \Bigr\vert \ll N + \frac{1}{\vert z\vert} + \Bigl\vert E\Bigl(\frac{1}{N},y\Bigr) \Bigr\vert \\ \label{sum-over-rho-new} &\ll \begin{cases} N & \text{if $\vert y \vert \leq 1/N$,} \\ \vert z\vert ^{-1} + \vert z\vert ^{1/2} \log^2 (2N\vert y\vert) & \text{if $\vert y \vert > 1/N$.} \end{cases} \end{align} By \eqref{real-part-estim} and \eqref{sum-over-rho-new}, we can write that the quantity in \eqref{conv-integral-J5} is \begin{align} \notag &\ll N \sum_{\ell \ge 1} \int_{0}^{1/N} \frac{e^{- \ell^2 N}}{\vert z\vert ^{k + 3/2}} \, \dx y \\ \notag &\qquad+ \sum_{\ell \ge 1} \int_{1/N}^{+\infty} \frac{e^{- \ell^2 / (Ny^2)} }{\vert z\vert ^{k + 5/2}} \, \dx y + \sum_{\ell \ge 1} \int_{1/N}^{+\infty} \log^{2}(2Ny) \frac{e^{- \ell^2 / (Ny^2)}}{\vert z\vert ^{k + 1}} \, \dx y \\ \label{HL-split} &= V_1+V_2+V_{3}, \end{align} say. $V_{1}$ and $V_{2}$ can be estimated exactly as $U_{1}, U_{2}$ in \S\ref{exchange-ell-integral}; hence we have \begin{equation} \label{V1-V2-estim} V_{1} + V_{2} \ll_{k} N^{k+1} \end{equation} provided that $k> - 1/2$. Using the substitution $u=\ell^2 / (Ny^2)$, we obtain \[ V_3 \ll \sum_{\ell \ge 1} \int_{1/N}^{+\infty} \log^{2}(2Ny) \frac{e^{- \ell^2 / (Ny^2)}}{y^{k + 1}} \, \dx y = \frac{N^{k/2}}{8} \sum_{\ell \ge 1} \frac{1}{\ell^k} \int_{0}^{\ell^2 N} u^{ k/2 - 1} \log^{2}\Bigl( \frac{4\ell^{2}N}{u} \Bigr) e^{- u} \, \dx u.
\] Hence a direct computation shows that \begin{align} \notag V_3 &\ll N^{k/2} \sum_{\ell \ge 1} \frac{\log^{2}(\ell N)}{\ell^k} \int_{0}^{\ell^2 N} u^{ k/2 - 1} e^{- u} \, \dx u + N^{k/2} \sum_{\ell \ge 1} \frac{1}{\ell^k} \int_{0}^{\ell^2 N} u^{ k/2 - 1} \log^{2} (u)\, e^{- u} \, \dx u \\ \label{V3-estim} &\ll_{k} \Gamma(k/2) N^{k/2} \sum_{\ell \ge 1} \frac{\log^{2}(\ell N)}{\ell^k} + N^{k/2} \ll_{k} N^{k/2}\log^{2} N \end{align} provided that $k>1$. Inserting \eqref{V1-V2-estim}-\eqref{V3-estim} into \eqref{HL-split} we get, for $k>1$, that the quantity in \eqref{conv-integral-J5} is $\ll N^{k+1}$. Now we have to establish the convergence of \begin{equation} \label{conv-integral-5} \sum_{\ell \ge 1} \sum_{\rho}\vert \Gamma(\rho) \vert \int_{(1/N)} \vert e^{N z}\vert \vert z\vert ^{- k - 3/2} \vert z^{- \rho} \vert e^{-\pi^2 \ell^2 \Re(1/z)} \, \vert \dx z\vert . \end{equation} By symmetry, we may assume that $\gamma > 0$. For $y \in(-\infty, 0]$ we have $\gamma \arctan(y/a) -\frac \pi2 \gamma \le - \frac \pi2 \gamma$. 
Using \eqref{real-part-estim}, \eqref{z^-1} and the Stirling formula \eqref{Stirling}, the quantity we are estimating becomes \begin{align} \notag &\ll \sum_{\ell \ge 1} \sum_{\rho \colon \gamma>0} \gamma^{\beta-1/2} \exp\Bigl( -\frac \pi2 \gamma\Bigr) \Bigl( \int_{-1/N}^{0} N^{k + 3/2 + \beta}\ e^{- \ell^2 N} \, \dx y + \int_{-\infty}^{-1/N} \frac{e^{- \ell^2 / (Ny^2)}}{\vert y \vert^{k + 3/2 + \beta}} \, \dx y \Bigr) \\ \notag & \ll_{k} N^{k+3/2} \sum_{\ell \ge 1} e^{- \ell^2 N} \sum_{\rho \colon \gamma>0} \gamma^{\beta-1/2} \exp\Bigl( -\frac \pi2 \gamma\Bigr) \\ \notag &\hskip1cm + N^{k/2+1/4} \sum_{\ell \ge 1} \frac{1}{\ell^{k+1/2}} \sum_{\rho \colon \gamma>0} \frac{N^{\beta/2}}{\ell^{\beta}} \gamma^{\beta-1/2} \exp\Bigl( -\frac \pi2 \gamma\Bigr) \int_{0}^{\ell^2 N} u^{ k/2 - 3/4 + \beta/2} e^{- u} \, \dx u \\ \notag & \ll_{k} N^{k+1} + \Bigl( \max_{\beta} \Gamma\Bigl( \frac{\beta}{2} +\frac{k}2+ \frac14\Bigr) \Bigr) N^{k/2+3/4} \sum_{\ell \ge 1} \frac{1}{\ell^{k+1/2}} \sum_{\rho \colon \gamma>0} \gamma^{\beta-1/2} \exp\Bigl( -\frac \pi2 \gamma\Bigr) \\ \label{easy-case} & \ll_k N^{k+1} \end{align} provided that $k>1/2$, where we used the substitution $u = \ell^2 / (Ny^2)$, \eqref{omega-estim} and standard density estimates. Let now $y>0$. Using the Stirling formula \eqref{Stirling} and \eqref{real-part-estim} we can write that the quantity in \eqref{conv-integral-5} is \begin{align} \notag \ll \sum_{\ell \ge 1} \sum_{\rho \colon \gamma>0} & \gamma^{\beta-1/2} \exp\Bigl( -\frac \pi4 \gamma\Bigr) \int_{0}^{1/N} \frac{e^{- \ell^2 N}}{\vert z\vert ^{k + 3/2 + \beta}} \, \dx y \\ \notag &+ \sum_{\ell \ge 1} \sum_{\rho \colon \gamma>0} \gamma^{\beta-1/2} \int_{1/N}^{+\infty} \exp\Bigl( \gamma (\arctan(N y) -\frac \pi2) \Bigr) \frac{e^{- \ell^2 / (Ny^2)}}{\vert z\vert ^{k + 3/2 + \beta}} \, \dx y \\ \label{HL-split-1} & = W_1+W_2, \end{align} say.
Using \eqref{z^-1} and \eqref{omega-estim}, we have that \begin{equation} \label{W1-estim} W_1 \ll N^{k+3/2} \sum_{\ell \ge 1} e^{- \ell^2 N} \sum_{\rho \colon \gamma>0} \gamma^{\beta-1/2} \exp\Bigl( -\frac \pi4 \gamma\Bigr) \ll N^{k+1} \end{equation} by standard density estimates. Moreover we get \begin{align*} W_2 &\ll \sum_{\ell \ge 1} \sum_{\rho \colon \gamma>0} \gamma^{\beta-1/2} \int_{1/N}^{+\infty} y^{- k - 3/2 - \beta} \exp\Bigl( - \frac{\gamma}{N y} - \frac{\ell^2}{Ny^2} \Bigr) \, \dx y \\ &= N^{k/2+1/4} \sum_{\ell \ge 1} \frac{1}{\ell^{k+1/2}} \sum_{\rho \colon \gamma>0} \frac{N^{\beta/2} \gamma^{\beta-1/2}} {\ell^{\beta}} \int_{0}^{\ell \sqrt{N}} v^{ k - 1/2 + \beta} \exp\Bigl(-\frac{\gamma v}{\ell\sqrt{N}}- v^2\Bigr) \, \dx v, \end{align*} in which we used the substitution $v^2=\ell^2 / (Ny^2)$. Remark now that, for $k>1$, we can set $\varepsilon=\varepsilon(k)=(k-1)/2>0$ and that $k -\varepsilon =(k+1)/2>1$. We further remark that $\max_{v} (v^{k -\varepsilon}e^{-v^2})$ is attained at $v_0=((k -\varepsilon)/2)^{1/2}$, and hence we obtain, for $N$ sufficiently large, that \[ W_2 \ll_{k} N^{k/2+1/4} \sum_{\ell \ge 1} \frac{1}{\ell^{k+1/2}} \sum_{\rho \colon \gamma>0} \frac{N^{\beta/2} \gamma^{\beta-1/2}} {\ell^{\beta}} \int_{0}^{\ell \sqrt{N}} v^{\beta-1/2+\varepsilon}\exp\Bigl(-\frac{\gamma v}{\ell\sqrt{N}}\Bigr) \, \dx v . \] Making the substitution $u= \gamma v/(\ell\sqrt{N})$ we have \begin{align} \notag W_2 &\ll_{k} N^{k/2+1/2+\varepsilon/2} \sum_{\ell \ge 1} \frac{1}{\ell^{k -\varepsilon}} \sum_{\rho \colon \gamma>0} \frac{N^{\beta}} {\gamma^{1+\varepsilon}} \int_{0}^{\gamma} u^{\beta-1/2+\varepsilon} e^{- u} \, \dx u\\ \label{W2-estim} &\ll_{k} N^{k/2+3/2+\varepsilon/2} \sum_{\ell \ge 1} \frac{1}{\ell^{k -\varepsilon}} \sum_{\rho \colon \gamma>0} \frac{ 1 } {\gamma^{1+\varepsilon}} \Bigl( \max_{\beta} \Gamma \Bigl( \beta + \frac12+\varepsilon \Bigr) \Bigr) \ll_{k} N^{k/2+3/2+\varepsilon/2}, \end{align} by standard density estimates. 
Inserting \eqref{W1-estim}-\eqref{W2-estim} into \eqref{HL-split-1} and recalling \eqref{easy-case}, we get, for $k>1$, that the quantity in \eqref{conv-integral-5} is $\ll N^{k+1}$. \section{Absolute convergence of $\J_{4}$ and $\J_{5}$} \label{sums-abs-conv} \medskip To study the absolute convergence of the series in $ \J_4$ we first remark that, by \eqref{Bessel-def} and \eqref{J4-eval}, we get \[ \sum_{\ell \ge 1} \frac{\vert J_{k + 3 / 2} (2 \pi \ell N^{1 / 2}) \vert }{\ell^{k + 3 / 2}} \ll_{k} N^{- k / 2 - 3 / 4} \sum_{\ell \ge 1} \int_{(1/N)} \vert e^{N z}\vert \vert z\vert ^{- k - 5/2} e^{-\pi^2 \ell^2 \Re(1/z)} \, \vert \dx z\vert \] which is the quantity in \eqref{conv-integral-J4}. So the argument in \S\ref{exchange-ell-integral} also proves that the series in $\J_{4}$ converges absolutely for $k> -1/2$. In fact a more direct argument leads to a better estimate on $k$. Using, for $\nu>0$ fixed, $u\in \mathbb{R}$ and $u\to+\infty$, the estimate \begin{equation*} \vert J_\nu(u) \vert \ll_\nu u^{-1/2} \end{equation*} which immediately follows from \eqref{Lebedev-asymp} (or from eq.~(2.4) of Berndt \cite{Berndt1975}), and performing a direct computation, we obtain that $ \J_4$ converges absolutely for $k> -1$ (and for $N$ sufficiently large) and that $\J_4\ll_{k} N^{(k+1)/2}$. For the study of the absolute convergence of the series in $\J_5$ we have a different situation. In this case the direct argument needs a more careful estimate of the Bessel functions involved since both $\nu$ and $u$ are not fixed and, in fact, unbounded. In fact it is easy to see that \eqref{Lebedev-asymp} can be used only if $\nu \in \mathbb{C}$ is bounded, but we are not in this case since $\nu=k + 1 / 2 + \rho$, where $\rho$ is a nontrivial zero of the Riemann $\zeta$-function. On the other hand, \eqref{Poisson-int-rep} can be used only for $u$ bounded, but again this is not our case since $u= 2 \pi \ell N^{1 / 2}$ and $\ell$ runs up to infinity. 
Moreover, the use of the asymptotic relations for $J_\nu(u)$ when $\nu\in\mathbb{C}$ and $u\in\mathbb{R}$ are both ``large'' seems to be very complicated in this setting. So it turned out that the best direct approach we are able to perform is the following. By a double partial integration on \eqref{Poisson-int-rep}, we immediately get \begin{align} \notag J_\nu(u) &= \frac{2(u/2)^{\nu}(2\nu-1)}{\pi^{1/2}u^{2}\Gamma(\nu+1/2)} \int_{0}^{1} \Bigl( 1 - \frac{(2\nu-3)t^{2}}{1-t^2} \Bigr) (1-t^2)^{\nu-3/2} \cos (ut)\ \dx t \\ \notag &\ll_{\Re(\nu)} \frac{\vert u\vert ^{\Re(\nu)-2}\vert 2\nu-1\vert}{\vert \Gamma(\nu+1/2) \vert} \int_{0}^{1} \Bigl( 1 + \vert 2\nu-3 \vert \Bigr) \vert \cos (ut) \vert \ \dx t \\& \label{Poisson-estim2} \ll_{\Re(\nu)} \frac{\vert \nu \vert^{2}\vert u\vert ^{\Re(\nu)-2}}{\vert \Gamma(\nu+1/2) \vert}, \end{align} where the last two estimates hold for $\Re(\nu)>3/2$ and $u>0$. Inserting \eqref{Poisson-estim2} into \eqref{J5-eval} and using the Stirling formula \eqref{Stirling}, a direct computation shows the absolute convergence of the double sum in $\J_{5}$ for $k>2$ (and for $N$ sufficiently large). Unfortunately, such a condition on $k$ is worse than the one we have in \S\ref{exchange-double-sum-ell-rho}. 
So, coming back to the Sonine representation of the Bessel functions \eqref{Bessel-def} on the line $\Re(s)=1$ and using the usual substitution $s=Nz$, to study the absolute convergence of the double sum in $\J_{5}$ we are led to consider the quantity \begin{align*} \sum_{\rho} \Bigl\vert \Gamma(\rho) \frac{N^{\rho / 2}}{\pi^\rho} \Bigr\vert & \sum_{\ell \ge 1} \Bigl\vert \frac{ J_{k + 1 / 2 + \rho} (2 \pi \ell N^{1 / 2}) } {\ell^{k + 1 / 2 + \rho}} \Bigr\vert \\ & \ll_{k} N^{-k/2-1/4} \sum_{\rho}\vert \Gamma(\rho) \vert \sum_{\ell \ge 1} \int_{(1/N)} \vert e^{N z}\vert \vert z\vert ^{- k - 3/2} \vert z^{- \rho} \vert e^{-\pi^2 \ell^2 \Re(1/z)} \, \vert \dx z\vert, \end{align*} which is very similar to the one in \eqref{conv-integral-5} (the sums are interchanged). It is not hard to see that the argument used in \eqref{conv-integral-5}-\eqref{W2-estim} can be applied in this case too. It shows that the double series in $\J_{5}$ converges absolutely for $k>1$ and this condition fits now with the one we have in \S\ref{exchange-double-sum-ell-rho}.
\section*{\refname}} \usepackage{mathtools} \usepackage{multirow} \DeclareMathOperator*{\argmax}{arg\,max} \newcommand{\TODO}[1]{\color{red}TODO:#1.\color{black}} \usepackage[round]{natbib} \renewcommand\bibsection{\section*{\refname}} \begin{document} \title{DPM: A State Space Model for Large-Scale Direct Marketing} \author{ Yubin Park \\ Accordino Health, Inc. \and Rajiv Khanna \\ UT Austin \and Joydeep Ghosh \\ UT Austin \and Daniel Mihalko \\ USAA } \maketitle \begin{abstract} We propose a novel statistical model to answer three challenges in direct marketing: \textit{which channel} to use, \textit{which offer} to make, and \textit{when} to offer. There are several potential applications for the proposed model; for example, developing personalized marketing strategies and monitoring members' needs. Furthermore, the results from the model can complement and can be integrated with other existing models. The proposed model, named Dynamic Propensity Model, is a latent variable time series model that utilizes both marketing and purchase histories of a customer. The latent variable in the model represents the customer's propensity to buy a product. The propensity derives from purchases and other observable responses. Marketing touches increase a member's propensity, and propensity score attenuates and propagates over time as governed by data-driven parameters. To estimate the parameters of the model, a new statistical methodology has been developed. This methodology makes use of particle methods with a stochastic gradient descent approach, resulting in fast estimation of the model coefficients even from big datasets. The model is validated using six months' marketing records from one of the largest insurance companies in the U.S. Experimental results indicate that the effects of marketing touches vary depending on both channels and products. We compare the predictive performance of the proposed model with lagged variable logistic regression. 
Limitations and extensions of the proposed algorithm are also discussed. \end{abstract} \section{Introduction} Direct marketing is a form of data-driven advertising that markets straight to potential consumers through various marketing channels. \cite{DMA2010} reported that in 2010, business and non-profit organizations spent $\$153$ billion on direct marketing, which is approximately $54\%$ of all advertisement expenditures in the United States. The scope of direct marketing channels has expanded from traditional direct mails to targeted online display ads, search, and social media sites, and the sizes of consumer databases have significantly increased. In recent years, web-service companies, such as Google and Yahoo, have applied large-scale data mining and machine learning techniques to online ad optimization: e.g., \cite{perlich2013} detail one such system, \cite{Yan2009} empirically showed that behavioral targeting improves click-through rates, \cite{Gupta2012} applied matrix factorization for display advertisement matching, \cite{khanna2013} developed specialized methods for large-scale modeling for targeted ads on Hadoop, and in a recent KDD Workshop on Data Mining for Advertising, \cite{Shen2008} covered technological advances in online direct marketing. The landscape of direct marketing is dynamically changing and expanding to new applications. Two statistical models are particularly popular for direct marketing: uplift modeling and RFM (Recency, Frequency, and Monetary value analysis). Uplift modeling, also known as incremental modeling or net-lift modeling, aims to measure the effectiveness of advertising through randomized control and test groups \citep{Radcliffe1999,Lo2002}. Although uplift modeling can provide strong return on investment cases for some applications, a large number of control and test samples are needed to estimate extremely low response rates, e.g., insurance purchases.
While uplift modeling predicts the difference that a marketing touch makes, RFM is a marketing-agnostic summary statistic for describing customers \citep{Fader2005}. In RFM modeling, a customer is represented using three variables: the time of the most recent purchase, the frequency of purchases, and the average spending per purchase. \cite{McCarty2007} compared the predictive performances of RFM, CHAID, and logistic regression, and concluded that RFM was comparable to the other two methods. However, since RFM is marketing-agnostic, high-lift customers, whose purchase behaviors are significantly affected by marketing touches, tend to be neglected when RFM is used as the primary targeting method. State space models, although not popular in the direct marketing literature, can potentially address three main challenges in direct marketing: \textit{which channel} to use, \textit{which offer} to make, and \textit{when} to offer. A state space model, also known as a dynamic linear model, is a latent variable time series model that represents a physical system as a set of input, output, and (latent) state variables. In direct marketing, the input and output refer to the marketing touch and the response respectively, and the state can be interpreted as propensity to purchase a product. Although state space models have been widely adopted in diverse academic domains such as machine learning, economics, and finance (see Section~\ref{sec:ssmest}), there have been few applications in direct marketing to the best of our knowledge. \cite{Dekimpe2000} provide two reasons for the poor adoption of time series modeling in marketing: lack of adequate data sources and lack of easy-to-use time series software for marketers. \cite{Pauwels2004} summarized potential challenges when time series models are applied in marketing.
Recently, \cite{Dekimpe2010} noted that time series modeling has just started to gain popularity in the marketing science community due to recent developments in marketing databases and computational power: \cite{Fader2010} used negative beta-geometric and beta-Bernoulli distributions with two latent variables (dead or alive) to model recursive discrete-time donation behavior, and recently, \cite{Lee2014} adopted a state space model to measure brand inertia. In this paper, we present a new state space model for direct marketing, namely the Dynamic Propensity Model (DPM). The proposed model is a latent variable time series model that utilizes both marketing and purchase histories of a customer, where the latent variable (state) represents a member's propensity to buy a product. Our model tracks an individual's marketing touches and responses, then estimates his personalized probabilities for the first purchase at daily resolution. Marketing histories contain the records of when and how many emails, direct mails, phone calls, display ads, and referrals are sent or clicked. In the model, such marketing touches increase an individual's propensity, and this propensity score attenuates and propagates over time. We use six months' marketing records from one of the largest insurance companies in the U.S. The degrees of the marketing effects and propagation strengths are statistically estimated from the data. To estimate the parameters of the model, a new statistical methodology has been developed, which combines particle methods and a stochastic gradient descent approach. The design of DPM was, in fact, motivated by several findings. Initially, our research goal was to find the retrospective correlation between marketing touches and purchase activities using generalized linear models. During the investigation, however, we observed that some of the marketing touches have \textit{negative effects} on purchase behaviors (see Section~\ref{sec:empirical}).
For some marketing practices, it is possible that some customers may be annoyed by frequent marketing touches. Moreover, there are several aspects of the data that could not be addressed by a simple generalized linear model: \begin{itemize} \item Direct mails, emails, and promotional phone calls are sent out to a strategically filtered group of customers. The company targeted persuadable customers, whose actuarial purchase probabilities are expected to be positively impacted by marketing touches. \item Control and test groups are not readily identifiable for semi-targetable marketing events such as display ads and referrals. Uplift modeling cannot directly measure the effects of such semi-targetable marketing touches. \item Past marketing touches have some effect on purchase activities. Appropriate effect time windows need to be determined to accurately estimate the true effects of marketing touches. \end{itemize} To address these findings, we needed a model that can track individual customers with different marketing histories. The model also needed to provide consistent interpretations for building marketing strategies and monitoring customers' needs. We view a purchase as an instantiation of marketing satisfaction \citep{Westbrook1987}, rather than a cognitive decision-making process involving the semantic meaning of product attributes \citep{Oliver1980}. A dynamic process of satisfaction \citep{Fournier1999} naturally leads to algebraic linkages between temporally close satisfaction variables. These findings and objectives led to the construction of our Dynamic Propensity Model. Table~\ref{tab:notation} shows the mathematical notation used in this paper. The rest of this paper is organized as follows: In Section~\ref{sec:background}, we cover the basics of particle methods, parameter estimation techniques in state-space models, and stochastic gradient descent.
In Section~\ref{sec:dpm}, we formally introduce the Dynamic Propensity Model, and investigate the properties of the model. The parameter estimation method for the proposed model is provided in Section~\ref{sec:estimation}, and empirical results from the real-life dataset are illustrated in Section~\ref{sec:empirical}. Finally, we discuss the limitations of the proposed method and future work in Section~\ref{sec:conclusion}. \begin{table}\caption{Notation.}\label{tab:notation} \begin{center} \begin{tabular}{ l l } \hline Symbol & Explanation \\ \hline\hline $y_t^i$ & purchase indicator (output) \\ $\mathbf{r}_t^i$ & integer vector of non-actionable marketing touches (input)\\ $\mathbf{m}_t^i$ & integer vector of actionable marketing touches (input)\\ \hline $x_t^i$ & filtered propensity (latent state)\\ $s_t^i$ & predictive propensity (auxiliary latent state)\\ $\varepsilon_t^i$ & standard normal noise i.e. $\varepsilon_t^i \sim \text{N}(0, 1)$\\\hline $c$ & offset parameter for propensity \\ $\phi$ & damping factor for propensity \\ $\boldsymbol{\alpha}$ & coefficient vector for semi-targetable marketing touches e.g. display ads\\ $\boldsymbol{\beta}$ & coefficient vector for targetable marketing touches e.g. direct mail\\ $\boldsymbol{\theta}$ & parameter vector i.e. $\boldsymbol{\theta}=(c, \phi, \boldsymbol{\alpha},\boldsymbol{\beta})$\\\hline $\text{Pr}$ & probability measure \\ $\mathcal{L}$ & loss function or objective function \\ \hline \end{tabular} \end{center} \end{table} \section{Background}\label{sec:background} In this section, we cover related work on particle methods, parameter estimation techniques in state-space models (SSMs), and the basics of stochastic gradient descent. These techniques form the building blocks of our proposed model, DPM, which is a latent variable time series model for large-scale enterprise data. We found that traditional parameter estimation approaches for SSMs become intractable when applied to the size of our dataset.
As a result, we developed a new parameter estimation technique by adopting (1) Monte Carlo simulation methods, and (2) stochastic optimization algorithms for the DPM objective function. \subsection{Particle Methods} If both the observation and the latent variables are normally distributed, the optimal filter is the Kalman Filter (KF) \citep{Kalman1960}. For non-linear systems, several approximation techniques based on linearization, such as the Extended KF (first-order approximation) and the Unscented KF (second-order approximation), can be applied. However, such linearization usually causes non-diminishing bias, and even worse, the algorithms are typically difficult to implement and tune correctly \citep{Julier2004}. Particle methods \citep{Gordon1993} use a different kind of approximation technique: Monte Carlo simulation. Unlike those variants of the KF, the state estimates from particle methods can be made arbitrarily accurate with enough particles. Particle methods are based on a sequence of importance sampling steps. Resampling techniques \citep{Liu1998} are typically adopted to slow the degeneracy of the particles. The Auxiliary Particle Filter was also developed to prevent the degeneracy of Sequential Monte Carlo \citep{Pitt1999}. Particle methods are powerful general state-space estimation techniques that are widely applicable to non-linear evolution and observation processes \citep{Doucet:2008us}. In this paper, we use a particle filter with resampling to estimate a sequence of dynamic propensity scores. \subsection{Latent Variable Time Series Models}\label{sec:ssmest} Latent variable time series models fall into two classes: state-space models (continuous latent variables) and hidden Markov models (discrete latent variables). Latent variables are effective for summarizing past observations, capturing underlying dynamics, and providing human-interpretable results from complex observations \citep{JHo2013}. 
\cite{raghavan2012} developed a hidden Markov model for modeling activity profiles of terrorist activities. \cite{xing2008} developed a state space model to capture time-varying networks. \cite{valentini2013} employed a spatially structured factor analysis to model house prices, while \cite{nagaraja2011} used an autoregressive technique to capture the time effect. \cite{aktekin2013} used a Bayesian state space model to estimate mortgage default risk. In this paper, we use a state space model to capture the (unobserved) propensity\footnote{Discrete latent variables in hidden Markov models cannot model continuous propensity scores.}. Parameter estimation techniques for SSMs fall into three main groups: Bayesian online, maximum-likelihood offline, and maximum-likelihood online settings \citep{Kantas2009}. In the Bayesian online setting, model parameters are assumed to be \textit{dynamic} over the time series, and they are sequentially estimated. Some successful algorithms are the Liu-West Filter \citep{Liu2001}, the Storvik Filter \citep{Storvik2002}, and Particle Learning \citep{Carvalho2010}. For the maximum-likelihood online setting, \cite{Andrieu2005} demonstrated an online estimation algorithm using block time series and a pseudo-likelihood. Recall that the parameters in this paper are fixed but unknown. In the offline (or batch) maximum-likelihood setting, two approaches have been popular: Fisher's scoring and Expectation-Maximization (EM). Fisher's scoring is a variant of the Newton-Raphson algorithm based on the log-likelihood function. However, obtaining the log-likelihood of a time series is typically intractable. \cite{Doucet2003} proposed a general approach for approximating the log-likelihood using particle methods. Although this Fisher's scoring algorithm is applicable to many settings, it is difficult to scale the gradients to high-dimensional parameters \citep{Kantas2009}. 
The EM algorithm is numerically more stable and usually computationally cheaper for high-dimensional parameters. For a Gaussian SSM, the EM algorithm can be implemented using the Kalman Filter and Smoother \citep{Shumway1982}. For non-linear systems, \cite{Zia2008} introduced the EM-PF (EM using Particle Filter) algorithm, but many of its assumptions are not applicable in our setting. For categorical time series, \cite{Park2014} proposed an efficient parameter estimation algorithm based on a stochastic EM algorithm \citep{Nielsen2000}. However, their problem setting is slightly different from ours; they assume different latent dynamics for individuals, and reasonably balanced class labels. In this paper, we use an alternating maximization approach that is similar to the EM algorithm. There are some differences between our approach and the EM algorithm, which will be explained in Section~\ref{sec:estimation}. \subsection{Stochastic Gradient Descent} Stochastic Gradient Descent (SGD) is a large-scale optimization technique for a summation of differentiable objective functions. Consider an objective function $\mathcal{L}(\boldsymbol{\theta})$ with a parameter $\boldsymbol{\theta}$, where the objective function can be written as $\mathcal{L}(\boldsymbol{\theta}) = \sum_{i } \mathcal{L}_i (\boldsymbol{\theta})$. An iterative method such as gradient descent can be used to reach a local optimum. If $\boldsymbol{\theta}^{(v)}$ is the estimate for the parameter in the $v$th iteration, a gradient descent algorithm produces, in the next iteration: \begin{align*} \boldsymbol{\theta}^{(v+1)} = \boldsymbol{\theta}^{(v)} - \gamma^{(v)} \nabla \mathcal{L}(\boldsymbol{\theta}^{(v)}) = \boldsymbol{\theta}^{(v)} - \gamma^{(v)} \sum_i \nabla \mathcal{L}_i(\boldsymbol{\theta}^{(v)}) \end{align*} where $\gamma^{(v)}$ is a suitable step size. 
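As a toy numerical illustration of the iteration above, consider the following sketch; the quadratic objective and all values here are illustrative and unrelated to our model:

```python
import numpy as np

# Toy decomposable objective: L(theta) = sum_i (a_i * theta - b_i)^2,
# whose minimizer is theta = 2 for the data below.
a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 4.0, 6.0])

def grad_full(theta):
    # Full-batch gradient: the sum of the per-datum gradients of L_i.
    return np.sum(2.0 * a * (a * theta - b))

theta = 0.0
for v in range(1, 201):
    gamma = 0.05 / v                      # decaying step size gamma^(v)
    theta = theta - gamma * grad_full(theta)
```

With a decaying step size the iterate settles near the minimizer $\theta=2$; the point of the sketch is only that each update sums gradient contributions over all data points $i$.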
The full gradient must be calculated over the entire dataset for every update of $\boldsymbol{\theta}$. This can be costly for larger datasets. A \emph{stochastic} version of gradient descent, SGD, is a more practical approach for large-scale learning problems \citep{Bottou2007}. SGD converges under suitable step-size conditions while avoiding the costly full-gradient calculation \citep{Bottou1998}. The update step for SGD is simpler: \begin{align*} \boldsymbol{\theta}^{(v+1)} = \boldsymbol{\theta}^{(v)} - \gamma^{(v)} \sum_{i \in \mathcal{I}} \nabla \mathcal{L}_i (\boldsymbol{\theta}^{(v)}) \end{align*} where $\mathcal{I}$ is a subset of the dataset. The subset can be as small as a single data point, though using small batches usually decreases the variance and leads to quicker convergence. In this paper, we use SGD in our alternating maximization approach. \section{Dynamic Propensity Model}\label{sec:dpm} The Dynamic Propensity Model (DPM) is a latent variable time series model that utilizes both the marketing and purchase histories of a customer. The latent variable of DPM represents a member's propensity to buy a product. The model tracks an individual's marketing touches and responses, and estimates personalized probabilities for the first purchase at a daily resolution. These probability scores can be used in multiple applications: (1) predicting when a customer is likely to buy the product (within-customer application) and (2) targeting likely-to-buy customers (across-customer application). DPM uses a different modeling strategy from traditional targeting models. Traditional targeting models estimate cross-sectional probabilities, whereas DPM models longitudinal probabilities. In other words, traditional targeting models answer whom to offer, and DPM suggests when to offer. Although these two goals may seem fundamentally different, one indirectly indicates the other given a limited marketing budget. 
This connection is shown in Figure~\ref{fig:duality}. In cross-sectional modeling, choosing a subset of customers is typically based on the rank orders of probability scores at a given time. For a different cross-section of data, a different subset of customers will be chosen. As a result, a customer may or may not be chosen for direct marketing at a given time, and this property indirectly determines when to offer. Although their outcomes may be superficially similar, each approach has different modeling limitations. Cross-sectional models are easy to estimate, but complex effects, such as time-varying effects and customer heterogeneity, are difficult to model. On the other hand, longitudinal models can capture dynamic and heterogeneous effects, but they typically need more samples for parameter estimation, and some model parameters can be nearly intractable to estimate. Because of the complexity of time-series models, cross-sectional models have traditionally been modified to handle complex effects, instead of using time-series models directly. \cite{Allenby1994} applied temporally correlated error terms for modeling household purchase behavior. \cite{Keane1997} suggested a variant of the discrete choice model that captures both customer heterogeneity and temporal dependency. \begin{figure} \center \includegraphics[width=0.8\textwidth]{./duality} \caption{The connection between \textit{whom} to offer and \textit{when} to offer.}\label{fig:duality} \end{figure} Our goal is to build a time series model that can easily capture customer heterogeneity and temporal dependency; furthermore, the model should be simple enough to extend with other covariates. In this paper, we demonstrate DPM on one product and the associated marketing touches for promoting that product, although it can be extended to deal with multiple products through a matrix representation. 
In other words, marketing touches for promoting different products are ignored to keep our illustration simple. Recall that, in DPM, a customer has a latent factor (propensity) that evolves over time. We assume that marketing effects can be superposed, i.e., they additively affect the propensity. A direct application of the state-space model can capture these properties: \begin{align*} \text{(Filtered Propensity)} &\quad x_{t+1}^i = c_d + \phi_d x_{t}^i + \boldsymbol{\alpha}^\top_d \mathbf{r}_t^i + \boldsymbol{\beta}^\top_d \mathbf{m}_{t}^i + \varepsilon_{t}^i \\ \text{(Tomorrow's Purchase)} &\quad \text{Pr}(y_{t+1}^i = 1) = \frac{1}{1+\exp(-x_{t+1}^i)} \end{align*} where the superscript $i$ represents the $i$th customer, and the subscript $t$ indicates variables at time $t$. The subscript $d$ on the model parameters indicates a demographic segment that exhibits similar responses to marketing touches, i.e., a homogeneous marketing response group. Two different types of marketing touches are illustrated: semi-targetable $\mathbf{r}_t^i$ and targetable $\mathbf{m}_t^i$ marketing touches. The targetable marketing touches include emails, direct mails, and phone calls. In these cases, marketing can be directly targeted to particular customers. The semi-targetable marketing touches represent display ads and referrals. Although a company can control the exposure to such semi-targetable marketing touches, they cannot be targeted to a specific individual, and the exposure involves a certain degree of randomness. Some semi-targetable marketing variables contain marketing responses, e.g., clicking display ads, and we view these activities as increasing the propensity. This direct application of the state-space model, however, faces two practical issues: \begin{itemize} \item \textbf{Time difference within a day}: Even though $\mathbf{m}_t$ and $y_t$ are indexed by the same subscript, one occurs before the other, e.g. 
$\mathbf{m}_t$ in the morning and $y_t$ in the afternoon. In this paper, we assume that a purchase decision $y_t$ is made at the very start of a day; thus $\mathbf{m}_t$ and $\mathbf{r}_t$ affect the next day's purchase decision $y_{t+1}$. \item \textbf{No data anchor on marketing effects}: If $\mathbf{x}$ is known, then estimating $\boldsymbol{\alpha}$ and $\boldsymbol{\beta}$ only depends on $\mathbf{x}$ in the na\"{i}ve SSM application. If the estimates of $\mathbf{x}$ are noisy, then the error propagates to the parameter estimates. In fact, we have observed that the parameters do not converge when using this model. The estimated parameters should be anchored on actual data samples, rather than being determined by latent variables. \end{itemize} These issues can be addressed by introducing an auxiliary variable $s_{t+1}$ between $x_{t}$ and $x_{t+1}$. We now introduce the model equations for DPM: \begin{align*} \text{(Propagated Propensity)} &\quad\quad x_{t}^i = s_t^i + \varepsilon_{t}^i\\ \text{(Predicted Propensity)} &\quad\quad s_{t+1}^i = c_d + \phi_d x_{t}^i + \boldsymbol{\alpha}^\top_d \mathbf{r}_t^i + \boldsymbol{\beta}^\top_d \mathbf{m}_{t}^i \\ \text{(Tomorrow's Purchase)} &\quad\quad \text{Pr}(y_{t+1}^i = 1) = \frac{1}{1+\exp(-s_{t+1}^i)} \end{align*} where $\varepsilon_t^i$ is drawn from the standard normal distribution. The auxiliary variable $s_{t+1}$ is deterministic given today's propensity $x_{t}$ and marketing touches $\mathbf{m}_t$ and $\mathbf{r}_t$. This auxiliary variable can be viewed as the propensity in the evening that directly affects the purchase decision at the very start of the next day. Figure~\ref{fig:graphical} shows and compares the graphical representations of the na\"{i}ve SSM application and DPM. The auxiliary variable in DPM not only refines the time resolution of the purchase process, but also provides a new perspective on parameter estimation. 
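For concreteness, the generative process defined by these three equations can be simulated directly. The parameter values and touch sequences below are purely illustrative, not estimates from our data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters for one segment (index d dropped).
c, phi = -6.0, 0.8
alpha = np.array([0.5, 0.3, 0.2])    # semi-targetable touch effects
beta = np.array([1.0, 0.7, 0.4])     # targetable touch effects

T = 30
r = rng.integers(0, 2, size=(T, 3))  # daily semi-targetable touch counts
m = rng.integers(0, 2, size=(T, 3))  # daily targetable touch counts

x = np.zeros(T + 1)                  # propagated propensity x_t
s = np.zeros(T + 1)                  # predicted propensity s_t
y = np.zeros(T + 1, dtype=int)       # purchase indicator y_t
x[0] = c / (1.0 - phi)               # start at the noise-free steady state

for t in range(T):
    s[t + 1] = c + phi * x[t] + alpha @ r[t] + beta @ m[t]  # deterministic
    x[t + 1] = s[t + 1] + rng.standard_normal()             # add N(0,1) noise
    p = 1.0 / (1.0 + np.exp(-s[t + 1]))                     # Pr(y_{t+1} = 1)
    y[t + 1] = rng.binomial(1, p)
```

Marketing touches raise $s_{t+1}$ additively, and the $\phi$ term lets their effect decay geometrically over the following days.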
As an illustrative example, let us consider an alternating maximization approach for parameter estimation. In the SSM formulation, we basically solve two maximization problems: \begin{equation*} \begin{aligned}[c] \max~ \boldsymbol{\theta} &\mid \mathbf{x}, \mathbf{y} \\ \max~ \mathbf{x} &\mid \boldsymbol{\theta}, \mathbf{y} \end{aligned}\quad\Longrightarrow\quad \begin{aligned}[c] \max~ \boldsymbol{\theta} &\mid \mathbf{x} \\ \max~ \mathbf{x} &\mid \boldsymbol{\theta}, \mathbf{y} \end{aligned} \end{equation*} where $\boldsymbol{\theta} = \{c, \phi, \boldsymbol{\alpha}, \boldsymbol{\beta}\}$. Note that $\mathbf{y}$ in the first maximization problem is removed, since $\boldsymbol{\theta}$ is solely determined by $\mathbf{x}$, i.e., the data-anchor issue. On the other hand, in DPM, we solve three maximization problems with respect to $\boldsymbol{\theta}$, $\mathbf{x}$, and $\mathbf{s}$: \begin{equation*} \begin{aligned}[c] \max~ \boldsymbol{\theta} &\mid \mathbf{x}, \mathbf{y}, \mathbf{s} \\ \max~ \mathbf{x} &\mid \boldsymbol{\theta}, \mathbf{y}, \mathbf{s}\\ \max~ \mathbf{s} &\mid \boldsymbol{\theta}, \mathbf{x}, \mathbf{y} \end{aligned}\quad\Longrightarrow\quad \begin{aligned}[c] \max~\boldsymbol{\theta} &\mid \mathbf{x}, \mathbf{y} \\ \max~ \mathbf{x} &\mid \boldsymbol{\theta}, \mathbf{y}\\ ( \mathbf{s} &\mid \boldsymbol{\theta}, \mathbf{x}, \mathbf{y} ) \end{aligned} \end{equation*} where the last maximization problem is trivial, as $\mathbf{s}$ is determined by $\mathbf{x}$ and $\boldsymbol{\theta}$. In fact, these three maximization problems reduce to two. To remove the last maximization problem, we treat the auxiliary variable as a nuisance variable. We remove $\mathbf{s}$ using the relation $s_{t+1}=c_d + \phi_d x_{t} + \boldsymbol{\alpha}^\top_d \mathbf{r}_t + \boldsymbol{\beta}^\top_d \mathbf{m}_{t} $. Both of the resulting maximization problems are now anchored by the observations $\mathbf{y}$. 
The path from $x_t$ to $y_{t+1}$ in DPM involves the model parameter $\boldsymbol{\theta}$ (see Figure~\ref{fig:graphical}). The auxiliary variable in DPM thus yields a different set of maximization problems than the SSM formulation does. The complete joint probability distribution of DPM for a particular customer can be recursively factorized as follows: \begin{align*} &\text{Pr}(\mathbf{R}, \mathbf{M}, \mathbf{y}, \mathbf{x}) \\ &= \underbrace{\text{Pr}(\mathbf{R}, \mathbf{M} )}_{\text{marketing strategy}} \prod_t \text{Pr}(y_{t+1} \mid \underbrace{x_t}_{\mathclap{\text{customer heterogeneity}}},\mathbf{r}_t, \mathbf{m}_t ) \underbrace{\text{Pr}(x_t \mid x_{t-1}, \mathbf{r}_{t-1}, \mathbf{m}_{t-1} )}_{\text{temporal dependency}} \end{align*} where we removed the superscript $i$ to simplify notation. As can be seen, the model captures both customer heterogeneity and temporal dependency. Note that, in DPM, customer heterogeneity originates from differences in marketing touches and responses over time. Additional customer heterogeneity can be captured by demographic segments such as age, income, and family size. This demographic heterogeneity can affect customers' base purchase rates and response dynamics. Note that, in our DPM formulation, the model parameters are indexed by the demographic segment variable $d$. In practice, we build a separate model for each demographic segment that exhibits homogeneous behavioural patterns. For clarity, we focus on one demographic segment from here onwards, dropping the index $d$. 
\begin{figure}[h] \centering \definecolor{mycolor1}{RGB}{217,91,67} \definecolor{mycolor2}{RGB}{192,41,66} \definecolor{mycolor3}{RGB}{84,36,55} \definecolor{mycolor4}{RGB}{83,119,122} \tikzstyle{state}=[circle,thick,minimum size=1.2cm,draw=mycolor1] \tikzstyle{measurement}=[circle,thick,minimum size=1.2cm,draw=mycolor2] \tikzstyle{input}=[circle,thick,minimum size=1.2cm,draw=mycolor3] \tikzstyle{matrx}=[rectangle,thick,minimum size=1cm,draw=mycolor4] \tikzstyle{background}=[rectangle,fill=gray!10,inner sep=0.2cm,rounded corners=5mm] \begin{subfigure}[b]{0.8\textwidth} \begin{tikzpicture}[>=latex,text height=1.5ex,text depth=0.25ex] \matrix[row sep=0.5cm,column sep=0.3cm] { \node (m_t) [input] {$\mathbf{m}_{t}$}; & \node (r_t) [input] {$\mathbf{r}_{t}$}; & \node (m_t+1) [input] {$\mathbf{m}_{t+1}$}; & \node (r_t+1) [input] {$\mathbf{r}_{t+1}$}; & \node (m_t+2) [input] {$\mathbf{m}_{t+2}$}; & \node (r_t+2) [input] {$\mathbf{r}_{t+2}$}; & \\ \node (s_t-1) {$\cdots$}; & \node (x_t) [state] {${x}_{t}$}; & & \node (x_t+1) [state] {${x}_{t+1}$}; & & \node (x_t+2) [state] {${x}_{t+2}$}; & \node (s_t+2) {$\cdots$}; \\ &\node (y_t) [measurement] {${y}_{t}$}; & & \node (y_t+1) [measurement] {${y}_{t+1}$}; & & \node (y_t+2) [measurement] {${y}_{t+2}$}; & \\ }; \path[->] (s_t-1) edge[thick] node[above] {$\boldsymbol{\theta}$} (x_t) (x_t) edge[thick] node[above] {$\boldsymbol{\theta}$} (x_t+1) (x_t+1) edge[thick] node[above] {$\boldsymbol{\theta}$} (x_t+2) (x_t+2) edge[thick] node[above] {$\boldsymbol{\theta}$} (s_t+2) (x_t) edge (y_t) (x_t+1) edge (y_t+1) (x_t+2) edge (y_t+2) (r_t) edge (x_t) (m_t) edge (x_t) (r_t+1) edge (x_t+1) (m_t+1) edge (x_t+1) (r_t+2) edge (x_t+2) (m_t+2) edge (x_t+2) ; \end{tikzpicture} \caption{Na\"{i}ve Application of SSM} \end{subfigure} \begin{subfigure}[b]{0.8\textwidth} \begin{tikzpicture}[>=latex,text height=1.5ex,text depth=0.25ex] \matrix[row sep=0.5cm,column sep=0.3cm] { \node (m_t-1) [input] {$\mathbf{m}_{t-1}$}; & \node (r_t-1) [input] 
{$\mathbf{r}_{t-1}$}; & \node (m_t) [input] {$\mathbf{m}_{t}$}; & \node (r_t) [input] {$\mathbf{r}_{t}$}; & \node (m_t+1) [input] {$\mathbf{m}_{t+1}$}; & \node (r_t+1) [input] {$\mathbf{r}_{t+1}$}; & & \\ \node (x_t-1) {$\cdots$}; & \node (s_t) [matrx] {$s_{t}$}; & \node (x_t) [state] {${x}_t$}; & \node (s_t+1) [matrx] {$s_{t+1}$}; & \node (x_t+1) [state] {${x}_{t+1}$}; & \node (s_t+2) [matrx] {$s_{t+2}$}; & \node (x_t+2) [state] {${x}_{t+2}$}; & \node (s_t+3) {$\cdots$}; \\ & & \node (y_t) [measurement] {${y}_{t}$}; & & \node (y_t+1) [measurement] {${y}_{t+1}$}; & & \node (y_t+2) [measurement] {${y}_{t+2}$}; & \\ }; \path[->] (x_t-1) edge[thick] node[above] {$\boldsymbol{\theta}$} (s_t) (s_t) edge[thick] (x_t) (x_t) edge[thick] node[above] {$\boldsymbol{\theta}$} (s_t+1) (s_t+1) edge[thick] (x_t+1) (x_t+1) edge[thick] node[above] {$\boldsymbol{\theta}$} (s_t+2) (s_t+2) edge[thick] (x_t+2) (x_t+2) edge[thick] node[above] {$\boldsymbol{\theta}$} (s_t+3) (s_t) edge (y_t) (s_t+1) edge (y_t+1) (s_t+2) edge (y_t+2) (m_t-1) edge (s_t) (r_t-1) edge (s_t) (m_t) edge (s_t+1) (r_t) edge (s_t+1) (m_t+1) edge (s_t+2) (r_t+1) edge (s_t+2) ; \end{tikzpicture} \caption{Dynamic Propensity Model} \end{subfigure} \caption{Graphical Models for the simple SSM and DPM. The path from $x_t$ to $y_{t+1}$ in DPM involves the model parameter $\boldsymbol{\theta}$, while the SSM path from $x_t$ to $y_{t+1}$ is blocked by $x_{t+1}$. }\label{fig:graphical} \end{figure} \section{Parameter Estimation}\label{sec:estimation} The maximum likelihood parameter for DPM is hard to optimize, as can be seen from the likelihood equation: \begin{align*} &\arg\max_{\boldsymbol{\theta}} \prod_{i=1}^N \prod_{t=1}^{T^i} \text{Pr}_{\boldsymbol{\theta}}(y_{t+1}^i \mid \mathbf{r}_t^i, \mathbf{m}_t^i) \end{align*} where $N$ is the total number of customers, and $T^i$ is the number of days that the $i$th customer is monitored. 
The likelihood is defined on the observed variables: $y_t^i$, $\mathbf{m}_t^i$, and $\mathbf{r}_t^i$. To obtain the actual value of the likelihood, we need to integrate out the latent states: \begin{align*} &\arg\max_{\boldsymbol{\theta}} \prod_{i=1}^N \int_{\mathbf{x}} \prod_{t=1}^{T^i} \text{Pr}_{\boldsymbol{\theta}}(y_{t+1}^i \mid {x}_t^i, \mathbf{r}_t^i, \mathbf{m}_t^i)\text{Pr}_{\boldsymbol{\theta}}({x}_t^i \mid {x}_{t-1}^i, \mathbf{r}_{t-1}^i, \mathbf{m}_{t-1}^i) d\mathbf{x}\\ & = \arg\max_{\boldsymbol{\theta}} \sum_{i=1}^N \log \int_{\mathbf{x}} \prod_{t=1}^{T^i} \text{Pr}_{\boldsymbol{\theta}}(y_{t+1}^i \mid {x}_t^i, \mathbf{r}_t^i, \mathbf{m}_t^i)\text{Pr}_{\boldsymbol{\theta}}({x}_t^i \mid {x}_{t-1}^i, \mathbf{r}_{t-1}^i, \mathbf{m}_{t-1}^i) d\mathbf{x} \end{align*} where $\mathbf{x}$ represents $x_{1:T^i}^i$. The inner integral makes optimization of the log-likelihood intractable. This type of maximization problem has traditionally been approached with the EM algorithm \citep{Neal1998}. The EM algorithm derives a lower bound of the log-likelihood, and then maximizes this lower bound. The lower bound is obtained using Jensen's inequality for an arbitrary distribution $\text{Q}^i(\mathbf{x})$: \begin{align} \sum_{i=1}^N \int_{\mathbf{x}} \text{Q}^i(\mathbf{x}) \log \frac{\prod_{t=1}^{T^i} \text{Pr}_{\boldsymbol{\theta}}(y_{t+1}^i \mid {x}_t^i, \mathbf{r}_t^i, \mathbf{m}_t^i)\text{Pr}_{\boldsymbol{\theta}}({x}_t^i \mid {x}_{t-1}^i, \mathbf{r}_{t-1}^i, \mathbf{m}_{t-1}^i) }{\text{Q}^i(\mathbf{x})} d\mathbf{x} \label{eq:em} \end{align} This lower bound is maximized when $\text{Q}^i(\mathbf{x})$ is the posterior distribution of the latent path, i.e., \begin{align*} \text{Q}^i(\mathbf{x}) \propto \prod_{t=1}^{T^i} \text{Pr}_{\boldsymbol{\theta}}(y_{t+1}^i \mid {x}_t^i, \mathbf{r}_t^i, \mathbf{m}_t^i)\text{Pr}_{\boldsymbol{\theta}}({x}_t^i \mid {x}_{t-1}^i, \mathbf{r}_{t-1}^i, \mathbf{m}_{t-1}^i) \end{align*} For dynamic linear systems with Gaussian noise, $\text{Q}^i(\mathbf{x})$ can be obtained in a closed form. 
However, the binary purchase indicators cannot be directly modeled as numeric variables, and the form of $\text{Q}^i(\mathbf{x})$ must be numerically approximated. The EM algorithm for DPM can be written as follows: \begin{enumerate} \item Initialize $\boldsymbol{\theta}$ \item Until convergence, \begin{enumerate} \item (E-step) Estimate $Q^i$ that maximizes Equation~\ref{eq:em} \item (M-step) Estimate $\boldsymbol{\theta}$ that maximizes Equation~\ref{eq:em} \end{enumerate} \end{enumerate} This method is practical only for small $N$, but not for the size of our data. If we use particle filters for the E-step, the number of simulation particles to store is $P \times \sum_{i} T^i $. As an illustrative example, if we use 5000 particles for 200K customers with an average of 100 observations per customer, a hundred billion particles need to be stored and flushed at every iteration. Furthermore, the M-step needs to run on this massive set of particles, which is also a challenging engineering problem. To facilitate practical optimization, we optimize a surrogate instead of the true likelihood function by viewing $x_t$ as a temporally correlated random offset parameter, rather than a latent variable. This view helps us maximize the \textit{conditional likelihood} function of DPM, instead of the marginal likelihood. Our goal is to obtain estimates for both $\mathbf{X}=\{\mathbf{x}^1, \mathbf{x}^2, \ldots, \mathbf{x}^N\}$ and $\boldsymbol{\theta}$. 
The original optimization problem with the parameter set $\boldsymbol{\theta}$ now transforms to a new optimization problem with two sets of variables as follows: \begin{align} \label{eqn:LDPM} \begin{split} &\arg\max_{\boldsymbol{\theta},\mathbf{X}} \mathcal{L}_{\text{DPM}} (\boldsymbol{\theta},\mathbf{X}) \\ & = \arg\max_{\boldsymbol{\theta},\mathbf{X}} \prod_{i=1}^N \prod_{t=1}^{T^i} \text{Pr}_{\boldsymbol{\theta}}(y_{t+1}^i \mid {x}_{t}^i,\mathbf{r}_t^i, \mathbf{m}_t^i )\text{Pr}_{\boldsymbol{\theta}}({x}_t^i \mid {x}_{t-1}^i, \mathbf{r}_{t-1}^i, \mathbf{m}_{t-1}^i) \\ & = \arg\max_{\boldsymbol{\theta},\mathbf{X}} \sum_{i=1}^N \sum_{t=1}^{T^i} \underbrace{\log \text{Pr}_{\boldsymbol{\theta}}(y_{t+1}^i \mid {x}_{t}^i,\mathbf{r}_{t}^i, \mathbf{m}_t^i )}_{\text{generalized linear model}} + \underbrace{\log \text{Pr}_{\boldsymbol{\theta}}({x}_t^i \mid {x}_{t-1}^i, \mathbf{r}_{t-1}^i,\mathbf{m}_{t-1}^i)}_{\text{time series}} \end{split} \end{align} where $\mathcal{L}_{\text{DPM}}$ is the objective function of DPM. As can be seen, the inner integral is removed, since $\mathbf{X}$ is treated as a parameter matrix. The factorized form in Equation~\ref{eqn:LDPM} provides an insightful interpretation of the objective function. The first part of the factorized form is a logistic regression model with customer heterogeneity, while the second part explains temporal dependency. This objective function can also be viewed as a logistic regression with a sophisticated temporal regularization, or as a time series model with a logistic-loss regularization. 
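To make the two terms concrete, the per-customer objective can be evaluated as follows. This is a minimal sketch: the function name and array layout are our own, and $\sigma^2$ is kept as an argument even though the model fixes it to one:

```python
import numpy as np

def dpm_objective(theta, x, y, r, m, sigma2=1.0):
    """Per-customer conditional log-likelihood L_DPM (illustrative sketch).

    x    : latent path (x_1, ..., x_T)
    y[t] : purchase indicator y_{t+1}, aligned with x[t]
    r, m : daily marketing-touch count vectors, aligned with x
    """
    c, phi, alpha, beta = theta
    s_next = c + phi * x + r @ alpha + m @ beta        # s_{t+1} from each x_t
    # Generalized-linear-model term: sum_t log Pr(y_{t+1} | x_t, r_t, m_t)
    loglik_y = np.sum(y * s_next - np.log1p(np.exp(s_next)))
    # Time-series term: sum_t log Pr(x_{t+1} | x_t, r_t, m_t)
    resid = x[1:] - s_next[:-1]                        # x_{t+1} - s_{t+1}
    loglik_x = -0.5 * np.sum(np.log(2 * np.pi * sigma2) + resid ** 2 / sigma2)
    return loglik_y + loglik_x
```

Both the particle updates of $\mathbf{x}$ and the gradient updates of $\boldsymbol{\theta}$ described next evaluate or differentiate exactly this quantity.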
An alternating maximization of the objective function can be written as follows: \begin{enumerate} \item Initialize $\boldsymbol{\theta}$ \item Until convergence on $\boldsymbol{\theta}$, \begin{enumerate} \item Maximize $\mathcal{L}_{\text{DPM}}$ over $\mathbf{X}$ using Particle Methods \item Maximize $\mathcal{L}_{\text{DPM}}$ over $\boldsymbol{\theta}$ \end{enumerate} \end{enumerate} This procedure is more efficient than the brute-force EM algorithm, since only the most likely single path $x_{1:T}$ needs to be stored. However, the maximization over $\boldsymbol{\theta}$ can still be expensive because of the large number of customer samples. Stochastic Gradient Descent (SGD) is a suitable solution to this type of large-scale learning problem. If we apply SGD on the second maximization step, we obtain: \begin{enumerate} \item Initialize $\boldsymbol{\theta}$ \item Until convergence on $\boldsymbol{\theta}$, \begin{enumerate} \item Maximize $\mathcal{L}_{\text{DPM}}$ over $\mathbf{X}$ \item Initialize $\boldsymbol{\theta}_{\text{SGD}} = \boldsymbol{\theta}$ \item Until convergence on $\boldsymbol{\theta}_{\text{SGD}}$, \begin{enumerate} \item Select a subset of customer samples, \begin{enumerate} \item Maximize $\mathcal{L}_{\text{DPM}}$ over $\boldsymbol{\theta}_{\text{SGD}}$ \end{enumerate} \end{enumerate} \end{enumerate} \end{enumerate} where $\boldsymbol{\theta}_{\text{SGD}}$ represents the inner-loop parameter in the SGD step. This procedure is the same as the previous procedure except for the SGD maximization part. To estimate $\boldsymbol{\theta}$, a subset of customers is uniformly sampled from the entire training dataset. The gradient of the objective function is calculated on the subset, and the inner-loop parameter is then updated with a gradient step. This inner loop continues until $\boldsymbol{\theta}_{\text{SGD}}$ converges. The maximization step over $\mathbf{X}$ still requires a particle method that scans the entire training dataset. 
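The per-customer maximization over $\mathbf{x}$ is the particle-filter step. A minimal bootstrap filter with multinomial resampling along these lines is sketched below; the function and its defaults are illustrative, not the production implementation:

```python
import numpy as np

def particle_filter_path(theta, y, r, m, P=500, seed=0):
    """Estimate the latent path x_1..x_T for one customer.

    y[t] holds the purchase indicator y_{t+1} observed after day t.
    Returns the per-step weighted particle mean as the path estimate.
    """
    c, phi, alpha, beta = theta
    rng = np.random.default_rng(seed)
    T = len(y)
    # Initialize particles around the noise-free steady state of x.
    particles = c / (1.0 - phi) + rng.standard_normal(P)
    x_hat = np.empty(T)
    for t in range(T):
        # s_{t+1} is deterministic given each particle's x_t.
        s = c + phi * particles + r[t] @ alpha + m[t] @ beta
        p = 1.0 / (1.0 + np.exp(-s))
        # Weight particles by the likelihood of the observed y_{t+1}.
        w = np.where(y[t] == 1, p, 1.0 - p)
        w = w / w.sum()
        x_hat[t] = np.sum(w * particles)              # filtered estimate of x_t
        # Multinomial resampling to slow particle degeneracy.
        idx = rng.choice(P, size=P, p=w)
        particles = s[idx] + rng.standard_normal(P)   # propagate to x_{t+1}
    return x_hat
```

Because $y_{t+1}$ depends on $x_t$ only through the deterministic $s_{t+1}$, the weighting and propagation steps can share the same $s$ values, which keeps the filter cheap per customer.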
Note that not all $\mathbf{x}^i$ samples are used in the SGD maximization step. The particle estimates of $\mathbf{x}^i$ can be generated on demand according to the selected subset as follows: \begin{enumerate} \item Initialize $\boldsymbol{\theta}$ \item Until convergence on $\boldsymbol{\theta}$, \begin{enumerate} \item Initialize $\boldsymbol{\theta}_{\text{SGD}} = \boldsymbol{\theta}$ \item Until convergence on $\boldsymbol{\theta}_{\text{SGD}}$, \begin{enumerate} \item Select a subset of samples, \begin{enumerate} \item Maximize $\mathcal{L}_{\text{DPM}}$ over $\mathbf{X}$ using $\boldsymbol{\theta}$ \item Maximize $\mathcal{L}_{\text{DPM}}$ over $\boldsymbol{\theta}_{\text{SGD}}$ \end{enumerate} \end{enumerate} \end{enumerate} \end{enumerate} Although $\mathbf{X}$ is now estimated only for the selected subset of samples, this procedure still has nested loops. The inner loop estimates the stochastic gradient parameter, and the outer loop resets the stochastic gradient parameter. We observe that the nested loops can be effectively removed, since in practice the inner loop converges within one or two iterations for customer subsets of size one. We now introduce the parameter estimation algorithm for DPM in Algorithm~\ref{algorithm:dpm}. \begin{algorithm} \SetAlgoLined \KwData{$\mathbf{Y}$} \KwResult{$\boldsymbol{\theta}$} Initialize $\boldsymbol{\theta}$\; \While{not converged on $\boldsymbol{\theta}$}{ Randomly pick a customer $\mathbf{y}^i = [y_1, y_2, \ldots, y_{T^i}]$\; $\boldsymbol{\theta} = \boldsymbol{\theta} + \gamma \partial_{\boldsymbol{\theta}} \mathcal{L}_{\text{DPM}} (\boldsymbol{\theta}, \argmax_{\mathbf{x}^i} \mathcal{L}_{\text{DPM}} (\boldsymbol{\theta}, \mathbf{x}^i))$\; } \caption{SGDPM: DPM parameter estimation algorithm using stochastic gradient}\label{algorithm:dpm} \end{algorithm} The SGDPM algorithm converges almost surely if $\sum_{v} (\gamma^{(v)})^2 < \infty$ and $\sum_{v} \gamma^{(v)} = \infty$. 
Under sufficient regularity conditions, the best convergence speed is achieved when $\gamma^{(v)} \sim v^{-1}$ \citep{Murata1998}. The algorithm converges to a local optimum; for potentially better solutions, one should try several different initializations. The inner $\argmax$ is estimated using particle methods, and the outer gradient can be obtained in a closed form. As an illustrative example, we show the gradient for the damping parameter $\phi$: \begin{small} \begin{align*} & \partial_{\phi} \sum_{t=1}^{T^i} \log \text{Pr}_{\boldsymbol{\theta}}(y_{t+1}^i \mid {x}_{t}^i, \mathbf{r}_t^i, \mathbf{m}_t^i ) + \log \text{Pr}_{\boldsymbol{\theta}}({x}_t^i \mid {x}_{t-1}^i, \mathbf{r}_{t-1}^i, \mathbf{m}_{t-1}^i)\\ & = \partial_{\phi} \sum_{t=1}^{T^i} \log (\frac{\exp(s_{t+1}^i )}{1+\exp(s_{t+1}^i )})^{y_{t+1}^i}(\frac{1}{1+\exp(s_{t+1}^i )})^{1-y_{t+1}^i} - \frac{1}{2}\log 2\pi \sigma^2 - \frac{(x_t^i - s_t^i )^2}{2\sigma^2}\\ & = \sum_{t=1}^{T^i} y_{t+1}^i x_{t}^i - \frac{x_{t}^i \exp(c + \phi x_{t}^i + \boldsymbol{\alpha}^\top \mathbf{r}_t^i + \boldsymbol{\beta}^\top \mathbf{m}_{t}^i)}{1+ \exp(c + \phi x_{t}^i + \boldsymbol{\alpha}^\top \mathbf{r}_t^i + \boldsymbol{\beta}^\top \mathbf{m}_{t}^i)} \\ &\quad\quad\quad\quad\quad + \frac{x_{t-1}^i (x_t^i - c - \phi x_{t-1}^i - \boldsymbol{\alpha}^\top \mathbf{r}_{t-1}^i - \boldsymbol{\beta}^\top \mathbf{m}_{t-1}^i)}{\sigma^2} \end{align*} \end{small} The SGD update for the damping parameter $\phi$ is as follows: \begin{align*} {\phi}^{(v+1)} = {\phi}^{(v)} + \gamma^{(v)} \partial_\phi \mathcal{L}_{\text{DPM}} (\boldsymbol{\theta}^{(v)}, \mathbf{x}^i ) \end{align*} where the gradient is added, since we want to maximize the objective function. SGD updates for the other parameters can be obtained similarly; we omit the detailed equations to conserve space. \section{Empirical Study}\label{sec:empirical} In this section, we evaluate DPM using a real dataset from one of the largest insurance companies in the U.S. 
We first give an overview of the dataset and its basic statistics, and show evidence for the time dependency of marketing effects. Baseline models are constructed using logistic regression models with lagged variables. Finally, we compare the parameters and predictive performances of DPM and the baseline models. \subsection{Dataset Overview} The dataset contains six months of records of marketing touches, responses, and purchases for 13 different kinds of financial products, collected over July--December 2012. There are six different types of marketing touches: three semi-targetable $\mathbf{r}_t^i = (r_{1t}^i,r_{2t}^i,r_{3t}^i)$ and three targetable $\mathbf{m}_t^i = (m_{1t}^i,m_{2t}^i,m_{3t}^i)$ marketing touches. Marketing touch types are masked for confidentiality reasons. The records are samples from the company's customer database, stratified by product over customers who had not yet bought the respective product at the beginning of our six-month time window. The company believes that, if personalized marketing touches can engage these new customers, they tend to turn into loyal customers. Among the 13 product types, for illustrative purposes, DPM is applied to the first purchases of two products: Product A and Product B. Customers are partitioned into training (50\%) and test (50\%) datasets. 
\begin{table}[h] \caption{Data Format }\label{tab:format} \begin{center} \begin{tabular}{ l | l | l l l l l l | l } \hline id & time & $r.1$ & $r.2$ & $r.3$ & $m.1$ & $m.2$ & $m.3$ & y \\ \hline\hline 1847410 & July 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\ 1847410 & July 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ & $\vdots$ & & & & & & & \\ 1847410 & Dec 2 & 1 & 2 & 0 & 0 & 0 & 0 & 0\\ 1847410 & Dec 3 & 1 & 0 & 0 & 0 & 0 & 0 & \textbf{1} (purchase) \\ \hline 1352132 & July 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ & $\vdots$ & & & & & & & \\ \end{tabular} \end{center} \end{table} \begin{table}[h] \caption{Dataset Overview }\label{tab:data} \begin{center} \begin{tabular}{ l l l l l } \hline & \multicolumn{2}{l }{Product A} & \multicolumn{2}{l }{Product B} \\ & \multicolumn{2}{l }{(70K customers)} & \multicolumn{2}{l }{(20K customers)} \\ Variable & Mean & Max & Mean & Max \\ \hline\hline $r_1$ & 0.0010 & 2 & 0.0115 & 3\\ $r_2$ & 0.0044 & 3 & 0.0057 & 2\\ $r_3$ & 0.0004 & 1 & 0.0027 & 2\\ $m_1$ & 0.0165 & 1 & 0.0168 & 1 \\ $m_2$ & 0.0354 & 1 & 0.0229 & 1\\ $m_3$ & 0.0003 & 1 & 0.0032 & 1 \\ $y$ & 0.0001 & 1 & 0.0004 & 1 \\ \hline \end{tabular} \end{center} \end{table} The format of the data is shown in Table~\ref{tab:format}. Each row is identified by the combination of an ID and a time stamp. Marketing touches are recorded as counts, e.g., $r.2=2$ means two events of the same marketing type in a day. Table~\ref{tab:data} shows basic statistics of the data. The dataset for Product A contains about 70,000 customers, and the dataset for Product B about 20,000 customers. Both the marketing touches and the targets are highly sparse. For example, the daily purchase rate is 0.01\% for Product A and 0.04\% for Product B. One of the main motivations for designing DPM was the time dependency of marketing effects. The positive effects of past marketing touches are easily illustrated: Figure~\ref{fig:evidence} shows the histogram of the day of the last marketing exposure before purchase.
The x-axis represents time at a daily resolution, where $x=0$ is the time of purchase. As can be seen, past marketing events are correlated with purchases. Noticeably, the temporal effects show exponential decay, which suggests an underlying autoregressive process. \begin{figure} \center \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=1\textwidth]{./BK_CREDIT_CARD_evidence} \caption{Product A} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=1\textwidth]{./PC_AUTO_INS_evidence} \caption{Product B} \end{subfigure} \caption{Evidence of time-lagged marketing effects. Each cell represents a different marketing touch. }\label{fig:evidence} \end{figure} \subsection{Baseline Models} We build three logistic regression models using lagged variables as follows: \begin{small} \begin{align*} \text{(glm)}:\quad\quad& E[y_{t+1} \mid \mathbf{r}_{t}, \mathbf{m}_{t}] = \text{logit}^{-1}(c + \boldsymbol{\alpha}_0^\top \mathbf{r}_t + \boldsymbol{\beta}_0^\top \mathbf{m}_t)\\ \text{(glm.lag1)}:\quad\quad& E[y_{t+1} \mid \mathbf{r}_{t:(t-1)}, \mathbf{m}_{t:(t-1)}] = \text{logit}^{-1}(c + \sum_{l = 0}^{1} \boldsymbol{\alpha}_l^\top \mathbf{r}_{t-l} + \sum_{l = 0}^{1} \boldsymbol{\beta}_l^\top \mathbf{m}_{t-l} ) \\ \text{(glm.lag2)}:\quad\quad& E[y_{t+1} \mid \mathbf{r}_{t:(t-2)}, \mathbf{m}_{t:(t-2)}] = \text{logit}^{-1}(c + \sum_{l = 0}^{2} \boldsymbol{\alpha}_l^\top \mathbf{r}_{t-l} + \sum_{l = 0}^{2}\boldsymbol{\beta}_l^\top \mathbf{m}_{t-l} ) \end{align*} \end{small} where $\boldsymbol{\alpha}_l$ and $\boldsymbol{\beta}_l$ represent the effects of the $l$-lagged marketing touches. These models are estimated using the \texttt{glm} method in \texttt{R} 2.15.3. Since our datasets have highly imbalanced target ratios, the parameters of the models with lagged variables, especially \texttt{glm.lag1}, sometimes do not converge. We do not show models with deeper lags $(l>2)$, since those models are much harder to estimate.
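The lagged-variable construction behind the baselines can be sketched as follows (a minimal illustration on a toy daily series for one customer; the function name is ours, and in practice the fit itself is done with \texttt{glm} in \texttt{R} as described above):

```python
def build_lagged_rows(r, m, y, max_lag=2):
    """Build (features, target) pairs for a lagged logistic regression:
    target y_{t+1} regressed on r_{t-l}, m_{t-l} for l = 0..max_lag."""
    rows = []
    # t indexes days; need max_lag days of history and one day of lookahead
    for t in range(max_lag, len(y) - 1):
        feats = []
        for l in range(max_lag + 1):
            feats.extend(r[t - l])   # semi-targetable touches at lag l
            feats.extend(m[t - l])   # targetable touches at lag l
        rows.append((feats, y[t + 1]))
    return rows

# toy data: 5 days, 3 semi-targetable and 3 targetable channels
r = [[0,0,0], [1,0,0], [0,2,0], [0,0,0], [1,0,0]]
m = [[0,0,0], [0,0,0], [1,0,0], [0,0,0], [0,1,0]]
y = [0, 0, 0, 0, 1]

rows = build_lagged_rows(r, m, y, max_lag=2)
# day t=2 predicts y_3: lags 0..2 cover days 2, 1, 0
assert rows[0][0] == [0,2,0, 1,0,0, 1,0,0, 0,0,0, 0,0,0, 0,0,0]
assert rows[0][1] == 0
# day t=3 predicts y_4 = 1 (the purchase)
assert rows[1][1] == 1
```

With `max_lag=2` this yields the design matrix of \texttt{glm.lag2}; the deeper the lag, the wider (and more collinear) the matrix, which is why the deeper-lag models are harder to estimate.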
\subsection{Estimated Parameters} We estimate the parameters of DPM using the SGDPM algorithm. Figure~\ref{fig:sgdpm} shows the estimated parameters over iterations. Recall that each iteration of SGDPM processes the data block of one customer. Customers who purchased the products are selected with a higher probability, since otherwise SGDPM may not see any positive examples before it converges. As can be seen, the parameters almost converge after visiting approximately two thousand customers. The damping parameters for Product A and Product B are 0.36 and 0.53, respectively. Marketing effects diminish by about half over the next day, but they still positively affect purchases. The estimated parameters from the baseline models are significantly different from the DPM parameters. Figure~\ref{fig:coeff} compares the coefficients from two models: DPM and \texttt{glm}. As can be seen in the figure, \texttt{glm} outputs negative weights for some of the marketing effects, e.g., $\beta_{01} < 0 $ and $\beta_{03} < 0 $ for Product A, and $\beta_{01} < 0 $ and $\beta_{02} < 0 $ for Product B. These negative coefficients mainly appear on the targetable marketing touches $\mathbf{m}_t$. A viable explanation is that the company is already targeting a specific group of customers whose purchase rates are lower than those of the general population. On the other hand, DPM shows all positive coefficients for the marketing touches. DPM tracks customers' marketing histories and adjusts the current offsets $x_t$ for purchases. Thus, the coefficients from DPM are more robust to sampling biases than those of the baseline models. Table~\ref{tab:productb_coeff} shows the estimated parameters of the three baseline models for Product B. The negative effects have higher p-values than the positive effects. For example, the p-value of \texttt{glm.lag1} $\beta_{12}$ is 0.974.
Some of the lagged variables have significant effects on purchases, which supports our claim that past marketing touches affect purchase behaviors. These GLM models put more weight on the semi-targetable marketing touches than on the targetable ones. Unless the effects of the targetable marketing touches are estimated using control and test groups, these results do not provide insights for building actionable marketing strategies. \begin{figure} \center \begin{subfigure}[b]{1\textwidth} \includegraphics[width=1\textwidth]{./BK_CREDIT_CARD_beta_iteration} \caption{Product A} \end{subfigure} \begin{subfigure}[b]{1\textwidth} \includegraphics[width=1\textwidth]{./PC_AUTO_INS_beta_iteration} \caption{Product B} \end{subfigure} \caption{SGD estimation of the DPM parameters.}\label{fig:sgdpm} \end{figure} \begin{figure} \center \includegraphics[width=0.9\textwidth]{./coeff_comparison} \caption{Estimated Parameters from DPM and GLM.}\label{fig:coeff} \end{figure} \begin{table}\caption{Estimated Parameters for Product B from Generalized Linear Models. }\label{tab:productb_coeff} \begin{small} \begin{center} \begin{tabular}{ c c c c c c c } \hline & \multicolumn{2}{c }{\textbf{glm}} & \multicolumn{2}{c }{\textbf{glm.lag1}} & \multicolumn{2}{c }{\textbf{glm.lag2}} \\ \hline Param.
& Estimate & p-value & Estimate & p-value & Estimate & p-value \\\hline\hline $\alpha_{01}$ & 2.02031 & $<$ 2e-16 & 1.90741 & $<$ 2e-16 & 1.84673 & $<$ 2e-16\\ $\alpha_{02}$ & 2.74625 & $<$ 2e-16 & 2.66615 &$<$ 2e-16 & 2.59654 & $<$ 2e-16 \\ $\alpha_{03}$ & 3.16096 & $<$ 2e-16 & 3.14794 & $<$ 2e-16& 3.18083 & $<$ 2e-16 \\\hline $\beta_{01}$ & -0.59591 & 0.185 & -0.59042 & 0.18922& -0.59176 & 0.188291 \\ $\beta_{02}$ & -0.32632 & 0.308 & -0.31584 & 0.32377&-0.33462 & 0.296802\\ $\beta_{03}$ & 1.30361 & 1.74e-05 & 1.25721 & 4.05e-05& 1.25432 & 4.52e-05 \\\hline\hline $\alpha_{11}$ & - & - & 0.66684 &0.00018 &0.51507 & 0.007232 \\ $\alpha_{12}$ & - & - & 0.60640&0.05751 & 0.55132 & 0.086354\\ $\alpha_{13}$ & - & - & 1.63553 & 5.69e-05& 1.65591 & 4.32e-05\\\hline $\beta_{11}$ & - & - & -1.25127& 0.21076& -1.24799 & 0.212039 \\ $\beta_{12}$ & - & - & -0.01249& 0.97401 & -0.04239 & 0.911988\\ $\beta_{13}$ & - & - & 1.21296 & 0.01544 & 1.10564 & 0.029452\\\hline\hline $\alpha_{21}$ & - &- & -& -& 0.52325 & 0.013828\\ $\alpha_{22}$ &- &- & -& -& 0.52655 & 0.149112 \\ $\alpha_{23}$ &- & -& -& -&1.53267 & 0.000412\\\hline $\beta_{21}$ &- & -& -& -& -0.11958 & 0.836673 \\ $\beta_{22}$ &- &- &- &- & 0.62343& 0.025465\\ $\beta_{23}$ &- &- &- &- & 1.71137& 2.53e-05 \\\hline \end{tabular} \end{center} \end{small} \end{table} \subsection{Predictive Performance} We measure the predictive performance of DPM and the baseline models on hold-out test datasets. The customers in the test datasets do not overlap with those in the training datasets. Performance is measured using Receiver Operating Characteristic (ROC) curves. Figure~\ref{fig:roc} shows the measured ROC curves for both Product A and Product B. As can be seen, DPM's areas under the curve are significantly higher than those of the baseline models.
Although their performances are comparable when the False Positive Ratio is close to zero (FPR $\approx$ 0), the baseline models cannot capture the purchase probabilities of the customers who had (latent) intentions of purchasing the products. In other words, the logistic regression models are trained directly on the observed variables, with no latent variable involved. On the other hand, DPM, a latent-variable time series model, simultaneously estimates both the intention of purchasing and the parameters of the model. The latent purchase intentions are captured through marketing touches and their corresponding responses: the customers who clicked display ads, who received referrals, or who responded positively to promotional phone calls. By modeling these latent intentions, DPM achieves better performance curves than the traditional lagged-variable models. The visualization of the latent variable provides a different perspective on purchase behaviors. Figure~\ref{fig:illust} shows the dynamic purchase probabilities of one customer under the two models. A customer's propensity to purchase propagates and accumulates over time. The predicted purchase probability from DPM shows the process of building up a purchase decision. On the 28th and 29th days, the customer received and responded to the $r_1$ marketing touch, and on the 30th day, he was targeted by the $m_2$ marketing touch. Through this process of marketing touches and responses, his propensity to purchase Product B gradually increased. In contrast, the baseline model (\texttt{glm}) does not capture this cumulative process, and it actually assigns a lower probability score on the purchase day than on the day before.
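The accumulation effect described above can be illustrated with a noise-free mean path of the latent state (hypothetical parameter values and a single touch channel; in DPM the path would be filtered with particle methods rather than propagated deterministically):

```python
import math

def logistic(s):
    return 1.0 / (1.0 + math.exp(-s))

def mean_path(c, phi, beta, m, x0):
    """Noise-free propagation of the latent propensity:
    x_{t+1} = c + phi * x_t + beta * m_t, purchase prob = logistic(x_{t+1})."""
    xs, ps = [x0], []
    for mt in m:
        x_next = c + phi * xs[-1] + beta * mt
        xs.append(x_next)
        ps.append(logistic(x_next))
    return ps

# hypothetical parameters: baseline intercept c, damping phi, touch effect beta
c, phi, beta = -6.0, 0.5, 2.0
x0 = c / (1 - phi)                 # steady state without touches (= -12)
m = [0, 0, 1, 0, 0, 0]             # one marketing touch on day 3
p = mean_path(c, phi, beta, m, x0)

assert p[2] > p[1]                 # the touch raises the purchase probability
assert p[2] > p[3] > p[4] > p[5]   # and the effect decays with damping phi
```

The touch bumps the latent state up, and the damping $\phi$ makes that bump decay geometrically, exactly the cumulative-then-decaying behavior a memoryless \texttt{glm} cannot reproduce.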
\begin{figure} \center \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=1\textwidth]{./BK_CREDIT_CARD_ROC} \caption{Product A} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=1\textwidth]{./PC_AUTO_INS_ROC} \caption{Product B} \end{subfigure} \caption{Receiver Operating Characteristic curves from the test datasets.}\label{fig:roc} \end{figure} \begin{figure}[h] \center \includegraphics[width=0.9\textwidth]{./illust} \caption{Visualization of dynamic purchase probability.}\label{fig:illust} \end{figure} \section{Concluding Remarks}\label{sec:conclusion} In this paper, we proposed a state space model, DPM, to address three main challenges in direct marketing: \textit{which channel} to use, \textit{which offer} to make, and \textit{when} to offer. DPM is a latent-variable time series model that utilizes demographics and both the marketing and purchase histories of a customer. To estimate the parameters of the model, a new statistical methodology, SGDPM, was developed. This methodology combines state-space models with a stochastic gradient descent approach, resulting in fast estimation of the model coefficients from big data. The experimental results on a real dataset showed that DPM can effectively forecast the time of purchase. We used only six marketing variables, three semi-targetable and three targetable marketing touches, to keep the focus on the modeling idea of DPM. In practice, combining demographic information with DPM can significantly improve the predictive performance \citep{Risselada2013}. For example, marketing demographic segments can be used in a multi-level modeling approach; each segment would have slightly different parameters. Social network information \citep{Palmer2009} can also improve the predictive performance of the model.
Specifically, information about customers' important life events, such as a marriage or a new baby, can easily be used to extend the model as follows: \begin{align*} s_{t+1}^i = c + \phi x_{t}^i + \boldsymbol{\alpha}^\top \mathbf{r}_t^i + \boldsymbol{\beta}^\top \mathbf{m}_{t}^i + \boldsymbol{\psi}^\top \underbrace{\mathbf{e}_t^i }_{\mathclap{ \text{life events}}} \end{align*} where $\mathbf{e}_t^i$ represents a vector of life events. Some life events, such as having a family, may increase the propensity to buy financial products ($\psi > 0$), whereas a better deal from a rival company would decrease it ($\psi < 0$). Although we showed the predictive performance of DPM using hold-out datasets, the true predictive performance of the model should also be confirmed by thorough A/B testing or randomized controlled experiments. The scores from DPM and other existing models can also be combined to increase purchase rates. To build a personalized marketing strategy, customer feedback needs to be fully utilized. An IT infrastructure for daily monitoring of customers should be the first step toward using these types of dynamic models in practice. Extensions of our approach and further discussion of implementation methods are left as future work. \bibliographystyle{abbrvnat}
\section{Introduction} Following \cite[Section 5.3]{Co-93}, we define the Hurwitz class number $H(N)$, where $N$ is a non-negative integer, as follows. (1) If $N\equiv 1,2 \pmod 4$, then $H(N)=0$. (2) If $N=0$, then $H(0)=-1/12$. (3) If $N>0$ and $N\equiv 0,3 \pmod 4$, then $H(N)$ is the class number of positive definite binary quadratic forms of discriminant $-N$, with those classes that contain a multiple of $x^2+y^2$ or $x^2+xy+y^2$ counted with weight $1/2$ or $1/3$, respectively. For the modular behaviors, we use Kronecker's notation \cite{Kr-60} $F(n)$, which satisfies $F(8n+7)=H(8n+7)$ and $F(8n+3)=3H(8n+3)$; see also \cite{Wa-35}, where $H(n)$ is denoted by $F_1(n)$. The generating functions of $F(an+b)$ are defined as $$ \mathscr{F}_{a,b}(q):=\sum_{n=0}^{\infty}F(an+b)q^n. $$ The generating functions of $H(n)$ (or $F(n)$) have been studied by many authors. For example, Humbert \cite{Hu-07} established many identities using the theory of quadratic forms. Watson \cite{Wa-35} studied the transformations of such generating functions via Mordell integrals. Hirzebruch and Zagier \cite[Section 2.2]{Hi-Za-76} discussed the relation between these generating functions and certain weight $3/2$ modular forms. On the other hand, Zwegers \cite{Zw-02} found that mock theta functions are related to weight $1/2$ modular forms. In this paper, we find analogous results connecting the generating functions of Hurwitz class numbers and certain mock theta functions. Mock theta functions were first mentioned by Ramanujan in his last letter to Hardy in 1920 as Eulerian forms. To study their modular behaviors, many authors, such as Watson \cite{Wa-36,Wa-37}, Andrews \cite{An-86}, Andrews and Hickerson \cite{An-Hi-91}, Berndt and Chan \cite{Be-Ch-07}, Choi \cite{Ch-99,Ch-00,Ch-02,Ch-07}, Garvan \cite{Ga-19}, Gordon and McIntosh \cite{Go-Mc-00}, Hickerson \cite{Hi-88} and Zwegers \cite{Zw-09}, found Appell-Lerch series and Hecke-Rogers series representations of mock theta functions.
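The definition above can be made concrete by enumerating reduced binary quadratic forms. The following sketch (our own illustration; the function names are not from the literature) computes $H(N)$ exactly with rational weights, together with the Kronecker values $F(8n+7)=H(8n+7)$ and $F(8n+3)=3H(8n+3)$:

```python
from fractions import Fraction
from math import isqrt

def hurwitz(N):
    """Hurwitz class number H(N), computed by enumerating all reduced
    (not necessarily primitive) forms a*x^2 + b*x*y + c*y^2 of
    discriminant b^2 - 4ac = -N, with weight 1/2 for (m, 0, m)
    (a multiple of x^2 + y^2) and 1/3 for (m, m, m)."""
    if N == 0:
        return Fraction(-1, 12)
    if N % 4 in (1, 2):
        return Fraction(0)
    total = Fraction(0)
    # a reduced form has 3a^2 <= 4ac - b^2 = N, so a <= sqrt(N/3)
    for a in range(1, isqrt(N // 3) + 1):
        for b in range(-a, a + 1):
            if (b * b + N) % (4 * a):
                continue
            c = (b * b + N) // (4 * a)
            if c < a:
                continue
            # reduced: |b| <= a <= c, and b >= 0 when |b| = a or a = c
            if b < 0 and (-b == a or a == c):
                continue
            if a == b == c:
                total += Fraction(1, 3)
            elif a == c and b == 0:
                total += Fraction(1, 2)
            else:
                total += 1
    return total

def F(n):
    """Kronecker's F(n) on the residue classes used in this paper."""
    if n % 8 == 7:
        return hurwitz(n)
    if n % 8 == 3:
        return 3 * hurwitz(n)
    raise ValueError("F(n) is used here only for n = 3, 7 mod 8")

# first few values, matching the weighted class-number definition
vals = {0: Fraction(-1, 12), 3: Fraction(1, 3), 4: Fraction(1, 2),
        7: 1, 8: 1, 11: 1, 12: Fraction(4, 3), 15: 2, 16: Fraction(3, 2)}
assert all(hurwitz(N) == v for N, v in vals.items())
```

For instance, `hurwitz(23)` counts the three reduced forms $(1,1,6)$, $(2,1,3)$, $(2,-1,3)$, giving $H(23)=3$, and `F(3)` returns $3\cdot H(3)=1$.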
In a recent work, Hickerson and Mortenson \cite{Hi-Mo-14} used ``$m$-blocks'' to list the Appell-Lerch series of all the mock theta functions. Since the generating functions $\mathscr{F}_{a,b}(q)$ are ``mock modular forms'' of small half-integral weight, we study them by following one of the methods used for mock theta functions, namely finding Hecke-Rogers series for some of the $\mathscr{F}_{a,b}(q)$. Remarkably, the Hecke-Rogers series we find are quite close to those of certain mock theta functions. For example, from \cite{Ch-Ga} we know that $\mathscr{F}_{8,3}(q)$, $\mathscr{F}_{24,7}(q)$, $\mathscr{F}_{24,11}(q)$ and $\mathscr{F}_{24,19}(q)$ are nice eta-quotients, and that $\mathscr{F}_{4,-1}(q)$, $\mathscr{F}_{8,-1}(q)$, $\mathscr{F}_{12,-1}(q)$ and $\mathscr{F}_{24,-1}(q)$ are associated with the eighth order mock theta function $V_1(q)$, the second order mock theta function $A(q)$, the sixth order mock theta function $\sigma(q)$ and the sixth order mock theta function $\phi_{-}(q)$ modulo 4, respectively. In Section 2, we introduce the preliminaries for our work, including Appell-Lerch series, Garvan's MAPLE packages and derivatives of certain theta functions. In Section 3, we find further connections between Hurwitz class numbers and certain mock theta functions. Let $$ A(q):=\sum_{n=0}^{\infty}\frac{q^{(n+1)^2}(-q;q^2)_n}{(q;q^2)^2_{n+1}} $$ be one of the second order mock theta functions (see \cite{Mc-07}), where we always assume that $|q|<1$ and, as usual, $$ (a;q)_n=\prod_{k=1}^{n}(1-aq^{k-1}). $$ Cui, Gu and Hao \cite[Eq. (2.24)]{Cu-Gu-Ha-18} showed the Hecke-Rogers series of $A(q)$, which we rewrite as \beq \label{Aq0} \frac{J_1^2}{J_2}A(q)=\sum_{1\leq j\leq |n|}sg(n)(-1)^{n-1}q^{2n^2-n-j^2+j}, \eeq where, as usual, $sg(n)=1$ if $n\geq 0$ and $sg(n)=-1$ if $n<0$, and $$ J_k:=(q^k;q^k)_\infty.
$$ For $\mathscr{F}_{8,-1}(q)$, there is a similar identity \beq \label{H80} \frac{J_1^2}{J_2}\mathscr{F}_{8,-1}(q)=\sum_{1\leq j\leq |n|}sg(n)(-1)^{j-1}(2j-1)q^{2n^2-n-j^2+j}, \eeq which appeared in \cite[P.376]{Hu-07}. \eqref{H80} is quite close to \eqref{Aq0}, and we find a $z$-analog of these two identities \beq \label{H8z0} (zq,q/z,q^2;q^2)_\infty F_8(z,q)=\sum_{1\leq j\leq |n|}sg(n)(-1)^{j-1}q^{2n^2-n-j^2+j}\cdot \frac{z^{1-j}-z^j}{1-z}, \eeq where \beq \label{H8zd0} F_8(z,q)=\sum_{n=0}^{\infty}\frac{(-1)^n(q;q^2)_nq^{(n+1)^2}}{(zq,q/z;q^2)_{n+1}}. \eeq Our proof is based on the formulas for the generating functions of Hurwitz class numbers by Alfes, Bringmann and Lovejoy \cite{Al-Br-Lo-11}, and then uses the formulas for Appell-Lerch series by Hickerson and Mortenson \cite{Hi-Mo-14}. \cite[Eq. (1.5)]{Al-Br-Lo-11} showed that $F_8(1,q)=\mathscr{F}_{8,-1}(q)$ as a special case of $N^o(a,b;z;q)$ in their paper. Setting $(z,q)=(1,q)$ and $(z,q)=(-1,-q)$, and noting that $F_8(-1,-q)=-A(q)$, \eqref{H8z0} yields \eqref{H80} and \eqref{Aq0}, respectively. Furthermore, we also find an Appell-Lerch series of $F_8(z,q)$: $$ (z^2q^2,q^2/z^2,q^4;q^4)_\infty F_8(z,q)=\sum_{k=-\infty}^{\infty}\frac{(-1)^{k-1}q^{2k^2}}{1+q^{4k-1}}\cdot \frac{z^{1-2k}-z^{2k}}{1-z}. $$ Recall the eighth order mock theta function $V_1(q)$ \cite{Go-Mc-00} $$ V_1(q)=\sum_{n=0}^{\infty}\frac{q^{(n+1)^2}(-q;q^2)_n}{(q;q^2)_{n+1}}, $$ and two sixth order mock theta functions $\sigma(q)$ \cite{An-Hi-91} and $\phi_{-}(q)$ \cite{Be-Ch-07} $$ \sigma(q)=\sum_{n=0}^{\infty}\frac{q^{(n+1)(n+2)/2}(-q;q)_n}{(q;q^2)_{n+1}}, $$ $$ \phi_{-}(q)=\sum_{n=1}^{\infty}\frac{q^n(-q;q)_{2n-1}}{(q;q^2)_n}.
$$ We also find and prove Hecke-Rogers series of $\mathscr{F}_{a,-1}(q)$ $(a=4,12,24)$ which are close to these three mock theta functions, namely $$ \frac{J_1J_4}{J_2}\mathscr{F}_{4,-1}(q)=\sum_{1\leq j\leq n}(-1)^{n-1+j(j-1)/2}nq^{n^2-j(j-1)/2}, $$ $$ \frac{J_1^2}{J_2}\mathscr{F}_{12,-1}(q)=\sum_{1-|n|\leq j\leq |n|}sg(n)(-1)^{j-1}(4n-1)q^{4n^2-2n-3j^2+2j}, $$ and $$ \frac{J_1^2}{J_2}\mathscr{F}_{24,-1}(q)=\sum_{1-|n|\leq j\leq |n|}sg(n)(-1)^{n-1}(4j-1)q^{3n^2-n-2j^2+j}. $$ In the final section, we prove combinatorial interpretations of $F(4n-1)$ and $H(8n-1)$ that appear on the OEIS. \section{Preparation} \subsection{Appell-Lerch series} Following \cite{Hi-Mo-14}, we define $$ j(x;q):=(x,q/x,q;q)_\infty=\sum_{n=-\infty}^{\infty}(-1)^nq^{n(n-1)/2}x^n, $$ where, as usual, $(a_1,a_2,...,a_k;q)_n:=(a_1;q)_n(a_2;q)_n...(a_k;q)_n$, the Appell-Lerch series $$ m(x,q,z):=\frac{1}{j(z;q)}\sum_{r=-\infty}^{\infty}\frac{(-1)^rq^{r(r-1)/2}z^r}{1-q^{r-1}xz}, $$ the ``building block'' of mock theta functions $$ f_{a,b,c}(x,y,q):=\sum_{sg(r)=sg(s)}sg(r)(-1)^{r+s}x^ry^sq^{ar(r-1)/2+brs+cs(s-1)/2}, $$ and \begin{align*} &g_{a,b,c}(x,y,q,z_1,z_0)\\ :=&\sum_{t=0}^{a-1}(-y)^tq^{ct(t-1)/2}j(q^{bt}x;q^a)m(-q^{ab(b+1)/2-ca(a+1)/2-t(b^2-ac)}\frac{(-y)^a}{(-x)^b},q^{a(b^2-ac)},z_0)\\ &+\sum_{t=0}^{c-1}(-x)^tq^{at(t-1)/2}j(q^{bt}y;q^c)m(-q^{cb(b+1)/2-ac(c+1)/2-t(b^2-ac)}\frac{(-x)^c}{(-y)^b},q^{a(b^2-ac)},z_1). \end{align*} We will use formulas for Appell-Lerch series from \cite{Hi-Mo-14}, such as the following theorems. Setting $n=1$ in \cite[Theorem 1.6]{Hi-Mo-14} gives: \begin{theorem} \label{mthe16} For generic $x,y\in \mathbb{C^*}$ $$ f_{1,2,1}(x,y,q)=g_{1,2,1}(x,y,q,y/x,x/y).
$$ \end{theorem} Setting $n=1$ in \cite[Theorem 1.11]{Hi-Mo-14} gives: \begin{theorem} \label{mthe111} For generic $x,y\in \mathbb{C^*}$ $$ f_{1,5,1}(x,y,q)=g_{1,5,1}(x,y,q,y/x,x/y)-\Theta_{1,4}(x,y,q), $$ where $$ \Theta_{1,4}(x,y,q):=\frac{-qxyj(y/x;q^{24})}{j(y/x;q^{24})j(-q^{10}x^4;q^{24})j(-q^{10}y^4;q^{24})}\left(j(q^4;q^{16})S_1-qj(q^8;q^{16})S_2\right), $$ with \begin{align*} S_1:=&\frac{j(q^{22}x^2y^2;q^{24})j(-q^{12}y/x;q^{24})j(q^5xy;q^{12})}{J_{12}^3J_{48}}\cdot \bigg(j(-q^{10}x^2y^2;q^{24})j(q^{12}y^2/x^2;q^{24})J_{24}^2\\ &+\frac{q^5x^2j(-q^{22}x^2y^2;q^{24})j(q^{12}y/x;q^{24})^2j(-y/x;q^{24})^2}{J_{24}}\bigg), \end{align*} \begin{align*} S_2:=&\frac{j(q^{10}x^2y^2;q^{24})j(-y/x;q^{24})j(q^{11}xy;q^{12})}{J_{12}^2}\\ &\cdot \bigg(\frac{q^2j(-q^{10}x^2y^2;q^{24})j(q^{12}y^2/x^2;q^{24})J_{48}}{yJ_{24}}+\frac{qxj(-q^{22}x^2y^2;q^{24})j(q^{24}y^2/x^2;q^{48})^2}{J_{48}}\bigg). \end{align*} \end{theorem} \cite[Proposition 3.1]{Hi-Mo-14} lists some basic properties of $m(x,q,z)$: \begin{prop} \label{mpro31} For generic $x,z \in \mathbb{C^*}$ $$ m(x,q,z)=m(x,q,qz), $$ $$ m(x,q,z)=x^{-1}m(x^{-1},q,z^{-1}), $$ $$ m(qx,q,z)=1-xm(x,q,z), $$ $$ m(x,q,z)=1-q^{-1}xm(q^{-1}x,q,z), $$ $$ m(x,q,z)=x^{-1}-x^{-1}m(qx,q,z). $$ \end{prop} \cite[Theorem 3.3]{Hi-Mo-14} is an important identity: \begin{theorem} \label{mthe33} For generic $x,z_0,z_1\in \mathbb{C^*}$ $$ m(x,q,z_1)-m(x,q,z_0)=\frac{z_0J_1^3j(z_1/z_0;q)j(xz_0z_1;q)}{j(z_0;q)j(z_1;q)j(xz_0;q)j(xz_1;q)}. $$ \end{theorem} \cite[Corollary 3.7]{Hi-Mo-14} is also needed: \begin{cor} \label{mcor37} For generic $x,z\in \mathbb{C^*}$ $$ m(x,q,z)=m(-qx^2,q^4,z^4)-\frac{x}{q}m(-x^2/q,q^4,q^4)-\frac{J_2J_4j(-xz^2;q)j(-xz^3;q)}{xj(xz;q)j(z^4;q^4)j(-qx^2z^4;q^2)}. $$ \end{cor} \subsection{Garvan's MAPLE package} Define the usual Atkin $U_p$ operator, which acts on a formal power series $$ f(q)=\sum_{n\in \mathbb{Z}}a(n)q^n, $$ by $$ U_p(f(q))=\sum_{n\in \mathbb{Z}}a(pn)q^n.
$$ We will use Garvan's MAPLE programs to prove identities in this paper. The programs rely on the theory of modular functions; for details, see \cite[Section 2]{Ch-Ch-Ga}. Identities involving only eta-quotients (possibly with the $U_p$ operator) can be proved algorithmically by the ETA package; see \beq \label{eta} \text{https://qseries.org/fgarvan/qmaple/ETA/} \eeq \subsection{Derivative of theta functions} We also need the derivatives of certain theta functions. \begin{lemma} \label{pthe1} \beq \label{pj21} \frac{d}{dz}j(\pm zq;q^2)\big|_{z=1}=0. \eeq \end{lemma} \begin{proof} By $$ j(zq;q^2)=\sum_{n=-\infty}^{\infty}(-1)^nq^{n^2}z^n, $$ we have $$ \frac{d}{dz}j(zq;q^2)\big|_{z=1}=\sum_{n=-\infty}^{\infty}(-1)^nnq^{n^2}=0, $$ and, replacing $q$ by $-q$, $$ \frac{d}{dz}j(-zq;q^2)\big|_{z=1}=\sum_{n=-\infty}^{\infty}nq^{n^2}=0. $$ \end{proof} Using the results of Oliver \cite[Theorem 1.1]{Ol-13}, we can prove more identities like \eqref{pj21}. \begin{lemma} \label{pthe2} \begin{align} \label{pj31} \frac{d}{dz}\left(\frac{1}{z}j(z^6q;q^3)\right)\bigg|_{z=1}&=-\frac{J_1^4}{J_3}-9q\frac{J_9^3J_1}{J_3},\\ \label{pj31m} \frac{d}{dz}\left(\frac{1}{z}j(-z^6q;q^3)\right)\bigg|_{z=1}&=-\frac{J_1^5}{J_2^2}. \end{align} \end{lemma} \begin{proof} As in Lemma \ref{pthe1}, $$ \frac{d}{dz}\left(\frac{1}{z}j(z^6q;q^3)\right)\bigg|_{z=1}=\sum_{n=-\infty}^{\infty}(-1)^n(6n-1)q^{n(3n-1)/2}. $$ By the result of Oliver \cite[Theorem 1.1]{Ol-13}, we have $$ J_1^3=\sum_{n=1}^{\infty}(-1)^{n-1}(2n-1)q^{n(n-1)/2}. $$ Thus, by the 3-dissection of $J_1^3$, $$ U_3(J_1^3)=\sum_{n=-\infty}^{\infty}(-1)^{n-1}(6n-1)q^{n(3n-1)/2}. $$ Then $$ \frac{d}{dz}\left(\frac{1}{z}j(z^6q;q^3)\right)\bigg|_{z=1}=-U_3(J_1^3)=-\frac{J_1^4}{J_3}-9q\frac{J_9^3J_1}{J_3}, $$ where the last equation was verified by MAPLE \eqref{eta}. Also, by \cite[Theorem 1.1]{Ol-13}, $$ \frac{d}{dz}\left(\frac{1}{z}j(-z^6q;q^3)\right)\bigg|_{z=1}=\sum_{n=-\infty}^{\infty}(6n-1)q^{n(3n-1)/2}=-\frac{J_1^5}{J_2^2}.
$$ \end{proof} The following two lemmas can also be proved using \cite[Theorem 1.1]{Ol-13} and \cite[Theorem 1.2]{Ol-13}. Since the proofs are similar to that of Lemma \ref{pthe2}, we omit them. \begin{lemma} \begin{align} \label{pj41} \frac{d}{dz}\left(\frac{1}{z}j(z^4q;q^4)\right)\bigg|_{z=1}&=-\frac{J_2^9}{J_4^3J_1^3},\\ \label{pj41m} \frac{d}{dz}\left(\frac{1}{z}j(-z^4q;q^4)\right)\bigg|_{z=1}&=-J_1^3. \end{align} \end{lemma} \begin{lemma} \begin{align} \label{pj61} \frac{d}{dz}\left(\frac{1}{z}j(z^3q;q^6)\right)\bigg|_{z=1}&=-\frac{J_2^5}{J_1^2},\\ \label{pj61m} \frac{d}{dz}\left(\frac{1}{z}j(-z^3q;q^6)\right)\bigg|_{z=1}&=-\frac{J_4^2J_1^2}{J_2}. \end{align} \end{lemma} The case of $q^{12}$ is slightly more complicated. \begin{lemma} \begin{align} \label{pj125m} \frac{d}{dz}\left(\frac{1}{z}j(-z^{12}q^5;q^{12})\right)\bigg|_{z=1}&=-\frac{1}{2}\left(\frac{J_1^4}{J_3}+\frac{9qJ_9^3J_1}{J_3}+\frac{J_1^5}{J_2^2}\right),\\ \label{pj121m} \frac{d}{dz}\left(\frac{1}{z^5}j(-z^{12}q;q^{12})\right)\bigg|_{z=1}&=-\frac{1}{2}\left(\frac{J_1^4}{qJ_3}+\frac{9J_9^3J_1}{J_3}-\frac{J_1^5}{qJ_2^2}\right). \end{align} \end{lemma} \begin{proof} Let $$ f(z):=\frac{1}{z}j(-z^{12}q^5;q^{12}), $$ and $$ g(z):=\frac{1}{z^5}j(-z^{12}q;q^{12}). $$ Then it is easy to see that $$ f(z)+qg(1/z)=\frac{1}{z}j(-z^6q;q^3), $$ and $$ f(z)-qg(1/z)=\frac{1}{z}j(z^6q;q^3). $$ Thus, by \eqref{pj31} and \eqref{pj31m}, $$ f'(1)-qg'(1)=-\frac{J_1^5}{J_2^2}, $$ and $$ f'(1)+qg'(1)=-\frac{J_1^4}{J_3}-\frac{9qJ_9^3J_1}{J_3}. $$ Hence the lemma holds.
\end{proof} Replacing $q$ by $-q$ in \eqref{pj125m} and \eqref{pj121m}, we have \begin{lemma} \begin{align} \label{pj125} \frac{d}{dz}\left(\frac{1}{z}j(z^{12}q^5;q^{12})\right)\bigg|_{z=1}&=-\frac{1}{2}\left(\frac{J_{12}J_3J_2^{12}}{J_6^3J_4^4J_1^4}-\frac{9qJ_{18}^9J_{12}J_3J_2^3}{J_{36}^3J_9^3J_6^3J_4J_1}+\frac{J_2^{13}}{J_4^5J_1^5}\right),\\ \label{pj121} \frac{d}{dz}\left(\frac{1}{z^5}j(z^{12}q;q^{12})\right)\bigg|_{z=1}&=\frac{1}{2}\left(\frac{J_{12}J_3J_2^{12}}{qJ_6^3J_4^4J_1^4}-\frac{9J_{18}^9J_{12}J_3J_2^3}{J_{36}^3J_9^3J_6^3J_4J_1}-\frac{J_2^{13}}{qJ_4^5J_1^5}\right). \end{align} \end{lemma} Finally, it is easy to see that \beq \label{pj1} \frac{d}{dz}\left(\frac{1}{z}j(-z^2;q)\right)\bigg|_{z=1}=0. \eeq \section{Hecke-Rogers series} \subsection{Hecke-Rogers series of $\mathscr{F}_{4,-1}(q)$} Following Alfes, Bringmann and Lovejoy \cite{Al-Br-Lo-11}, define $$ F_4(z,q):=-N^o(1,-1/q,-z,-q)=\sum_{n=0}^{\infty}\frac{(-1)^n(q;-q)_{2n}q^{n+1}}{(zq,q/z;q^2)_{n+1}}, $$ and \cite{Al-Br-Lo-11} showed that $F_4(1,q)=\mathscr{F}_{4,-1}(q)$ and $F_4(i,-q)=-V_1(q)$. Mortenson \cite[Eq. (2.12)]{Mo-14} also studied this function and showed the Appell-Lerch series \beq \label{4mid} \left(1-\frac{1}{z}\right)F_4(z,q)=m(-z,q^2,-q). \eeq The following theorem is equivalent to \eqref{4mid}. \begin{theorem} \label{4th1} For $z\in \mathbb{C^*}$ and $z\neq 1$, we have \begin{align} \label{4th1id} (zq,q/z,q^2;q^2)_\infty F_4(z,q)=&\frac{1}{1-z}\sum_{k=-\infty}^{\infty}\frac{(-1)^{k-1}q^{k^2}}{1+q^{2k-1}}z^{1-k}=-\frac{1}{1-z}\sum_{k=-\infty}^{\infty}\frac{(-1)^{k-1}q^{k^2}}{1+q^{2k-1}}z^k\\ \nonumber =&\sum_{k=1}^{\infty}\frac{(-1)^{k-1}q^{k^2}}{1+q^{2k-1}}\cdot \frac{z^{1-k}-z^k}{1-z} \end{align} \end{theorem} \begin{proof} By Theorem \ref{mthe33}, $$ m(-z,q^2,-q)=m(-z,q^2,q/z). $$ And by Proposition \ref{mpro31}, $$ m(-z,q^2,q/z)=-\frac{1}{z}m(-1/z,q^2,z/q)=-\frac{1}{z}m(-1/z,q^2,qz). $$ Then \eqref{4th1id} follows immediately.
\end{proof} Letting $z\rightarrow 1$ in \eqref{4th1id}, we obtain the Appell-Lerch series of $\mathscr{F}_{4,-1}(q)$. \begin{cor} \beq \label{4cor1} \frac{J_1^2}{J_2}\mathscr{F}_{4,-1}(q)=\sum_{k=1}^{\infty}\frac{(-1)^{k-1}(2k-1)q^{k^2}}{1+q^{2k-1}}. \eeq \end{cor} Next we prove the Hecke-Rogers series of $F_4(z,q)$. \begin{theorem} \label{4th2} For $z\in \mathbb{C^*}$ and $z\neq 1$, we have \beq \label{4th2id} (-zq,-1/z,q;q)_\infty F_4(z,-q)=\sum_{1\leq j\leq n}q^{n^2-j(j-1)/2}\cdot \frac{z^n-z^{-n}}{1-z}. \eeq \end{theorem} \begin{proof} Replacing $q$ by $-q$ in \eqref{4th1id}, we have $$ (-zq,-q/z,q^2;q^2)_\infty F_4(z,-q)=\frac{1}{1-z}\sum_{k=-\infty}^{\infty}\frac{q^{k^2}}{1-q^{2k-1}}z^k. $$ Thus \eqref{4th2id} is equivalent to \beq \label{4th21} (-zq,-1/z,q;q)_\infty \sum_{k=-\infty}^{\infty}\frac{q^{k^2}}{1-q^{2k-1}}z^k=(-zq,-q/z,q^2;q^2)_\infty \sum_{1\leq j\leq |n|}sg(n)q^{n^2-j(j-1)/2}z^n. \eeq Noting that $$ \frac{(-zq,-1/z,q;q)_\infty}{(-zq,-q/z,q^2;q^2)_\infty}=\frac{J_1}{J_2^2}(-zq^2,-1/z;q^2)_\infty, $$ \eqref{4th21} can be simplified as $$ \sum_{m=-\infty}^{\infty}q^{m(m+1)}z^m\sum_{k=-\infty}^{\infty}\frac{q^{k^2}}{1-q^{2k-1}}z^k=\frac{J_2^2}{J_1}\sum_{1\leq j\leq |n|}sg(n)q^{n^2-j(j-1)/2}z^n. $$ Let $A_n$ and $B_n$ denote the coefficients of $z^n$ on the two sides; then $$ A_n=\sum_{k=-\infty}^{\infty}\frac{q^{k^2+(n-k)(n-k+1)}}{1-q^{2k-1}}, $$ and $$ B_n=\frac{J_2^2}{J_1}\sum_{j=1}^{|n|}sg(n)q^{n^2-j(j-1)/2}. $$ It is easy to verify that $B_n$ satisfies \begin{enumerate} \item[(1)]For all $n\in \mathbb{Z}$ $$ B_n+B_{-n}=0, $$ \item[(2)]For all $n\in \mathbb{N}$ $$ \frac{B_{n+1}}{q^{(n+1)^2}}-\frac{B_n}{q^{n^2}}=q^{-\frac{n(n+1)}{2}}\frac{J_2^2}{J_1}. $$ \end{enumerate} Note that (1) implies $B_0=0$. If $A_n$ also satisfies (1) and (2), then $A_n=B_n$ for all $n\in \mathbb{Z}$ by mathematical induction.
$$ A_{-n}=\sum_{k=-\infty}^{\infty}\frac{q^{k^2+(-n-k)(-n-k+1)}}{1-q^{2k-1}}=-\sum_{k=-\infty}^{\infty}\frac{q^{k^2+(n-k)(n-k+1)}}{1-q^{2k-1}}=-A_n, $$ by replacing $k$ by $1-k$. Thus $A_n$ satisfies (1). $$ A_{n}=\sum_{k=-\infty}^{\infty}\frac{q^{k^2+(n-k)(n-k+1)}}{1-q^{2k-1}}=q^{n^2}\sum_{k=-\infty}^{\infty}\frac{q^{(k-n)(2k-1)}}{1-q^{2k-1}}. $$ Thus \beq \label{4th22} \frac{A_{n+1}}{q^{(n+1)^2}}-\frac{A_n}{q^{n^2}}=\sum_{k=-\infty}^{\infty}\frac{q^{(k-n-1)(2k-1)}-q^{(k-n)(2k-1)}}{1-q^{2k-1}}=\sum_{k=-\infty}^{\infty}q^{(k-n-1)(2k-1)}. \eeq Let \beq \label{4th24} C_n:=\sum_{k=-\infty}^{\infty}q^{(2k-n)(2k+1-n)/2}=\sum_{k=-\infty}^{\infty}q^{(k-n-1)(2k-1)+n(n+1)/2}. \eeq Then $$ C_{n+1}=\sum_{k=-\infty}^{\infty}q^{(2k-n-1)(2k-n)/2}=\sum_{k=-\infty}^{\infty}q^{(2(n-k)-n+1)(2(n-k)-n)/2}=C_n. $$ Thus \beq \label{4th25} C_n=C_0=\frac{J_2^2}{J_1}. \eeq By \eqref{4th22}--\eqref{4th25}, $$ \frac{A_{n+1}}{q^{(n+1)^2}}-\frac{A_n}{q^{n^2}}=q^{-\frac{n(n+1)}{2}}C_n=q^{-\frac{n(n+1)}{2}}\frac{J_2^2}{J_1}, $$ and $A_n$ satisfies (2). Hence $A_n=B_n$ for all $n\in \mathbb{Z}$ and \eqref{4th21} holds. \end{proof} \eqref{4th2id} yields the Hecke-Rogers series of $V_1(q)$ with $z=i$, which is equivalent to \cite[Eq. (2.37)]{Cu-Gu-Ha-18} \beq \label{4V1HR} \frac{J_1J_4}{J_2}V_1(q)=\sum_{1\leq j\leq n}\left(\frac{-4}{n}\right)q^{n^2-j(j-1)/2}, \eeq where $(\frac{\cdot}{\cdot})$ denotes the Kronecker symbol. Replacing $q$ by $-q$ and letting $z\rightarrow 1$ in \eqref{4th2id}, we obtain the Hecke-Rogers series of $\mathscr{F}_{4,-1}(q)$, which is close to \eqref{4V1HR} \begin{cor} $$ \frac{J_1J_4}{J_2}\mathscr{F}_{4,-1}(q)=\sum_{1\leq j\leq n}(-1)^{n-1+j(j-1)/2}nq^{n^2-j(j-1)/2}. $$ \end{cor} \subsection{Hecke-Rogers series of $\mathscr{F}_{8,-1}(q)$} Let $$ F_8(z,q):=\sum_{n=0}^{\infty}\frac{(-1)^n(q;q^2)_nq^{(n+1)^2}}{(zq,q/z;q^2)_{n+1}}.
$$ This function was also studied by Alfes, Bringmann and Lovejoy \cite{Al-Br-Lo-11}, who showed on page 3 that $F_8(1,q)=N^o(0,-1,1,q)=\mathscr{F}_{8,-1}(q)$ and $F_8(-1,-q)=-N^o(0,1,1,q)=-A(q)$, and by Mortenson \cite{Mo-14}, who showed the Appell-Lerch series of $F_8(z,q)$. In this subsection, we again use the formulas for Appell-Lerch series to rewrite $F_8(z,q)$ in another form and then prove the Hecke-Rogers identity. \begin{lemma} \label{8lem1} If $z\in \mathbb{C^*}$ is not an integral power of $q$, then \beq \label{8mlem} m(-z,q,-1)=m(-qz^2,q^4,q^2/z^2)-\frac{1}{z}m(-q/z^2,q^4,q^2z^2)+\frac{j(q;q^2)^2}{2j(z;q)}. \eeq \end{lemma} \begin{proof} Setting $x=-z$, $z_1=-1$ and $z_0=\sqrt{z/q}$ in Theorem \ref{mthe33}, we have \beq \label{8m0} m(-z,q,-1)=m(-z,q,\sqrt{z/q})+\frac{j(q;q^2)^2}{2j(z;q)}. \eeq Then by Corollary \ref{mcor37} and Proposition \ref{mpro31}, \begin{align} \label{8m1} m(-z,q,\sqrt{z/q})=&m(-qz^2,q^4,q^2/z^2)+\frac{z}{q}m(-z^2/q,q^4,q^2/z^2)\\ \nonumber =&m(-qz^2,q^4,q^2/z^2)-\frac{1}{z}m(-q/z^2,q^4,z^2/q^2)\\ \nonumber =&m(-qz^2,q^4,q^2/z^2)-\frac{1}{z}m(-q/z^2,q^4,q^2z^2). \end{align} \eqref{8mlem} holds by \eqref{8m0} and \eqref{8m1}. \end{proof} \begin{theorem} \label{8th1} For $z\in \mathbb{C^*}$ and $z\neq 1$, we have \beq \label{8th1id} (z^2q^2,q^2/z^2,q^4;q^4)_\infty F_8(z,q)=\sum_{k=-\infty}^{\infty}\frac{(-1)^{k-1}q^{2k^2}}{1+q^{4k-1}}\cdot \frac{z^{1-2k}-z^{2k}}{1-z}. \eeq \end{theorem} \begin{proof} If $z$ is an integral power of $q$, it is easy to check that both sides of \eqref{8th1id} are zero. If $z$ is not an integral power of $q$, then setting $x=-z$ in \cite[Eq. (2.15)]{Mo-14} gives $$ F_8(z,q)=\frac{-z}{1-z}\left(m(-z,q,-1)-\frac{j(q;q^2)^2}{2j(z;q)}\right). $$ By Lemma \ref{8lem1}, \begin{align*} F_8(z,q)=&\frac{1}{1-z}(m(-q/z^2,q^4,q^2z^2)-zm(-qz^2,q^4,q^2/z^2))\\ =&\frac{1}{j(q^2z^2;q^4)}\sum_{k=-\infty}^{\infty}\frac{(-1)^{k-1}q^{2k^2}}{1+q^{4k-1}}\cdot \frac{z^{1-2k}-z^{2k}}{1-z}, \end{align*} which is \eqref{8th1id}.
\end{proof} Letting $z\rightarrow 1$, \eqref{8th1id} yields \begin{cor} \beq \label{8ALid} \frac{J_2^2}{J_4}\mathscr{F}_{8,-1}(q)=\sum_{k=-\infty}^{\infty}\frac{(-1)^{k-1}(4k-1)q^{2k^2}}{1+q^{4k-1}}. \eeq \end{cor} And by replacing $q$ by $-q$ and setting $z=-1$, \eqref{8th1id} yields \cite[Eq. (5.1)]{Hi-Mo-14} $$ \frac{J_2^2}{J_4}A(q)=\sum_{k=-\infty}^{\infty}\frac{(-1)^{k-1}q^{2k^2}}{1-q^{4k-1}}. $$ These easily imply that \cite[Lemma 3.1]{Ch-Ga} $$ \mathscr{F}_{8,-1}(q)\equiv -A(-q) \pmod 4. $$ \begin{theorem} \label{8th2} For $z\in \mathbb{C^*}$ and $z\neq 1$, we have \beq \label{8th2id} (zq,q/z,q^2;q^2)_\infty F_8(z,q)=\sum_{1\leq j\leq |n|}sg(n)(-1)^{j-1}q^{2n^2-n-j^2+j}\cdot \frac{z^{1-j}-z^j}{1-z}. \eeq \end{theorem} \begin{proof} By Theorem \ref{8th1}, for $z\in \mathbb{C^*}$ \eqref{8th2id} is equivalent to \begin{align*} (1-z)F_8(z,q)=&\frac{1}{(z^2q^2,q^2/z^2,q^4;q^4)_\infty} \sum_{k=-\infty}^{\infty}\frac{(-1)^{k-1}q^{2k^2}}{1+q^{4k-1}}(z^{1-2k}-z^{2k})\\ =&\frac{1}{(zq,q/z,q^2;q^2)_\infty} \sum_{1\leq j\leq |n|}sg(n)(-1)^{j-1}q^{2n^2-n-j^2+j}(z^{1-j}-z^j), \end{align*} which can be simplified as \begin{align} \label{8th21} &\sum_{m=-\infty}^{\infty}(-1)^mq^{2m^2} \sum_{k=-\infty}^{\infty}\frac{(-1)^{k-1}q^{2k^2}}{1+q^{4k-1}}(z^{1-2k}-z^{2k})\\ \nonumber =&\sum_{m=-\infty}^{\infty}q^{m^2}z^m \sum_{1\leq j\leq |n|}sg(n)(-1)^{j-1}q^{2n^2-n-j^2+j}(z^{1-j}-z^j). \end{align} Let $A_k$ and $B_k$ be the coefficients of $z^k$ on both sides. It is easy to see that $$ A_{2k}=-A_{1-2k}=\frac{J_2^2}{J_4}\cdot \frac{(-1)^kq^{2k^2}}{1+q^{4k-1}}. $$ For $B_k$, by \eqref{8th21} we have $$ B_k=\sum_{1\leq j\leq |n|}sg(n)(-1)^{j-1}q^{2n^2-n-j^2+j}\left(q^{(k-1+j)^2}-q^{(k-j)^2}\right).
$$ So we can verify that $$ B_{2k}=-B_{1-2k}, $$ and \begin{align*} B_{2k}=&\sum_{1\leq j\leq |n|}sg(n)(-1)^{j-1}q^{2n^2-n-j^2+j}\left(q^{(2k-1+j)^2}-q^{(2k-j)^2}\right)\\ =&\sum_{n=-\infty}^{\infty}sg(n)q^{4k^2+2n^2-n}\left(\frac{1-(-1)^nq^{|n|(4k-1)}}{1+q^{4k-1}}- \frac{1-(-1)^nq^{-|n|(4k-1)}}{1+q^{4k-1}}\right)\\ =&\frac{q^{4k^2}}{1+q^{4k-1}}\sum_{n=-\infty}^{\infty}sg(n)(-1)^nq^{2n^2-n}(q^{-|n|(4k-1)}-q^{|n|(4k-1)})\\ =&\frac{q^{4k^2}}{1+q^{4k-1}}\left(\sum_{n=-\infty}^{\infty}(-1)^nq^{2n^2-4nk}-\sum_{n=-\infty}^{\infty}(-1)^nq^{2n^2-2n+4nk}\right)\\ =&\frac{(-1)^kq^{2k^2}}{1+q^{4k-1}}\sum_{n=-\infty}^{\infty}(-1)^{n-k}q^{2(n-k)^2}\\ =&\frac{J_2^2}{J_4}\cdot \frac{(-1)^kq^{2k^2}}{1+q^{4k-1}}. \end{align*} Hence $A_k=B_k$ for all integers $k$ and \eqref{8th21} holds. \end{proof} \eqref{8th2id} yields the Hecke-Rogers series of $A(q)$ \cite[Eq. (2.24)]{Cu-Gu-Ha-18} by replacing $q$ by $-q$ and setting $z=-1$: \beq \label{8AHR} \frac{J_1^2}{J_2}A(q)=\sum_{1\leq j\leq |n|}sg(n)(-1)^{n-1}q^{2n^2-n-j^2+j}. \eeq Letting $z\rightarrow 1$ in \eqref{8th2id} we have the Hecke-Rogers series of $\mathscr{F}_{8,-1}(q)$ \cite[P. 376]{Hu-07}, which is close to \eqref{8AHR} \begin{cor} \beq \label{8hrid} \frac{J_1^2}{J_2}\mathscr{F}_{8,-1}(q)=\sum_{1\leq j\leq |n|}sg(n)(-1)^{j-1}(2j-1)q^{2n^2-n-j^2+j}. \eeq \end{cor} \subsection{Hecke-Rogers series of $\mathscr{F}_{12,-1}(q)$} Unlike $\mathscr{F}_{4,-1}(q)$ and $\mathscr{F}_{8,-1}(q)$, we did not find Eulerian type series for $\mathscr{F}_{12,-1}(q)$ and $\mathscr{F}_{24,-1}(q)$, so we did not find a nice $z$-analog connecting $\mathscr{F}_{12,-1}(q)$ and the mock theta function $\sigma(q)$. By the 3-dissection of $\mathscr{F}_{4,-1}(q)$, we have the Appell-Lerch series which is close to \cite[Eq. (4.8)]{An-Hi-91} with $z=q^3$ $$ \frac{J_3^2}{J_6}\sigma(q)=\sum_{k=-\infty}^{\infty}\frac{(-1)^{k-1}q^{3k^2}}{1-q^{6k-1}}.
$$ \begin{theorem} $$ \frac{J_3^2}{J_6}\mathscr{F}_{12,-1}(q)=\sum_{k=-\infty}^{\infty}\frac{(-1)^{k-1}(6k-1)q^{3k^2}}{1+q^{6k-1}}+\frac{2qJ_{12}J_6J_2^5}{J_4J_1^2}. $$ \end{theorem} \begin{proof} We denote the 3-dissection on both sides of \eqref{4cor1} by $$ \frac{J_1^2}{J_2}=A_0(q^3)+qA_1(q^3)+q^2A_2(q^3), $$ $$ \mathscr{F}_{4,-1}(q)=B_0(q^3)+qB_1(q^3)+q^2B_2(q^3), $$ and $$ \sum_{k=1}^{\infty}\frac{(-1)^{k-1}(2k-1)q^{k^2}}{1+q^{2k-1}}=C_0(q^3)+qC_1(q^3)+q^2C_2(q^3). $$ So that \beq \label{12lid2} A_0(q)B_0(q)+qA_1(q)B_2(q)+qA_2(q)B_1(q)=C_0(q). \eeq We have $$ B_0(q)=\mathscr{F}_{12,-1}(q), $$ and we note that $$ \frac{J_1^2}{J_2}=\sum_{k=-\infty}^{\infty}(-1)^kq^{k^2}=\sum_{k=-\infty}^{\infty}(-1)^kq^{9k^2}-2q\sum_{k=-\infty}^{\infty}(-1)^kq^{9k^2+6k}=\frac{J_9^2}{J_{18}}-2q\frac{J_{18}^2J_3}{J_9J_6}. $$ So that \beq \label{12lemid24} A_0(q)=\frac{J_3^2}{J_6},\text{ }A_1(q)=-2\frac{J_{6}^2J_1}{J_3J_2}\text{ and }A_2(q)=0. \eeq By \cite[Eq. (2.7), (2.9)]{Ch-Ga} \begin{align} \label{12lmb} B_2(q)=&\mathscr{F}_{12,7}(q)=\sum_{n=0}^{\infty}H(24n+7)q^{2n}+3q\sum_{n=0}^{\infty}H(24n+19)q^{2n}\\ \nonumber =&\frac{J_6^2J_4^5}{J_{12}J_2^3}+3q\frac{J_{12}^3J_4}{J_2}=\frac{J_{12}J_3J_2^6}{J_6J_4J_1^3}, \end{align} where the last equation was verified by MAPLE using \eqref{eta}. Since $$ \sum_{k=1}^{\infty}\frac{(-1)^{k-1}(2k-1)q^{k^2}}{1+q^{2k-1}}=\sum_{k=1}^{\infty}\frac{(-1)^{k-1}(2k-1)q^{k^2}}{1+q^{6k-3}}(1-q^{2k-1}+q^{4k-2}), $$ we have \begin{align} \label{12lmc} &C_0(q^3)\\ \nonumber =&\sum_{k=1}^{\infty}\frac{(-1)^{3k-1}(2\cdot(3k)-1)q^{(3k)^2}}{1+q^{6\cdot(3k)-3}}+\sum_{k=0}^{\infty}\frac{(-1)^{3k+1-1}(2\cdot(3k+1)-1)q^{(3k+1)^2+4\cdot(3k+1)-2}}{1+q^{6\cdot(3k+1)-3}}\\ \nonumber =&\sum_{k=-\infty}^{\infty}\frac{(-1)^{k-1}(6k-1)q^{9k^2}}{1+q^{18k-3}}. \end{align} So that $$ C_0(q)=\sum_{k=-\infty}^{\infty}\frac{(-1)^{k-1}(6k-1)q^{3k^2}}{1+q^{6k-1}}.
$$ Hence \eqref{12lid2} is $$ \frac{J_3^2}{J_6}\mathscr{F}_{12,-1}(q)-\frac{2qJ_{12}J_6J_2^5}{J_4J_1^2}=\sum_{k=-\infty}^{\infty}\frac{(-1)^{k-1}(6k-1)q^{3k^2}}{1+q^{6k-1}}. $$ \end{proof} The following lemmas are needed for proving the Hecke-Rogers identity of $\mathscr{F}_{12,-1}(q)$. \begin{lemma} \label{12lem1} Let $$ g(z):=\frac{1}{z}j(-z^2;q^2)m(q,q^6,-z^2/q). $$ Then $$ g'(1)=-\frac{J_6^{12}J_4^4J_1^3}{J_{12}^6J_3^3J_2^6}. $$ \end{lemma} \begin{proof} From Theorem \ref{mthe33} $$ m(q,q^6,-z^2/q)=m(q,q^6,-1)-\frac{J_6^3j(z^2/q;q^6)j(z^2;q^6)}{j(-1;q^6)j(-q;q^6)j(-z^2/q;q^6)j(-z^2;q^6)}. $$ Let $$ f(z):=\frac{j(z^2/q;q^6)j(z^2;q^6)}{j(-z^2/q;q^6)j(-z^2;q^6)}. $$ Then \begin{align} \label{12lid1} f'(1)=&\lim_{z\rightarrow 1}\frac{f(z)-f(1)}{z-1}\\ \nonumber =&\lim_{z\rightarrow 1}(1+z)(q^6z,q^6/z,q^6;q^6)_\infty\frac{j(z^2/q;q^6)}{j(-z^2/q;q^6)j(-z^2;q^6)}\\ \nonumber =&\frac{J_6^7J_4J_1^2}{J_{12}^3J_3^2J_2^3}. \end{align} So that $$ \frac{d}{dz}m(q,q^6,-z^2/q)\bigg|_{z=1}=-\frac{J_6^{12}J_4^2J_1^3}{2J_{12}^6J_3^3J_2^5}, $$ and by \eqref{pj1} $$ \frac{d}{dz}\left(\frac{1}{z}j(-z^2;q^2)\right)\bigg|_{z=1}=0. $$ Hence $$ g'(1)=-j(-1;q^2)\frac{J_6^{12}J_4^2J_1^3}{2J_{12}^6J_3^3J_2^5}=-\frac{J_6^{12}J_4^4J_1^3}{J_{12}^6J_3^3J_2^6}. $$ \end{proof} \begin{lemma} Let $$ g(z):=\frac{1}{z}j(qz^4;q^2)m(-q^2/z^6,q^6,q^3z^6). $$ Then $$ g'(1)=-\frac{J_6J_1^2}{J_3^2J_2}\sum_{k=-\infty}^{\infty}\frac{(-1)^{k-1}(6k-1)q^{3k^2}}{1+q^{6k-1}}. $$ \end{lemma} \begin{proof} Since by \eqref{pj21} $$ \frac{d}{dz}j(qz^4;q^2)\bigg|_{z=1}=0, $$ we have \begin{align*} g'(1)=&j(q;q^2)\frac{d}{dz}\left(\frac{1}{z}m(-q^2/z^6,q^6,q^3z^6)\right)\bigg|_{z=1}\\ =&j(q;q^2)\left(\frac{d}{dz}m(-q^2/z^6,q^6,q^3z^6)\bigg|_{z=1}-m(-q^2,q^6,q^3)\right)\\ =&-\frac{J_6J_1^2}{J_3^2J_2}\sum_{k=-\infty}^{\infty}\frac{(-1)^{k-1}(6k-1)q^{3k^2}}{1+q^{6k-1}}.
\end{align*} \end{proof} \begin{lemma} \label{12lem3} Let $$ g(z):=\frac{j(qz^4;q^2)j(-q^4z^4;q^6)j(q^4z^2;q^6)}{zj(-q^5z^2;q^6)j(q^3z^6;q^6)j(-q^5;q^6)j(q^5z^4;q^6)}. $$ Then $$ g'(1)=-\frac{J_6^9J_4^4J_1^3}{J_{12}^6J_3^3J_2^6}-\frac{2qJ_{12}J_2^4}{J_6J_4J_3^2}. $$ \end{lemma} \begin{proof} Let $$ g_1(z):=\frac{j(q;q^2)}{j(q^3;q^6)j(-q^5;q^6)}\cdot \frac{j(-z^4q^4;q^6)j(z^2q^4;q^6)}{zj(-z^2q^5;q^6)j(z^4q^5;q^6)}. $$ Then by \eqref{pj21} $$ g_1'(1)=g'(1). $$ Let \begin{align*} u_1(z):=&z^{2/3}j(-z^4q^4;q^6)=z^{2/3}j(-q^2/z^4;q^6),\\ u_2(z):=&z^{1/3}j(z^2q^4;q^6)=z^{1/3}j(q^2/z^2;q^6),\\ v_1(z):=&z^{2/3}j(-z^2q^5;q^6)=z^{2/3}j(-q/z^2;q^6),\\ v_2(z):=&z^{4/3}j(z^4q^5;q^6)=z^{4/3}j(q/z^4;q^6). \end{align*} By \eqref{pj31}, \eqref{pj31m}, \eqref{pj61} and \eqref{pj61m}, we have \begin{align*} g_1'(1)=&g_1(1)\left(\frac{u_1'(1)}{u_1(1)}+\frac{u_2'(1)}{u_2(1)}-\frac{v_1'(1)}{v_1(1)}-\frac{v_2'(1)}{v_2(1)}\right)\\ =&\frac{2J_6J_2^2J_1^3}{3J_{12}^2J_3^3}+\frac{J_6^3J_4^3J_1^3}{J_{12}^3J_3^3J_2^4}\left(\frac{J_2^{3}}{3J_6}+\frac{3q^2J_{18}^3}{J_6}\right)-\frac{2J_6^4J_4^6J_1^6}{3J_{12}^4J_3^4J_2^7}-\frac{4J_6J_4^3J_2^2}{3J_{12}^3J_3^2}\\ =&-\frac{J_6^9J_4^4J_1^3}{J_{12}^6J_3^3J_2^6}-\frac{2qJ_{12}J_2^4}{J_6J_4J_3^2}, \end{align*} where the last equation was verified by MAPLE using \eqref{eta}. \end{proof} Now we can prove the Hecke-Rogers identity. \begin{theorem} \beq \label{12th2id} \frac{J_1^2}{J_2}\mathscr{F}_{12,-1}(q)=\sum_{1-|n|\leq j\leq |n|}sg(n)(-1)^{j-1}(4n-1)q^{4n^2-2n-3j^2+2j}. \eeq \end{theorem} \begin{proof} Let $$ f(z):=qz^3f_{1,2,1}(q^3z^4,-q^4z^2,q^2). $$ The right side of \eqref{12th2id} is \begin{align*} &\sum_{1-|n|\leq j\leq |n|}sg(n)(-1)^{j-1}(4n-1)q^{4n^2-2n-3j^2+2j}\\ =&\sum_{sg(r)=sg(s)}(-1)^r(2s+4r+3)q^{(2r+1)s+(r+s+1)^2}\\ =&f'(1).
\end{align*} Hence \eqref{12th2id} is equivalent to \beq \label{12thid1} \frac{J_2}{J_1^2}f'(1)=\frac{J_6}{J_3^2}\sum_{k=-\infty}^{\infty}\frac{(-1)^{k-1}(6k-1)q^{3k^2}}{1+q^{6k-1}}+\frac{2qJ_{12}J_6^2J_2^5}{J_4J_3^2J_1^2}. \eeq By Theorem \ref{mthe16} \beq \label{12thid2} f(z)=\frac{1}{z}(j(-z^2;q^2)m(q,q^6,-z^2/q)-j(qz^4;q^2)m(-q^2/z^6,q^6,-q^5z^2)). \eeq And by Theorem \ref{mthe33} \beq \label{12thid3} m(-q^2/z^6,q^6,-q^5z^2)=m(-q^2/z^6,q^6,q^3z^6)+\frac{J_6^3j(-q^4z^4;q^6)j(q^4z^2;q^6)}{j(-q^5z^2;q^6)j(q^3z^6;q^6)j(-q^5;q^6)j(q^5z^4;q^6)}. \eeq \eqref{12thid1} holds by \eqref{12thid2} - \eqref{12thid3} and Lemmas \ref{12lem1} - \ref{12lem3}. \end{proof} We rewrite \cite[Eq. (4.2)]{An-Hi-91} as \beq \label{12si} \frac{J_1^2}{J_2}\sigma(q)=\sum_{1-|n|\leq j\leq |n|}sg(n)(-1)^{j-1}q^{4n^2-2n-3j^2+2j}. \eeq \eqref{12th2id} is close to \eqref{12si}, and they imply that $$ \mathscr{F}_{12,-1}(q)\equiv -\sigma(q) \pmod 4, $$ which is equivalent to \cite[Lemma 3.11]{Ch-Ga}. \subsection{Hecke-Rogers series of $\mathscr{F}_{24,-1}(q)$} We first prove the Appell-Lerch series of $\mathscr{F}_{24,-1}(q)$ which is more complex than $\mathscr{F}_{12,-1}(q)$. \begin{theorem} \label{24th1} $$ \frac{J_6^2}{J_{12}}\mathscr{F}_{24,-1}(q)=\sum_{k=-\infty}^{\infty}\frac{(-1)^{k-1}(12k-1)q^{6k^2}}{1+q^{12k-1}}+\sum_{k=-\infty}^{\infty}\frac{(-1)^{k-1}(12k-7)q^{6k^2-2}}{1+q^{12k-7}}+\frac{2qJ_{12}^2J_3^2J_2^6}{J_6^2J_4J_1^3}. $$ \end{theorem} \begin{proof} We denote the 3-dissection on both sides of \eqref{8ALid} by $$ \frac{J_2^2}{J_4}=A_0(q^3)+qA_1(q^3)+q^2A_2(q^3), $$ $$ \mathscr{F}_{8,-1}(q)=B_0(q^3)+qB_1(q^3)+q^2B_2(q^3), $$ and $$ \sum_{k=-\infty}^{\infty}\frac{(-1)^{k-1}(4k-1)q^{2k^2}}{1+q^{4k-1}}=C_0(q^3)+qC_1(q^3)+q^2C_2(q^3). $$ So that \beq \label{24th1id1} A_0(q)B_0(q)+qA_2(q)B_1(q)+qA_1(q)B_2(q)=C_0(q).
\eeq Similarly to \eqref{12lemid24}-\eqref{12lmc}, we have \begin{align} \label{24lma} A_0(q)&=\frac{J_6^2}{J_{12}},\\ A_1(q)&=0,\\ A_2(q)&=\frac{-2J_{12}^2J_2}{J_6J_4},\\ B_0(q)&=\mathscr{F}_{24,-1}(q),\\ B_1(q)&=\mathscr{F}_{24,7}(q)=\sum_{n=1}^{\infty}H(24n+7)q^n=\frac{J_3^2J_2^5}{J_6J_1^3},\\ \label{24lmc} C_0(q)&=\sum_{k=-\infty}^{\infty}\frac{(-1)^{k-1}(12k-1)q^{6k^2}}{1+q^{12k-1}}+\sum_{k=-\infty}^{\infty}\frac{(-1)^{k-1}(12k-7)q^{6k^2-2}}{1+q^{12k-7}}. \end{align} We complete the proof by substituting \eqref{24lma} - \eqref{24lmc} into \eqref{24th1id1}. \end{proof} The following lemma can be proved similarly to Lemma \ref{12lem3}. We use \eqref{pj21} - \eqref{pj1} to reduce each term to a sum of eta-quotients and then use MAPLE \eqref{eta} to verify the eta-quotient identity. We omit the proof. \begin{lemma} \label{24lmt} Let \begin{align*} g(z):=&\frac{z^5J_{12}J_4^2j(-qz^4;q^{12})}{qJ_8J_6j(-z^8;q^{12})j(-q^4z^8;q^{12})}\left(\frac{J_{12}^2J_8}{J_{24}^2J_4}j(q^{14}z^8;q^{24})^2-\frac{q^2J_{24}^2J_6J_4^2}{J_{12}^2J_8J_2}j(q^8z^8;q^{12})\right)\\ -&\frac{zJ_{12}^3j(qz^4;q^2)j(q^5z^8;q^{12})}{j(qz^4;q^{12})j(q^6z^{12};q^{12})}\left(\frac{j(-q^2z^4;q^{12})}{j(-q^4z^8;q^{12})j(-q^{11};q^{12})}+\frac{z^4j(-q^6z^4;q^{12})}{qj(-z^8;q^{12})j(-q^5;q^{12})}\right). \end{align*} Then $$ g'(1)=-\frac{2qJ_{12}^3J_3^2J_2^5}{J_6^4J_4J_1}. $$ \end{lemma} The Hecke-Rogers series of $\mathscr{F}_{24,-1}(q)$ is \begin{theorem} \beq \label{24mid} \frac{J_1^2}{J_2}\mathscr{F}_{24,-1}(q)=\sum_{1-|n|\leq j\leq |n|}sg(n)(-1)^{n-1}(4j-1)q^{3n^2-n-2j^2+j}. \eeq \end{theorem} \begin{proof} Let $$ f(z):=\frac{q_1}{z}f_{1,5,1}(iz^2q_1^7,iq_1^3/z^2,q_1^2).
$$ Letting $q=q_1^4$, the right side of \eqref{24mid} is \begin{align*} &\sum_{1-|n|\leq j\leq |n|}sg(n)(-1)^{n-1}(4j-1)q_1^{12n^2-4n-8j^2+4j}\\ =&\sum_{\substack{sg(r)=sg(s) \\r+s\equiv 1\pmod 2}}sg(r)(-1)^r(2r-2s-1)q_1^{r^2+6r+10rs+s^2+2s+1}\\ =&\IM\left(\sum_{sg(r)=sg(s)}sg(s)(-i)^{r+s}(2r-2s-1)q_1^{r^2+6r+10rs+s^2+2s+1}\right)\\ =&\IM(f'(1)), \end{align*} where $\IM(z)$ denotes the imaginary part of $z$. Then by Theorem \ref{mthe111} we have $$ \IM(f(z))=\frac{q_1}{z}\IM\left(g_{1,5,1}(iq_1^7z^2,iq_1^3/z^2,q_1^2,1/q_1^4z^4,q_1^4z^4)-\Theta_{1,4}(iq_1^7z^2,iq_1^3/z^2,q_1^2)\right), $$ where it is easy to calculate that \begin{align*} &\frac{q_1}{z}\IM(g_{1,5,1}(iq_1^7z^2,iq_1^3/z^2,q_1^2,1/q_1^4z^4,q_1^4z^4))\\ =&zj(qz^4;q^2)\left(m(-q^5z^{12},q^{12},1/qz^4)-\frac{1}{q^2z^8}m(-1/qz^{12},q^{12},qz^4)\right), \end{align*} and \begin{align*} &\frac{q_1}{z}\IM(\Theta_{1,4}(iq_1^7z^2,iq_1^3/z^2,q_1^2))\\ =&\frac{z^5J_{12}J_4^2j(-qz^4;q^{12})}{qJ_8J_6j(-z^8;q^{12})j(-q^4z^8;q^{12})}\left(\frac{J_{12}^2J_8}{J_{24}^2J_4}j(q^{14}z^8;q^{24})^2-\frac{q^2J_{24}^2J_6J_4^2}{J_{12}^2J_8J_2}j(q^8z^8;q^{12})\right). \end{align*} Hence by Theorem \ref{24th1}, \eqref{24mid} is equivalent to \begin{align} \label{24mid1} &\frac{J_1^2J_{12}}{J_6^2J_2}\left(\sum_{k=-\infty}^{\infty}\frac{(-1)^{k-1}(12k-1)q^{6k^2}}{1+q^{12k-1}}+\sum_{k=-\infty}^{\infty}\frac{(-1)^{k-1}(12k-7)q^{6k^2-2}}{1+q^{12k-7}}+\frac{2qJ_{12}^2J_3^2J_2^6}{J_6^2J_4J_1^3}\right)\\ \nonumber =&g'(1), \end{align} where \begin{align*} g(z):=&\IM(f(z))\\ =&zj(qz^4;q^2)\left(m(-q^5z^{12},q^{12},1/qz^4)-\frac{1}{q^2z^8}m(-1/qz^{12},q^{12},qz^4)\right)\\ -&\frac{z^5J_{12}J_4^2j(-qz^4;q^{12})}{qJ_8J_6j(-z^8;q^{12})j(-q^4z^8;q^{12})}\left(\frac{J_{12}^2J_8}{J_{24}^2J_4}j(q^{14}z^8;q^{24})^2-\frac{q^2J_{24}^2J_6J_4^2}{J_{12}^2J_8J_2}j(q^8z^8;q^{12})\right).
\end{align*} By Theorem \ref{mthe33} \begin{align*} m(-q^5z^{12},q^{12},1/qz^4)=&m(-q^5z^{12},q^{12},q^6/z^{12})\\ +&\frac{J_{12}^3j(q^5z^8;q^{12})j(-q^2z^4;q^{12})}{j(qz^4;q^{12})j(q^6z^{12};q^{12})j(-q^4z^8;q^{12})j(-q^{11};q^{12})}, \end{align*} and \begin{align*} m(-1/qz^{12},q^{12},qz^4)=&m(-1/qz^{12},q^{12},q^6z^{12})\\ -&\frac{qz^{12}J_{12}^3j(q^5z^8;q^{12})j(-q^6z^4;q^{12})}{j(qz^4;q^{12})j(q^6z^{12};q^{12})j(-z^8;q^{12})j(-q^5;q^{12})}. \end{align*} So that $g(z)=g_1(z)-g_2(z)$ where $$ g_1(z):=zj(qz^4;q^2)\left(m(-q^5z^{12},q^{12},q^6/z^{12})-\frac{1}{q^2z^8}m(-1/qz^{12},q^{12},q^6z^{12})\right), $$ and \begin{align*} g_2(z):=&\frac{z^5J_{12}J_4^2j(-qz^4;q^{12})}{qJ_8J_6j(-z^8;q^{12})j(-q^4z^8;q^{12})}\left(\frac{J_{12}^2J_8}{J_{24}^2J_4}j(q^{14}z^8;q^{24})^2-\frac{q^2J_{24}^2J_6J_4^2}{J_{12}^2J_8J_2}j(q^8z^8;q^{12})\right)\\ -&\frac{zJ_{12}^3j(qz^4;q^2)j(q^5z^8;q^{12})}{j(qz^4;q^{12})j(q^6z^{12};q^{12})}\left(\frac{j(-q^2z^4;q^{12})}{j(-q^4z^8;q^{12})j(-q^{11};q^{12})}+\frac{z^4j(-q^6z^4;q^{12})}{qj(-z^8;q^{12})j(-q^5;q^{12})}\right). \end{align*} Since by \eqref{pj21} $$ \frac{d}{dz}j(qz;q^2)\bigg|_{z=1}=0, $$ by the definition of $m(x,q,z)$ we have $$ g_1'(1)=\frac{J_1^2J_{12}}{J_6^2J_2}\left(\sum_{k=-\infty}^{\infty}\frac{(-1)^{k-1}(12k-1)q^{6k^2}}{1+q^{12k-1}}+\sum_{k=-\infty}^{\infty}\frac{(-1)^{k-1}(12k-7)q^{6k^2-2}}{1+q^{12k-7}}\right), $$ and by Lemma \ref{24lmt} we have $$ g_2'(1)=-\frac{2qJ_{12}^3J_3^2J_2^5}{J_6^4J_4J_1}. $$ Hence \eqref{24mid1} holds. \end{proof} \eqref{24mid} is very close to \cite[Eq. (2.1)]{Be-Ch-07} $$ \frac{J_1^2}{J_2}\phi_{-}(q)=\sum_{1-|n|\leq j\leq |n|}sg(n)(-1)^{n-1}q^{3n^2-n-2j^2+j}, $$ which easily implies that \cite[Lemma 3.8]{Ch-Ga} $$ \mathscr{F}_{24,-1}(q)\equiv -\phi_{-}(q)\pmod 4. $$ And \eqref{24mid} is also close to \cite[Lemma 4.4]{Ch-Ga} $$ \frac{J_1^2}{J_2}\psi(q)=\sum_{1-|n|\leq j\leq |n|}sg(n)(-1)^{j-1}q^{3n^2-n-2j^2+j}.
$$ \section{Hurwitz class numbers and partitions} In this section, we start with a formula by Humbert \cite[P. 346]{Hu-07} \begin{align} \label{smain} \mathscr{F}_{4,-1}(q)=&\sum_{m=0}^{\infty}\sum_{u=-m}^{m}\frac{q^{(2m+1)^2/4+(2m+1)/2+1/4-u^2}}{1-q^{2m+1}}\\ \nonumber =&\sum_{m=0}^{\infty}\sum_{u=-m}^{m}\frac{q^{(m+1)^2-u^2}}{1-q^{2m+1}}. \end{align} We will prove the combinatorial interpretations of $F(4n-1)$ and $H(8n-1)$ that appear on the OEIS. Let $P(n)$ denote the number of strongly unimodal compositions of $n$ with absolute difference of successive parts equal to 1 (see A238872 on OEIS). It is easy to see that $$ n=x+(x+1)+...+(y-1)+y+(y-1)+...+(z+1)+z=y^2-\frac{x(x-1)}{2}-\frac{z(z-1)}{2}. $$ So that \beq \label{s1id} \sum_{n=1}^{\infty}P(n)q^n=\sum_{y=1}^{\infty}\sum_{x=1}^{y}\sum_{z=1}^{y}q^{y^2-x(x-1)/2-z(z-1)/2}. \eeq \begin{theorem} For all $n\in \mathbb{N^*}$, we have $$ P(n)=F(4n-1). $$ \end{theorem} \begin{proof} Let $$ A(m,u):=\frac{q^{(m+1)^2-u^2}}{1-q^{2m+1}}, $$ and $$ B(y,x,z):=q^{y^2-x(x-1)/2-z(z-1)/2}. $$ Then \begin{align*} A(m,u)=&\frac{q^{(m+1)^2-u^2}}{1-q^{2m+1}}=\sum_{k=1}^{\infty} q^{m^2-u^2+k(2m+1)}=\sum_{k=1}^{\infty} q^{(k+m)^2-(k+u)(k+u-1)/2-(k-u)(k-u-1)/2}\\ =&\sum_{k=1}^{\infty} B(k+m,k+u,k-u). \end{align*} So that by \eqref{smain} and \eqref{s1id} and noting that $$ B(y,x,z)=B(y,x,1-z)=B(y,1-x,z), $$ we have \begin{align*} \sum_{n=1}^{\infty}F(4n-1)q^n=&\sum_{m=0}^{\infty}\sum_{u=-m}^{m}A(m,u)=\sum_{m=0}^{\infty}\sum_{u=-m}^{m}\sum_{k=1}^{\infty} B(k+m,k+u,k-u)\\ =&\sum_{y=1}^{\infty}\sum_{(x,z)\in D_y}B(y,x,z)=\sum_{y=1}^{\infty}\sum_{x=1}^{y}\sum_{z=1}^{y}B(y,x,z)=\sum_{n=1}^{\infty}P(n)q^n, \end{align*} where $$ D_y:=\{(x,z):x\leq y, z\leq y, x+z\geq 2, x\equiv z\pmod 2\}. $$ \end{proof} Let $Q(n)$ denote the number of partitions of $n$ into consecutive parts, all singletons except the largest (see A321440 on OEIS).
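Before turning to $Q(n)$, the identity $P(n)=F(4n-1)$ proved above can be checked numerically by expanding Humbert's double sum \eqref{smain} and the triple sum \eqref{s1id} as truncated power series. A minimal Python sketch of this sanity check (independent of, and not part of, the proof):

```python
# Numerical check of P(n) = F(4n-1): compare coefficients of Humbert's
# double sum (smain) with those of the triple sum (s1id), up to q^N.
N = 40

def humbert_coeffs(N):
    """Coefficients of sum_{m>=0} sum_{u=-m}^{m} q^{(m+1)^2-u^2}/(1-q^{2m+1})."""
    c = [0] * (N + 1)
    m = 0
    while 2 * m + 1 <= N:                # minimal exponent for fixed m is 2m+1
        for u in range(-m, m + 1):
            e = (m + 1) ** 2 - u ** 2
            while e <= N:                # geometric expansion of 1/(1-q^{2m+1})
                c[e] += 1
                e += 2 * m + 1
        m += 1
    return c

def unimodal_coeffs(N):
    """Coefficients of sum_{y>=1} sum_{1<=x,z<=y} q^{y^2-x(x-1)/2-z(z-1)/2}."""
    c = [0] * (N + 1)
    for y in range(1, N + 1):            # minimal exponent for fixed y is y
        for x in range(1, y + 1):
            for z in range(1, y + 1):
                e = y * y - x * (x - 1) // 2 - z * (z - 1) // 2
                if e <= N:
                    c[e] += 1
    return c

assert humbert_coeffs(N) == unimodal_coeffs(N)
print(unimodal_coeffs(8)[1:])   # P(1),...,P(8) -> [1, 1, 3, 2, 3, 3, 4, 3]
```

The two expansions agree coefficient by coefficient, as the theorem requires.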
It is easy to see that $$ n=m+(m+1)+...+(l-1)+l+l+...+l=l(l+1)/2-m(m-1)/2+kl. $$ So that \begin{align} \label{s2id} \sum_{n=1}^{\infty}Q(n)q^n=&\sum_{l=1}^{\infty}\sum_{m=1}^{l}\sum_{k=0}^{\infty}q^{l(l+1)/2-m(m-1)/2+kl}\\ \nonumber =&\sum_{l=1}^{\infty}\sum_{m=1}^{l}\frac{q^{l(l+1)/2-m(m-1)/2}}{1-q^l}. \end{align} \begin{theorem} For all $n\in \mathbb{N^*}$, we have $$ Q(n)=H(8n-1). $$ \end{theorem} \begin{proof} It is easy to see that $$ \sum_{l=1}^{\infty}\sum_{m=1}^{l}\frac{q^{l(l+1)/2-m(m-1)/2}}{1-q^l}=\sum_{D}\frac{q^{l(l+1)/2-m(m-1)/2}}{1-q^l}, $$ where $$ D=\{(l,m):1-l\leq m\leq l,m\equiv l+1\pmod 2\}. $$ So replacing $l$, $m$ by $s+t+1$ and $s-t$ respectively in \eqref{s2id} we have \begin{align*} \sum_{n=1}^{\infty}Q(n)q^n=&q\sum_{s=0}^{\infty}\sum_{t=0}^{\infty}\frac{q^{2st+2s+t}}{1-q^{s+t+1}}=q\sum_{s=0}^{\infty}\sum_{t=0}^{\infty}\frac{q^{2st+2s+t}}{1-q^{2s+2t+2}}(1+q^{s+t+1})\\ =&q\sum_{sg(s)=sg(t)}sg(s)\frac{q^{2st+2s+t}}{1-q^{2(s+t)+2}}. \end{align*} Replacing $m$, $u$ by $s+t$ and $s-t$ respectively in the right side of \eqref{smain}, we have \beq \label{s2id1} \sum_{D_0}\frac{q^{(m+1)^2-u^2}}{1-q^{2m+1}}=\sum_{s=0}^{\infty}\sum_{t=0}^{\infty}\frac{q^{4st+2s+2t+1}}{1-q^{4(s+t)+2}}+\sum_{s=0}^{\infty}\sum_{t=0}^{\infty}\frac{q^{4st+4s+4t+2}}{1-q^{4(s+t)+2}}, \eeq and replacing $m$, $u$ by $s+t+1$ and $s-t$ respectively in the right side of \eqref{smain}, we have \beq \label{s2id2} \sum_{D_1}\frac{q^{(m+1)^2-u^2}}{1-q^{2m+1}}=\sum_{s=0}^{\infty}\sum_{t=0}^{\infty}\frac{q^{4st+4s+4t+4}}{1-q^{4(s+t)+6}}+\sum_{s=0}^{\infty}\sum_{t=0}^{\infty}\frac{q^{4st+6s+6t+7}}{1-q^{4(s+t)+6}}, \eeq where $$ D_i=\{(m,u):-m\leq u\leq m,m-u\equiv i\pmod 2\}.
$$ By \eqref{smain}, \eqref{s2id1} and \eqref{s2id2} we have \begin{align*} \mathscr{F}_{8,-1}(q)=&\sum_{s=0}^{\infty}\sum_{t=0}^{\infty}\frac{q^{2st+2s+2t+1}}{1-q^{2(s+t)+1}}+\sum_{s=0}^{\infty}\sum_{t=0}^{\infty}\frac{q^{2st+2s+2t+2}}{1-q^{2(s+t)+3}}\\ =&q\sum_{sg(s)=sg(t)}sg(s)\frac{q^{2st+2s+2t}}{1-q^{2(s+t)+1}}. \end{align*} So that $$ \mathscr{F}_{8,-1}(q)=\sum_{n=1}^{\infty}Q(n)q^n, $$ since \begin{align*} &q\sum_{sg(s)=sg(t)}sg(s)\frac{q^{2st+2s+2t}}{1-q^{2(s+t)+1}}=q\sum_{sg(s)=sg(t)=sg(r)}q^{2(st+sr+tr)+2s+2t+r}\\ =&q\sum_{sg(s)=sg(t)=sg(r)}q^{2(st+sr+tr)+2s+t+2r}=q\sum_{sg(s)=sg(t)}sg(s)\frac{q^{2st+2s+t}}{1-q^{2(s+t)+2}}. \end{align*} \end{proof} \begin{cor} For all $n\in \mathbb{N^*}$, we have $$ Q(n)=P(2n). $$ \end{cor} \subsection*{Acknowledgements} The first author was supported by Shanghai Sailing Program (21YF1413600).
\section{Introduction} \label{intro} Star formation is a key process in the universe, shaping the structure of entire galaxies and determining the route to planet formation. In particular, the formation of massive stars impacts the dynamical, thermal and chemical structure of the interstellar medium (ISM), and it is almost the only mode of star formation observable in extragalactic systems. During their early formation, massive star-forming regions inject energy into the ISM via their outflows and jets, then during their main-sequence evolution, the intense UV radiation of the massive stars heats up their environment, and at the very end of their life, supernovae stir up the ISM by their strong explosions. Furthermore, massive stars are the cradles of the heavy elements. Hence, life as we know it today would not exist if massive stars had not first formed the heavy elements in their interiors via nucleosynthesis. In addition to this, almost all stars form in clusters, and within the clusters massive stars dominate the overall luminosities. Isolated low-mass star formation is the exception, and isolated high-mass star formation likely does not exist. {\it Concepts:} In spite of their importance, many physical processes during the formation of massive stars are not well understood. While there exists a paradigm for low-mass star formation on which large parts of the scientific community agree (e.g., \citealt{andre2000,mckee2007}), this is less the case for high-mass star formation (e.g., \citealt{beuther2006b,zinnecker2007}). The conceptual problem is based on the fact that, at least in spherical symmetry, the radiation pressure of a centrally ignited star $\geq$8\,M$_{\odot}$ would be large enough to stop any further gas infall and hence inhibit the formation of more massive objects.
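The radiation-pressure argument above can be made quantitative with a simple force balance. The following Python sketch is only a back-of-envelope illustration; the adopted dust opacity and the crude main-sequence mass--luminosity scaling are assumptions, not values from this review, and the resulting limit depends strongly on both:

```python
# Rough force balance behind the radiation-pressure problem: for dusty gas
# at radius r around a star of mass M and luminosity L,
#   a_rad = kappa * L / (4 pi r^2 c)   vs.   a_grav = G * M / r^2,
# so spherical infall is halted once L/M exceeds 4*pi*G*c/kappa,
# independent of r.  kappa is an ASSUMED fiducial opacity, not from the text.
import math

G = 6.674e-8        # gravitational constant [cm^3 g^-1 s^-2]
c_light = 2.998e10  # speed of light [cm s^-1]
L_sun = 3.828e33    # solar luminosity [erg s^-1]
M_sun = 1.989e33    # solar mass [g]
kappa = 5.0         # assumed mean dust opacity per gram of gas [cm^2 g^-1]

crit = 4 * math.pi * G * c_light / kappa   # critical L/M [erg s^-1 g^-1]
crit_solar = crit * M_sun / L_sun          # the same in L_sun / M_sun
print(f"critical L/M ~ {crit_solar:.0f} L_sun/M_sun")

# With a crude main-sequence scaling L/L_sun ~ (M/M_sun)^3.5, the balance
# tips at M ~ crit_solar^(1/2.5); larger effective opacities for the direct
# stellar UV field push this limit down toward the ~8 M_sun quoted above.
M_crit = crit_solar ** (1 / 2.5)
print(f"naive spherical limit ~ {M_crit:.0f} M_sun")
```

With these fiducial numbers the naive spherical limit comes out at a few tens of solar masses; more realistic UV dust opacities lower it substantially, which is why the problem already bites near $\sim$8\,M$_{\odot}$.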
Over the last decade, two schools have been followed to solve this problem: (a) The turbulent accretion scenario, which is largely an enhancement of the low-mass star formation scenario, forms massive stars in a turbulent core with high accretion rates and a geometry including accretion disks and molecular outflows (e.g., \citealt{yorke2002,mckee2003,krumholz2006b,keto2007}). In contrast to that, (b) the competitive accretion scenario relies on the clustered mode of massive star formation. In this scenario, the accretion rates are determined by the whole cluster potential, and those sources sitting closest to the potential well will competitively accrete most of the mass (e.g., \citealt{bonnell2004,bonnell2006}). {\it Potential observational tests:} How can we observationally discriminate between the different scenarios? For example, the turbulent accretion scenario predicts qualitatively similar outflow and disk properties as known for low-mass stars, however with quantitatively enhanced parameters like the accretion rates, outflow energies, or disk sizes. In contrast to that, modeling of the proto-cluster evolution in the competitive accretion scenario indicates extremely dynamic movements of all cluster members throughout their whole evolution. It is unlikely that in such a dynamic environment collimated outflows or large disks could survive at all. Another difference between the two scenarios is based on their early fragmentation predictions. While the turbulent accretion scenario draws its initial gas clumps from (gravo-)turbulently fragmented clouds (e.g., \citealt{padoan2002}) and these gas clumps do not fragment much further afterwards, the competitive accretion scenario predicts that the initial gas clumps fragment down to many clumps, each of the order of a Jeans mass ($\sim$0.5\,M$_{\odot}$).
Hence, while in the former scenario the Initial Mass Function (IMF) is determined during the early cloud fragmentation processes, the latter models predict that the IMF only develops during the ongoing cluster formation. Therefore, studying the initial fragmentation and the early core mass functions can give insights into the actual massive star formation processes. {\it Evolutionary sequence:} Independent of the formation scenario, there has to be an evolutionary sequence in which the processes take place. For this review, I will follow the evolutionary sequence outlined by \citet{beuther2006b}: Massive star-forming regions start as High-Mass Starless Cores (HMSCs), i.e., massive gas cores of the order of a few 100 to a few 1000\,M$_{\odot}$ without any embedded protostars yet. In the next stage we have high-mass cores with embedded low- to intermediate-mass protostars below 8\,M$_{\odot}$, which have not started hydrogen burning yet. During that evolutionary phase, their luminosity should still be dominated by accretion luminosity. Following that, the so-called High-Mass Protostellar Objects (HMPOs) are still massive gas cores, but now they contain embedded massive protostars $>$8\,M$_{\odot}$ that have started hydrogen burning, which soon dominates the total luminosity of the sources. Hot Molecular Cores (HMCs) and hypercompact H{\sc ii} regions (HCH{\sc ii}s) are part of that class. The last evolutionary stage then contains the final stars that have stopped accreting. While most ultracompact H{\sc ii} regions (UCH{\sc ii}s) are likely part of the latter group, some of them may still be in the accretion phase and could hence still harbor HMPOs. {\it Definitions of a massive protostar:} Another debate in massive star formation centers around the exact definition of a "massive protostar".
If one followed the low-mass definition, which basically means that a protostar is an object that derives most of its luminosity from accretion, then "massive protostars" should not exist, or only during a very short period of time, because as soon as they are termed "massive" ($>$8\,M$_{\odot}$), their luminosity is quickly dominated by hydrogen burning. In this scenario, during the ongoing formation processes, one would then need to talk about "accreting stars". This approach is for example outlined by \citet{zinnecker2007}. A different definition for "massive protostars" is advocated, e.g., recently by \citet{beuther2006b}: In this picture, a protostar is defined in the sense that each massive object that is still in its accretion phase is called a "massive protostar", independent of the dominating source of luminosity. This definition follows more closely the usual terminology of "proto", meaning objects that are not finished yet. {\it Observational challenges:} Whatever physical or chemical processes in massive star formation we are interested in, one faces severe observational challenges because of the clustered mode of massive star formation and the on average large distances of a few kiloparsec. Therefore, high spatial resolution is a prerequisite for any such study. Furthermore, the early stages of massive star formation are characterized by on average cold gas and dust temperatures which are best observed at (sub)mm wavelengths. Hence, most observations presented in the following are based on (sub)mm interferometer observations of young massive star-forming regions at different evolutionary stages. The main body of this article is divided into four sections, dealing first with the initial conditions that are present prior to or at the onset of massive star formation. The next two sections deal with our current knowledge about the properties of potential massive accretion disks and the fragmentation behavior of massive gas clumps and cores.
The following section will then outline the status and future possibilities of astrochemical investigations. Finally, I try to sketch the directions in which current and future answers to the questions raised in the Abstract may lead. \section{The earliest stages of massive star formation} \label{early} What are the initial conditions of massive star formation? Until a few years ago, addressing this question observationally in a statistical sense was close to impossible because we had no means to identify large samples of sources prior to or at the onset of massive star formation. The situation has changed significantly since the advent of the space-based near- and mid-infrared missions that surveyed the Galactic plane, starting with ISO and MSX, and now conducted with much better sensitivity and spatial resolution by Spitzer. These missions have revealed more than $10^4$ Infrared Dark Clouds (IRDCs), which are cold molecular clouds that are identified as shadows against the Galactic background (e.g., \citealt{egan1998,carey2000,simon2006}). These clouds are characterized by on average cold temperatures ($\sim$15\,K), large masses (a few 100 to a few 1000\,M$_{\odot}$) and average densities of the order of $10^4-10^5$\,cm$^{-3}$ (e.g., \citealt{rathborne2005,sridharan2005,pillai2006}). Although these clouds appear as dark shadows, they may be starless, but they can also harbor embedded forming protostars. In fact, a statistical analysis of the percentage of starless IRDCs versus IRDCs with embedded protostars will be an important step toward understanding the time-scales of the earliest evolutionary stages.
Currently, the statistical database of in-depth IRDC studies is still insufficient for such an estimate (e.g., \citealt{rathborne2005,rathborne2006,pillai2006b,beuther2007a,motte2007}); however, it is interesting to note that until now no starless IRDC has been unambiguously identified in the literature: all detailed studies revealed embedded star formation processes. To first order, this triggers the interpretation/speculation that the high-mass starless core phase is likely to be extremely short-lived. Future investigations of larger samples will answer this question more thoroughly. \begin{figure}[htb] \begin{center} \end{center} \caption{\small Sample IRDCs from \citet{sridharan2005}: MSX A-band (8\,$\mu$m) images (black is bright) with 1.2\,mm emission contours: The first two numbers refer to the corresponding IRAS source and the third number labels the mm sub-sources. The five-pointed stars mark cores lacking good 1.2\,mm measurements.} \label{irdcs} \end{figure} In a recent spectral line study of a sample of 43 IRDCs (Fig.~\ref{irdcs}), \citet{beuther2007g} detected SiO(2--1) emission from 18 sources. Assuming that SiO is produced solely through sputtering from dust grains, and that this sample is representative for IRDCs in general, this indicates that at least 40\% of the IRDCs have ongoing outflow activity. Since the non-detection of SiO does not imply the absence of outflow activity, this number is a lower limit, and an even higher percentage of sources may already harbor ongoing star formation. The range of observed SiO line-widths down to zero intensity varied between 2.2 and 65\,km\,s$^{-1}$. While inclination effects and embedded objects of different mass could account for some of the differences, such effects are unlikely to cause the whole velocity spread. Therefore, \citet{beuther2007g} speculate whether the varying SiO line-widths are also indicators of the evolutionary stage, with the smallest line-widths occurring shortly after the onset of star formation activity.
In the same study, \citet{beuther2007g} observed CH$_3$OH and CH$_3$CN. While CH$_3$CN was detected only toward six sources, CH$_3$OH was found in approximately 40\% of the sample. The derived abundances are low, of the order of $10^{-10}$ with respect to H$_2$. These values are consistent with chemical models of the earliest evolutionary stages of high-mass star formation (e.g., \citealt{nomura2004}), and the CH$_3$OH abundances compare well to recently reported values for low-mass starless cores (e.g., \citealt{tafalla2006}). Zooming into selected regions in more detail, we studied one particularly interesting IRDC at high angular resolution with the Plateau de Bure Interferometer and the Spitzer Space Telescope (IRDC\,18223-3, see Fig.~\ref{irdcs} right panel; \citealt{beuther2005d,beuther2007a}). Combining the Spitzer mid-infrared data between 3 and 8\,$\mu$m with the 3.2\,mm long-wavelength observations from the Plateau de Bure Interferometer (PdBI), we did not find any mid-infrared counterpart to the massive gas core detected at 3.2\,mm (Fig.~\ref{18223-3}, \citealt{beuther2005d}). However, we did detect three faint 4.5\,$\mu$m features at the edge of the central 3.2\,mm continuum core. Since emission features that occur only in the 4.5\,$\mu$m band but in no other Spitzer band are usually attributed to shocked H$_2$ emission from molecular outflows (e.g., \citealt{noriega2004}), we concluded that the region likely hosts a very young protostar that drives a molecular outflow but is still too deeply embedded to be detected in the Spitzer IRAC bands. This interpretation found further support in line-wing emission in older CO and CS data. Based on the inferred central source, we predicted that the region should have a strongly rising spectral energy distribution (SED) and hence be detected at longer wavelengths.
As soon as the MIPSGAL mid- to far-infrared survey with Spitzer became available, we could identify the central source at 24 and 70\,$\mu$m (Fig.~\ref{18223-3}, \citealt{beuther2007a}). Combining the available mid-/far-infrared data with the long-wavelength observations in the mm regime, it is possible to fit the SED with a two-component model: one cold component ($\sim$15\,K and $\sim$576\,M$_{\odot}$) that contains most of the mass and luminosity, and one warmer component ($\sim$51\,K and $\sim$0.01\,M$_{\odot}$) to explain the 24\,$\mu$m data. The integrated luminosity of $\sim$177\,L$_{\odot}$ can be used to constrain additional parameters of the embedded protostar from the turbulent core accretion model for massive star formation \citep{mckee2003}. Following the simulations by \citet{krumholz2006b}, the data of IRDC\,18223-3 are consistent with a massive gas core harboring a low-mass protostellar seed of still less than half a solar mass with high accretion rates of the order of $10^{-4}$\,M$_{\odot}$yr$^{-1}$ and an age below 1000\,yrs. In the framework of this model, the embedded protostar is destined to become a massive star at the end of its formation process. While this interpretation is attractive, it is not unambiguous, and especially the derived time-scale from this model appears short when compared with recent outflow data that will be presented in the following section (\S\ref{18223-cassie}). \begin{figure}[htb] \caption{\small IRDC\,18223-3 images at different wavelengths from \citet{beuther2007a}. The color scales show Spitzer images at various wavelengths, and the contours show the 93\,GHz (3.2\,mm) continuum emission observed with the PdBI \citep{beuther2005d}. The left panel presents a three-color composite with blue 3.6\,$\mu$m, green 4.5\,$\mu$m and red 8.0\,$\mu$m (adapted from \citealt{beuther2005d}). The inlay zooms into the central core region. The middle and right panels show the Spitzer 24 and 70\,$\mu$m images, respectively.
The circles in each panel present the Spitzer beam sizes and the ellipse in the left panel presents the PdBI 3.2\,mm continuum synthesized beam. A size-ruler is also shown in the left panel.} \label{18223-3} \end{figure} In summary, these observations indicate that the physical and chemical conditions at the onset of low- and high-mass star formation do not differ significantly (except for largely different initial cloud clump masses and accretion rates), and that the time-scale for massive bound gas clumps to remain starless is likely relatively short. \section{Massive accretion disks?} \label{disks} As mentioned in the Introduction, molecular outflows and accretion disks can be used to discriminate between the different formation scenarios for massive stars. Massive outflows have been the subject of intense study for more than a decade (e.g., \citealt{shepherd1996a,beuther2002b,zhang2005,arce2006}), and although there is considerable discussion about the details, we find a growing consensus that massive molecular outflows are ubiquitous in high-mass star formation, and that collimated jet-like outflows do exist for massive sources as well, at least during the very early evolutionary stages \citep{beuther2005b}. The collimation of the outflows likely widens with ongoing evolution. Nevertheless, these data are consistent with the turbulent core model for massive star formation, whereas they are less easy to reconcile with the competitive accretion model, because the latter is so dynamic that collimated structures likely could not survive very long. Furthermore, the existence of collimated outflows can only be explained by magneto-hydrodynamic acceleration of the jet from an underlying accretion disk. Hence, there is ample indirect evidence for massive accretion disks; however, the physical characterization of disks in massive star formation still largely lacks an observational basis \citep{cesaroni2006}.
The two main reasons for this are that the expected massive accretion disks are still deeply embedded within their natal cores, complicating the differentiation of the disk emission from the ambient core, and that the clustered mode of massive star formation combined with the large average distances of the targets makes spatially resolving structures of the order of 1000\,AU a difficult observational task. In spite of these difficulties, the advent of broad spectral bandpasses, allowing us to study several spectral lines simultaneously, as well as the improved spatial resolution of existing and forthcoming interferometers, have increased the number of disk studies over the last few years. For a recent review see \citet{cesaroni2006}. Here, I will show three different examples of disk and/or rotation candidates in an evolutionary sequence: I start with a rotation and outflow investigation of the previously discussed IRDC\,18223-3 (\S\ref{early}), then I will present recent data on the high-mass disk candidate in the HMPO IRAS\,18089-1732, and finally observations of a massive disk candidate at a more evolved evolutionary stage will be discussed. \subsection{Rotation in IRDCs: the case of IRDC\,18223-3} \label{18223-cassie} As a follow-up of the Infrared Dark Cloud study of IRDC\,18223-3 discussed in \S\ref{early}, Fallscheer et al.~(in prep.) observed the same region with the Submillimeter Array (SMA) in several spectral setups around 230 and 280\,GHz covering outflow as well as dense gas tracers. Figure \ref{cassie} shows a compilation of the CO(2--1) data, one CH$_3$OH line and the dust continuum emission. The blue- and red-shifted CO(2--1) emission clearly identifies at least one large-scale outflow in the north-west south-east direction. This is consistent with two of the 4.5\,$\mu$m emission features at the edge of the core (Fig.~\ref{18223-3}, left panel and inlay).
There is another collimated red-shifted feature to the south-west corresponding to the third 4.5\,$\mu$m feature; however, we do not identify a blue counterpart and refrain from further interpretation of that feature. Since the main north-west south-east outflow shows blue- and red-shifted emission on both sides of the continuum peak, the orientation of the outflow should be close to the plane of the sky (see, e.g., models by \citealt{cabrit1990}), and hence the assumed underlying perpendicular rotating structure close to edge-on. Following the approach outlined in \citet{beuther2002b}, the outflow mass and outflow rate are 13\,M$_{\odot}$ and $3.5\times 10^{-4}$\,M$_{\odot}$yr$^{-1}$, respectively. With the above derived core mass of $\sim$576\,M$_{\odot}$ (\S\ref{early}), this source fits well into the correlation between outflow rate and core mass previously derived for HMPOs (Fig.~7 in \citealt{beuther2002b}). \begin{figure}[htb] \caption{\small SMA observations toward IRDC\,18223-3 (Fallscheer et al.~in prep.). The left panel shows the blue- and red-shifted CO(2--1) emission as solid and dashed contours overlaid on the grey-scale 1.3\,mm dust continuum emission. The central core is the same source as in Fig.~\ref{18223-3}. The right panel presents in grey-scale a 1st moment map (intensity weighted velocity distribution) of CH$_3$OH overlaid with the 1.1\,mm continuum emission. The empty and full circles are the synthesized beams of the line and continuum emission, respectively.} \label{cassie} \end{figure} While the outflow rate is consistent with the accretion rate previously derived from the SED (\S\ref{early}), discrepancies arise with respect to the age of the system.
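The quoted outflow parameters are connected by the usual dynamical estimate; schematically (a back-of-the-envelope sketch, with $r_{\rm out}$ the outflow extent and $v_{\rm max}$ the maximum outflow velocity):
\begin{equation}
t_{\rm dyn} \approx \frac{r_{\rm out}}{v_{\rm max}}, \qquad \dot{M}_{\rm out} \approx \frac{M_{\rm out}}{t_{\rm dyn}},
\end{equation}
so that the quoted values imply a dynamical age of $13\,{\rm M}_{\odot}/(3.5\times10^{-4}\,{\rm M}_{\odot}\,{\rm yr}^{-1}) \approx 3.7\times10^{4}$\,yrs.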
Although dynamical timescales are highly uncertain (e.g., \citealt{parker1991}), the size of the molecular outflow combined with the low inclination angle allows for at least a timescale estimate for the outflow of the order of a few $10^4$\,yrs, well in excess of the value $\leq 10^3$\,yrs previously derived from applying the SED to models (\S\ref{early}). Notwithstanding the large errors of the different estimates, the discrepancy of more than an order of magnitude appears real. How can we explain that? There is no clear answer yet, but one possibility is that the orientation of the disk-outflow system, with the disk close to edge-on, absorbs a large amount of flux, distorting the SED on the Wien side. If that were the case, the SED-based estimate could underestimate the true age of the system. Another possibility to resolve the discrepancy is that the initial phase of high-mass star formation may proceed more slowly, i.e., the first low-mass protostar(s) (destined to become massive or not?) form within the massive cores and already start driving outflows, but at that stage it is impossible to detect them in the near- to far-infrared because of the large extinction. In this picture, the high-mass star formation process would need to accelerate at some point, because otherwise the massive stars could not form within the short time-scales of a few $10^5$\,yrs (e.g., \citealt{mckee2002}). It is not clear why the whole process should start slowly and what could trigger such an acceleration later on. Obviously, more theoretical and observational work is required to explain the different time-scales in more detail. Figure \ref{cassie} (right panel) zooms into the central core and shows the dust continuum emission as well as the velocity structure of the dense central gas observed in CH$_3$OH$(6_{0,1}-5_{0,1})$ with a lower energy level of $E_{\rm{low}}=34.8$\,K.
Interestingly, both the continuum and the spectral line emission are elongated in the north-east south-west direction, perpendicular to the main molecular outflow. While the continuum emission shows three resolved emission features, CH$_3$OH exhibits a smooth velocity gradient across the source spanning approximately 3\,km\,s$^{-1}$. The CH$_3$OH line-width FWHM toward the continuum peak is 2.1\,km\,s$^{-1}$. The blue- and red-shifted features in the north-west are likely part of the molecular outflow, and one even sees a slight elongation of the continuum emission in that direction. While CH$_3$OH is a well-known shock tracer and hence regularly found within molecular outflows (e.g., \citealt{bachiller2001}), it is more of a surprise to find it in an elongated structure likely associated with rotation and infall perpendicular to the outflow. The extent of this structure is large, $\sim$6.5$''$, corresponding to more than 20000\,AU at the given distance of 3.7\,kpc. Although we have no methanol isotopologue in the setup to exactly determine the opacity of the line, a low-energy transition like this is likely to be optically thick. Hence, we are tracing some of the outer rotating structures, probably corresponding to a larger-scale rotating and potentially infalling/inspiralling toroid (e.g., \citealt{cesaroni2005,keto2007}). The small velocity spread across the structure as well as the relatively narrow CH$_3$OH line-width toward the core center are also consistent with tracing outer structures, because due to angular momentum conservation rotating structures should have lower velocities further out. Notwithstanding that we do not exactly know the age of IRDC\,18223-3, its non-detection up to 8\,$\mu$m puts it at an early evolutionary phase prior to the better studied HMPOs and Hot Molecular Cores.
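For reference, the conversion from angular to linear scales follows the small-angle relation
\begin{equation}
\frac{d}{\rm AU} \approx \left(\frac{\theta}{\rm arcsec}\right)\left(\frac{D}{\rm pc}\right) \approx 6.5 \times 3700 \approx 2.4\times10^{4},
\end{equation}
consistent with the quoted extent of more than 20000\,AU.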
Our data clearly show that even at such early stages molecular outflows and rotating structures perpendicular to them have developed, and it is likely that closer toward the core center one will find a real accretion disk. To investigate the latter in more detail, higher angular resolution observations of an optically thin dense gas tracer are required. \subsection{The HMPO disk candidate IRAS\,18089-1732} As a more evolved massive disk candidate, we have intensely studied the HMPO IRAS\,18089-1732 over the last few years. This source at a distance of 3.6\,kpc with a luminosity of $10^{4.5}$\,L$_{\odot}$ is part of a large sample of HMPOs; it hosts H$_2$O and Class {\sc ii} CH$_3$OH masers and has strong molecular line emission indicative of an embedded Hot Molecular Core \citep{sridha,beuther2002a}. In early SMA observations, \citet{beuther2004a,beuther2004b,beuther2005c} identified in SiO a molecular outflow in the north-south direction, and perpendicular to that a velocity gradient in HCOOCH$_3$ on scales of a few 1000\,AU. Although these data were indicative of rotation and an underlying massive accretion disk, the observations did not allow us to characterize the structure in more detail because of a lack of spatial resolution. Therefore, we have now observed IRAS\,18089-1732 in high-energy transitions of NH$_3$ at 1.2\,cm wavelength with the VLA and the ATCA at a spatial resolution of $0.4''$ \citep{beuther2008a}. These NH$_3$(4,4) and (5,5) lines have a two-fold advantage: their high excitation levels ($>200$\,K) ensure that we are tracing the warm inner regions and are less confused by the surrounding cold envelope, whereas the cm wavelength regime is less affected by the high optical depth of the dust emission in high column density regions and may hence be particularly well suited for massive disk studies (e.g., \citealt{krumholz2007a}).
Figure \ref{18089} presents an integrated image and a 1st moment map (intensity weighted velocity) of the corresponding VLA observations. \begin{figure}[htb] \begin{center} \end{center} \caption{\small The left panel shows the VLA NH$_3$(5,5) emission integrated from 31 to 37\,km\,s$^{-1}$ \citep{beuther2008a}. The right panel presents the corresponding 1st moment map contoured from 31.5 to 36.5\,km\,s$^{-1}$ (step 1\,km\,s$^{-1}$). The white-black dashed contours show the 1.2\,cm continuum emission. The asterisks mark the position of the submm continuum peak \citep{beuther2005c}, and the synthesized beams are shown at the bottom-left (grey NH$_3$ and dashed 1.2\,cm emission).} \label{18089} \end{figure} The 1st moment map confirms the previously assessed velocity gradient in east-west direction, perpendicular to the molecular outflow. The NH$_3$ line-width FWHM toward the central core is 4.7\,km\,s$^{-1}$, significantly broader than that of IRDC\,18223-3 (\S\ref{18223-cassie}). In the simple picture of equilibrium between gravitational and centrifugal forces, the rotationally bound mass would be $\sim$37\,M$_{\odot}$, of the same order as the whole gas mass as well as the mass of the central source (of the order of 15\,M$_{\odot}$). Furthermore, the position-velocity diagram is not consistent with Keplerian rotation. It even shows indications of super-Keplerian motion, which is expected for very massive disks where the rotation profile is not only determined by the mass of the central object but also by the disk itself (e.g., \citealt{krumholz2006b}). Hence, the new VLA and ATCA data clearly confirm the previous assessment of rotation perpendicular to the outflow/jet; however, the kinematic signatures of that rotating structure are not consistent with a Keplerian disk as in low-mass star formation, but show additional features which can be produced by massive self-gravitating disks as well as by infalling gas that may eventually settle onto the disk.
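The rotationally bound mass quoted above follows from equating gravitational and centrifugal acceleration at radius $R$ (a simple sketch, with $v_{\rm rot}$ estimated from the observed velocity spread):
\begin{equation}
M_{\rm rot} \approx \frac{v_{\rm rot}^{2}\,R}{G}.
\end{equation}
For a Keplerian disk one would additionally expect $v_{\rm rot}\propto R^{-1/2}$, a signature that the position-velocity data do not show.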
In addition, the detection of the high-excitation lines in the rotating material indicates high average gas temperatures $>$100\,K for the disk-like structure, well in excess of typical gas temperatures in low-mass disks of the order of 30\,K (e.g., \citealt{pietu2007}). Moreover, we detect double-lobe cm continuum emission close to the core center, where the two lobes are oriented in north-south direction, parallel to the outflow identified in SiO. Combined with previous data at longer wavelengths, we find a spectral index at cm wavelengths of 1.9, consistent with an optically thick jet \citep{reynolds1986}. It will be interesting to further zoom into the innermost regions with future instruments like ALMA and the eVLA to assess whether the quantitative deviations from typical low-mass accretion disks continue down to the smallest scales, or whether we will find Keplerian disk structures as known from their low-mass counterparts. \subsection{A more evolved massive disk candidate?} Moving along the evolutionary sequence, we have recently identified a potential disk around a more evolved candidate young stellar object (Quanz et al.~in prep.). The source, so far labeled mdc1 (massive disk candidate), was identified serendipitously during a near-infrared wide-field imaging campaign on Calar Alto via its K-band cone-like nebulosity and a central dark lane (Fig.~\ref{mdc1}). First single-dish bolometer and spectral line measurements revealed a 1.2\,mm flux of 12\,mJy and a systemic velocity of $\sim$51\,km\,s$^{-1}$. The latter value indicates a kinematic distance of $\sim$5\,kpc, consistent with the distances of a few UCH{\sc ii} regions in the surrounding neighborhood. To investigate this object in more detail, we recently observed it with the SMA at 1.3\,mm wavelength, mainly in the mm continuum and the $^{12}$CO/C$^{18}$O spectral line emission.
Figure \ref{mdc1} presents an overlay of the SMA data with the K-band nebulosity, and a few points need to be stressed: \begin{figure}[htb] \begin{center} \end{center} \caption{\small The grey-scale in both panels shows the K-band near-infrared nebulosity observed for this massive evolved disk candidate (Quanz et al.~in prep.). The contours are the corresponding SMA mm observations: the left panel shows the 1.3\,mm continuum emission, and the right panel shows in black and white contours the blue-shifted $^{12}$CO(2--1) and the integrated C$^{18}$O(2--1) emission, respectively.} \label{mdc1} \end{figure} (a) Although spatially unresolved with a synthesized beam of $\sim 4.0''$, the 1.3\,mm continuum peak exactly coincides with the infrared dark lane, consistent with the large column densities of the proposed disk-like structure. The flux measured with the SMA is 12\,mJy, the same as in the previous single-dish measurements. This indicates that there is no surrounding dust/gas envelope but rather an isolated central structure. Assuming optically thin dust emission at 50\,K, the approximate gas mass of the central structure is $\sim$5\,M$_{\odot}$. (b) We detect blue-shifted CO(2--1) spatially well correlated with the K-band nebulosity north of the dark lane. This confirms the initial interpretation of that feature as being due to an outflow. (c) The integrated C$^{18}$O(2--1) emission is elongated perpendicular to the outflow observed in CO and K-band continuum emission. The line-width FWHM of the C$^{18}$O emission is narrow, $\sim$0.8\,km\,s$^{-1}$; however, the spatial extent of this structure is large, of the order of $2\times10^4$\,AU. While the low gas mass and the missing more massive gas envelope could be interpreted in the framework of a low-mass source, such large disk structures as indicated by the C$^{18}$O emission are not known from typical low-mass disk sources (e.g., \citealt{simon2000}).
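The quoted gas mass follows from the standard optically thin dust emission relation (a sketch; the adopted dust opacity $\kappa_{\nu}$ and gas-to-dust ratio are the usual assumptions entering such estimates and are not specified here):
\begin{equation}
M \approx \frac{S_{\nu}\,d^{2}}{\kappa_{\nu}\,B_{\nu}(T_{\rm d})},
\end{equation}
with $S_{\nu}=12$\,mJy, $d\approx5$\,kpc and $T_{\rm d}=50$\,K, which for typical 1.3\,mm opacities yields a mass of a few M$_{\odot}$.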
Therefore, these observations can also be interpreted as a remnant disk/torus around an intermediate- to high-mass (proto)star that has already dispersed much of its envelope. Although C$^{18}$O is detected only in two channels, these show a clear velocity shift, and the small line-width may be due to the lower rotational velocities on large scales, assuming angular momentum conservation in rotating, potentially Keplerian structures.\\ Synthesizing the three example sources shown here, it is interesting to note that the line-widths are small in the youngest and the presumably oldest source, whereas they are large in the HMPO, which should be in its main accretion phase. In an evolutionary picture, this can be interpreted such that at early evolutionary stages infall, turbulence and rotation are not yet that vigorous. Then, in the main accretion phase, infall, rotation and outflow processes strongly increase the line-width. And finally, when the accretion stops, the envelope and disk slowly disperse and one observes only a remnant structure with small line-widths in the outer regions. This scenario is speculative; however, the number of disk candidates is steadily increasing, and since we are starting to sample more evolutionary stages, we are getting the chance to address disk evolution questions in high-mass star formation as well. \section{Fragmentation in high-mass star formation} How massive gas clumps fragment is one of the key questions if one wants to understand the formation of the Initial Mass Function, and as outlined in \S \ref{intro}, the two main massive star formation theories predict differences in the early fragmentation processes. In the following I will present several examples of fragmenting massive cores, addressing issues about fragmentation on the cluster scale, fragmentation of smaller groups, potential proto-trapezia, and the determination of density structures of sub-sources within evolving clusters.
\subsection{Resolving the massive proto-cluster IRAS\,19410+2336} To address fragmentation processes at early evolutionary stages, high angular resolution at (sub)mm wavelengths is the tool of choice to resolve the relevant substructures. \citet{beuther2004c} resolved the young massive star-forming region IRAS\,19410+2336 (distance $\sim$2.1\,kpc and luminosity $\sim$10$^{4}$\,L$_{\odot}$) at 1.3\,mm wavelength with the PdBI at approximately 2000\,AU linear resolution into 24 sub-sources. Although from a statistical point of view such numbers cannot compete with the clusters exceeding 100 or even 1000 stars observed at optical and near-infrared wavelengths, this is still one of the prime examples of a spatially resolved massive proto-cluster. Assuming that all emission features are due to cold dust emission from embedded protostars, they were able to derive a core mass function. With a power-law slope of $-2.5$, this core mass function is consistent with the Salpeter IMF slope of $-2.35$ \citep{salpeter1955}. Therefore, \citet{beuther2004c} interpreted these observations as support for the turbulent fragmentation scenario put forth by, e.g., \citet{padoan2002}. A few caveats need to be kept in mind: While \citet{beuther2004c} assumed a uniform gas temperature for all sub-sources, it is more likely that the central peaks are warmer than those further outside. This issue can be addressed by spectral line observations of temperature-sensitive molecules (e.g., H$_2$CO), which is an ongoing project by Rodon et al.~(in prep.). Furthermore, the assumption that all mm continuum peaks are of proto- or pre-stellar nature is not necessarily always valid; e.g., \citet{gueth2003} or \citet{beuther2004d} have shown that mm continuum emission can partly also be caused by collimated jets. However, only the central source is detected at cm wavelengths, and collimated jets should be detectable at cm wavelengths as well. Therefore, we believe that jets should not affect the analysis much.
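Expressed as a differential mass function, the derived slope corresponds to
\begin{equation}
\frac{\Delta N}{\Delta M} \propto M^{-2.5},
\end{equation}
close to the Salpeter form $dN/dM \propto M^{-2.35}$ \citep{salpeter1955}.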
Independent of these caveats, it is surprising that IRAS\,19410+2336 is still the only young massive star-forming region that is resolved into $>$10 sub-sources in the mm continuum emission. While this can be explained to some degree by the exceptionally good uv-coverage obtained for the given observations, which results in a good sampling of spatial structures, we also need to consider whether different modes of fragmentation may exist. Similar high-spatial-resolution studies of more proto-clusters spanning a broad range of luminosities are required to tackle this question in more detail. Another interesting question is associated with the spatial filtering of interferometers and the corresponding large-scale emission: many interferometric (sub)mm continuum studies of massive star-forming regions filter out of the order of 90\% of the flux; hence, large amounts of the gas are distributed on larger scales, usually $>10''$. The question remains whether this gas will eventually participate in the star formation process or not. \subsection{Fragmentation of potential proto-trapezia} \subsubsection{The enigmatic proto-trapezium W3-IRS5} The W3-IRS5 region is one of the prototypical high-mass star-forming regions with $\sim 10^5$\,L$_{\odot}$ at a distance of $\sim$1.8\,kpc that shows fragmentation on scales of the order of 1000\,AU, observed at near-infrared as well as cm wavelengths \citep{megeath2005,vandertak2005a}. However, not much was known about the cold dust and gas emission. Therefore, we observed the region with the PdBI at 1.3 and 3.5\,mm wavelengths with the new extended baselines, resulting in an unprecedented spatial resolution of $\sim 0.37''$ (Rodon et al.~in prep.). Figure \ref{w3irs5} shows a compilation of the 1.3\,mm continuum data and the SiO(5--4) and SO$_2(22_{2,20}-22_{1,21})$ spectral line emission. \begin{figure}[htb] \caption{\small PdBI observations of the W3-IRS5 system from Rodon et al.~(in prep.).
The left panel shows the 1.3\,mm continuum emission, and the stars mark near-infrared sources from \citet{megeath2005}. The middle panel presents as solid and dotted contours the blue- and red-shifted SiO(5--4) emission overlaid on the grey-scale 1.3\,mm continuum emission. The right panel finally shows the 1st moment map of SO$_2(22_{2,20}-22_{1,21})$ in grey-scale with the 1.3\,mm continuum contours.} \label{w3irs5} \end{figure} The mm continuum emission resolves the W3-IRS5 region into five sub-sources, four of which are coincident with near-infrared and cm emission peaks. Three of the sources are clustered within a very small projected region of only $\sim$2000\,AU. At this high spatial resolution we find extremely large average column densities of the order of a few times $10^{24}$\,cm$^{-2}$, which corresponds to visual extinctions $A_v$ between $5\times10^3$ and $10^4$ averaged over the beam size. Such extinctions should be far too large to allow any detection at near-infrared wavelengths; nevertheless, near-infrared counterparts are detected \citep{megeath2005}. This conundrum can likely be explained by the detection of several SiO outflows in the field. In particular, we find very compact blue- and red-shifted SiO emission toward the two main mm peaks, where the blue- and red-shifted emission is barely spatially separated (Fig.~\ref{w3irs5}, middle panel). Since the overall time-scale of the W3-IRS5 outflow system is relatively large (of the order of a few times $10^4$\,yrs, \citealt{ridge2001}), these compact features are unlikely to stem from very young outflows, but rather indicate that the outflows are oriented almost along the line of sight. The opening cones of the outflows likely allow emission from close to the protostars to escape the region and hence make them detectable at near-infrared wavelengths. The right panel of Fig.~\ref{w3irs5} shows the 1st moment map of SO$_2$ (intensity weighted velocities), which encompasses the mm continuum peaks.
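The extinction values follow from the standard gas-to-extinction calibration; roughly
\begin{equation}
A_{V} \approx \frac{N({\rm H}_2)}{\sim\!10^{21}\,{\rm cm}^{-2}}\,{\rm mag},
\end{equation}
so that column densities of a few times $10^{24}$\,cm$^{-2}$ translate into several thousand magnitudes of visual extinction.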
The coherent velocity field over the sub-sources is a strong indicator that the system is a bound structure and not some unbound chance alignment within the field (e.g., \citealt{launhardt2004}). In addition to the general velocity gradient from the south-east to the north-west, one tentatively identifies velocity gradients across the two strongest mm continuum peaks. Since we do not know the exact orientation of the outflows with respect to the SO$_2$ rotation axis, it is not yet possible to identify these structures with disk-like components as in \S\ref{disks}. Future observations in different tracers may help to shed more light on the rotational structure associated with each sub-source. It should also be noted that the line-width FWHM toward the mm continuum peaks varies between 6.2 and 7\,km\,s$^{-1}$, larger than the values reported in \S\ref{disks}. While the larger line-width compared with the IRDC and the more evolved source may be explained by the evolutionary sequence sketched at the end of \S\ref{disks}, the larger FWHM compared to the HMPO may have different reasons, among them the larger luminosity of W3-IRS5, its multiplicity compared with the so far unresolved source IRAS\,18089-1732, and also the molecular species, because SO$_2$ should be more affected by shocks than the NH$_3$ lines used for the IRAS\,18089-1732 study. In addition, the SO$_2$ moment map exhibits a velocity discontinuity with a velocity jump of the order of 4\,km\,s$^{-1}$ south-east of the mm continuum peaks. What is the cause of this discontinuity? Is it associated with the original core formation and a shock within converging flows, or is it of different origin? In summary, the combination of high-spatial-resolution observations of the continuum emission with outflow and dense gas tracers allows us to characterize many physical properties of this proto-trapezium system with respect to its multiple components and their outflow and rotation properties.
\subsubsection{Fragmentation of the hot core G29.96} The hot core G29.96, located right next to a well-known cometary H{\sc ii} region, comprises another example of several protostellar submm continuum sources within the innermost center of a high-mass star-forming region (distance $\sim$6\,kpc, luminosity $9\times 10^4$\,L$_{\odot}$). High-spatial-resolution observations with the SMA in its most extended configuration yielded a spatial resolution of $0.36''\times 0.25''$ in the submm continuum at $\sim$348\,GHz, corresponding to a linear resolution of 2000\,AU (Fig.~\ref{g29}, \citealt{beuther2007d}; the line data will be discussed in \S\ref{sequence}). The Hot Molecular Core previously identified in a high-excitation NH$_3$ line \citep{cesaroni1998} is resolved by these new data into four sub-sources within a projected diameter of $\sim$6900\,AU. Assuming that the emission peaks are of protostellar nature, \citet{beuther2007d} estimated a protostellar density of $\sim 2\times 10^5$\,protostars\,pc$^{-3}$. This is considered a lower limit since we are limited by spatial resolution, sensitivity and projection effects. Nevertheless, such a protostellar density is about an order of magnitude higher than values usually reported for star-forming regions (e.g., \citealt{lada2003}). Although this value is still about an order of magnitude lower than the protostellar densities that would be required in the merging scenario for massive stars (e.g., \citealt{bonnell2004,bally2005}), it is interesting to note that increasingly higher protostellar densities are reported when going to younger sources and better angular resolution (see also \citealt{megeath2005}). This allows us to speculate whether future observations with better spatial resolution and sensitivity toward extremely massive star-forming regions will reveal protostellar densities that may be sufficient to make mergers possible.
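The quoted protostellar density can be verified with a simple volume estimate: four sources within a projected diameter of $\sim$6900\,AU ($r\approx3450\,{\rm AU}\approx0.017$\,pc) give
\begin{equation}
n_{*} \approx \frac{4}{\frac{4}{3}\pi r^{3}} \approx \frac{4}{2\times10^{-5}\,{\rm pc}^{3}} \approx 2\times10^{5}\,{\rm pc}^{-3}.
\end{equation}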
While such a detection would not be proof that mergers exist, it will certainly be important to verify whether the required initial conditions exist at all. \begin{figure}[htb] \begin{center} \end{center} \caption{\small Compilation of data toward the UCH{\sc ii}/hot core region G29.96 from \citealt{beuther2007d}. The dashed contours present the cometary UCH{\sc ii} region, whereas the full contours show the older NH$_3$ observations of the hot core \citep{cesaroni1994}. The grey-scale with contours then presents the new high-resolution ($0.36''\times 0.25''$) submm continuum data from the SMA.} \label{g29} \end{figure} \subsection{Density structure of sub-sources -- IRAS\,05358+3543} As a final example of the potential of (sub)mm continuum studies, I present the recent multi-wavelength investigation of the HMPO IRAS\,05358+3543. This region at a distance of 1.8\,kpc with a luminosity of $10^{3.8}$\,L$_{\odot}$ was observed in a combined effort with the PdBI and the SMA at arcsecond resolution in four wavelength bands (3.1 and 1.2\,mm, and 875 and 438\,$\mu$m; \citealt{beuther2007c,leurini2007}). While many details about the sub-structure of the forming cluster can be derived, here I will discuss only two results. Based on the multi-wavelength data, \citet{beuther2007c} fitted the spectral energy distribution on the Rayleigh-Jeans side of the spectrum (Fig.~\ref{sed}). While the main source can be well fitted by a typical protostellar spectrum consisting of free-free emission at long wavelengths and a steep flux increase at shorter wavelengths due to the dust emission, another sub-source did not fit at all into that picture. In particular, the shortest-wavelength data point at 438\,$\mu$m shows significantly lower fluxes than expected for a typical protostar. The most likely explanation for this effect is that we are dealing with a very cold source and are therefore already approaching the peak of the spectral energy distribution.
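The departure from the Rayleigh-Jeans regime can be quantified: the Rayleigh-Jeans approximation requires $h\nu \ll kT$, whereas at 438\,$\mu$m for a source as cold as $\sim$20\,K
\begin{equation}
\frac{h\nu}{kT} = \frac{hc}{\lambda k T} \approx \frac{1.44\,{\rm cm\,K}}{0.0438\,{\rm cm}\times 20\,{\rm K}} \approx 1.6,
\end{equation}
so the observed flux falls well below a Rayleigh-Jeans extrapolation from longer wavelengths.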
The data allowed us to estimate an upper limit for the dust temperature of $\leq 20$\,K. Since we also do not detect any other line emission from this core (mainly from typical hot core molecules, \citealt{leurini2007}), it may well be a starless core right in the vicinity of an already more evolved massive protostar. Further investigations of this sub-source in typical cold gas tracers like N$_2$H$^+$ or NH$_3$ are required to test this proposal. Independent of whether this source harbors an embedded protostar or not, these observations show the importance of short-wavelength data at high spatial resolution if one wants to differentiate between critical core parameters like the dust temperature. \begin{figure}[htb] \caption{\small The left panel presents the SED toward the coldest sub-source in IRAS\,05358+3543 \citep{beuther2007c}. The parameters of the fits are marked in the figure. The right panel shows intensities averaged in uv-annuli and plotted versus the baseline length for different sub-sources and wavelengths. Most can be well fitted by power-law distributions.} \label{sed} \end{figure} Another physical parameter which has so far not been observationally constrained for massive star formation is the density profile of individual sub-sources. While density profiles of low-mass star-forming cores have been well characterized (e.g., \citealt{motte1998,wardthompson1999,andre2000}), in high-mass star formation, density profiles have until now only been derived from single-dish observations covering the scales of whole clusters but not individual sub-sources (e.g., \citealt{beuther2002a,mueller2002,hatchell2003}). This is partly due to the technical problem that interferometer observations filter out large amounts of the flux and hence make density profile determinations from their images extremely unreliable. To overcome this problem, \citet{beuther2007c} analyzed the data directly in the uv-domain prior to any Fourier transformation.
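The conversion applied in such a uv-domain analysis can be sketched as follows. This is a schematic relation only, assuming optically thin dust emission, spherical symmetry and the Rayleigh-Jeans regime; the exact prefactors and the validity range ($1<p+q<3$) depend on the detailed source structure. For a density profile $\rho\propto r^{-p}$ and a temperature profile $T\propto r^{-q}$, the line-of-sight integration yields an intensity profile
\begin{displaymath}
I(\theta)\propto\theta^{-(p+q-1)},
\end{displaymath}
whose two-dimensional Fourier transform corresponds to a visibility amplitude
\begin{displaymath}
V(b)\propto b^{\,p+q-3}.
\end{displaymath}
A power-law slope $\alpha$ fitted to the visibility amplitudes thus translates into $p=\alpha+3-q$; for example, with $q=0.4$, density indices $p=1.5$--$2$ correspond to visibility slopes $\alpha$ between $-1.1$ and $-0.6$.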
Figure \ref{sed} shows the corresponding plots of the observed intensities versus the uv-distance for three sub-sources in three wavelength bands, respectively. The observations cannot be fitted with Gaussian distributions, but much better fits are achieved with power-law distributions. These power-laws in the uv-domain can be converted directly to the corresponding power-laws of the intensity profiles in the image plane. Further assuming a temperature distribution $T\propto r^{-0.4}$, we can then infer the density profiles of individual sub-sources of the evolving cluster. The derived density profiles $\rho\propto r^{-p}$ have power-law indices $p$ between 1.5 and 2. Although this result is similar to the density profiles previously determined for low-mass cores, to our knowledge this is the first time that they have been observationally constrained for resolved sub-sources in a massive star-forming region. The density structure is an important input parameter for any model of star formation (e.g., \citealt{mckee2003}). \section{Astrochemistry} \subsection{Toward a chemical evolutionary sequence} \label{sequence} Astrochemistry is a continuously growing field in astronomy. Although line-survey style studies of different sources have existed for quite some time (e.g., \citealt{blake1987,schilke1997b}), these studies had usually been performed with single-dish instruments, averaging the chemical properties over the whole cluster-forming regions. Since the advent of broadband receivers at interferometers like the SMA, it is now also possible to perform imaging spectral line surveys that allow us to spatially differentiate which molecules are present in which part of the targeted regions, for example, the spatial differentiation between nitrogen- and oxygen-bearing molecules in Orion-KL (e.g., \citealt{blake1996,beuther2005a}).
In addition to the spatial analysis of individual regions, we are also interested in analyzing how the chemistry evolves with time. As an early step in this direction, we combined SMA observations taken in the same spectral setup around 862\,$\mu$m toward four massive star-forming regions over the last few years (Beuther et al. subm.). These four regions comprise a range of luminosities between $10^{3.8}$\,L$_{\odot}$ and $10^5$\,L$_{\odot}$, and they cover different evolutionary stages from young High-Mass Protostellar Objects (HMPOs) to typical Hot Molecular Cores (HMCs): Orion-KL: HMC, $L\sim 10^5$\,L$_{\odot}$, $D\sim 0.45$\,kpc \citep{beuther2005a}; G29.96: HMC, $L\sim 9\times 10^4$\,L$_{\odot}$, $D\sim 6$\,kpc \citep{beuther2007d}; IRAS\,23151: HMPO, $L\sim 10^5$\,L$_{\odot}$, $D\sim 5.7$\,kpc \citep{beuther2007f}; IRAS\,05358: HMPO, $L\sim 10^{3.8}$\,L$_{\odot}$, $D\sim 1.8$\,kpc \citep{beuther2007c,leurini2007}. Smoothing all datasets to the same linear spatial resolution of 5700\,AU, we are now able to compare these different regions. Figure \ref{sample_spectra} presents typical spectra extracted toward the HMC G29.96 and the HMPO IRAS\,23151. \begin{figure}[htb] \includegraphics[angle=-90,width=5.9cm]{g29_lsb.eps} \includegraphics[angle=-90,width=5.9cm]{g29_usb.eps}\\ \includegraphics[angle=-90,width=5.9cm]{23151_peak1_lsb.eps} \includegraphics[angle=-90,width=5.9cm]{23151_peak1_usb.eps}\\ \caption{\small SMA spectra extracted toward two massive star-forming regions (G29.96 top row \& IRAS\,23151+5912 bottom row, Beuther et al., subm.). The spectral resolution in all spectra is 2\,km/s.
The left and right columns show the lower and upper sideband data, respectively.} \label{sample_spectra} \end{figure} A detailed comparison between the four sources is given in a forthcoming paper (Beuther et al.~subm.); here we just outline a few differences in a qualitative manner.\\ $\bullet$ The HMCs show far more molecular lines than the HMPOs. Orion-KL and G29.96 appear similar, indicating that the nature of the two sources is likely comparable as well. Regarding the two HMPOs, the higher-luminosity one (IRAS\,23151) still shows more lines than the lower-luminosity source (IRAS\,05358). Since IRAS\,05358 is approximately three times closer to us than IRAS\,23151, this is not a sensitivity issue but is likely due to the different-luminosity objects forming at the core centers.\\ $\bullet$ The ground-state CH$_3$OH lines are detected toward all four sources. However, the vibrationally-torsionally excited CH$_3$OH lines are only strongly detected toward the HMCs Orion-KL and G29.96. Independent of the luminosity, the HMPOs exhibit only one CH$_3$OH $v_t=1$ line, which can easily be explained by the lower average temperatures of the HMPOs.\\ $\bullet$ A more subtle difference can be discerned by comparing the SO$_2$ line and the HN$^{13}$C/CH$_3$CH$_2$CN line blend near 348.35\,GHz (in the upper sideband). While the SO$_2$ line is found toward all four sources, the HN$^{13}$C/CH$_3$CH$_2$CN line blend is strongly detected toward the HMCs, but it is not found toward the HMPOs. In the framework of warming-up HMCs, this indicates that nitrogen-bearing molecules are either released from the grains only at higher temperatures, or they are daughter molecules which need some time during the warm-up phase to be produced in gas-phase chemical networks.
In both cases, such molecules are not expected to be found much prior to the formation of a detectable HMC.\\ $\bullet$ Comparing the spatial distribution of different molecules, we find, e.g., that C$^{34}$S is observed mainly at the core edges and not toward the submm continuum peak positions. This difference can be explained by temperature-selective desorption and successive gas-phase chemistry reactions: CS desorbs early from the grains at temperatures of a few 10\,K and should peak toward the main continuum sources during the earliest evolutionary phases. Subsequently, when the core warms up to $\sim$100\,K, H$_2$O desorbs and dissociates to OH. The OH then quickly reacts with the sulphur to form SO and SO$_2$, which should then peak toward the main continuum sources. This is what we observe in our data. The fact that the C$^{34}$S peaks are offset from the submm continuum condensations even toward the younger sources is due to their evolutionary stage, where they have already heated up their central regions to more than 100\,K. Even younger sources are required to confirm this scenario. \subsection{C$_2$H as a tracer of the earliest evolutionary stages?} \label{sec_c2h} In an effort to study a larger source sample with respect to its chemical evolution, we observed 21 massive star-forming regions covering all evolutionary stages from IRDCs via HMPOs/hot cores to UCH{\sc ii} regions with the APEX telescope at submm wavelengths (Beuther et al.~subm.). While most spectral lines were detected mainly toward the HMPO/hot core sources, the ethynyl molecule C$_2$H is omnipresent toward all regions. To get an idea about the spatial structure of ethynyl, we went back to an older SMA data-set targeting the HMPO IRAS\,18089-1732 at the same frequency around 349.4\,GHz of the C$_2$H line \citep{beuther2005c}.
Because we were not able to image the spatial distribution of C$_2$H at that time, we now restricted the data to only the compact configuration, allowing us to better image the larger-scale distribution of the gas. Figure \ref{c2h} presents the resulting molecular line map, and we find that C$_2$H is distributed in a shell-like fashion around the central protostellar condensation. Comparing this with all other molecules imaged in the original paper, only C$_2$H exhibits this behavior. To better understand this effect, we ran a set of 1D chemical models for a cloud of 1200\,M$_{\odot}$, a density power-law $\rho\propto r^{-1.5}$ and different temperature distributions $T\propto r^q$. A snapshot of these models after an evolutionary time of $5\times 10^4$\,yrs is presented in Figure \ref{c2h}. The models reproduce well the central C$_2$H gap in IRAS\,18089-1732, which should have approximately the same age. \begin{figure}[htb] \caption{\small The left panel presents in grey-scale the C$_2$H emission and in thick solid contours the corresponding submm continuum from the SMA toward the HMPO IRAS\,18089-1732 (Beuther et al., subm.). The right panel shows a chemical model explaining the decreased emission toward the core center after approximately $5\times 10^4$\,yrs. The parameter $q$ denotes the temperature power-law index, and the $T$ values refer to the temperature at the core edge or to isothermal values ($q=0$).} \label{c2h} \end{figure} While these models reproduce the observations, they also predict how the C$_2$H emission should look at different evolutionary times. In particular, C$_2$H forms quickly early on, also at the core center.
Since few molecules exist that do not freeze out and thus remain available to investigate the cold early phases of massive star formation (valuable exceptions are, e.g., NH$_3$ or N$_2$H$^+$), the detection of C$_2$H toward the whole sample, in combination with the chemical models, leads to the prediction that C$_2$H may well be an excellent molecule to investigate the physical conditions of (massive) star-forming regions at very early evolutionary stages. High-spatial-resolution observations of IRDCs are necessary to investigate this potentially powerful astrophysical tool in more detail. \subsection{Employing molecules as astrophysical tools} While the chemical evolution of massive star-forming regions is interesting in itself, one also wants to use the different characteristics of molecular lines to trace various physical processes. In contrast to molecules like SiO and CO that are well-known outflow/jet tracers, the task gets more difficult when searching for suitable accretion disk tracers. Investigating our sample and disk claims in the literature, one finds that in many cases exclusively one or the other molecule allows the investigation of rotational motion, whereas most other molecular lines remain without clear signatures. For example, the HN$^{13}$C line discussed above (\S \ref{sequence}) traces rotation in the hot core G29.96 but is not even detectable in younger sources. Conversely, C$^{34}$S traced disk rotation in the young HMPO IRAS\,20126 \citep{cesaroni2005}, but not anymore toward more evolved sources (\S\ref{sequence}). These differences imply that one is unlikely to find a single, uniquely well-suited molecular line allowing the study of large samples of massive accretion disks, but that one has to select for each source or source class the suitable molecule for detailed investigations. In the following, I give a short table with molecules and their potential usefulness for studying different physical processes.
Table 1 is restricted to molecules with spectral lines at cm/(sub)mm wavelengths and does not claim completeness; it should serve only as a rough overview, and it lists only the main isotopologues of each species. \begin{table}[htb] \begin{tabular}{ll} \hline OH & Zeeman effect, magnetic fields, maser signpost of star\\ & formation \\ CO & General cloud structure, outflows \\ SiO & Shocks due to jets/outflows\\ CO$^+$ & Far-UV radiation from embedded protostars \\ CS & Dense gas, rotation, also outflows \\ CN & Photodominated regions, Zeeman effect, magnetic fields \\ SO & Shocks, dense gas \\ H$_2$O & Shocks and hot cores, rotation (H$_2^{18}$O), maser signpost of\\ & star formation \\ HDO & Deuterium chemistry \\ H$_2$D$^+$ & Cold gas, pre-stellar cores, freeze out \\ HCN & Dense cores, also outflows \\ HNC & Dense cores, rotation (HN$^{13}$C) \\ HCO$^+$ & Outflows, infall, cosmic rays, ionization degree, \\ & dense gas (H$^{13}$CO$^+$) \\ SO$_2$ & Shocks, dense gas \\ C$_2$H & Early evolutionary stages (\S\ref{sec_c2h})\\ N$_2$H$^+$ & Early evolutionary stages \\ N$_2$D$^+$ & Deuteration, freeze out \\ H$_3$O$^+$ & Cosmic rays \\ H$_2$CO & Dense gas, temperatures \\ NH$_3$ & Cold and hot cores, rotation, temperatures \\ CH$_3$OH & Shocks, young rotating structures? (\S\ref{18223-cassie}), temperatures, \\ & maser signpost of massive star formation \\ CH$_3$CN & Hot cores, temperatures, rotation \\ CH$_3$CCH & Dense gas, temperatures \\ HCOOCH$_3$ & Hot cores, rotation \\ CH$_3$CH$_2$CN & Hot cores \\ \hline \end{tabular} \caption{A few useful molecules and some of their potential applications.} \label{linelist} \end{table} \section{Conclusions and summary} This article tries to outline how far we can currently constrain physical and chemical properties in massive star formation using (sub)mm interferometry. Coming back to the original questions raised in the abstract: (a) What are the physical conditions at the onset of massive star formation?
(b) What are the characteristics of potential massive accretion disks, and what do they tell us about massive star formation in general? (c) How do massive clumps fragment, and what does that imply for high-mass star formation? (d) What do we learn from imaging spectral line surveys with respect to the chemistry itself as well as for utilizing molecules as tools for astrophysical investigations? Can we reasonably answer any of these questions with confidence? No clear-cut answers are possible yet; however, the observations are paving the way toward shedding light on many of these issues, and one can try to give tentative early answers. The following is a rough attempt to outline the directions for current and future answers in these fields: (a) Massive gas clumps prior to or at the onset of high-mass star formation are characterized by cold temperatures of the order of 15\,K and small line-widths indicative of a low level of turbulence. Their molecular abundances appear comparable to those of low-mass starless cores. Interestingly, the outflow detection rates toward IRDCs are high, and no genuine High-Mass Starless Cores have been reported in the literature yet. Although the statistical basis is not solid enough yet, this allows us to speculate that the high-mass starless phase is likely to be very short-lived. (b) The detection of a real accretion disk around a massive protostar still remains an open issue. However, we find many rotating structures in the vicinity of young massive star-forming regions all the way from IRDCs to Hot Molecular Cores. These structures are on average large, with sizes between $1\times 10^3$ and $2\times 10^4$\,AU, and they have masses of the order of that of the central protostar. Hence, most of them are not Keplerian accretion disks but rather larger-scale rotating/infalling structures or toroids that may feed more genuine accretion disks in the so far unresolved centers of these regions.
(c) Fragmentation of massive star-forming regions is frequently observed, and the core mass function of one young region is consistent with the Initial Mass Function. However, caveats such as unknown temperature distributions or missing flux on larger scales may still affect the results. Furthermore, we find proto-trapezium-like structures which show multiple bound sources on small scales of a few 1000\,AU, implying protostellar densities of the order of $10^5$\,protostars\,pc$^{-3}$. Such densities are still not sufficient to allow coalescence; however, it may be possible to find even higher protostellar densities with the improved observational capabilities of future instruments. Although mergers do not appear necessary to form massive stars in general, they still remain a possibility for the most massive objects. (d) Astro-chemistry is a young branch of astrophysical research, and we are currently only scratching the surface of its potential. The different paths to follow in the coming years are manifold: With larger source samples, we will be able to derive a real chemical evolutionary sequence, with one of the goals being to use chemistry as an astrophysical clock. Furthermore, understanding the chemical differences is important in order to use molecular lines as astrophysical tools to investigate the physical processes taking place. Moreover, another current hot topic is planet formation, and in this context astro-biology is a rising subject. In this regard, understanding astro-chemistry and detecting new and more complex molecules in space is paving the way for future astro-biological science. \noindent{\small {\bf Acknowledgments:} Thanks a lot to Cassie Fallscheer and Javier Rodon for preparing the figures related to the IRDC\,18223-3 outflow/disk system and the W3-IRS5 fragmenting core. I further acknowledge financial support by the Emmy-Noether-Program of the Deutsche Forschungsgemeinschaft (DFG, grant BE2578).} \input{refs} \input{biermann_beuther.bbl} \vfill \end{document}
\section{Introduction} Let's Plays have garnered an enormous audience on websites such as Twitch and YouTube. At their core, Let's Plays consist of individuals playing through a segment of a video game and engaging viewers with improvised commentary, oftentimes not related to the game itself. There are a number of reasons why Let's Plays may be of interest to Game AI researchers. First, part of Let's Play commentary focuses on explaining the game, which is relevant to game tutorial generation, gameplay commentary, and explainable AI in games broadly. Second, Let's Plays focus on presenting engaging commentary. Thus if we can successfully create Let's Play commentary we may be able to extend such work to improve the engagement of NPC dialogue and system prompts. Finally, Let's Plays are important cultural artifacts, as they are the primary way many people engage with video games. Up to this point Let's Plays have been drawn on for tasks like bug detection \cite{lin2019identifying} or learning game rules \cite{guzdial2017game}. To the best of our knowledge there have been only two prior attempts at commentary generation itself. The first focused on generating a bag-of-words representation, an unordered collection of words that does not constitute legible commentary \cite{guzdial2018towards}. The second structured commentary generation as a sequence-to-sequence generation task \cite{li2019end}. We do not compare against this second approach as it was not yet published during the development of this research. In this paper we present an attempt at generating Let's Play commentary with deep neural networks, specifically a convolutional neural network (CNN) that takes in a current frame of a gameplay video and produces commentary. As an initial attempt at this problem we focus on Let's Plays of the game Minecraft. We chose Minecraft due to its large and active Let's Play community and due to Minecraft's relative graphical simplicity.
In this paper we present two major contributions: (1) a dataset of Minecraft gameplay frames and their associated commentary and (2) the results of applying a CNN to this task, compared to the approach presented by Guzdial et al. \cite{guzdial2018towards}. The remainder of this paper covers relevant prior work, presents our approach and implementation of the baseline, presents the results of a brief quantitative analysis, and gives example output of our approach. \section{Related Work} This approach aims to take in video game footage (raw pixels) and output commentary. To the best of our knowledge, Guzdial et al. \shortcite{guzdial2018towards} were the first to attempt this problem. Guzdial et al. focused on a preliminary approach towards Let's Play commentary of Super Mario Bros. gameplay, but notably could not produce full commentary. Their approach focused on clustering pairs of Let's Play commentary utterances and gameplay video and then training non-deep machine learning models to predict a bag of words from an input gameplay frame. More recently, Li et al. \shortcite{li2019end} represented artificial Let's Play commentary as a sequence-to-sequence generation problem, converting video clips to commentary. Prior approaches have attempted to create commentary from logs of in-game actions for both traditional, physical sports games and video games \cite{kolekar2006event,graefe2016guide,barot2017bardic,Ehsan:2018:RNM:3278721.3278736,scores2017lee,Ehsan:2019:ARG:3301275.3302316}. These approaches depend on access to a game's engine or the existence of a publicly accessible logging system. This work draws on convolutional neural networks (CNNs) to predict commentary for a particular frame of a gameplay video. CNNs have been employed to take an input snapshot of a game and predict player experience \cite{guzdial2016deep,liao2017deep}, game balance \cite{liapis2019fusing}, and the utility of particular game states \cite{stanescu2016evaluating}.
Significant prior work has explored Let's Play as a cultural artifact and as a medium, including studies of the audience of Let's Plays \cite{sjoblom2017people}, the content of Let's Plays \cite{sjoblom2017content}, and the building of communities around Let's Play \cite{hamilton2014streaming}. The work described in this paper is a preliminary means of exploring the possibility of automated generation of Let's Play commentary. We anticipate future developments in this work to more closely engage with scholarship in these areas. Other approaches employ Let's Play videos as input for purposes beyond producing commentary. Both Guzdial and Riedl \shortcite{guzdial2016game} and Summerville et al. \shortcite{summerville2016learning} employ Longplays, a variation of Let's Play generally without commentary, as part of a process to generate video game levels through procedural content generation via machine learning \cite{summerville2017procedural}. Other work has looked at eSports commentators in a similar manner, as a means of determining what approaches the commentators use that may apply to explainable AI systems \cite{dodge2018experts}. Lin et al. \cite{lin2019identifying} draw on summarizing metrics of gameplay video, including Let's Plays, as a means of automatically detecting bugs, but do not directly engage with the video data. \section{System Overview} In this section, we give a high-level overview of two approaches (our approach and a baseline) for automated commentary generation. We describe our implementation of the baseline approach as originally discussed in Guzdial et al. \cite{guzdial2018towards}. The baseline can be understood as a variation of our approach in which we test the assumption that training on clustered subsets of data reduces the variance of Let's Play commentary and consequently improves commentary prediction. We first delve into the preprocessing steps used to extract and featurize our data for the experiments.
We then describe the two approaches in succession. In an idealized final version of this approach, Let's Play videos would first be collected with their associated commentary. Second, this data would be preprocessed and featurized. Third, this data would be used to train a convolutional neural network model. Finally, this new model would be fed new video and produce output commentary. \subsection{Dataset} For our dataset, we collected three 25-minute YouTube videos, one each from three popular Minecraft Let's Players. We extracted the associated text transcripts for each of these videos generated by YouTube to serve as our commentary corpus. We applied ffmpeg, an open source tool for processing multimedia files, to break each video into individual frames at 1 FPS. Although we observed that each sentence in the commentary usually spanned a few frames, we purposely paired each frame with a sentence. In other words, there were multiple frame-comment pairs with the same commentary. We did this for simplicity's sake so that it would be easier for our model to learn the relationship between single frame-comment pairs. We refrained from converting the images to grayscale to prevent the loss of any color features. This is especially important for a game like Minecraft, in which all game entities are composed of cubes that primarily differ according to color.
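The frame-comment pairing just described can be sketched as follows. This is a minimal illustration, and the function name and the tuple-based transcript format (start second, end second, sentence) are assumptions; in practice, the YouTube transcripts would first need to be parsed into timed sentences.

```python
def pair_frames_with_comments(transcript, duration_s):
    """Pair 1 FPS frames with the transcript sentence spoken at that second.

    transcript: list of (start_s, end_s, sentence) tuples (illustrative
    format, not the raw YouTube transcript). Every frame whose timestamp
    falls inside a sentence's span receives that sentence, so the same
    comment is deliberately repeated across several frame-comment pairs.
    """
    pairs = []
    for t in range(duration_s):  # one frame per second of video
        for start, end, sentence in transcript:
            if start <= t < end:
                pairs.append((t, sentence))
                break
    return pairs
```

Frames falling outside every transcript span are simply skipped in this sketch; one might instead attach the nearest sentence in time.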
In total, our dataset comprises 4840 frame-comment instances, 3600 of which were used for our training set and the rest for our test set.\footnote{This dataset is publicly available at: https://github.com/shukieshah/AutoCommentateDataset.} \begin{figure*}[tb] \centering \includegraphics[width=5in]{clusters.png} \caption{The medoids of each of the clusters found by the K-Medoids clustering algorithm.} \label{fig:clusters} \end{figure*} \subsection{Sentence Embeddings} Sentence embeddings are a standard way in the field of natural-language processing (NLP) to represent sentences in a vector representation appropriate for deep neural networks. We tokenized the sentence in each frame-comment pair and converted it to a 512-dimensional numerical vector using the Universal Sentence Encoder \cite{cer2018universal}. The Universal Sentence Encoder is a model that is trained with a deep averaging network (DAN) encoder to convert plain English strings into a corresponding vector representation. We used this representation over traditional Word2Vec word embeddings because the model specifically caters to 'greater-than-word' length strings such as the sentences and phrases present in our dataset. The sentence embeddings produced through this method are better able to model contextual awareness in sequences of words, which is crucial for the use case of commentary generation. \subsection{Our Approach} For our approach, we trained a convolutional neural network (CNN) with the 4840 training instances, taking as input the gameplay frame and predicting the associated commentary in a sentence embedding representation.
The CNN architecture was as follows: (1) a conv layer with 32 3x3 filters followed by a max pool layer, (2) a second conv layer with 64 3x3 filters, (3) a third conv layer with 64 3x3 filters followed by a max pool layer, (4) a fully connected layer of length 1024, (5) a dropout layer fixed at 0.9, (6) a fully connected layer of length 512, which represents the final 512-vector sentence embedding. We used adam \cite{kingma2014adam} for optimization (with a learning rate of 0.001) and mean-square error for our loss function. All layers used leaky ReLU activation \cite{xu2015empirical}. We employed Tensorflow \cite{abadi2016tensorflow} and trained until convergence on our training set (roughly 20 epochs). We note that this architecture was constructed by considering architectures for similarly sized datasets for image captioning (including CifarNet \cite{hosang2015taking}), a related area of computer vision, given that the Let's Play utterance can be thought of as an abstract caption for the gameplay frame. \section{Baseline} The baseline, adapted from Guzdial et al. \cite{guzdial2018towards}, is ironically more complex than our approach. This approach calls for first clustering the Let's Play data as frame and utterance pairs and then training a unique machine learning model for each cluster individually. Thus we first cluster our 4840 training instances and then train the same CNN architecture used in our approach on each of the largest output clusters. We walk through this process in greater depth below. \subsection{Image Embeddings} For the clustering of the frame and utterance data we re-represent our gameplay frames in an image embedding representation. Image feature embeddings are similar to the sentence embeddings discussed above. These vectors were generated by passing images through a ResNet \cite{targ2016resnet} CNN architecture trained on the ImageNet dataset \cite{deng2009imagenet}.
The images were fed through the network up to the penultimate activation layer, and the activation weights were extracted as features for the particular image. This allowed us to better represent the context of the image for clustering purposes without having to directly compare images to one another, which would have been highly time consuming. \subsection{Clustering} Using the process described in \cite{guzdial2018towards}, we employed K-medoids clustering with $K$ estimated via the distortion ratio, using mean square error as the distance function for the clusters and comparing image and sentence vectors combined into a single vector. Figure \ref{fig:clusters} shows the actual medoid instances (frame-comment pairs) for each of the learned clusters. It is interesting to note that the clusters with the most instances (clusters 9 and 1, respectively) comment on 'building' things, a key component of Minecraft. Furthermore, the clustering seems to have chosen clusters that capture unique moments of gameplay. For example, cluster 2 represents an opening screen where Let's Players typically introduce themselves and greet the audience. Cluster 3, on the other hand, represents underground gameplay, which is distinctive both visually and mechanically. From a qualitative standpoint, the clusters appear to effectively capture high-level themes. Thus we find it to be a successful implementation of the Guzdial et al. work \cite{guzdial2018towards}. \begin{figure*}[tb] \centering \includegraphics[width=6in]{predictions.png} \caption{Each frame is paired with the five closest nearest-neighbors of the model's actual predicted commentary.} \label{fig:examples} \end{figure*} \section{Evaluation} \begin{table}[tb] \begin{center} \caption{Average percentile error of our approach and the three largest clusters for the baseline.
} \begin{tabular}{|l|c|c|} \hline Model & Percent Error & Training Set Size\\ \hline CNN & \textbf{0.961\rpm0.026} & 4840\\ \hline Cluster 9 CNN & 0.977\rpm0.023 & 1336\\ \hline Cluster 1 CNN & 0.975\rpm0.042 & 802\\ \hline Cluster 3 CNN & 0.980\rpm0.024 & 684\\ \hline \end{tabular} \end{center} \label{tab:results} \end{table} Table 1 compares the results of our approach and the baseline approach for the three largest clusters in terms of the average percent error on the test set. By average percent error we mean the percentile error for each predicted utterance compared to the true utterance, averaged across the test set. Thus lower is better. The lowest possible value of this measure is 0.0 and the highest (and worst) value is 1.0. As one can see in Table 1, none of the approaches do particularly well at this task. This underscores the difficulty of predicting natural language labels given gameplay frame video only. However, we note that our approach outperforms the baseline across all three of its largest clusters. All of the other baseline per-cluster approaches do strictly worse, and so we omit them from our analysis. \section{Example Output} Figure \ref{fig:examples} shows the predicted commentary and cosine similarity scores for three test images for our approach. We include the closest sentences from our training set to the predicted sentence encoding as novel commentary due to the limitations of the Universal Sentence Encoder \cite{cer2018universal}, but with another sentence embedding we could directly output novel commentary. The commentary represents the five closest neighbors to the model's actual predicted output. As one can see, there are repeats of predicted sentences across instances. This is because we are only retrieving commentary from within our training dataset, which may bias certain sentences due to their greater overall semantic similarity to other sentences.
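The retrieval step described above, ranking training sentences by cosine similarity to a predicted embedding, can be sketched as follows. This is a minimal sketch; the array and function names are illustrative, and in our setting the embeddings would be the 512-dimensional Universal Sentence Encoder vectors.

```python
import numpy as np

def top_k_comments(pred_emb, train_embs, train_sents, k=5):
    """Return the k training sentences whose embeddings are most
    cosine-similar to the predicted embedding, with their scores."""
    pred = pred_emb / np.linalg.norm(pred_emb)
    # cosine similarity of each training embedding against the prediction
    sims = (train_embs @ pred) / np.linalg.norm(train_embs, axis=1)
    order = np.argsort(-sims)[:k]  # indices of the k highest similarities
    return [(train_sents[i], float(sims[i])) for i in order]
```

Since every prediction is mapped back into the same fixed training pool, generic sentences that lie close to many embeddings can recur across test instances, consistent with the repeats noted above.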
The ordering and scores of the predictions vary for different test instances, indicating that the model did not just learn a single strategy. Although the commentary does not correlate well with the images shown, the generation of commentary is a promising advancement over prior work. \section{Conclusions and Future Work} In this paper we demonstrate an initial approach to generating Let's Play commentary for the game Minecraft. While the initial results are not particularly impressive, they outperform the original approach to this problem. We did not compare to the more recent Li et al. \shortcite{li2019end} as it was unavailable during our research, but we note that they represent the problem in a significantly different way, making a direct comparison non-trivial. Further, the results speak to the difficulty of this problem, which we anticipate being a fruitful area of future research. Our primary contributions are our dataset of Minecraft Let's Play frames and associated commentary, and the results and analysis presented in this paper. This work had a number of limitations, which we hope to address in future work. First, we acknowledge a limitation in the relative weakness of the results. We imagine two major reasons for this issue: (1) the model makes predictions without knowledge of previous utterances, and (2) the size of the training dataset. Thus we anticipate greater success by including the prior utterance as a sentence embedding as input to the model and by increasing the size of the training dataset. The second limitation we identify is in our choice to limit our output to a single game. While we acknowledge that this is helpful for an initial approach, an ideal system could take in any arbitrary gameplay video. Further, increasing the number of games we include would help us solve the training dataset size problem.
Nonetheless, generalizing to other types of games would itself present a unique challenge, since context and commentary are highly dependent on the rules and design of a particular game. Although solving this problem is nontrivial, in future work we hope to extend this project to other popular games for Let's Plays by abstracting lower-level details and focusing on higher-level themes shared across games. \section{Acknowledgements} This material is based upon work supported by the National Science Foundation under Grant No. IIS-1525967. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. \bibliographystyle{aaai}
\section{Introduction}\label{introduction} The class of asymptotically Euclidean manifolds introduced by Melrose \cite{megs,mesc} consists of $C^\infty$ compact manifolds $X$ with boundary $\partial X,$ equipped with a Riemannian metric that is $C^\infty$ in the interior of $X$ and singular at $\partial X,$ where it has an expansion \begin{gather} g= \frac{dx^2}{x^4}+ \frac{{\mathcal H}}{x^2}, \label{metric0} \end{gather} where $x$ is a defining function of $\partial X$ (that is, $x\in C^\infty(X),$ $x\geq 0,$ $x^{-1}(0)=\partial X,$ and $dx\not=0$ at $\partial X$), and ${\mathcal H}$ is a $C^\infty$ symmetric 2-tensor such that $h_0={\mathcal H}|_{\partial X}$ defines a metric on $\partial X.$ The motivation for this definition comes from the fact that in polar coordinates $(r,\omega)$ the Euclidean metric has the form $g_E=dr^2+r^2 d\omega^2,$ where $d\omega^2$ is the induced metric on ${\mathbb S}^{n-1}.$ If one then uses the compactification $x=\frac{1}{r},$ for $r>C,$ the metric $g_E$ takes the form $$g_E=\frac{dx^2}{x^4}+\frac{d\omega^2}{x^2}, \text{ near } \{x=0\}.$$ It was pointed out in \cite{megs} that any two boundary defining functions $x$ and $\tilde{x}$ for which \eqref{metric0} holds must satisfy $x-\tilde{x}=O(x^2),$ and hence ${\mathcal H}|_{\partial X}$ is uniquely determined by the metric $g.$ It was shown in \cite{js3} that, for fixed $h_0={\mathcal H}|_{\partial X},$ there exists a unique defining function $x$ near $\partial X$ such that \begin{gather} g= \frac{dx^2}{x^4}+ \frac{h(x)}{x^2}, \text{ in } (0,\varepsilon)\times \partial X, \label{metric} \end{gather} where $h(x)$ is a $C^\infty$ one-parameter family of metrics on $\partial X$ and $h(0)=h_0.$ We will consider solutions to the Cauchy problem for the wave equation, \begin{gather} \begin{gathered} \left(D_t^2-\Delta_g\right)u(t,z)=0 \text{ on } (0,\infty) \times X \\ u(0,z)=f_1(z), \;\ \partial_t u(0,z)=f_2(z), \end{gathered}\label{waveeq} \end{gather} where $\Delta_g$ is the (positive)
Laplace operator corresponding to the metric $g.$ The forward radiation field was defined by Friedlander \cite{fried0,fried1} as \begin{gather} {\mathcal R}_+(f_1,f_2)(s,y)= \lim_{x\rightarrow 0} x^{-\frac{n}{2}} D_t u(s+\frac{1}{x},x,y), \label{fradf} \end{gather} where $n$ is the dimension of $X.$ In the case of odd-dimensional Euclidean space, this is also known as the Lax-Phillips transform, and is given by \begin{gather*} {\mathcal R}_+(f_1,f_2)(s,\omega) = (2(2\pi))^{\frac{n-1}{2}} \left( D_s^{\frac{n+1}{2}} R f_1(s,-\omega)- D_s^{\frac{n-1}{2}} R f_2(s,-\omega)\right), \end{gather*} where $R$ is the Radon transform $Rf(s,\omega)=\int_{\langle x,\omega\rangle=s} f(x) \; d\mu(x),$ and $\mu(x)$ is the Lebesgue measure on the hyperplane $\langle x,\omega \rangle =s.$ The well known theorem of Helgason \cite{helgrt} states that if $f \in {\mathcal S}({\mathbb R}^n)$ (the class of Schwartz functions) and $Rf(s,\omega)=0$ for $s\geq \rho,$ then $f(z)=0$ for $|z|\geq \rho.$ One should notice that, since $Rf(-s,-\omega)=Rf(s,\omega),$ if $Rf(s,\omega)=0$ for $s\leq -\rho,$ then $Rf(s,\omega)=0$ for $s\geq \rho$ as well. Wiegerinck \cite{wieg} proved local versions of this result. More precisely, he proved that if $f\in C_0^\infty({\mathbb R}^n),$ then $f(z)=0$ on the set $$\{z\in {\mathbb R}^n: \langle z, \omega\rangle=s \text{ and } (s,\omega)\not\in \textrm{Supp} (Rf)\}.$$ Wiegerinck's proof relies very strongly on analyticity properties of the Fourier transform of functions in $C_0^\infty({\mathbb R}^n),$ and on the fact that the Fourier transform in the $s$ variable of $Rf(s,\omega)$ satisfies $\widehat{Rf}(\lambda,\omega)=\widehat{f}(\lambda\omega),$ where the right hand side is essentially the Fourier transform of $f$ in polar coordinates. Such a result is not likely to hold in more general situations.
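As a concrete illustration of the evenness identity $Rf(-s,-\omega)=Rf(s,\omega)$ and of how $\langle z,\omega\rangle$ enters such support statements, one can compute the Radon transform of a Gaussian in closed form. This worked example is ours and is not taken from the cited works:

```latex
% For f(z)=e^{-|z-c|^2}, write z=s\omega+z' with z'\in\omega^\perp and
% decompose c=\langle c,\omega\rangle\,\omega+c'. Then
% |z-c|^2=(s-\langle c,\omega\rangle)^2+|z'-c'|^2, so
\begin{gather*}
Rf(s,\omega)=\int_{\langle z,\omega\rangle=s} e^{-|z-c|^2}\,d\mu(z)
= e^{-(s-\langle c,\omega\rangle)^2}\int_{\omega^\perp} e^{-|z'-c'|^2}\,dz'
= \pi^{\frac{n-1}{2}}\, e^{-(s-\langle c,\omega\rangle)^2}.
\end{gather*}
% Replacing (s,\omega) by (-s,-\omega) leaves (s-\langle c,\omega\rangle)^2
% unchanged, so Rf(-s,-\omega)=Rf(s,\omega); moreover Rf(s,\omega) is small
% precisely when s is far from \langle c,\omega\rangle, the quantity that
% controls the support statements discussed here.
```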
Here we will prove the following \begin{thm}\label{main} Let $(X,g)$ be an asymptotically Euclidean manifold, let $\Omega \subset \partial X$ be an open subset, and let $f\in C_0^\infty(\overset{\circ}{X}).$ Let $\varepsilon>0$ be such that \eqref{metric} holds on $(0,\varepsilon)\times \partial X$ and let $\bar{\varepsilon}=\min\{\varepsilon,-\frac{1}{s_0}\}.$ Then ${\mathcal R}_+(0,f)(s,y)=0$ for $s\leq s_0<0$ and $y \in \Omega,$ if and only if for every $(x,y),$ $x\in(0,\bar{\varepsilon}),$ and $y\in \Omega,$ \begin{gather} d_g((x,y), \textrm{Supp} f) \geq s_0+ \frac{1}{x}. \label{distsup} \end{gather} \end{thm} In the case where $\Omega=\partial X,$ this result was proved in \cite{sberf}. In the case of radial solutions of semilinear wave equations $\Box u=f(u)$ in ${\mathbb R}\times {\mathbb R}^3,$ with critical non-linearities, and $\Omega={\mathbb S}^{n-1},$ a similar result was proved in \cite{basa}. In the case of asymptotically hyperbolic manifolds, results of this nature have been proved in \cite{guilsa,hosa,sbhrf}. In Euclidean space, the polar distance $r=\frac{1}{x},$ and hence \eqref{distsup} implies that if $z\in \textrm{Supp}(f),$ then for every $p,$ such that $p=r\omega,$ $\omega \in \Omega,$ and $|p|>|s_0|,$ \begin{gather*} |z-p|\geq |p|-|s_0|. \end{gather*} In particular this implies that \begin{gather*} |z|^2-2r\langle z, \omega\rangle \geq |s_0|^2-2r |s_0|, \;\ r>|s_0|, \;\ \omega \in \Omega. \end{gather*} If we let $r\rightarrow \infty,$ it follows that if $z\in \textrm{Supp}(f)$ then $\langle z, \omega \rangle \leq |s_0|.$ See Fig.
\ref{fig4} \begin{figure} \scalebox{.7} { \begin{pspicture}(0,-1.23)(13.16,1.89) \psline[linewidth=0.04cm](7.24,1.29)(11.72,-1.21) \psline[linewidth=0.04cm](5.4924355,1.2975421)(1.3675644,-1.1575421) \usefont{T1}{ptm}{m}{n} \rput(6.31,1.695){$\Omega$} \usefont{T1}{ptm}{m}{n} \rput(10.94,0.675){$\langle z, \omega\rangle\geq |s_0|$} \usefont{T1}{ptm}{m}{n} \rput(2.18,0.715){$\langle z, \omega\rangle\geq |s_0|$} \psarc[linewidth=0.04](6.35,-0.16){1.69}{57.36249}{121.551384} \usefont{T1}{ptm}{m}{n} \rput(6.61,0.755){$|s_0|$} \usefont{T1}{ptm}{m}{n} \rput(8.53,1.195){$f=0$} \usefont{T1}{ptm}{m}{n} \rput(4.37,1.235){$f$=0} \usefont{T1}{ptm}{m}{n} \rput(11.26,0.275){$\omega\in \Omega$} \usefont{T1}{ptm}{m}{n} \rput(1.92,0.315){$\omega\in \Omega$} \rput{-180.05002}(12.640708,1.0968579){\psarc[linewidth=0.04,linestyle=dashed,dash=0.16cm 0.16cm](6.3201146,0.5511877){1.0810385}{315.1605}{236.88087}} \psline[linewidth=0.04,linestyle=dashed,dash=0.16cm 0.16cm](6.98,1.37)(6.34,0.37)(5.58,1.33)(5.58,1.31) \end{pspicture} } \caption{If $f\in C_0^\infty({\mathbb R}^n)$ and ${\mathcal R}(0,f)(s,\omega)=0$ for $s\leq s_0<0$ and $\omega \in \Omega\subset {\mathbb S}^{n-1},$ then $f(z)=0$ if $\langle z, \omega \rangle \geq |s_0|$ for all $\omega\in \Omega.$} \label{fig4} \end{figure} This result can be rephrased in terms of the sojourn times for geodesics in ${\mathbb R}^n.$ Let $\omega \in {\mathbb S}^{n-1}$ and $\gamma_{z,\omega}(t)=z+t \omega$ be a geodesic starting at a point $z\in {\mathbb R}^n$ in the direction of the unit vector $\omega.$ The sojourn time along $\gamma_{z,\omega}$ is defined to be $$S(z,\omega)=\lim_{t\rightarrow \infty} (t-|\gamma_{z,\omega}(t)|),$$ see for example \cite{sawu}. But \begin{gather*} t-|\gamma(t)|=t-\left(|z|^2+t^2+2t \langle z, \omega\rangle \right)^ {\frac{1}{2}}= t-t\left( 1+\frac{2}{t} \langle z, \omega \rangle + \frac{|z|^2}{t^2}\right)^ {\frac{1}{2}}= -\langle z,\omega\rangle + O(t^{-1}). 
\end{gather*} So Theorem \ref{main} says that if $z\in \textrm{Supp}(f)$ and $\omega\in \Omega,$ then $S(z,\omega)\geq s_0$ (i.e.\ $\langle z, \omega\rangle \leq |s_0|$). The connection between sojourn times and scattering theory is well known, see for example \cite{guil}. Sojourn times on asymptotically Euclidean manifolds were studied in \cite{sawu}. If $(X,g)$ is an asymptotically Euclidean manifold, $z\in \overset{\circ}{X}$ and $\gamma(t)$ is a geodesic parametrized by arc-length such that $\gamma(0)=z$ and $\lim_{t\rightarrow \infty}\gamma(t)=y\in \partial X,$ the sojourn time along $\gamma$ is defined by \begin{gather*} S(z,\gamma)=\lim_{t\rightarrow \infty} (t-\frac{1}{x(\gamma(t))}), \end{gather*} where $x$ is a boundary defining function as in \eqref{metric}. We obtain the following result from Theorem \ref{main}: \begin{cor}\label{conse} Let $f\in C_0^\infty(\overset{\circ}{X})$ and let $\Omega\subset \partial X$ be an open subset. Suppose that ${\mathcal R}(0,f)(s,y)=0$ for every $s\leq s_0<0$ and $y \in \Omega.$ If $z\in \overset{\circ}{X}$ is such that there exist $y \in \Omega$ and a geodesic $\gamma$ parametrized by arc-length such that $\gamma(0)=z,$ $\lim_{t\rightarrow \infty}\gamma(t)=y,$ and $S(z,\gamma)< s_0,$ then $f(z)=0.$ \end{cor} \begin{proof} If $z$ and $\gamma(t)$ are as in the hypothesis, then, since $t$ is the arc-length parameter, $ d(z,\gamma(t))\leq t.$ If $S(z,\gamma)<s_0,$ then there exists $T>0$ such that $ t-\frac{1}{x(\gamma(t))}<s_0$ for $t>T.$ If $T$ is large enough, $\gamma(t)\in (0,\varepsilon) \times \Omega$ for $t>T,$ and \begin{gather*} d(z,\gamma(t))\leq t < s_0+\frac{1}{x(\gamma(t))}, \end{gather*} thus $z\not\in \textrm{Supp}(f),$ and hence $f(z)=0.$ \end{proof} \section{The proof of Theorem \ref{main}} Suppose that $f\in C_0^\infty(\overset{\circ}{X})$ and \eqref{distsup} holds for $x\in(0,\varepsilon)$ and $y \in \Omega.$ Let $u$ be the solution of \eqref{waveeq} with initial data $(0,f),$ and let
$v(x,s,y)=x^{-\frac{n}{2}} u(s+\frac{1}{x},x,y).$ By finite speed of propagation, \begin{gather*} u(t,(x,y))=0 \text{ if } t\leq d_g((x,y),\textrm{Supp}(f)). \end{gather*} This implies that \begin{gather*} v(x,s,y)=0 \text{ if } s\leq d_g((x,y),\textrm{Supp}(f))-\frac{1}{x}. \end{gather*} If $d_g((x,y),\textrm{Supp}(f))-\frac{1}{x}\geq s_0,$ then $v(x,s,y)=0$ if $x\in(0,\varepsilon),$ $y\in \Omega$ and $s\leq s_0.$ In particular, ${\mathcal R}(0,f)(s,y)=0$ if $s\leq s_0$ and $y \in \Omega.$ The converse is much harder to prove. Since $f\in C_0^\infty(\overset{\circ}{X}),$ there exists $x_0\in (0,\varepsilon)$ such that $\textrm{Supp}(f) \subset \{ x\geq x_0\}.$ If $-\frac{1}{x_0}>s_0,$ the result is obvious. Indeed, if $x<x_0,$ then $d((x,y), \textrm{Supp}(f))\geq d((x,y), (x_0,y))=\frac{1}{x}-\frac{1}{x_0}> \frac{1}{x}+s_0,$ and then, by the definition of support, $f(z)=0$ if there exists $(x,y)$ such that $d(z, (x,y))\leq s_0+\frac{1}{x}.$ So we will assume from now on that $\textrm{Supp} (f) \subset \{x\geq x_0\},$ and $-\frac{1}{x_0}<s_0,$ see Fig. \ref{fig0}.
\begin{figure} \scalebox{.6} { \begin{pspicture}(0,-4.87)(10.378055,4.89) \definecolor{color264g}{rgb}{0.8,0.8,0.8} \psline[linewidth=0.04cm](1.6780548,4.03)(1.6980548,-4.85) \psline[linewidth=0.04cm](1.6780548,1.09)(9.678055,1.13) \psbezier[linewidth=0.04](2.2380548,-4.17)(2.2180548,-3.05)(3.9588823,-1.4511017)(4.8380547,-0.87)(5.7172275,-0.2888983)(8.298055,0.69)(8.958055,0.53) \usefont{T1}{ptm}{m}{n} \rput(10.068055,1.035){$x$} \usefont{T1}{ptm}{m}{n} \rput(2.1280549,4.695){$s$} \usefont{T1}{ptm}{m}{n} \rput(1.2680548,0.555){$s_0$} \usefont{T1}{ptm}{m}{n} \rput(3.3080547,1.335){$x_0$} \psline[linewidth=0.04cm,linestyle=dashed,dash=0.16cm 0.16cm](1.6780548,0.31)(7.678055,0.33) \psline[linewidth=0.04cm,linestyle=dashed,dash=0.16cm 0.16cm](3.0180547,1.09)(3.0380547,-2.51) \psline[linewidth=0.04cm,linestyle=dashed,dash=0.16cm 0.16cm](1.6580548,-2.47)(3.0580547,-2.47) \usefont{T1}{ptm}{m}{n} \rput(3.2580547,-3.285){$v=0$} \usefont{T1}{ptm}{m}{n} \rput(4.728055,-3.685){by finite speed of propagation} \usefont{T1}{ptm}{m}{n} \rput{88.99379}(-0.98454493,-1.2780112){\rput(0.1580548,-1.145){$v=0$}} \psline[linewidth=0.04,fillstyle=gradient,gradlines=2000,gradbegin=color264g,gradend=color264g,gradmidpoint=1.0](0.6380548,0.31)(0.65805477,-4.11)(1.7180548,-4.11)(1.6980548,0.33)(0.6380548,0.33)(0.65805477,0.31) \psline[linewidth=0.04,fillstyle=gradient,gradlines=2000,gradbegin=color264g,gradend=color264g,gradmidpoint=1.0](1.7380548,-2.49)(3.0580547,-2.47)(2.6180549,-3.07)(2.318055,-3.67)(2.2380548,-4.11)(1.6980548,-4.11) \end{pspicture} } \caption{$v(x,s,y)=0$ for $-\frac{1}{x}<s\leq -\frac{1}{x_0}$ by finite speed of propagation, and for $y\in \Omega,$ $v=0$ for $x\leq 0$ and $s\leq s_0$ because ${\mathcal R}(0,f)=0$ for $s\leq s_0$.} \label{fig0} \end{figure} The one-parameter family of metrics $h(x),$ $x\in [0,\varepsilon],$ has a (non-unique) $C^\infty$ extension to $[-\varepsilon,\varepsilon],$ and Friedlander proved that, for a fixed extension of $h,$ $v(x,s,y)=x^{-\frac{n}{2}}
u(s+\frac{1}{x},x,y)$ has a unique extension to $C^\infty([-\varepsilon,\varepsilon] \times {\mathbb R} \times \partial X)$ which satisfies \begin{gather} \begin{gathered} Pv=0 \text{ for } s>-\frac{1}{x} \\ v(x,-\frac{1}{x},y)=0, \;\ \partial_s v(x, -\frac{1}{x},y)= x^{-\frac{n}{2}} f(x,y), \end{gathered}\label{waveeq1} \end{gather} where $P$ is the wave operator written in coordinates $(x,s,y),$ with $s=t-\frac{1}{x},$ which is \begin{gather*} P=D_x(2D_s+x^2 D_x) + \Delta_{h} +iA(D_s+x^2D_x) +B, \\ A=\frac{1}{\sqrt{|h|}} \partial_x \sqrt{|h|} , \;\ B= \frac{n-1}{2}(\frac{3-n}{2}+xA), \end{gather*} $|h|$ is the volume element of the metric $h$ and $\Delta_{h}$ is the (positive) Laplacian with respect to $h.$ By finite speed of propagation, $v=0$ if $s\leq -\frac{1}{x_0},$ and the formal power series argument carried out in section 4 of \cite{sberf} shows that $\partial_x^k v(0,s,y)=0$ for $k=0,1,2,...,$ provided $s< s_0$ and $y\in \Omega.$ This implies that \begin{gather} \begin{gathered} v(x,s,y)=0 \text{ if } x<0, \;\ s<s_0 ,\;\ y \in \Omega, \\ v(x,s,y)=0 \text{ if } s\leq -\frac{1}{x_0}, \;\ s>-\frac{1}{x}, \;\ 0<x<\varepsilon, \end{gathered}\label{vanv} \end{gather} see Fig. \ref{fig0}. We begin by proving \begin{lemma}\label{lemma1} Let $v(x,s,y)$ satisfy \eqref{waveeq1} and \eqref{vanv} for $x_0\in (0,\varepsilon)$ and $-\frac{1}{x_0}<s_0<0.$ Let $y_0\in \Omega$ and suppose that $\{y:|y-y_0|<r\}\subset \Omega.$ Let $N$ be such that $s_0+\frac{1}{x_0}<\frac{N}{4}.$ There exists $\delta>0,$ depending on $r$ and on derivatives up to order two of the tensor $h(x),$ $x\in [-\varepsilon,\varepsilon],$ such that $v=0$ on the set \begin{gather} \left\{x<\frac{\delta}{3N}(s_0+\frac{1}{x_0}) , \; |y-y_0|<\left(\frac{\delta^ {\frac{1}{2}}}{3N}(s_0+\frac{1}{x_0}) \right)^ {\frac{1}{2}}, \;\ -\frac{1}{x}<s< -\frac{1}{x_0}+\frac{1}{3N}(s_0+\frac{1}{x_0})\right\}. 
\label{vanv1} \end{gather} \end{lemma} \begin{proof} We should point out that the bound on $|s+\frac{1}{x_0}|$ does not depend on $\delta$ or $r$; this is due to the fact that the coefficients of the operator $P$ do not depend on $s.$ Let $(\xi,\sigma,\eta)$ denote dual local coordinates to $(x,s,y).$ The principal symbol of $P$ is \begin{gather} p=2\xi\sigma + x^2 \xi^2+h, \label{symbol} \end{gather} and the Hamilton vector field of $p$ is equal to \begin{gather*} H_p= 2(\sigma+x^2\xi)\partial_x + 2\xi \partial_s -(2x\xi^2+\partial_x h) \partial_\xi+ \sum_{j=1}^n (\partial_{\eta_j} h \partial_{y_j}- \partial_{y_j} h \partial_{\eta_j}). \end{gather*} Suppose that $y_0=0\in \Omega$ and let $y$ be local coordinates valid in $\{|y|<r\}\subset \Omega.$ Let \begin{gather*} \varphi= -2x-2\delta(s-a) -x(s-a) - \delta^ {\frac{1}{2}} |y|^2 , \text{ where } a=-\frac{1}{x_0}, \text{ and } \\ \widetilde{\varphi}= -x-\delta(s-a)-x(s-a). \end{gather*} Then \begin{gather} \begin{gathered} v=0 \text{ on the set } \{ \widetilde{\varphi}>0\} \cap\left( \{x\leq 0, s\leq s_0, \; |y|<r \} \cup \{ -\frac{1}{x} <s<-\frac{1}{x_0}, \;\ 0<x< x_0, \; |y|<r \} \right), \end{gathered}\label{vanishtil} \end{gather} see Fig. \ref{fig01}.
\begin{figure} \scalebox{.7} { \begin{pspicture}(0,-5.06)(11.52,5.08) \definecolor{color512g}{rgb}{0.8,0.8,0.8} \psline[linewidth=0.04cm](3.08,-4.68)(3.02,5.0) \psline[linewidth=0.04cm](0.0,1.74)(10.86,1.78) \psbezier[linewidth=0.04](4.04,-5.04)(4.106109,-3.8307755)(6.02,-1.18)(6.7089453,-0.52889794)(7.397891,0.12220408)(8.774181,1.0608163)(9.74,1.18) \usefont{T1}{ptm}{m}{n} \rput(11.21,1.625){$x$} \usefont{T1}{ptm}{m}{n} \rput(3.63,4.885){$s$} \usefont{T1}{ptm}{m}{n} \rput(2.63,0.625){$s_0$} \usefont{T1}{ptm}{m}{n} \rput(2.86,-3.115){$a$} \usefont{T1}{ptm}{m}{n} \rput(9.26,0.245){$s=-\frac{1}{x}$} \psline[linewidth=0.04cm,linestyle=dashed,dash=0.16cm 0.16cm](5.24,-1.24)(5.24,-1.22) \usefont{T1}{ptm}{m}{n} \rput(5.27,1.945){$x_0$} \psline[linewidth=0.04cm,linestyle=dashed,dash=0.16cm 0.16cm](3.08,-2.26)(3.06,-2.26) \psline[linewidth=0.04cm,linestyle=dashed,dash=0.16cm 0.16cm](3.1,0.6)(8.48,0.62) \usefont{T1}{ptm}{m}{n} \rput(1.54,-1.875){$\varphi>0$} \psline[linewidth=0.04cm,linestyle=dashed,dash=0.16cm 0.16cm](3.04,-3.06)(4.24,-3.06) \psline[linewidth=0.04cm,linestyle=dashed,dash=0.16cm 0.16cm](3.6,-1.26)(3.62,-1.26) \psline[linewidth=0.04cm,linestyle=dashed,dash=0.16cm 0.16cm](3.96,-3.08)(3.96,1.78) \usefont{T1}{ptm}{m}{n} \rput{86.89689}(-1.332562,-6.45324){\rput(2.74,-3.935){$v=0$}} \psline[linewidth=0.04cm,linestyle=dashed,dash=0.16cm 0.16cm](4.24,-3.06)(4.8,-3.08) \psline[linewidth=0.04cm,linestyle=dashed,dash=0.16cm 0.16cm](4.8,-3.1)(4.98,1.84) \usefont{T1}{ptm}{m}{n} \rput{89.40736}(4.026995,-6.168865){\rput(5.13,-1.035){$\textrm{Supp}([P,\chi] v$}} \usefont{T1}{ptm}{m}{n} \rput(3.82,-3.435){$v=0$} \usefont{T1}{ptm}{m}{n} \rput{89.34301}(2.119077,-3.243516){\rput(2.7,-0.555){$v=0$}} \psline[linewidth=0.04cm,linestyle=dashed,dash=0.16cm 0.16cm](3.1,-4.5)(8.06,-4.5) \psbezier[linewidth=0.04](1.82,1.64)(1.82,0.92)(2.7532682,-1.7980325)(3.1,-3.06)(3.4467318,-4.3219676)(6.72,-4.22)(7.68,-4.3) \usefont{T1}{ptm}{m}{n} \rput(2.83,-4.535){$a-1$} 
\psbezier[linewidth=0.04](2.44,1.78)(2.4,1.04)(3.2932682,-1.8180325)(3.84,-2.84)(4.3867316,-3.8619676)(7.56,-3.66)(8.5,-3.72) \psdots[dotsize=0.12](3.1,-0.86) \usefont{T1}{ptm}{m}{n} \rput(7.53,-2.875){$\varphi\geq -\frac{\delta}{16}$} \psline[linewidth=0.04,linestyle=dashed,dash=0.16cm 0.16cm,fillstyle=gradient,gradlines=2000,gradbegin=color512g,gradend=color512g,gradmidpoint=1.0](3.04,-0.28)(4.88,-0.26)(4.88,-0.9)(3.08,-0.9)(3.06,-0.92) \psline[linewidth=0.04,fillstyle=gradient,gradlines=2000,gradbegin=color512g,gradend=color512g,gradmidpoint=1.0](3.96,-0.28)(3.96,-3.08)(4.84,-3.08)(4.88,-0.26)(3.04,-0.26)(3.06,-0.96)(3.08,-0.88)(3.04,-0.9)(3.94,-0.9)(3.96,-0.92) \end{pspicture} } \caption{$v(x,s,y)=0$ for $\widetilde{\varphi}>0$ in a neighborhood of $x=0,$ $s=a$ and $y=0.$} \label{fig01} \end{figure} We also have \begin{gather} \begin{gathered} p(x,s,y,d\varphi)= 2(2\delta+x)(2+s-a)+x^2(2+s-a)^2+h(x,y,d_y \varphi)> 2\delta \\ \text{ provided } |s-a|< 1, \;\ |x|< \delta. \end{gathered}\label{nonchar} \end{gather} and \begin{gather} \begin{gathered} H_p \varphi =-2(\sigma+x^2\xi)(2+s-a)-2\xi(2\delta+x) -\delta^ {\frac{1}{2}} H_h|y|^2, \\ H_p^2 \varphi=-8\xi(\sigma+x^2\xi)(1+x(2+s-a))+4x(2\delta+x+x^2(2+s-a))\xi^2 \\ -2\delta^ {\frac{1}{2}}(\sigma+x^2\xi)\partial_x H_h |y|^2- 2(2\delta+x+x^2(2+s-a))\partial_x h-\delta^ {\frac{1}{2}} H_h^2 |y|^2. \end{gathered}\label{hpfi} \end{gather} If $p=H_p\varphi=0,$ then \begin{gather*} h=\left(\frac{2(2\delta+x)}{2+s-a}+x^2\right)\xi^2+\frac{\delta^ {\frac{1}{2}}}{2+s-a} \xi H_h |y|^2, \end{gather*} and hence \begin{gather*} h+\frac{1}{2+s-a} (H_h |y|^2)^2 \geq \left(\frac{3\delta+2x}{2+s-a}+ x^2\right) \xi^2. \end{gather*} If $|s-a|<1,$ $|x|<\delta$ and $C>0$ is such that \begin{gather*} (H_h |y|^2)^2\leq C h, \end{gather*} then \begin{gather} h\geq C \delta \xi^2.
\label{boundh} \end{gather} Here, and from now on, $C>0$ denotes a constant which depends on the metric $h(x),$ $x\in [-\varepsilon,\varepsilon].$ If $p=0,$ then $2(\sigma+x^2\xi)\xi=-h +x^2\xi^2,$ and we deduce from \eqref{hpfi} that \begin{gather*} H_p^2\varphi=4(1+x(2+s-a))h +\frac{2\delta}{2+s-a} (H_h |y|^2)(\partial_x H_h |y|^2)-\delta^ {\frac{1}{2}} H_h^2 |y|^2\\ -2(2\delta+x+x^2(2+s-a)) \partial_x h + \frac{2(2\delta+x)}{2+s-a} \xi \partial_x H_h |y|^2 + 8\delta x \xi^2, \end{gather*} and if $\delta<\frac{1}{10},$ \begin{gather*} H_p^2\varphi\geq 3h- 100 \delta^ {\frac{1}{2}}( (\partial_x H_h|y|^2)^2+ (H_h|y|^2)^2+ |\partial_x h|+ |H_h^2|y|^2|)-20 \delta^2 \xi^2. \end{gather*} We can pick $\delta_0$ such that \begin{gather} 3h- 100 \delta^ {\frac{1}{2}}( (\partial_x H_h|y|^2)^2+ (H_h|y|^2)^2+ |\partial_x h|+ |H_h^2|y|^2|)>h, \text{ if } \delta<\delta_0. \label{boundh0} \end{gather} Then we can use \eqref{boundh} to conclude that if $|x|<\delta$ and $\delta<\delta_0,$ \begin{gather} H_p^2 \varphi\geq h-20\delta^2\xi^2\geq {\frac{1}{2}} h + C\delta \xi^2. \label{boundh2} \end{gather} Hence we conclude that if $|x|<\delta<\delta_0$ and $|s-a|<1,$ then \begin{gather*} p(d\varphi)> \delta \text{ and if } p=H_p \varphi=0 \Rightarrow H_p^2 \varphi>0 \text{ provided } (\xi,\sigma,\eta)\not=0. \end{gather*} So the level surfaces of $\varphi$ are strongly pseudoconvex with respect to $P$ in the region \begin{gather} U=\{ |x|< \delta, \; |s-a|<\bar{s} , \; |y|< r \}, \;\ \bar{s}=\min\{1, s_0-a\} \label{regionu} \end{gather} and therefore it follows from Theorem 28.2.3 and Proposition 28.3.3 of \cite{hormander4} that if $Y\subset\subset U$ and $\lambda>0$ and $K>0$ are large enough, then for $\psi=e^{\lambda \varphi},$ \begin{gather} \begin{gathered} \sum_{|\alpha|<2} \tau^{2(2-|\alpha|)-1} \int |D^\alpha w|^2 e^{2\tau \psi} dxdsdy \leq \\ K(1+C\tau^{- {\frac{1}{2}}}) \int |Pw|^2 e^{2\tau \psi} dxdsdy, \;\ w\in C_0^\infty(Y), \;\ \tau>1.
\end{gathered}\label{carle} \end{gather} Let \begin{gather*} U_\gamma= \{ |x|<\gamma \delta, \; |s-a|< \gamma\bar{s} , \; |y|< \gamma r \}, \end{gather*} and $\chi(x,s,y) \in C_0^\infty$ be such that $\chi=1$ on $U_\frac{1}{4}$ and $\chi=0$ outside $U_ {\frac{1}{2}}.$ Therefore \begin{gather} \textrm{Supp}[P,\chi] \subset \overline{U_ {\frac{1}{2}}\setminus U_\frac{1}{4}}.\label{supcom} \end{gather} On the other hand, $v=0$ if $\widetilde{\varphi}>0,$ and $|x|<\delta,$ $|s-a|<\bar{s},$ and $|y|<r,$ and \begin{gather*} \varphi= \widetilde{\varphi}-(x+\delta(s-a)+\delta^ {\frac{1}{2}} |y|^2), \end{gather*} so we conclude that \begin{gather*} \varphi\leq -(x+\delta(s-a)+\delta^ {\frac{1}{2}} |y|^2) \text{ on the support of } v. \end{gather*} So we deduce from \eqref{supcom} that, provided that $\delta^ {\frac{1}{2}} <\frac{r^2}{4},$ and $N$ is such that $\frac{s_0-a}{N}<\frac{1}{4},$ \begin{gather*} \varphi \leq - \min_{\overline{U_ {\frac{1}{2}}\setminus U_{\frac{1}{4}}}} (x+\delta(s-a)+\delta^ {\frac{1}{2}} |y|^2)=-\frac{\delta(s_0-a)}{N}. \end{gather*} So we conclude that \begin{gather*} \textrm{Supp} [P,\chi] v \subset \{ \varphi \leq -\frac{\delta(s_0-a)}{N}\}, \end{gather*} and hence we deduce from \eqref{carle} applied to $w=\chi v,$ and the fact that $P\chi v= \chi P v+[P,\chi ] v=[P,\chi] v$ that \begin{gather*} \sum_{|\alpha|<2} \tau^{2(2-|\alpha|)-1} \int |D^\alpha \chi v|^2 e^{2\tau \psi} dxdsdy \leq C e^{2\tau e^{-\lambda \frac{\delta(s_0-a)}{N}}} \end{gather*} and we conclude that $\chi v=0$ if $\varphi\geq -\frac{\delta(s_0-a)}{N}.$ Notice that $\varphi\geq -\frac{\delta(s_0-a)}{3N}$ corresponds to the set \begin{gather*} x+\delta(s-a)+\delta^ {\frac{1}{2}} |y|^2<\frac{\delta(s_0-a)}{N} \end{gather*} and since $v=0$ in $\{x<0, \;\ s<s_0\} \cup \{-\frac{1}{x}<s< a\},$ we deduce that $v=0$ on the set \begin{gather*} \{|x|< \frac{ \delta(s_0-a)}{3N}, \;\ |s-a|\leq \frac{s_0-a}{3N}, \; |y|^2\leq \frac{\delta^ {\frac{1}{2}}(s_0-a)}{3N}\}.
\end{gather*} \end{proof} The next step in the proof is the following lemma: \begin{lemma}\label{lemma2} Let $\Omega \subset \partial X$ be an open subset and let $u(t,z)$ be a solution to \eqref{waveeq} with initial data $(0,f).$ Suppose that \eqref{metric} holds in $(0,\varepsilon),$ and let $v(x,s,y)=x^{-\frac{n}{2}} u(s+\frac{1}{x},x,y).$ Suppose that $v\in C^\infty([0,\varepsilon)\times {\mathbb R} \times \partial X)$ and that $v=0$ on the set \begin{gather*} \{-\frac{1}{x}< s<a, \; x\leq x_0=-\frac{1}{a}, \;\ y\in \Omega\} \cup \{x<0, \;\ s<s_0, \;\ y\in \Omega \}. \end{gather*} Then $v=0$ on the set $\{-\frac{1}{x}<s<s_0, \;\ x<\min\{\varepsilon,-\frac{1}{s_0}\}, \;\ y\in \Omega\}.$ See Fig. \ref{fig2}. \end{lemma} \begin{proof} The main ingredients of the proof of this result are Lemma \ref{lemma1} and the following result of Tataru \cite{tataru,tataru1}: If $u(t,z)$ is a $C^\infty$ function that satisfies \begin{gather*} \begin{gathered} (D_t^2-\Delta_g +L(z,D_z))u=0 \text{ in } (-\widetilde{T}, \widetilde{T}) \times \Omega, \\ u(t,z)=0 \text{ in a neighborhood of } \{z_0\} \times (-T,T), \;\ T<\widetilde{T}, \end{gathered} \end{gather*} where $\Omega\subset {\mathbb R}^n,$ $g$ is a $C^\infty$ Riemannian metric and $L$ is a first order $C^\infty$ operator (that does not depend on $t$), then \begin{gather} u(t,z)=0 \text{ if } |t|+ d_g(z,z_0)<T, \label{tatres} \end{gather} where $d_g$ is the distance measured with respect to the metric $g.$ Let \begin{gather} a_0=a, \;\ a_1= a_0+ \frac{1}{3N}(s_0-a_0) \text{ and } a_j=a_{j-1} + \frac{1}{3N}(s_0-a_{j-1}).\label{sequence} \end{gather} We know from Lemma \ref{lemma1} that for each $y_0\in \Omega,$ there exists $\delta>0$ such that $v(x,s,y)=0$ if $x<C\delta,$ $|y-y_0|<C\delta$ and $s<a_1=a+\frac{s_0-a}{3N}.$ In particular for any $\alpha \in (0,C\delta ),$ $v(\alpha,s,y)=0$ in a neighborhood of the segment \begin{gather*} x=\alpha, \; -\frac{1}{\alpha} < s< a_1, \; y \in \{|y-y_0|<C\delta\}.
\end{gather*} Since $t=s+\frac{1}{\alpha},$ this implies that $u(t,x,y)= x^{\frac{n}{2}} v(x,t-\frac{1}{x},y)=0$ in a neighborhood of the segment \begin{gather*} x=\alpha, \; 0< t <a_1+\frac{1}{\alpha}, \;\ y \text{ such that } |y-y_0|< C\delta. \end{gather*} But since the initial data is of the form $(0,f),$ $u(t,z)=-u(-t,z),$ and hence $u(t,z)=0$ in a neighborhood of \begin{gather*} x=\alpha, \;\ -a_1 -\frac{1}{\alpha}< t< \frac{1}{\alpha} +a_1, \;\ y \text{ such that } |y-y_0|< C\delta . \end{gather*} From \eqref{tatres} we obtain \begin{gather*} u(t,z)=0 \text{ if } |t|+ d_g(z, (\alpha,y) )<a_1 + \frac{1}{\alpha}. \label{tatres1} \end{gather*} If one picks $z=(x,y),$ with $\varepsilon> x>\alpha,$ then $d_g(z,(\alpha,y))= \frac{1}{\alpha}-\frac{1}{x},$ and hence in particular, \begin{gather*} u(t,x,y)=0 \text{ if } 0< t<\frac{1}{x}+a_1, \;\ x<\min\{-\frac{1}{a_1},\varepsilon\}, \;\ |y-y_0|< C\delta . \label{tatres2} \end{gather*} Since $y_0$ is arbitrary, this implies that \begin{gather*} v(x,s,y)=0 \text{ on the set } \{-\frac{1}{x}<s<a_1, \;\ x<\min\{-\frac{1}{a_1}, \varepsilon\} , \;\ y\in \Omega\}. \end{gather*} Applying the argument above $j$ times we obtain \begin{gather*} v(x,s,y)=0 \text{ on the set } \{x\leq 0, s\leq s_0, y \in \Omega\} \cup \{ x\leq \min\{\varepsilon,-\frac{1}{a_j}\} , \;\ -\frac{1}{x} < s \leq a_j, \; y \in \Omega\}. \end{gather*} The sequence \eqref{sequence} is increasing and $a_j< s_0.$ Let $L=\lim_{j\rightarrow\infty} a_j.$ Then from the definition of $a_j$ it follows that $\frac{1}{3N}(s_0-L)=0,$ and so $L=s_0.$ Since $v\in C^\infty$ it follows that \begin{gather*} v(x,s,y)=0 \text{ on the set } \{x\leq 0, s\leq s_0, y \in \Omega\} \cup \{ x\leq \min\{\varepsilon,-\frac{1}{s_0}\}, \;\ -\frac{1}{x} < s\leq s_0, \; y \in \Omega\}. \end{gather*} This proves the Lemma.
\end{proof} \begin{figure} \scalebox{.6} { \begin{pspicture}(0,-4.58)(10.98,4.6) \definecolor{color751g}{rgb}{0.8,0.8,0.8} \psline[linewidth=0.04cm](0.96,4.44)(0.96,-4.46) \psline[linewidth=0.04cm](0.98,0.8)(10.54,0.84) \usefont{T1}{ptm}{m}{n} \rput(1.48,-4.055){$v=0$} \usefont{T1}{ptm}{m}{n} \rput(10.67,0.645){$x$} \usefont{T1}{ptm}{m}{n} \rput(1.39,4.405){$s$} \usefont{T1}{ptm}{m}{n} \rput(0.6,-3.635){$a$} \psline[linewidth=0.04cm,linestyle=dashed,dash=0.16cm 0.16cm](2.78,-2.4)(2.8,0.86) \usefont{T1}{ptm}{m}{n} \rput(2.87,1.045){$\varepsilon$} \usefont{T1}{ptm}{m}{n} \rput(0.43,0.0050){$s_0$} \psbezier[linewidth=0.04](1.36,-4.56)(1.74,-3.58)(2.976013,-1.9763635)(4.34,-1.1)(5.703987,-0.22363645)(8.84,0.32)(9.66,0.2) \usefont{T1}{ptm}{m}{n} \rput{90.0196}(0.83104706,-5.290763){\rput(3.06,-2.215){$v=0$ by unique continuation}} \usefont{T1}{ptm}{m}{n} \rput{87.40695}(-1.2388291,-2.4837968){\rput(0.68,-1.895){$v=0$}} \psline[linewidth=0.04,fillstyle=gradient,gradlines=2000,gradbegin=color751g,gradend=color751g,gradmidpoint=1.0](0.98,0.02)(2.8,-0.02)(2.8,-2.42)(1.86,-3.62)(0.98,-3.6) \end{pspicture} } \caption{The second step of the unique continuation across the wedge $\{-\frac{1}{x}<s< -\frac{1}{x_0},y\in \Omega\} \cup \{x< 0,s< s_0, y\in \Omega\}$ } \label{fig2} \end{figure} Now we can finish the proof of Theorem \ref{main}. Suppose $\textrm{Supp}(f) \subset \{x>x_0\}$ and that ${\mathcal R}_+(0,f)(s,y)=0,$ if $s<s_0$ and $y\in \Omega.$ Then $v$ extends as a solution to \eqref{waveeq1} satisfying \eqref{vanv}. Then Lemma \ref{lemma1} and Lemma \ref{lemma2} imply that \begin{gather*} v=0 \text{ in the set } \{x<\min\{\varepsilon,-\frac{1}{s_0}\}, \;\ -\frac{1}{x}<s <s_0, \;\ y \in \Omega\}. 
\end{gather*} As in the proof of Lemma \ref{lemma2}, we deduce that for any $(x,y)$ with $x\leq \min\{\varepsilon, -\frac{1}{s_0}\} $ and $y\in \Omega,$ $u(t,x,y)=0$ in a neighborhood of $-(s_0+\frac{1}{x})< t<(s_0+\frac{1}{x}),$ and applying \eqref{tatres}, we conclude that \begin{gather*} u(t,z)=\partial_t u(t,z)=0 \text{ provided } x<\varepsilon, \;\ y\in \Omega \text{ and } |t|+d_g(z,(x,y))< s_0 + \frac{1}{x}. \end{gather*} In particular, if $t=0,$ $f=\partial_t u(0,z)=0$ if $d_g(z,(x,y))< s_0 + \frac{1}{x}.$ This concludes the proof of Theorem \ref{main}.
\section{Introduction} Learning low-dimensional vector representations of nodes in graphs \cite{hamilton2017representation} has led to advances on tasks such as node classification \cite{kipf2016semi}, link prediction \cite{grover2016node2vec}, graph classification \cite{ying2018hierarchical} and graph generation \cite{you2018graphrnn}, with successful applications across domains such as social and information networks \cite{ying2018graph}, chemistry \cite{you2018graph}, and biology \cite{zitnik2017predicting}. Node embedding methods can be categorized into Graph Neural Network (GNN) approaches \cite{scarselli2009graph}, matrix-factorization approaches \cite{belkin2002laplacian}, and random-walk approaches \cite{perozzi2014deepwalk}. Among these, GNNs are currently the most popular paradigm, largely owing to their efficiency and inductive learning capability~\cite{hamilton2017inductive}. By contrast, random-walk approaches~\cite{perozzi2014deepwalk,grover2016node2vec} are limited to transductive settings and cannot incorporate node attributes. In the GNN framework, the embedding of a node is computed by a GNN layer aggregating information from the node's network neighbors via non-linear transformation and aggregation functions~\cite{battaglia2018relational}. Long-range node dependencies can be captured by stacking multiple GNN layers, allowing the information to propagate for multiple hops~\cite{xu2018representation}. However, the key limitation of existing GNN architectures is that they fail to capture the {\em position/location} of the node within the broader context of the graph structure. For example, if two nodes reside in very different parts of the graph but have topologically the same (local) neighbourhood structure, their GNN computation trees are identical. Therefore, the GNN will embed them to the same point in the embedding space (we ignore node attributes for now).
Figure~\ref{fig:example} gives an example where a GNN cannot distinguish between nodes $v_1$ and $v_2$ and will always embed them to the same point because they have isomorphic network neighborhoods. Thus, GNNs will never be able to classify nodes $v_1$ and $v_2$ into different classes because from the GNN point of view they are indistinguishable (again, not considering node attributes). Researchers have spotted this weakness~\cite{xu2018powerful} and developed heuristics to fix the issue: augmenting node features with one-hot encodings \cite{kipf2016semi}, or making GNNs deeper~\cite{selsam2018learning}. However, models trained with one-hot encodings cannot generalize to unseen graphs, and arbitrarily deep GNNs still cannot distinguish structurally isomorphic nodes (Figure \ref{fig:example}). \begin{figure} \centering \includegraphics[width=0.42\textwidth]{figs/example.png} \caption{Example graph where a GNN is not able to distinguish, and thus classify, nodes $v_1$ and $v_2$ into different classes based on the network structure alone. (Note that we do not consider node features.) Each node is labeled either $A$ or $B$, and an effective node embedding method should be able to distinguish nodes $v_1$ and $v_2$ (that is, embed them into different points in the space). However, GNNs, regardless of depth, will \textit{always} assign the same embedding to both nodes, because the two nodes are symmetric/isomorphic in the graph, and their GNN rooted subtrees used for message aggregation are the same. In contrast, P-GNNs can break the symmetry by using $v_3$ as the anchor-set; then the shortest path distances $(v_1, v_3)$ and $(v_2, v_3)$ are different, and nodes $v_1$ and $v_2$ can thus be distinguished.
} \label{fig:example} \vspace{-3mm} \end{figure} Here we propose {\em Position-aware Graph Neural Networks (P-GNNs)}, a new class of Graph Neural Networks for computing node embeddings that incorporate a node's positional information with respect to all other nodes in the network, while also retaining inductive capability and utilizing node features. Our key observation is that node position can be captured by a low-distortion embedding that quantifies the distance between a given node and a set of anchor nodes. Specifically, P-GNN uses a sampling strategy with theoretical guarantees to choose $k$ random subsets of nodes called {\em anchor-sets}. To compute a node's embedding, P-GNN first samples multiple anchor-sets in each forward pass, then learns a non-linear aggregation scheme that combines node feature information from each anchor-set and weighs it by the distance between the node and the anchor-set. Such aggregations can be naturally chained and combined into multiple layers to enhance model expressiveness. The Bourgain theorem \cite{bourgain1985lipschitz} guarantees that only $k = O(\log^2 n)$ anchor-sets are needed to preserve the distances in the original graph with low distortion. We demonstrate the P-GNN framework on various real-world graph-based prediction tasks. In settings where node attributes are not available, P-GNN's computation of the $k$-dimensional distance vector is inductive across different node orderings and different graphs. When node attributes are available, a node's embedding is further enriched by aggregating information from all anchor-sets, weighted by the $k$-dimensional distance vector. Furthermore, we show theoretically that P-GNNs are more general and expressive than traditional message-passing GNNs. In fact, message-passing GNNs can be viewed as special cases of P-GNNs with degenerate distance metrics and anchor-set sampling strategies. In large-scale applications, computing distances between nodes can be prohibitively expensive.
Therefore, we also propose P-GNN-Fast, which adopts approximate node distance computation. We show that P-GNN-Fast has the same computational complexity as traditional GNN models while still preserving the benefits of P-GNN. We apply P-GNNs to 8 different datasets and several different prediction tasks including link prediction and community detection\footnote{Code and data are available at \url{https://github.com/JiaxuanYou/P-GNN/}}. In all datasets and prediction tasks, we show that P-GNNs consistently outperform state-of-the-art GNN variants, with up to 66\% AUC score improvement. \section{Related Work} Existing GNN models belong to a family of graph message-passing architectures that use different aggregation schemes for a node to aggregate feature messages from its neighbors in the graph: Graph Convolutional Networks use mean pooling \cite{kipf2016semi}; GraphSAGE concatenates the node's feature in addition to mean/max/LSTM pooled neighborhood information \cite{hamilton2017inductive}; Graph Attention Networks aggregate neighborhood information according to trainable attention weights \cite{velickovic2017graph}; Message Passing Neural Networks further incorporate edge information when doing the aggregation \cite{gilmer2017neural}; and Graph Networks further consider global graph information during aggregation \cite{battaglia2018relational}. However, all these models focus on learning node embeddings that capture local network structure around a given node. Such models are at most as powerful as the WL graph isomorphism test \cite{xu2018powerful}, which means that they cannot distinguish nodes at symmetric/isomorphic positions in the network (Figure~\ref{fig:example}). That is, without relying on the node feature information, the above models will always embed nodes at symmetric positions into the same embedding vectors, which means that such nodes are indistinguishable from the GNN's point of view.
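This limitation is easy to reproduce numerically. The following self-contained sketch (our own toy example, not code from the paper) builds a 6-cycle, in which every node's neighbourhood is isomorphic to every other's, and runs two layers of mean-aggregation message passing with constant input features: all nodes receive identical embeddings, whereas shortest-path distances to a fixed anchor node do separate them.

```python
import numpy as np
from collections import deque

# Toy example (ours, not from the paper): a 6-cycle, where every node's
# neighbourhood is isomorphic to every other node's.
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

# Two layers of GCN-style mean aggregation with constant input features.
rng = np.random.default_rng(0)
H = np.ones((n, 4))
deg = A.sum(axis=1, keepdims=True)
for W in (rng.normal(size=(4, 4)), rng.normal(size=(4, 4))):
    H = np.tanh(((A @ H) / deg) @ W)

# Every node gets the same embedding: the GNN cannot tell them apart,
# regardless of how many such layers are stacked.
assert np.allclose(H, H[0])

# Shortest-path distance to an anchor node (node 0) does tell them apart.
dist = {0: 0}
queue = deque([0])
while queue:
    u = queue.popleft()
    for v in np.flatnonzero(A[u]):
        if v not in dist:
            dist[v] = dist[u] + 1
            queue.append(v)
print(dist[1], dist[3])  # prints: 1 3
```

The same symmetry argument applies to any permutation-invariant neighbourhood aggregator, which is why the fix pursued in this paper is positional (anchor-based) rather than a deeper architecture.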
Heuristics that alleviate the above issues include assigning a unique identifier to each node \cite{kipf2016semi,hamilton2017inductive} or using locally assigned node identifiers plus pre-trained transductive node features \cite{zhang2018link}. However, such models are not scalable and cannot generalize to unseen graphs where the canonical node ordering is not available. In contrast, P-GNNs can capture positional information without sacrificing the other advantages of GNNs. An alternative way to incorporate positional information is to use graph kernels, which crucially rely on the positional information of nodes and which inspired our P-GNN model. Graph kernels implicitly or explicitly map graphs to a Hilbert space. Weisfeiler-Lehman and Subgraph kernels have been incorporated into deep graph kernels~\cite{Yan+2015} to capture structural properties of neighborhoods. \citeauthor{Gae+2003} (\citeyear{Gae+2003}) and \citeauthor{Kas+2003} (\citeyear{Kas+2003}) also proposed graph kernels based on random walks, which count the number of walks two graphs have in common~\cite{Sug+2015}. Kernels based on shortest paths were first proposed in~\cite{Borgwardt2005}. \section{Preliminaries} \begin{figure*} \centering \includegraphics[width=\textwidth]{figs/PGNN.png} \caption{P-GNN architecture. P-GNN first samples multiple anchor-sets $S = \{S_1, S_2, S_3\}$ of different sizes (\textbf{Left}). Then, position-aware node embeddings $\mathbf{z}_{v_i}$ are computed via messages $M_{v_i}$ between a given node $v_i$ and the anchor-sets $S_i$, which are shared across all the nodes (\textbf{Middle}). To compute the embedding $\mathbf{z}_{v_1}$ for node $v_1$, one layer of P-GNN first computes messages via function $F$ and then aggregates them via a learnable function $\textsc{Agg}_M$ over the nodes in each anchor-set $S_i$ to obtain a matrix of anchor-set messages $\mathbf{M}_{v_1}$.
The message matrix $\mathbf{M}_{v_1}$ is then further aggregated using a learnable function $\textsc{Agg}_S$ to obtain node $v_1$'s message $\mathbf{h}_{v_1}$, which can be passed to the next P-GNN layer. At the same time, a learned vector $\mathbf{w}$ is used to reduce $\mathbf{M}_{v_1}$ into a fixed-size position-aware embedding $\mathbf{z}_{v_1}$, which is the output of the P-GNN (\textbf{Right}). } \label{fig:PGNN} \end{figure*} \subsection{Notation and Problem Definition} A graph can be represented as $G = (\mathcal{V},\mathcal{E})$, where $\mathcal{V} = \{v_1, ..., v_n\}$ is the node set and $\mathcal{E}$ is the edge set. In many applications where nodes have attributes, we augment $G$ with the node feature set $\mathcal{X} = \{\mathbf{x}_1, ..., \mathbf{x}_n\}$, where $\mathbf{x}_i$ is the feature vector associated with node $v_i$. Predictions on graphs are made by first embedding nodes into a low-dimensional space; the embeddings are then fed into a classifier, potentially in an end-to-end fashion. Specifically, a node embedding model can be written as a function $f: \mathcal{V} \rightarrow \mathcal{Z}$ that maps nodes $\mathcal{V}$ to $d$-dimensional vectors $\mathcal{Z} = \{\mathbf{z}_1, ..., \mathbf{z}_n\}, \mathbf{z}_i \in \mathbb{R}^d$. \subsection{Limitations of Structure-aware Embeddings} \label{sc:limitation_structure_aware} \cut{ We identify two key properties of node embeddings that are of special interest in applications, which we name \textit{structure-aware} embeddings and \textit{position-aware} embeddings. Structure-aware embeddings capture the structural role of a given node and usually involve some approximate graph isomorphism test. Position-aware embeddings reflect homophily properties in the graph \cite{grover2016node2vec}, which are highly related to shortest path distance metrics.
Usually, tasks such as community detection and link prediction rely more on position-aware embeddings, while common node classification tasks rely more on structure-aware embeddings {\textcolor{red}{[CITE]}}. Although both types of embeddings have been used to qualitatively discuss the nature of various network prediction tasks, their relationship is not clearly understood. } Our goal is to learn embeddings that capture the local network structure as well as retain the global network position of a given node. We call node embeddings {\em position-aware} if the embeddings of two nodes can be used to (approximately) recover their shortest path distance in the network. This property is crucial for many prediction tasks, such as link prediction and community detection. We show below that GNN-based embeddings cannot recover shortest path distances between nodes, which may lead to suboptimal performance in tasks where such information is needed. \begin{definition} A node embedding $\mathbf{z}_i = f_p(v_i), \forall v_i \in \mathcal{V}$ is position-aware if there exists a function $g_p(\cdot, \cdot)$ such that $d_{sp}(v_i, v_j) = g_p(\mathbf{z}_i, \mathbf{z}_j)$, where $d_{sp}(\cdot, \cdot)$ is the shortest path distance in $G$. \end{definition} \begin{definition} A node embedding $\mathbf{z}_i = f_{s_q}(v_i), \forall v_i \in \mathcal{V}$ is structure-aware if it is a function of the up-to-$q$-hop network neighbourhood of node $v_i$. Specifically, $\mathbf{z}_i = g_s(N_1(v_i),...,N_q(v_i))$, where $N_k(v_i)$ is the set of nodes $k$ hops away from node $v_i$, and $g_s$ can be any function. \end{definition} For example, most graph neural networks compute node embeddings by aggregating information from each node's $q$-hop neighborhood, and are thus structure-aware.
In contrast, (long) random-walk-based embeddings such as DeepWalk and Node2Vec are position-aware, since their objective function forces nodes that are close in shortest path distance to also be close in the embedding space. In general, structure-aware embeddings cannot be mapped to position-aware embeddings. Therefore, when the learning task requires node positional information, only using structure-aware embeddings as input is not sufficient: \begin{proposition} \label{prop:emb_dichoto} There exists a mapping $g$ that maps structure-aware embeddings $f_{s_q}(v_i), \forall v_i \in \mathcal{V}$ to position-aware embeddings $f_p(v_i), \forall v_i \in \mathcal{V}$, if and only if no pair of nodes have isomorphic local $q$-hop neighbourhood graphs. \end{proposition} Proposition \ref{prop:emb_dichoto} is proved in the Appendix. The proof is based on identifiability arguments similar to the proof of Theorem 1 in \cite{hamilton2017inductive}, and also explains why in some cases GNNs may perform well in tasks requiring positional information. However, in real-world graphs such as molecules and social networks, structural equivalences between nodes' local neighbourhood graphs are quite common, making it hard for GNNs to distinguish different nodes. Furthermore, the mapping $g$ essentially memorizes the shortest path distance between a pair of structure-aware node embeddings whose local neighbourhoods are unique. Therefore, even if the GNN perfectly learns the mapping $g$, it cannot generalize the mapping to new graphs. \cut{ We concretely state the limitation of structure-aware node embeddings in the inductive setting in Proposition \ref{prop:emb_dichoto_inductive} (proof in the Appendix). \begin{proposition} \label{prop:emb_dichoto_inductive} There does not exist a general mapping $g$ that maps from structure-aware embeddings $f_{s_q}(v_i)$ to shortest path distance $d_{sp}$ for any given graph.
\end{proposition} } \cut{ Finally, we point out that while inductive position-aware link-prediction tasks are well-defined and prevalent, inductive position-aware node classification tasks are in fact ill-defined\footnote{Note that position-aware node classification tasks are well defined in the transductive setting}. In other words, the predictions for position-aware node classification tasks only need to match the node labels \emph{up to permutation of labels}, which is equivalent to link prediction where the links represent the equivalence relation between nodes, and the labels form the quotient set of $V$. \begin{proposition} \label{prop:node_transform_edge} Inductive position-aware node labels are in fact ill-defined. There does not exist a position-aware node labeling function for a dataset of multiple graphs, such that the labeling function remains consistent for each input node after permutations of the graphs. \end{proposition} } \section{Proposed Approach} In this section, we first describe the P-GNN\xspace framework that extends GNNs to learn position-aware node embeddings. We then discuss our model design choices. Finally, we theoretically show how P-GNNs generalize existing GNNs and learn position-aware embeddings. \subsection{The Framework of P-GNNs} We propose Position-aware Graph Neural Networks that generalize the concepts of Graph Neural Networks with two key insights. First, when computing the node embedding, instead of only aggregating messages computed from a node's local network neighbourhood, we allow P-GNNs to \textit{aggregate messages from anchor-sets}, which are randomly chosen subsets of all the nodes (Figure~\ref{fig:PGNN}, left). Note that anchor-sets are resampled in every forward pass of the model.
Secondly, when performing message aggregation, instead of letting each node aggregate information independently, the aggregation is \textit{coupled across all the nodes} in order to distinguish nodes with different positions in the network (Figure~\ref{fig:PGNN}, middle). We design P-GNNs such that each node embedding dimension corresponds to messages computed with respect to one anchor-set, which makes the computed node embeddings position-aware (Figure~\ref{fig:PGNN}, right). P-GNNs contain the following key components:\\ \-\hspace{5mm} $\bullet$ $k$ anchor-sets $S_i$ of different sizes.\\ \-\hspace{5mm} $\bullet$ Message computation function $F$ that combines feature information of two nodes with their network distance.\\ \-\hspace{5mm}$\bullet$ Matrix $\mathbf{M}$ of anchor-set messages, where each row $i$ is an anchor-set message $\mathcal{M}_i$ computed by $F$.\\ \-\hspace{5mm}$\bullet$ Trainable aggregation functions $\textsc{Agg}_M$, $\textsc{Agg}_S$ that aggregate/transform feature information of the nodes in the anchor-set and then also aggregate it across the anchor-sets.\\ \-\hspace{5mm}$\bullet$ Trainable vector $\mathbf{w}$ that projects message matrix $\mathbf{M}$ to a lower-dimensional embedding space $\mathbf{z} \in \mathbb{R}^k$. Algorithm \ref{alg:pgnn} summarizes the general framework of P-GNNs. A P-GNN consists of multiple P-GNN layers. Concretely, the $l^\text{th}$ P-GNN layer first samples $k$ random anchor-sets $S_i$. Then, the $i^{\text{th}}$ dimension of the output node embedding $\mathbf{z}_{v}$ represents messages computed with respect to the $i^{\text{th}}$ anchor-set $S_i$. Each dimension of the embedding is obtained by first computing the message from each node in the anchor-set via message computation function $F$, then applying a message aggregation function $\textsc{Agg}_M$, and finally applying a non-linear transformation to get a scalar via weights $\mathbf{w}\in\mathbb{R}^{r}$ and non-linearity $\sigma$. 
Specifically, the message from each node includes distances that reveal node positions as well as feature-based information from input node features. The message aggregation functions are the same class of functions as used by existing GNNs. We further elaborate on the design choices in Section \ref{sc:design_choices}. \xhdr{P-GNNs are position-aware} The output embeddings $\mathbf{z}_v$ are position-aware, as each dimension of the embedding encodes the necessary information to distinguish structurally equivalent nodes that reside in different parts of the graph. Note that if we permute the dimensions of all the node embeddings $\mathbf{z}_v$, the resulting embeddings are equivalent to the original embeddings because they carry the same node positional information with respect to (permuted order of) anchor-sets $\{S_i\}$. Multiple P-GNN layers can be naturally stacked to achieve higher expressive power. Note that unlike GNNs, we cannot feed the output embeddings $\mathbf{z}_v$ from the previous layer to the next layer, because the dimensions of $\mathbf{z}_v$ can be arbitrarily permuted; therefore, applying a fixed non-linear transformation over this representation is problematic. The deeper reason we cannot feed $\mathbf{z}_v$ to the next layer is that the position of a node is always \textit{relative} to the chosen anchor-sets; thus, canonical position-aware embeddings do not exist. Therefore, P-GNNs also compute structure-aware messages $\mathbf{h}_{v}$, which are computed via an order-invariant message aggregation function that aggregates messages \textit{across anchor-sets}, and are then fed into the next P-GNN layer as input. \begin{algorithm}[h!] 
\caption{The framework of P-GNNs} \label{alg:pgnn} \begin{algorithmic} \STATE {\bfseries Input:} Graph $G=(\mathcal{V},\mathcal{E})$; Set $S$ of $k$ anchor-sets $\{S_i\}$; Node input features $\{\mathbf{x}_v\}$; Message computation function $F$ that outputs an $r$-dimensional message; Message aggregation functions $\textsc{Agg}_M, \textsc{Agg}_S$; Trainable weight vector $\mathbf{w}\in\mathbb{R}^{r}$; Non-linearity $\sigma$; Layer $l \in [1,L]$ \STATE {\bfseries Output:} Position-aware embedding $\mathbf{z}_v$ for every node $v$ \STATE $\mathbf{h}_v \leftarrow \mathbf{x}_v$ \FOR{$l=1,\dots,L$} \STATE $S_i \sim \mathcal{V}$ for $i = 1,\dots,k$ \FOR{$v \in \mathcal{V}$} \STATE $\mathbf{M}_v = \mathbf{0} \in \mathbb{R}^{k\times r}$ \FOR{$i = 1,\dots,k$} \STATE $\mathcal{M}_i \leftarrow \{F(v,u,\mathbf{h}_v,\mathbf{h}_u), \forall u \in S_i\}$ \STATE $\mathbf{M}_v[i] \leftarrow \textsc{Agg}_M(\mathcal{M}_i)$ \ENDFOR \STATE $\mathbf{z}_{v} \leftarrow \sigma(\mathbf{M}_v \cdot \mathbf{w})$ \STATE $\mathbf{h}_{v} \leftarrow \textsc{Agg}_S(\{\mathbf{M}_v[i], \forall i \in [1,k]\})$ \ENDFOR \ENDFOR \STATE $\mathbf{z}_v \in \mathbb{R}^k$, $\forall v \in \mathcal{V}$ \end{algorithmic} \end{algorithm} \subsection{Anchor-set Selection} \label{sc:anchor_selection} We rely on Bourgain's Theorem to guide the choice of anchor-sets, such that the resulting representations are guaranteed to have low distortion. Specifically, distortion measures the faithfulness of the embeddings in preserving distances when mapping from one metric space to another, and is defined as follows: \begin{definition} Given two metric spaces $(\mathcal{V},d)$ and $(\mathcal{Z},d')$ and a function $f: \mathcal{V} \rightarrow \mathcal{Z}$, $f$ is said to have distortion $\alpha$ if $\forall u,v \in \mathcal{V}$, $\frac{1}{\alpha} d(u,v) \leq d'(f(u),f(v)) \leq d(u,v)$.
\end{definition} Theorem \ref{th:bourgain} states the Bourgain Theorem \cite{bourgain1985lipschitz}, which shows the existence of a low distortion embedding that maps from any metric space to the $l_p$ metric space: \begin{theorem} \label{th:bourgain} (Bourgain theorem) Given any finite metric space $(\mathcal{V},d)$ with $|\mathcal{V}| = n$, there exists an embedding of $(\mathcal{V}, d)$ into $\mathbb{R}^k$ under any $l_p$ metric, where $k = O(\log^2 n)$, and the distortion of the embedding is $O(\log n)$. \end{theorem} A constructive proof of Theorem \ref{th:bourgain} \cite{linial1995geometry} provides an algorithm to construct an $O(\log^2 n)$ dimensional embedding via anchor-sets, as summarized in Theorem \ref{th:bourgain_constructive}: \begin{theorem} \label{th:bourgain_constructive} (Constructive proof of Bourgain theorem) For metric space $(\mathcal{V},d)$, given $k = c\log^2 n$ random sets $S_{i,j} \subset \mathcal{V}, i=1,2,...,\log n, j = 1,2,...,c\log n$ where $c$ is a constant, $S_{i,j}$ is chosen by including each point in $\mathcal{V}$ independently with probability $\frac{1}{2^i}$. An embedding method for $v \in \mathcal{V}$ is defined as: \begin{equation} f(v) = \big( \frac{d(v, S_{1,1})}{k}, \frac{d(v, S_{1,2})}{k}, ..., \frac{d(v, S_{\log n,c\log n})}{k} \big) \end{equation} where $d(v, S_{i,j}) = \min_{u\in S_{i,j}} d(v,u)$. Then, $f$ is an embedding method that satisfies Theorem \ref{th:bourgain}. \end{theorem} The proposed P-GNNs can be viewed as a generalization of the embedding method in Theorem \ref{th:bourgain_constructive}, where the distance metric $d$ is generalized via message computation function $F$ and message aggregation function $\textsc{Agg}_M$ that accounts for both node feature information and position-based similarities (Section \ref{sc:design_choices}). Using this analogy, Theorem \ref{th:bourgain_constructive} offers two insights for selecting anchor-sets in P-GNNs. 
First, $O(\log^2 n)$ anchor-sets are needed to guarantee low distortion embedding. Second, these anchor-sets have sizes distributed exponentially. Here, we illustrate the intuition behind selecting anchor-sets with different sizes via the $1$-hop shortest path distance defined in Equation~\ref{eq:q_hop_dist}. Suppose that the model is computing embeddings for node $v_i$. We say an anchor-set \emph{hits} node $v_i$ if $v_i$ or any of its one-hop neighbours is included in the anchor-set. Small anchor-sets can provide positional information with high certainty, because when a small anchor-set hits $v_i$, we know that $v_i$ is located close to one of the very few nodes in the small anchor-set. However, the probability that such small anchor-set hits $v_i$ is low, and the anchor-set is uninformative if it misses $v_i$. On the contrary, large anchor-sets have higher probability of hitting $v_i$, thus sampling large anchor-sets can result in high sample efficiency. However, knowing that a large anchor-set hits $v_i$ provides little information about its position, since $v_i$ might be close to any of the many nodes in the anchor-set. Therefore, choosing anchor-sets of different sizes balances the trade-off and leads to efficient embeddings. Following the above principle, P-GNNs choose $k = c\log^2 n$ random anchor-sets, denoted as $S_{i,j} \subset \mathcal{V}$, where $i=1,\dots,\log n, j = 1,\dots,c\log n$ and $c$ is a hyperparameter. To sample an anchor-set $S_{i,j}$, we sample each node in $\mathcal{V}$ independently with probability $\frac{1}{2^i}$. \subsection{Design decisions for P-GNNs} \label{sc:design_choices} In this section, we discuss the design choices of the two key components of P-GNNs: the message computation function $F$ and the message aggregation functions $\textsc{Agg}$. 
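Before detailing these components, the anchor-set sampling scheme of the previous subsection can be sketched in Python. This is our own illustrative reading of the constructive proof; the base-2 logarithm and the fallback that keeps sets non-empty are our assumptions, not the authors' implementation.

```python
import math
import random

def sample_anchor_sets(n, c=1, seed=None):
    """Bourgain-style sampling: k = c * log(n) * log(n) anchor-sets
    S_{i,j}, where S_{i,j} includes each of the n nodes independently
    with probability 1/2^i.  Base-2 logs and the non-empty fallback
    are our own illustrative choices."""
    rng = random.Random(seed)
    log_n = max(1, math.ceil(math.log2(n)))
    anchor_sets = []
    for i in range(1, log_n + 1):      # exponentially decaying set sizes
        for _ in range(c * log_n):
            s = {v for v in range(n) if rng.random() < 0.5 ** i}
            anchor_sets.append(s if s else {rng.randrange(n)})
    return anchor_sets

sets = sample_anchor_sets(64, c=1, seed=0)
assert len(sets) == 36  # log2(64)^2 = 36 anchor-sets
```

In a P-GNN layer these sets would be resampled at every forward pass, and messages from each set aggregated as in Algorithm 1.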
\xhdr{Message Computation Function $F$} Message computation function $F(v,u,\mathbf{h}_v,\mathbf{h}_u)$ has to account for both position-based similarities and feature information. Position-based similarities are the key to revealing a node's positional information, while feature information may include other side information that is useful for the prediction task. Position-based similarities can be computed via the shortest path distance, or, for example, personalized PageRank \cite{jeh2003scaling}. However, since the computation of shortest path distances has $O(|\mathcal{V}|^3)$ computational complexity, we propose the following $q$-hop shortest path distance \begin{equation} \label{eq:q_hop_dist} d^q_{sp}(v,u) = \begin{cases} d_{sp}(v,u), & \text{if } d_{sp}(v,u) \leq q, \\ \infty, & \text{otherwise} \end{cases} \end{equation} where $d_{sp}$ is the shortest path distance between a pair of nodes. Note that the $1$-hop distance can be directly identified from the adjacency matrix, and thus no additional computation is needed. Since we aim to map nodes that are close in the network to similar embeddings, we further transform the distance into $s(v,u) = \frac{1}{d^q_{sp}(v,u)+1}$, which maps it to the $[0,1]$ range. Feature information can be incorporated into $\mathbf{h}_u$ by passing in the information from the neighbouring nodes, as in GCN \cite{kipf2016semi}, or by concatenating node features $\mathbf{h}_v$ and $\mathbf{h}_u$, similar to GraphSAGE \cite{hamilton2017inductive}, although other approaches like attention can be used as well \cite{velickovic2017graph}. Combining position and feature information can then be achieved via concatenation or product. We find that a simple product works well empirically.
Specifically, we find that the following message passing function $F$ performs well empirically: \begin{equation} F(v,u,\mathbf{h}_v,\mathbf{h}_u) = s(v,u) \textsc{concat}(\mathbf{h}_v,\mathbf{h}_u) \end{equation} \xhdr{Message Aggregation Functions $\textsc{Agg}$} Message aggregation functions aggregate information from a set of messages (vectors). Any permutation-invariant function, such as $\textsc{Mean}, \textsc{Min}, \textsc{Max}, \textsc{Sum}$, can be used, and non-linear transformations are often applied before and/or after the aggregation to achieve higher expressive power \cite{zaheer2017deep}. We find that using the simple $\textsc{Mean}$ aggregation function provides good results; thus, we use it to instantiate both $\textsc{Agg}_M$ and $\textsc{Agg}_S$. \section{Theoretical Analysis of P-GNNs} \subsection{Connection to Existing GNNs} \label{sc:connection_to_gnns} P-GNNs generalize existing GNN models. From P-GNN's point of view, existing GNNs use the same anchor-set message aggregation techniques, but use different anchor-set selection and sampling strategies, and only output the structure-aware embeddings $\mathbf{h}_{v}$. GNNs use either deterministic or stochastic neighbourhood aggregation \cite{hamilton2017inductive}. Deterministic GNNs can be expressed as special cases of P-GNNs that treat each individual node as an anchor-set and aggregate messages based on $q$-hop distance. In particular, the function $F$ in Algorithm~\ref{alg:pgnn} corresponds to the message aggregation function of a deterministic GNN. In each layer, most GNNs aggregate information from a node's one-hop neighbourhood \cite{kipf2016semi,velickovic2017graph}, corresponding to using $1$-hop distance to compute messages, or directly aggregate the $k$-hop neighbourhood \cite{xu2018representation}, corresponding to computing messages within $k$-hop distance.
For example, a GCN \cite{kipf2016semi} can be written as choosing $\{S_i\} = \{v_i\}$, $\textsc{Agg}_M= \textsc{Mean}$, $\textsc{Agg}_S = \textsc{Mean}$, $F = \frac{1}{d^1_{sp}(v,u)+1}\mathbf{h}_u$, and the output embedding is $\mathbf{h}_v$ in the final layer. Stochastic GNNs can be viewed as P-GNNs that sample size-1 anchor-sets, but where each node's choice of anchor-sets is different. For example, GraphSAGE \cite{hamilton2017inductive} can be viewed as a special case of P-GNNs where each node samples $k$ size-1 anchor-sets and then computes messages using the $1$-hop shortest path distance, followed by aggregation $\textsc{Agg}_S$. This understanding reveals the connection between stochastic GNNs and P-GNNs. First, P-GNN uses larger anchor-sets, thereby enabling higher sample efficiency (Sec \ref{sc:anchor_selection}). Second, anchor-sets that are shared across all nodes serve as reference points in the network; consequently, the positional information of each node can be obtained from the shared anchor-sets. \subsection{Expressive Power of P-GNNs} \label{sc:anchor_distance} Next, we show that P-GNNs provide a more \textit{general class of inductive bias} for graph representation learning than GNNs and are therefore more expressive in learning both structure-aware and position-aware node embeddings. We motivate our idea by considering pairwise relation prediction between nodes. Suppose a pair of nodes $u, v$ is labeled with label $y$ by a labeling function $d_y(u, v)$, and our goal is to predict $y$ for unseen node pairs. From the perspective of representation learning, we can solve the problem by learning an embedding function $f$ that computes the node embedding $\mathbf{z}_v$, where the objective is to maximize the likelihood of the conditional distribution $p(y|\mathbf{z}_u, \mathbf{z}_v)$.
Generally, an embedding function takes a given node $v$ and the graph $G$ as input and can be written as $\mathbf{z}_v = f(v, G)$, while $p(y|\mathbf{z}_u, \mathbf{z}_v)$ can be expressed as a function $d_z(\mathbf{z}_u, \mathbf{z}_v)$ in the embedding space. As shown in Section \ref{sc:limitation_structure_aware}, GNNs instantiate $f$ via a function $f_{\theta}(v, S_v)$ that takes a node $v$ and its $q$-hop neighbourhood graph $S_v$ as arguments. Note that $S_v$ is independent of $S_u$ (the $q$-hop neighbourhood graph of node $u$), since knowing the neighbourhood graph structure of node $v$ provides no information on the neighbourhood structure of node $u$. In contrast, P-GNNs assume a more general type of inductive bias, where $f$ is instantiated via $f_{\phi}(v, S)$ that aggregates messages from random anchor-sets $S$ that are shared across all the nodes, and nodes are differentiated based on their different distances to the anchor-sets $S$. Under this formulation, each node's embedding is computed similarly to that in a stochastic GNN when combined with a proper $q$-hop distance computation (Section \ref{sc:connection_to_gnns}). However, since the anchor-sets $S$ are shared across all nodes, pairs of node embeddings are correlated via the anchor-sets $S$, and are thus no longer independent. This formulation implies a joint distribution $p(\mathbf{z}_u, \mathbf{z}_v)$ over node embeddings, where $\mathbf{z}_u = f_{\phi}(u, S)$ and $\mathbf{z}_v = f_{\phi}(v, S)$.
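A toy calculation (our own illustration, on a hypothetical path graph) shows why sharing anchor-sets couples the two embeddings: when both nodes measure distances to the same sets, each coordinate of $\mathbf{z}_u - \mathbf{z}_v$ is bounded by $d_{sp}(u,v)$, since the minimum over a set of 1-Lipschitz distance functions is itself 1-Lipschitz. The pair of embeddings therefore jointly encodes graph distance:

```python
import random

# Path graph 0-1-...-19 (our toy example): d_sp(u, v) = |u - v|.
n = 20
def d_sp(u, v):
    return abs(u - v)

rng = random.Random(0)
# A few shared anchor-sets with exponentially decaying inclusion probability.
sets = []
for i in range(1, 5):
    for _ in range(4):
        s = {v for v in range(n) if rng.random() < 0.5 ** i}
        sets.append(s or {rng.randrange(n)})

def embed(v):
    # Distance-vector embedding w.r.t. the SHARED anchor-sets.
    return [min(d_sp(v, u) for u in s) for s in sets]

u, v = 3, 11
zu, zv = embed(u), embed(v)
# Coordinate-wise, the embeddings differ by at most d_sp(u, v).
assert max(abs(a - b) for a, b in zip(zu, zv)) <= d_sp(u, v)
```

Had each node drawn its own independent anchor-sets (the $f_{\theta}(v, S_v)$ case), no such coordinate-wise comparison would be meaningful.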
In summary, \textit{learning node representations} can be formalized with the following two types of objectives: \-\hspace{5mm} $\bullet$ GNN representation learning objective: \begin{equation} \begin{aligned} \label{eq:ob_edge_marginal} \min_\theta & \ \mathbb{E}_{u \sim V_{train}, v \sim V_{train}, S_u \sim p(V) , S_v \sim p(V)} \\ & \mathcal{L}(d_z(f_{\theta}(u, S_u), f_{\theta}(v, S_v))-d_y(u,v)) \end{aligned} \end{equation} \-\hspace{5mm} $\bullet$ P-GNN representation learning objective: \begin{equation} \begin{aligned} \label{eq:ob_edge_joint} \min_\phi & \ \mathbb{E}_{u \sim V_{train}, v \sim V_{train}, S \sim p(V)} \\ & \mathcal{L}(d_z(f_{\phi}(u, S), f_{\phi}(v, S))-d_y(u,v)) \end{aligned} \end{equation} where $d_y$ is the target similarity metric determined by the learning task, for example, indicating links between nodes or membership in the same community, and $d_z$ is the similarity metric in the embedding space, usually an $l_p$ norm. Optimizing Equations \ref{eq:ob_edge_marginal} and \ref{eq:ob_edge_joint} gives representations of nodes using the marginal and the joint distribution over node embeddings, respectively. If we treat $u$, $v$ as random variables from $G$ that can take values of any pair of nodes, then the mutual information between the joint distribution of node embeddings and any $Y = d_y(u,v)$ is larger than that between the marginal distributions and $Y$: $I(Y; X_{joint}) \geq I(Y; X_{marginal})$, where $X_{joint} = (f_{\phi}(u, S), f_{\phi}(v, S)) \sim p(f_{\phi}(u, S), f_{\phi}(v, S))$ and $X_{marginal} = (f_{\theta}(u, S_u), f_{\theta}(v, S_v)) \sim p(f_{\theta}(u, S_u)) \otimes p(f_{\theta}(v, S_v))$, where $\otimes$ denotes the product of the marginal distributions. The gap in this mutual information is large if the target task $d_y(u,v)$ is related to the positional information of nodes, which can be captured by the shared choice of anchor-sets.
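To see why sharing the anchor-sets across a node pair makes the embedding distance informative, consider a toy, untrained stand-in for $f$ that embeds a node by its distances to each anchor-set. The sketch below (our own illustration; the cycle graph and helper names are hypothetical) contrasts sampling one shared $S$, as in the P-GNN objective, with sampling independent $S_u, S_v$, as in the GNN objective.

```python
import random

n = 8                       # toy cycle graph C_8
def d_sp(u, v):             # shortest path distance on the cycle
    d = abs(u - v)
    return min(d, n - d)

def f(v, S):
    """Untrained stand-in for the embedding: distance from v to each
    anchor-set (minimum over the set's members)."""
    return [min(d_sp(v, a) for a in A) for A in S]

def d_z(zu, zv):            # l1 distance in embedding space
    return sum(abs(a - b) for a, b in zip(zu, zv))

rng = random.Random(1)
def sample_S(k=4):          # k random size-1 anchor-sets
    return [[rng.randrange(n)] for _ in range(k)]

u, v = 0, 1                 # adjacent nodes: d_sp(u, v) = 1
S = sample_S()              # P-GNN-style: one S shared by u and v
# Triangle inequality: each coordinate differs by at most d_sp(u, v),
# so d_z is bounded by k * d_sp(u, v) and tracks true proximity.
print(d_z(f(u, S), f(v, S)))
Su, Sv = sample_S(), sample_S()   # GNN-style: independent samples
print(d_z(f(u, Su), f(v, Sv)))   # no such bound; the correlation is lost
```

With a shared $S$ the pairwise embedding distance is deterministically bounded in terms of the true shortest path distance, which is the correlation exploited by the joint-distribution objective.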
Thus, we conclude that P-GNNs, which embed nodes using the joint distribution of their distances to common anchor-sets, have more expressive power than existing GNNs. \subsection{Complexity Analysis} Here we discuss the complexity of the neural network computation. In P-GNNs, every node communicates with $O(\log^2 n)$ anchor-sets in a graph with $n$ nodes and $e$ edges. Suppose that on average each anchor-set contains $m$ nodes; then there are $O(mn \log^2n)$ message communications in total. If we follow the exact anchor-set selection strategy, the complexity is $O(n^2 \log^2n)$. In contrast, the number of communications is $O(n+e)$ for existing GNNs. In practice, we observe that the computation can be sped up by using a simplified aggregation $\textsc{Agg}_S$, while only slightly sacrificing predictive performance: for each anchor-set, we only aggregate the message from the node closest to a given node $v$. This removes the factor $m$ from the complexity of P-GNNs, making it $O(n\log^2n)$. We use this implementation in the experiments. \section{Experiments} \begin{table*}[t] \centering \begin{footnotesize} \caption{P-GNNs compared to GNNs on link prediction tasks, measured in ROC AUC. Grid-T and Communities-T refer to the transductive learning setting of Grid and Communities, where one-hot feature vectors are used as node attributes.
Standard deviation errors are given.} \label{tab:link_pred} \begin{tabular}{@{}llllllllll@{}} \toprule & Grid-T & Communities-T & Grid & Communities & PPI \\ \midrule GCN &$0.698 \pm 0.051$ &$0.981 \pm 0.004$ & $0.456 \pm 0.037$ & $0.512 \pm 0.008$ & $0.769 \pm 0.002$ \\ GraphSAGE &$0.682 \pm 0.050$ &$0.978 \pm 0.003$ & $0.532 \pm 0.050$& $0.516 \pm 0.010$ & $0.803 \pm 0.005$ \\ GAT &$0.704 \pm 0.050$ &$0.980 \pm 0.005$ & $0.566 \pm 0.052$& $0.618 \pm 0.025$ & $0.783 \pm 0.004$ \\ GIN &$0.732 \pm 0.050$ &$0.984 \pm 0.005$ & $0.499 \pm 0.054$& $0.692 \pm 0.049$ & $0.782 \pm 0.010$ \\ \midrule P-GNN-F-1L &$0.542 \pm 0.057$ &$0.930 \pm 0.093$ & $0.619 \pm 0.080$& $0.939 \pm 0.083$ & $0.719 \pm 0.027$ \\ P-GNN-F-2L &$0.637 \pm 0.078$ &$\mathbf{0.989} \pm 0.003$ & $0.694 \pm 0.066$& $\mathbf{0.991} \pm 0.003$ & $0.805 \pm 0.003$ \\\midrule P-GNN-E-1L &$0.665 \pm 0.033$ &$0.966 \pm 0.013$ & $0.879 \pm 0.039$& $0.985 \pm 0.005$ & $0.775 \pm 0.029$ \\ P-GNN-E-2L &$\mathbf{0.834} \pm 0.099$ &$0.988 \pm 0.003$ & $\mathbf{0.940} \pm 0.027$& $0.985 \pm 0.008$ & $\mathbf{0.808} \pm 0.003$ \\ \bottomrule \end{tabular} \end{footnotesize} \end{table*} \begin{table}[t] \centering \begin{footnotesize} \caption{Performance on pairwise node classification tasks, measured in ROC AUC. 
Standard deviation errors are given.} \label{tab:community_detect} \resizebox{\columnwidth}{!}{ \begin{tabular}{@{}llll@{}} \toprule & Communities & Email & Protein \\ \midrule GCN & $0.520 \pm 0.025$ & $0.515 \pm 0.019$ & $0.515 \pm 0.002$\\ GraphSAGE & $0.514 \pm 0.028$ & $0.511 \pm 0.016$& $0.520 \pm 0.003$\\ GAT & $0.620 \pm 0.022$ & $0.502 \pm 0.015$ & $0.528 \pm 0.011$\\ GIN & $0.620 \pm 0.102$& $0.545 \pm 0.012$ & $0.523 \pm 0.002$\\ \midrule P-GNN-F-1L & $0.985 \pm 0.008$ & $0.630 \pm 0.019$ & $0.510 \pm 0.010$\\ P-GNN-F-2L & $0.997 \pm 0.006$ & $\mathbf{0.640} \pm 0.037$ & $\mathbf{0.729} \pm 0.176$\\\midrule P-GNN-E-1L & $0.991 \pm 0.013$ & $0.625 \pm 0.058$ & $0.507 \pm 0.006$\\ P-GNN-E-2L & $\mathbf{1.0} \pm 0.001$ & $\mathbf{0.640} \pm 0.029$ & $0.631 \pm 0.175$\\\bottomrule \end{tabular}} \end{footnotesize} \end{table} \subsection{Datasets} We perform experiments on both synthetic and real datasets. We use the following datasets for link prediction tasks: \\ \-\hspace{5mm} $\bullet$ \xhdr{Grid} 2D grid graph representing a $20 \times 20$ grid with $|V|=400$ and no node features. \\ \-\hspace{5mm} $\bullet$ \xhdr{Communities} Connected caveman graph \cite{watts1999networks} with 1\% of edges randomly rewired, consisting of 20 communities of 20 nodes each. \\ \-\hspace{5mm} $\bullet$ \xhdr{PPI} 24 protein-protein interaction networks \cite{zitnik2017predicting}. Each graph has 3000 nodes with average degree 28.8, and each node has a 50-dimensional feature vector. We use the following datasets for pairwise node classification tasks, which include community detection and role equivalence prediction\footnote{Inductive position-aware node classification is not well-defined due to the permutation of labels in different graphs. However, pairwise node classification, which only decides whether nodes are of the same class, is well defined in the inductive setting.}.
\\ \-\hspace{5mm} $\bullet$ \xhdr{Communities} The same as the above-mentioned community dataset, with each node labeled with the community it belongs to. \\ \-\hspace{5mm} $\bullet$ \xhdr{Email} 7 real-world email communication graphs from SNAP \cite{leskovec2007graph} with no node features. Each graph has 6 communities, and each node is labeled with the community it belongs to. \\ \-\hspace{5mm} $\bullet$ \xhdr{Protein} 1113 protein graphs from \cite{borgwardt2005protein}. Each node is labeled with a functional role of the protein and has a 29-dimensional feature vector. \subsection{Experimental setup} Next we evaluate the P-GNN models in both transductive and inductive learning settings. \xhdr{Transductive learning} In the transductive learning setting, the model is trained and tested on a given graph with a fixed node ordering and has to be re-trained whenever the node ordering is changed or a new graph is given. As a result, the model is allowed to augment node attributes with unique one-hot identifiers to differentiate the nodes. Specifically, we follow the experimental setting from \cite{zhang2018link}: we use two sets of 10\% of the existing links, together with an equal number of nonexistent links, as the test and validation sets, and the remaining 80\% of the existing links, together with an equal number of nonexistent links, as the training set. We report the test set performance achieved at the best performance on the validation set, over 10 runs with different random seeds and train/validation splits. \xhdr{Inductive learning} We demonstrate the inductive learning performance of P-GNNs on pairwise node classification tasks, for which it is possible to transfer positional information to a new, unseen graph. In inductive tasks, augmenting node attributes with one-hot identifiers restricts a model's generalization ability, because the model would need to generalize across scenarios where node identifiers can be arbitrarily permuted.
Therefore, when a dataset does not come with node attributes, we only use constant order-invariant node attributes, such as a constant scalar, in our experiments. Original node attributes are used when available. We follow the transductive learning setting to sample links, but only use order-invariant attributes. When multiple graphs are available, we use 80\% of the graphs for training and the remaining graphs for testing. Note that we do not allow the model to observe the test graphs at training time. For the pairwise node classification task, we predict whether a pair of nodes belongs to the same community/class. In this case, a pair of nodes that do not belong to the same community constitutes a negative example. \subsection{Baseline models} So far we have shown that P-GNNs are a family of models distinct from existing GNN models. We therefore compare variants of P-GNNs against the most popular GNN models. To make a fair comparison, all models are set to have a similar number of parameters and are trained for the same number of epochs. We fix the model configurations across all experiments. (Implementation details are provided in the Appendix.) We show that even the simplest P-GNN models can significantly outperform GNN models on many tasks, and designing more expressive P-GNN models is an interesting avenue for future work. \xhdr{GNN variants} We consider 4 variants of GNNs, each with three layers: GCN \cite{kipf2016semi}, GraphSAGE \cite{hamilton2017inductive}, Graph Attention Networks (GAT) \cite{velickovic2017graph} and Graph Isomorphism Network (GIN) \cite{xu2018powerful}. Note that in the context of the link prediction task, our implementation of GCN is equivalent to GAE \cite{kipf2016variational}. \xhdr{P-GNN variants} We consider 2 variants of P-GNNs, either with one layer or two layers (labeled 1L, 2L): (1) P-GNNs using truncated 2-hop shortest path distance (P-GNN-F); (2) P-GNNs using exact shortest path distance (P-GNN-E).
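The only difference between P-GNN-F and P-GNN-E is the distance used when computing messages. A minimal sketch, assuming the $s(v,u) = 1/(d_{sp}(v,u)+1)$ transform introduced with the GCN special case earlier, where the $q$-hop truncation sets $s$ to 0 beyond $q$ hops; the helper names are ours, not the authors' code.

```python
from collections import deque

def bfs_distances(adj, src):
    """Hop distances from src via breadth-first search."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def pgnn_s(adj, v, u, q=None):
    """s(v, u) = 1 / (d_sp(v, u) + 1); with truncation (P-GNN-F style),
    s is 0 whenever the shortest path distance exceeds q hops."""
    d = bfs_distances(adj, v).get(u, float("inf"))
    if q is not None and d > q:
        return 0.0
    return 1.0 / (d + 1.0)

# 5-node path graph 0-1-2-3-4
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(pgnn_s(adj, 0, 4))         # P-GNN-E style, exact: 1/(4+1) = 0.2
print(pgnn_s(adj, 0, 4, q=2))    # P-GNN-F style, truncated at 2: 0.0
print(pgnn_s(adj, 0, 2, q=2))    # within 2 hops: 1/3
```

Truncating at $q=2$ replaces all-pairs shortest paths with a 2-hop BFS per node, which is what makes the "fast" variant cheap.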
\subsection{Results} \xhdr{Link prediction} In link prediction tasks, two nodes are generally more likely to form a link if they are close together in the graph. Therefore, the task can benefit greatly from position-aware embeddings. Table \ref{tab:link_pred} summarizes the performance of P-GNNs and GNNs on link prediction tasks. We observe that P-GNNs significantly outperform GNNs across all datasets and variants of the link prediction task (inductive vs. transductive). P-GNNs perform well in all inductive link prediction settings; for example, they improve the ROC AUC score by up to 66\% over the best GNN model on the Grid dataset. In the transductive setting, P-GNNs and GNNs achieve comparable performance. The explanation is that one-hot encodings of nodes help GNNs memorize node IDs and differentiate symmetric nodes, but at the cost of expensive computation over $O(n)$-dimensional input features and a failure to generalize to unobserved graphs. P-GNNs, on the other hand, can discriminate symmetric nodes by their different distances to anchor-sets, so adding one-hot features does not improve their performance. In addition, we observe that when graphs come with rich features (e.g., the PPI dataset), the performance gain of P-GNNs is smaller, because node features may already capture positional information. Quantifying how much of the positional information is already captured by the input node features is an interesting direction left for future work. Finally, we show that the ``fast'' variant of the P-GNN model (P-GNN-F), which truncates the expensive shortest path distance computation at 2 hops, still achieves comparable results on many datasets. \xhdr{Pairwise node classification} In pairwise node classification tasks, two nodes may belong to different communities but have similar neighbourhood structures; thus GNNs, which focus on learning structure-aware embeddings, will not perform well on these tasks.
Table \ref{tab:community_detect} summarizes the performance of P-GNNs and GNNs on pairwise node classification tasks. The capability of learning position-aware embeddings is crucial on the Communities dataset, where all P-GNN variants nearly perfectly detect the memberships of nodes to communities, while the best GNN can only achieve 0.620 ROC AUC; that is, P-GNNs give a 56\% relative improvement in ROC AUC over GNNs on this task. Similar significant performance gains are observed on the Email and Protein datasets: an 18\% improvement in ROC AUC on Email and a 39\% improvement on the Protein dataset. \section{Conclusion} We propose Position-aware Graph Neural Networks, a new class of Graph Neural Networks for computing node embeddings that incorporate node positional information, while retaining inductive capability and utilizing node features. We show that P-GNNs consistently outperform existing GNNs in a variety of tasks and datasets. \section*{Acknowledgements} \label{sec:ack} This research has been supported in part by Stanford Data Science Initiative, NSF, DARPA, Boeing, Huawei, JD.com, and Chan Zuckerberg Biohub. \section{Appendix} \begin{proof} Proof of Proposition 1. No pair of nodes can have the same position-aware node embeddings; otherwise the two nodes would have distance $0$ in the embedding space although their shortest path distance is at least $1$. Since $f_p(v_i)$ is a function of $f_{s_q}(v_i)$, no pair of nodes can have the same structure-aware node embeddings either. If a pair of nodes had isomorphic $q$-hop neighbourhood graphs, each of their $k$-hop neighbourhoods for $k \leq q$ would be the same; thus the inputs to $f_{s_q}$ would be the same, and the two nodes would have the same structure-aware embeddings, which contradicts the above. Thus no pair of nodes has isomorphic $q$-hop neighbourhood graphs.
Conversely, if no pair of nodes has isomorphic $q$-hop neighbourhood graphs, then there exists an $f_{s_q}$ such that each node has a different structure-aware embedding $f_{s_q}(v_i)$, because the inputs to $f_{s_q}$ are different. Since each node also has a different position-aware embedding $f_p(v_i)$, there exists a one-to-one mapping $g$ that maps each structure-aware embedding to the corresponding position-aware embedding. \end{proof} \begin{proof} Proof of Proposition 2. We prove the statement by constructing a counterexample. Suppose such a function $g$ exists. We construct a new graph $G'$ by first independently constructing the $(q+1)$-hop neighbourhood graph of each node in the original graph $G$ and then taking their union as $G'$. Adding any edge between the neighbourhood graphs does not affect the structure-aware embeddings, yet it changes the shortest path distance between nodes in different neighbourhood graphs. This requires $g$ to map the same input to different outputs, which contradicts the definition of a function; therefore such a $g$ does not exist. \end{proof} \begin{proof} Proof of Proposition 3. If two nodes have isomorphic $\infty$-hop neighbourhood graphs, where $q$ is greater than or equal to the diameter of the graph, then for any anchor $S^l_1$ in $v_i$'s $\infty$-hop neighbourhood graph there exists another anchor $S^l_2$ in $v_j$'s $\infty$-hop neighbourhood graph with $d_{sp}(v_i, S^l_1) = d_{sp}(v_j, S^l_2)$; thus the mapping is bijective. Therefore $d_{sp}(v_i, S^l)$ follows the same distribution and $p(d_{sp}(v_i, S^l)) = p(d_{sp}(v_j, S^l))$. Since the neighbours of $v_i$ and $v_j$ have isomorphic $\infty$-hop neighbourhood graphs as well, the statement holds iteratively. Now suppose that $p(d_{sp}(v_i, S^l)) = p(d_{sp}(v_j, S^l))$ holds iteratively but the $q$-hop neighbourhood graphs of $v_i$ and $v_j$ are not $\infty$-hop isomorphic.
Since both neighbourhood graphs are defined over the same set of nodes, without loss of generality we assume that the non-isomorphism occurs such that the node pair $v_e, v_f$ in $v_i$'s neighbourhood graph is connected while $v_g, v_h$ in $v_j$'s neighbourhood graph is not. Then $p(d_{sp}(v_e, S^l)) \neq p(d_{sp}(v_g, S^l))$, because $v_e$ has one more anchor ($v_f$) at distance one than $v_g$. Since $v_e$ and $v_g$ lie within $v_i$'s and $v_j$'s neighbourhood graphs, respectively, $p(d_{sp}(v_i, S^l)) = p(d_{sp}(v_j, S^l))$ does not hold iteratively, which contradicts the assumption. Thus, the $q$-hop neighbourhood graphs of $v_i$ and $v_j$ are $\infty$-hop isomorphic, which finishes the proof. \end{proof} \section{Proposed Approach} \jiaxuan{A figure for the model} \subsection{Notation and Problem definition} A graph can be represented as $G = (V,E)$, where $V = \{v_1, ..., v_n\}$ is the node set and $E \subseteq \{(v_i,v_j)\,|\,v_i,v_j\in V\}$ is the edge set. When nodes have internal attributes, there may additionally be a node feature set $F = \{f_1, ..., f_n\}$, where $f_i$ is the feature associated with node $v_i$. For notational simplicity, we use $V$ to refer to both $V$ and $F$ if $F$ exists. Usually, predictions on graphs are made via learning a node embedding model. Specifically, a node embedding model can be written as a function $f: V \rightarrow Z$ that maps nodes $V$ to low-dimensional vectors $Z = \{z_1, ..., z_n\}, z_i \in \mathbb{R}^k$. Node-level prediction tasks correspond to a unary objective function $d_y(v_i)$, while edge-level prediction tasks imply a pairwise objective function $d_y(v_i, v_j)$. Learning node embeddings can then be written as an optimization problem, $\min \mathbb{E}_{(v_i,v_j)\sim E_{train}}[L(d_z(z_i,z_j)-d_y(v_i,v_j))]$ for edge-level tasks and $\min \mathbb{E}_{v_i\sim V_{train}}[L(d_z(z_i)-d_y(v_i))]$ for node-level tasks, where $L(\cdot)$ is a loss function.
A common additional requirement is that $d_z(\cdot,\cdot)$ should be an $l_2$ metric, so that the computed embeddings are interpretable and can easily be used for downstream tasks. In this paper, we call node embeddings position-aware if they can be used to recover shortest path distances. This property is crucial for many prediction tasks, such as link prediction and community detection. Specifically, let $d_{sp}(v_i,v_j)$ be the shortest path distance; then $d_{sp}(v_i, v_j) = g(d_z(f(v_i), f(v_j)))$ holds for position-aware node embeddings. We will show in Section \ref{} that GNN-based embeddings cannot recover the shortest path distance between two nodes in practical settings; therefore, GNN models cannot achieve satisfactory performance in the above-mentioned prediction tasks. The above formulation is for transductive learning settings, where models are trained and tested on the same graph. The formulation can be generalized to inductive settings, which are much more practical in real-world problems. In the inductive setting, $G$ is viewed as a sample from a class of graphs, $G \in \mathbb{G}$. The goal is to learn an underlying objective function $d_y$ that is shared among all $G \in \mathbb{G}$, via a shared embedding model $f$ that can take any $G \in \mathbb{G}$ as input. \subsection{Limitations of structure-aware embeddings} We identify two types of node embeddings that are of special interest, which we name structure-aware embeddings and position-aware embeddings. Structure-aware embeddings detect the structural role of a given node and usually involve some approximate graph isomorphism test. Position-aware embeddings reflect homophily properties in the graph \cite{grover2016node2vec}, which are highly related to shortest path distance metrics. Usually, tasks such as community detection and link prediction rely more on position-aware embeddings, while common node classification tasks reflect more of the structure-aware embeddings.
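A small example makes the dichotomy concrete: on a path graph, the two interior nodes one step from each end have identical local structure, so any structure-aware signature agrees on them, yet they have different positions (different shortest path distances to a fixed node). The 1-hop signature below is a hypothetical stand-in for a structure-aware embedding; the graph and helper names are ours.

```python
from collections import deque

def bfs_dist(adj, src):
    """Hop distances from src via breadth-first search."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

# Path graph 0-1-2-3-4-5
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 5] for i in range(6)}

def signature(v):
    """1-hop structural signature: own degree plus sorted neighbour degrees."""
    return (len(adj[v]), tuple(sorted(len(adj[w]) for w in adj[v])))

print(signature(1) == signature(4))   # True: identical local structure
d0 = bfs_dist(adj, 0)
print(d0[1], d0[4])                   # 1 4: different positions
```

Any embedding computed purely from such local signatures assigns nodes 1 and 4 the same vector, so no function of it can recover their different distances to node 0.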
Although both types of embeddings have been used to qualitatively discuss the nature of various network prediction tasks, their relationship is not clearly understood. To enable a deeper analysis, we first define the two types of node embeddings that will be discussed throughout this paper. \begin{definition} A node embedding $z_i = f_p(v_i)$ is position-aware if there exists a function $g(\cdot, \cdot)$ such that $d_{sp}(v_i, v_j) = g(z_i, z_j)$, where $d_{sp}(\cdot, \cdot)$ is the shortest path distance. \end{definition} \begin{definition} A node embedding $z_i = f_s(v_i)$ is structure-aware if it is a function of each hop neighbourhood of node $v_i$. Specifically, $z_i = g(g_1(N_1(v_i)),...,g_q(N_q(v_i)))$, where $N_k(v_i)$ is the set of the $k$-hop neighbourhood of $v_i$, and $g, g_k, k=1,...,q$ can be any functions. \end{definition} Based on these definitions, we point out that there is a dichotomy between structure-aware and position-aware embeddings. Specifically, structure-aware embeddings cannot reproduce position-aware embeddings in most cases. Therefore, when the learning task requires node position information, using only structure-aware embeddings as input features is not sufficient. \begin{proposition} \label{prop:dist_ortho} The shortest path distance $d_{sp}$ is a function of structure-aware embeddings $f_s(v_i)$ if and only if $f_s(v_i)\neq f_s(v_j)$ holds for every pair of nodes. \end{proposition} In real-world datasets, $d_{s_k}(v_i, v_j) = 0$ rarely holds, especially when $k$ is small. Therefore, Proposition 1 shows that position-aware metrics are usually not explainable using structure-aware metrics. \jiaxuan{maybe need another proposition} As is shown in {\textcolor{red}{[CITE]}}, Graph Convolutional Networks (GCNs) are equivalent to an approximate graph isomorphism test, where a $k$-layer GCN corresponds to computing $d_{s_k}$ at best.
Thus, GNN models fail to capture the position-aware component in the objective metric function $d_y(\cdot, \cdot)$. More expressive model is needed to learn the position-aware component in the object. \subsection{Structure-award embeddings via random anchors} \subsubsection{Low-distortion metric embedding method} Discuss Bourgain theorem \begin{theorem} Bourgain theorem \end{theorem} Points out GNN only compute feature based distance. \subsection{Embedding nodes via anchors} Inspiration from the Bourgain Theorem, we further claim that a node can be uniquely identified using position-aware distances to randomly chosen anchors. Specifically, a node $v_i$ can be represented by the distribution $p_{v_i} = d_p(v_i, S^Q)$, where $S^q$ is a randomly chosen anchor with probability $q$. \subsection{DeepBourgain} Inspired by the low-distortion metric embedding, we propose DeepBourgain that is able to generate position-aware node embeddings. Computing DeepBourgain embedding involves three stages \subsubsection{Anchor selection} \subsubsection{Compute Distances} \subsubsection{Anchor aggregation} \subsection{Connection to GNN} GNN are special cases of DeepBourgain. It can be view as DeepBourgain that selects all nodes as anchors, and adopt link prediction distance. \section{Appendix} \begin{proof} Proof of Proposition 1. No pair of nodes have the same position-aware node embeddings, otherwise the two nodes have distance $0$ in the embedding space although their shortest path distance is $1$. Since $f_p(v_i)$ is a function of $f_{s_q}(v_i)$, then no pair of nodes have the same structure-aware node embeddings as well. If a pair of node have isomorphic $q$-hop neighbourhood graph, their each hop neighbour below $q$ is the same, thus the inputs for $f_{s_q}$ are the same, thus they should have the same structure-aware embeddings, which contradicts with the fact. Thus no pair of nodes have isomorphic $q$-hop neighbourhood graph. 
On the reverse side, if no pair of nodes have isomorphic $q$-hop neighbourhood graph, then there exists $f_{s_q}$ such that each node has different structure-aware embeddings $f_{s_q}(v_i)$, because the inputs to $f_{s_q}$ are different. Since each node have different position-aware embeddings $f_p(v_i)$, there exist an one-to-one mapping $g$ that maps each structure-aware embedding to corresponding position-aware embeddings. \end{proof} \begin{proof} Proof of Proposition 2. We propose a constructive proof. Suppose such a function $g$ exist. Then, we construct a new graph $G'$, via first independently construct $q+1$-hop neighbourhood graphs for each of the node in the original graph $G$, then take the union as $G'$. Then, adding any edge between the neighbourhood graphs will not affect the structure-aware embedding method, yet it will change the shortest path distance between nodes in different neighbourhood graphs. This requires $g$ to map the same input to different outputs which conflicts with the definition of a function, therefore such $g$ does not exist. \end{proof} \begin{proof} Proof of Proposition 3. If two nodes have isomorphic $\infty$-hop neighbourhood graphs where $q$ is greater equal than the diameter of the graph, then for any anchor $S^l_1$ in $v_i$'s $\infty$-hop neighbourhood graph, there exists another anchor $S^l_2$ in $v_j$'s $\infty$-hop neighbourhood graph with $d_{sp}(v_i, S^l_1) = d_{sp}(v_j, S^l_2)$, thus the mapping is bijective. Therefore $d_{sp}(v_i, S^l)$ follows the same distribution and $p(d_{sp}(v_i, S^l)) = p(d_{sp}(v_j, S^l))$. Since the neighbours of $v_i$ and $v_j$ have isomorphic $\infty$-hop neighbourhood graphs as well, the statement holds true iteratively. If $p(d_{sp}(v_i, S^l)) = p(d_{sp}(v_j, S^l))$ iteratively, and the $q$-hop neighbourhood graphs of $v_i$ and $v_j$ are not $\infty$-hop isomorphic. 
Since the both neighbourhood graphs are defined over the same set of nodes, with out loss of generality, we assume that the non-isomorphism occurs such that node pairs $v_e, v_f$ in $v_i$'s neighbourhood graph are connected and $v_g, v_h$ in $v_j$'s neighbourhood graph are not. Then, $p(d_{sp}(v_e, S^l)) \neq p(d_{sp}(v_g, S^l))$ because $v_e$ has one more anchor ($v_f$) with distance one compared with $v_g$. Since $v_e$ and $v_g$ are within $v_i$ and $v_j$ neighbourhood graph respectively, $p(d_{sp}(v_i, S^l)) = p(d_{sp}(v_j, S^l))$ does not hold true iteratively, which conflicts with the assumption. Thus, the $q$-hop neighbourhood graphs of $v_i$ and $v_j$ are $\infty$-hop isomorphic, which finishes the proof. \end{proof} \section{Proposed Approach} \jiaxuan{A figure for the model} \subsection{Notation and Problem definition} A graph can be represented as $G = (V,E)$, where $V = \{v_1, ..., v_n\}$ is the node set, $E=\{(v_i,v_j)|v_i,v_j\in V\}$ is the edge set. When a node has internal attributes, there may exist the node feature set $F = \{f_1, ..., f_n\}$ where $f_i$ is the feature associated with node $v_i$. For notation simplicity, we use $V$ to refer to both $V$ and $F$ if $F$ exists. Usually, predictions on graphs are made via learning a node embedding model. Specifically, a node embedding model can be written as a function $f: V \rightarrow Z$ that maps nodes $V$ to low-dimensional vectors $Z = \{z_1, ..., z_n\}, z_i \in \mathbb{R}^k$. Node-level prediction tasks correspond to a unary objective function $d_y(v_i)$, while edge-level prediction tasks imply a pairwise objective function $d_y(v_i, v_j)$. Learning node embedding can then be written as an optimization problem, $\min \mathbb{E}_{(v_i,v_j)\sim E_{train}}[L(d_z(z_i,z_j)-d_y(v_i,v_j))]$ for edge-level tasks, and $\min \mathbb{E}_{v_i\sim V_{train}}[L(d_z(z_i)-d_y(v_i))]$ for node-level tasks, where $L(\cdot, \cdot)$ is a loss function. 
A common additional requirement is that $d_z(\cdot,\cdot)$ should be $l_2$ metric, so that the computed embeddings are interpretable and can be easily used for downstream tasks. In this paper, we call the node embeddings to be position-aware, if the node embeddings can be used to recover the shortest path distance. This property is crucial for many prediction tasks, such as link prediction and community detection. Specifically, let $d_{sp}(v_i,v_j)$ be the shortest path, then $d_{sp}(v_i, v_j) = g(d_z(f(v_i), f(v_j)))$ holds for position-aware node embeddings. We will show in Section \ref{} that GNN-based embeddings cannot recover the shortest path distance between two nodes in practical settings; therefore, GNN models cannot achieve satisfactory performance in the above-mentioned prediction tasks. The above formulation is for transductive learning settings, where models are trained and tested on the same graph. The formulation can be generalized to inductive settings that is much more practical in real-world problems. In the inductive setting, $G$ is viewed as a sample from a class of graph $G \in \mathbb{G}$. The goal is to learn an underlying objective function $d_y$ that is share among any $G \in \mathbb{G}$, via a shared embedding model $f$ that can take any $G \in \mathbb{G}$ as input. \subsection{Limitations of structure-aware embeddings} We identify two types of node embeddings that are of special interests, which we name as structure-aware embeddings and position-aware embeddings. Structure-aware embeddings detect the structural role for a given node, usually involve some approximate graph isomorphism test. Position-aware embeddings reflect homophily properties in the graph \cite{grover2016node2vec}, which is highly related to the shortest path distance metrics. Usually, tasks such as community detection and link prediction require more of position-aware embeddings, while common node classification tasks reflect more of position-aware distances. 
Although both types of embeddings have been used to qualitatively discuss the nature of various network prediction tasks, their relationships are not clearly understood. To take deeper analysis, we first define the two types of node embeddings that will be discussed throughout this paper. \begin{definition} A node embedding $z_i = f_p(v_i)$ is position-aware if there exists function $g(\cdot, \cdot)$ such that $d_{sp}(v_i, v_j) = g(z_i, z_j)$, where $d_{sp}(\cdot, \cdot)$ is the shortest path distance. \end{definition} \begin{definition} A node embedding $z_i = f_s(v_i)$ is structure-aware if it is a function of each hop neighbourhood of node $v_i$. Specifically, $z_i = g(g_1(N_1(v_i)),...,g_q(N_q(v_i)))$, where $N_k(v_i)$ is the set of the $k$-hop neighbourhood of $v_i$, and $g, g_k, k=1,...,q$ can be any functions. \end{definition} Based on the definitions, we point out that there is a dichotomy between structure-aware and position-aware embeddings. Specifically, structure-aware embeddings cannot reproduce position-aware embeddings in most cases. Therefore, when the learning task requires node position information, only using structure-aware embeddings as input features is not sufficient. \begin{proposition} \label{prop:dist_ortho} The shortest path distance $d_{sp}$ is a function of structure-aware embeddings $f_s(v_i)$, if and only $f_s(v_i)\neq f_s(v_j)$ holds for any pair of nodes. \end{proposition} In real world datasets, $d_{s_k}(v_i, v_j) = 0$ rarely holds, especially when $k$ is small. Therefore, Proposition 1 shows that position-aware metrics is usually not explainable by using structural-aware metrics. \jiaxuan{maybe need another proposition} As is shown in {\textcolor{red}{[CITE]}}, Graph Convolutional Networks (GCNs) are equivalent of doing approximate graph isomorphism test, where $k$-layer GCNs corresponding to computing $d_{s_k}$ at best. 
Thus, GNN models fail to capture the position-aware component in the objective metric function $d_y(\cdot, \cdot)$. More expressive model is needed to learn the position-aware component in the object. \subsection{Structure-award embeddings via random anchors} \subsubsection{Low-distortion metric embedding method} Discuss Bourgain theorem \begin{theorem} Bourgain theorem \end{theorem} Points out GNN only compute feature based distance. \subsection{Embedding nodes via anchors} Inspiration from the Bourgain Theorem, we further claim that a node can be uniquely identified using position-aware distances to randomly chosen anchors. Specifically, a node $v_i$ can be represented by the distribution $p_{v_i} = d_p(v_i, S^Q)$, where $S^q$ is a randomly chosen anchor with probability $q$. \subsection{DeepBourgain} Inspired by the low-distortion metric embedding, we propose DeepBourgain that is able to generate position-aware node embeddings. Computing DeepBourgain embedding involves three stages \subsubsection{Anchor selection} \subsubsection{Compute Distances} \subsubsection{Anchor aggregation} \subsection{Connection to GNN} GNN are special cases of DeepBourgain. It can be view as DeepBourgain that selects all nodes as anchors, and adopt link prediction distance. \section{Conclusion} We propose Position-aware Graph Neural Networks, a new class of Graph Neural Networks for computing node embeddings that incorporate node positional information, while retaining inductive capability and utilizing node features. We show that P-GNNs consistently outperform existing GNNs in a variety of tasks and datasets. \section*{Acknowledgements} \label{sec:ack} This research has been supported in part by Stanford Data Science Initiative, NSF, DARPA, Boeing, Huawei, JD.com, and Chan Zuckerberg Biohub. \section{Experiments} \begin{table*}[t] \centering \begin{footnotesize} \caption{P-GNNs compared to GNNs on link prediction tasks, measured in ROC AUC. 
Grid-T and Communities-T refer to the transductive learning setting of Grid and Communities, where one-hot feature vectors are used as node attributes. Standard deviation errors are given.} \label{tab:link_pred} \begin{tabular}{@{}llllll@{}} \toprule & Grid-T & Communities-T & Grid & Communities & PPI \\ \midrule GCN &$0.698 \pm 0.051$ &$0.981 \pm 0.004$ & $0.456 \pm 0.037$ & $0.512 \pm 0.008$ & $0.769 \pm 0.002$ \\ GraphSAGE &$0.682 \pm 0.050$ &$0.978 \pm 0.003$ & $0.532 \pm 0.050$& $0.516 \pm 0.010$ & $0.803 \pm 0.005$ \\ GAT &$0.704 \pm 0.050$ &$0.980 \pm 0.005$ & $0.566 \pm 0.052$& $0.618 \pm 0.025$ & $0.783 \pm 0.004$ \\ GIN &$0.732 \pm 0.050$ &$0.984 \pm 0.005$ & $0.499 \pm 0.054$& $0.692 \pm 0.049$ & $0.782 \pm 0.010$ \\ \midrule P-GNN-F-1L &$0.542 \pm 0.057$ &$0.930 \pm 0.093$ & $0.619 \pm 0.080$& $0.939 \pm 0.083$ & $0.719 \pm 0.027$ \\ P-GNN-F-2L &$0.637 \pm 0.078$ &$\mathbf{0.989} \pm 0.003$ & $0.694 \pm 0.066$& $\mathbf{0.991} \pm 0.003$ & $0.805 \pm 0.003$ \\\midrule P-GNN-E-1L &$0.665 \pm 0.033$ &$0.966 \pm 0.013$ & $0.879 \pm 0.039$& $0.985 \pm 0.005$ & $0.775 \pm 0.029$ \\ P-GNN-E-2L &$\mathbf{0.834} \pm 0.099$ &$0.988 \pm 0.003$ & $\mathbf{0.940} \pm 0.027$& $0.985 \pm 0.008$ & $\mathbf{0.808} \pm 0.003$ \\ \bottomrule \end{tabular} \end{footnotesize} \end{table*} \begin{table}[t] \centering \begin{footnotesize} \caption{Performance on pairwise node classification tasks, measured in ROC AUC.
Standard deviation errors are given.} \label{tab:community_detect} \resizebox{\columnwidth}{!}{ \begin{tabular}{@{}llll@{}} \toprule & Communities & Email & Protein \\ \midrule GCN & $0.520 \pm 0.025$ & $0.515 \pm 0.019$ & $0.515 \pm 0.002$\\ GraphSAGE & $0.514 \pm 0.028$ & $0.511 \pm 0.016$& $0.520 \pm 0.003$\\ GAT & $0.620 \pm 0.022$ & $0.502 \pm 0.015$ & $0.528 \pm 0.011$\\ GIN & $0.620 \pm 0.102$& $0.545 \pm 0.012$ & $0.523 \pm 0.002$\\ \midrule P-GNN-F-1L & $0.985 \pm 0.008$ & $0.630 \pm 0.019$ & $0.510 \pm 0.010$\\ P-GNN-F-2L & $0.997 \pm 0.006$ & $\mathbf{0.640} \pm 0.037$ & $\mathbf{0.729} \pm 0.176$\\\midrule P-GNN-E-1L & $0.991 \pm 0.013$ & $0.625 \pm 0.058$ & $0.507 \pm 0.006$\\ P-GNN-E-2L & $\mathbf{1.0} \pm 0.001$ & $\mathbf{0.640} \pm 0.029$ & $0.631 \pm 0.175$\\\bottomrule \end{tabular}} \end{footnotesize} \end{table} \subsection{Datasets} We perform experiments on both synthetic and real datasets. We use the following datasets for the link prediction task: \\ \-\hspace{5mm} $\bullet$ \xhdr{Grid} A 2D grid graph representing a 20 $\times$ 20 grid with $|V|=$ 400 and no node features. \\ \-\hspace{5mm} $\bullet$ \xhdr{Communities} A connected caveman graph \cite{watts1999networks} with 1\% of its edges randomly rewired, consisting of 20 communities of 20 nodes each. \\ \-\hspace{5mm} $\bullet$ \xhdr{PPI} 24 protein-protein interaction networks \cite{zitnik2017predicting}. Each graph has 3000 nodes with an average degree of 28.8, and each node has a 50-dimensional feature vector. We use the following datasets for pairwise node classification tasks, which include community detection and role equivalence prediction\footnote{Inductive position-aware node classification is not well-defined due to the permutation of labels across different graphs. However, pairwise node classification, which only decides whether two nodes are of the same class, is well defined in the inductive setting.}.
\\ \-\hspace{5mm} $\bullet$ \xhdr{Communities} The same as the above-mentioned community dataset, with each node labeled with the community it belongs to. \\ \-\hspace{5mm} $\bullet$ \xhdr{Email} 7 real-world email communication graphs from SNAP \cite{leskovec2007graph} with no node features. Each graph has 6 communities, and each node is labeled with the community it belongs to. \\ \-\hspace{5mm} $\bullet$ \xhdr{Protein} 1113 protein graphs from \cite{borgwardt2005protein}. Each node is labeled with a functional role of the protein and has a 29-dimensional feature vector. \subsection{Experimental setup} Next, we evaluate the P-GNN model in both transductive and inductive learning settings. \xhdr{Transductive learning} In the transductive learning setting, the model is trained and tested on a given graph with a fixed node ordering, and has to be re-trained whenever the node ordering is changed or a new graph is given. As a result, the model is allowed to augment node attributes with unique one-hot identifiers to differentiate nodes. Specifically, we follow the experimental setting from \cite{zhang2018link}: two sets of 10\% of the existing links, each paired with an equal number of nonexistent links, serve as the test and validation sets, while the remaining 80\% of the existing links and an equal number of nonexistent links are used as the training set. We report the test set performance achieved at the best validation performance, and report results over 10 runs with different random seeds and train/validation splits. \xhdr{Inductive learning} We demonstrate the inductive learning performance of P-GNNs on pairwise node classification tasks, for which it is possible to transfer positional information to a new, unseen graph. In particular, for inductive tasks, augmenting node attributes with one-hot identifiers restricts a model's generalization ability, because the model would need to generalize across scenarios where node identifiers can be arbitrarily permuted.
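The split just described (10\% test and 10\% validation positive links, each part paired with an equal number of sampled non-links, the remaining 80\% for training) can be sketched as follows. This is an illustrative Python sketch with hypothetical names, not the authors' implementation:

```python
import random

def link_split(edges, num_nodes, val_frac=0.1, test_frac=0.1, seed=0):
    """Split existing links into train/val/test sets and pair each set
    with an equal number of sampled non-links (negative examples)."""
    rng = random.Random(seed)
    edges = list(edges)
    rng.shuffle(edges)
    n_test, n_val = int(len(edges) * test_frac), int(len(edges) * val_frac)
    test = edges[:n_test]
    val = edges[n_test:n_test + n_val]
    train = edges[n_test + n_val:]

    existing = {frozenset(e) for e in edges}

    def negatives(k):
        # rejection-sample k node pairs that are not existing links
        out = set()
        while len(out) < k:
            u, v = rng.randrange(num_nodes), rng.randrange(num_nodes)
            if u != v and frozenset((u, v)) not in existing:
                out.add((u, v))
        return list(out)

    return {name: (pos, negatives(len(pos)))
            for name, pos in (("train", train), ("val", val), ("test", test))}
```

The rejection loop assumes the graph is sparse enough that non-links are easy to find; a near-complete graph would need a different negative sampler.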
Therefore, when the dataset does not come with node attributes, we only consider using constant order-invariant node attributes, such as a constant scalar, in our experiments. Original node attributes are used when available. We follow the transductive learning setting to sample links, but only use order-invariant attributes. When multiple graphs are available, we use 80\% of the graphs for training and the remaining graphs for testing. Note that we do not allow the model to observe the test graphs at training time. For the pairwise node classification task, we predict whether a pair of nodes belongs to the same community/class; a pair of nodes that do not belong to the same community constitutes a negative example. \subsection{Baseline models} So far we have shown that P-GNNs are a family of models distinct from existing GNN models. Therefore, we compare variants of P-GNNs against the most popular GNN models. To make the comparison fair, all models are set to have a similar number of parameters and are trained for the same number of epochs, and we fix model configurations across all experiments. (Implementation details are provided in the Appendix.) We show that even the simplest P-GNN models can significantly outperform GNN models in many tasks, and designing more expressive P-GNN models is an interesting avenue for future work. \xhdr{GNN variants} We consider 4 variants of GNNs, each with three layers: GCN \cite{kipf2016semi}, GraphSAGE \cite{hamilton2017inductive}, Graph Attention Networks (GAT) \cite{velickovic2017graph} and Graph Isomorphism Network (GIN) \cite{xu2018powerful}. Note that in the context of the link prediction task, our implementation of GCN is equivalent to GAE \cite{kipf2016variational}. \xhdr{P-GNN variants} We consider 2 variants of P-GNNs, each with one or two layers (labeled 1L, 2L): (1) P-GNNs using truncated 2-hop shortest path distance (P-GNN-F); and (2) P-GNNs using exact shortest path distance (P-GNN-E).
\subsection{Results} \xhdr{Link prediction} In link prediction tasks, two nodes are generally more likely to form a link if they are close together in the graph; the task can therefore benefit greatly from position-aware embeddings. Table \ref{tab:link_pred} summarizes the performance of P-GNNs and GNNs on link prediction. We observe that P-GNNs significantly outperform GNNs across all datasets and variants of the link prediction task (inductive vs. transductive). P-GNNs perform well in all inductive link prediction settings, for example improving the ROC AUC score by up to 66\% over the best GNN model on the Grid dataset. In the transductive setting, P-GNNs and GNNs achieve comparable performance. The explanation is that one-hot encodings of nodes help GNNs memorize node IDs and differentiate symmetric nodes, but at the cost of expensive computation over $O(n)$-dimensional input features and a failure to generalize to unobserved graphs. On the other hand, P-GNNs can discriminate symmetric nodes by their different distances to anchor-sets, and thus adding one-hot features does not help their performance. In addition, we observe that when graphs come with rich features (e.g., the PPI dataset), the performance gain of P-GNNs is smaller, because node features may already capture positional information. Quantifying how much of the positional information is already captured by the input node features is an interesting direction left for future work. Finally, we show that the ``fast'' variant of the P-GNN model (P-GNN-F), which truncates the expensive shortest path distance computation at 2 hops, still achieves comparable results on many datasets. \xhdr{Pairwise node classification} In pairwise node classification tasks, two nodes may belong to different communities yet have similar neighbourhood structures; GNNs, which focus on learning structure-aware embeddings, therefore do not perform well in these tasks.
Table \ref{tab:community_detect} summarizes the performance of P-GNNs and GNNs on pairwise node classification tasks. The capability of learning position-aware embeddings is crucial on the Communities dataset, where all P-GNN variants nearly perfectly detect the memberships of nodes to communities, while the best GNN only achieves 0.620 ROC AUC; P-GNNs thus give a 56\% relative improvement in ROC AUC over GNNs on this task. Similarly significant performance gains are observed on the Email and Protein datasets: an 18\% improvement in ROC AUC on Email and a 39\% improvement on Protein. \section{Preliminaries} \begin{figure*} \centering \includegraphics[width=\textwidth]{figs/PGNN.png} \caption{P-GNN architecture. P-GNN first samples multiple anchor-sets $S = \{S_1, S_2, S_3\}$ of different sizes (\textbf{Left}). Then, position-aware node embeddings $\mathbf{z}_{v_i}$ are computed via messages $M_{v_i}$ between a given node $v_i$ and the anchor-sets $S_i$, which are shared across all the nodes (\textbf{Middle}). To compute the embedding $\mathbf{z}_{v_1}$ for node $v_1$, one layer of P-GNN first computes messages via function $F$ and then aggregates them via a learnable function $\textsc{Agg}_M$ over the nodes in each anchor-set $S_i$ to obtain a matrix of anchor-set messages $\mathbf{M}_{v_1}$. The message matrix $\mathbf{M}_{v_1}$ is then further aggregated using a learnable function $\textsc{Agg}_S$ to obtain node $v_1$'s message $\mathbf{h}_{v_1}$ that can be passed to the next layer of P-GNN. At the same time, a learned vector $\mathbf{w}$ is used to reduce $\mathbf{M}_{v_1}$ into a fixed-size position-aware embedding $\mathbf{z}_{v_1}$, which is the output of the P-GNN (\textbf{Right}).
} \label{fig:PGNN} \end{figure*} \subsection{Notation and Problem Definition} A graph can be represented as $G = (\mathcal{V},\mathcal{E})$, where $\mathcal{V} = \{v_1, ..., v_n\}$ is the node set and $\mathcal{E}$ is the edge set. In many applications where nodes have attributes, we augment $G$ with the node feature set $\mathcal{X} = \{\mathbf{x}_1, ..., \mathbf{x}_n\}$, where $\mathbf{x}_i$ is the feature vector associated with node $v_i$. Predictions on graphs are made by first embedding nodes into a low-dimensional space, which is then fed into a classifier, potentially in an end-to-end fashion. Specifically, a node embedding model can be written as a function $f: \mathcal{V} \rightarrow \mathcal{Z}$ that maps nodes $\mathcal{V}$ to $d$-dimensional vectors $\mathcal{Z} = \{\mathbf{z}_1, ..., \mathbf{z}_n\}, \mathbf{z}_i \in \mathbb{R}^d$. \subsection{Limitations of Structure-aware Embeddings} \label{sc:limitation_structure_aware} \cut{ We identify two key properties of node embeddings that are of special interest in applications, which we name \textit{structure-aware} and \textit{position-aware} embeddings. Structure-aware embeddings detect the structural role of a given node, usually involving an approximate graph isomorphism test. Position-aware embeddings reflect homophily properties in the graph \cite{grover2016node2vec}, which are highly related to shortest path distance metrics. Usually, tasks such as community detection and link prediction benefit more from position-aware embeddings, while common node classification tasks reflect more of the structure-aware distances {\textcolor{red}{[CITE]}}. Although both types of embeddings have been used to qualitatively discuss the nature of various network prediction tasks, their relationship is not clearly understood. } Our goal is to learn embeddings that capture the local network structure as well as retain the global network position of a given node.
We call node embeddings {\em position-aware} if the embeddings of two nodes can be used to (approximately) recover their shortest path distance in the network. This property is crucial for many prediction tasks, such as link prediction and community detection. We show below that GNN-based embeddings cannot recover shortest path distances between nodes, which may lead to suboptimal performance in tasks where such information is needed. \begin{definition} A node embedding $\mathbf{z}_i = f_p(v_i), \forall v_i \in \mathcal{V}$ is position-aware if there exists a function $g_p(\cdot, \cdot)$ such that $d_{sp}(v_i, v_j) = g_p(\mathbf{z}_i, \mathbf{z}_j)$, where $d_{sp}(\cdot, \cdot)$ is the shortest path distance in $G$. \end{definition} \begin{definition} A node embedding $\mathbf{z}_i = f_{s_q}(v_i), \forall v_i \in \mathcal{V}$ is structure-aware if it is a function of up to the $q$-hop network neighbourhood of node $v_i$. Specifically, $\mathbf{z}_i = g_s(N_1(v_i),...,N_q(v_i))$, where $N_k(v_i)$ is the set of nodes $k$ hops away from node $v_i$, and $g_s$ can be any function. \end{definition} For example, most graph neural networks compute node embeddings by aggregating information from each node's $q$-hop neighbourhood and are thus structure-aware. In contrast, (long) random-walk-based embeddings such as DeepWalk and Node2Vec are position-aware, since their objective functions force nodes that are close in shortest path distance to also be close in the embedding space. In general, structure-aware embeddings cannot be mapped to position-aware embeddings.
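As a concrete illustration, consider two triangles joined by a single edge. Weisfeiler-Lehman refinement (used here as a representative structure-aware computation; the toy graph and the Python sketch are ours, purely for illustration) assigns the two mirrored nodes identical embeddings, even though their shortest path distance is 3, so no function of the two embeddings alone can recover that distance:

```python
# Two triangles {0,1,2} and {3,4,5} joined by the single edge (0,3):
# nodes 1 and 4 have isomorphic q-hop neighbourhoods for every q.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (0, 3)]
adj = {v: set() for v in range(6)}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

def wl_colors(rounds):
    # Weisfeiler-Lehman refinement: a canonical structure-aware embedding
    color = {v: 0 for v in adj}
    for _ in range(rounds):
        sig = {v: (color[v], tuple(sorted(color[u] for u in adj[v]))) for v in adj}
        ids = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        color = {v: ids[sig[v]] for v in adj}
    return color

def bfs_dist(src):
    # exact shortest path distances from src via breadth-first search
    dist, frontier, d = {src: 0}, [src], 0
    while frontier:
        d += 1
        frontier = [u for f in frontier for u in adj[f] if u not in dist]
        for u in frontier:
            dist[u] = d
    return dist

# Identical structure-aware embeddings, yet distance 3 apart: any g(z_1, z_4)
# receives the same inputs as g(z_1, z_1), so it cannot return both 0 and 3.
assert wl_colors(4)[1] == wl_colors(4)[4]
assert bfs_dist(1)[4] == 3
```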
Therefore, when the learning task requires node positional information, using only structure-aware embeddings as input is not sufficient: \begin{proposition} \label{prop:emb_dichoto} There exists a mapping $g$ that maps structure-aware embeddings $f_{s_q}(v_i), \forall v_i \in \mathcal{V}$ to position-aware embeddings $f_p(v_i), \forall v_i \in \mathcal{V}$, if and only if no pair of nodes has isomorphic local $q$-hop neighbourhood graphs. \end{proposition} Proposition \ref{prop:emb_dichoto} is proved in the Appendix. The proof is based on identifiability arguments similar to the proof of Theorem 1 in \cite{hamilton2017inductive}, and also explains why in some cases GNNs may perform well in tasks requiring positional information. However, in real-world graphs such as molecules and social networks, structural equivalences between nodes' local neighbourhood graphs are quite common, making it hard for GNNs to distinguish different nodes. Furthermore, the mapping $g$ essentially memorizes the shortest path distance between each pair of structure-aware node embeddings whose local neighbourhoods are unique. Therefore, even if a GNN perfectly learns the mapping $g$, it cannot generalize the mapping to new graphs. \cut{ We concretely state the limitation of structure-aware node embeddings in the inductive setting in Proposition \ref{prop:emb_dichoto_inductive} (proof in the Appendix). \begin{proposition} \label{prop:emb_dichoto_inductive} There does not exist a general mapping $g$ that maps from structure-aware embeddings $f_{s_q}(v_i)$ to the shortest path distance $d_{sp}$ for any given graph. \end{proposition} } \cut{ Finally, we point out that while inductive position-aware link-prediction tasks are well-defined and prevalent, inductive position-aware node classification tasks are in fact ill-defined\footnote{Note that position-aware node classification tasks are well defined in the transductive setting}.
In other words, the predictions for position-aware node classification tasks only need to match the node labels \emph{up to a permutation of the labels}, which is equivalent to link prediction where the links represent the equivalence relation between nodes, and the labels form the quotient set of $V$. \begin{proposition} \label{prop:node_transform_edge} Inductive position-aware node labels are ill-defined: there does not exist a position-aware node labeling function for a dataset of multiple graphs such that the labeling function remains consistent for each input node after permutations of the graphs. \end{proposition} } \section{Proposed Approach} In this section, we first describe the P-GNN\xspace framework that extends GNNs to learn position-aware node embeddings. We then discuss our model design choices. Last, we theoretically show how P-GNNs generalize existing GNNs and learn position-aware embeddings. \subsection{The Framework of P-GNNs} We propose Position-aware Graph Neural Networks, which generalize Graph Neural Networks based on two key insights. First, when computing the node embedding, instead of only aggregating messages computed from a node's local network neighbourhood, P-GNNs \textit{aggregate messages from anchor-sets}, which are randomly chosen subsets of all the nodes (Figure~\ref{fig:PGNN}, left). Note that anchor-sets are resampled every time the model is run forward. Second, when performing message aggregation, instead of letting each node aggregate information independently, the aggregation is \textit{coupled across all the nodes} in order to distinguish nodes with different positions in the network (Figure~\ref{fig:PGNN}, middle). We design P-GNNs such that each node embedding dimension corresponds to messages computed with respect to one anchor-set, which makes the computed node embeddings position-aware (Figure~\ref{fig:PGNN}, right).
P-GNNs contain the following key components:\\ \-\hspace{5mm} $\bullet$ $k$ anchor-sets $S_i$ of different sizes.\\ \-\hspace{5mm} $\bullet$ Message computation function $F$ that combines feature information of two nodes with their network distance.\\ \-\hspace{5mm}$\bullet$ Matrix $\mathbf{M}$ of anchor-set messages, where each row $i$ is an anchor-set message $\mathcal{M}_i$ computed by $F$.\\ \-\hspace{5mm}$\bullet$ Trainable aggregation functions $\textsc{Agg}_M$, $\textsc{Agg}_S$ that aggregate/transform feature information of the nodes in the anchor-set and then also aggregate it across the anchor-sets.\\ \-\hspace{5mm}$\bullet$ Trainable vector $\mathbf{w}$ that projects message matrix $\mathbf{M}$ to a lower-dimensional embedding space $\mathbf{z} \in \mathbb{R}^k$. Algorithm \ref{alg:pgnn} summarizes the general framework of P-GNNs. A P-GNN consists of multiple P-GNN layers. Concretely, the $l^\text{th}$ P-GNN layer first samples $k$ random anchor-sets $S_i$. Then, the $i^{\text{th}}$ dimension of the output node embedding $\mathbf{z}_{v}$ represents messages computed with respect to the $i^{\text{th}}$ anchor-set $S_i$. Each dimension of the embedding is obtained by first computing the message from each node in the anchor-set via message computation function $F$, then applying a message aggregation function $\textsc{Agg}_M$, and finally applying a non-linear transformation to get a scalar via weights $\mathbf{w}\in\mathbb{R}^{r}$ and non-linearity $\sigma$. Specifically, the message from each node includes distances that reveal node positions as well as feature-based information from input node features. The message aggregation functions are the same class of functions as used by existing GNNs. We further elaborate on the design choices in Section \ref{sc:design_choices}. 
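A single layer of this procedure can be sketched in a few lines of Python. This is an illustrative sketch only: $F$ is taken to be the distance transform $s(v,u)$ times concatenated features, both aggregations are means, and $\sigma$ is a sigmoid; a practical implementation would add a learned linear map so that $\mathbf{h}_v$ keeps a fixed width across layers.

```python
import math

def pgnn_layer(adj, h, anchor_sets, w, q=2):
    """One P-GNN layer (sketch): per node, one message row per anchor-set,
    reduced by w + sigmoid into one embedding dimension of z, and
    mean-pooled across anchor-sets into the next layer's input h_next."""
    n, k = len(h), len(anchor_sets)

    def qhop(src):
        # breadth-first search truncated at q hops
        dist, frontier = {src: 0}, [src]
        for d in range(1, q + 1):
            frontier = [u for f in frontier for u in adj[f] if u not in dist]
            for u in frontier:
                dist[u] = d
        return dist

    z, h_next = [], []
    for v in range(n):
        dv = qhop(v)
        rows = []
        for S in anchor_sets:
            # F(v, u) = s(v, u) * concat(h_v, h_u), with s = 1 / (d + 1)
            msgs = [[(1.0 / (dv.get(u, math.inf) + 1.0)) * x for x in h[v] + h[u]]
                    for u in S]
            rows.append([sum(c) / len(S) for c in zip(*msgs)])    # Agg_M = mean
        z.append([1.0 / (1.0 + math.exp(-sum(a * b for a, b in zip(r, w))))
                  for r in rows])                                 # sigma(M_v . w)
        h_next.append([sum(c) / k for c in zip(*rows)])           # Agg_S = mean
    return z, h_next
```

Note how symmetric nodes still receive different rows whenever their distances to a shared anchor-set differ, which is exactly what makes $\mathbf{z}_v$ position-aware.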
\xhdr{P-GNNs are position-aware} The output embeddings $\mathbf{z}_v$ are position-aware, as each dimension of the embedding encodes the necessary information to distinguish structurally equivalent nodes that reside in different parts of the graph. Note that if we permute the dimensions of all the node embeddings $\mathbf{z}_v$, the resulting embeddings are equivalent to the original embeddings because they carry the same node positional information with respect to (permuted order of) anchor-sets $\{S_i\}$. Multiple P-GNN layers can be naturally stacked to achieve higher expressive power. Note that unlike GNNs, we cannot feed the output embeddings $\mathbf{z}_v$ from the previous layer to the next layer, because the dimensions of $\mathbf{z}_v$ can be arbitrarily permuted; therefore, applying a fixed non-linear transformation over this representation is problematic. The deeper reason we cannot feed $\mathbf{z}_v$ to the next layer is that the position of a node is always \textit{relative} to the chosen anchor-sets; thus, canonical position-aware embeddings do not exist. Therefore, P-GNNs also compute structure-aware messages $\mathbf{h}_{v}$, which are computed via an order-invariant message aggregation function that aggregates messages \textit{across anchor-sets}, and are then fed into the next P-GNN layer as input. \begin{algorithm}[h!] 
\caption{The framework of P-GNNs} \label{alg:pgnn} \begin{algorithmic} \STATE {\bfseries Input:} Graph $G=(\mathcal{V},\mathcal{E})$; Set $S$ of $k$ anchor-sets $\{S_i\}$; Node input features $\{\mathbf{x}_v\}$; Message computation function $F$ that outputs an $r$ dimensional message; Message aggregation functions $\textsc{Agg}_M, \textsc{Agg}_S$; Trainable weight vector $\mathbf{w}\in\mathbb{R}^{r}$; Non-linearity $\sigma$; Layer $l \in [1,L]$ \STATE {\bfseries Output:} Position-aware embedding $\mathbf{z}_v$ for every node $v$ \STATE $\mathbf{h}_v \leftarrow \mathbf{x}_v$ \FOR{$l=1,\dots,L$} \STATE $S_i \sim \mathcal{V}$\- for $i = 1,\dots,k$ \FOR{$v \in \mathcal{V}$} \STATE $\mathbf{M}_v = \mathbf{0} \in \mathbb{R}^{k\times r}$ \FOR{$i = 1\dots,k$} \STATE $\mathcal{M}_i \leftarrow \{F(v,u,\mathbf{h}_v,\mathbf{h}_u), \forall u \in S_i\}$ \STATE $\mathbf{M}_v[i] \leftarrow \textsc{Agg}_M(\mathcal{M}_i)$ \ENDFOR \STATE $\mathbf{z}_{v} \leftarrow \sigma(\mathbf{M}_v \cdot \mathbf{w})$ \STATE $\mathbf{h}_{v} \leftarrow \textsc{Agg}_S(\{\mathbf{M}_v[i], \forall i \in [1,k]\})$ \ENDFOR \ENDFOR \STATE $\mathbf{z}_v \in \mathbb{R}^k$, $\forall v \in \mathcal{V}$ \end{algorithmic} \end{algorithm} \subsection{Anchor-set Selection} \label{sc:anchor_selection} We rely on Bourgain's Theorem to guide the choice of anchor-sets, such that the resulting representations are guaranteed to have low distortion. Specifically, distortion measures the faithfulness of the embeddings in preserving distances when mapping from one metric space to another metric space, which is defined as follows: \begin{definition} Given two metric spaces $(\mathcal{V},d)$ and $(\mathcal{Z},d')$ and a function $f: \mathcal{V} \rightarrow \mathcal{Z}$, $f$ is said to have distortion $\alpha$ if $\forall u,v \in \mathcal{V}$, $\frac{1}{\alpha} d(u,v) \leq d'(f(u),f(v)) \leq d(u,v)$. 
\end{definition} Theorem \ref{th:bourgain} states the Bourgain theorem \cite{bourgain1985lipschitz}, which shows the existence of a low-distortion embedding from any metric space into the $l_p$ metric space: \begin{theorem} \label{th:bourgain} (Bourgain theorem) Given any finite metric space $(\mathcal{V},d)$ with $|\mathcal{V}| = n$, there exists an embedding of $(\mathcal{V}, d)$ into $\mathbb{R}^k$ under any $l_p$ metric, where $k = O(\log^2 n)$, and the distortion of the embedding is $O(\log n)$. \end{theorem} A constructive proof of Theorem \ref{th:bourgain} \cite{linial1995geometry} provides an algorithm that constructs an $O(\log^2 n)$-dimensional embedding via anchor-sets, as summarized in Theorem \ref{th:bourgain_constructive}: \begin{theorem} \label{th:bourgain_constructive} (Constructive proof of the Bourgain theorem) For a metric space $(\mathcal{V},d)$, given $k = c\log^2 n$ random sets $S_{i,j} \subset \mathcal{V}, i=1,2,...,\log n, j = 1,2,...,c\log n$, where $c$ is a constant and each $S_{i,j}$ is chosen by including each point in $\mathcal{V}$ independently with probability $\frac{1}{2^i}$, define an embedding for $v \in \mathcal{V}$ as: \begin{equation} f(v) = \big( \frac{d(v, S_{1,1})}{k}, \frac{d(v, S_{1,2})}{k}, ..., \frac{d(v, S_{\log n,c\log n})}{k} \big) \end{equation} where $d(v, S_{i,j}) = \min_{u\in S_{i,j}} d(v,u)$. Then, $f$ is an embedding method that satisfies Theorem \ref{th:bourgain}. \end{theorem} The proposed P-GNNs can be viewed as a generalization of the embedding method in Theorem \ref{th:bourgain_constructive}, where the distance metric $d$ is generalized via the message computation function $F$ and the message aggregation function $\textsc{Agg}_M$, which account for both node feature information and position-based similarities (Section \ref{sc:design_choices}). Using this analogy, Theorem \ref{th:bourgain_constructive} offers two insights for selecting anchor-sets in P-GNNs.
First, $O(\log^2 n)$ anchor-sets are needed to guarantee low distortion embedding. Second, these anchor-sets have sizes distributed exponentially. Here, we illustrate the intuition behind selecting anchor-sets with different sizes via the $1$-hop shortest path distance defined in Equation~\ref{eq:q_hop_dist}. Suppose that the model is computing embeddings for node $v_i$. We say an anchor-set \emph{hits} node $v_i$ if $v_i$ or any of its one-hop neighbours is included in the anchor-set. Small anchor-sets can provide positional information with high certainty, because when a small anchor-set hits $v_i$, we know that $v_i$ is located close to one of the very few nodes in the small anchor-set. However, the probability that such small anchor-set hits $v_i$ is low, and the anchor-set is uninformative if it misses $v_i$. On the contrary, large anchor-sets have higher probability of hitting $v_i$, thus sampling large anchor-sets can result in high sample efficiency. However, knowing that a large anchor-set hits $v_i$ provides little information about its position, since $v_i$ might be close to any of the many nodes in the anchor-set. Therefore, choosing anchor-sets of different sizes balances the trade-off and leads to efficient embeddings. Following the above principle, P-GNNs choose $k = c\log^2 n$ random anchor-sets, denoted as $S_{i,j} \subset \mathcal{V}$, where $i=1,\dots,\log n, j = 1,\dots,c\log n$ and $c$ is a hyperparameter. To sample an anchor-set $S_{i,j}$, we sample each node in $\mathcal{V}$ independently with probability $\frac{1}{2^i}$. \subsection{Design decisions for P-GNNs} \label{sc:design_choices} In this section, we discuss the design choices of the two key components of P-GNNs: the message computation function $F$ and the message aggregation functions $\textsc{Agg}$. 
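Before detailing these choices, the anchor-set sampling rule above, $k = c\log^2 n$ sets with $S_{i,j}$ including each node independently with probability $2^{-i}$, can be sketched as follows (illustrative Python; the base-2 logarithm and the guard against empty sets are our assumptions, not part of the theorem):

```python
import math
import random

def sample_anchor_sets(num_nodes, c=1, seed=0):
    """k = c * log2(n)^2 anchor-sets; S_{i,j} contains each node
    independently with probability 2^{-i}, so sizes decay exponentially."""
    rng = random.Random(seed)
    m = max(1, int(math.log2(num_nodes)))
    anchor_sets = []
    for i in range(1, m + 1):
        for _ in range(c * m):
            s = [v for v in range(num_nodes) if rng.random() < 0.5 ** i]
            # guard: an empty anchor-set carries no signal, so fall back to one node
            anchor_sets.append(s or [rng.randrange(num_nodes)])
    return anchor_sets
```

Large sets (small $i$) hit almost every node but localize it coarsely; small sets (large $i$) hit rarely but localize precisely, matching the trade-off discussed above.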
\xhdr{Message Computation Function $F$} The message computation function $F(v,u,\mathbf{h}_v,\mathbf{h}_u)$ has to account for both position-based similarities and feature information. Position-based similarities are the key to revealing a node's positional information, while feature information may include other side information that is useful for the prediction task. Position-based similarities can be computed via the shortest path distance or, for example, personalized PageRank \cite{jeh2003scaling}. However, since computing all-pairs shortest path distances has $O(|\mathcal{V}|^3)$ computational complexity, we propose the following $q$-hop shortest path distance \begin{equation} \label{eq:q_hop_dist} d^q_{sp}(v,u) = \begin{cases} d_{sp}(v,u), & \text{if } d_{sp}(v,u) \leq q, \\ \infty, & \text{otherwise} \end{cases} \end{equation} where $d_{sp}$ is the shortest path distance between a pair of nodes. Note that the $1$-hop distance can be read off directly from the adjacency matrix, and thus no additional computation is needed. Since we aim to map nodes that are close in the network to similar embeddings, we further transform the distance into $s(v,u) = \frac{1}{d^q_{sp}(v,u)+1}$, which maps it into the $[0,1]$ range. Feature information can be incorporated into $\mathbf{h}_u$ by passing in information from the neighbouring nodes, as in GCN \cite{kipf2016semi}, or by concatenating node features $\mathbf{h}_v$ and $\mathbf{h}_u$, similar to GraphSAGE \cite{hamilton2017inductive}, although other approaches such as attention can be used as well \cite{velickovic2017graph}. Combining position and feature information can then be achieved via concatenation or a product; we find that a simple product works well empirically.
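The truncated distance $d^q_{sp}$ of Equation~\ref{eq:q_hop_dist} and the transform $s(v,u)$ can be precomputed once per graph with a breadth-first expansion bounded at $q$ hops, as in the following illustrative Python sketch:

```python
import math

def truncated_dist_matrix(adj, n, q):
    """d^q_sp of Equation (1): BFS expansion up to q hops; entries stay
    at infinity when the shortest path is longer than q hops."""
    d = [[0.0 if i == j else math.inf for j in range(n)] for i in range(n)]
    for src in range(n):
        frontier = [src]
        for hop in range(1, q + 1):
            frontier = [v for u in frontier for v in adj[u] if math.isinf(d[src][v])]
            for v in frontier:
                d[src][v] = float(hop)
    return d

def similarity(d):
    # s(v, u) = 1 / (d^q_sp(v, u) + 1): 1 for the node itself,
    # 0 for nodes unreachable within q hops (1 / inf = 0)
    return [[1.0 / (x + 1.0) for x in row] for row in d]
```

For $q=1$ this reduces to reading the adjacency matrix, as noted above; each extra hop costs one more sparse expansion per source node.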
Specifically, we find the following message passing function $F$ performs well empirically \begin{equation} F(v,u,\mathbf{h}_v,\mathbf{h}_u) = s(v,u) \textsc{concat}(\mathbf{h}_v,\mathbf{h}_u) \end{equation} \xhdr{Message Aggregation Functions $\textsc{Agg}$} Message aggregation functions aggregate information from a set of messages (vectors). Any permutation invariant function, such as $\textsc{Mean}, \textsc{Min}, \textsc{Max}, \textsc{Sum}$, can be used, and non-linear transformations are often applied before and/or after the aggregation to achieve higher expressive power \cite{zaheer2017deep}. We find that using simple $\textsc{Mean}$ aggregation function provides good results, thus we use it to instantiate both $\textsc{Agg}_M$ and $\textsc{Agg}_S$. \section{Theoretical Analysis of P-GNNs} \subsection{Connection to Existing GNNs} \label{sc:connection_to_gnns} P-GNNs generalize existing GNN models. From P-GNN's point of view, existing GNNs use the same anchor-set message aggregation techniques, but use different anchor-set selection and sampling strategies, and only output the structure-aware embeddings $\mathbf{h}_{v}$. GNNs either use deterministic or stochastic neighbourhood aggregation \cite{hamilton2017inductive}. Deterministic GNNs can be expressed as special cases of P-GNNs that treat each individual node as an anchor-set and aggregate messages based on $q$-hop distance. In particular, the function $F$ in Algorithm~\ref{alg:pgnn} corresponds to the message aggregation function of a deterministic GNN. In each layer, most GNNs aggregate information from a node's one-hop neighbourhood \cite{kipf2016semi,velickovic2017graph}, corresponding to using $1$-hop distance to compute messages, or directly aggregating $k$-hop neighbourhood \cite{xu2018representation}, corresponding to computing messages within $k$-hop distance. 
For example, a GCN \cite{kipf2016semi} can be written as choosing $\{S_i\} = \{v_i\}$, $\textsc{Agg}_M= \textsc{Mean}$, $\textsc{Agg}_S = \textsc{Mean}$, $F = \frac{1}{d^1_{sp}(v,u)+1}\mathbf{h}_u$, with the output embedding being $\mathbf{h}_u$ in the final layer. Stochastic GNNs can be viewed as P-GNNs that sample size-1 anchor-sets, where each node's choice of anchor-sets is different. For example, GraphSAGE \cite{hamilton2017inductive} can be viewed as a special case of P-GNNs in which each node samples $k$ size-1 anchor-sets, computes messages over them using the 1-hop shortest path distance, and then aggregates via $\textsc{Agg}_S$. This understanding reveals the connection between stochastic GNNs and P-GNNs. First, P-GNNs use larger anchor-sets, thereby enabling higher sample efficiency (Section \ref{sc:anchor_selection}). Second, anchor-sets that are shared across all nodes serve as reference points in the network; consequently, positional information of each node can be obtained with respect to the shared anchor-sets. \subsection{Expressive Power of P-GNNs} \label{sc:anchor_distance} Next, we show that P-GNNs provide a more \textit{general class of inductive bias} for graph representation learning than GNNs and are therefore more expressive, able to learn both structure-aware and position-aware node embeddings. We motivate our idea by considering pairwise relation prediction between nodes. Suppose a pair of nodes $u, v$ is labeled with label $y$ by a labeling function, $y = d_y(u, v)$, and our goal is to predict $y$ for unseen node pairs. From the perspective of representation learning, we can solve the problem by learning an embedding function $f$ that computes the node embedding $\mathbf{z}_v$, where the objective is to maximize the likelihood of the conditional distribution $p(y|\mathbf{z}_u, \mathbf{z}_v)$.
Generally, an embedding function takes a given node $v$ and the graph $G$ as input and can be written as $\mathbf{z}_v = f(v, G)$, while $p(y|\mathbf{z}_u, \mathbf{z}_v)$ can be expressed as a function $d_z(\mathbf{z}_u, \mathbf{z}_v)$ in the embedding space. As shown in Section \ref{sc:limitation_structure_aware}, GNNs instantiate $f$ via a function $f_{\theta}(v, S_v)$ that takes a node $v$ and its $q$-hop neighbourhood graph $S_v$ as arguments. Note that $S_v$ is independent of $S_u$ (the $q$-hop neighbourhood graph of node $u$), since knowing the neighbourhood graph structure of node $v$ provides no information on the neighbourhood structure of node $u$. In contrast, P-GNNs assume a more general type of inductive bias, where $f$ is instantiated via $f_{\phi}(v, S)$, which aggregates messages from random anchor-sets $S$ that are shared across all the nodes, and nodes are differentiated based on their different distances to the anchor-sets $S$. Under this formulation, each node's embedding is computed similarly to a stochastic GNN when combined with a proper $q$-hop distance computation (Section \ref{sc:connection_to_gnns}). However, since the anchor-sets $S$ are shared across all nodes, pairs of node embeddings are correlated via the anchor-sets $S$ and are thus no longer independent. This formulation implies a joint distribution $p(\mathbf{z}_u, \mathbf{z}_v)$ over node embeddings, where $\mathbf{z}_u = f_{\phi}(u, S)$ and $\mathbf{z}_v = f_{\phi}(v, S)$.
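The effect of sharing anchor-sets can be seen in a toy sketch: if each node's "embedding" is simply its distance to a sampled size-1 anchor-set on a path graph, embeddings drawn with a shared anchor co-vary strongly across nodes, while independently sampled anchors give (near-)independent embeddings. The graph, the embedding rule, and the sample sizes are all illustrative assumptions:

```python
import random

# Path graph 0-1-2-3; a node's toy "embedding" is its distance to the anchor.
dist = lambda a, b: abs(a - b)
nodes = (0, 3)
rng = random.Random(0)

def embed_pair(shared):
    """Return the pair (z_u, z_v) for one random draw of anchor-sets."""
    if shared:
        s = rng.randrange(4)                       # one anchor shared by both nodes
        return dist(nodes[0], s), dist(nodes[1], s)
    return (dist(nodes[0], rng.randrange(4)),      # independent anchors
            dist(nodes[1], rng.randrange(4)))

def cov(pairs):
    """Sample covariance of the two embedding coordinates."""
    mu = [sum(p[i] for p in pairs) / len(pairs) for i in (0, 1)]
    return sum((p[0] - mu[0]) * (p[1] - mu[1]) for p in pairs) / len(pairs)

shared_cov = cov([embed_pair(True) for _ in range(4000)])
indep_cov = cov([embed_pair(False) for _ in range(4000)])
```

The sign of the covariance is incidental to this particular graph; the point is that a shared anchor couples the two embeddings, matching the joint distribution $p(\mathbf{z}_u, \mathbf{z}_v)$ above, while independent sampling factorizes it.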
In summary, \textit{learning node representations} can be formalized with the following two types of objectives: \-\hspace{5mm} $\bullet$ GNN representation learning objective: \begin{equation} \begin{aligned} \label{eq:ob_edge_marginal} \min_\theta & \ \mathbb{E}_{u \sim V_{train}, v \sim V_{train}, S_u \sim p(V) , S_v \sim p(V)} \\ & \mathcal{L}(d_z(f_{\theta}(u, S_u), f_{\theta}(v, S_v))-d_y(u,v)) \end{aligned} \end{equation} \-\hspace{5mm} $\bullet$ P-GNN representation learning objective: \begin{equation} \begin{aligned} \label{eq:ob_edge_joint} \min_\phi & \ \mathbb{E}_{u \sim V_{train}, v \sim V_{train}, S \sim p(V)} \\ & \mathcal{L}(d_z(f_{\phi}(u, S), f_{\phi}(v, S))-d_y(u,v)) \end{aligned} \end{equation} where $d_y$ is the target similarity metric determined by the learning task, for example, indicating links between nodes or membership in the same community, and $d_z$ is the similarity metric in the embedding space, usually the $l_p$ norm. Optimizing Equations \ref{eq:ob_edge_marginal} and \ref{eq:ob_edge_joint} gives representations of nodes based on the marginal and joint distributions over node embeddings, respectively. If we treat $u$, $v$ as random variables over $G$ that can take the value of any pair of nodes, then the mutual information between the joint distribution of node embeddings and any $Y = d_y(u,v)$ is at least as large as that between the marginal distributions and $Y$: $I(Y; X_{joint}) \geq I(Y; X_{marginal})$, where $X_{joint} = (f_{\phi}(u, S), f_{\phi}(v, S)) \sim p(f_{\phi}(u, S), f_{\phi}(v, S))$ and $X_{marginal} = (f_{\theta}(u, S_u), f_{\theta}(v, S_v)) \sim p(f_{\theta}(u, S_u)) \otimes p(f_{\theta}(v, S_v))$, where $\otimes$ denotes the product of the marginal distributions. The gap in mutual information is large if the target $d_y(u,v)$ depends on the positional information of nodes, which can be captured by the shared choice of anchor-sets.
Thus, we conclude that P-GNNs, which embed nodes using the joint distribution of their distances to common anchors, have more expressive power than existing GNNs. \subsection{Complexity Analysis} Here we discuss the complexity of the neural network computation. In P-GNNs, every node communicates with $O(\log^2 n)$ anchor-sets in a graph with $n$ nodes and $e$ edges. Suppose that, on average, each anchor-set contains $m$ nodes; then there are $O(mn \log^2n)$ message communications in total. If we follow the exact anchor-set selection strategy, the complexity will be $O(n^2 \log^2n)$. In contrast, the number of communications is $O(n+e)$ for existing GNNs. In practice, we observe that the computation can be sped up by using a simplified aggregation $\textsc{Agg}_S$, while only slightly sacrificing predictive performance. Here, for each anchor-set, we only aggregate the message from the node closest to a given node $v$. This removes the factor $m$ in the complexity of P-GNNs, making the complexity $O(n\log^2n)$. We use this implementation in the experiments. \section{Introduction} Learning low-dimensional vector representations of nodes in graphs \cite{hamilton2017representation} has led to advances on tasks such as node classification \cite{kipf2016semi}, link prediction \cite{grover2016node2vec}, graph classification \cite{ying2018hierarchical} and graph generation \cite{you2018graphrnn}, with successful applications across domains such as social and information networks \cite{ying2018graph}, chemistry \cite{you2018graph}, and biology \cite{zitnik2017predicting}. Node embedding methods can be categorized into Graph Neural Network (GNN) approaches \cite{scarselli2009graph}, matrix-factorization approaches \cite{belkin2002laplacian}, and random-walk approaches \cite{perozzi2014deepwalk}. Among these, GNNs are currently the most popular paradigm, largely owing to their efficiency and inductive learning capability~\cite{hamilton2017inductive}.
By contrast, random-walk approaches~\cite{perozzi2014deepwalk,grover2016node2vec} are limited to transductive settings and cannot incorporate node attributes. In the GNN framework, the embedding of a node is computed by a GNN layer aggregating information from the node's network neighbors via non-linear transformation and aggregation functions~\cite{battaglia2018relational}. Long-range node dependencies can be captured by stacking multiple GNN layers, allowing information to propagate over multiple hops~\cite{xu2018representation}. However, the key limitation of existing GNN architectures is that they fail to capture the {\em position/location} of a node within the broader context of the graph structure. For example, if two nodes reside in very different parts of the graph but have topologically the same (local) neighbourhood structure, their GNN computation graphs will be identical. Therefore, the GNN will embed them to the same point in the embedding space (we ignore node attributes for now). Figure~\ref{fig:example} gives an example where a GNN cannot distinguish between nodes $v_1$ and $v_2$ and will always embed them to the same point, because they have isomorphic network neighborhoods. Thus, GNNs will never be able to classify nodes $v_1$ and $v_2$ into different classes, because from the GNN point of view they are indistinguishable (again, not considering node attributes). Researchers have spotted this weakness~\cite{xu2018powerful} and developed heuristics to fix the issue: augmenting node features with one-hot encodings \cite{kipf2016semi}, or making GNNs deeper~\cite{selsam2018learning}. However, models trained with one-hot encodings cannot generalize to unseen graphs, and arbitrarily deep GNNs still cannot distinguish structurally isomorphic nodes (Figure \ref{fig:example}).
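This indistinguishability is easy to reproduce: an untrained mean-aggregation GNN layer, applied any number of times to a symmetric graph with identical node features, produces identical embeddings for nodes with isomorphic neighbourhoods. The 4-cycle below is an illustrative stand-in for the graph of Figure~\ref{fig:example}:

```python
import numpy as np

# A symmetric toy graph (4-cycle): every node has an isomorphic neighbourhood.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
h = {v: np.array([1.0]) for v in adj}   # identical initial features

def gnn_layer(h, adj):
    """One untrained GNN layer: concat(self, mean over neighbours)."""
    return {v: np.concatenate([h[v], np.mean([h[u] for u in adj[v]], axis=0)])
            for v in adj}

for _ in range(3):   # stacking more layers does not break the symmetry
    h = gnn_layer(h, adj)
```

After any depth, symmetric nodes carry identical embeddings, so no downstream classifier can separate them without extra (e.g. positional) information.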
\begin{figure} \centering \includegraphics[width=0.42\textwidth]{figs/example.png} \caption{Example graph where a GNN is not able to distinguish, and thus classify, nodes $v_1$ and $v_2$ into different classes based on the network structure alone. (Note that we do not consider node features.) Each node is labeled $A$ or $B$, and an effective node embedding should be able to learn to distinguish nodes $v_1$ and $v_2$ (that is, embed them into different points in the space). However, GNNs, regardless of depth, will \textit{always} assign the same embedding to both nodes, because the two nodes are symmetric/isomorphic in the graph, and their GNN rooted subtrees used for message aggregation are the same. In contrast, P-GNNs can break the symmetry by using $v_3$ as the anchor-set: the shortest path distances $(v_1, v_3)$ and $(v_2, v_3)$ are different, and nodes $v_1$ and $v_2$ can thus be distinguished. } \label{fig:example} \vspace{-3mm} \end{figure} Here we propose {\em Position-aware Graph Neural Networks (P-GNNs)}, a new class of Graph Neural Networks for computing node embeddings that incorporate a node's positional information with respect to all other nodes in the network, while also retaining inductive capability and utilizing node features. Our key observation is that node position can be captured by a low-distortion embedding that quantifies the distance between a given node and a set of anchor nodes. Specifically, P-GNN uses a sampling strategy with theoretical guarantees to choose $k$ random subsets of nodes called {\em anchor-sets}. To compute a node's embedding, P-GNN first samples multiple anchor-sets in each forward pass, then learns a non-linear aggregation scheme that combines node feature information from each anchor-set and weighs it by the distance between the node and the anchor-set. Such aggregations can be naturally chained and combined into multiple layers to enhance model expressiveness.
The Bourgain theorem \cite{bourgain1985lipschitz} guarantees that only $k = O(\log^2 n)$ anchor-sets are needed to preserve the distances in the original graph with low distortion. We demonstrate the P-GNN framework on a variety of real-world graph-based prediction tasks. In settings where node attributes are not available, P-GNN's computation of the $k$-dimensional distance vector is inductive across different node orderings and different graphs. When node attributes are available, a node's embedding is further enriched by aggregating information from all anchor-sets, weighted by the $k$-dimensional distance vector. Furthermore, we show theoretically that P-GNNs are more general and expressive than traditional message-passing GNNs. In fact, message-passing GNNs can be viewed as special cases of P-GNNs with degenerated distance metrics and anchor-set sampling strategies. In large-scale applications, computing distances between nodes can be prohibitively expensive. Therefore, we also propose P-GNN-Fast, which adopts approximate node distance computation. We show that P-GNN-Fast has the same computational complexity as traditional GNN models while still preserving the benefits of P-GNN. We apply P-GNNs to 8 different datasets and several different prediction tasks, including link prediction and community detection\footnote{Code and data are available at \url{https://github.com/JiaxuanYou/P-GNN/}}. Across all datasets and prediction tasks, we show that P-GNNs consistently outperform state-of-the-art GNN variants, with up to 66\% AUC score improvement.
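A minimal sketch of Bourgain-style anchor-set sampling — on the order of $\log^2 n$ sets, the $i$-th drawn by including each node independently with probability $1/2^i$ — together with the node-to-anchor-set distance taken as the minimum over member nodes (the same simplification used in the complexity analysis). The constant $c$, the BFS distance, and the empty-set fallback are illustrative assumptions:

```python
import math
import random
from collections import deque

def sample_anchor_sets(nodes, c=1, seed=0):
    """Sample roughly c * log^2(n) anchor-sets of exponentially decaying size."""
    rng = random.Random(seed)
    log_n = max(1, int(math.log2(len(nodes))))
    sets = []
    for _ in range(c * log_n):
        for i in range(1, log_n + 1):
            s = [v for v in nodes if rng.random() < 0.5 ** i]
            sets.append(s or [rng.choice(nodes)])   # avoid empty anchor-sets
    return sets

def dist_to_set(v, anchor_set, adj):
    """BFS shortest-path distance from v to the closest node in the anchor-set."""
    target, seen, q = set(anchor_set), {v}, deque([(v, 0)])
    while q:
        u, d = q.popleft()
        if u in target:
            return d
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                q.append((w, d + 1))
    return math.inf
```

For a graph with $n = 8$ nodes this yields $\log_2^2 8 = 9$ anchor-sets, and each node's $k$-dimensional position descriptor is its vector of distances to them.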
\section{Related Work} Existing GNN models belong to a family of graph message-passing architectures that use different aggregation schemes for a node to aggregate feature messages from its neighbors in the graph: Graph Convolutional Networks use mean pooling \cite{kipf2016semi}; GraphSAGE concatenates the node's feature in addition to mean/max/LSTM-pooled neighborhood information \cite{hamilton2017inductive}; Graph Attention Networks aggregate neighborhood information according to trainable attention weights \cite{velickovic2017graph}; Message Passing Neural Networks further incorporate edge information when doing the aggregation \cite{gilmer2017neural}; and Graph Networks further consider global graph information during aggregation \cite{battaglia2018relational}. However, all these models focus on learning node embeddings that capture the local network structure around a given node. Such models are at most as powerful as the WL graph isomorphism test \cite{xu2018powerful}, which means that they cannot distinguish nodes at symmetric/isomorphic positions in the network (Figure~\ref{fig:example}). That is, without relying on node feature information, the above models will always embed nodes at symmetric positions into the same embedding vectors, which means that such nodes are indistinguishable from the GNN's point of view. Heuristics that alleviate the above issues include assigning a unique identifier to each node \cite{kipf2016semi,hamilton2017inductive} or using locally assigned node identifiers plus pre-trained transductive node features \cite{zhang2018link}. However, such models are not scalable and cannot generalize to unseen graphs where a canonical node ordering is not available. In contrast, P-GNNs can capture positional information without sacrificing the other advantages of GNNs. An alternative way to incorporate positional information is to utilize graph kernels, which crucially rely on the positional information of nodes and which inspired our P-GNN model.
Graph kernels implicitly or explicitly map graphs to a Hilbert space. Weisfeiler-Lehman and Subgraph kernels have been incorporated into deep graph kernels~\cite{Yan+2015} to capture structural properties of neighborhoods. \citeauthor{Gae+2003} (\citeyear{Gae+2003}) and \citeauthor{Kas+2003} (\citeyear{Kas+2003}) also proposed graph kernels based on random walks, which count the number of walks two graphs have in common~\cite{Sug+2015}. Kernels based on shortest paths were first proposed in~\cite{Borgwardt2005}.
\section{Introduction.} Wave Turbulence is a theory that describes random weakly nonlinear wave systems with broadband spectra (see e.g. ref. \cite{naza11}). The main object in this theory is the wave action spectrum, which is the second-order moment of the wave amplitude and which evolves according to the so-called wave-kinetic equation. Special attention in the past literature was given to studies of stationary scaling solutions of this equation, which are analogous to the Kolmogorov spectrum of hydrodynamic turbulence, the so-called Kolmogorov-Zakharov spectra. However, as shown in refs. \cite{lvov_2004,choi_2004,choi_2005,choi_2005b,naza11}, the Wave Turbulence approach can also be extended to describe the higher-order moments and even the entire probability density function (PDF) of the wave amplitude. A formal justification of such an extension, based on a rigorous statistical formulation, was later presented in ref. \cite{Eyink}. An introduction to Wave Turbulence, as well as a summary of recent developments in this area, can be found in the book \cite{naza11} and in an older text \cite{ZLF}. It has been widely believed that the statistics of random weakly nonlinear wave systems are close to Gaussian. The derivation of the evolution equation for the PDF of the wave intensities presented in ref. \cite{choi_2005} has made it possible to examine this belief. It was shown in ref. \cite{choi_2005} that this equation indeed has a stationary solution corresponding to the Gaussian state, but it was also noted that the typical evolution time of the PDF is the same as that of the spectrum. Thus, for non-stationary wave systems one can expect significant deviations from Gaussianity if the initial wave distribution is non-Gaussian. Note that non-Gaussian (typically deterministic) initial conditions for the wave intensity are common in numerical simulations of Wave Turbulence. Also, there is no reason to believe that initial waves excited in natural conditions, e.g.
sea waves excited by wind, should be Gaussian. Therefore, the study of the evolution of the wave statistics is important both for the understanding of fundamental nonlinear processes and for practical predictions such as wave weather forecasting. In the present paper we present the full general solution of the PDF equation derived in ref. \cite{choi_2005}. Based on that solution, we formulate the condition under which the wave statistics relaxes to the Gaussian state. \section{Evolution equations for the wave amplitude and for the PDF} Consider a weakly nonlinear wave system dominated by four-wave interactions in an $L$-periodic cube in the $d$-dimensional physical space. (Four-wave systems are considered here as an illustrative example only. All results of this paper hold for $N$-wave systems with any $N$. The only difference will be in the expressions for $\gamma_{\bf k}$ and $\eta_{\bf k}$ below; see ref. \cite{naza11}.) We have the Hamiltonian equations for the Fourier coefficients as follows, \begin{equation} i\dot{a}_{\bf k} = \frac{\partial \mathcal{H}}{\partial {a}_{\bf k}^*}, \quad \mathcal{H}=\sum_{\bf k} \omega_{\bf k} |{a}_{\bf k}|^2 + \frac{1}{2}\sum_{{\bf k}_1, {\bf k}_2,{\bf k}_3,{\bf k}_4} W^{{\bf k}_1,{\bf k}_2}_{{\bf k}_3,{\bf k}_4}{a}_{{\bf k}_1}^* {a}_{{\bf k}_2}^* a_{{\bf k}_3} a_{{\bf k}_4} \label{4wH}, \end{equation} where ${\bf k}, {\bf k}_1, {\bf k}_2,{\bf k}_3,{\bf k}_4 \in \frac {2\pi} L \mathbb{Z}^d $ are the wave vectors, $a_{\bf k}\in \mathbb{C}$ is the wave action variable, $W^{{\bf k}_1,{\bf k}_2}_{{\bf k}_3,{\bf k}_4} \in \mathbb{R}$ is an interaction coefficient, which is a model-specific function of ${{\bf k}_1, {\bf k}_2,{\bf k}_3,{\bf k}_4}$ (e.g. $W^{{\bf k}_1,{\bf k}_2}_{{\bf k}_3,{\bf k}_4} =1$ for the Gross-Pitaevskii equation), and $\omega_{\bf k} \in \mathbb{R}$ is the frequency of mode $\bf k$.
Let us consider the PDF ${\mathcal P}(t,s_{\bf k})$ of the wave intensity $J_{\bf k}= |a_{\bf k}|^2$, defined in the standard way, so that the probability for $J_{\bf k}$ to be in the range from $s_{\bf k}$ to $s_{\bf k} +ds_{\bf k}$ is ${\mathcal P}(t,s_{\bf k}) d s_{\bf k}$. In symbolic form, \begin{equation} {\mathcal P}(t,s_{\bf k}) = \langle \delta (s_{\bf k}-J_{\bf k}) \rangle. \label{pasdelta} \end{equation} Suppose that the waves are weakly nonlinear, so that the quartic part of the Hamiltonian is much less than its quadratic part. Suppose also that the complex wave amplitudes $a_{\bf k}$ are independent random variables for each $\bf k$ and that the initial phases of $a_{\bf k}$ are random and equally probable in the range from $0$ to $2\pi$. These are the main assumptions of the weak Wave Turbulence theory (see ref. \cite{naza11}), leading, upon taking the infinite-box limit $L \to \infty$, to the following evolution equation for ${\mathcal P}(t,s_{\bf k})$, as derived in ref. \cite{choi_2005}: \begin{equation} \frac{\partial {\mathcal P}(t, s_{\bf k})}{\partial t} +\frac{\partial F}{\partial s_{\bf k}}=0,\label{main} \end{equation} where \begin{equation} F = -s_{\bf k} \Big(\gamma_{\bf k} {\mathcal P} +\eta_{\bf k} \frac{\partial {\mathcal P} }{\partial s_{\bf k}}\Big) \end{equation} and, for the four-wave systems, \begin{eqnarray} \eta_{\bf k}(t) &=& 4\pi \int|W^{{\bf k},{\bf k}_1}_{{\bf k}_2,{\bf k}_3}|^2\delta({{\bf k} + {\bf k}_1 -{\bf k}_2 -{\bf k}_3} )\delta(\omega_{{\bf k}} + \omega_{{\bf k}_1} - \omega_{{\bf k}_2} -\omega_{{\bf k}_3}) n_{{\bf k}_1} n_{{\bf k}_2} n_{{\bf k}_3} d{\bf k}_1d{\bf k}_2d{\bf k}_3, \\ \gamma_{\bf k}(t) &=& 8\pi \int|W^{{\bf k},{\bf k}_1}_{{\bf k}_2,{\bf k}_3}|^2\delta({{\bf k} + {\bf k}_1 -{\bf k}_2 -{\bf k}_3} )\delta(\omega_{{\bf k}} + \omega_{{\bf k}_1} - \omega_{{\bf k}_2} -\omega_{{\bf k}_3}) \Big[n_{{\bf k}_1}(n_{{\bf k}_2}+n_{{\bf k}_3})-n_{{\bf k}_2}n_{{\bf k}_3}\Big]d{\bf k}_1d{\bf k}_2d{\bf k}_3, \label{gam}
\end{eqnarray} where $n_{\bf k} = \langle J_{\bf k} \rangle$ is the wave action spectrum. The infinite-box limit results in passing to a continuous wave-number description; each wave-number integration in the above equations is over $\mathbb{R}^d$. In this paper, we will find the time-dependent solution of the PDF equation (\ref{main}). \section{Generating function } Let us introduce the generating function \begin{equation} \mathcal{Z}(t,\lambda_{\bf k}) = \langle e^{-\lambda_{\bf k} |a_{\bf k}(t)|^2} \rangle = \int^{\infty}_{0}{\mathcal P}(t,s_{\bf k}) e^{-\lambda_{\bf k} s_{\bf k}} ds_{\bf k} \label{eq:Z} \end{equation} where $\lambda_{\bf k}$ is a real parameter. Note that this definition differs from the one used in Ref.~\cite{choi_2005} by the sign of the exponent. Here, we have changed the sign in order to comply with the standard relation between $\mathcal{P}$ and $\mathcal{Z}$ via the Laplace transform, as expressed in eqn. (\ref{eq:Z}). In what follows we will drop the subscripts $\bf k$ for brevity whenever this does not cause ambiguity. The inverse Laplace transformation of $\mathcal{Z}(t,\lambda)$ gives: \begin{equation} {\mathcal P}(t,s) = \frac{1}{2\pi i}\lim_{T\to \infty} \int^{T+i\infty}_{T-i\infty}{\mathcal Z}(t,\lambda) e^{s\lambda} d\lambda. \end{equation} Given ${\mathcal Z}$, one can easily calculate the moments of the wave intensity, \begin{equation} M^{(p)}_{\bf k} \equiv \langle |a_{\bf k}|^{2p}\rangle =(-1)^p {\mathcal Z}_{\lambda\cdots\lambda}|_{\lambda=0} = \langle |a|^{2p} e^{-\lambda |a|^2}\rangle |_{\lambda=0}, \end{equation} where $p \in \mathbb{N}$ is the order of the moment and the subscript $\lambda$ means taking a derivative with respect to $\lambda$. In particular, for the wave action spectrum we have $$ n_{\bf k} = - {\mathcal Z}_{\lambda}|_{\lambda=0} . $$ The evolution equation for $\mathcal{Z}$ can be obtained by Laplace transforming eqn.
(\ref{main}), which gives \begin{equation} \dot{\mathcal{Z}} =- \lambda\eta \mathcal{Z} -(\lambda^2\eta+\lambda\gamma)\mathcal{Z}_{\lambda}.\label{gf} \end{equation} Note that the sign differences in this equation with respect to the corresponding equation in Ref.~\cite{choi_2005} are due to the sign difference in our definition of $\mathcal{Z}$. Previously, in Ref.~\cite{choi_2005}, the general steady-state solution of eqn. (\ref{gf}) was presented: \begin{equation} {\mathcal{Z}} = \frac 1 {1+ \lambda_{\bf k} n_{\bf k}}. \label{gfst} \end{equation} This solution corresponds to Gaussian statistics of the wave field (respectively, the Rayleigh distribution for the wave intensity). Below, we will concentrate on the fully time-evolving problem, in which the parameters $ \eta, \gamma$ are time-dependent. The goal of this paper is first to find the solution of eqn. (\ref{gf}) and then to obtain the respective time-dependent PDF. \section{Solution for $\mathcal{Z}$ by the method of characteristics } We can find the solution for the fully time-evolving case of eqn. (\ref{gf}) by using the method of characteristics. Rewriting this equation in characteristic form, we have: \begin{equation} \frac{d\lambda}{dt}= \Big({\gamma}+\lambda \eta \Big)\lambda, \quad \frac{d\mathcal{Z}}{dt}=- \lambda\eta\mathcal{Z}.\label{cm} \end{equation} Changing variable to $\mu = \lambda e^{-\int_0^t {\gamma(t')}dt'}$ in the first of these equations, we transform it into \begin{equation} \frac{d\mu(t)}{dt} = \eta \mu \lambda =\eta \mu^2 e^{\int_0^t {\gamma(t')}dt'}, \label{cm1} \end{equation} solving which we have: \begin{equation} -\frac 1 {\mu(t)}+\frac 1 {\mu_0} = \int^t_0 \eta(t') e^{\int^{t'}_0 \gamma(t'')dt''}dt', \end{equation} where $\mu_0 = \mu(0) = \lambda_0$.
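As a quick consistency check, the characteristic equation for $\lambda$ can be integrated numerically and compared with the closed form obtained by carrying out the integrals explicitly. We take constant $\gamma$ and $\eta$ as an illustrative special case (in general they are time-dependent), so that $\int_0^t \eta\, e^{\gamma t'}dt' = \eta(e^{\gamma t}-1)/\gamma$:

```python
import math

# Illustrative constants; in general gamma(t) and eta(t) come from the
# wave-kinetic integrals and are time-dependent.
gamma, eta, lam0, T = 0.7, 0.3, 0.2, 1.0
steps = 100000
dt = T / steps

# Forward-Euler integration of d(lambda)/dt = (gamma + eta*lambda)*lambda.
lam = lam0
for _ in range(steps):
    lam += dt * (gamma + eta * lam) * lam

# Closed form for constant coefficients:
# lambda(t) = lam0 e^{gamma t} / (1 - lam0 * eta * (e^{gamma t} - 1) / gamma)
closed = lam0 * math.exp(gamma * T) / (
    1.0 - lam0 * eta * (math.exp(gamma * T) - 1.0) / gamma)
```

The two values agree to the accuracy of the time-stepping scheme.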
This gives for $\lambda(t) $: \begin{equation} \lambda(t) = \frac{\lambda_0 e^{\int^t_0 \gamma(t')dt'}}{1 - \lambda_0 \int^t_0 \eta(t') e^{\int^{t'}_0 \gamma(t'')dt''}dt'}.\label{slt} \end{equation} This relation has a simpler form in terms of $n$ rather than $\eta$. Indeed, $n$ satisfies the following (kinetic) equation, \begin{equation} \dot n = \eta - \gamma n, \label{eq:ndot} \end{equation} integrating which we have \begin{equation} n(t) =n(0) \, e^{-\int^t_0 \gamma(t')dt'} + e^{-\int^t_0 \gamma(t')dt' } \int^t_0 \eta(t') e^{\int^{t'}_0 \gamma(t'')dt''}dt'. \label{eq:n} \end{equation} Using this identity, we have: \begin{equation} \lambda(t) = \frac{\lambda_0 }{e^{-\int^t_0 \gamma(t')dt'} - \lambda_0 \left(n(t)-n(0) e^{-\int^t_0 \gamma(t')dt'}\right)}.\label{slt1} \end{equation} Conversely, we have: \begin{equation} \lambda_0 = \frac{\lambda e^{-\int^t_0 \gamma(t')dt'} }{1+ \lambda \left(n(t)-n(0) e^{-\int^t_0 \gamma(t')dt'}\right)}.\label{slt1i} \end{equation} Now, from the second of the eqns. (\ref{cm}) and from the first equality in eqn. (\ref{cm1}) we see that the log derivative of $\mathcal{Z}$ is equal to the negative log derivative of $\mu$. Thus, \begin{equation} \mathcal{Z}(t, \lambda)=\mathcal{Z}_0 \frac {\mu_0} \mu= \frac {\mathcal{Z}_0 \lambda_0} \lambda e^{\int_0^t {\gamma(t')}dt'} = \frac { \mathcal{Z}_0 }{1+ \lambda \left(n(t)-n(0) e^{-\int^t_0 \gamma(t')dt'}\right)}.\label{solnZ} \end{equation} where $\mathcal{Z}_0 = \mathcal{Z}(0, \lambda_0)$ and $\lambda_0$ must be substituted in terms of $\lambda$ solving for it from eqn. (\ref{slt1i}). Eqns. (\ref{solnZ}) and (\ref{slt1i}) allow us to prove the following theorem. \begin{thm} \begin{enumerate} \item Wave fields which are Gaussian initially will remain Gaussian for all time. \item Wave turbulence asymptotically becomes Gaussian if \begin{equation} \lim_{t \to \infty} \frac {n(0) e^{-\int_0^t {\gamma(t')}dt' }}{n(t)} = 0. 
\label{cond} \end{equation} \end{enumerate} \end{thm} To prove the first part we simply substitute $\mathcal{Z}_0 = 1/(1+ \lambda_0 n_0)$ into eqn. (\ref{solnZ}) and, after using (\ref{slt1i}), obtain $\mathcal{Z} = 1/(1+ \lambda n)$, which corresponds to the Gaussian statistics. To prove the second part we notice that if condition (\ref{cond}) is satisfied then \begin{equation} \lim_{t \to \infty} \lambda_0 (t, \lambda) = 0, \quad \lim_{t \to \infty} \mathcal{Z}_0 = \mathcal{Z}(0, 0) =1 \quad \hbox{and} \quad \lim_{t \to \infty} \mathcal{Z}(t, \lambda) = \frac { 1 }{1+ \lambda n}. \label{cond1} \end{equation} {\bf Remarks:} \begin{enumerate} \item Condition (\ref{cond}) is satisfied for the inertial range modes in forced-dissipated systems which tend to a steady state. Indeed, in this case $\gamma \to \eta/n $, which is a positive constant (at fixed mode $\bf k$), so the time integral of this quantity diverges as $t \to \infty$. \item In the absence of forcing and dissipation, the spectrum $n_{\bf k}$ decays to zero at any mode $\bf k$ as $t \to \infty$, and so does $\gamma_{\bf k}$. Thus the integral of $\gamma_{\bf k}(t)$ may converge as $t \to \infty$, which means that the non-Gaussianity of some (or all) wave modes may persist as $t \to \infty$. \item In general, the function $\gamma_{\bf k}(t)$ is not sign-definite, and there may be transient time periods where $\gamma_{\bf k}(t)<0$. The deviation from Gaussianity of some (or all) wave modes may increase during these periods. \end{enumerate} \section{Evolution of the PDF} Now let us analyse the PDF of transient states. Let us consider the simple case of a deterministic initial wave intensity, $P(0,s) = \delta(s - J)$. We will call such a solution $\mathcal{P}_J(t,s)$. Then $\mathcal{Z}(0,\lambda) = e^{-\lambda J}$.
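The closed-form propagation of $\mathcal{Z}$ can also be sanity-checked numerically: for constant $\gamma$ and $\eta$ (an illustrative special case, with the exponents replaced by plain products $\gamma t$), a Gaussian initial condition $\mathcal{Z}_0 = 1/(1+\lambda_0 n_0)$ is mapped by the solution above to exactly $1/(1+\lambda n(t))$, as part 1 of the theorem asserts:

```python
import math

# Illustrative constant coefficients; the general case only replaces the
# exponents by time integrals of gamma.
gamma, eta, n0, lam, t = 0.5, 0.4, 1.5, 0.8, 2.0

decay = math.exp(-gamma * t)
n_t = n0 * decay + (eta / gamma) * (1.0 - decay)  # spectrum n(t), constant gamma, eta
delta = n_t - n0 * decay                          # n(t) - n(0) e^{-gamma t}

lam0 = lam * decay / (1.0 + lam * delta)          # characteristic: lambda_0(t, lambda)
Z0 = 1.0 / (1.0 + lam0 * n0)                      # Gaussian initial generating function
Z = Z0 / (1.0 + lam * delta)                      # propagated Z(t, lambda)

Z_gauss = 1.0 / (1.0 + lam * n_t)                 # Gaussian form at time t
```

The agreement is exact (to rounding), reflecting the algebraic identity rather than a numerical approximation.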
In fact, since the inverse Laplace transform is a linear operation, the considered solution is nothing but the Green's function for the general problem with an arbitrary initial condition $P(0,s)$: \begin{equation} \mathcal{P}(t,s) =\int_0^\infty \mathcal{P}(0,J) \mathcal{P}_J(t,s) dJ. \label{solnPgen} \end{equation} Let us take the inverse Laplace transform of $\mathcal{Z}(t, \lambda)$ given by eqn. (\ref{solnZ}) to obtain $\mathcal{P}_J(t,s)$ at $t>0$: \begin{equation} \mathcal{P}_J(t,s) = \frac{1}{2\pi i}\lim_{T\to +\infty} \int^{T+i\infty}_{T-i\infty} e^{\lambda s}\mathcal{Z}(t,\lambda) d\lambda \\ = \frac{1}{2\pi i} \lim_{T\to +\infty} \int^{T+i\infty}_{T-i\infty} \frac { e^{\lambda s-\lambda_0 J} }{1+ \lambda \tilde n } d\lambda. \label{solnPst} \end{equation} where \begin{equation} \tilde n = n(t)-J e^{-\int^t_0 \gamma(t')dt'} \end{equation} (note that $n(0) =J$). Substituting $\lambda_0$ from (\ref{slt1i}) and changing the integration variable as $ \rho = \lambda +1/\tilde n$, we have: \begin{equation} \mathcal{P}_J(t,s) = \frac{e^{-\frac{s}{\tilde n} - a \tilde n}}{2\pi i {\tilde n}} \lim_{T\to +\infty} \int^{T+i\infty}_{T-i\infty} \frac{ e^{s \rho +\frac a \rho} }\rho d \rho =\frac 1 { {\tilde n}} {e^{-\frac{s}{\tilde n} - a \tilde n}} I_0(2\sqrt{as}) . \label{solnPst1} \end{equation} where $a = \frac{J}{ {\tilde n}^2} e^{-\int^t_0 \gamma(t')dt'}$ and $I_0(x) $ is the zeroth-order modified Bessel function of the first kind. Note that $I_0(0)=1$, so we recover $\mathcal{P}_J \to \mathcal{P}_G = \frac 1 n e^{-s/n}$ as $t \to \infty$ if condition (\ref{cond}) is satisfied, provided that $s$ is not too large, $as \ll 1$. Now let us suppose that condition (\ref{cond}) is satisfied and let us consider the asymptotic behaviour of the probability distribution at large $s$ and large $t$, with $as \gg 1$ (i.e. $s$ much larger than $1/a$, which is itself large).
Taking into account that $I_0(x) \xrightarrow{x\rightarrow \infty} \frac{e^x}{\sqrt{2\pi x}}$, we have: \begin{equation} \mathcal{P}_J(t,s) \to \frac{\mathcal{P}_G}{(2\pi)^{1/2}(as)^{1/4}}e^{2\sqrt{as} - as} \ll \mathcal{P}_G \quad \hbox{for} \quad as\gg 1, \; \int^t_0 \gamma(t')dt' \gg 1. \label{front} \end{equation} Thus, we see a front at $s \sim s^{*}(t) = 1/a $ moving toward large $s$ as $t \to \infty$. The PDF ahead of this front is depleted with respect to the Gaussian distribution, whereas behind the front it asymptotes to Gaussian. Obviously, the same kind of behaviour will be realised for any solution (\ref{solnPgen}) arising from initial data having a finite support in $s$. \section{Conclusions and Discussion} In this paper we have obtained the general solutions for the generating function and for the PDF of wave intensities in Wave Turbulence, equations (\ref{solnZ}) and (\ref{slt1i}), and equation (\ref{solnPst1}), respectively. This allowed us to prove a theorem stating that wave fields which are Gaussian initially will remain Gaussian for all time, and that Wave Turbulence asymptotically becomes Gaussian if condition (\ref{cond}) is satisfied. We have also found (when condition (\ref{cond}) is satisfied) an asymptotic solution for the PDF (\ref{front}), where the Gaussian distribution forms behind a front propagating toward large wave intensities. Condition (\ref{cond}) is satisfied for the inertial range modes in forced-dissipated systems approaching a steady state. Thus, the Gaussian statistics will form at large times for such modes in these systems. An interesting subclass of solutions in forced-dissipated systems is when the spectrum is in a steady state from the initial moment of time (i.e. it is a stationary solution of the wave-kinetic equation), while the PDF is not Rayleigh initially (i.e. the initial wave field is not Gaussian). For example, the initial wave intensities can be deterministic, i.e.
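Two sanity checks on the transient PDF (\ref{solnPst1}) are straightforward to carry out numerically: it is normalized to unity, and it reduces to the Gaussian-state intensity distribution $\mathcal{P}_G = e^{-s/n}/n$ as the memory parameter $a$ vanishes. The power-series implementation of $I_0$ and the integration grid below are illustrative choices:

```python
import math

def i0(x, terms=80):
    """Modified Bessel function I_0 via its power series sum_k (x/2)^{2k}/(k!)^2."""
    term, total = 1.0, 1.0
    for k in range(1, terms):
        term *= (x * x / 4.0) / (k * k)
        total += term
    return total

def pdf_J(s, n_tilde, a):
    """P_J(t,s) = exp(-s/n~ - a n~) I_0(2 sqrt(a s)) / n~."""
    return math.exp(-s / n_tilde - a * n_tilde) * i0(2.0 * math.sqrt(a * s)) / n_tilde

n_tilde, a = 1.0, 0.5          # illustrative values of the two parameters
ds = 0.002
# Riemann-sum normalization check on [0, 50]; the tail beyond is negligible.
norm = sum(pdf_J(k * ds, n_tilde, a) * ds for k in range(1, 25001))
```

The known Laplace integral $\int_0^\infty e^{-ps} I_0(2\sqrt{as})\,ds = e^{a/p}/p$ with $p = 1/\tilde n$ confirms the normalization analytically as well.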
their PDFs are delta-functions, as is often the case in numerical simulations of Wave Turbulence. In this case, equation (\ref{solnPst1}) takes its simplest form, with $\int^t_0 \gamma(t')dt' = \gamma t$. Since the characteristic evolution times are the same for the spectrum $n_{\bf k}$ and the PDF, the latter will remain non-Gaussian over a substantial time if the initial field is non-Gaussian. Such situations should be considered typical rather than exceptional in natural conditions (where initial waves arise, e.g., from an instability which does not necessarily produce Gaussian waves) and in numerical simulations (where typically the wave intensities are taken to be deterministic). Moreover, in the absence of forcing and dissipation, the spectrum $n_{\bf k}$ decays to zero at any mode $\bf k$ as $t \to \infty$, and so does $\gamma_{\bf k}$. Thus the integral $\int^t_0 \gamma_{\bf k}(t')dt' $ may converge as $t \to \infty$, which means that the non-Gaussianity of some (or all) wave modes may persist as $t \to \infty$. Furthermore, since $\gamma_{\bf k}(t)$ is not sign-definite, there may be transient time periods where $\gamma_{\bf k}(t)<0$. The deviation from Gaussianity of some (or all) wave modes may increase during these periods. The present paper considers four-wave systems as an illustrative example, but it is clear that the obtained solutions are more general and apply to wave systems with resonances of any order (one would simply have to use different expressions for the integrals $\gamma_{\bf k}(t)$ and $\eta_{\bf k}(t)$ corresponding to the resonances of the considered order; see e.g. book \cite{naza11}). Note that our solution for the PDF (\ref{solnPst1}) is expressed in terms of the spectrum $n_{\bf k}(t)$ (recall that $\gamma_{\bf k}(t)$ depends on $n_{\bf k}(t)$ via eqn. (\ref{gam})). On the other hand, $n_{\bf k}(t)$ obeys the wave-kinetic equation, which is not easy to solve for non-stationary systems.
However, it is quite straightforward to solve the wave-kinetic equation numerically, after which the resulting $n_{\bf k}(t)$ can be used in the analytical formula for the PDF (\ref{solnPst1}). Since the latter formula is very simple, we believe that it can be very effective in practical calculations, especially in situations where non-Gaussianity is important, e.g., in wave weather forecasts including the prediction of anomalously strong waves -- the so-called freak waves. \begin{acknowledgments} Yeontaek Choi's work is supported by the National Institute for Mathematical Sciences (NIMS) and the Korean Union of Public sector Research and Professional workers (KUPRP). The work of Young-Sam Kwon is supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2013R1A1A2057662). \end{acknowledgments}
\section{Introduction} We call certain semiconductors Nernst elements~\cite{ref1,ref2}; they can be used for the generation of electric power by means of the Nernst effect. As a fundamental study toward power generation with Nernst elements, we are investigating the transport properties of candidate materials in a magnetic field~\cite{ref3,ref4}. By measuring their transport coefficients, we will be able to estimate the efficiency of the energy conversion. \section{Experiment} The thermoelectric effect and the Nernst effect can be written respectively in the form, \begin{eqnarray} {\mbox {\boldmath $E$} } &=& \alpha \cdot {\mbox{\it grad }} T , \label{eq.1} \\ {\mbox {\boldmath $E$} } &=& N \cdot {\mbox {\boldmath $B$} }\times{\mbox{\it grad }}T , \label{eq.2} \end{eqnarray} where {\boldmath $E$} is the electric field, $\alpha$ the thermoelectric power, $T$ the temperature, $N$ the Nernst coefficient and {\boldmath $B$} the magnetic induction. \begin{figure} \epsfxsize=8cm \epsffile{Fig1.eps} \caption{The scheme of the measurement. (a) is for the thermoelectric power and (b) is for the Nernst coefficient.} \label{fig.1} \end{figure} Suppose a sample is a rectangular parallelepiped of length $L,$ width $w$ and thickness $t,$ and a temperature gradient is applied in the direction of $L.$ The scheme is shown in FIG.~\ref{fig.1}. We obtain the thermoelectric power $\alpha$ by detecting the potential difference $V_{\alpha}$ and the temperature difference $\Delta T$ between the edges in the direction of $L.$ The Nernst coefficient $N$ is obtained from the temperature gradient $\Delta T/L,$ the magnetic induction $B$ applied perpendicular to the temperature gradient, and the potential difference $V_{\rm N}$ in the direction perpendicular to both the temperature gradient and the magnetic induction. 
They were calculated from eqs.~(\ref{eq.1}) and (\ref{eq.2}) as follows: \begin{eqnarray} V_{\rm \alpha} &=& \alpha \cdot \left( \frac{\Delta T}{L} \right) , \label{eq.3}\\ V_{\rm N} &=& w N B \cdot \left( \frac{\Delta T}{L} \right) . \label{eq.4} \end{eqnarray} Using eqs.~(\ref{eq.3}) and (\ref{eq.4}), we obtain the following relations: \begin{eqnarray} \alpha &=& \left( \frac{V_{\rm \alpha} \cdot L}{\Delta T} \right) , \label{eq.3.1}\\ N &=& \left( \frac{V_{\rm N} }{B \, \Delta T } \right) \left( \frac{L }{w } \right) . \label{eq.4.1} \end{eqnarray} In this measurement it is important that we used two kinds of shapes for the samples; they are shown in FIG.~\ref{fig.2}. One is called the ``Bridge shape", which has a narrow main body, several legs for the measuring leads, and wide heads on the edges to ensure good contact with the heating or cooling bath. We call the other the ``Fat-Bridge shape", whose body is five times wider than that of the Bridge shape. The samples were cut out from thin wafers with an accuracy of about 0.1 mm by a wire cutter, and their dimensions were measured with a precision of 0.005 mm using a micrometer. The samples were mounted on the sample holder as shown in FIG.~\ref{fig.3}. \begin{figure} \epsfxsize=8cm \epsffile{Fig2.eps} \caption{The shapes of the samples. (a) is the Bridge shape and (b) is the Fat-Bridge shape.} \label{fig.2} \end{figure} \begin{figure} \epsfxsize=8cm \epsffile{Fig3.eps} \caption{Sample holder, which is mounted on the sample mounter and inserted in the central part of the vacuum chamber. Figures (a) and (b) are its top view and side view, respectively.} \label{fig.3} \end{figure} To create the temperature gradient in the samples, the temperatures are controlled by two Cu blocks attached to the edges of the sample. One is heated up to about $100^{\circ}$C by a foil heater and the other is cooled down to about $0^{\circ}$C by an antifreeze mixture of water and ethylene glycol. Temperatures were measured by Chromel-Alumel thermocouples. 
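To make the data reduction explicit, eqs.~(\ref{eq.3}) and (\ref{eq.4}) can be inverted for $\alpha$ and $N$. The Python sketch below is purely illustrative (it is not part of the actual acquisition system), and the sample dimensions and voltage readings in it are hypothetical numbers chosen only to show the arithmetic.

```python
def thermo_power(V_alpha, dT, L):
    """Invert eq. (3): V_alpha = alpha * (dT / L)  =>  alpha = V_alpha * L / dT."""
    return V_alpha * L / dT

def nernst_coeff(V_N, B, dT, L, w):
    """Invert eq. (4): V_N = w * N * B * (dT / L)  =>  N = V_N * L / (w * B * dT)."""
    return V_N * L / (w * B * dT)

# Hypothetical readings: a 10 K difference over a 20 mm long, 4 mm wide sample at 4 T.
alpha = thermo_power(V_alpha=2.0e-3, dT=10.0, L=20.0e-3)            # -> 4.0e-6
N = nernst_coeff(V_N=8.0e-6, B=4.0, dT=10.0, L=20.0e-3, w=4.0e-3)   # -> 1.0e-6
```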
The measuring leads for the voltage signals are 0.5 mm diameter Cu wires welded to the sample. The experiment was performed around room temperature under two kinds of temperature conditions. Under the first condition, the temperature difference between the sample edges is $10^{\circ}$C, and the temperature of the cold side is increased from 0 up to $80^{\circ}$C in steps of $10^{\circ}$C. Under the second condition, the cold side is fixed at $0^{\circ}$C and the hot side at $100^{\circ}$C. The experimental equipment is shown in FIG.~\ref{fig.4}. Using the superconducting magnet coil built into the cryostat, a magnetic field of 0 to 4 Tesla is generated in the central region of the vacuum chamber. Within a cube of 20 mm in the central part of the chamber, the field is stable to within 0.2\% of the field strength at the central point; the measurements of the physical properties were performed in that region. The inner pressure of the chamber was less than $10^{-3}$ Pa. \begin{figure} \epsfxsize=8cm \epsffile{Fig4.eps} \caption{The experimental equipment: superconducting magnet coil, vacuum chamber and sample mounter.} \label{fig.4} \end{figure} The physical phenomena of temperature and electric field are transduced into voltage signals by the thermocouples and leads. As shown in FIG.~\ref{fig.5}, the transduced voltage signals are fed into isolation amplifiers. The amplified signals are then sent to a 16-bit A/D converter plugged into a personal computer and are digitized. The maximum gain of the amplifier is 2000, which allows us to detect voltage signals with a resolution of 0.15 $\mu$V. In this measurement, the precision of the temperature measurement was better than 0.1 K and the relative error of the voltage measurement did not exceed 0.5\%. \begin{figure} \epsfxsize=8cm \epsffile{Fig5.eps} \caption{Data acquisition system.} \label{fig.5} \end{figure} \section{Data analysis and discussion} Figures \ref{fig.6} and \ref{fig.7} show the dependence of the Nernst coefficient and of the thermoelectric power on the magnetic induction, respectively. 
They include data for the conditions in which the temperature pairs of the heating and cooling blocks are set at 0 and $100^{\circ}$C, 10 and $20^{\circ}$C, and 80 and $90^{\circ}$C. In FIG.~\ref{fig.6}, we used \begin{equation} \beta = N \cdot B , \label{eq.5} \end{equation} as a substitute for the Nernst coefficient. \begin{figure} \epsfxsize=8cm \epsffile{Fig6.eps} \caption{The B dependence of $\beta$, which is defined as the product of the Nernst coefficient and the magnetic induction.} \label{fig.6} \end{figure} In very weak and in strong magnetic fields, $\beta$ increased or decreased linearly with the field strength, which means that the Nernst coefficient is nearly constant in those regions. In intermediate fields, there is a transition of $N$ from one constant value to another. In particular, in the case of $80-90^{\circ}$C the sign of the coefficient changed. Except in weak fields in the case of $80-90^{\circ}$C, the sign of the Nernst coefficient was negative in this experiment. These results agree qualitatively with the investigation up to 2 Tesla reported in Ref.~\cite{ref5}. However, we detected a quantitative difference when the shapes of the samples were different. At 4 Tesla, the Nernst coefficients measured on the Fat-Bridge shape were about 12\% smaller than those on the Bridge shape in all cases, and the tendency is similar throughout the strong-field region. In the measurement of the thermoelectric power, all values on the Fat-Bridge shape were smaller than those on the Bridge shape, but the reduction was about 8\%, 5\% and less than 1\% in the cases of $0-100^{\circ}$C, $10-20^{\circ}$C and $80-90^{\circ}$C, respectively. We suppose that the detected differences are only apparent and are due to the difference in the samples' shapes; we call this the geometric contribution. To interpret this contribution and the inner state of the samples, a more detailed analysis and a theoretical investigation will be carried out in the near future. 
Figure \ref{fig.8} shows the dependence of the thermoelectric power on the applied direction of the magnetic induction, obtained from the measurement of the ``Fat-Bridge shape" at $\Delta T = 100^{\circ}$C. The sign of the magnetic induction indicates its applied direction. Figure~\ref{fig.8} includes the measured data, the analytically derived component and the analytically excluded one. The measured data were not symmetric with respect to the direction of the magnetic induction. We suppose this is because the positions of the measuring leads differed slightly in the direction perpendicular to the heat flux, so that the Nernst effect was detected to a small extent. That component is indicated as the excluded $\beta$ in FIG.~\ref{fig.8}. By a similar mechanism, the thermoelectric effect was detected in the measurement of the Nernst effect; it appeared as an apparent offset voltage in the measurement. This is indicated in FIG.~\ref{fig.9}, which was obtained from the measurement of the ``Fat-Bridge shape" at $\Delta T = 100^{\circ}$C. These phenomena were detected in all our measurements. \begin{figure} \epsfxsize=8cm \epsffile{Fig7.eps} \caption{The B dependence of the thermoelectric power.} \label{fig.7} \end{figure} \begin{figure} \epsfxsize=8cm \epsffile{Fig8.eps} \caption{Measured $\alpha$, analytically derived $\alpha$ and excluded $\beta$.} \label{fig.8} \end{figure} \begin{figure} \epsfxsize=8cm \epsffile{Fig9.eps} \caption{Measured $\beta$, analytically derived $\beta$ and excluded $\alpha.$} \label{fig.9} \end{figure} To exclude the contaminating components from the measured data, we calculate $\alpha$ and $\beta$ as follows: \begin{eqnarray} \alpha&=& \frac{1}{2} \left\{ \left( \frac{V_{\rm \alpha} (B) }{\Delta T } \right) + \left( \frac{V_{\rm \alpha} (-B) }{\Delta T } \right) \right\}, \label{eq.6} \\ \beta&=& \frac{1}{2} \left\{ \left( \frac{V_{\rm N} (B) }{\Delta T } \right) - \left( \frac{V_{\rm N} (-B) }{\Delta T } \right) \right\}. 
\label{eq.7} \end{eqnarray} The validity of this procedure rests on the properties that the thermoelectric effect is even and the Nernst effect is odd with respect to the direction of the magnetic induction. \section{Conclusion} When a magnetic field is applied to a semiconductor in which a heat flux exists, two components of the electric field are generated inside it. One originates from the thermoelectric effect and the other from the Nernst effect. The mixture of these two effects generates the geometric contributions to the measurement of the physical properties, as we detected in this measurement using the ``Bridge shape" and the ``Fat-Bridge shape". This situation is similar to the measurement of the Hall effect~\cite{ref6,ref7}. When we determine the physical properties by measurement, the geometric contributions must be taken into account. We shall need to develop a calculation code to interpret the transport properties, including the heat and current fluxes, in more detail and self-consistently. \acknowledgments \noindent The authors are grateful to Mr. Nishimura of the Nishimura factory corporation for the processing of the samples, to Dr. Tatsumi of Sumitomo Electric Industries for providing the semiconductors, and to Prof. Kuroda of Nagoya University for much useful support. We appreciate the helpful comments of Prof. Iiyoshi and Prof. Motojima of the National Institute for Fusion Science.
\section{Introduction} \label{sec:intro} Rotational tunnelling takes place when groups of atoms in a molecule rotate, as an almost rigid structure, about a single bond. When the barriers between different nuclear configurations are sufficiently high some of the lowest states exhibit close energies and the transition between them can be investigated by suitable spectroscopies\cite{CHHP84,PH97,H99}. A typical example is provided by the methyl group ($-CH_{3}$). Rotational symmetry is commonly studied by means of simple models based on effective Hamiltonians for properly chosen restricted rigid rotors\cite{CHHP84,PH97,H99,HH85,KS16,KS17}. In some cases a single-rotor model provides an acceptable description of the experimental data but in others one has to resort to a set of coupled rotors. The Schr\"{o}dinger equation for such models has been solved in more than one way\cite{HH85,KS17}. In this paper we are interested in a well known algorithm for the solution of band matrices\cite{Z83,Z84,ZST85,FOT86} that may be a convenient alternative to the iterative matrix inversion proposed several years ago\cite{HH85}. Although today the diagonalization of a band matrix offers no difficulty we think that such alternative methods may still be of interest. In addition to the comparison of the methods for solving the eigenvalue equation we want to point out that the case of a small rotational barrier (or large quantum numbers) may lead to numerical errors due to almost degenerate rotational states. In addition to what has just been mentioned we will also discuss a space-time ($\mathcal{ST}$) symmetric non-Hermitian version of the effective Hamiltonian for the restricted rotor that arises when the barrier height is allowed to be purely imaginary. This kind of problem has been intensely studied in recent years (see \cite{B07} for an earlier review on the issue and also \cite{BK11,FG14} for closely related models). 
In section~\ref{sec:model} we discuss the problem of nearly degenerate energies by means of a simple rigid-rotor model with symmetry $C_{3}$. In section~\ref{sec:parity} we show how to circumvent this difficulty by means of symmetry arguments. In section~\ref{sec:S-T_symmetry} we consider the $\mathcal{ST}$-symmetric non-Hermitian counterpart and determine the regions of exact and broken $\mathcal{ST}$ symmetry. Finally, in section~\ref{sec:conclusions} we summarize the main results of the paper and draw conclusions. \section{Restricted-rotor model} \label{sec:model} For concreteness, in this paper we consider the rotation of a group of atoms hindered by a potential $V(\phi )$ with periodicity $V(\phi +2\pi /3)=V(\phi )$. It is commonly expanded in a Fourier series of the form\cite{CHHP84,PH97,H99} \begin{equation} V(\phi )=\sum_{j=0}^{\infty }V_{3j}\cos (3j\phi ). \label{eq:V_Fourier} \end{equation} For the present discussion it is sufficient to consider just the leading term so that the hindered rotator is given by the effective Hamiltonian operator \begin{equation} H=-B\frac{d^{2}}{d\phi ^{2}}+V(\phi ),\;V(\phi )=V_{3}\cos \left( 3\phi \right) , \label{eq:H} \end{equation} where the magnitude of the rotational constant $B=\hbar ^{2}/(2I)$ is determined by the moment of inertia $I$ of the rotor. It is convenient to measure the energy $E$ in units of $B$ so that the dimensionless Schr\"{o}dinger equation becomes \begin{eqnarray} H\psi &=&\epsilon \psi , \nonumber \\ H &=&-\frac{d^{2}}{d\phi ^{2}}+V(\phi ),\;V(\phi )=\lambda \cos \left( 3\phi \right) , \nonumber \\ \epsilon &=&\frac{E}{B},\;\lambda =\frac{V_{3}}{B}. \label{eq:Schro_dim} \end{eqnarray} Since the potential is periodic with period $2\pi /3$ the eigenfunctions form a basis for the irreducible representations $A$ and $E$ of the symmetry group $C_{3}$. 
Therefore, the Fourier expansions for the eigenfunctions are of the form \begin{equation} \psi _{s}(\phi )=\sum_{j=-\infty }^{\infty }c_{j,s}f_{j,s}(\phi ),\;f_{j,s}(\phi )=\frac{1}{\sqrt{2\pi }}e^{i\left( 3j+s\right) \phi },\;s=0,\pm 1, \label{eq:Fourier_exp} \end{equation} where the subscripts $s=0$ and $s=\pm 1$ correspond to the symmetry species $A$ and $E$, respectively. By means of the Fourier expansions (\ref{eq:Fourier_exp}) the Schr\"{o}dinger equation (\ref{eq:Schro_dim}) becomes a tridiagonal secular equation \begin{equation} \lambda c_{m-1,s}+2\left[ \epsilon -\left( 3m+s\right) ^{2}\right] c_{m,s}+\lambda c_{m+1,s}=0,\;m=0,\pm 1,\pm 2,\ldots . \label{eq:recurrence_relation} \end{equation} In practice we truncate the secular equation (\ref{eq:recurrence_relation}) and solve a matrix eigenvalue problem of dimension, say, $2N+1$. However, some time ago H\"{a}usler and H\"{u}ller\cite{HH85} proposed an iterative method, based on matrix inversion, that avoids matrix diagonalization. Today, such a diagonalization can be carried out most easily even on the most modest personal computer. Nonetheless, we want to point out an even simpler strategy proposed some time ago\cite{Z83,Z84,ZST85,FOT86} that consists in solving the secular equation (\ref{eq:recurrence_relation}) as a recurrence relation. The truncation of the secular equation just mentioned is equivalent to setting the boundary conditions $c_{m,s}=0$ for $|m|>N$ in the recurrence relation (\ref{eq:recurrence_relation}). Therefore, if we set $c_{-N,s}=1$ we can calculate $c_{j,s}$ for $j=-N+1,-N+2,\ldots $ so that the roots of $c_{N+1,s}(\epsilon )=0$ are exactly the roots of the characteristic polynomial of the secular matrix of dimension $2N+1$ that yield estimates of the energies of the problem. In what follows $\epsilon _{0,s}(\lambda )<\epsilon _{1,s}(\lambda )<\epsilon _{2,s}(\lambda )<\ldots $ denote the energies of the hindered rotor. 
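The recurrence-relation strategy is easy to sketch numerically. The following Python fragment is our illustration only (truncation size, energy window and grid density are arbitrary choices): it propagates relation (\ref{eq:recurrence_relation}) upward from $c_{-N-1}=0$, $c_{-N}=1$ and brackets the sign changes of $c_{N+1}(\epsilon )$. It also exposes the numerical difficulty discussed below: quasi-degenerate pairs of roots that fall inside a single grid cell produce no sign change and are silently missed.

```python
import numpy as np
from scipy.optimize import brentq

def c_last(eps, lam, s, N=20):
    """Propagate lam*c_{m-1} + 2*(eps - (3m+s)**2)*c_m + lam*c_{m+1} = 0
    from c_{-N-1} = 0, c_{-N} = 1; the zeros of the returned c_{N+1}(eps)
    are the eigenvalues of the (2N+1)-dimensional truncation."""
    c_prev, c = 0.0, 1.0
    for m in range(-N, N + 1):
        c_next = -(lam * c_prev + 2.0 * (eps - (3 * m + s) ** 2) * c) / lam
        scale = max(abs(c), abs(c_next), 1.0)   # keep numbers in range; zeros unchanged
        c_prev, c = c / scale, c_next / scale
    return c

def levels_by_shooting(lam, s, N=20, emin=-10.0, emax=100.0, npts=4000):
    """Bracket sign changes of c_{N+1}(eps) on a grid and refine with brentq.
    Quasi-degenerate pairs inside one grid cell give no sign change and are missed."""
    grid = np.linspace(emin, emax, npts)
    vals = [c_last(e, lam, s, N) for e in grid]
    roots = []
    for a, b, fa, fb in zip(grid[:-1], grid[1:], vals[:-1], vals[1:]):
        if fa * fb < 0.0:
            roots.append(brentq(c_last, a, b, args=(lam, s, N)))
    return roots
```

For $\lambda =1$, $s=0$ this grid search returns only the three well-separated levels below $\epsilon =100$; the quasi-degenerate $A$ pairs near $\epsilon =36$ and $\epsilon =81$ require either an extremely fine grid or the parity separation introduced later in the paper.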
When $\lambda =0$ the $A$ states are $\epsilon _{0,0}^{(0)}=0$, $\epsilon _{2n-1,0}^{(0)}=\epsilon _{2n,0}^{(0)}=9n^{2}$, $n=1,2,\ldots $. On the other hand, the $E$ states are doubly degenerate for all $\lambda \geq 0$; for $\lambda =0$ this is obvious since $(3n-1)^{2}=(-3n+1)^{2}$, and the two components are treated separately. In other words, the hindered potential splits the doubly degenerate $A$ states while the $E$ ones can be treated as nondegenerate with symmetry quantum numbers $s=-1$ ($E_{a}$) and $s=1$ ($E_{b}$). For this reason the calculation of the latter eigenvalues is much simpler. When $\lambda $ is sufficiently small the eigenvalues $\epsilon _{2n-1,0}(\lambda )$ and $\epsilon _{2n,0}(\lambda )$ are quasi-degenerate, which may make their numerical calculation somewhat difficult. An example is given in Figure~\ref{fig:P(epsilon)}, which shows the characteristic polynomial $P(\epsilon )$ for $\lambda =0.1$ properly scaled to reduce its size. We clearly see that the splitting of the degenerate states is considerably smaller for $n=2$ than for $n=1$. In general, the magnitude of the splitting $\epsilon _{2n,0}(\lambda )-\epsilon _{2n-1,0}(\lambda )$ decreases as $n$ increases so that the problem also appears for greater values of $\lambda $ if the quantum number is large enough. Some algorithms may fail to find the almost identical roots of $P(\epsilon )$ if the accuracy of the calculation is insufficient. For $\lambda =0.1$ the corresponding eigenvalues are $\epsilon _{1,0}=8.99990740760586$, $\epsilon _{2,0}=9.00046293268167$, $\epsilon _{3,0}=36.0000370368357$ and $\epsilon _{4,0}=36.0000370373120$. The application of perturbation theory is most revealing. 
When $\lambda \neq 0$ the perturbation expansions for the first $A$ eigenvalues are \begin{eqnarray} \epsilon _{0,0} &=&-\frac{1}{18}\lambda ^{2}+\frac{7}{23328}\lambda ^{4}-\frac{29}{8503056}\lambda ^{6}+\ldots , \nonumber \\ \epsilon _{1,0} &=&9-\frac{1}{108}\lambda ^{2}+\frac{5}{2519424}\lambda ^{4}-\frac{289}{293865615360}\lambda ^{6}+\ldots , \nonumber \\ \epsilon _{2,0} &=&9+\frac{5}{108}\lambda ^{2}-\frac{763}{2519424}\lambda ^{4}+\frac{1002401}{293865615360}\lambda ^{6}+\ldots , \nonumber \\ \epsilon _{3,0} &=&36+\frac{1}{270}\lambda ^{2}-\frac{317}{157464000}\lambda ^{4}+\frac{10049}{10044234900000}\lambda ^{6}+\ldots , \nonumber \\ \epsilon _{4,0} &=&36+\frac{1}{270}\lambda ^{2}+\frac{433}{157464000}\lambda ^{4}-\frac{5701}{10044234900000}\lambda ^{6}+\ldots , \nonumber \\ \epsilon _{5,0} &=&81+\frac{1}{630}\lambda ^{2}+\frac{187}{8001504000}\lambda ^{4}-\frac{5861633}{342986069260800000}\lambda ^{6}+\ldots , \nonumber \\ \epsilon _{6,0} &=&81+\frac{1}{630}\lambda ^{2}+\frac{187}{8001504000}\lambda ^{4}+\frac{6743617}{342986069260800000}\lambda ^{6}+\ldots . \label{eq:PT_series_A} \end{eqnarray} We see that $\epsilon _{2n,0}(\lambda )-\epsilon _{2n-1,0}(\lambda )=O(\lambda ^{2n})$. We did not apply the standard perturbation theory for degenerate states\cite{M76} because it is rather impractical in the present case; instead, we obtained the perturbation expansions (\ref{eq:PT_series_A}) from the characteristic polynomial for sufficiently large values of $N$. \section{Parity} \label{sec:parity} In order to solve the problem posed by the quasi-degenerate $A$ states we take into account that the potential is parity invariant: $V(-\phi )=V(\phi ) $. If $P$ denotes the parity operator then $P\psi _{s}(\phi )=\psi _{s}(-\phi )=\psi _{-s}(\phi )$ transforms states $E_{a}$ into $E_{b}$ but the $A$ states remain as such. 
This fact allows us to separate the latter states into even and odd ones: \begin{eqnarray} \psi _{A_{+}}(\phi ) &=&c_{0}\frac{1}{\sqrt{2\pi }}+\sum_{j=1}^{\infty }c_{j}\frac{1}{\sqrt{\pi }}\cos (3j\phi ), \nonumber \\ \psi _{A_{-}}(\phi ) &=&\sum_{j=1}^{\infty }c_{j}\frac{1}{\sqrt{\pi }}\sin (3j\phi ). \label{eq:psi_A+-} \end{eqnarray} In this way we have a secular equation \begin{eqnarray} \epsilon c_{0}+\frac{\lambda }{\sqrt{2}}c_{1} &=&0, \nonumber \\ \frac{\lambda }{\sqrt{2}}c_{0}+(\epsilon -9)c_{1}+\frac{\lambda }{2}c_{2} &=&0, \nonumber \\ \frac{\lambda }{2}c_{n-1}+\left( \epsilon -9n^{2}\right) c_{n}+\frac{\lambda }{2}c_{n+1} &=&0,\;n=2,3,\ldots , \label{eq:rec_rel_A+} \end{eqnarray} for the $A_{+}$ states and another one \begin{equation} \frac{\lambda }{2}c_{n-1}+\left( \epsilon -9n^{2}\right) c_{n}+\frac{\lambda }{2}c_{n+1}=0,\;n=2,3,\ldots , \label{eq:rec_rel_A-} \end{equation} for the $A_{-}$ states. This analysis based on parity is similar to using the symmetry point group $C_{3v}$, where the states labelled here as $A_{+}$ and $A_{-}$ belong to the symmetry species $A_{1}$ and $A_{2}$, respectively, and the effect of the parity operator is produced by one of the reflection planes $\sigma _{v}$\cite{C90}. In this way, the recurrence relations (or the corresponding tri-diagonal matrices) do not exhibit close roots for any value of $\lambda $ and the calculation is considerably simpler. If we choose $c_{j}=0$ for $j<0$ and $c_{0}=1$ we can calculate $c_{j}$ for all $j>0$ and obtain the $A_{+}$ eigenvalues from the termination condition $c_{N}(\epsilon )=0$ for sufficiently large $N$. Exactly in the same way, with $c_{j}=0$ for $j<1$ and $c_{1}=1$, we obtain the $A_{-}$ energies of the restricted rotor. The perturbation expansions for the first eigenvalues suggest that $\epsilon _{2n-1,0}$ is $A_{+}$ while $\epsilon _{2n,0}$ is $A_{-}$. 
For large values of $\lambda $ the eigenvalues behave asymptotically as \begin{equation} \epsilon _{v}=-\lambda +3\sqrt{\frac{\lambda }{2}}(2v+1)+\mathcal{O}(1). \label{eq:E_asymptotic} \end{equation} Figure~\ref{fig:EPS_A_E} shows the lowest eigenvalues for states of symmetry $A$ and $E$ calculated with the expressions indicated above. \section{Space-time symmetry} \label{sec:S-T_symmetry} The unitary operator $U=C_{6}$ that produces a rotation by an angle of $2\pi /6$\cite{C90} leads to the transformation $UV(\phi )U^{-1}=V(\phi +\pi /3)=-V(\phi )$ and $UH(\lambda )U^{-1}=H(-\lambda )$. From its application to the eigenvalue equation $H(\lambda )\psi _{n}=\epsilon _{n}(\lambda )\psi _{n}$, $UH(\lambda )U^{-1}U\psi _{n}=H(-\lambda )U\psi _{n}=\epsilon _{n}(\lambda )U\psi _{n}$, we conclude that $\epsilon _{n}(\lambda )$ is also an eigenvalue $\epsilon _{m}(-\lambda )$ of $H(-\lambda )$. Since $\psi _{n}$ and $U\psi _{n}$ belong to the same symmetry species ($A_{+}$, $A_{-}$, $E_{a}$, $E_{b}$) and $\lim\limits_{\lambda \rightarrow 0}\epsilon _{m}(-\lambda )=\lim\limits_{\lambda \rightarrow 0}\epsilon _{n}(\lambda )$ then we conclude that $m=n$ and $\epsilon _{n}(-\lambda )=\epsilon _{n}(\lambda )$, which explains why the perturbation expansions for the eigenvalues of $H(\lambda )$ have only even powers of $\lambda $: \begin{equation} \epsilon _{n}(\lambda )=\sum_{j=0}^{\infty }\epsilon _{n}^{(2j)}\lambda ^{2j}. \label{eq:epsilon_lambda_series} \end{equation} This result suggests that $\epsilon _{n}(ig)$ is real for $g$ real, at least for sufficiently small values of $|g|$. This conclusion is consistent with the fact that $H(ig)$ is $\mathcal{ST}$ symmetric\cite{KC08,F15} with respect to the transformation given by the antiunitary operator\cite{W60} $UT$, as follows from $UTH(ig)TU^{-1}=H(ig)$, where $T$ is the time-reversal operator $THT=H^{*}$ and the asterisk denotes complex conjugation. 
The antiunitary symmetry tells us that the eigenvalues are either real or appear as pairs of complex conjugate numbers. If the antiunitary symmetry is exact ($A\psi =a\psi $) then the eigenvalues are real, otherwise we say that it is broken. In the present case we know, from the analysis based on perturbation theory, that this symmetry is exact for sufficiently small values of $|g|$. A straightforward calculation, like the one in the preceding section, confirms that the perturbation series for the $E$ states also have only even powers of $\lambda $: \begin{eqnarray} \epsilon _{0,\pm 1} &=&1-\frac{1}{10}\lambda ^{2}+\frac{83}{32000}\lambda ^{4}-\frac{4581}{30800000}\lambda ^{6}+\ldots , \nonumber \\ \epsilon _{1,\pm 1} &=&4+\frac{1}{14}\lambda ^{2}-\frac{143}{54880}\lambda ^{4}+\frac{2601}{17479280}\lambda ^{6}+\ldots , \nonumber \\ \epsilon _{2,\pm 1} &=&16+\frac{1}{110}\lambda ^{2}+\frac{383}{37268000}\lambda ^{4}-\frac{72621}{958253450000}\lambda ^{6}+\ldots , \nonumber \\ \epsilon _{3,\pm 1} &=&25+\frac{1}{182}\lambda ^{2}+\frac{563}{385828352}\lambda ^{4}+\frac{144549}{30352923537664}\lambda ^{6}+\ldots , \nonumber \\ \epsilon _{4,\pm 1} &=&49+\frac{1}{374}\lambda ^{2}+\frac{1043}{8370179840}\lambda ^{4}+\frac{90081}{3366013416487040}\lambda ^{6}+\ldots . \label{eq:PT_series_E} \end{eqnarray} In this case the interaction potential does not break the two-fold degeneracy. Since $\left\langle \cos (3\phi )\psi \right| \left. \cos (3\phi )\psi \right\rangle \leq \left\langle \psi \right| \left. \psi \right\rangle $ for all $\psi $ the series (\ref{eq:epsilon_lambda_series}) has a finite radius of convergence\cite{RS78} and $\epsilon _{n}(ig)$ will be real in the region of analyticity. More precisely, a given eigenvalue $\epsilon (ig)$ is real for all $|g|<|g_{e}|$ where $g_{e}$ is an exceptional point where two eigenvalues coalesce, as shown in Figure~\ref{fig:EE_EA} for the two lowest eigenvalues of symmetry $E$ and $A$. 
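The transition between exact and broken $\mathcal{ST}$ symmetry can be located numerically with very modest means. The sketch below is our own illustration (basis size, threshold and bisection bracket are arbitrary choices): it builds the truncated $E$-symmetry block of $H(ig)$, whose off-diagonal elements become $ig/2$, and bisects for the field strength at which the lowest pair of eigenvalues acquires an imaginary part.

```python
import numpy as np

def e_block(g, s=1, N=15):
    """Truncated E block (s = +1) of H(i g): diagonal (3m+s)^2, off-diagonal i*g/2."""
    m = np.arange(-N, N + 1)
    H = np.diag(((3 * m + s) ** 2).astype(complex))
    off = np.full(2 * N, 0.5j * g)
    return H + np.diag(off, 1) + np.diag(off, -1)

def max_imag(g):
    """Largest |Im eigenvalue|; zero (up to numerical noise) in the ST-exact phase."""
    return np.abs(np.linalg.eigvals(e_block(g)).imag).max()

def exceptional_point(lo=2.5, hi=3.5, thresh=1e-3, iters=60):
    """Bisect for the g at which the eigenvalues first turn complex."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if max_imag(mid) > thresh else (mid, hi)
    return 0.5 * (lo + hi)
```

With these settings the bisection lands near $g\approx 2.9356$, consistent with the value of $|g_{e1}|$ quoted in the text.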
For $|g|>|g_{e}|$ the coalescing eigenvalues become a pair of complex conjugate numbers. There are simple and efficient numerical methods for the calculation of the exceptional points for quantum mechanical models similar to this one\cite{F01}; for the first two $E$ and $A$ states shown in Figure~\ref{fig:EE_EA} we obtained $|g_{e1}|=2.9356105095073260590$, $\epsilon (g_{e1})=2.6226454301444952679$ and $|g_{e2}|=6.6094587620331389653$, $\epsilon (g_{e2})=4.6995725311868146666$, respectively. Figure~\ref{fig:EES_EAS} shows that the exceptional points increase with the quantum number, which leads to the conclusion that the $\mathcal{ST}$ symmetry is exact for all $|g|<|g_{e1}|$. Some time ago Bender and Kalveks\cite{BK11} and Fern\'{a}ndez and Garcia\cite{FG14} discussed other space-time-symmetric hindered rotors with somewhat different symmetries and calculated several exceptional points. In particular, the latter authors estimated the trend of the location of the exceptional points in terms of the quantum numbers of the coalescing states. \section{Conclusions} \label{sec:conclusions} We have shown that the use of parity considerably simplifies the calculation of the eigenvalues with eigenfunctions of symmetry $A$ of the restricted rigid rotor with $C_{3}$ symmetry. This strategy is particularly useful in the case of small barriers or large quantum numbers. We are aware that this situation is not commonly encountered in most physical applications of the model\cite{CHHP84,PH97,H99,HH85,KS16,KS17} but we think that it is worth taking into account the difficulties that may arise. We have also shown that this simple model exhibits a non-Hermitian $\mathcal{ST}$-symmetric counterpart with real eigenvalues for sufficiently small $|\lambda |=|g|$ and obtained the exceptional point $g_{e}$ that determines the phase transition between exact and broken $\mathcal{ST}$ symmetry. 
In this way we have added another member to the family of similar problems intensely investigated in recent years\cite{B07} (and references therein).
\section{\label{sec:intro}Introduction} The recent availability of cold to ultracold polar dimers in the vibrational and rotational ground state of their singlet electronic ground potential \cite{ni08,deiglmayr:133004} represents a breakthrough towards the control of all molecular degrees of freedom, i.e., the center of mass, electronic, rotational and vibrational motions, and towards the ultimate goal of obtaining a polar condensate. The experimental achievements have been accompanied by significant theoretical efforts to understand the intriguing physical phenomena expected for ultracold polar quantum gases due to their anisotropic and long-range dipole-dipole interaction. In particular, it has been analyzed how external fields can control and manipulate the scattering properties \cite{gorshkov:073201,ticknor:133202,tscherbul:194311,avdeenkov:022707} and the chemical reaction dynamics \cite{krems_pccp}, or how to use these molecules as tools for quantum computational devices \cite{demille02,yelin06,zoller_qc}. Different approaches to produce cold and ultracold molecules have been explored \cite{pellegrini:053201,cote06,gonzalez:023402,gonzalez07_2,kotochigova07}. For further work and references we refer the reader to the comprehensive reviews \cite{speciss2004_2,dulieu:jpb_39}. The most widespread techniques to produce ultracold polar molecules are the photoassociation of two ultracold atoms \cite{stwalley99,jones:483} and the tuning of the atomic interactions via magnetically induced Feshbach resonances \cite{Koehle:rmp06}. Alternative pathways explore the ability to manipulate the interaction between atoms by inducing optical Feshbach resonances. Based on the same principle as the magnetically induced Feshbach resonances, these appear when two colliding ultracold atoms are coupled to a bound state of the corresponding molecular system by a radiation field. 
Initially, there were several theoretical proposals to obtain these resonances with the help of radio-frequency, static electric, and electromagnetic fields \cite{PhysRevLett.77.2913,PhysRevA.56.1486,PhysRevLett.81.4596,kokoouline:3046}. In addition, it has been demonstrated that a combination of a magnetic and a static electric field can induce Feshbach resonances in a binary mixture of Li and Cs atoms \cite{krems06,li:032709}, and that a suitable combination of these two fields can tune the relevant interaction parameters, such as the width and the open-channel scattering length of these resonances \cite{marcelis:153201}. The existence of these optically induced resonances has been experimentally demonstrated for different atomic species by tuning the laser frequency near a photoassociation resonance \cite{PhysRevLett.85.4462,PhysRevLett.93.123001,thalhammer:033403,enomoto:203201}. With the above experimental techniques, the molecules are usually produced in a highly excited vibrational level close to the dissociation threshold of an electronic state. These vibrational states are exposed to external fields and most of their overall probability is located at the outermost hump of their probability density. In the present work, we investigate the last few, most weakly bound states of the $\textrm{X}^1\Sigma^+$ electronic ground state of a polar molecule in a strong static electric field. We perform a full rovibrational investigation of the field-dressed nuclear dynamics including the interaction of the field with the molecular electric dipole moment and polarizability. The LiCs dimer is a prototype system and will be used here. This choice is based on the experimental interest in this system and the availability of its molecular polarizabilities \cite{deiglmayr:064309}. 
The present work completes our previous investigations of the effects of an electric field on this system, where we have analyzed the rovibrational spectrum, the radiative decay properties, and the formation of these ultracold dimers via single-photon photoassociation from the continuum into the electronic ground state \cite{gonzalez07_2,gonzalez:023402, gonzalez06,mayle06,gonzalez08}. Specifically, we analyze the binding energies and the expectation values $\langle\cos\theta\rangle$, $\langle \mathbf{J}^2\rangle$ and $\langle R\rangle$ of states lying in the spectral region with binding energies smaller than $0.28$ cm$^{-1}$ and vanishing azimuthal quantum number in the very strong field regime. We demonstrate that both the rotational and the vibrational dynamics are significantly affected by the field. Indeed, the vibrational motion is squeezed or stretched to minimize the energy, depending on the rotational degree of excitation and the field strength. At such strong fields the nuclear spectrum exhibits several avoided crossings between energetically adjacent states, which lead to a strongly distorted rovibrational dynamics. The latter might be directly observable when imaged by photodissociation experiments. Beyond this, (magnetically induced) avoided crossings in Cs$_2$ have been used to construct a molecular St\"uckelberg interferometer \cite{mark:113201}. In addition, we show that by tuning the electric field strength a dissociation channel is opened, i.e., a weakly bound molecular state with low-field-seeking character is shifted to the atomic continuum by increasing the field strength. Of course, the reverse process is also possible, and two free atoms can be brought into a molecular bound state by lowering the field strength. This might be of interest for controlling the collisional dynamics of an atomic/molecular cold gas by using either very strong static (micro-)electric fields or strong quasistatic, i.e., time-dependent, fields.
\section{\label{sec:theo} The Rovibrational Hamiltonian} We consider a heteronuclear diatomic molecule in its $^1\Sigma^+$ electronic ground state exposed to a homogeneous and static electric field. Our study addresses exclusively the spin-singlet electronic ground state within a non-relativistic treatment, for which relativistic corrections can be neglected. We assume that for the considered regime of field strengths perturbation theory holds for the description of the interaction of the field with the electronic structure, whereas a nonperturbative treatment is indispensable for the corresponding nuclear dynamics. In addition, we take into account the interaction of the field with the molecule via its dipole moment and polarizability, thereby neglecting higher-order contributions due to hyperpolarizabilities. Thus, in the framework of the Born-Oppenheimer approximation the rovibrational Hamiltonian reads \begin{equation} \label{eq:rotvib_hamiltonian} H= T_R +\frac{\hbar^2\mathbf{J}^2(\theta,\phi)}{2\mu R^2} + V(R)-FD(R)\cos\theta -\frac{F^2}{2} \left[\alpha_\bot(R)\sin^2\theta+\alpha_\parallel(R)\cos^2\theta\right], \end{equation} where $R$ and $\theta, \phi$ are the internuclear distance and the Euler angles, respectively, and we use the molecule-fixed frame with the coordinate origin at the center of mass of the nuclei. $T_R$ is the vibrational kinetic energy, $\hbar\mathbf{J}(\theta,\phi)$ is the orbital angular momentum, $\mu$ is the reduced mass of the nuclei, and $V(R)$ is the field-free electronic potential energy curve (PEC). The electric field is taken to be oriented along the $z$-axis of the laboratory frame with strength $F$.
The last three terms provide the interaction between the electric field and the molecule via its permanent electronic dipole moment function (EDMF) $D(R)$ and its polarizability, with $\alpha_\parallel(R)$ and $\alpha_\bot(R)$ being the polarizability components parallel and perpendicular to the molecular axis, respectively. In the presence of the electric field, the dissociation threshold changes and is given by the quadratic Stark shift of the free atoms, i.e., $E_{DT}(F)=-\frac{F^2}{2}(\alpha_{1}+\alpha_{2})$, with $\alpha_{i}$, $i=1,2$, being the polarizabilities of the free atoms. In the presence of the electric field, only the azimuthal symmetry of the molecular wavefunction is preserved, and therefore only the magnetic quantum number $M$ remains a good quantum number. In this work we focus on levels with vanishing magnetic quantum number $M=0$. For ease of reference, we will label the electrically dressed states by means of their field-free vibrational and rotational quantum numbers $(\nu,J)$. Let us briefly investigate under which conditions the contribution of the molecular polarizabilities can be neglected in the Hamiltonian (\ref{eq:rotvib_hamiltonian}). For simplicity, and without loss of generality, we use the effective rotor approach \cite{gonzalez04}, assuming that the rotational and vibrational energy scales differ significantly and can therefore be separated adiabatically, and that the field influence on the vibrational motion is very small and can consequently be treated by perturbation theory.
Then, in the framework of this approximation the rovibrational Hamiltonian (\ref{eq:rotvib_hamiltonian}) is reduced to \begin{equation} \label{hamil_era} H_\nu^{ERA}= B_\nu\mathbf{J}^2 -F\langle D\rangle_\nu^{0}\cos\theta -\frac{F^2}{2}\left[\langle\alpha_\bot\rangle_\nu^{0}+ \langle \Delta\alpha\rangle_\nu^{0}\cos^2\theta\right] +E_{\nu00}^{0}, \end{equation} where $B_\nu =\frac{\hbar^2}{2\mu}\langle R^{-2}\rangle_\nu^{0}$ is the field-free rotational constant of the state with quantum numbers $\nu$, $J=0$ and $M=0$, $\psi_{\nu00}^{0}(R)$ and $E_{\nu00}^{0}$ are the corresponding vibrational wave function and energy, respectively, and we encounter the expectation values $\langle R^{-2}\rangle_\nu^{0}=\langle\psi_{\nu 0 0}^{0}|R^{-2}|\psi_{\nu 0 0}^{0}\rangle$, $\langle D \rangle_\nu^{0} = \langle\psi_{\nu 0 0}^{0}|D(R)|\psi_{\nu 0 0}^{0}\rangle$, $\langle\alpha_\bot\rangle_\nu^{0}= \langle\psi_{\nu 0 0}^{0}|\alpha_\bot(R)|\psi_{\nu 0 0}^{0}\rangle$, and $\langle \Delta\alpha\rangle_\nu^{0}= \langle \psi_{\nu 0 0}^{0}|\alpha_\parallel(R)-\alpha_\bot(R)|\psi_{\nu 0 0}^{0}\rangle$. Within this approach, at a certain field strength $F$ the interaction due to the polarizability can be neglected in the effective rotor Hamiltonian (\ref{hamil_era}) if $\left| \frac{2\langle D\rangle_\nu^{0}}{F\langle \alpha_\bot \rangle_\nu^{0}} \right|\gg 1$ and $\left| \frac{2\langle D\rangle_\nu^{0}}{F\langle \Delta\alpha\rangle_\nu^{0}} \right|\gg 1$. To analyze the very weak field regime, we rescale the effective rotor Hamiltonian (\ref{hamil_era}) with $B_\nu$, and assume that the ratios $\frac{F}{B_\nu}\langle D \rangle_\nu^{0}$, $\frac{F^2}{2B_\nu}\langle \alpha_\bot \rangle_\nu^{0}$, and $\frac{F^2}{2B_\nu}\langle \Delta\alpha \rangle_\nu^{0}$ are smaller than the rescaled field-free rotational kinetic energy $J(J+1)$.
Then, for a certain state with quantum numbers $\nu$, $J$ and $M$, time-independent perturbation theory provides the following second-order correction to the field-free energy \begin{equation} \label{eq:pertur_the} E^{(2)}_{\nu,J,M}=\left[ A_{JM}\frac{\left(\langle D \rangle_\nu^{0}\right)^2}{B_\nu} -\frac{1}{2}\langle \alpha_\bot \rangle_\nu^{0} -\frac{1}{2}\langle \Delta\alpha \rangle_\nu^{0}C_{JM}\right]F^2, \end{equation} where the angular coefficients \cite{meyenn} are given by \begin{eqnarray} \label{eq:p_t_dipole} A_{JM}&=&\frac{J(J+1)-3M^2}{2J(J+1)(2J+1)(2J+3)} \quad \rm{for} \quad J>0 \nonumber \\ A_{00}&=&-\frac{1}{6} \end{eqnarray} and \begin{equation} \label{eq:p_t_pola} C_{JM}=\frac{(J+1)^2-M^2}{(2J+1)(2J+3)}+ \frac{J^2-M^2}{(2J+1)(2J-1)}. \end{equation} This second-order correction to the rotational energy depends on the molecular system and on the symmetry of the considered state through the expectation values $\langle D \rangle_\nu^{0}$, $\langle \alpha_\bot \rangle_\nu^{0}$ and $\langle \Delta\alpha \rangle_\nu^{0}$. In the perturbative regime, the polarizability terms can be neglected if $\left| \frac{2(\langle D\rangle_\nu^{0})^2A_{JM}}{B_\nu\langle \alpha_\bot \rangle_\nu^{0}} \right|\gg 1$ and $\left| \frac{2(\langle D\rangle_\nu^{0})^2A_{JM}} {B_\nu\langle \Delta\alpha\rangle_\nu^{0}C_{JM}} \right|\gg 1$. We have $C_{JM} > |A_{JM}|$, and the coefficient $|A_{JM}|$ becomes increasingly smaller than $C_{JM}$ for increasing values of $J$; for example, for $J=15$ and $M=0$ we obtain $A_{15,0}=4.888\times 10^{-4}$ and $C_{15,0}=0.5005$, and for $J=M=15$ we obtain $A_{15,15}=-8.859\times 10^{-4}$ and $C_{15,15}=3.030\times 10^{-2}$. As a consequence, for high rotational excitations of certain molecular systems the interaction due to the molecular polarizability can become the dominant one. We emphasize that the above considerations are valid only for weak fields and within the effective rotor approach.
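As a numerical cross-check of the perturbative expressions above, the following sketch implements the angular coefficients $A_{JM}$ and $C_{JM}$ and diagonalizes the dipole-only effective rotor Hamiltonian of Eq.~(\ref{hamil_era}) in the field-free basis $|J,M=0\rangle$; the rotational constant and dipole moment used below are illustrative placeholders, not actual LiCs values.

```python
import numpy as np

def a_jm(J, M):
    """Second-order dipole coefficient A_JM of Eq. (p_t_dipole)."""
    if J == 0:
        return -1.0 / 6.0
    return (J*(J + 1) - 3*M**2) / (2.0*J*(J + 1)*(2*J + 1)*(2*J + 3))

def c_jm(J, M):
    """Second-order polarizability coefficient C_JM of Eq. (p_t_pola)."""
    return ((J + 1)**2 - M**2) / ((2.0*J + 1)*(2*J + 3)) \
         + (J**2 - M**2) / ((2.0*J + 1)*(2*J - 1))

def pendular_levels(B, d, F, jmax=40):
    """Eigenvalues of H = B*J(J+1) - F*d*cos(theta) in the |J, M=0> basis.

    B and d play the role of B_nu and <D>_nu^0; the polarizability terms
    are dropped for clarity. Units of the output follow those of B.
    """
    J = np.arange(jmax + 1)
    H = np.diag(B * J * (J + 1.0))
    # <J+1,0|cos(theta)|J,0> = (J+1)/sqrt((2J+1)(2J+3))
    c = (J[:-1] + 1.0) / np.sqrt((2*J[:-1] + 1.0) * (2*J[:-1] + 3.0))
    H -= F * d * (np.diag(c, 1) + np.diag(c, -1))
    return np.linalg.eigvalsh(H)
```

For weak fields the numerically obtained ground-level shift reproduces the second-order result $A_{00}(\langle D\rangle_\nu^{0}F)^2/B_\nu$, and the closed formulas reproduce the coefficient values quoted above.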
\section{\label{sec:result} Results} In the present work, we have performed a full rovibrational study of the influence of an external static electric field on the highly excited rovibrational states of the LiCs molecule. The PEC, EDMF and polarizability components of the $^1\Sigma^+$ electronic ground state of LiCs are plotted as a function of the internuclear distance in Figures \ref{fig:pec}(a) and (b), respectively. For the PEC, we use the experimental data of Ref.~\cite{staanum06}, which includes for the long-range behaviour the van der Waals terms, $-\sum_{n=6,8,10}C_n/R^{n}$, and an exchange energy term, $-AR^\gamma e^{-\beta R}$; see Ref.~\cite{staanum06} for the values of these parameters. The EDMF and polarizabilities are taken from semi-empirical calculations performed by the group of Dulieu \cite{deiglmayr:064309,aymar05}. The EDMF is negative and its minimum is shifted by $1.4\,a_0$ with respect to the equilibrium internuclear distance $R_e=6.94\,a_0$ of the PEC. For the electronic ground state of the polar alkali dimers the long-range behaviour of the EDMF is given by $D_7/R^{7}$ \cite{Byers1970}; this function has been fitted to the theoretical data for $R\gtrsim 18.15\,a_0$, yielding $D_7=-5\times 10^{-6}$ a.u. Regarding the polarizability, both components vary smoothly with $R$, and $\alpha_\bot(R)\ge\alpha_\parallel(R)$ for any $R$ value. They satisfy $\lim_{R\to\infty}\alpha_\bot(R)=\lim_{R\to\infty}\alpha_\parallel(R)= \alpha_{Li}+\alpha_{Cs}$, where $\alpha_{Li}=164.2$ a.u. and $\alpha_{Cs}=401$ a.u. are the polarizabilities of the Li and Cs atoms, respectively \cite{miffre:011603,PhysRevLett.91.153001}. Thus, for $R\gtrsim 26$ a$_0$ the theoretical data were extrapolated by means of exponentially decreasing functions to match the constant value $\alpha_{Li}+\alpha_{Cs}$. For computational reasons, $\alpha_\bot(R)$ and $\alpha_\parallel(R)$ are also extrapolated at short range, for $R<5$ and $4$ a$_0$, respectively.
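The exponential long-range matching just described can be sketched as follows. The matching radius, value and slope below are illustrative placeholders (the actual LiCs curves are not reproduced here), and the assumed functional form $\alpha_\infty - A e^{-bR}$ presumes that the ab initio curve approaches the atomic limit from below with positive slope.

```python
import math

ALPHA_INF = 164.2 + 401.0  # a.u., atomic limit alpha_Li + alpha_Cs

def exp_tail(R, R0, alpha0, dalpha0):
    """Tail alpha(R) = ALPHA_INF - A*exp(-b*R), valid for R >= R0,
    matched to the value alpha0 and slope dalpha0 of the ab initio
    curve at the matching radius R0."""
    b = dalpha0 / (ALPHA_INF - alpha0)            # slope condition at R0
    A = (ALPHA_INF - alpha0) * math.exp(b * R0)   # value condition at R0
    return ALPHA_INF - A * math.exp(-b * R)
```

By construction the tail is continuous and differentiable at $R_0$ and tends to $\alpha_{Li}+\alpha_{Cs}$ as $R\to\infty$.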
Since this study is focused on highly excited levels lying close to the dissociation threshold, we are aware that our results strongly depend on the assumptions made for the long-range behaviour of $D(R)$, $\alpha_\bot(R)$ and $\alpha_\parallel(R)$, and on the extrapolations performed at short range for $\alpha_\bot(R)$ and $\alpha_\parallel(R)$. However, let us remark that the overall behaviour and the physical phenomena presented here remain unaltered as these parameters are varied. \begin{figure} \includegraphics[scale=0.8]{fig1.eps} \caption{\label{fig:pec} (a): Electronic potential curve (solid) and electric dipole moment function (dashed), and (b): parallel $\alpha_\parallel(R)$ (solid) and perpendicular $\alpha_\bot(R)$ (dashed) components of the polarizability of the electronic ground state of the LiCs molecule.} \end{figure} For the lowest rotational excitations within each vibrational band of LiCs, we have investigated and compared the field interactions with the dipole moment and the polarizability presented in the previous section. Within perturbation theory, the interaction due to the molecular polarizability becomes comparable to the one due to the dipole moment only for the last two vibrational bands. Assuming that the effective rotor conditions are satisfied, the interaction with the polarizability can be neglected for those levels with $\nu\le 47$ and $48\le\nu\le 52$ if the field strength is smaller than $10^{-3}$ and $2\times 10^{-4}$ a.u., respectively. In contrast, for the vibrational bands $\nu=53$ and $54$ both interactions are of the same order of magnitude already for the much weaker fields $F\approx 6\times 10^{-5}$ and $8\times 10^{-6}$ a.u., respectively. Furthermore, the absolute values of the quadratic Stark shifts of the atomic energies are larger than the binding energies of the last bound state for $F\gtrsim 10^{-5}$ a.u., which also justifies that the interaction with the polarizability has to be included in the present study.
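The size of the quadratic Stark shift of the threshold is easy to reproduce from the atomic polarizabilities quoted above; the short sketch below evaluates $E_{DT}(F)$ in cm$^{-1}$ (the conversion factor is the standard hartree-to-wavenumber value).

```python
ALPHA_LI = 164.2            # a.u., Li ground-state polarizability
ALPHA_CS = 401.0            # a.u., Cs ground-state polarizability
HARTREE_TO_CM = 219474.63   # cm^-1 per hartree

def e_dt_cm(F):
    """Stark shift of the dissociation threshold,
    E_DT(F) = -F^2 (alpha_Li + alpha_Cs)/2, with F in a.u., result in cm^-1."""
    return -0.5 * F**2 * (ALPHA_LI + ALPHA_CS) * HARTREE_TO_CM
```

At $F=10^{-5}$ a.u. the threshold is lowered by about $6.2\times 10^{-3}$ cm$^{-1}$, which, as stated above, already exceeds the binding energy of the last bound state.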
Here, we consider the highest rotational excitations ($M=0$) for the last four vibrational bands, $51\le\nu\le54$, of LiCs with binding energies smaller than $0.28$ cm$^{-1}$. We focus on the strong field regime $F = 10^{-6}-3.4\times 10^{-4}$ a.u., i.e., $F = 5.14-1747.6$ kV/cm, which includes the experimentally accessible range of strong static fields and possibly quasistatic fields. We remark that such strong fields are required to induce the peculiar behaviour of these states described below. Most of the overall probability of these states is located at their outermost hump, i.e., in regions where the EDMF possesses small values and the polarizabilities are close to $\alpha_{Li}+\alpha_{Cs}$. Thus, strong fields are needed in order to observe a significant field effect on these levels. At these field strengths the corresponding rovibrational dynamics cannot be described by means of the effective or (due to avoided crossings) even the adiabatic rotor approximations \cite{gonzalez04,gonzalez05}, and, of course, not by perturbation theory (\ref{eq:pertur_the}). Hence, the two-dimensional Schr\"odinger equation associated with the nuclear Hamiltonian (\ref{eq:rotvib_hamiltonian}) has to be solved numerically. We do this by employing a hybrid computational method which combines discrete-variable and basis-set techniques applied to the radial and angular coordinates, respectively \cite{gonzalez:023402,gonzalez04}. Since in the presence of the field the dissociation threshold is $E_{DT}(F)=-\frac{F^2}{2}(\alpha_{Li}+\alpha_{Cs})$, we define the energy shift with respect to this dissociation threshold as $\varepsilon_{\nu, J}=E_{\nu, J}(F)-E_{DT}(F)$, with $E_{\nu, J}(F)$ being the energy of the $(\nu,J)$ state at field strength $F$. Figure \ref{fig:energy}(a) shows these Stark shifts $\varepsilon_{\nu, J}$ satisfying $ \varepsilon_{\nu, J} \ge -0.28$ cm$^{-1}$ in the above-specified range of field strengths.
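The field range quoted above translates between atomic units and laboratory units with the standard conversion factor for the atomic unit of electric field (approximately $5.142\times 10^{6}$ kV/cm):

```python
AU_FIELD_IN_KV_CM = 5.142206e6  # kV/cm per atomic unit of electric field

def au_to_kv_cm(F_au):
    """Convert an electric field strength from atomic units to kV/cm."""
    return F_au * AU_FIELD_IN_KV_CM
```

This reproduces the quoted endpoints of our field window, $F=10^{-6}$ a.u. $\approx 5.14$ kV/cm and $F=3.4\times 10^{-4}$ a.u. $\approx 1.75\times 10^{3}$ kV/cm.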
This spectral window includes the $(54,0)$, $(54,1)$, $(53,4)$, $(53,5)$, $(53,6)$, $(52,10)$ and $(51,15)$ states, and for $F\gtrsim 3 \times 10^{-4}$ a.u. also the $(52,9)$ level. Since all these levels possess the same symmetry ($M=0$) for $F\neq 0$, and since the field strength is the only parameter at hand to vary the rovibrational energies, the von Neumann-Wigner noncrossing rule \cite{wigner} holds and we encounter only avoided crossings of energetically adjacent states but no exact crossings in the field-dressed spectrum. In the vicinity of an avoided crossing the nuclear dynamics is dominated by a strong interaction and mixing of the involved rovibrational states. For simplicity, and in the spirit of the Landau-Zener theory, we assume that the avoided crossings are traversed diabatically as $F$ is increased. Thus, a certain state has the same character before and after the avoided crossing; e.g., the $(54,0)$ level keeps its high-field-seeking trend, see Figure \ref{fig:energy}(a). \begin{figure} \includegraphics[scale=0.7]{fig2.eps} \caption{\label{fig:energy} (a) Energy shifts with respect to the dissociation threshold $\varepsilon_{\nu J}$, and expectation values (b) $\langle\cos\theta\rangle$, (c) $\langle\mathbf{J}^2\rangle$ and (d) $\langle R\rangle$, as a function of the field strength for the states with field-free vibrational and rotational quantum numbers $(54,0)$ (solid), $(54,1)$ (solid thick), $(53,4)$ (dotted-dashed), $(53,5)$ (long dashed thick), $(53,6)$ (long dashed), $(52,10)$ (dotted), $(52,9)$ (short dashed), and $(51,15)$ (double dotted-dashed). Note that in panel (c) only the results for the levels $(54,0)$, $(54,1)$, $(53,4)$, $(53,5)$ and $(53,6)$ are included.} \end{figure} Before studying these avoided crossings in more detail, let us analyze the general behaviour of the binding energies.
For all $\varepsilon_{\nu J}$ we observe a very weak dependence on $F$ for $F \lesssim 3\times 10^{-5}$ a.u. The larger the field-free rotational quantum number of a state, the stronger is the field needed to encounter a deviation of $\varepsilon_{\nu, J}$ from its field-free value. With a further enhancement of the field strength, $\varepsilon_{\nu, J}$ decreases (increases) for the high- (low-) field seekers. The strong field dynamics is dominated by pendular states, whose binding energies increase as $F$ is augmented. Their main feature is their orientation along the field axis: they represent coherent superpositions of field-free rotational levels \cite{meyenn}. In our spectral region, this regime is only reached for the $(54,0)$, $(54,1)$ and $(53,4)$ states. Indeed, $\varepsilon_{54,\, 0}$ monotonically decreases as $F$ increases, whereas $\varepsilon_{54,\, 1}$ and $\varepsilon_{53,\, 4}$ initially increase and reach broad maxima, decreasing thereafter. In contrast, the binding energies of the $(52,9)$, $(52,10)$, $(53,6)$ and $(53,5)$ levels decrease as $F$ is increased. Due to its large field-free angular momentum, the $(51,15)$ level is the least affected by the field. Initially, $\varepsilon_{51,\, 15}$ is reduced as $F$ is enhanced, passes through a broad minimum and increases thereafter. The contribution of the molecular polarizability causes this state to be a high-field seeker in the weak field regime, whereas it has a low-field-seeking character if only the interaction with the dipole moment is included. Regarding the avoided crossings, some of them are very narrow and cannot be identified as such on the scale used in Figure \ref{fig:energy}(a), e.g., those between the pairs of levels $(54,1)-(53,6)$, $(54,1)-(51,15)$, $(54,0)-(51,15)$, and $(54,0)-(52,10)$. In contrast, other avoided crossings are very broad, and they are characterized by a strong coupling between the involved molecular states.
The avoided crossing between the $(54,1)$ and $(53,5)$ levels takes place at strong fields, and the minimal energy gap is $\Delta E=|\varepsilon_{54,\,1}-\varepsilon_{53,\,5}|=1.13\times 10^{-2}$ cm$^{-1}$ for $F\approx 2.23 \times 10^{-4}$ a.u. The $(54,0)$ state is involved in an avoided crossing with the $(53,5)$ level characterized by $\Delta E=|\varepsilon_{54,\,0}-\varepsilon_{53,\,5}|=2.82\times 10^{-3}$ cm$^{-1}$ for $F\approx 1.399\times 10^{-4}$ a.u., and in another one with the $(53,4)$ state with $\Delta E=|\varepsilon_{54,\,0}-\varepsilon_{53,\,4}|=1.08\times 10^{-2}$ cm$^{-1}$ for $F\approx 1.918 \times 10^{-4}$ a.u. The $(53,4)$ level experiences an avoided crossing with the $(52,9)$ state, with minimal energy separation $\Delta E=|\varepsilon_{53,\,4}-\varepsilon_{52,\,9}|=1.12\times 10^{-2}$ cm$^{-1}$ for $F\approx 3.25 \times 10^{-4}$ a.u. For other alkali dimers weaker electric field strengths might suffice to exhibit similar avoided crossings. These avoided crossings are wide enough to be experimentally observed in a similar way as has been done for the weakly bound spectrum of the Cs$_2$ dimer in a magnetic field \cite{mark:042514}. In that system coupling strengths, $\frac{\Delta E}{2}$, larger than $1.67\times 10^{-6}$ cm$^{-1}$ were experimentally estimated; i.e., even the energy separation of the $(54,1)$ and $(53,6)$ avoided crossing, $\Delta E=|\varepsilon_{54,\,1}-\varepsilon_{53,\,6}|=9.6\times 10^{-5}$ cm$^{-1}$ for $F\approx 1.1 \times 10^{-4}$ a.u., could be measured. Moreover, as has been done for the Cs$_2$ molecule \cite{mark:042514,mark:113201} in a magnetic field, a suitable electric-field ramp could be used to transfer population from high to low rotational excitations in a controlled way, by either diabatically jumping or adiabatically following these electrically induced avoided crossings. An interesting physical phenomenon is observed in the evolution of the $(53,6)$ level in the spectrum.
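Within the diabatic-passage picture invoked above, the probability of jumping across one of these avoided crossings under a linear field ramp can be estimated with the standard two-level Landau-Zener formula. The sketch below is generic: the diabatic slope and the ramp rate are hypothetical inputs, not quantities extracted from our calculations.

```python
import math

def lz_diabatic_probability(delta_E, d_gap_dF, dF_dt, hbar=1.0):
    """Landau-Zener probability of traversing an avoided crossing diabatically.

    delta_E  : minimal gap Delta E at the crossing
    d_gap_dF : |d(E1 - E2)/dF|, slope of the crossing diabats with field
    dF_dt    : |dF/dt|, field ramp rate
    All quantities in one consistent unit system (e.g. atomic units).
    """
    coupling = 0.5 * delta_E        # off-diagonal coupling Delta E / 2
    sweep = d_gap_dF * dF_dt        # |d(E1 - E2)/dt| seen by the molecule
    return math.exp(-2.0 * math.pi * coupling**2 / (hbar * sweep))
```

Fast ramps give a probability close to one (the diabatic jumps assumed in the text), while slow ramps give a probability close to zero (adiabatic following).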
$\varepsilon_{53,\, 6}$ increases as $F$ increases, and after passing the avoided crossing with the $(54,1)$ level, $\varepsilon_{53,\, 6}$ becomes positive for $F\gtrsim 1.6\times 10^{-4}$ a.u. The Stark increase of the $(53,6)$ energy surpasses the reduction of the dissociation threshold, and this level is shifted to the continuum. Hence, if the LiCs molecule is initially in the $(53,6)$ level, it will dissociate as the field strength is adiabatically tuned and increased above $F\gtrsim 1.6\times 10^{-4}$ a.u. Therefore a channel for molecular dissociation is opened as the electric field is modified. Of course, the inverse process is also possible, and the continuum state formed by two free atoms can be brought into a bound state by lowering the electric field strength. Indeed, it has been shown that a static electric field can be used to manipulate the interaction between two atoms such that a virtual state is transformed into a new bound state, i.e., the molecular system supports a new bound level \cite{PhysRevLett.81.4596}. To illustrate the appearance of this phenomenon for a low-lying rotational excitation, we have performed a similar study for a designed molecule. We have taken the theoretical PEC of the $^1\Sigma^+$ electronic ground state of LiCs computed by the group of Allouche \cite{allouche} with the van der Waals long-range potential, $-C_6/R^6$, but modified the $C_6$ coefficient to $C_6=2225$ a.u. The $(54,1)$ level is thereby shifted towards the dissociation threshold, having a field-free energy $ E_{54\,1}\approx -5.9\times 10^{-5}$ cm$^{-1}$. As the electric dipole moment function and polarizabilities we have used the corresponding functions of the LiCs molecule described above. The last most weakly bound states of this toy system have been studied in the presence of a static electric field, but for the sake of simplicity we discuss here only the results for the $(54,1)$ level.
As $F$ is enhanced $\varepsilon_{54,\, 1}$ increases, and becomes positive for $F\gtrsim 5\times 10^{-5}$ a.u.; note that this field strength is much weaker than the ones used above. For a bound level, a further enhancement of the field would change its character, and its binding energy would increase as $F$ is augmented. We have observed the same phenomenon for the $(54,1)$ state, which becomes bound again for $F\gtrsim 1.7\times 10^{-4}$ a.u., with $\varepsilon_{54,\,1}$ decreasing thereafter. The level has been recaptured by the nuclear potential, demonstrating that the reverse process is possible: starting with two free atoms with the correct internal symmetry, the dimer is formed in a highly excited level by adiabatically tuning the field. Due to the negative sign of the EDMF, the main feature of the pendular regime (focusing again on LiCs) is the antiparallel orientation of the states along the field axis. The orientation can be estimated by the expectation value $\langle\cos\theta\rangle$: the closer $|\langle\cos\theta\rangle|$ is to one, the stronger is the orientation of the state along the field. Figure \ref{fig:energy}(b) illustrates the evolution of $\langle\cos\theta\rangle$ as the field strength is varied. The initial behaviour of $\langle\cos\theta\rangle$ for weak fields depends on the character of the corresponding level. For the $(54,0)$ state $\langle\cos\theta\rangle$ monotonically decreases as $F$ is increased; it achieves the largest orientation, with $\langle\cos\theta\rangle\le-0.7$ for $F\gtrsim 5 \times 10^{-5}$ a.u., except in the proximity of avoided crossings. For the $(54,1)$, $(53,5)$ and $(53,4)$ levels, $\langle\cos\theta\rangle$ reaches a broad maximum, decreasing thereafter. The orientation of the $(54,1)$ and $(53,4)$ states becomes antiparallel for stronger fields.
Away from the avoided crossing regions, the $(54,1)$ state shows a significant orientation, with $\langle\cos\theta\rangle\le-0.4$ for $F\gtrsim 1.31 \times 10^{-4}$ a.u. The remaining states keep a pinwheeling character, and their $\langle\cos\theta\rangle$ increases as $F$ is augmented. Since we have adopted the convention that the avoided crossings are traversed diabatically, the $\langle\cos\theta\rangle$ of a given state reestablishes its increasing or decreasing trend once the avoided crossing has been passed. The smooth behaviour of $\langle\cos\theta\rangle$ is significantly distorted by the presence of these spectral features, where due to the strong mixing and interaction between the two involved states $\langle\cos\theta\rangle$ exhibits sharp and pronounced maxima and minima. For example, the avoided crossing between the $(54,0)$ and $(53,5)$ levels is characterized by the values $\langle\cos\theta\rangle_{54,\,0}=-0.235$ and $\langle\cos\theta\rangle_{53,\,5}=-0.152$ for $F=1.399\times 10^{-4}$ a.u., compared to the results $\langle\cos\theta\rangle_{54,\,0}=-0.847$ and $\langle\cos\theta\rangle_{53,\,5}=0.436$ obtained for $F=1.3\times 10^{-4}$ a.u. Note that for the $(54,0)$ level $\langle\cos\theta\rangle$ shows an additional maximum for $F\gtrsim 3\times 10^{-4}$ a.u., i.e., this level suffers another avoided crossing which is not observed in Figure \ref{fig:energy}(a) because $\varepsilon_{54,0}<-0.28$ cm$^{-1}$ for $F\ge 2.54\times 10^{-4}$ a.u. The expectation value $\langle\mathbf{J}^2\rangle$ of the states $(54,0)$, $(54,1)$, $(53,4)$, $(53,5)$ and $(53,6)$ is presented as a function of the electric field in Figure \ref{fig:energy}(c). To provide a reasonable scale, the results for the $(52,10)$, $(52,9)$ and $(51,15)$ levels have not been included.
This quantity provides a measure for the mixing of field-free states with different rotational quantum numbers $J$ but the same value of $M$, i.e., it describes the hybridization of the field-free rotational motion. Analogous to the binding energy, $\langle\mathbf{J}^2\rangle$ shows for weak fields a plateau-like behaviour: the hybridization of the angular motion is very small and the dynamics is dominated by the field-free rotational quantum number of the corresponding state. For stronger fields, these states possess a rich rotational dynamics, with significant contributions of different partial waves, and $\langle\mathbf{J}^2\rangle$ decreases (increases) for the low- (high-) field seekers as $F$ is enhanced. In the strong field regime, $\langle\mathbf{J}^2\rangle$ shows a broad minimum for the $(54,1)$, $(53,4)$ and $(53,5)$ states, increasing thereafter. The pendular limit is characterized by the increase of $\langle\mathbf{J}^2\rangle$ due to the contribution of higher field-free rotational states. This regime is only achieved by the $(54,0)$, $(54,1)$ and $(53,4)$ levels. In contrast, the mixing with lower rotational excitations is dominant for the $(53,5)$ and $(53,6)$ states, and $\langle\mathbf{J}^2\rangle\le J(J+1)$, with $J$ being the corresponding field-free rotational quantum number; similar results are obtained for the $(52,9)$, $(52,10)$ and $(51,15)$ states not included in Figure \ref{fig:energy}(c). The presence of the avoided crossings significantly distorts the smooth behaviour of $\langle\mathbf{J}^2\rangle$. In these irregular regions, the $\langle\mathbf{J}^2\rangle$ of the level with the lower (higher) field-free $J$ involved in an avoided crossing exhibits a pronounced and narrow maximum (minimum). At the smallest energy gap, we encounter similar values of $\langle\mathbf{J}^2\rangle$ for both states. For example, for $F=1.399\times 10^{-4}$ a.u.
we obtain $\langle\mathbf{J}^2\rangle=10.38\,\hbar^2$ and $12.40\,\hbar^2$ for the $(54,0)$ and $(53,5)$ levels, respectively, compared to the values $\langle\mathbf{J}^2\rangle_{54,0}=3.04\,\hbar^2$ and $\langle\mathbf{J}^2\rangle_{53,5}=20.75\,\hbar^2$ for $F=1.3\times 10^{-4}$ a.u. The expectation value of the radial coordinate $\langle R\rangle$ is presented for these states and this range of field strengths in Figure \ref{fig:energy}(d). Only if the vibrational motion is affected by the field does $\langle R\rangle$ differ from its field-free value. Analogously to $\varepsilon_{\nu, J}$ and $\langle\mathbf{J}^2\rangle$, $\langle R\rangle$ is approximately constant for weak fields, and strong fields are needed to observe significant deviations from its field-free value. Indeed, the larger the rotational quantum number of a state for $F=0$, the less affected by the field is its $\langle R\rangle$. For the $(54,0)$ level, $\langle R\rangle$ monotonically decreases from $50.24\, a_0$ to $28.13\,a_0$ as $F$ is enhanced from $0$ to $3.4 \times 10^{-4}$ a.u. For the $(54,1)$, $(53,5)$ and $(53,4)$ states, $\langle R\rangle$ increases as $F$ is augmented, reaches a broad maximum and decreases thereafter. The $(54,1)$ level is significantly affected, with a reduction from $\langle R\rangle=52.52\, a_0$ to $30.96\, a_0$ for $F=0$ and $ 3.4\times 10^{-4}$ a.u., respectively. For the $(53,4)$ state this effect is much smaller, and $\langle R\rangle$ is modified from the field-free result $30.78\, a_0$ to $28.32\,a_0$. For $(53,5)$ we observe that for $F=3.4 \times 10^{-4}$ a.u. $\langle R\rangle$ is larger by $4.57\,a_0$ than its field-free value. For the remaining states $\langle R\rangle$ increases as $F$ is enhanced, the total rise being smaller than $5\, a_0$ for the analyzed levels of the $\nu=51$ and $52$ bands.
As the $(53,6)$ state is shifted to the continuum, the slope of $\langle R\rangle$ becomes very steep, and $\langle R\rangle$ increases from $32.85\, a_0$ up to $41.30\,a_0$ for $F=0$ and $1.6\times 10^{-4}$ a.u., respectively. The field effect on the vibrational motion can be explained as follows: the probability density of those levels with an antiparallel (parallel) orientation is mostly located in the $\pi/2\le\theta\le \pi$ ($0\le\theta\le \pi/2$) region, where the dipole moment interaction is attractive (repulsive). As a consequence, the wavefunctions are squeezed (stretched) compared to their field-free counterparts to reduce the energy. Again, in the vicinity of an avoided crossing $\langle R\rangle$ exhibits very similar values for the two involved states. For example, we have found $\langle R\rangle=33.02$ and $33.13\, a_0$ for the $(54,0)$ and $(53,5)$ states at $F=1.399\times 10^{-4}$ a.u., respectively. \begin{figure} \includegraphics[scale=0.8]{fig3.eps} \caption{\label{fig:wf_1} Probability densities (a) of the state $(54,0)$ and (b) of the state $(53,5)$ for $F=1.3\times 10^{-4}$ a.u., i.e., close to the avoided crossing but still without mixing of the rovibrational field-dressed states.} \end{figure} To gain a deeper insight into the coupling of the vibrational and rotational motions induced by the electric field, we have analyzed the wavefunctions of two states involved in an avoided crossing. As an example, we discuss here the $(54,0)$-$(53,5)$ avoided crossing. For comparison, let us first analyze their wavefunctions for $F=1.3\times 10^{-4}$ a.u., i.e., ``below'' the avoided crossing, where the mixing is not yet appreciable. The contour plots of the probability densities, $|\psi(R,\theta)|^2\sin \theta$, in the $(R,\theta)$ plane are presented in Figures \ref{fig:wf_1}(a) and (b) for the $(54,0)$ and $(53,5)$ states, respectively.
Since most of the overall probability of these weakly bound levels is located in the outermost hump, the radial coordinate has been restricted in these plots to the interval $15 \,a_0\le R\le 52\,a_0$. Indeed, more than $89\%$ of the $(54,0)$ and $(53,5)$ probability densities is located at $R>30\,a_0$ and $R>25\, a_0$, respectively. Due to the pronounced antiparallel orientation of the $(54,0)$ level, $\langle\cos\theta\rangle =-0.847$, the corresponding probability density shows a pendular-like structure; it is located in the region $3\pi/4\le\theta\le\pi$ and its maximal value is obtained at $\theta=2.77$ and $R=35.87\,a_0$. The typical oscillator-like behaviour with $6$ maxima, reminiscent of its field-free angular momentum $J=5$, is observed in the $(53,5)$ probability density, see Figure \ref{fig:wf_1}(b). Since this state still has a pinwheeling character, the corresponding probability density is distributed over the complete interval $0\le\theta\le \pi$; however, due to the parallel orientation of this state, $\langle\cos\theta\rangle=0.437$, the probability density is larger in the region $\theta\le\frac{\pi}{2}$. Moreover, the influence of the field on the vibrational motion induces an inclination of the internuclear axis of this level, i.e., the corresponding wavefunction is stretched and squeezed in the regions $\theta<\frac{\pi}{2}$ and $\theta>\frac{\pi}{2}$, respectively. The squeezing effect also appears for the $(54,0)$ state, which possesses a strong antiparallel orientation (see Figure \ref{fig:wf_1}(a)). \begin{figure} \includegraphics[scale=0.8]{fig4.eps} \caption{\label{fig:wf_2} Probability densities (a) of the state $(54,0)$ and (b) of the state $(53,5)$ at the field strength $F=1.399\times 10^{-4}$ a.u.} \end{figure} As the electric field is enhanced, approaching the region of the avoided crossing, a strong interaction between the involved states takes place and the rovibrational dynamics is affected drastically.
The contour plots of the $(54,0)$ and $(53,5)$ states for $F=1.399 \times 10^{-4}$ a.u., which corresponds to the minimal energetic gap between them, are shown in Figures \ref{fig:wf_2}(a) and (b), respectively. Although at this field strength their orientation and the hybridization of their angular motion are very similar, there are significant differences between their wavefunctions. The above-described regular oscillator-like and pendular-like structures are lost. Even more, for both levels it is not possible to identify an orientation of the molecule, and the most pronounced maxima are not necessarily located at the outermost turning points. The $(54,0)$ probability density is distributed in the interval $0\le\theta\le \pi$, the largest maximum is at $\theta=2.701$ and $R=34.02\,a_0$, and it is accompanied by several less pronounced maxima at smaller $\theta$ values. The $(53,5)$ probability density exhibits three maxima with similar probability density, the first one at $\theta=0.44$ and $R=35.24\,a_0$, the second one at $\theta=2.95$ and $R=33.73\,a_0$, and the third one, at $\theta=2.70$ and $R=25.97\,a_0$, which is shifted towards smaller internuclear separations from the outermost turning point. Both configurations exhibit significantly distorted patterns, and they show a strong mixing between the radial and angular degrees of freedom. In general, at the avoided crossings the nuclear dynamics of the field-dressed states are characterized by an asymmetric and strongly distorted behavior, exhibiting pronounced localization phenomena. We have also analyzed these weakly bound levels taking into account only the interaction of the field with the permanent electric dipole, and not considering the contribution of the molecular polarizability.
The results look qualitatively similar but show a quantitatively different behaviour as a function of the electric field, and the effect of the polarizability becomes important for $F \gtrsim 10^{-4}$ a.u. The polarizability terms cause the mixing of states with field-free rotational quantum numbers $J$ and $J\pm2$. If the polarizability is included, the dissociation energies are smaller, i.e., for a certain $F$ value the modulus of the displacement of the dissociation threshold is larger than the modulus of the energetic shift due to the polarizability of a certain level, and the avoided crossings are also broader. Without the contribution of the polarizability term the $(53,6)$ level is not shifted to the continuum as the field is increased, and the inverse phenomenon appears, i.e., the $(54,2)$ level becomes a bound state for $F\gtrsim 1.8\times 10^{-4}$ a.u. The validity of the adiabatic and effective rotor approaches has been previously demonstrated for vibrational low-lying levels of the LiCs dimer \cite{gonzalez06}. However, this does not hold true for the part of the spectrum considered here. Since both approximations do not include the full coupling between the vibrational and rotational motion, the presence of the avoided crossings in the spectrum is not reproduced. In addition, significant errors are found for the binding energies and the expectation value $\langle R\rangle$ of the rotational excitations even in the absence of the field, e.g., the $(53,6)$ and $(51,15)$ levels are not bound within these approaches. The above-discussed field-effects on the vibrational motion cannot be explained using an effective rotor description \cite{gonzalez04}. However, the adiabatic results qualitatively reproduce the orientation and hybridization of the angular motion as well as the stretching and squeezing of the vibrational motion. Numerically significant deviations are encountered in the avoided crossing regions.
\section{\label{sec:conclu} Conclusions and outlook} We have investigated the influence of a strong static and homogeneous electric field on the highly excited rovibrational states of the electronic ground state X$^1\Sigma^+$ of the alkali dimer LiCs by solving the fully coupled rovibrational Schr\"odinger equation. The interaction of the field with the electric dipole moment function as well as with the molecular polarizability has been taken into account. We focus here on the last rotational excitations with vanishing azimuthal symmetry within the last four vibrational bands, $51\le\nu\le 54$. Due to their large extension, strong fields are needed in order to observe a significant field influence. The richness and variety of the resulting field-dressed rotational dynamics have been illustrated by analyzing the energetic Stark shifts, as well as the orientation, the hybridization of the angular motion and the vibrational stretching and squeezing effects. Whether we encounter a squeezing or stretching of the vibrational motion depends on the angular configuration: The molecule tries to minimize its energy leading to stretching for a parallel configuration and squeezing for an antiparallel one. In the strong field regime, the electrically-dressed spectrum is characterized by the presence of pronounced avoided crossings between energetically adjacent levels. These irregular features lead to a strong field-induced mixing and interaction between the states, and they cause strongly distorted and asymmetric features of the corresponding probability densities. We stress the importance of identifying these irregular features: Their presence affects the radiative decay properties of the dimer, such as lifetime and transition probability for spontaneous decay, and they might significantly alter the chemical reaction dynamics. Even more, one of their possible applications is their use to transfer population between the involved states.
We have demonstrated that if the last most weakly bound state is a low-field seeker it is possible to shift it to the atomic continuum by tuning the electric field, i.e., the molecular system dissociates into free atoms. The reverse process is also possible, i.e., a continuum state, formed by two free atoms with the correct field-free rotational symmetry, can be transferred to a weakly bound molecular state by changing the field strength. These results suggest that by properly increasing or decreasing the value of the electric field, one could study in a controlled way the opening of a dissociation or an association channel. Although our study is restricted to a LiCs dimer and to the spectral region close to the dissociation threshold of its electronic ground state, we stress that the above-observed physical phenomena are expected to occur in many other polar molecules. \acknowledgments Financial support by the Spanish projects FIS2008-02380 (MEC) and FQM--0207, FQM--481 and P06-FQM-01735 (Junta de Andaluc\'{\i}a) is gratefully appreciated. This work was partially supported by the National Science Foundation through a grant for the Institute for Theoretical Atomic, Molecular and Optical Physics at Harvard University and Smithsonian Astrophysical Observatory. Financial support by the Heidelberg Graduate School of Fundamental Physics in the framework of a travel grant for R.G.F. is gratefully acknowledged. We thank Michael Mayle for his help with respect to technical aspects of this work.
\section{Introduction} The cubic equation holds a special place in the history of mathematics. In the early 16th century, the cubic formula was discovered independently by Niccolò Fontana Tartaglia and Scipione del Ferro. Italy at the time was famed for intense mathematical duels. In 1535, Tartaglia was challenged by Antonio Fior, del Ferro's student, with Tartaglia winning the famous contest. Gerolamo Cardano then persuaded Tartaglia to share his method with him, promising not to reveal it without giving Tartaglia time to publish. Once Cardano learned about del Ferro's work, which predated Tartaglia's, he decided that his promise could be legitimately broken and published the method in his book "Ars Magna" in 1545. This led to a dramatic decade-long feud between Tartaglia and Cardano [1]. \\ \\ Later on, other methods were developed including a trigonometric solution for cubic equations with three real roots by François Viète (René Descartes expanded on Viète's work) [2]. Joseph Louis Lagrange followed with a new uniform method to solve lower degree (less than 5) polynomial equations including the cubic [3]. In 1683, Ehrenfried Walther von Tschirnhaus [4] proposed a new approach using the Tschirnhaus transformation. In addition, the authors identified other more recent methods and approaches to solving the cubic equation [5][6][7][8] and developed one of their own [9].\\ \\ In this article, a different approach to solving general cubic equations is discussed by introducing a new function $MY$ that provides the real roots uniformly under various cases. $MY$ is then expressed in closed form and hypergeometric form. We then proceed by using an algebraic iteration method that converges globally towards $MY$. \\ \\ While casus irreducibilis [10] for cubic equations states that real-valued roots of irreducible cubic polynomials cannot be expressed in radicals without introducing complex numbers, we come remarkably close using only real radicals.
Finally, the article is concluded by discussing many of the unique properties of $MY$. \section {Canonical function} Without loss of generality, let's consider the depressed cubic equation\footnote{For a general cubic equation $ax^3+bx^2+cx+d=0$, where $a \neq 0$ and $b$, $c$, $d$ are real numbers, the change of variable $y=x+\frac{b}{3a}$ leads to the depressed form.}: \[y^3+p y +q=0 \qquad (1)\] where $p$ and $q$ are real numbers.\\ \\ Outside the trivial special cases $p=0$ or $q=0$, equation (1) can be transformed to a canonical form: \[\frac{z^3+ z^2}{2} = t\qquad (2) \] Two transformations can be used: \begin{enumerate} \item \underline{Transformation 1}: A change of variable $z=\frac{q}{p y}$ leads to: $t=-\frac{q^2}{2p^3}$. \item \underline{Transformation 2}: When $p<0$, a change of variable $z=\frac{y}{\sqrt{-3p}}-\frac{1}{3}$ leads to: \[ t=\frac{1}{27}-\frac{q}{2\sqrt{-27p^3}}\] \end {enumerate} The canonical function $f$ is defined in $\R$ as: \[f: z \longmapsto \frac{z^3+ z^2}{2} \] Using the sign of the derivative, there are three intervals where $f$ is monotonic: \begin{enumerate} \item $]-\infty,-\frac{2}{3}]$, $f$ is increasing from $-\infty$ to $\frac{2}{27}$. The point $M(-\frac{2}{3},\frac{2}{27})$ is a local maximum. \item $[-\frac{2}{3},0]$, $f$ is decreasing from $\frac{2}{27}$ to $0$. The point $O(0,0)$ is a local minimum. \item $[0,+\infty[$, $f$ is increasing from $0$ to $+\infty$. \end{enumerate} Together, these properties define the number of roots of the canonical equation $f(z)= x$ (see Figure 1): \begin{enumerate} \item Scenario 1: $x>\frac{2}{27}$, there is a unique solution that is above $\frac{1}{3}$. \item Scenario 2: $x<0$, there is a unique negative solution. \item Scenario 3: $0\leq x \leq \frac{2}{27}$, there are three real solutions (two of which may coincide). One root is positive and the other two are negative.
\end{enumerate} \begin{figure}[!h] \centering \begin{tikzpicture} \begin{axis}[ xmin=-1.7, xmax=1.7, ymin=-0.12, ymax=0.18, xscale=1.6, yscale=1.05] \draw[->, line width=1.1pt] (-1.5, 0) -- (1, 0) node[right] {$z$}; \draw[->, line width=1.1pt] (0, -0.18) -- (0, 0.14) node[above] {$y$}; \draw (0.8,0.12) node[right, cygreen]{$ \textrm{Scenario 1}$}; \draw (0.8,0.05) node[right, cygreen]{$ \textrm{Scenario 3}$}; \draw (0.8,-0.08) node[right, cygreen]{$ \textrm{Scenario 2}$}; \draw (0,0) node[below left]{$0$}; \draw (0.33,2/27) node[above left, blue]{$f$}; \addplot[blue, line width=0.8pt, samples=100, smooth, domain={-1.16}:{0.46}]plot (\x, { (\x^3 + \x^2)*0.5 }); \addplot[black, dashed, samples=100, smooth, domain={-1.2}:{0.7}]plot (\x, {0.12 } ); \addplot[black, dashed, samples=100, smooth, domain={-1.2}:{0.7}]plot (\x, { 0.05 } ); \addplot[black, dashed, samples=100, smooth, domain={-1.2}:{0.7}]plot (\x, { -0.08 } ); \draw (-2/3,2/27) node[above] {$M(-\frac{2}{3},\frac{2}{27})$}; \begin{scriptsize} \draw [fill=black] (0,0) circle (2pt); \draw [fill=black] (-2/3,2/27) circle (2pt); \end{scriptsize} \end{axis} \end{tikzpicture} \caption{Canonical function $f$} \label{Solution1} \end{figure} \section{Geometric intuition behind proposed method} To provide the intuition behind our proposed method for solving cubic equations, notice that the inflection point $I\left(-\frac{1}{3}, \frac{1}{27}\right)$ is also a symmetry point.
This can be expressed analytically as: \[f\left(-\frac{2}{3}-z\right)=\frac{2}{27}-f(z) \qquad \textrm{for all} \qquad z \in \R \] \begin{figure}[!h] \center \begin{tikzpicture} \begin{axis}[ xmin=-1.1, xmax=0.5, ymin=-0.02, ymax=0.1, xscale=1.5, yscale=1.1, restrict x to domain={-1}:0.35] \draw[->, line width=1.1pt] (-1.1, 0) -- (0.4, 0) node[right] {$z$}; \draw[->, line width=1.1pt] (0, -0.1) -- (0, 0.08) node[above] {$y$}; \draw (0.27955689, 0) node[below]{$z_1$ }--(0.27955689, 0.05); \draw (-0.866951318, 0) node[below]{$z_2$ }--(-0.866951318, 0.05); \draw (-0.412605572, 0) node[below]{$z_3$ }--(-0.412605572, 0.05); \draw [dashed, blue](0.200284651, 0) node[below]{$z'$ }--(0.200284651, 2/27-0.05); \draw (0,0) node[anchor=north east]{$0$}; \draw (0,1/27) --(0.03,1/27) node[right]{$\frac{1}{27}$}; \draw (-1/3,0) node[below]{-$\frac{1}{3}$}; \draw (-1/3,0) -- (-1/3,0.005); \draw (-2/3,2/27) node[above, blue]{$f$}; \draw (0.33,0.07) node[left, blue]{$f_{|\R^+}$}; \draw (0,0.05) node[above left, red]{$y=x$}; \draw (0,2/27-0.05) node[below left, cygreen]{$y=2/27-x$}; \addplot[blue, samples=1000, smooth, line width=1.1pt, unbounded coords=discard, restrict x to domain={-0.01}:0.35]plot (\x, {(\x^3+ \x^2)*0.5 }); \addplot[blue, dashed, samples=1000, smooth, unbounded coords=discard, restrict x to domain={-1}:0.35]plot (\x, { (\x^3+\x^2)*0.5 }); \addplot[red, dashed, samples=1000, smooth, unbounded coords=discard, restrict x to domain={-1}:0.35]plot (\x, {0.05}); \addplot[cygreen, dashed, samples=1000, smooth, unbounded coords=discard, restrict x to domain={-1}:0.35]plot (\x, {2/27-0.05}); \draw (-1/3,1/27) node[right] {$I(-\frac{1}{3},\frac{1}{27})$}; \begin{scriptsize} \draw [fill=red] (-1/3,1/27) circle (2pt); \end{scriptsize} \end{axis} \end{tikzpicture} \caption{Solving of $f(z)=0.05$, three real roots} \label{Solution2} \end{figure} Therefore, the problem of solving the equation $f(z)=x$ for $x \in \R$ is reduced to solving equations $f_{|\R^+}(z)=a$ for positive
real numbers $a$. Let's illustrate this point geometrically by considering the following example $f(z)=x$ where $x=0.05$ (see Figure 2). This equation has three real solutions $z_1$, $z_2$ and $z_3$. \begin{enumerate} \item Construct the restricted curve $f_{|\R^+}$, as well as the lines $y=x$ and $y=\frac{2}{27}-x$. The abscissas of the intersections of $f_{|\R^+}$ with these two lines are respectively $z_1$, the positive root of the equation, and $z'$. \item Let $z_2$ be the reflection of $z'$ with respect to $-\frac{1}{3}$: $z_2=-2/3-z'$. Using the symmetry property, $z_2$ is a root of the equation. \item Let $z_3$ be the unique negative point such that the distance between $z_1$ and $-\frac{1}{3}$ is the same as the distance between $z'$ and $z_3$. In other words, $z_3=z'-z_1-\frac{1}{3}$. Therefore $z_1+z_2+z_3=-1$. Using Vieta's formula, $z_3$ is the third root. \end{enumerate} \section{$MY$ function definition} The restriction, $f_{|\R^+}$, of $f$ to $\R^+$ is strictly increasing, continuous with $f_{|\R^+}(0)=0$ and $\lim\limits_{z \to +\infty} f_{|\R^+}(z)=+\infty$. Therefore it is bijective from $\R^+$ to $\R^+$ and admits a reciprocal function: \[MY: x \longmapsto MY(x)=f_{|\R^+}^{-1}(x) \qquad x \in \R^+ \] The graph of $MY$ is symmetrical to the graph of $f_{|\R^{+}}$ with respect to the line $y=x$ (See Figure 3).
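Since $f_{|\R^+}$ is strictly increasing, $MY$ can already be evaluated numerically at this point, before any closed form is available, by simple bisection. The sketch below (Python; the function names and tolerance are ours, purely illustrative) inverts $f$ on $\R^+$ and recovers the positive root $z_1\approx 0.2796$ of the Figure 2 example $f(z)=0.05$:

```python
# Evaluate MY(x) = f|_{R+}^{-1}(x) by bisection, using only the fact
# that f(z) = (z^3 + z^2)/2 is strictly increasing on [0, +infinity).
def f(z):
    return (z**3 + z**2) / 2.0

def my_bisect(x, tol=1e-12):
    """Solve f(z) = x for z >= 0 (x >= 0) by bisection."""
    lo, hi = 0.0, 1.0
    while f(hi) < x:        # grow the bracket until it contains the root
        hi *= 2.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(mid) < x:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

z1 = my_bisect(0.05)        # positive root of f(z) = 0.05 (Figure 2)
print(z1, f(z1))
```

The two remaining roots of the Figure 2 example then follow from the symmetry construction of section 3.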
\\ \begin{figure}[!h] \centering \begin{tikzpicture} \draw(-1,-1) rectangle (5,5); \draw[->] (-0.5, 0) -- (4, 0) node[right] {$x$}; \draw[->] (0, -0.5) -- (0, 4) node[above] {$y$}; \draw [dashed](3, 0)node[below] {$1$} -- (3, 3); \draw [dashed](0, 3)node[left] {$1$} -- (3, 3); \draw [dashed](-0.3, 0)node[below] {$0$}; \draw[scale=3, domain=0:1.15, dashed, smooth, variable=\x, blue] plot ({\x}, {(\x^3+\x^2)*0.5}); \draw[scale=3, domain=0:1.15, smooth, variable=\y, black] plot ({(\y^3+\y^2)*0.5}, {\y}); \draw[scale=3, domain=0:1.25, dashed, smooth, variable=\x, red] plot ({\x}, {\x}); \draw (3, 2) node[right]{$\color{black} {- MY(x)}$}; \draw (3,1.5) node[right]{$\color{red} {- y=x}$}; \draw (3,1) node[right]{$\color{blue} {- f(x)}$}; \end{tikzpicture} \begin{tikzpicture} \draw(-1,-1) rectangle (5,5); \draw[->] (-0.5, 0) -- (4, 0) node[right] {$x$}; \draw [dashed](1.5, 0)node[below] {$1$} -- (1.5, 2.55); \draw [dashed](0, 2.55)node[left] {$1$} -- (1.5, 2.55); \draw [dashed](-0.3, 0)node[below] {$0$}; \draw (3, 0)node[below] {$2$}; \draw[->] (0, -0.5) -- (0, 4) node[above] {$y$}; \draw[scale=1.5, domain=0:1.4, smooth, variable=\x, black] plot ({(\x^3+\x^2)*0.5}, {1.7*\x}); \draw[scale=1.5, domain=0:2.3, dashed, smooth, variable=\x, blue] plot ({\x}, {1.7*\x^0.4}); \draw[scale=1.5, domain=0:2.3, dashed, smooth, variable=\x, red] plot ({\x}, {1.7*\x^0.333}); \draw[scale=1.5, domain=0:2.3, dashed, smooth, variable=\x, green] plot ({\x}, {1.7*\x^0.5}); \draw (2.1,2) node[right]{$\color{black} {- MY(x)}$}; \draw (2.1,1.5) node[right]{$\color{blue} {- x^{\frac{2}{5}}}$}; \draw (2.1,1) node[right]{$\color{red} {- x^{\frac{1}{3}}}$}; \draw (2.1,0.5) node[right]{$\color{green} {- \sqrt{x}}$}; \end{tikzpicture} \caption{{\bf Left}: $MY$ inverse of $f$ \qquad {\bf Right:} $MY$ versus power functions } \end{figure} \\ $MY$ is continuous, strictly increasing and infinitely differentiable. 
Its behavior resembles power functions (See Figure 3): When $x$ is close to 0, $MY(x)$ is equivalent to $\sqrt{2x}$. For large values of $x$, $MY(x)$ behaves like $\sqrt[3]{2x}$ (see section 8, Properties of $MY$). $x^{\frac{2}{5}}$ is an upper bound for $MY$.\\ \\ Thanks to Cardano's formula and Viète's trigonometric method for cubic roots, $MY$ can be expressed in closed form. Define: \[u=x-\frac {1}{27}\] \begin{enumerate} \item For $x \in [\frac{2}{27},+\infty [$\\ \[MY\left(x\right)=-\frac{1}{3} +\sqrt[3]{u+\sqrt{u^2-\left (\frac {1}{27}\right)^2}}+\sqrt[3]{u-\sqrt{u^2-\left (\frac {1}{27}\right)^2}}\] \item For $x \in [0, \frac{2}{27}[$\\ \[MY\left(x\right)=-\frac {1}{3}+\frac{2}{3}\cos \left (\frac{\arccos (27 u)}{3}\right)\] \end{enumerate} Notice that $MY$ is continuous and smooth at $x= \frac{2}{27}$ despite two totally different closed form expressions! \section{Cubic roots expressed in $MY$} \subsection{Solving the canonical equation} Roots of the canonical equation $f(z)=x$ can be expressed in a simple form using $MY$: \begin{enumerate} \item For $x > \frac{2}{27}$, there is a unique real solution $z_1=MY\left(x\right)$. \item For $x < 0$, there is a unique real solution $z_1$. Using the symmetry property: \[f\left( -\frac{2}{3}-z_1\right)=\frac{2}{27}-x \qquad \textrm{it follows that:} \qquad z_1=-\frac{2}{3}-MY\left(\frac{2}{27}-x\right) \] \item For $ 0 \leq x \leq \frac{2}{27}$ there are three real solutions. $z_1=MY\left(x\right)$ is the positive solution. The two other negative solutions are obtained, first, by symmetry: \[z_2=-\frac{2}{3}-MY\left(\frac{2}{27}-x\right) \] second, by using Vieta's formula\footnote{Also, for $x \neq 0$ $z_3=2x/(z_1z_2)$} $z_1+z_2+z_3=-1$ : \[z_3=MY\left(\frac{2}{27}-x\right)-MY\left(x\right)-\frac{1}{3} \] Given the sign of the three roots: $z_2, z_3 \leq 0 \leq z_1$.
In addition: \[z_3-z_2=\frac{1}{3}+2 MY\left(\frac{2}{27}-x\right)-MY\left(x\right)\] Since $MY(x) \leq MY(\frac{2}{27})=\frac{1}{3}$: \[z_2 \leq z_3 \leq z_1 \qquad (4)\] \end{enumerate} \subsection{Solving the depressed cubic} Using the results from section 2 and section 5.1, we can express the roots of the depressed equation: \[y^3+p y +q=0 \qquad (1)\] Indeed, equation (1) can be transformed to a canonical form using either Transformation 1 or Transformation 2. Interestingly, each transformation leads to a different expression of the roots. In particular, when $p<0$ both transformations can be applied which leads to useful equalities. For this purpose, let's define: \[\xi=\frac{3 q}{2 p}\sqrt{-\frac{3}{p}}\] We distinguish 4 cases:\\ \\ {\bf Case 1: $p =0$}. There is a unique real solution $\alpha =-\sqrt[3]{q}$.\\ \\ {\bf Case 2: $p > 0$}. There is a unique real solution: \[\alpha =\frac{q}{p \left(-\frac{2}{3}-MY\left(\frac{2}{27}+\frac{q^2}{2p^3}\right)\right)}\] {\bf Case 3: $p < 0$ and $|\xi| >1$}. There is a unique real solution with two expressions: \[\alpha =-\sqrt{-\frac{p}{3}}\left(3 MY\left(\frac{1+|\xi|}{27}\right) +1\right)=\frac{q}{p MY\left(-\frac{q^2}{2p^3}\right)}\] {\bf Case 4: $p < 0$ and $-1\leq \xi \leq 1$}. 
There are three real solutions that can be expressed using Transformation 2 as: \[\alpha=\sqrt{-\frac{p}{3}}\left(3 MY\left(\frac{1+\xi}{27}\right) +1\right)\] \[\beta=3\sqrt{-\frac{p}{3}}\left(MY\left(\frac{1-\xi}{27}\right) -MY\left(\frac{1+\xi}{27}\right) \right)\] \[\gamma=-\sqrt{-\frac{p}{3}}\left(3 MY\left(\frac{1-\xi}{27}\right) +1\right)\] As a result of (4): \[\gamma \leq \beta \leq \alpha\] When $q \neq 0$, these roots can be expressed differently using Transformation 1: \[\alpha'=\frac{q}{p MY\left(-\frac{q^2}{2p^3}\right)}\] \[\beta'=\frac{q}{p \left(MY\left(\frac{2}{27}+\frac{q^2}{2p^3}\right)-MY\left(-\frac{q^2}{2p^3}\right)-\frac{1}{3}\right)}\] \[\gamma'=\frac{-q}{p \left(\frac{2}{3}+MY\left(\frac{2}{27}+\frac{q^2}{2p^3}\right)\right)}\] Alternatively, $\beta'$ could be derived from any of Vieta's formulas for equation $(1)$. For example: \[\alpha'+\beta'+\gamma'=0\] In addition, using the order of the three roots, when $q < 0$: \[\alpha'=\alpha \qquad \beta'=\gamma \qquad \gamma'=\beta \] and when $q>0$: \[\alpha'=\gamma \qquad \beta'=\alpha \qquad \gamma'=\beta \] Naturally, the roots coincide with the trigonometric roots provided by François Viète: \[t_k=2\sqrt{-\frac{p}{3}}\cos\left(\theta_k\right) \qquad \textrm{where}\qquad \theta_k= \frac{\arccos(\xi)}{3}-\frac{2 k\pi}{3}\qquad \textrm{for } k=0,1,2\] Since $\theta_0 \in [0, \pi/3]$, $\theta_1 \in [-2\pi/3, -\pi/3]$ and $\theta_2 \in [-4\pi/3, -\pi]$: \[t_2 \leq t_1 \leq t_0\] Therefore \[\alpha=t_0 \qquad \beta=t_1 \qquad \gamma=t_2\] \section{Approximation of $MY$ using radicals} \subsection{Casus irreducibilis} One of the oddities of solving general cubic equations using radicals is the absolute requirement to use complex numbers in the irreducible case. In other words, despite the three roots being real for irreducible cubic polynomials, they cannot be expressed as radicals of real numbers [10].
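The Case 4 formulas of section 5.2, together with the closed form of $MY$ from section 4, are easy to check numerically. The sketch below (Python) does so on the factorable test polynomial $y^3-2y+1=(y-1)(y^2+y-1)$, whose roots $1$ and $\frac{-1\pm\sqrt{5}}{2}$ are known exactly; this test polynomial and the variable names are ours, not part of the derivation:

```python
import math

def MY(x):
    """Closed form of MY on [0, +infinity) (section 4)."""
    u = x - 1.0 / 27.0
    if x >= 2.0 / 27.0:
        # here u >= 1/27 and u - s >= 0, so both cube roots are real
        s = math.sqrt(u * u - (1.0 / 27.0) ** 2)
        return -1.0 / 3.0 + (u + s) ** (1 / 3) + (u - s) ** (1 / 3)
    return -1.0 / 3.0 + (2.0 / 3.0) * math.cos(math.acos(27.0 * u) / 3.0)

# Case 4 of section 5.2: y^3 + p y + q = 0 with p < 0 and |xi| <= 1.
p, q = -2.0, 1.0                      # y^3 - 2y + 1 = (y - 1)(y^2 + y - 1)
xi = (3.0 * q / (2.0 * p)) * math.sqrt(-3.0 / p)
s = math.sqrt(-p / 3.0)
alpha = s * (3.0 * MY((1.0 + xi) / 27.0) + 1.0)
beta = 3.0 * s * (MY((1.0 - xi) / 27.0) - MY((1.0 + xi) / 27.0))
gamma = -s * (3.0 * MY((1.0 - xi) / 27.0) + 1.0)
print(alpha, beta, gamma)
```

The same check works for any $p<0$ with $|\xi|\le 1$; for $p>0$ or $|\xi|>1$ the Case 2 and Case 3 expressions apply instead.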
We note of course the alternate solution provided by Viète, which bypasses the use of complex numbers by introducing trigonometric functions. This is reflected indirectly in the provided closed form of the function $MY$, which is a hybrid of an algebraic function for $x \geq \frac{2}{27}$ and a trigonometric function for $x<\frac{2}{27}$.\\ \\ In the following section, we provide an accurate closed form algebraic approximation of $MY$. In doing so, we offer a new perspective on solving the cubic equation including in the irreducible case. \subsection{Fixed point iteration} Assume that $z=MY(x)$. Then $z$ satisfies the equation: \[z^3+z^2=2x \qquad (5)\] This equation can be exploited in two ways. First, by factorizing $(5)$ we obtain: \[z=\sqrt{\frac{2x}{1+z}} \qquad (6)\] Second, we can complete the cubic: \[\left(z+\frac{1}{3}\right)^3=2x+\frac{1}{27}+\frac{z}{3} \qquad (7)\] Substituting (6) for $z$ on the right-hand side of $(7)$ yields: \[z=G(x,z) \qquad (8)\] Where the function $G$ is defined by: \[ G\left(x,y\right)=\sqrt[3]{2 x+ \frac{1}{27}+\frac{1}{3}\sqrt{\frac{2 x}{1+y}}}-\frac{1}{3}\qquad (9)\] Therefore $z$ is a fixed point of $G(x,.)$, inspiring a fixed point iteration.\\ \\ The following theorem proves the convergence of this method towards $MY$.
More importantly, it provides right away an accurate closed form algebraic approximation of $MY$ across $\R^+$.\\ \\ {\bf Convergence theorem:}\\ \\ Define the sequence of functions $(M_n(.))_{n \in \N}$ by: \[M_0(x)=G(x,x^{\frac{2}{5}}) \qquad \textrm{and} \qquad M_{n+1}(x)=G(x,M_n(x))\] \begin{enumerate} \item For all positive real numbers $x$: \[|M_{0}(x)-MY(x)|<C_0 \qquad \textrm{where } \qquad C_0=1.4408 \times 10^{-3}\] \item For all strictly positive real numbers $x$: \[\left|\frac{M_{0}(x)}{MY(x)}-1\right|<{C_0}' \qquad \textrm{where } \qquad {C_0}'=1.1527 \times 10^{-2}\] \item For all positive real numbers $x$: \[|M_{n}(x)-MY(x)|<\frac{C_0}{K^n} \qquad \textrm{where } \qquad K=25.05\] \item The sequence converges uniformly to $MY$ over $\R^+$. \\ \end {enumerate} \newpage In order to demonstrate the convergence theorem, we start with the following lemma:\\ \\ {\bf Lemma:} \begin{enumerate} \item For all positive real numbers $x$ and $y$:\[\left|\frac{\partial G}{\partial y}\left(x,y\right) \right| < C_1 \qquad \textrm{where}\qquad C_1 \approx \frac{1}{21.2398}\] \item For all positive real numbers $x$ : \[\left|\frac{\partial G}{\partial y}\left(x,MY\left(x\right)\right)\right | < C_2 \qquad \textrm{where} \qquad C_2 \approx \frac{1}{30.5475}\] \end {enumerate} {\bf Proof of lemma:}\\ \\ $G(x,.)$ is decreasing with respect to $y$ and $G(.,y)$ is increasing with respect to $x$. Define: \[ z=MY(x) \qquad (10)\] $z$ satisfies: \[ G\left(x,z\right)=z \qquad (11)\] For any $x>0$, $G(x,.)$ is differentiable and for $y \geq 0$ : \[ \frac{\partial G}{\partial y}\left(x,y\right)=-\frac{\sqrt{2x}}{18}\left( \left(2 x+ \frac{1}{27}\right) (1+y)^{\frac{9}{4}}+\frac{\sqrt{2x}}{3}(1+y)^{\frac{7}{4}}\right)^{-\frac{2}{3}} \qquad (12)\] \begin{enumerate} \item Upper bound for $\left|\frac{\partial G}{\partial y}\left(x,.\right)\right|$\\ $\left|\frac{\partial G}{\partial y}\left(x,.\right)\right|$ is strictly decreasing and convex.
Therefore its maximum $C_1$ is reached at $y=0$.\\ \\ Define $t=\sqrt{2x}$. $C_1$ is also the maximum of: \[h(t)=\frac{t}{18}\left( t^2+ \frac{1}{27}+\frac{t}{3}\right)^{-\frac{2}{3}}=\frac{1}{18}\left( t^{\frac{1}{2}}+ \frac{t^{-\frac{3}{2}}}{27}+\frac{t^{-\frac{1}{2}}}{3}\right)^{-\frac{2}{3}}\] Let $v=t^{-\frac{1}{2}}$: \[h(t)=g(v)=\frac{1}{18}\left( \frac{1}{v}+ \frac{v^3}{27}+\frac{v}{3}\right)^{-\frac{2}{3}}\] $C_1$ is obtained by setting the derivative of $g$ to $0$; it is reached at $v_0$: \[v_0=\sqrt{\frac{-1+\sqrt{5}}{2}} \] and \[ C_1 \approx \frac{1}{21.2398}\] \item Upper bound for $\left|\frac{\partial G}{\partial y}\left(x,MY(x)\right)\right|$\\ \\ For $x>0$ define $z=MY(x)$: \[\left| \frac{\partial G}{\partial y}\left(x,z\right)\right| =\frac{\sqrt{2x}}{18(1+z)^{\frac{3}{2}}}\left( \left(2 x+ \frac{1}{27}\right) +\frac{1}{3}\sqrt{\frac{2x}{1+z}}\right)^{-\frac{2}{3}} \] Recall that: \[z=\sqrt{\frac{2x}{1+z}} \qquad \textrm{and} \qquad (z+\frac{1}{3})^3=2x+\frac{1}{27}+\frac{z}{3}\] Therefore \[ \left|\frac{\partial G}{\partial y}\left(x,z\right)\right| =\frac{z}{18 (1+z) \left(z+\frac{1}{3}\right)^2} \qquad (13)\] Or: \[ \left|\frac{\partial G}{\partial y}\left(x,z\right)\right|= \frac{1}{18 }\left(z^2+\frac{5}{3}z + \frac{7}{9}+\frac{1}{9z}\right)^{-1}\] Setting the derivative to $0$ leads to the maximum $C_2$, reached at $z$ such that: \[2 z+\frac{5}{3}-\frac{1}{9z^2}=0\] Notice that: \[2 z+\frac{5}{3}-\frac{1}{9z^2}=\frac{(3z+1)(6 z^2+3 z-1)}{9 z^2}\] The only positive solution is: \[z=\frac{-3+\sqrt{33}}{12}\] leading to: \[C_2 \approx \frac{1}{30.5475}\] \end{enumerate} {\bf Proof of Convergence theorem} \begin{enumerate} \item For $x>0$, define $z=MY(x)$. \\ Since $M_0(x)=G(x,x^{\frac{2}{5}})$ and $z=G(x,z)$: \[\left|M_0(x)-MY(x)\right|=\left|G(x,x^{\frac{2}{5}})-z \right|\] As proven below in the Properties of $MY$ section $z \leq x^{\frac{2}{5}}$.
\\ Since $\left|\frac{\partial G}{\partial y}(x,.)\right| $ is decreasing with respect to $y$ over the interval $[z,x^{\frac{2}{5}}]$: \[ \left|M_0(x)-MY(x)\right| = \left|G(x,x^{\frac{2}{5}})-G(x,z) \right| \leq U(z)=\left|\frac{\partial G}{\partial y}(x,z)\right|\left|(x^{\frac{2}{5}}-z)\right| \qquad (14)\] Recall: \[x=(z^3+z^2)/2 \qquad \textrm{and } \qquad \left|\frac{\partial G}{\partial y}(x,z)\right|=\frac{z}{18(1+z)\left(z+\frac{1}{3}\right)^{2}}\] Therefore \[U(z)= \frac{z \left(\left( \frac{z^3+z^2}{2}\right)^{\frac{2}{5}} -z\right)}{18(1+z)\left(z+\frac{1}{3}\right)^{2}}\] Write \[U(z)=A.B \qquad (15)\] where: \[A=\frac{z^2(1+\frac{1}{z})}{18\left(z+\frac{1}{3}\right)^{2}}=\frac{1}{18}\left(1+\frac{1}{3}\frac{\left(z-\frac{1}{3}\right)}{\left(z+\frac{1}{3}\right)^{2}}\right)\] \[B=\frac{\left( \frac{z^{1/2}+z^{-1/2}}{2}\right)^{\frac{2}{5}} -1}{(1+z)(1+\frac{1}{z})} \] First, when $z\leq \frac{1}{3}$, $A\leq \frac{1}{18}$. If $z >\frac{1}{3}$, define $y=z-\frac{1}{3}>0$. \[A=\frac{1}{18}\left(1+\left(y^{\frac{1}{2}}+\frac{2}{3}y^{\frac{-1}{2}}\right)^{-2}\right)\] Using the derivative, $A$ is maximal when $y=\frac{2}{3}$ or $z=1$. Therefore \[A \leq \frac{1}{16} \qquad (16)\] Second, define: \[w=\frac{z^{1/2}+z^{-1/2}}{2} \geq 1\] Notice that \[(1+z)(1+\frac{1}{z})=4w^2 \qquad \textrm{and} \qquad B=\frac{w^\frac{2}{9}-1}{4w^2}\] When $w=1$, $B=0$. If $w>1$ define $\xi=w^\frac{2}{9}-1$. Simple algebra leads to: \[B=\frac{1}{4} \left(\xi^\frac{-2}{9}+\xi^\frac{7}{9}\right)^{-\frac{9}{2}}\] Using the derivative, $B$ is maximal when $\xi=\frac{2}{7}$.
Therefore: \[B \leq \frac{1}{43.37886} \qquad (17)\] As a result: \[ U(z) \leq \frac{1}{694.061782} \approx 1.4408 \times 10^{-3} \] Going back to $(14)$, for all positive real numbers $x$: \[\left| M_0(x)-MY(x) \right|<C_0 \qquad (18)\] Where \[C_0=\frac{1}{694.061782}\approx 1.4408 \times 10^{-3}\] \item Keeping the notation $z=MY(x)$ and using (14) and (15): \[\left| \frac{M_0(x)}{MY(x)}-1 \right|<\frac{U(z)}{z}={\tilde A}(z) B \] Where \[{\tilde A}(z)=\frac{A}{z}=\frac{z+1}{18\left(z+\frac{1}{3}\right)^{2}}\] The derivative ${\tilde A}'$ of ${\tilde A}$ is \[{\tilde A}'(z)=-\frac{\left(z+\frac{5}{3}\right)}{18\left(z+\frac{1}{3}\right)^{3}}<0 \qquad \textrm{for }z\geq 0\] Therefore ${\tilde A}$ is decreasing on $\R^+$ and reaches its maximum at $z=0$. Then \[{\tilde A}(z) \leq {\tilde A}(0)=\frac{1}{2}\] This combined with (17) leads to: \[\left| \frac{M_0(x)}{MY(x)}-1 \right|<{\tilde A}(z) B \leq {C_0}'= 1.1527 \times 10^{-2} \] \item For $n \in \N$ and any positive real number $x$, define $z=MY(x)$ and $y=M_n(x)$. Since $M_{n+1}(x)=G(x,y)$ and $z=G(x,z)$ : \[\left|M_{n+1}(x)-z\right|= \left|\int_{y}^{z}\frac{\partial G}{\partial y}\left(x,u\right)du\right| \qquad (19)\] Since $\left|\frac{\partial G}{\partial y}\left(x,.\right)\right|$ is positive and convex, the integral in $(19)$ is bounded by the area of the trapezoid: \[ \left|M_{n+1}(x)-z\right| \leq \frac{1}{2}\left|y-z\right|\left(\left|\frac{\partial G}{\partial y}\left(x,y\right)\right |+\left|\frac{\partial G}{\partial y}\left(x,z\right)\right |\right)\] Using results from the lemma: \[ \left|M_{n+1}(x)-z\right| \leq \left|M_{n}(x)-z\right|\frac{C_1+C_2}{2} \qquad (20)\] Or \[ \left|M_{n+1}(x)-z\right| \leq \frac{\left|M_{n}(x)-z\right|}{K} \qquad (21)\] Where \[ K=\frac{2}{C_1+C_2}\approx 25.0572 \] It follows from $(18)$ and $(21)$ that: \[ \left|M_{n}(x)-MY(x)\right| \leq \frac{C_0}{K^n} \qquad (22)\] \item As a natural consequence of $(22)$, the sequence $(M_n)_{n \in \N}$ converges uniformly to $MY$.
\end{enumerate} \subsection{Examples:} \subsubsection {Numerical examples for the approximation of MY} To evaluate the efficiency of the algorithm described above, we consider two examples:\\ \\ {\bf Example 1:} $x=0.01$, $MY(x)$ closed form approximate value to 10th decimal digit is $0.1328694292$.\\ \\ \begin{tabular}{|R{1.5cm}|R{3.5cm}|R{3.5cm}|R{3cm}|} \hline Iteration & $M_n(x)$ & $\left|M_n(x)-MY(x)\right|$ & $\left|\frac{M_n(x)}{MY(x)}-1\right|$ \\ \hline 0 & 0.1321129198 & 7.57 $10^{-04}$ & 5.69 $10^{-03}$\\ \hline 1 & 0.1328921191 & 2.27 $10^{-05}$ & 1.71 $10^{-04}$ \\ \hline 2 & 0.1328687489 & 6.80 $10^{-07}$ & 5.12 $10^{-06}$\\ \hline 3 & 0.1328694495 & 2.04 $10^{-08}$ & 1.53 $10^{-07}$\\ \hline 4 & 0.1328694285 & 6.11 $10^{-10}$ & 4.60 $10^{-09}$\\ \hline 5 & 0.1328694292 & 1.83 $10^{-11}$ & 1.38 $10^{-10}$\\ \hline \end{tabular} \\ \\ \\ {\bf Example 2:} $x=1000$, $MY(x)$ closed form approximate value to 10th decimal digit is $12.2745406200$. \\ \\ \begin{tabular}{|R{1.5cm}|R{3.5cm}|R{3.5cm}|R{3cm}|} \hline Iteration & $M_n(x)$ & $\left|M_n(x)-MY(x)\right|$ & $\left|\frac{M_n(x)}{MY(x)}-1\right|$ \\ \hline 0 & 12.2735762826 & 9.64 $10^{-04}$ & 7.86 $10^{-05}$\\ \hline 1 & 12.2745409317 & 3.12 $10^{-07}$ & 2.54 $10^{-08}$ \\ \hline 2 & 12.2745406200 & 6.85 $10^{-11}$ & 5.58 $10^{-12}$ \\ \hline \end{tabular} \\ \\ \\ Note that when $x<2/27$ (example 1), the closed form expression of $MY$ is trigonometric, and when $x \geq \frac{2}{27}$ (example 2) , the closed form expression is in radicals. Yet, the global algebraic (in radicals) approximation provided works well for both scenarios. \subsubsection {Numerical examples for solving cubic equations} {\bf Example 1:} $x^3+x+1=0$.\\ Here $p=1$ and $q=1$. The results of section 5.2, case 2 apply. There is a unique real solution $\alpha$. 
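The iterates $\alpha_n$ reported below can be reproduced with a minimal sketch of the fixed-point scheme of section 6.2 (Python; the function and variable names, and the iteration count, are ours):

```python
import math

def G(x, y):
    # G(x, y) = (2x + 1/27 + (1/3) sqrt(2x/(1+y)))^(1/3) - 1/3, eq. (9)
    return (2.0 * x + 1.0 / 27.0
            + math.sqrt(2.0 * x / (1.0 + y)) / 3.0) ** (1.0 / 3.0) - 1.0 / 3.0

def M(x, n):
    """n-th iterate: M_0(x) = G(x, x^(2/5)), M_{k+1}(x) = G(x, M_k(x))."""
    m = G(x, x ** 0.4)
    for _ in range(n):
        m = G(x, m)
    return m

# Case 2 of section 5.2 applied to x^3 + x + 1 = 0 (p = q = 1):
# alpha = q / (p (-2/3 - MY(2/27 + q^2/(2 p^3)))).
p, q = 1.0, 1.0
t = 2.0 / 27.0 + q**2 / (2.0 * p**3)
for n in range(4):
    print(n, q / (p * (-2.0 / 3.0 - M(t, n))))
```

Each additional iteration improves the estimate by roughly the factor $K\approx 25$ of the convergence theorem, consistent with the error columns of the table.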
Using the closed form of $MY$: \[ \alpha \approx -0.6823278038 \] $\alpha_n $ is the estimate of $\alpha$ using $ n $ iterations of the fixed point algorithm of $MY$.\\ \\ \begin{tabular}{|R{0.5cm}|R{4cm}|R{3cm}|R{3cm} |} \hline $n$ & $ \alpha_n $ & $ | \alpha_n- \alpha | $ & $ \left|\frac{\alpha_n}{\alpha}-1 \right|$ \\ \hline 0 & -0.6823458163 & 1.80 $10^{-05}$ & 2.64 $10^{-05}$\\ \hline 1 & -0.6823274572 & 3.47 $10^{-07}$ & 5.08 $10^{-07}$\\ \hline 2 & -0.6823278105 & 6.67 $10^{-09}$ & 9.78 $10^{-09}$\\ \hline 3 & -0.6823278037 & 1.28 $10^{-10}$ & 1.88 $10^{-10}$\\ \hline \end{tabular} \\ \\ \\ {\bf Example 2:} $x^3-3x+1=0$.\\ \\ Here $p=-3$ and $q=1$, which leads to $\xi =q\sqrt{-\frac{27}{4p^3}}= \frac{1}{2}$. The results of section 5.2, case 4 apply. There are three real roots $\alpha$, $\beta$ and $\gamma$. Using the closed form of $MY$: \[ \alpha \approx 1.5320888862 \qquad \beta \approx 0.3472963553 \qquad \gamma\approx -1.8793852416 \] $\alpha_n $, $\beta_n$ and $\gamma_n$ are the respective estimates of $\alpha$, $\beta$ and $\gamma$ using $ n $ iterations of the fixed point algorithm of $MY$.\\ \\ \begin{tabular}{|R{0.5cm}|R{4cm}|R{3cm}|R{3cm} |} \hline $n$ & $ \alpha_n $ & $ | \alpha_n- \alpha | $ & $ \left|\frac{\alpha_n}{\alpha}-1 \right|$ \\ \hline 0 & 1.5296764368 & 2.41 $10^{-03}$ & 1.57 $10^{-03}$\\ \hline 1 & 1.5321663348 & 7.74 $10^{-05}$ & 5.06 $10^{-05}$\\ \hline 2 & 1.5320864010 & 2.49 $10^{-06}$ & 1.62 $10^{-06}$\\ \hline 3 & 1.5320889660 & 7.79 $10^{-08}$ & 5.21 $10^{-08}$\\ \hline 4 & 1.5320888837 & 2.56 $10^{-09}$ & 1.67 $10^{-09}$\\ \hline 5 & 1.5320888863 & 8.21 $10^{-11}$ & 5.36 $10^{-11}$\\ \hline \end{tabular} \\ \\ \begin{tabular}{|R{0.5cm}|R{4cm}|R{3cm}|R{3cm} |} \hline $n$ & $ \beta_n $ & $ | \beta_n- \beta | $ & $ \left|\frac{\beta_n}{\beta}-1 \right|$ \\ \hline 0 & 0.3476559549 & 3.60 $10^{-04}$ & 1.04 $10^{-03}$\\ \hline 1 & 0.3472848043 & 1.16 $10^{-05}$ & 3.33 $10^{-05}$\\ \hline 2 & 0.3472967260 & 3.71 $10^{-07}$ & 1.07 $10^{-06}$\\ 
\hline 3 & 0.3472963434 & 1.19 $10^{-08}$ & 3.42 $10^{-08}$\\ \hline 4 & 0.3472963557 & 3.82 $10^{-10}$ & 1.10 $10^{-09}$\\ \hline 5 & 0.3472963553 & 1.22 $10^{-11}$ & 3.53 $10^{-11}$\\ \hline \end{tabular} \\ \\ \\ \begin{tabular}{|R{0.5cm}|R{4cm}|R{3cm}|R{3cm} |} \hline $n$ & $ \gamma_n $ & $ | \gamma_n- \gamma | $ & $ \left|\frac{\gamma_n}{\gamma}-1 \right|$ \\ \hline 0 & -1.8773323917 & 2.05 $10^{-03}$ & 1.09 $10^{-03}$\\ \hline 1 & -1.8794511391 & 6.59 $10^{-05}$ & 3.51 $10^{-05}$\\ \hline 2 & -1.8793831270 & 2.11 $10^{-06}$ & 1.13 $10^{-06}$\\ \hline 3 & -1.8793853094 & 6.79 $10^{-08}$ & 3.61 $10^{-08}$\\ \hline 4 & -1.8793852394 & 2.18 $10^{-09}$ & 1.16 $10^{-09}$\\ \hline 5 & -1.8793852416 & 6.99 $10^{-11}$ & 3.72 $10^{-11}$\\ \hline \end{tabular} \\ \\ \\ Clearly, the goal here is not to provide the most efficient root-finding algorithm (see the Newton, secant, Steffensen, Halley, Laguerre, Aberth--Ehrlich and Durand--Kerner methods, among others). Instead, our aim is merely to shed light on an algorithm that approximates the function $MY$, and hence the roots, with real radicals, especially in the casus irreducibilis. \section{Hypergeometric representation} The objective of this section is to express $MY$ using hypergeometric functions. Recall that $MY(x)$ is the unique positive solution of the equation: \[z^3+z^2-2x=0\] For $x>0$ we set $y=1/z$, which implies that: \[y^3-\frac{1}{2x}y-\frac{1}{2x}=0\] We use the method provided by Zucker in [8]. 
Define: \[p'=-\frac{1}{2x} \qquad q'=-\frac{1}{2x} \qquad \textrm{and} \qquad \Delta'=\left(\frac{q'}{2}\right)^{2}+\left(\frac{p'}{3}\right)^{3}\] The Cardano formula expresses the root as $y=u+v$ with: \[u=\left(-\frac{q'}{2}+\sqrt{\Delta'}\right)^{\frac{1}{3}} \qquad v=\left(-\frac{q'}{2}-\sqrt{\Delta'}\right)^{\frac{1}{3}}\] Or: \[u+v=\left(-\frac{q'}{2}\right)^{\frac{1}{3}}\left(\left(1+\sqrt{\frac{\Delta'}{(q'/2)^{2}}}\right)^{\frac{1}{3}}+\left(1-\sqrt{\frac{\Delta'}{(q'/2)^{2}}}\right)^{\frac{1}{3}}\right)\] Using the identity: \[(1+z)^{-2a}+(1-z)^{-2a}=2 F(a,a+\frac{1}{2},\frac{1}{2},z^2) \qquad \textrm{for }\qquad a=-\frac{1}{6}\] where $F$ is the Gaussian hypergeometric function (analytically continued), we obtain: \[u+v=2\left(-\frac{q'}{2}\right)^{\frac{1}{3}}F\left(-\frac{1}{6}, \frac{1}{3}, \frac{1}{2},\frac{\Delta'}{(q'/2)^{2}}\right)\] Using Kummer's transformation [11]: $F(a,b,c,z)=(1-z)^{-b}F(c-a,b,c,\frac{z}{z-1})$: \[u+v=\frac{3q'}{p'}\,F\left( \frac{1}{3},\frac{2}{3}, \frac{1}{2},\frac{\Delta'}{(p'/3)^{3}}\right)\] Or, since here $p'=q'$ and $\frac{\Delta'}{(p'/3)^{3}}=1-\frac{27x}{2}$: \[u+v=3 F\left(\frac{2}{3}, \frac{1}{3}, \frac{1}{2},1-\frac{27 x}{2}\right) \qquad (23)\] Using Kummer's transformation a second time: \[u+v=3 \left(\frac{27 x}{2}\right)^{-\frac{2}{3}}F\left(\frac{1}{6}, \frac{2}{3}, \frac{1}{2},1-\frac{2}{27x}\right) \qquad (24)\] If $x\leq \frac{2}{27}$, using formula $(23)$, $u+v$ is positive. Likewise when $x\geq \frac{2}{27}$, formula $(24)$ shows that $u+v$ is positive.\\ \\ Therefore, since $MY(x)=1/y=1/(u+v)$: \[{\bf MY(x)=\frac{1}{3 F\left(\frac{1}{3},\frac{2}{3}, \frac{1}{2},1-\frac{27 x}{2}\right)}} \] Notice that \[\lim\limits_{z \to 1}F\left(\frac{1}{3},\frac{2}{3}, \frac{1}{2},z\right)=+\infty \] which is consistent with $MY(0)=0$. 
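The closed forms above are straightforward to check numerically. The sketch below is ours (Newton's method, a standard root finder, not the paper's radical fixed-point algorithm): it computes $MY(x)$ as the unique positive root of $z^3+z^2-2x=0$ and evaluates the Gauss series of $F\left(\frac{1}{3},\frac{2}{3};\frac{1}{2};\cdot\right)$, which converges whenever the argument $1-\frac{27x}{2}$ lies inside the unit disk (i.e.\ for $0<x<\frac{4}{27}$):

```python
def my(x, tol=1e-14, max_iter=100):
    """Unique positive root of z^3 + z^2 - 2x = 0 via Newton's method
    (a standard root finder, not the paper's radical fixed-point scheme)."""
    if x == 0:
        return 0.0
    z = (2.0 * x) ** 0.5  # f(sqrt(2x)) = (2x)^{3/2} >= 0, so Newton descends to the root
    for _ in range(max_iter):
        step = (z ** 3 + z ** 2 - 2.0 * x) / (3.0 * z ** 2 + 2.0 * z)
        z -= step
        if abs(step) <= tol * max(1.0, abs(z)):
            break
    return z

def gauss_2f1(a, b, c, z, n_terms=800):
    """Truncated Gauss series for the hypergeometric function 2F1(a,b;c;z), |z| < 1."""
    term, total = 1.0, 1.0
    for n in range(n_terms):
        term *= (a + n) * (b + n) / ((c + n) * (1 + n)) * z
        total += term
    return total
```

For larger $x$ the raw series no longer applies and the continuation (24) would be needed; the Newton values can in any case be checked directly against the numerical tables of the previous section.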
\section{Properties of $MY$} Below we prove all of the following properties:\\ \\ {\bf Equalities:} \begin{enumerate} \item For $x \in [\frac{2}{27},+\infty [$: \[MY\left(x\right)= \frac{\sqrt[3]{2x\left(x-\sqrt{x\left (x-\frac {2}{27}\right)}\right)}}{\frac{1}{3}+\sqrt[3]{\left (x-\frac {1}{27} \right)-\sqrt{x\left (x-\frac {2}{27}\right)}}}\] \item For all positive real numbers $x$, $MY$ satisfies the identity: \[MY\left(x\right)\left( 3 MY\left(\sqrt{\frac{x}{54}}+\frac{1}{27}\right)+1\right)=\sqrt{6 x}\] \item For all real numbers $x > 0$: \[MY(x)=\frac{1}{MY\left(\frac{x}{(MY(x))^5}\right) } \] \item For $0 \leq x \leq \frac{2}{27}$, the three roots of the equation $z^3+z^2=2x$ are: \[z_1=MY(x)\] \[z_2=-\frac{2}{3}-MY\left(\frac{2}{27}-x\right) =-\frac{1+MY(x)}{2}\left(1+\sqrt{\frac{1-3 MY(x)}{1+MY(x)}}\right)\] \[z_3=MY\left(\frac{2}{27}-x\right)-MY(x)-\frac{1}{3} =-\frac{1+MY(x)}{2}\left(1-\sqrt{\frac{1-3 MY(x)}{1+MY(x)}}\right)\] Consequently \[MY\left(\frac{2}{27}-x\right) =\frac{1+MY(x)}{2}\left(1+\sqrt{\frac{1-3 MY(x)}{1+MY(x)}}\right)-\frac{2}{3}\] \item \[\lim\limits_{x \to 0}\frac{MY\left(x\right)}{\sqrt{2x}}=1 \qquad \textrm{and} \qquad \lim\limits_{x \to +\infty}\frac{MY\left(x\right)}{\sqrt[3]{2x}}=1\]\\ Consequently for any $x \geq 0$ \[ \sqrt{x}=\lim\limits_{\epsilon \to 0^+} \frac{1}{\epsilon} MY\left(\frac{x \epsilon^2}{2}\right) \qquad \textrm{and} \qquad \sqrt[3]{x}=\lim\limits_{\epsilon \to 0^+} \epsilon MY\left(\frac{x}{2\epsilon^3}\right)\] \end{enumerate} {\bf Inequalities:} \begin{enumerate} \item For all positive real numbers $x$: \[\sqrt{\frac{2 x}{1+x^{\frac{2}{5}}}} \leq MY(x) \leq x^{\frac{2}{5}} \] \item For all real numbers $x > 0$: \begin{enumerate} \item If $a$ is a real number such that $0 \leq a \leq1$: \[MY\left(x^a\right) \geq \left(MY(x)\right)^a\] \item If $a$ is a real number such that $a \leq 0$ or $a \geq 1$: \[MY\left(x^a\right) \leq \left(MY(x)\right)^a\] \end{enumerate} \item For $x \in 
[0,1]$: \[\sqrt{x} \leq MY(x) \leq \sqrt[3]{x} \] \item For $x \in [1,+\infty[$: \[\sqrt[3]{x} \leq MY(x) \leq \sqrt{x} \] \end{enumerate} \newpage {\bf Derivative and primitive:} \begin{enumerate} \item For $x>0$ the derivative of $MY$ is given by the expression: \[ MY'\left(x\right)=\frac{2}{3 MY^2(x)+2 MY(x)} \] \item For $x \geq 0$ a primitive of $MY$ is given by the expression: \[ \frac{3}{4} x MY\left(x\right)- \frac{x}{12}+ \frac{MY^2\left(x\right)}{24}\] \end{enumerate} {\bf Proof of the properties of $MY$}\\ \\ {\bf Equalities:} \begin{enumerate} \item The objective is to prove: \[MY(x)=\frac{v}{1-u} \qquad (25)\] Where \[ u= 3 \sqrt[3]{\left (\frac {1}{27}-x \right)+\sqrt{x\left (x-\frac {2}{27}\right)}} \qquad v= 3 \sqrt[3]{2x\left(x-\sqrt{x\left (x-\frac {2}{27}\right)}\right)}\] \[\frac{v}{1-u}=\frac{v(1+u+u^2)}{1-u^3}=\frac{v}{1-u^3}+\frac{vu}{1-u^3}+\frac{vu^2}{1-u^3} \qquad (26)\] Then notice: \begin{enumerate} \item \[ \frac{1}{u}= 3\sqrt[3]{\left (\frac {1}{27}-x \right)-\sqrt{x\left (x-\frac {2}{27}\right)}}\] \item \[ v^2= -6 x u\] \item \[ \frac{v^3}{1-u^3}= 2x\] \end{enumerate} Therefore, the three terms in $(26)$ can be expressed as: \[\frac{u v}{1-u^3}=\frac{v\frac{v^2}{(-6x)}}{1-u^3}=-\frac{1}{6x}\frac{v^3}{1-u^3}=-\frac{2x}{6x}=-\frac{1}{3}\] \[\frac{u^2 v}{1-u^3}=u\frac{vu}{1-u^3}=-\frac{u}{3}=\sqrt[3]{\left (x-\frac {1}{27} \right)-\sqrt{x\left (x-\frac {2}{27}\right)}}\] \[\frac{v}{1-u^3}=\frac{1}{u}\frac{vu}{1-u^3}=-\frac{1}{3u}=\sqrt[3]{\left (x-\frac {1}{27} \right)+\sqrt{x\left (x-\frac {2}{27}\right)}}\] Which adds up to: \[MY(x)=\frac{v}{1-u}\] In other words: \[MY\left(x\right)= \frac{\sqrt[3]{2x\left(x-\sqrt{x\left (x-\frac {2}{27}\right)}\right)}}{\frac{1}{3}+\sqrt[3]{\left (x-\frac {1}{27} \right)-\sqrt{x\left (x-\frac {2}{27}\right)}}}\] \item For any $x >0 $, define $z=MY(x)$: \[\frac{z^3+z^2}{2}=x\] A change of variable $y=1/z$ leads to: \[y^3-\frac{1}{2x}y-\frac{1}{2x}=0\] Let's introduce a second change of variable: \[y=3 w 
\sqrt{\frac{1}{6x}}+\sqrt{\frac{1}{6x}}\] which leads to: \[\frac{w^3+w^2}{2}=\sqrt{\frac{x}{54}}+\frac{1}{27}\] so: \[ w=MY\left( \sqrt{\frac{x}{54}}+\frac{1}{27} \right) \] Since $y=1/z=\frac{1}{MY(x)}$: \[MY\left(x\right)\left( 3 MY\left(\sqrt{\frac{x}{54}}+\frac{1}{27}\right)+1\right)=\sqrt{6 x}\] \item Let $z=MY(x)$: \[\frac{z^3+z^2}{2}=x\] By multiplying both sides by $z^{-5}$: \[\frac{z^{-3}+z^{-2}}{2}=x z^{-5}\] Or \[\frac{1}{z}=MY\left(\frac{x}{z^5}\right)\] Therefore \[MY(x)=\frac{1}{MY\left(\frac{x}{(MY(x))^5}\right) } \] \item For $0 \leq x \leq \frac{2}{27}$, $z_1=MY(x)$ is a root of the equation $z^3+z^2=2x$. The other two roots, $z_2$ and $z_3$, can be found by symmetry and using Vieta's formula as provided in section 5.1. Alternatively: \[ z_2+z_3=-1-z_1 \qquad \textrm{and} \qquad z_2z_3=2x/z_1=z_1(1+z_1)\] $z_2$ and $z_3$ are therefore roots of a quadratic equation. The discriminant is: \[\delta=(1+z_1)^2\frac{1-3 z_1}{1+z_1}\] Therefore \[z_2=-\frac{1+z_1}{2}\left(1+\sqrt{\frac{1-3z_1}{1+z_1}}\right)\] \[z_3=-\frac{1+z_1}{2}\left(1-\sqrt{\frac{1-3z_1}{1+z_1}}\right)\] Notice $z_2 \leq z_3$. Using the results from section 5.1:\\ \[z_2=-\frac{2}{3}-MY\left(\frac{2}{27}-x\right) =-\frac{1+MY(x)}{2}\left(1+\sqrt{\frac{1-3 MY(x)}{1+MY(x)}}\right)\] \[z_3=MY\left(\frac{2}{27}-x\right)-MY(x)-\frac{1}{3} =-\frac{1+MY(x)}{2}\left(1-\sqrt{\frac{1-3 MY(x)}{1+MY(x)}}\right)\] Consequently \[MY\left(\frac{2}{27}-x\right) =\frac{1+MY(x)}{2}\left(1+\sqrt{\frac{1-3 MY(x)}{1+MY(x)}}\right)-\frac{2}{3}\] \item $MY$ is the inverse of $f$. 
Since $\lim \limits_{x \to 0}f(x)=0$ and $\lim \limits_{x \to +\infty} f(x)=+\infty$: \[\lim \limits_{x \to 0}MY(x)=0 \qquad \textrm{and} \qquad \lim \limits_{x \to +\infty} MY(x)=+\infty\] Also $MY(x)^2\left(1+MY(x)\right)=2x$ which leads to: \[\frac{MY(x)}{\sqrt{2x}}=\sqrt{\frac{1}{1+MY(x)}}\] Therefore: \[\lim \limits_{x \to 0}\frac{MY(x)}{\sqrt{2x}}=1\] \\ \\ Similarly, since $MY(x)^3\left(1+\frac{1}{MY(x)}\right)=2x$: \[\frac{MY(x)}{\sqrt[3]{2x}}=\sqrt[3]{\frac{1}{1+\frac{1}{MY(x)}}}\] Therefore \[\lim \limits_{x \to \infty}\frac{MY(x)}{\sqrt[3]{2x}}=1\] \\ \\ \\ Consequently for any $x > 0$ \[ \sqrt{x}=\lim\limits_{\epsilon \to 0^+} \frac{1}{\epsilon} MY\left(\frac{x \epsilon^2}{2}\right) \qquad \textrm{and} \qquad \sqrt[3]{x}=\lim\limits_{\epsilon \to 0^+} \epsilon MY\left(\frac{x}{2\epsilon^3}\right)\] This holds also for $x=0$. \end{enumerate} {\bf Inequalities:} \begin{enumerate} \item For all positive real numbers $x$ \[f\left(x^{\frac{2}{5}}\right)=\frac{1}{2} \left(x^{\frac{6}{5}}+x^{\frac{4}{5}}\right)=x\left(1+\frac{1}{2} \left(x^{\frac{1}{10}}-x^{-\frac{1}{10}}\right)^2\right) \geq x\] Since $f$ is strictly increasing \[MY(x) \leq x^{\frac{2}{5}} \] In addition \[MY(x)=\sqrt{\frac{2x}{1+MY(x)}} \geq \sqrt{\frac{2x}{1+x^{\frac{2}{5}}}}\] \item Let's consider the function $h(x)= x^a$ and define $z=MY(x)$: \begin{enumerate} \item For $0 \leq a \leq 1$, $h$ is concave and: \[\frac{h(z^3)+h(z^2)}{2} \leq h\left(\frac{z^3+z^2}{2}\right)=h(x)\] Since $h(z^3)=(h(z))^3$ and $h(z^2)=(h(z))^2$: \[\frac{(h(z))^3+(h(z))^2}{2} \leq h(x) \qquad \textrm{and }\qquad h(z) \leq MY(h(x))\] Therefore \[MY\left(x^a\right) \geq \left(MY(x)\right)^a\] \item Similarly, for $a\leq 0$ or $a\geq 1$, $h$ is convex and we have: \[MY\left(x^a\right) \leq \left(MY(x)\right)^a\] \end{enumerate} \item For all $x \in [0,1]$: \[f(\sqrt{x}) =x\frac{\sqrt{x}+1}{2} \leq x\] \[f(\sqrt[3]{x}) =x\frac{x^{-1/3}+1}{2} \geq x\] Since $f$ is strictly increasing: \[\sqrt{x} \leq MY(x)\leq 
\sqrt[3]{x}\] \item For $x \in [1,+\infty[$: \[f(\sqrt{x}) =x\frac{\sqrt{x}+1}{2} \geq x\] \[f(\sqrt[3]{x}) =x\frac{x^{-1/3}+1}{2} \leq x\] Since $f$ is strictly increasing: \[ \sqrt[3]{x} \leq MY(x) \leq \sqrt{x}\] \end{enumerate} {\bf Derivative and primitive:} \begin{enumerate} \item Differentiating the identity $f(MY(x))=x$ leads to $MY'(x)f'(MY(x))=1$: \[ MY'\left(x\right)\left(\frac{3}{2}MY^2(x)+MY(x) \right)=1\] Which means: \[ MY'\left(x\right)=\frac{2}{3 MY^2(x)+2 MY(x)} \] \item Since $MY$ is continuous over $\R^{+}$, a primitive of $MY$ is given by: \[\int_{0}^{x} MY(t)dt\] Let's use a change of variable $MY(t)=w$ (or $t=f(w)$ and $dt=f'(w)dw$): \[\int_{0}^{x} MY(t)dt=\int_{0}^{MY(x)} w \left( \frac{3 w^2+2 w}{2}\right)dw\] Using $x=(MY^3(x)+MY^2(x))/2$, the integral can be simplified to: \[ \int_{0}^{x} MY(t)dt=\frac{3}{4} x MY\left(x\right)- \frac{x}{12}+ \frac{MY^2\left(x\right)}{24}\] \end{enumerate} \section{Acknowledgment} The authors would like to thank Dr. Nizar Demni, Dr. Hassan Youssfi and Dr. Daniel Moseley for their support and valuable feedback. \newpage
\section{Introduction} Non-Abelian anyons, fractionalized quasi-particle excitations with exotic statistics that arise in topologically ordered phases of correlated quantum many-body systems such as fractional quantum Hall states or $p+ip$ superconductors \cite{MoRe91,ReRe99,ReGr00}, have attracted tremendous interest in recent years -- not least due to their potential use as resources in quantum computing \cite{Kita03,NSSF08}. For several one-dimensional systems forming related topological phases these objects have been shown to be realized as topologically protected zero-energy modes localized at boundaries or defects \cite{Kitaev01,AsNa12,Tsve14a,BoFr17}. In the simplest case of the quantum Ising chain these modes are Majorana anyons; signatures of these have been found in experiments on heterostructures such as semiconductor quantum wires in proximity to superconductors \cite{MZFP12,Deng.etal12}. In the search for anyons beyond these Majorana zero modes there has recently been a revival of interest in $n$-state generalizations of the Ising model, i.e.\ the one-dimensional $\mathbf{Z}_n$ symmetric quantum clock models. For \emph{open} boundary conditions some of these systems have been found to have an $n$-fold degenerate ground state for chains of arbitrary length \cite{Fendley12,Fendley14,ARFGB16}. This non-trivial degeneracy cannot be resolved by the action of local symmetry preserving operators and is closely connected to the existence of localized zero energy modes with fractionalized charge. \emph{Periodic} boundary conditions (or, more generally, the addition of a coupling between the edges to the open chain Hamiltonian) destroy the ground state degeneracy, but still allow for the study of the low energy behaviour of the correlated system to identify the phases realized as the interaction is varied. 
Furthermore, some of these models are exactly solvable at certain values of the coupling constants, which allows one to provide insights beyond what is possible based on the numerical investigation of finite chains. An alternative approach to study some of the peculiar topological properties of non-Abelian anyons is based on certain deformations of quantum spin chains. Mathematically, the anyons in these models are objects in a braided tensor category equipped with operations describing their fusion and braiding \cite{Kita06}. Here the fusion rules underlie the construction of the many-anyon Hilbert space and determine the local interactions allowed in the lattice model in terms of so-called $F$-moves. In numerous studies of anyons satisfying the fusion rules of e.g.\ $su(2)_k$, $D(D_3)$, and $so(5)_2$ subject to different short-ranged interactions and in various geometries a variety of critical phases together with the conformal field theories (CFTs) providing the effective description of their low energy properties in gapless phases have been identified \cite{FTLT07,TAFH08,GTKL09,FiFr13,GATH13,Finch.etal14,FiFF14,BrFF16,VeJS17}. Exactly solvable anyon chains of this type are closely related to integrable two-dimensional classical lattice systems with `interactions round the face' (IRF): the corresponding anyonic Hamiltonian is a member of a family of commuting operators, generated by the transfer matrix of the classical model. For example, Fibonacci or, more generally, $su(2)_k$ anyon chains with nearest neighbour interactions \cite{FTLT07} can be derived from the critical restricted solid on solid (RSOS) models \cite{AnBF84}. Interestingly, the integrable structures underlying another realization of this approach, i.e.\ a particular $so(5)_2$ non-Abelian anyon chain \cite{FiFF14}, have been found to be closely related to those of an integrable clock model, the $\mathbf{Z}_5$ Fateev-Zamolodchikov model \cite{FaZa82,Albe94}. 
Such relationships have been observed between other integrable IRF models and vertex models (or, in the present context, anyon chains and quantum spin chains) \cite{Pasq88,Finch13}. Below we will provide some evidence that the connection between chains of anyons constructed from the $so(n)_2$ fusion category and a class of $n$-state clock models can be extended beyond the integrable points in the space of coupling constants and to general odd $n$. In the following two sections we introduce the Hamiltonians for both the $n$-state clock models and the $so(n)_2$ anyon chains. Both can be defined for given $n$ and integers $1\le \ell\le n$ coprime to $2n$. In the clock model the parameter $\ell$ enumerates the primitive $n$-th root of unity appearing in the diagonal Potts spin operator. On the other hand, in the anyon chain, $\ell$ labels the (gauge inequivalent) $F$-moves which have recently been constructed for the $so(n)_2$ fusion category \cite{ArFT16}. We show that both families of Hamiltonian operators have a $\mathbf{Z}_n\otimes \mathbf{Z}_2$ symmetry related to a relabelling of the $\mathbf{Z}_n$ spin basis and an automorphism of the $so(n)_2$ fusion rules, respectively. In addition, there exists a class of unitary transformations relating pairs of labels $(n,\ell)$. Depending on $n$ and $\ell$, this unitary relation has different consequences: it may imply the existence of several inequivalent models of the same type (i.e.\ $n$-state clock or $so(n)_2$ anyon models) but with different realizations of the local interactions, which have a second $\mathbf{Z}_2$ symmetry on the space of coupling constants. This is found to be the case for $n=5$, leading to a nearest neighbour $so(5)_2$ anyon chain with a different continuum limit than the one considered in Refs.~\cite{Finch.etal14,FiFF14}. In other cases, e.g.\ for $n=7$, models with different $\ell$ are unitarily equivalent, which means that each one of them maps out the complete parameter space of coupling constants. 
Within this space we also identify points where the chains become integrable. In Section~\ref{sec:PhasePortraits} we study the zero temperature phase diagram of the models for $n=3,5,7$ using a variational matrix product ansatz for their translationally invariant states in the thermodynamic limit. For the integrable models we also analyze the low energy behaviour: using the Bethe ansatz solution of these models we compute their ground state energies and classify the excitations. From the finite size spectrum, obtained also from the Bethe ansatz or by numerical diagonalization of the Hamiltonian, we compute the lowest scaling dimensions. This allows us to identify the low energy effective description in terms of rational CFTs with extended symmetries related to that of the lattice model. Many of these rational CFTs have central charge $c=1$. Hence, they can only be rational due to extended chiral symmetry algebras $\mathcal{W}\mathfrak{g}$. These are Casimir algebras of affine Kac-Moody algebras $\hat{\mathfrak{g}}$ related to Lie algebras of type $B_\ell\simeq SO(2\ell+1)$ or $D_\ell\simeq SO(2\ell)$. Naively, these extended chiral symmetry algebras get larger with increasing $n$. However, for the particular value $c=1$ of the central charge, additional null fields appear such that many of the generators of these $\mathcal{W}$-algebras become algebraically dependent. For example, the $\mathcal{W}D_\ell$-algebras are generated by fields of conformal weights $2,4,6,\ldots,2(\ell-1),\ell$ but at the particular central charge $c=1$, only the fields with the weights $2,4,\ell$ remain algebraically independent. Since all rational CFTs with central charge $c=1$ are classified, it is known that, e.g.\ every rational $c=1$ $\mathbb{Z}_2$ orbifold theory has a $\mathcal{W}$-algebra of type $\mathcal{W}(2,4,k)$ with $k$ half-integer or integer. 
Therefore, Casimir-type $\mathcal{W}$ algebras, which exist for generic values of the central charge, must collapse to these smaller ones at $c=1$. The case $k=\ell\in\mathbb{Z}$ corresponds to $\mathcal{W}D_\ell$, and the case $k=\ell+\frac12\in\mathbb{Z}+\frac12$ corresponds to $\mathcal{W}\mathcal{B}_{0,\ell}$. The latter is an alternative $\mathcal{W}$-algebra for the $B_\ell$ series with a fermionic generator, realized from the Lie superalgebra $\mathcal{B}_{(0,\ell)}\simeq OSp(1|2\ell)$. Thus, all cases of the Lie algebras $\mathfrak{g}={SO}(n)$ are covered. We note that the observed spectral equivalence between the clock models and anyon chains only holds up to degeneracies. Furthermore, the Hilbert space of the anyons can be decomposed into sectors labelled by conserved topological charges, which correspond to different boundary conditions in the clock models. Thus, the full spectrum of conformal dimensions of the underlying CFT is, in general, present only in the finite size spectrum of the anyon chains. Some technical background on the construction of anyon chains and the analysis of the Bethe equations, as well as on the rational CFTs relevant to the models considered in this paper, is presented in the appendices. \section{The $\mathbf{Z}_{n}$ clock models} \subsection{The general model} We construct a family of quantum spin chains with an odd number $n$ of states per site. For each pair of integers $(n,\ell)$, $1\leq \ell \leq n$ and $\gcd(\ell,2n)=1$, the global Hamiltonian for a chain of length $L$ acting on the Hilbert space $[\mathbb{C}^{n}]^{\otimes L}$ is defined by \begin{equation} \label{eqHamFZ} \begin{aligned} \mathcal{H}_{(n,\ell)}(\boldsymbol{c}) & = \sum_{j=1}^{L} \left\{ c_{0}I + \sum_{k=1}^{n-1} c_{k}\left[X_{j}^{k} + Z_{j}^{k}Z_{j+1}^{-k}\right]\right\} \,, \end{aligned} \end{equation} where $c_{k}=c_{n-k}\in\mathbb{R}$. 
The local operators $X_j$, $Z_j$ act non-trivially only on the spin at site $j$, as \begin{align*} X & = \ket{n}\bra{1} + \sum_{k=1}^{n-1} \ket{k}\bra{k+1}\,, & Z & = \sum_{k=1}^{n} \mathrm{e}^{\frac{4i\ell k\pi}{n}} \ket{k}\bra{k}\,. \end{align*} The $\frac{n+1}{2}$ coupling constants $c_{k}$ span the parameter space of the clock model (\ref{eqHamFZ}). However, as we are free to normalise and shift the Hamiltonian, the parameter space of $\mathcal{H}_{(n,\ell)}(\boldsymbol{c})$ equals the surface of a $\frac{n-1}{2}$-sphere. \subsection{Symmetries and maps} To discuss the symmetries and equivalences between the general $n$-state clock models we first consider bijections from the set of integers $\{1,\dots,n\}$ to itself. Specifically, we define the maps $\nu_{\downarrow}$ and $\nu_{-}$ \begin{align*} \nu_{\downarrow}(k) & = k-1 \mod \, n\,, & \nu_{-}(k) & = - k \mod \, n\,, \end{align*} where $k\in\{1,\dots,n\}$. From these we can construct transformations of the global Hamiltonian (\ref{eqHamFZ}) \begin{align*} U^{\nu} & = u^{\nu} \otimes u^{\nu} \otimes \cdots \otimes u^{\nu}\,, \qquad u^{\nu} = \sum_{k=1}^{n} |\nu(k)\rangle\langle k|\,, \end{align*} where $\nu$ is one of the aforementioned maps. It is straightforward to see that the Hamiltonian $\mathcal{H}_{(n,\ell)}(\boldsymbol{c})$ is invariant under $U^{\nu_{\downarrow}}$ and $U^{\nu_{-}}$ (we identify $c_{0}\equiv c_{n}$): \begin{align*} U^{\nu_{\downarrow}} \mathcal{H}_{(n,\ell)}(\boldsymbol{c}) & = \mathcal{H}_{(n,\ell)}(\boldsymbol{c}) U^{\nu_{\downarrow}}\,,\\ U^{\nu_{-}} \mathcal{H}_{(n,\ell)}(\boldsymbol{c}) & = \mathcal{H}_{(n,\ell)}(\boldsymbol{c}) U^{\nu_{-}}\,. \end{align*} The first of these equations establishes the $\mathbf{Z}_{n}$ symmetry of the clock model as a consequence of $u^{\nu_{\downarrow}}=X$. The invariance under $U^{\nu_{-}}$ together with the Hermiticity of the Hamiltonian, i.e.\ $c_{n-k}=c_{k}$, implies that the clock model is $\mathbf{Z}_{2}$-invariant. 
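These definitions are easy to probe numerically. The following sketch (ours, not from the paper) builds the single-site matrices in pure Python and checks the Weyl-type commutation relation $XZ=\omega ZX$ with $\omega=\mathrm{e}^{4i\pi\ell/n}$, together with $X^{n}=Z^{n}=I$, all of which follow directly from the definitions above:

```python
import cmath

def clock_ops(n, ell):
    """Single-site clock matrices: X|k> = |k-1> (cyclically) and Z|k> = exp(4*pi*i*ell*k/n)|k>."""
    X = [[1.0 + 0j if (c - r) % n == 1 else 0j for c in range(n)] for r in range(n)]
    Z = [[cmath.exp(4j * cmath.pi * ell * (r + 1) / n) if r == c else 0j
          for c in range(n)] for r in range(n)]
    return X, Z

def matmul(A, B):
    n = len(A)
    return [[sum(A[r][k] * B[k][c] for k in range(n)) for c in range(n)] for r in range(n)]

def mat_pow(A, p):
    R = [[1.0 + 0j if r == c else 0j for c in range(len(A))] for r in range(len(A))]
    for _ in range(p):
        R = matmul(R, A)
    return R

def close(A, B, tol=1e-12):
    return all(abs(A[r][c] - B[r][c]) < tol for r in range(len(A)) for c in range(len(A)))
```

The choice $(n,\ell)=(5,3)$ in the check below is an arbitrary admissible pair; any $\ell$ with $\gcd(\ell,2n)=1$ behaves the same way.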
Similarly, we find that under the transformation generated by the bijections $\nu_t$, $1\leq t \leq n$ and $\gcd(t,2n)=1$, \begin{align} \label{map-nut} & \nu_{t}(k) = tk \mod \, n\,, \end{align} the Hamiltonians $\mathcal{H}_{(n,\ell)}$ and $\mathcal{H}_{(n,\ell')}$ with $\ell'=\pm t^{2}\ell\mod\,n$ are related as \begin{align} \label{inv-nut} \mathcal{H}_{(n,\ell')}(\nu_{t}({\boldsymbol{c}})) & = \left[U^{\nu_{t}}\right]^{-1} \mathcal{H}_{(n,\ell)}({\boldsymbol{c}})\, U^{\nu_{t}}\,, \qquad \mbox{where} \quad \nu_{t}(c)_{k} = c_{\nu_{t}(k)}\,. \end{align} This identity implies that some of the Hamiltonians (\ref{eqHamFZ}) for different $(n,\ell)$ may be equivalent up to a basis transformation and simultaneous permutation of the coupling constants. Furthermore, we have $\ell'=\ell$ if $t^2=\pm1\mod n$. This implies another $\mathbf{Z}_{2}$-symmetry in the space of coupling constants, since $\nu_{t}\circ\nu_{t}=\nu_{t^{2}}$ acts trivially on the coupling constants, see Figure~\ref{fig:hamrel}. \begin{figure}[ht] \begin{center} \begin{tikzpicture} \tikzstyle{every loop}=[] \node (F21) at ( 0, 0) {$\mathcal{H}_{(5,1)}$}; \node (F23) at ( 2, 0) {$\mathcal{H}_{(5,3)}$}; \draw [thick, ->, loop,looseness=5] (F21) to node [above] {\small $\nu_{3}$} (F21); \draw [thick, ->, loop,looseness=5] (F23) to node [above] {\small $\nu_{3}$} (F23); \node (F21) at ( 6, -0.73) {$\mathcal{H}_{(7,1)}$}; \node (F23) at ( 7.5, 1.85) {$\mathcal{H}_{(7,3)}$}; \node (F25) at ( 9, -0.73) {$\mathcal{H}_{(7,5)}$}; \draw [thick, ->, transform canvas={yshift=0.1cm}] (F21) to node [above] {\small $\nu_{3}$} (F25); \draw [thick, ->, transform canvas={xshift=-0.1cm}] (F25) to node [left, transform canvas={xshift=0.1cm,yshift=-0.2cm}] {\small $\nu_{3}$} (F23); \draw [thick, ->, transform canvas={xshift=0.1cm}] (F23) to node [right, transform canvas={xshift=-0.1cm,yshift=-0.2cm}] {\small $\nu_{3}$} (F21); \draw [thick, ->, transform canvas={xshift=-0.1cm}] (F21) to node [left] {\small $\nu_{5}$} (F23); \draw [thick, ->, transform 
canvas={xshift=0.1cm}] (F23) to node [right] {\small $\nu_{5}$} (F25); \draw [thick, ->, transform canvas={yshift=-0.1cm}] (F25) to node [below] {\small $\nu_{5}$} (F21); \end{tikzpicture} \end{center} \caption{One can draw directed lines between the general Hamiltonians for each map $\nu_{t}$ (\ref{map-nut}). For $n=5$ we see that the two Hamiltonians are mapped onto themselves, indicating both have a $\mathbf{Z}_{2}$ symmetry in their phase space. On the other hand, for $n=7$ the three Hamiltonians are mapped onto each other, indicating the three different models are actually equivalent. \label{fig:hamrel}} \end{figure} \subsection{Integrability} Upon fine-tuning of the coupling constants the $\mathbf{Z}_{n}$ clock models (\ref{eqHamFZ}) are integrable, i.e.\ members of a family of commuting operators. We find that there are two different types of such integrable points: For $c_k\equiv c$ independent of $k$, the clock model is the $n$-state Potts model and the Hamiltonian can be given as \cite{Levy91} \begin{equation} \mathcal{H}^{TL}_{(n)} \propto \sum_{i=1}^{2L} E_{i}\,,\qquad E_{i} = \frac{1}{\sqrt{n}}\sum_{k=0}^{n-1} \, \begin{cases} X_{j}^{k} & i=2j-1 \\ Z_{j}^{k}Z_{j+1}^{-k} & i=2j \end{cases} \,, \end{equation} where the $E_i$, $1 \leq i \leq 2L$, satisfy the relations of the periodic Temperley--Lieb algebra \begin{equation} \label{TLalgebra} \begin{aligned} E_{i}E_{i} & = \sqrt{n}\, E_{i}\,,\\ E_{i}E_{i\pm1}E_{i} & = E_{i}\,, \\ E_{i}E_{j} & = E_{j}E_{i}\,, \end{aligned} \end{equation} with $i -j \neq \pm 1$ (the indices and their difference are to be interpreted modulo $2L$). Clearly, these Hamiltonians are independent of the parameter $\ell$. 
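These algebraic relations can be verified directly on a small chain. The sketch below (ours) constructs the generators for $n=3$, $L=2$ in the normalization $E_{i}=n^{-1/2}\sum_{k=0}^{n-1}(\,\cdot\,)$, with the $k=0$ identity term included (it only shifts the Hamiltonian by a constant, but is needed for the quoted algebra to close), and checks the Temperley--Lieb relations numerically:

```python
import cmath

def matmul(A, B):
    n = len(A)
    return [[sum(A[r][k] * B[k][c] for k in range(n)) for c in range(n)] for r in range(n)]

def close(A, B, tol=1e-9):
    return all(abs(A[r][c] - B[r][c]) < tol
               for r in range(len(A)) for c in range(len(A)))

def tl_generators(n, L, ell=1):
    """E_1, ..., E_{2L} for the periodic n-state Potts chain:
    E_{2j-1} = n^{-1/2} sum_{k=0}^{n-1} X_j^k,  E_{2j} = n^{-1/2} sum_{k=0}^{n-1} Z_j^k Z_{j+1}^{-k}."""
    dim = n ** L
    omega = cmath.exp(4j * cmath.pi * ell / n)

    def digits(s):  # spin configuration of basis state s
        return [(s // n ** i) % n for i in range(L)]

    Es = []
    for j in range(L):
        A = [[0j] * dim for _ in range(dim)]  # odd generator: sum over powers of X_j
        for c in range(dim):
            d = digits(c)
            for k in range(n):
                r = c + ((d[j] - k) % n - d[j]) * n ** j
                A[r][c] += n ** -0.5
        B = [[0j] * dim for _ in range(dim)]  # even generator: diagonal, sum over Z_j^k Z_{j+1}^{-k}
        for c in range(dim):
            d = digits(c)
            B[c][c] = sum(omega ** (k * (d[j] - d[(j + 1) % L])) for k in range(n)) * n ** -0.5
        Es.extend([A, B])
    return Es
```

The same check goes through for larger $n$ or $L$, at the cost of dense $n^{L}\times n^{L}$ matrix products.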
The second type of integrable points of (\ref{eqHamFZ}) are the $\mathbf{Z}_{n}$ Fateev--Zamolodchikov models \cite{FaZa82}: for each pair of integers $(n,\ell_{FZ})$ with $1\le \ell_{FZ} \le n$ and $\gcd(\ell_{FZ},2n)=1$ a Fateev--Zamolodchikov $R$-matrix can be defined as \begin{equation} \label{eqRmatFZ} \begin{aligned} R(u) & = \sum_{a,b,c,d=1}^{n} \overline{W}_{b-c}(u) W_{b-d}(u) \overline{W}_{a-d}(u) W_{a-c}(u) \Big(\ket{a}\bra{b}\Big) \otimes \Big(\ket{c}\bra{d}\Big)\,, \end{aligned} \end{equation} where the weights are given by \begin{align*} W_{a}(u) & = \prod_{k=1}^{g_{1}(a)}\sinh\left(\frac{i\pi (2k-1) \ell_{FZ}}{2n}+u\right) \prod_{k=g_{1}(a)+1}^{\frac{n-1}{2}} \sinh\left(\frac{i\pi (2k-1) \ell_{FZ}}{2n}-u\right)\,, \\ \overline{W}_{a}(u) & = \prod_{k=1}^{g_{1}(a)} \sinh\left(\frac{i\pi (k-1) \ell_{FZ}}{n}-u\right) \prod_{k=g_{1}(a)+1}^{\frac{n-1}{2}} \sinh\left(\frac{i\pi k \ell_{FZ}}{n}+u\right)\,. \end{align*} Here and for later convenience we have defined the two functions \begin{equation} \label{eqggmap} \begin{aligned} g_{1}:\mathbb{Z} & \rightarrow \{0,\dots,\tfrac{n-1}{2}\} & \mbox{such that} && g_{1}(i) & = \pm i\mod\,\, n\,, \\ g_{2}:\mathbb{Z} & \rightarrow \{0,\dots,n-1\} & \mbox{such that} && g_{2}(i) & = \pm i\mod\,\, 2n\,. \end{aligned} \end{equation} The $R$-matrices (\ref{eqRmatFZ}) satisfy the Yang--Baxter equation \begin{align*} R_{12}(u)R_{13}(u+v)R_{23}(v) & = R_{23}(v)R_{13}(u+v)R_{12}(u)\, \end{align*} allowing for the construction of commuting transfer matrices \begin{align*} t(u) = \mbox{Tr}_{0} \left[ R_{01}(u)R_{02}(u)\cdots R_{0L}(u)\right]\,. \end{align*} The logarithmic derivative of the latter yields the integrable Hamiltonians \begin{align*} \mathcal{H}^{FZ}_{(n)}(\ell_{FZ},J) &\propto J \left.\frac{\partial}{\partial u}\ln t(u)\right|_{u=0}\,, \end{align*} with $J=\pm1$. 
Identifying \begin{equation} \label{eqellFZ} \ell_{FZ} = g_{2}(n - 2\ell t^2)\, \end{equation} these Hamiltonians coincide, apart from a constant shift, with the generic $n$-state clock model (\ref{eqHamFZ}) or its images (i.e.\ up to reordering of the basis) under the transformation $U^{\nu_t}$, Eq.\ (\ref{inv-nut}), i.e.\ \begin{align*} \mathcal{H}^{FZ}_{(n)}(\ell_{FZ},J) & = \sum_{j=1}^{L} \left\{ \sum_{k=1}^{n-1} \frac{J}{\sin\left(\frac{\pi k\ell_{FZ}}{n}\right)} \left[X_{j}^{kt} + Z_{j}^{kt}Z_{j+1}^{-kt}\right]\right\} \\ & \quad + L\, J \left\{\sum_{k=1}^{\frac{n-1}{2}} \left[\frac{\sin\left(\frac{\pi \ell_{FZ}}{2n}\right)}{\cos\left(\frac{\pi k \ell_{FZ}}{n}\right)\cos\left(\frac{\pi (2k-1) \ell_{FZ}}{2n}\right)} - \frac{2}{\sin\left(\frac{\pi k\ell_{FZ}}{n}\right)}\right] \right\} \mathbb{I}\,. \end{align*} (Note that the operators $Z_j$ depend explicitly on the root of unity parameter $\ell$.) The $R$-matrix (\ref{eqRmatFZ}) is actually the uniform square limit of the Fateev--Zamolodchikov $R$-matrix with a general root of unity (parametrized by $\ell_{FZ}$), rather than the one presented originally \cite{FaZa82}. It is also a self-dual case of the chiral Potts $R$-matrix \cite{BaPA88}. This allows us to make use of the functional relations from Ref.~\cite{BaSt90} to express the transfer matrix eigenvalues in terms of the $d=(n-1)L$ roots $\{u_j\}$ (some of which may be at $\pm\infty$) to the Bethe equations \begin{equation} \label{baeFZ} \begin{aligned} \left(i\frac{\sinh\left(u_{j}+\frac{i\pi \ell_{FZ}}{4n}\right)}{\sinh\left(u_{j}-\frac{i\pi \ell_{FZ}}{4n}\right)}\right)^{2L} & = -\prod_{k=1}^{d} \left(\frac{\sinh\left(u_{j}-u_{k}+\frac{i\pi}{2} -\frac{i\pi\ell_{FZ}}{2n}\right)}{\sinh\left(u_{j}-u_{k} -\frac{i\pi}{2}+\frac{i\pi \ell_{FZ}}{2n}\right)}\right) \,. 
\end{aligned} \end{equation} In terms of the Bethe roots the energy and momentum of the corresponding state are given as \begin{equation} \label{specFZ} \begin{aligned} E & = iJ\left\{ \sum_{j=1}^{d} \frac{\cosh(u_{j}-\frac{i\pi \ell_{FZ}}{4n})}{\sinh(u_{j}-\frac{i\pi \ell_{FZ}}{4n})} \right\}, \\ P & = \mbox{Re}\left[ \frac{2}{i} \sum_{j=1}^{d} \ln\left[\sinh\left(-u_{j}+\frac{i\pi \ell_{FZ}}{4n}\right)\right]\right] + \mbox{const}. \end{aligned} \end{equation} Clearly, the dynamics of the integrable Hamiltonians depends only upon the triple of parameters $(n,\ell_{FZ},J)$. In Section~\ref{sec:PhasePortraits} and Appendix~\ref{app:thermo} below we shall classify the Bethe root configurations and discuss the low energy properties of the integrable clock models in greater detail, also see Ref.~\cite{Albe92} for $\ell_{FZ}=1$. Here we note that, for $n=3$, the Temperley--Lieb and Fateev--Zamolodchikov integrable points coincide. \section{The $so(n)_{2}$ anyon chains} \subsection{The general model} The $so(n)_{2}$ fusion category consists of objects $\mathcal{I}=\{\epsilon_{\pm},\sigma_{\pm},\phi_{1},\dots,\phi_{p}\}$, $n=2p+1$. In contrast to the discussion in Appendix~\ref{app:fuscat} we do not distinguish between an object and its label. The fusion rules for this category are \begin{equation} \label{son-fusrules} \begin{aligned} \epsilon_{a} \otimes \epsilon_{b} & \cong \epsilon_{ab}\,, \qquad \epsilon_{a} \otimes \sigma_{b} \cong \sigma_{ab} \,, \qquad \epsilon_{a} \otimes \phi_{i} \cong \phi_{i}\,, \\ \sigma_{a} \otimes \sigma_{b} & \cong \epsilon_{ab} \oplus \bigoplus_{i=1}^{\frac{n-1}{2}} \phi_{i}\,, \qquad \sigma_{a} \otimes \phi_{i} \cong \sigma_{+} \oplus \sigma_{-}\,, \\ \phi_{i} \otimes \phi_{j} & \cong \begin{cases} \epsilon_{+} \oplus \epsilon_{-} \oplus \phi_{g_{1}(2i)} & i=j \\ \phi_{g_{1}(i-j)} \oplus \phi_{g_{1}(i+j)} & i\ne j \end{cases}\,, \end{aligned} \end{equation} where $a,b\in\{+,-\}$, $i,j\in\{1,\dots,\tfrac{n-1}{2}\}$. 
We see that $\epsilon_{+}$ is the identity object. Compatible $F$- and $R$-moves have been constructed by Ardonne \emph{et al.} \cite{ArFT16}. It was found that there exists a family of gauge inequivalent $F$-moves labelled by the pairs $(\ell,\kappa)$ where $\kappa=\pm1$ and $1\leq \ell \leq n$ with $\gcd(\ell,2n)=1$. One can extract the parameters from the $F$-moves by considering the quantities \begin{align*} \kappa & = \sqrt{n}\, \left(F^{\sigma_{+}\sigma_{+}\sigma_{+}}_{\sigma_{+}}\right)^{\epsilon_{+}}_{\epsilon_{+}} \left(F^{\sigma_{+}\epsilon_{+}\sigma_{+}}_{\epsilon_{+}}\right)^{\sigma_{+}}_{\sigma_{+}}, & \ell & = \frac{n}{\pi} {\arccos}\left(\sqrt{\frac{\sqrt{n}}{2\kappa}\left(F^{\sigma_{+}\sigma_{+}\sigma_{+}}_{\sigma_{+}}\right)^{\phi_{1}}_{\phi_{1}} \left(F^{\sigma_{+}\phi_{1}\sigma_{+}}_{\phi_{1}}\right)^{\sigma_{+}}_{\sigma_{+}}}\right), \end{align*} where $\ell$ is the odd integer resulting from the expression. For each of these sets of $F$-moves we can construct a set of projection operators. We observe that every projection operator is independent of the choice of $\kappa$. Therefore, without loss of generality we set $\kappa=+1$ for the remainder of the paper and denote the corresponding $F$-moves and projection operators $F(\ell)$ and $p(\ell)$, respectively. The objects of the $so(n)_2$ fusion category can be grouped according to their quantum dimensions, i.e. the asymptotic contribution of a single such object to a large collection thereof: the sets $\{\epsilon_{\pm}\}$, $\{\phi_{i}\}_{i}$, and $\{\sigma_{\pm}\}$ contain particles of dimension $1$, $2$ and $\sqrt{n}$, respectively. Following the general prescription above it is possible to construct uniform chains of each of these particles. Among these the $\epsilon_{\pm}$-anyon chains are trivial with their Hamiltonians necessarily being proportional to the identity. 
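The quantum dimensions quoted above can be checked against the fusion rules (\ref{son-fusrules}): assigning dimension $1$ to $\epsilon_{\pm}$, $2$ to each $\phi_{i}$, and $\sqrt{n}$ to $\sigma_{\pm}$, the product of dimensions must equal the sum of dimensions over the fusion channels. A short consistency sketch (ours, for illustration):

```python
import math

def dims_consistent(n):
    """Check d(x)d(y) = sum of d over fusion channels for so(n)_2, n = 2p+1."""
    p = (n - 1) // 2
    d_eps, d_phi, d_sig = 1.0, 2.0, math.sqrt(n)
    checks = [
        # sigma_a x sigma_b = eps_ab + phi_1 + ... + phi_p
        math.isclose(d_sig * d_sig, d_eps + p * d_phi),
        # sigma_a x phi_i = sigma_+ + sigma_-
        math.isclose(d_sig * d_phi, 2 * d_sig),
        # phi_i x phi_i = eps_+ + eps_- + phi_{g1(2i)}
        math.isclose(d_phi * d_phi, 2 * d_eps + d_phi),
        # phi_i x phi_j = phi_{g1(i-j)} + phi_{g1(i+j)}  (i != j)
        math.isclose(d_phi * d_phi, 2 * d_phi),
    ]
    return all(checks)
```

In particular the first relation, $n = 1 + 2\cdot\tfrac{n-1}{2}$, is what forces $d_{\sigma_{\pm}}=\sqrt{n}$.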
The $\phi_{i}$-anyon chains, on the other hand, can be mapped to the XXZ model as the subcategory with objects $\{\epsilon_{\pm}, \phi_{i}\}_{i}$ is isomorphic to the category of representations of the group algebra $\mathbb{C} D_{n}$. This leaves the $\sigma_{\pm}$-anyon chains. It will become apparent below that there is a mapping between the $\sigma_{+}$- and $\sigma_{-}$-anyon chains. Therefore we concern ourselves only with the former. The Hilbert space of the $\sigma_{+}$-anyon chain is spanned by the vectors in the set $\mathcal{B}_L^{(\sigma_{+})}$ (\ref{anybasis}). An equivalent definition of the set $\mathcal{B}_L^{(\sigma_{+})}$ is the set of all closed walks of length $2L$ on the (undirected) graph displayed in Fig.~\ref{fig:adjgraph}. \begin{figure}[t] \begin{center}{\tiny \begin{tikzpicture}[scale=0.8] \tikzstyle{every node}=[circle,draw,thin,fill=blue!40,minimum size=14pt,inner sep=0pt] \tikzstyle{every loop}=[] \draw[loosely dotted,very thick] (0,1.2) -- (0,-2.0); \node (nep) at (-4, 0) {$\epsilon_{+}$}; \node (nen) at ( 4, 0) {$\epsilon_{-}$}; \node (nsp) at (-2, 0) {$\sigma_{+}$}; \node (nsn) at ( 2, 0) {$\sigma_{-}$}; \node (np1) at ( 0, 2) {$\phi_{1}$}; \node (np2) at ( 0, 1.2) {$\phi_{2}$}; \node (npp) at ( 0, -2) {$\phi_{p}$}; \foreach \from/\to in {nep/nsp,nen/nsn,nsp/np1,np1/nsn,nsp/np2,np2/nsn,nsp/npp,npp/nsn} \draw[thick] (\from) -- (\to); \end{tikzpicture}} \end{center} \caption{Each state $\ket{\bf{a}}$ belonging to the set $\mathcal{B}_L^{(\sigma_{+})}$ can be interpreted as a closed walk of length $2L$ on this graph. 
This is analogous to known models defined by walks on Dynkin diagrams \cite{Pasq87a}.\label{fig:adjgraph}} \end{figure} The generic one-dimensional nearest-neighbour Hamiltonian for these $so(n)_2$ $\sigma_+$-anyons associated with the $F(\ell)$-moves is defined as \begin{align} \label{eqHamSO} \mathcal{H}_{(n,\ell)}({\boldsymbol\alpha}) & = \sum_{i=1}^{2L} \left[\alpha_{\epsilon_{+}} p_{i}^{(\epsilon_{+})}(\ell) + \sum_{k=1}^{\frac{n-1}{2}} \alpha_{\phi_{k}}\, p_{i}^{(\phi_{k})}(\ell)\right]. \end{align} Without affecting the dynamics, one can fix $\alpha_{\epsilon_{+}}$ and have the remaining coupling constants $\alpha_{\phi_{k}}$ as coordinates on the surface of a $\tfrac{n-1}{2}$-sphere. \subsection{Symmetries and equivalent $so(n)_{2}$ models} As stated in Appendix~\ref{app:fuscat}, global symmetries and maps between equivalent anyon models can be inferred from the automorphisms of the fusion rules and the associated monoidal equivalences between $F$-moves. Along with the $F$-moves, Ardonne \emph{et al.} \cite{ArFT16} also discussed the monoidal and gauge equivalences of $so(n)_{2}$. The automorphisms of $so(n)_{2}$ fusion rules are given by the sets \begin{align*} \left\{ \nu_{\pm} \circ \nu_{t}\, |\, 1 \leq t \leq n, \, \gcd(t,2n)=1 \right\}\,. \end{align*} The non-trivial actions of $\nu_{\pm}$ and $\nu_{t}$ are \begin{equation} \label{maps-son} \begin{aligned} \nu_{a}(\sigma_{b}) & = \sigma_{ab}\,, \quad a,b\in\{+,-\}\,,\\ \nu_{t}(\phi_{i}) & = \phi_{g_{1}(ti)}\,, \quad i\in\{1,\dots,\tfrac{n-1}{2}\}\,, \end{aligned} \end{equation} where $g_{1}$ is the map defined in Eq.~(\ref{eqggmap}). The $F$-moves labelled $(\ell,\kappa)$ are monoidally related to themselves via the map $\nu_{-}$. Thus there is a one-to-one equivalence between $\sigma_{+}$-anyon and $\sigma_{-}$-anyon chains, as asserted above. The automorphism $\nu_{t}$ relates $F({\ell})$ to $F({\ell'})$ if and only if $\ell'=g_{2}(t^{2}\ell)$ where $g_{2}$ is the map defined in Eq.~(\ref{eqggmap}). 
Thus we are able to completely determine the mappings between Hamiltonians and the internal symmetries of the Hamiltonians. However, since the number of monoidally inequivalent $F$-moves, and consequently inequivalent Hamiltonians, depends on $n$, not much can be said about the mappings and symmetries in general. Nevertheless, three interesting observations can be made: \begin{enumerate} \item If $\nu_{t}$ gives rise to an internal symmetry of the parameter spaces of $\mathcal{H}_{(n,\ell)}$, it must also give rise to an internal symmetry of the parameter spaces of $\mathcal{H}_{(n,\ell')}$, independent of $\ell'$: this follows from the invertibility of $\ell$ and $\ell'$ mod $2n$. \item The internal symmetries of the parameter spaces are $\mathbf{Z}_{2}$ symmetries: if $\nu_{t\neq1}$ gives rise to an internal symmetry of the parameter spaces of $\mathcal{H}_{(n,\ell)}$ then it follows that $g_{2}(t^{2})=1$, $g_{1}(t^{2})=1$ and lastly $\nu_{t}\circ\nu_{t}=\mbox{id}$. \item The total area of inequivalent points for the $so(n)_{2}$ Hamiltonians $\mathcal{H}_{(n,\ell)}$ equals the surface of a $\tfrac{n-1}{2}$-sphere: there are the same number of Hamiltonians $\mathcal{H}_{(n,\ell)}$ as automorphisms $\nu_{t}$ and every $\nu_{t}$ is unique. \end{enumerate} These observations allow us to make inferences. For instance, for $so(5)_{2}$, the two Hamiltonians, $\mathcal{H}_{(5,1)}$ and $\mathcal{H}_{(5,3)}$, will either be equivalent or the parameter spaces will have a $\mathbf{Z}_{2}$ symmetry; the latter is found to be true (see Figure \ref{fig:monrel}). 
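The relation $\ell' = g_{2}(t^{2}\ell)$ can be enumerated directly. The sketch below (helper names are ours) reproduces the structure shown in Fig.~\ref{fig:monrel}: for $n=5$ every $F(\ell)$ is fixed by $\nu_{3}$, while for $n=7$ the three sets of $F$-moves lie on a single orbit.

```python
from math import gcd

def g2(i, n):
    # representative of {+i, -i} mod 2n lying in {0, ..., n-1}
    r = i % (2 * n)
    return min(r, 2 * n - r)

def nu_orbits(n):
    """Return {(t, ell): ell'} with ell' = g2(t^2 * ell) for admissible labels."""
    labels = [l for l in range(1, n + 1) if gcd(l, 2 * n) == 1]
    return {(t, l): g2(t * t * l, n) for t in labels for l in labels}

# n = 5: nu_3 fixes both F(1) and F(3);
# n = 7: nu_3 cycles F(1) -> F(5) -> F(3) -> F(1)
```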
\begin{figure}[t] \begin{center} \begin{tikzpicture} \tikzstyle{every loop}=[] % \node (F21) at ( 0, 0) {$F({1})$}; \node (F23) at ( 2, 0) {$F({3})$}; \draw [thick, ->, loop,looseness=5] (F21) to node [above] {\small $\nu_{3}$} (F21); \draw [thick, ->, loop,looseness=5] (F23) to node [above] {\small $\nu_{3}$} (F23); \node (F21) at ( 6, -0.73) {$F({1})$}; \node (F23) at ( 7.5, 1.85) {$F({3})$}; \node (F25) at ( 9, -0.73) {$F({5})$}; \draw [thick, ->, transform canvas={yshift=0.1cm}] (F21) to node [above] {\small $\nu_{3}$} (F25); \draw [thick, ->, transform canvas={xshift=-0.1cm}] (F25) to node [left, transform canvas={xshift=0.1cm,yshift=-0.2cm}] {\small $\nu_{3}$} (F23); \draw [thick, ->, transform canvas={xshift=0.1cm}] (F23) to node [right, transform canvas={xshift=-0.1cm,yshift=-0.2cm}] {\small $\nu_{3}$} (F21); \draw [thick, ->, transform canvas={xshift=-0.1cm}] (F21) to node [left] {\small $\nu_{5}$} (F23); \draw [thick, ->, transform canvas={xshift=0.1cm}] (F23) to node [right] {\small $\nu_{5}$} (F25); \draw [thick, ->, transform canvas={yshift=-0.1cm}] (F25) to node [below] {\small $\nu_{5}$} (F21); \end{tikzpicture} \end{center} \caption{One can draw directed lines between the sets of $F$-moves to indicate one set of $F$-moves is related/mapped to another via an automorphism $\nu_{t\neq1}$. For $so(5)_{2}$ (left) we see that there are two sets of monoidally equivalent $F$-moves indicating $\mathcal{H}_{(5,1)}(\boldsymbol{\alpha})$ and $\mathcal{H}_{(5,3)}(\boldsymbol{\alpha})$ are inequivalent and their parameter spaces must possess a $\mathbf{Z}_{2}$ symmetry. For $so(7)_{2}$ (right) we see that the $F$-moves are monoidally equivalent and thus $\mathcal{H}_{(7,\ell)}(\boldsymbol{\alpha})$, $\ell=1,3,5$ are equivalent. This is consistent with the relationships shown in Fig. \ref{fig:hamrel}. 
\label{fig:monrel}} \end{figure} In contrast, for $so(7)_{2}$, the three Hamiltonians $\mathcal{H}_{(7,\ell)}$, $\ell=1,3,5$, must all be equivalent as there is no other possibility of having the total area of inequivalent points equalling the surface of a sphere. \subsection{Integrability} \subsubsection*{The general face model formulation} In the vertex language, solutions to the Yang--Baxter equation are often associated with quasi-triangular Hopf algebras, typically quantum groups. In a similar fashion, in the face (or anyon) language one often identifies one or more solutions to the Yang--Baxter equation associated with a braided fusion category. Using the basis defined earlier we define an $R$-matrix to be of the form \begin{align*} \bra{\bf{a}'} R_{i}(u) \ket{\bf{a}} & = \Wop[a_{i-1}][a_{i}'][a_{i}][a_{i+1}][u] \prod_{k\neq i} \delta_{a_{k}}^{a_{k}'}, \quad \hbox{where}\quad \ket{\bf{a}}=\ket{a_1,a_2,\cdots,a_{2L}} \in\mathcal{B}_L^{(j)}\,, \end{align*} and must satisfy the face Yang--Baxter equation \begin{align*} R_{i}(u)R_{i+1}(u+v)R_{i}(v) & = R_{i+1}(v)R_{i}(u+v)R_{i+1}(u). \end{align*} The SOS transfer matrix $t(u)$ is then defined by \begin{align*} \bra{\bf{a}'} t(u) \ket{\bf{a}} & = \prod_{i=1}^{2L} \Wop[a_{i-1}'][a_{i}'][a_{i-1}][a_{i}][u-u_{i}]\,, \quad \ket{\bf{a}},\ket{\bf{a}'}\in \mathcal{B}_L^{(j)}. \end{align*} This can be used to construct an integrable quantum Hamiltonian \begin{align*} \mathcal{H} & \propto \left. \frac{d}{du} \ln(t(u)) \right|_{u=0}. \end{align*} In the case that the $R$-matrix satisfies the regularity condition, i.e. $R(0)\propto I$, the Hamiltonian will describe a periodic chain of interacting particles with nearest-neighbour interactions. \subsubsection*{The $so(n)_{2}$ integrable points} In this section we identify pairs of integrable points associated with solutions to the Yang--Baxter equation. 
The solutions of the Yang--Baxter equation come in two types, either Temperley--Lieb $R$-matrices or $\mathbf{Z}_{n}$ Fateev--Zamolodchikov-like $R$-matrices. The Temperley--Lieb $R$-matrices are defined as \begin{align*} R_{i}(u) & = \frac{\sinh(\gamma-u)}{\sinh(\gamma+u)} p_{i}^{(\epsilon_{+})} + \left[ \sum_{k=1}^{\frac{n-1}{2}} p_{i}^{(\phi_{k})} \right] \end{align*} where $2\cosh(\gamma)=\sqrt{n}$. A simple rescaling of the projection operators onto the identity object, $E_i = \sqrt{n}\,p_i^{(\epsilon_{+})}$, leads to a representation of the Temperley--Lieb algebra (\ref{TLalgebra}). As a consequence the spectrum of the corresponding integrable points coincides with that of the XXZ spin chain (up to boundary conditions). Specifically, this implies that the anyon model at the Temperley--Lieb integrable points ${\boldsymbol\alpha}=\pm(1,0,\dots,0)$ will be gapped for $n>4$. These points are fixed points of any symmetry generated by an automorphism of the fusion rules. Moreover, at these two points the Hamiltonian $H_{(n,\ell)}({\boldsymbol\alpha})$ is the same for all allowed $\ell$. The second class of $R$-matrices we consider is connected to the $\mathbf{Z}_{n}$ Fateev--Zamolodchikov-like $R$-matrix (\ref{eqRmatFZ}). Analogously to the integrable points for the $\mathbf{Z}_{n}$ clock model, for the $F$-moves $F({\ell})$ and automorphism $\nu_{t}$ we define the parameter \begin{align*} \ell_{FZ} & = g_{2}(n - 2\ell t^{2})\,. 
\end{align*} The $R$-matrices are defined to be \begin{equation} \begin{aligned} R_{i}^{(\ell)}(u) = & \left[\prod_{k=1}^{\frac{n-1}{2}}\sinh\left(u-\frac{i\pi (1-2k)\ell_{FZ}}{2n}\right)\right]\\ & \times \left\{p_{i}^{(\epsilon_{+})}(\ell) + \sum_{j=1}^{\frac{n-1}{2}} \left[ \prod_{k=1}^{j} \frac{\sinh(u+\frac{i\pi (1-2k)\ell_{FZ}}{2n})}{\sinh(u-\frac{i\pi (1-2k)\ell_{FZ}}{2n})} \right] \, p_{i}^{\left(\phi_{g_{1}(2jt)}\right)}(\ell)\right\}\,. \end{aligned} \end{equation} These operators have been verified to satisfy the Yang--Baxter equation for $3\leq n \leq 9$ and are conjectured to do so for larger $n$. Their existence implies that the Hamiltonians (\ref{eqHamSO}) with the particular choice of coupling constants $\boldsymbol{\alpha}$ \begin{align*} \mathcal{H}_{(n,\ell)} & = -J\left\{ \sum_{i=1}^{2L} \sum_{j=1}^{\frac{n-1}{2}} 2\left[ \sum_{k=1}^{j} \frac{\cos\left(\frac{\pi(2k-1)\ell_{FZ}}{2n}\right)}{ \sin\left(\frac{\pi(2k-1)\ell_{FZ}}{2n}\right)} \right] p^{\left(\phi_{g_{1}(2jt)}\right)}(\ell)\right\} + 2JL\left\{\sum_{j=1}^{\frac{n-1}{2}} \frac{\cos\left(\frac{\pi(2j-1)\ell_{FZ}}{2n}\right)}{ \sin\left(\frac{\pi(2j-1)\ell_{FZ}}{2n}\right)} \right\} , \end{align*} are integrable, where $J=\pm1$. The Hamiltonian can be normalised to the form given above, but for our purposes that is unnecessary. Using the same approach as in \cite{FiFF14}, additional $R$-matrices labelled by $b\in\mathcal{I}$ can be constructed starting from the given set of $F$-moves. The resulting transfer matrices are mutually commuting and satisfy a set of fusion relations. Within this construction, the topological charges $Y_b$, defined in Eq.~(\ref{Ytopo}) for the general anyon model, are obtained from these transfer matrices in the braiding limit $u\to\infty$. 
Analyticity arguments imply that the transfer matrix eigenvalues can be parametrized by a set of complex numbers $\{u_j\}$ solving the Bethe equations \begin{align} \label{baeSO} \left(i\frac{\sinh\left(u_{j}+\frac{i\pi \ell_{FZ}}{4n}\right)}{\sinh\left(u_{j}-\frac{i\pi \ell_{FZ}}{4n}\right)}\right)^{2L} & = -s \prod_{k=1}^{d} \left(\frac{\sinh\left(u_{j}-u_{k}+\frac{i\pi}{2}-\frac{i\pi \ell_{FZ}}{2n}\right)}{\sinh\left(u_{j}-u_{k}-\frac{i\pi}{2}+\frac{i\pi \ell_{FZ}}{2n}\right)}\right) \end{align} where $s=\pm1$ is the eigenvalue of the topological charge $Y_{\epsilon_-}$ \cite{FiFF14}. We see that the Bethe equations coincide with those for the Fateev--Zamolodchikov $\mathbf{Z}_{n}$ clock model integrable points, Eq.~(\ref{baeFZ}), up to a twist in the boundary conditions in the sector with $s=-1$. This points to a close connection, and we expect that the two different models share dynamics at these integrable points once the topological sectors of the anyon chain are identified with suitable boundary conditions for the clock model. The energy and momentum are given by \begin{equation} \label{specSO} \begin{aligned} E & = iJ\left\{ \sum_{j=1}^{d} \frac{\cosh(u_{j}-\frac{i\pi \ell_{FZ}}{4n})}{\sinh(u_{j}-\frac{i\pi \ell_{FZ}}{4n})} \right\}, \\ P & = \mbox{Re}\left[ \frac{1}{i} \sum_{j=1}^{d} \ln\left[\sinh(-u_{j}+\frac{i\pi \ell_{FZ}}{4n})\right]\right] + \mbox{const}. \end{aligned} \end{equation} Note that the momentum for the anyon chain differs from that of the $\mathbf{Z}_{n}$ models due to the factor of two difference between the chain lengths of the two models. \subsection{Mapping between anyon and clock models} The analysis of the symmetries and structures underlying their integrable points above has revealed striking similarities between the $so(n)_2$ anyon chains and the $\mathbf{Z}_n$ clock models -- in spite of the very different Hilbert spaces on which they are defined. 
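As a sanity check of the spectrum formulas (\ref{specFZ}) and (\ref{specSO}), the energy $E = iJ\sum_j \coth(u_j - i\pi\ell_{FZ}/4n)$ can be evaluated for a trial root configuration; for any set of real roots symmetric under $u\to-u$ the result is manifestly real, since the contributions of a pair $\{u,-u\}$ are complex conjugates up to the overall factor of $i$. A minimal sketch (the roots below are illustrative, not actual solutions of the Bethe equations):

```python
import cmath

def energy(roots, n, ell_fz, J=1):
    """E = iJ * sum_j coth(u_j - i*pi*ell_fz/(4n)), cf. Eqs. (specFZ)/(specSO)."""
    shift = 1j * cmath.pi * ell_fz / (4 * n)
    coth = lambda z: cmath.cosh(z) / cmath.sinh(z)
    return 1j * J * sum(coth(u - shift) for u in roots)

# symmetric trial configuration -> imaginary part of E vanishes
E = energy([-1.3, -0.4, 0.4, 1.3], n=5, ell_fz=3)
```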
Here we put forward a more general relationship between the $\mathbf{Z}_{n}$ clock models for given parameter $\ell$ and chain length $L$ and the $so(n)_{2}$ anyon chain, built from the $F({\ell})$-moves with chain length $2L$. Specifically, we claim that when \begin{equation} \label{so2fz} \begin{aligned} c_{0} & = \frac{2}{n}\alpha_{\epsilon_{+}} + \frac{4}{n} \sum_{k=1}^{\frac{n-1}{2}} \alpha_{\phi_{k}}\,, \\ c_{j} & = \frac{1}{n}\alpha_{\epsilon_{+}} + \frac{2}{n} \sum_{k=1}^{\frac{n-1}{2}} \cos\left(\frac{2\ell jk\pi}{n}\right)\alpha_{\phi_{k}}\,, \qquad\mbox{for}\quad 1\leq j \leq \frac{n-1}{2}\,, \end{aligned} \end{equation} the two Hamiltonians $\mathcal{H}_{(n,\ell)}(\boldsymbol{c})$ and $\mathcal{H}_{(n,\ell)}(\boldsymbol{\alpha})$, defined by Equations (\ref{eqHamFZ}) and (\ref{eqHamSO}) respectively, share some energy levels (although their degeneracies may differ). We have checked this conjecture numerically for small system sizes and found that, depending on the chain lengths (e.g.\ for $L$ being a multiple of $4$), this includes the ground state. This relationship between models, along with the underlying algebraic structure of the anyon model, suggests the existence of a face-vertex (or anyon-spin) correspondence \cite{Pasq88,Finch13}. \section{Phase portraits and low energy effective theories} \label{sec:PhasePortraits} In the following we study the phase diagrams of the $\mathbf{Z}_n$ clock and $so(n)_2$ anyon models for $n=3,5,7$ based on variational matrix product states (MPS) representing translationally invariant states of the chains in the thermodynamic limit, diagonalization of the Hamiltonian for small lattice sizes $L$ using the Lanczos algorithm, and -- for the integrable points -- numerical solution of the Bethe equations (\ref{baeFZ}) and (\ref{baeSO}) for chains of a few hundred sites. 
For some of these models we are able to identify the conformal field theories describing the continuum limit based on the finite size scaling of the ground state and low lying excitations, i.e.\ \cite{BlCN86,Affl86} \begin{equation} \label{fscft} \begin{aligned} E_0(L) &= L\epsilon_\infty - \frac{\pi}{6L} v^{(F)} c\,,\\ E_n(L) &= E_0(L) + \frac{2\pi}{L} v^{(F)} X_n\,,& P_n(L) &= P_0(L) + \frac{2\pi}{L} s_n + \mathrm{const.} \end{aligned} \end{equation} where $c$ is the central charge of the underlying Virasoro algebra and $v^{(F)}$ is the velocity of massless excitations. From the spectrum of scaling dimensions $X_n=\left( h+\bar{h} \right)$ and conformal spins $s_n=\left( h-\bar{h} \right)$ the operator content, i.e.\ primary fields with conformal weights $(h,\bar{h})$, can be determined. \subsection{The $\mathbf{Z}_{3}$ and $so(3)_{2}$ models} The spectrum of low energy excitations of the $\mathbf{Z}_{3}$ clock model or, equivalently, the self-dual three-state critical Potts model, has been studied in Ref.~\cite{AlDM92,AlDM93}. Its partition function has been computed both for the antiferromagnetic and the ferromagnetic case and found to reproduce the modular invariant partition function of a $\mathbf{Z}_4$ parafermion model with $c=1$, and that of the three-state Potts model (the minimal model $\mathcal{M}_{(5,6)}$) with central charge $c=\frac{4}{5}$, respectively \cite{KeMc93,DKMM94}. The $so(3)_{2}$ chain on the other hand is equivalent to a chain of $su(2)_4$ anyons. This is a deformation of the spin-$1/2$ Heisenberg model \cite{FTLT07}, more commonly known as the $A_{5}$ restricted solid-on-solid model \cite{AnBF84,Pasq87a} which is another formulation of the three-state Potts model. To illustrate the equivalence between the $\mathbf{Z}_{3}$ clock and $so(3)_{2}$ anyon models with coupling constants related by Eq.~(\ref{so2fz}) we plot the spectra of their low energy excitations in Fig.~\ref{fig:n3_spec}. 
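In practice the finite-size formulas (\ref{fscft}) are used as follows: from ground state energies at two system sizes one can eliminate $\epsilon_\infty$ and solve for the product $v^{(F)}c$. Real extrapolations use many sizes; the two-point sketch below, on synthetic data, is only illustrative:

```python
import math

def extract_c(L1, E1, L2, E2, v):
    """Solve E0(L) = L*eps_inf - (pi/(6L))*v*c for c, given two system sizes."""
    # linear system in (eps_inf, A) with A = pi*v*c/6:
    #   E1 = L1*eps_inf - A/L1 ,  E2 = L2*eps_inf - A/L2
    det = (L2**2 - L1**2) / (L1 * L2)
    A = (L1 * E2 - L2 * E1) / det
    return 6 * A / (math.pi * v)

# synthetic data generated with eps_inf = -1, v = 1.5, c = 0.8
A = math.pi * 1.5 * 0.8 / 6
c = extract_c(12, -12 - A / 12, 16, -16 - A / 16, v=1.5)   # recovers c = 0.8
```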
\begin{figure}[t] \centering (a) \includegraphics[width=.45\linewidth]{n3_fsspec} \hspace{\fill} (b) \includegraphics[width=.45\linewidth]{n3m_fsspec} \caption{(a) Finite size spectra of the integrable $(3,1,+1)$ $\mathbf{Z}_3$ clock model ($\times$) for system sizes $L=13$, $14$, $15$ and $16$ (resp.\ the $so(3)_2$ anyon model ($\bigcirc$) with $\widetilde{L}=2L$ sites). (b) Same for the $(3,1,-1)$ models. Energies are measured relative to the value obtained from the ground state energy density in the thermodynamic limit, momenta are scaled to the values taken in the anyon chain. The shaded area indicates the continuum of excitations in the thermodynamic limit.} \label{fig:n3_spec} \end{figure} For the antiferromagnetic model, i.e.\ $(n,\ell_{FZ},J)= (3,1,+1)$, we have identified the Bethe ansatz solutions corresponding to the lowest of the conformal weights of $\mathbf{Z}_4$ parafermions (\ref{cft_specZ4}), see Table~\ref{table:fsdata31p}. This parafermion CFT is realized by the $\mathbf{Z}_2$ orbifold of a $U(1)$ boson compactified on a circle of radius $R^2=3/2$ \cite{Ginsparg88}. We also note that this spectrum coincides with that of the $\mathcal{W}D_3(5,6)$ rational CFT. \begin{table}[t] \begin{ruledtabular} \begin{tabular}{ccccccc} & extrapolation & $s$ & conjecture & $\Delta_{(1,+)}$ & $\Delta_{(1,-)}$ & comment \tabularnewline \hline c & 1.000000 & & 1 & 0 & 0 & a,c \tabularnewline X & 0.125000 & 0 & $(\frac{1}{16},\frac{1}{16})$ & 0 & 0 & a \tabularnewline & 0.166667 & 0 & $(\frac{1}{12},\frac{1}{12})$ & $-1$ & $-1$ & a,c \tabularnewline & 0.666667 & 0 & $(\frac{1}{3},\frac{1}{3})$ & $-2$ & 0 & a,c \tabularnewline \end{tabular} \end{ruledtabular} \caption{\label{table:fsdata31p} Central charge $c$ and conformal spectrum of the $(3,1,+1)$ model: shown are the extrapolated values for $c$ and the conformal dimensions $X=h+\bar{h}$ together with the conformal spins $s=h-\bar{h}$ of the primaries as obtained from the Bethe ansatz solution. 
The conjectured values are the central charge and pairs of conformal weights $(h,\bar{h})$ for $\mathbf{Z}_4$ parafermions, Eq.~(\ref{cft_specZ4}). $\Delta_\gamma$ denotes the difference in the number of $\gamma$-patterns in the Bethe root configuration as compared to the ground state, see {Appendix~\ref{app:thermo}}. In the last column we indicate in which model (a: anyon chain, c: clock model) the corresponding level can be observed -- possibly subject to further constraints on the system size $L$. } \end{table} Similarly, for the ferromagnetic model, $(n,\ell_{FZ},J)=(3,1,-1)$, we have identified the Bethe ansatz solutions yielding the lowest of the conformal weights (\ref{cft_specPotts3}) of the minimal model $\mathcal{M}_{(5,6)}$, see Table~\ref{table:fsdata31m}. \begin{table}[t] \begin{ruledtabular} \begin{tabular}{cccccccc} & extrapolation & $s$ & conjecture & $\Delta_{(1,+)}$ & $\Delta_{(1,-)}$ & $\Delta_{(2,+)}$ & comment \tabularnewline \hline c & 0.800000(1) & & $\frac45$ & 0 & 0 & 0 & a,c \tabularnewline X & 0.050003(1) & 0 & $(\frac{1}{40},\frac{1}{40})$ & 0 & 0 & 0 & a \tabularnewline & 0.133337(1) & 0 & $(\frac{1}{15},\frac{1}{15})$ & 0 & 0 & $-1$ & a,c \tabularnewline & 0.250000(1) & 0 & $(\frac{1}{8},\frac{1}{8})$ & $2$ & 0 & $-1$ & a \tabularnewline & 0.801(1) & 0 & $(\frac{2}{5},\frac{2}{5})$ & - & - & - & a,c\tabularnewline \end{tabular} \end{ruledtabular} \caption{\label{table:fsdata31m} Same as Table~\ref{table:fsdata31p} but for the $(3,1,-1)$ model. The conjectured values for the conformal weights are those from the critical three-state Potts model (\ref{cft_specPotts3}). The changes $\Delta_\gamma$ of root patterns have been omitted for the last excitation as finite size deformations obscure the configuration of the thermodynamic limit. 
} \end{table} We note that the conformal weights of the twist operators, $h\in\{\frac{1}{40}, \frac18, \frac{21}{40}, \frac{13}{8}\}$, corresponding to disorder operators in the Potts model are observed only in the low energy spectrum of the anyon model. They are absent in the clock model with periodic boundary conditions as considered here but do appear in the spectrum of the $\mathbf{Z}_3$ clock model for twisted boundary conditions \cite{Card86b}. \subsection{The $\mathbf{Z}_{5}$ and $so(5)_{2}$ models} The $so(5)_{2}$ model is the first model in which the parameter space is not a single point. As $F({1})$ and $F({3})$ are not monoidally equivalent, there will be two distinct models, each with a global $\mathbf{Z}_{2}$ symmetry in the Hamiltonian. Instead of using the couplings $\boldsymbol{\alpha}$ we switch to polar coordinates with fixed radius. This gives the general Hamiltonians \begin{equation} \label{hamil5} \mathcal{H}_{(5,\ell)}(\theta) = \sum_{j=1}^{L} \left[ \cos\left(\theta+\frac{\pi}{4}\right) \, p_{j}^{(\phi_{1})}(\ell) + \sin\left(\theta+\frac{\pi}{4}\right) \, p_{j}^{(\phi_{2})}(\ell)\right] \end{equation} where $\ell=1,3$ and $\theta \in [0, 2\pi)$. The $\mathbf{Z}_{2}$ symmetry manifests itself as \begin{equation} \mathcal{H}_{(5,\ell)}(-\theta) = U^{-1}\mathcal{H}_{(5,\ell)}(\theta)U\,. \end{equation} At the TL points $\theta_{TL}=0,\pi$, i.e.\ the fixed points of this symmetry, the Hamiltonians coincide. In previous work on the model with $\ell=3$ \cite{Finch.etal14,FiFF14} the TL equivalence to the XXZ spin chain has been employed to conclude that the excitations at $\theta=0$ are massive with a tiny energy gap $\Delta E\simeq 2.9\times10^{-4}$. This gap is difficult to resolve numerically, but based on continuity arguments it is expected that this gapped phase extends to small finite values of $\theta$. Similarly, the spectrum at $\theta=\pi$ is highly degenerate indicating a first order transition. 
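The polar parametrization in Eq.~(\ref{hamil5}) is easily inverted; the sketch below (the helper name is ours) also makes the $\mathbf{Z}_{2}$ symmetry explicit: exchanging $\alpha_{\phi_1}\leftrightarrow\alpha_{\phi_2}$ sends $\theta\mapsto 2\pi-\theta$.

```python
import math

def theta_from_couplings(a1, a2):
    """Invert (a1, a2) = (cos(theta + pi/4), sin(theta + pi/4)), cf. Eq. (hamil5)."""
    return (math.atan2(a2, a1) - math.pi / 4) % (2 * math.pi)

# Temperley--Lieb points: equal couplings -> theta = 0, both negative -> theta = pi
t0 = theta_from_couplings(1 / math.sqrt(2), 1 / math.sqrt(2))
tpi = theta_from_couplings(-1 / math.sqrt(2), -1 / math.sqrt(2))
```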
When $\theta$ is varied, the models undergo a sequence of phase transitions. To identify the nature of the different phases we have used the \textsc{evomps} software package \cite{evomps} to compute variational matrix product states representing the ground state and the lowest excitation for a given momentum. In Fig.~\ref{fig:phasp_n5} the ground state expectation values of the local projection operators and the dispersion for the $so(5)_2$ anyon models are shown. \begin{figure}[t] \includegraphics[width=0.45\textwidth]{p_n5l1} \hspace*{\fill} \includegraphics[width=0.45\textwidth]{p_n5l3} \includegraphics[width=0.45\textwidth]{so52_l1_3d} \hspace*{\fill} \includegraphics[width=0.45\textwidth]{so52_l3_3d} \begin{tikzpicture}[scale=2, point/.style={draw,scale=0.7,thick,fill=white}, trans/.style={draw,scale=0.7,thick,fill=white,circle}, labell/.style={inner sep=7pt,left}, labelr/.style={inner sep=7pt,right} ] \draw[thick] (0,0) -- (0:1) arc (0:180:1) -- cycle; \draw[thick] (0,0) -- (180:1) arc (180:360:1) -- cycle; \draw[thick,fill=blue!20] (0,0) -- (36:1) arc (36:162:1) -- cycle; \draw[thick,fill=blue!20] (0,0) -- (198:1) arc (198:324:1) -- cycle; \draw[thick,fill=green!20] (0,0) -- (-2:1) arc (-2:2:1) -- cycle; \node at (120:0.6) {$c=\frac12+1$}; \node [trans] at (0,0) {}; \node [point] at (0:1) {}; \node [point] at (62.2:1) {}; \node [point] at (297.8:1) {}; \node [trans] at (36:1) {}; \node [trans] at (162:1) {}; \node [point] at (117.8:1) {}; \node [point] at (242.2:1) {}; \node [trans] at (180:1) {}; \node [trans] at (324:1) {}; \node [trans] at (198:1) {}; \node[inner sep=1.5pt] (a) at (345:0.70) {gapped}; \draw (0.7,-0.10) -- (0.9,0.00); \node [labelr] at (0:1) {$\theta=0$}; \node [labelr] at (62.2:1.05) {$(5,3,+1)$}; \node [labell] at (117.8:1.05) {$(5,3,-1)$}; \node [labell] at (180:1) {$\pi$}; \node [labell] at (242.2:1.05) {$(5,3,-1)$}; \node [labelr] at (297.8:1.05) {$(5,3,+1)$}; \end{tikzpicture} \hspace*{\fill} \begin{tikzpicture}[scale=2, 
point/.style={draw,scale=0.7,thick,fill=white}, trans/.style={draw,scale=0.7,thick,fill=white,circle}, labell/.style={inner sep=7pt,left}, labelr/.style={inner sep=7pt,right} ] \draw[thick] (0,0) -- (0:1) arc (0:180:1) -- cycle; \draw[thick] (0,0) -- (180:1) arc (180:360:1) -- cycle; \draw[thick,fill=blue!20] (0,0) -- (60:1) arc (60:180:1) -- cycle; \draw[thick,fill=blue!20] (0,0) -- (180:1) arc (180:300:1) -- cycle; \draw[thick,fill=green!20] (0,0) -- (-2:1) arc (-2:2:1) -- cycle; \node at (120:0.6) {$c=1$}; \node [trans] at (0,0) {}; \node [point] at (0:1) {}; \node [point] at (6:1) {}; \node [trans] at (60:1) {}; \node [point] at (174:1) {}; \node [trans] at (180:1) {}; \node [point] at (186:1) {}; \node [trans] at (300:1) {}; \node [point] at (354:1) {}; \node[inner sep=1.5pt] (a) at (345:0.70) {gapped}; \draw (0.7,-0.10) -- (0.9,0.00); \node [labelr] at (0:1) {$\theta=0$}; \node [labelr] at (6+5:1) {$(5,1,-1)$}; \node [labell] at (174-5:1) {$(5,1,+1)$}; \node [labell] at (180:1) {$\pi$}; \node [labell] at (186+5:1) {$(5,1,+1)$}; \node [labelr] at (354-5:1) {$(5,1,-1)$}; \end{tikzpicture} \caption{Ground state expectation values of the local projection operators (top row) and dispersion of lowest excitations (middle row) of the $so(5)_2$ anyon chain as a function of the coupling constant $\theta$ for $\ell=1$ (left column) and $\ell=3$ (right column). The symbols $\blacktriangle$ mark the position of the FZ integrable points, red dashed lines indicate the conjectured locations of phase transitions at $\theta\simeq\pi/5$, $9\pi/10$ for $\ell=1$ ($\theta\simeq\pi/3$ for $\ell=3$). The bottom row shows the proposed phase diagram for the $SO(5)_2$ anyon chains. The Bethe ansatz results indicate that there is a gapped region surrounding the Temperley-Lieb integrable point ($\theta=0$). 
\label{fig:phasp_n5}} \end{figure} From the numerical data the $\mathbf{Z}_2$ symmetry of the model under $\theta \leftrightarrow 2\pi-\theta$ with a simultaneous exchange of $\phi_1 \leftrightarrow \phi_2$ is clearly seen. In the data for the model with $\ell=1$ we observe a singular change of the expectation values and a change in the translational properties of the ground state reflected in the periodicity of the dispersion of lowest excitations at $\theta\simeq\pi/5$ and $9\pi/10$, which indicates the presence of phase transitions. The two FZ integrable points of $\mathcal{H}_{(5,\ell=1)}$ at $\theta=\eta'$ and $\pi-\eta'$ with $\eta'=\pi/4-\arctan\left(\frac{1-\sqrt{5}}{4}\right)$ are located between these transition points. We shall analyze the spectrum at these points in more detail below. The phase portrait of $\mathcal{H}_{(5,\ell=3)}$ has already been considered previously: a phase transition where the expectation values of the projection operators change in a singular way and the periodicity of the dispersion of low energy excitations switches to a different value is observed at $\theta\simeq \pi/3$ \cite{Finch.etal14}. The low energy effective theories at the $\ell=3$ FZ integrable points have been identified with rational conformal field theories based on extensions of the Virasoro algebra which respect the five-fold discrete symmetries of the anyon model \cite{FiFF14}, i.e.\ theories from the minimal series of Casimir-type $\mathcal{W}$-algebras associated with the Lie-algebras $B_2=SO(5)$, $D_5=SO(10)$, and $\mathcal{B}_{0,2}=OSp(1|4)$. \paragraph*{\underline{$(n,\ell_{FZ},J)=(5,1,-1)$.}} The low energy effective theory for the ferromagnetic $\mathbf{Z}_5$ clock model is the $\mathbf{Z}_5$ parafermion CFT \cite{ZaFa85,JiMO86}. 
In the finite size spectrum of the corresponding model of $so(5)_2$ anyons (i.e.\ $\theta=\eta$ with $\eta \equiv \pi/4-\arctan\left(\frac{1+\sqrt{5}}{4}\right)$) additional conformal weights appear implying that the continuum limit is described by a $\mathcal{W}B_2(5,7)$ rational CFT with the same central charge $c=\frac87$ and conformal weights (\ref{cft_specWB2-57}), see \cite{FiFF14}. \paragraph*{\underline{$(n,\ell_{FZ},J)=(5,1,+1)$.}} Similarly, the continuum limit of the \emph{anti}ferromagnetic integrable model $(n,\ell_{FZ},J)=(5,1,+1)$ corresponding to $\theta=\pi-\eta$ has been found to be described by a $\mathcal{W}D_5(9,10)$ rational CFT with $c=1$ and conformal weights (\ref{cft_specWD5-9-10}), equivalent to the $\mathbf{Z}_2$-orbifold of a Gaussian model with compactification radius $2R^2=5$. For the critical properties of the FZ integrable points $(n,\ell_{FZ},J)=(5,3,\pm1)$ we have analyzed the Bethe equations based on the root density formalism (see Appendix~\ref{app:thermo}). This approach provides the ground state energy density and Fermi velocities of low energy excitations in the thermodynamic limit. Based on these data the central charge and scaling dimensions can be extracted from the finite size spectra (\ref{fscft}). \paragraph*{\underline{$(n,\ell_{FZ},J)=(5,3,+1)$}.} From the analysis of the thermodynamic limit the Bethe root configuration for the ground state is known to consist of $3L/2$ $(2,+)$-strings and $L/2$ $(2,-)$ strings, see Table~\ref{table:thermo357}. The ground state energy density is $\epsilon_\infty= -6.61811809391087$ and the Fermi velocity of low lying excitations is $v^{(F)}=5/2$. The finite size scaling analysis (\ref{fscft}) of the spectrum of this model using data obtained from the solution of the Bethe ansatz gives a central charge $c=3/2$ and conformal weights as listed in Table~\ref{table:fsdata53p}. \begin{table}[t] \begin{ruledtabular} \begin{tabular}{ccccccccc} $X$ (extrap.) 
& $s$ & $(h,\bar{h})_1 + (h,\bar{h})_2$ & $\Delta_{(1,+)}$ & $\Delta_{(1,-)}$ & $\Delta_{(2,+)}$ & $\Delta_{(2,-)}$ & $\Delta_{(1,m)}$ & comment \tabularnewline \hline 0.125000 & 0 & $(0,0) + (\frac{1}{16},\frac{1}{16})$ & 0 & $2$ & 0 & $-1$ & 0 & a \tabularnewline 0.175000 & 0 & $(\frac{1}{16},\frac{1}{16}) + (\frac{1}{40},\frac{1}{40})$ & 1 & 1 & $-1$ & $-1$ & 0 & a,c \tabularnewline 0.200000 & 0 & $(0,0) + (\frac{1}{10},\frac{1}{10})$ & 0 & 0 & $-1$ & $-1$ & 0 & a,c \tabularnewline 0.250000 & 0 & $(\frac{1}{16},\frac{1}{16})+(\frac{1}{16},\frac{1}{16})$ & 0 & 0 & 0 & 0 & 0 & a \tabularnewline 0.575000 & 0 & $(\frac{1}{16},\frac{1}{16}) + (\frac{9}{40},\frac{9}{40})$ & 1 & 3 & $-2$ & $-2$ & 0 & a,c \tabularnewline 0.800000 & 0 & $(0,0) + (\frac{2}{5},\frac{2}{5})$ & 2 & 0 & $-2$ & 0 & 0 & a,c \tabularnewline 1.000000 & 1 & descendant & 1 & 1 & $-1$ & $-1$ & 1 & a,c \tabularnewline 1.000000 & 0 & $(\frac{1}{2},\frac{1}{2}) + (0,0)$ & 0 & 0 & $-1$ & $-1$ & 2 & a,c \tabularnewline 1.125000 & 0 & $(0,0) + (\frac{9}{16},\frac{9}{16})$ & 0 & 2 & $-1$ & $-2$ & 2 & a \tabularnewline \end{tabular} \end{ruledtabular} \caption{\label{table:fsdata53p} Finite size data for the effective scaling dimensions from the solution of the Bethe equations for the low lying excitations of the $(n,\ell_{FZ},J)=(5,3,+1)$ model. The conjectured weights $(h,\bar{h})_1+(h,\bar{h})_2$ are based on our proposal of a low energy theory with two critical factors, namely an Ising model ($c_1=\frac12$) and a rational CFT with central charge $c_2=1$. In the last column we indicate in which model (a: anyon chain, c: clock model) the corresponding level can be observed -- possibly subject to further constraints on the system size $L$. In addition to the levels listed there is a spin $\frac{1}{2}$ state in the anyon chain with scaling dimension $\frac{23}{40}<X=\sum h_i+\bar{h}_i <\frac{4}{5}$. 
Unfortunately, we have not been able to identify its Bethe root configuration and have no numerical data for sufficiently large system sizes allowing for a finite size extrapolation. Based on our proposal for the underlying CFT we conjecture that this missing level corresponds to the product of the identity in the Ising sector and a primary field with conformal weights $(h,\bar{h})_2 = (\frac{9}{16},\frac{1}{16})$ in sector $2$. } \end{table} Based on the observed spectrum we conjecture that the critical theory is a product of an Ising model ($c_1=\frac12$) and a rational CFT with central charge $c_2=1$. We have found conformal weights $h\in\{0,\frac{1}{40}, \frac{1}{16}, \frac{1}{10}, \frac{9}{40}, \frac25\}$, which is consistent with the $\mathbf{Z}_2$-orbifold of a $U(1)$ boson compactified with radius $2R^2=10$ (coinciding with the $\mathcal{W}$-minimal model $\mathcal{W}D_{10}(19,20)$, see (\ref{cft_specWD10})), or the $\mathcal{WB}_{0,2}(4,5)$ minimal model with conformal weights (\ref{cft_specWBf2-45}). Note that these data alone are not sufficient to distinguish between these two rational CFTs. Even if we could compute data for states with larger conformal weights, contained in the spectrum of the $\mathcal{W}D_{10}(19,20)$ algebra, but not in the spectrum of the $\mathcal{WB}_{0,2}(4,5)$ algebra, we could not make a choice. The reason is that the additional conformal weights $h'$ in the spectrum of $\mathcal{W}D_{10}(19,20)$ all differ by non-negative half-integers or integers from the conformal weights $h$ in $\mathcal{WB}_{0,2}(4,5)$, $h'=h+k/2$ with $k\in\mathbb{Z}_+$, see Appendix~\ref{app:rCFTs-WBf}. This happens because the extended chiral symmetry algebra of the $\mathcal{WB}_{0,2}(4,5)$ model possesses a generator $Q$ of half-integer scaling dimension $h_Q=5/2$.
This generator allows one to build states in representations with conformal dimensions $h'$ shifted by a half-integer or integer, namely as excitations created by modes of $Q$ acting on states from representations with unshifted conformal weights $h$, e.g.\ $|h'\rangle = Q_{-k/2}|h\rangle$. Thus, we expect that the $\mathcal{W}D_{10}(19,20)$ model admits a non-diagonal partition function with terms of a form like $|\chi^{D_{10}}_h+\chi^{D_{10}}_{h'}|^2$, as linear combinations of the corresponding characters yield the characters of the representations of the $\mathcal{WB}_{0,2}(4,5)$ model, which would then read something like $\chi^{D_{10}}_h+\chi^{D_{10}}_{h'}=\chi^{\mathcal{B}_{0,2}}_h$. A hint towards the identification can be taken from the degeneracies in the spectrum found from the exact diagonalization of the lattice models of lengths up to $L=10$: in the clock model we find that all levels apart from those containing the singlet vacuum $(h,\bar{h})_2=0$ of the $c_2=1$ sector appear with even degeneracy. This agrees with what is found in the $\mathcal{WB}_{0,2}$ model as a consequence of the existence of the fermionic field $Q$ with half-integer modes, see Appendix~\ref{app:rCFTs-WBf}. The multiplicities in the spectrum of the corresponding anyon chain, however, are different: among the spin $s=0$ levels listed in Table~\ref{table:fsdata53p} only the one with weights $(\frac{1}{16},\frac{1}{16}) +(\frac{1}{16},\frac{1}{16})$ appears with multiplicity two for the system sizes which can be handled by numerical diagonalization. In order to identify the rational CFT unambiguously, one would have to identify all good quantum numbers of the lattice models and map them to the weights of the representations with respect to the zero modes of all generators of the respective chiral symmetry algebras.
If one such quantum number is to be identified with the eigenvalues of the zero mode $W_0$ of the generator with half-integer scaling dimension, the choice would necessarily be the $\mathcal{WB}_{0,2}(4,5)$ model. So far, the numerical data only give us the weights with respect to the Virasoro generator $L_0$, which is associated to the energy quantum number. \paragraph*{\underline{$(n,\ell_{FZ},J)=(5,3,-1)$}.} The thermodynamic Bethe ansatz analysis predicts a ground state energy density $\epsilon_\infty=-2.75801611466$ and that this model has two branches of low energy excitations with different Fermi velocities, $v^{(F)}_{1}=5$ and $v^{(F)}_{2}=5/3$, respectively (see Table~\ref{table:thermo357}). This indicates that the effective field theory is a product of two sectors. Unfortunately, the numerical solution of the Bethe equations is plagued by instabilities and we have to rely on data obtained using the Lanczos algorithm for system sizes up to $L=11$. Extrapolating the finite size data of the ground state energy (realized for even $L$ in the clock model and for $L=0\mod 8$ for the anyon chain) we find \begin{equation} \label{C2-extra} C(L) \equiv -\frac{6L}{\pi} \left(E_0(L) -L\epsilon_\infty\right) \to \frac{25}{6}\,, \end{equation} see Table~\ref{table:fsdata53m}. For a CFT with two critical degrees of freedom this quantity is expected to be the combination $\left( v^{(F)}_{1}\, c_1 + v^{(F)}_{2}\, c_2\right)$ of the Fermi velocities and the universal central charges $c_i$ of the factor CFTs. Assuming that both factors are unitary, the unique solution is $c_1=\frac12$, $c_2 = 1$, implying that the continuum limit of the lattice models is described by an Ising model and a $c=1$ CFT.
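With the Fermi velocities quoted above, this assignment can be verified by direct arithmetic against the extrapolated value in (\ref{C2-extra}):

```latex
\[
  v^{(F)}_{1}\,c_{1} + v^{(F)}_{2}\,c_{2}
  = 5\cdot\frac12 + \frac53\cdot 1
  = \frac{15}{6} + \frac{10}{6}
  = \frac{25}{6}\,.
\]
```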
\begin{table}[t] \begin{ruledtabular} \begin{tabular}{c|ccccc|c|c|c} $L$ & $2$ & $4$ & $6$ & $8$ & $10$ & extr.\ & conj.\ & \\ \hline $C(L)$ & 4.5466656 & 4.2580757 & 4.2070680 & 4.1893536 & 4.1811756 & $4.166(1)$ & $\frac12+1$ & a,c \\\hline $X(L)$ & 0.1982213 & 0.2060728 & 0.2073608 & 0.2077947 & 0.2079917 & 0.2083(2) & $(0,0)+(\frac{1}{16},\frac{1}{16})$ & a \\ & 0.3306743 & 0.3328838 & 0.3331548 & 0.3332373 & 0.3332732 & 0.3333(3) & $(0,0)+(\frac{1}{10},\frac{1}{10})$ & a,c \\ & 0.7237420 & 0.7120260 & 0.7099619 & 0.7092469 & 0.7089172 & 0.7083(1) & $(\frac{1}{16},\frac{1}{16})+(\frac{1}{40},\frac{1}{40})$ & a,c \\ & 0.8534244 & 0.8380366 & 0.8354018 & 0.8344928 & 0.8340742 & 0.8333(1) & $(\frac{1}{16},\frac{1}{16})+(\frac{1}{16},\frac{1}{16})$ & a \\ & 1.5861812 & 1.3565224 & 1.3430229 & 1.3386661 & 1.3367116 & 1.333(1) & $(0,0)+(\frac25,\frac25)$ & a,c \\ & 1.3868122 & 1.3784210 & 1.3765362 & 1.3758661 & 1.3755547 & 1.3749(3) & $(\frac{1}{16},\frac{1}{16})+(\frac{9}{40},\frac{9}{40})$ & a,c \\\hline $L$ & $3$ & $5$ & $7$ & $9$ & $11$ & & &\\ \hline $X(L)$ & 0.7150004 & 0.7106879 & 0.7095284 & 0.7090548 & 0.7088157 & 0.7083(1) & $(\frac{1}{16},\frac{1}{16})+(\frac{1}{40},\frac{1}{40})$ & a,c \\ & 0.7670079 & 0.7920987 & 0.8033895 & 0.8098203 & 0.8139754 & 0.8332(2) & $(\frac{1}{16},\frac{1}{16})+(\frac{1}{16},\frac{1}{16})$ & a \\ & 0.9160522 & 0.8804778 & 0.8662928 & 0.8586707 & 0.8539125 & 0.833(1) & $(\frac{1}{16},\frac{1}{16})+(\frac{1}{16},\frac{1}{16})$ & a \\ & 0.9748811 & 1.0175706 & 1.0294044 & 1.0342663 & 1.0367220 & 1.041(2) & $(0,0)+(\frac{9}{16},\frac{1}{16})$ & a,2 \\ & 1.3813560 & 1.3772372 & 1.3761362 & 1.3756863 & 1.3754591 & 1.375(1) & $(\frac{1}{16},\frac{1}{16})+(\frac{9}{40},\frac{9}{40})$ & a,c \end{tabular} \end{ruledtabular} \caption{\label{table:fsdata53m} Numerical finite size data obtained using the Lanczos algorithm for the finite size scaling of the ground state energy (\ref{C2-extra}) and the scaled energy gaps (\ref{X2-extra}) of 
the $(n,\ell_{FZ},J)=(5,3,-1)$ model for small systems together with their VBS extrapolation \cite{HaBa81}. The conjectures are central charges $c_1+c_2$ and conformal weights $(h,\bar{h})_1+(h,\bar{h})_2$ based on the proposed factorization into an Ising model and a Gaussian model with $\mathbf{Z}_2$ orbifold compactification radius $2R^2=10$ for the continuum limit. In the last column we indicate in which model (a: anyon chain, c: clock model) the corresponding level can be observed -- possibly subject to further constraints on the system size $L$. Similarly, a label $2$ indicates a spin $\frac12$ state: this has been used in the identification of the state with conformal weights $(0,0)+(\frac{9}{16},\frac{1}{16})$.} \end{table} Similarly, we extrapolate the quantities \begin{equation} \label{X2-extra} X_n(L) \equiv \frac{L}{2\pi} \left(E_n(L) - E_0(L)\right)\, \end{equation} for the low lying excitations to identify the operator content of the two sectors. In the thermodynamic limit $X_n(L) \to \left( v^{(F)}_{1}\, X_{1,n} + v^{(F)}_{2}\, X_{2,n}\right)$ where $X_{i,n} = h_{i,n} + \bar{h}_{i,n}$ is the scaling dimension of an operator with conformal weights $(h_{i,n},\bar{h}_{i,n})$ in sector $i=1,2$. Our numerical data indicate that the spectrum of conformal weights in the $c=1$ sector coincides with that observed in the $(5,3,+1)$ model discussed above. As a consequence, candidates for the CFT in this sector are again the Gaussian model with $\mathbf{Z}_2$-orbifold compactification with radius $2R^2=10$ (or, equivalently, one of the $\mathcal{W}D_{10}(19,20)$ or $\mathcal{WB}_{0,2}(4,5)$ minimal models). Together with the numerical data for the ground state phase diagram of the generic Hamiltonian $\mathcal{H}_{(5,1)}$ (\ref{hamil5}) discussed above this leads us to conjecture an extended phase of this model with an effective central charge $c=\frac12+1$ for coupling constants $\pi/5<\theta<9\pi/10$, see Fig.~\ref{fig:phasp_n5}~(a).
Within this phase the Fermi velocities are non-universal functions of the parameter $\theta$. \subsection{The $\mathbf{Z}_{7}$ and $so(7)_{2}$ models} As discussed above, the Hamiltonians $\mathcal{H}_{(7,\ell)}$ of the clock and anyon models are equivalent under the unitary transformations constructed from the automorphisms $\nu_t$ (\ref{map-nut}) and (\ref{maps-son}), respectively. Therefore the integrable points correspond to particular choices of the coupling constants $\boldsymbol{\alpha}$ on the surface of a $3$-sphere. For the presentation of the phase portrait it is convenient to choose two angular variables to parametrize the coupling constants, i.e. \begin{equation} \label{hamil7_sphere} \mathcal{H}_{(7)}(\theta,\psi) = \sum_{j=1}^{L} \left[ \cos(\psi) \sin(\theta) \, p_j^{(\phi_{1})} + \sin(\psi) \sin(\theta) \, p_j^{(\phi_{2})} + \cos(\theta) \, p_j^{(\phi_{3})} \right]\,. \end{equation} In this parameterization the Temperley--Lieb points occur at $(\psi_{TL},\theta_{TL}) = (\frac{\pi}{4}, \arctan(\sqrt{2}))$ and $(\psi_{TL}+\pi,\pi-\theta_{TL})$. To illustrate some properties of the phases of the model as the coupling parameters are varied, the ground state expectation values $\langle p^{(\epsilon_+)} \rangle$, $\langle p^{(\phi_1)} \rangle$, $\langle p^{(\phi_2)} \rangle$, $\langle p^{(\phi_3)} \rangle$, as obtained from the variational matrix product ground states computed numerically using the \textsc{evomps} library \cite{evomps}, are shown in Fig.~\ref{fig:phasp_n7}. \begin{figure}[t] \includegraphics[width=0.84\textwidth]{p_n7} \caption{Ground state expectation values of the local projection operators ($\langle p^{(\epsilon_+)} \rangle$, $\langle p^{(\phi_1)} \rangle$, $\langle p^{(\phi_2)} \rangle$, $\langle p^{(\phi_3)} \rangle$ from top to bottom) of the $so(7)_2$ anyon chain as a function of the coupling constants.
Data are color-coded on the spherical parameter space of the models (\ref{hamil7_sphere}) in terms of the projectors obtained from the $\ell=1$ $F$-moves. Red dots indicate the location of the Fateev-Zamolodchikov integrable points $(7,\ell_{FZ},J)$, while the Temperley-Lieb integrable models are located at the green dots. \label{fig:phasp_n7}} \end{figure} Note that, as in the $n=5$ model, the ferromagnetic TL point (on the southern hemisphere) is located at the intersection of $(n-1)/2$ phases differing in the label $k$ of the dominant order parameter $\langle p^{(\phi_k)} \rangle$, leading to a highly degenerate spectrum. Similarly, the model is expected to be gapped in a neighbourhood of the \emph{anti}ferromagnetic TL point (on the northern hemisphere). For the FZ integrable points we have used the approach described above to compute the central charge(s) and some of the scaling dimensions of primary fields of the low energy effective theories describing the continuum limit of the lattice models. \paragraph*{\underline{$(n,\ell_{FZ},J)=(7,1,+1)$}.} The configuration of Bethe roots corresponding to the ground state of this model consists of $(1,\pm)$-strings, see Table~\ref{table:thermo357}. From the root density approach we obtain the ground state energy density, $\epsilon_\infty = -11.7779598163070$. There is a single branch of gapless low energy excitations with Fermi velocity $v^{(F)}=7/6$. From the scaling analysis (\ref{fscft}) of the finite size spectra obtained from the solution of the Bethe equations we obtain a central charge $c=1$ and the conformal weights shown in Table~\ref{table:fsdata71p}. These data are consistent with the $\mathbb{Z}_2$-orbifold of a Gaussian model with compactification radius $2R^2=7$, see Eq.~(\ref{cft_specWD7-1314}), and in agreement with the conjectured $\mathcal{W}D_n(2n-1,2n)$ rational CFT description of the critical series of \emph{anti}ferromagnetic $\mathbf{Z}_n$ FZ models with $c=1$ \cite{FiFF14}.
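As a quick consistency check of this orbifold identification (our own observation based on the weights listed in Table~\ref{table:fsdata71p}): apart from the twist field, the observed conformal weights follow the quadratic pattern

```latex
\[
  h_k = \frac{k^{2}}{28}\,, \qquad k=1,\dots,4:\qquad
  h \in \Bigl\{\tfrac{1}{28},\,\tfrac{1}{7},\,\tfrac{9}{28},\,\tfrac{4}{7}\Bigr\}\,,
\]
```

while the remaining weight $h=\frac{1}{16}$ is the universal twist field of a $\mathbf{Z}_2$-orbifolded boson.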
\begin{table}[t] \begin{ruledtabular} \begin{tabular}{lccccc} $X$ (extrap.) & $s$ & $(h,\bar{h})$ & $\Delta_{(1,+)}$ & $\Delta_{(1,-)}$ & comment \tabularnewline \hline 0.071429 & 0 & $(\frac{1}{28},\frac{1}{28})$ & $-1$ & $-1$ & a,c \tabularnewline 0.125000 & 0 & $(\frac{1}{16},\frac{1}{16})$ & 0 & 0 & a \tabularnewline 0.285714 & 0 & $(\frac{1}{7},\frac{1}{7})$ & $-2$ & $-2$ & a,c \tabularnewline 0.642857 & 0 & $(\frac{9}{28},\frac{9}{28})$ & $-3$ & $-3$ & a,c \tabularnewline 1.142856(5) & 0 & $(\frac{4}{7},\frac{4}{7})$ & $-4$ & $-2$ & a,c \tabularnewline \end{tabular} \end{ruledtabular} \caption{\label{table:fsdata71p} Similar to Table~\ref{table:fsdata53p} but for the $(7,1,+1)$ model. The conjectured conformal weights are from a $\mathcal{W}D_7(13,14)$ minimal model (\ref{cft_specWD7-1314}).} \end{table} \paragraph*{\underline{$(n,\ell_{FZ},J)=(7,1,-1)$}.} From the Bethe ansatz analysis of the thermodynamic limit we obtain the ground state energy density $\epsilon_\infty = -4.34504525693274$ of this model as well as the Fermi velocity $v^{(F)}=7$ of the single branch of its gapless excitations. This ferromagnetic FZ clock model has been identified with the $\mathbf{Z}_7$ parafermion CFT \cite{ZaFa85,JiMO86} with central charge $c=\frac43$ and conformal weights (\ref{cft_specZ7}). Indeed, the lowest $\mathbf{Z}_7$ conformal weights are found in the finite size spectrum of the clock model, see Table~\ref{table:fsdata71m}. In the spectrum of the corresponding $so(7)_2$ anyon chain, however, additional levels are present with $(h,\bar{h}) = (\frac{1}{24}, \frac{1}{24})$ and $(\frac{7}{72},\frac{7}{72})$, leading to the conjecture that the critical theory is the $\mathcal{W}B_3(7,9)$ rational CFT, which has the same central charge and conformal weights listed in (\ref{cft_specWB3-79}). As noted in Appendix~\ref{app:rCFTs} the spectrum of $\mathbf{Z}_7$ is a subset hereof.
The observed anyon levels are the $\mathcal{W}B_3(7,9)$ primaries with the lowest weights outside the parafermion subset \begin{equation} \label{cft_specWB3-79x} \mathrm{spec}(\mathcal{W}B_3(7,9)) \setminus \mathrm{spec}(\mathbf{Z}_7) = \left\{\frac{1}{24}, \frac{7}{72}, \frac{5}{24}, \frac38, \frac{13}{24}, \frac{43}{72}, \frac{17}{24}, \frac{11}{9}, \frac53, \frac{15}{8}, \frac73, 3 \right\}\,. \end{equation} \begin{table}[t] \begin{ruledtabular} \begin{tabular}{lccc} $X$ (extrap.) & $s$ & $(h,\bar{h})$ & comment \tabularnewline \hline 0.08345(3) & 0 & $(\frac{1}{24},\frac{1}{24})$ & a \tabularnewline 0.09612 & 0 & $(\frac{1}{21},\frac{1}{21})$ & a,c \tabularnewline 0.159001 & 0 & $(\frac{5}{63},\frac{5}{63})$ & a,c \tabularnewline 0.190512 & 0 & $(\frac{2}{21},\frac{2}{21})$ & a,c \tabularnewline 0.195(1) & 0 & $(\frac{7}{72},\frac{7}{72})$ & a,* \tabularnewline \end{tabular} \end{ruledtabular} \caption{\label{table:fsdata71m} Similar as Table~\ref{table:fsdata53p} but for the $(7,1,-1)$ model. The conjectured conformal weights appear in the $\mathcal{W}B_3(7,9)$ rational CFT. We do not list Bethe root configurations $\Delta_\gamma$ here, as the root patterns identified in Appendix~\ref{app:thermo} are obscured by finite size effects. For the level (*) extrapolating to $h=\bar{h}=\frac{7}{72}$ we have no Bethe ansatz results and use Lanczos data instead.} \end{table} \paragraph*{\underline{$(n,\ell_{FZ},J)=(7,3,+1)$}.} From the Bethe ansatz analysis of the thermodynamic limit we obtain the ground state energy density of this model to be $\epsilon_\infty = -6.00247204898418$. There are three branches of low lying excitations over the ground state with two different Fermi velocities, $v^{(F)}_1=7$ and $v^{(F)}_2=7/4$. We have not succeeded in solving the Bethe equations for finite chains. 
Instead we rely on numerical finite size data obtained using the Lanczos algorithm for the identification of the critical properties of this model: based on our extrapolation of the numerical data for $C(L)$ as defined in (\ref{C2-extra}) we conjecture the critical theory to be a product of two sectors with $c_1=4/5$ and $c_2=1$, respectively, giving $C(L)\to 147/20$, see Table~\ref{table:fsdata73p}. \begin{table}[t] \begin{ruledtabular} \begin{tabular}{c|ccccc|c|c|c} $L$ & $2$ & $4$ & $6$ & $8$ & $10$ & extrap. & conj. &\\ \hline $C(L)$ & $8.0776124$ & $7.5286196$ & $7.4302729$ & $7.3956872$ & $7.3795512$ & $7.35(1)$ & $\frac45+1$ & a,c\\\hline $X(L)$ & 0.3509154 & 0.3693661 & 0.3724672 & 0.3735447 & 0.3740478 & 0.375(2) & $(0,0)+(\frac{3}{28},\frac{3}{28})$ & a,c\\ & -- & 0.5765752 & -- & 0.5735123 & -- & -- & $(\frac{1}{40},\frac{1}{40})+(\frac{1}{16},\frac{1}{16})$ & a\\ & 1.0158173 & 0.9931704 & 0.9871721 & 0.9843406 & 0.9826656 & 0.974(3)& $(\frac{1}{15},\frac{1}{15})+(\frac{1}{84},\frac{1}{84})$ & a,c\\ & 1.1435201 & 1.1190081 & 1.1126025 & 1.1096049 & 1.1078447 & 1.100(3) & $(\frac{1}{15},\frac{1}{15})+(\frac{1}{21},\frac{1}{21})$ & a,c \\ & 1.6262976 & 1.5218150 & 1.5085575 & 1.5045289 & 1.5045289 & 1.500(2) & $(0,0)+(\frac37,\frac{3}{7})$ & a,c\\ & 1.9150465 & 1.6157237 & 1.6112404 & 1.6088639 & 1.6074000 & 1.604(5) & $(\frac{1}{15},\frac{1}{15})+(\frac{4}{21},\frac{4}{21})$ & a,c\\ & --- & 1.5564081 & 1.6636580 & 1.7013413 & 1.7188205 & 1.73(2) & $(\frac{1}{15},\frac{1}{15})+(\frac{25}{84},\frac{25}{84})$ & c\\ \end{tabular} \end{ruledtabular} \caption{\label{table:fsdata73p} Similar as Table~\ref{table:fsdata53m} but for the $(7,3,+1)$ model. 
The last column contains our conjectures for $c_1+c_2$ and $(h,\bar{h})_1+(h,\bar{h})_2$ based on a factorization of the low energy effective theory with the first factor being the $\mathcal{M}_{(5,6)}$ minimal model with central charge $c_1=\frac45$ and conformal weights (\ref{cft_specPotts3}).} \end{table} Identifying the first factor with the $\mathcal{M}_{(5,6)}$ minimal model we also obtain conjectures for some of the lowest scaling dimensions present in the $c_2=1$ sector. We observe that most of these are of the form $k^2/42$ with integer $k$. In addition, there is numerical evidence for a level with $X_2=\frac18$ in the spectrum of the anyon chain. Based on this observation we conjecture that the latter sector is a $\mathbf{Z}_2$-orbifold of a Gaussian model with compactification radius $2R^2=21$ with spectrum (\ref{cft_specWD21}), see also the finite size analysis for the $(7,5,+1)$ model below. \paragraph*{\underline{$(n,\ell_{FZ},J)=(7,3,-1)$}.} The energy density of this model in the thermodynamic limit is $\epsilon_\infty = -8.35336129420055$; gapless excitations propagate with Fermi velocity $v^{(F)}= 7/3$. From the finite size analysis based on the Bethe ansatz we find that the central charge of the critical theory is $c=3/2$. We have also identified some conformal weights, see Table~\ref{table:fsdata73m}. \begin{table}[t] \begin{ruledtabular} \begin{tabular}{lcccccccc} $X$ (extrap.)
& $s$ & $(h,\bar{h})_1+(h,\bar{h})_2$ & $\Delta_{(1,+)}$ & $\Delta_{(1,-)}$ & $\Delta_{(2,+)}$ & $\Delta_{(2,-)}$ & $\Delta_{(1,m)}$ &comment \tabularnewline \hline 0.125000 & 0 & $(0,0)+(\frac{1}{16},\frac{1}{16})$ &&&&&& a,* \tabularnewline 0.142857 & 0 & $(0,0)+(\frac{1}{14},\frac{1}{14})$ & 0 & 0 & $-1$ & $-1$ & 0 & a,c \tabularnewline 0.160714 & 0 & $(\frac{1}{16},\frac{1}{16}) + (\frac{1}{56},\frac{1}{56})$ & 0 & 0 & $-1$ & $-1$ & 1 & a,c \tabularnewline 0.250000 & 0 & $(\frac{1}{16},\frac{1}{16})+(\frac{1}{16},\frac{1}{16})$ & 2 & 0 & $-1$ & 0 & 0 & a \tabularnewline &&& 0 & 2 & 0 & $-1$ & 0 & a \tabularnewline 0.446429 & 0 & $(\frac{1}{16},\frac{1}{16}) + (\frac{9}{56},\frac{9}{56})$ & 0 & 0 & $-2$ & $-2$ & 1 & a,c \tabularnewline 0.571428(4) & 0 & $(0,0)+(\frac{2}{7},\frac{2}{7})$ & 0 & 2 & $-2$ & $-2$ & 0 & a,c \tabularnewline 1.000000 & 0 & $(\frac{1}{2},\frac{1}{2})+(0,0)$ & 2 & 2 & $-1$ & $-1$ & 0 & a,c \tabularnewline \end{tabular} \end{ruledtabular} \caption{\label{table:fsdata73m} Similar as Table~\ref{table:fsdata53p} but for the $(7,3,-1)$ model. We have not identified the Bethe root configuration for the anyon state with $(h,\bar{h}) = (\frac{1}{16},\frac{1}{16})$, the extrapolation is based on numerical finite size data from diagonalization of the Hamiltonian. For the level (*) extrapolating to $h_2=\bar{h}_2=\frac{1}{16}$ we have no Bethe ansatz results and use Lanczos data instead.} \end{table} Based on these data we conjecture that the effective theory describing this model in the thermodynamic limit is again a product of two rational CFTs, namely an Ising model and a $\mathbb{Z}_2$-orbifold of a $U(1)$-boson compactified to a circle with radius $2R^2=14$. The latter contributes $c_2=1$ and conformal weights (\ref{cft_specWD14}) to the finite size scaling. 
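Since this model has a single Fermi velocity, the effective scaling dimensions listed in Table~\ref{table:fsdata73m} are simply the sums $X=X_1+X_2$ of the dimensions in the two factors; for instance, for the level with weights $(\frac{1}{16},\frac{1}{16})+(\frac{1}{56},\frac{1}{56})$ one checks

```latex
\[
  X = 2\cdot\frac{1}{16} + 2\cdot\frac{1}{56}
    = \frac{7}{56} + \frac{2}{56}
    = \frac{9}{56}
    \simeq 0.160714\,,
\]
```

in agreement with the extrapolated value in the table.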
\paragraph*{\underline{$(n,\ell_{FZ},J)=(7,5,+1)$}.} The ground state energy density of this model is $\epsilon_\infty = -10.1731518217117$, its excitations are gapless and propagate with Fermi velocity $v^{(F)}=7/2$. Analyzing the finite size spectrum obtained from the Bethe equations we find that the central charge is $c=9/5$ and have identified several conformal weights, see Table~\ref{table:fsdata75p}. \begin{table}[t] \begin{ruledtabular} \begin{tabular}{lcccc} $X$ (extrap.)& $s$ & $(h,\bar{h})_1+(h,\bar{h})_2$ & comment \tabularnewline \hline 0.157171 & 0 & $(\frac{1}{15},\frac{1}{15})+(\frac{1}{84},\frac{1}{84})$ & a,c \tabularnewline 0.175005(2) & 0 & $(\frac{1}{40},\frac{1}{40})+(\frac{1}{16},\frac{1}{16})$ & a \tabularnewline 0.214286 & 0 & $(0,0)+(\frac{3}{28},\frac{3}{28})$& a,c \tabularnewline 0.228576 & 0 & $(\frac{1}{15},\frac{1}{15})+(\frac{1}{21},\frac{1}{21})$& a,c \tabularnewline 0.375000 & 0 & $(\frac18,\frac18)+(\frac{1}{16},\frac{1}{16})$& a \tabularnewline 0.72864(2) & 0 & $(\frac{1}{15},\frac{1}{15})+(\frac{25}{84},\frac{25}{84})$& a,c \tabularnewline 0.857146 & 0 & $(0,0)+(\frac37,\frac37)$& a,c \tabularnewline \end{tabular} \end{ruledtabular} \caption{\label{table:fsdata75p} Similar to Table~\ref{table:fsdata53p} but for the $(7,5,+1)$ model. Again, the corresponding root patterns $\Delta_\gamma$ are obscured by finite size effects.} \end{table} As in the $(7,3,+1)$ model the finite size data are consistent with the conjecture that the critical theory is a product of two sectors described by the $\mathcal{M}_{(5,6)}$ minimal model with central charge $c_1=4/5$ and a $c=1$ rational CFT, the latter being a $\mathbf{Z}_2$ orbifold of a Gaussian model with radius $2R^2=21$. The levels identified with solutions of the Bethe equations in Table~\ref{table:fsdata75p} contain all but one of the lowest ones predicted based on this conjecture.
By numerical diagonalization for systems of length up to $L=10$ we find that there is indeed another level in the finite size spectrum of both the clock and the anyon model, which is consistent with the missing scaling dimension $X=\frac{2}{15}+\frac{8}{21}\simeq 0.5142857\ldots$: \begin{center} \begin{ruledtabular} \begin{tabular}{c|ccccc|c|c} $L$ & $2$ & $4$ & $6$ & $8$ & $10$ & extrap. & conj. \\ \hline & 0.68355511 & 0.58292862 & 0.56046603 & 0.54980798 & 0.54346327 & 0.52(1) & $(\frac{1}{15},\frac{1}{15}) + (\frac{4}{21},\frac{4}{21})$ \end{tabular} \end{ruledtabular} \end{center} As for the $(n,\ell_{FZ},J)=(5,3,\pm1)$ models, the numerical data characterizing the ground state phase diagram of the generic Hamiltonian $\mathcal{H}_{(7)}$ (\ref{hamil7_sphere}), see Fig.~\ref{fig:phasp_n7}, indicate that the integrable points $(7,3,+1)$ and $(7,5,+1)$ are located in an extended critical phase with effective central charge $c=\frac45+1$ and a non-universal dependence of the corresponding Fermi velocities on the coupling constants. \paragraph*{\underline{$(n,\ell_{FZ},J)=(7,5,-1)$}.} From the root density approach we obtain the ground state energy density for this model to be $\epsilon_\infty = -3.96535709066090$. There are two branches of low lying excitations over the ground state with different Fermi velocities $v^{(F)}_1=7$, $v^{(F)}_2=7/5$. Based on our extrapolation of the numerical finite size data for $C(L) \equiv -(6 L/\pi) (E_0(L)-L\epsilon_\infty)$ we conjecture the critical theory to be a product of two sectors with $c_1=1/2$ and $c_2=1$, respectively, giving $C(L) = v^{(F)}_1 c_1 + v^{(F)}_2 c_2 \to 49/10$, see Table~\ref{table:fsdata75m}. \begin{table}[t] \begin{ruledtabular} \begin{tabular}{c|ccccc|c|c|c} $L$ & $5$ & $6$ & $7$ & $8$ & $9$ & extrap. & conj.
\\ \hline $C(L)$ & $4.9297964$ & $4.9211744$ & $4.9157441$ & $4.9121370$ & $4.9096312$ & $4.899(1)$ & $\frac12+1$ & a,c \\\hline $X(L)$ & 0.1726247 & 0.1733539 & 0.1737921 & 0.1740760 & 0.1742703 & 0.1750(1) & $(0,0)+(\frac{1}{16},\frac{1}{16})$ & a \\ & 0.1940875 & 0.1959664 & 0.1970656 & 0.1977667 & 0.1982423 & 0.2000(3) & $(0,0)+(\frac{1}{14},\frac{1}{14})$ & a,c \\ & 0.8100552 & 0.8068032 & 0.8049255 & 0.8037371 & 0.8029350 & 0.8005(6) & $(0,0)+(\frac27,\frac27)$ & a,c \\ & 0.9284772 & 0.9274070 & 0.9267650 & 0.9263496 & 0.9260655 & 0.9250(1) & $(\frac{1}{16},\frac{1}{16})+(\frac{1}{56},\frac{1}{56})$ & a,c \\ & 1.0486155 & 1.0491055 & 1.0493694 & 1.0495292 & 1.0496342 & 1.050(2) & $(\frac{1}{16},\frac{1}{16})+(\frac{1}{16},\frac{1}{16})$ & a \\ & 1.0611437 & 1.0576541 & 1.0555890 & 1.0542630 & 1.0533600 & 1.052(5) & $(\frac{1}{16},\frac{1}{16})+(\frac{1}{16},\frac{1}{16})$ & a \\ & 1.3284775 & 1.3274085 & 1.3267667 & 1.3263512 & 1.3260669 & 1.3250(1) & $(\frac{1}{16},\frac{1}{16})+(\frac{9}{56},\frac{9}{56})$ & a,c \\ & 1.6118182 & 1.6001571 & 1.5933096 & 1.5889348 & 1.5859659 & 1.5751(2) & $(0,0)+(\frac{9}{16},\frac{9}{16})$ & a \\ & 1.9180760 & 1.8811101 & 1.8591850 & 1.8451080 & 1.8355284 & 1.7999(1) & $(0,0)+(\frac{9}{14},\frac{9}{14})$ & a,c \end{tabular} \end{ruledtabular} \caption{\label{table:fsdata75m} Similar as Table~\ref{table:fsdata53m} but for the $(7,5,-1)$ model. The last column displays the conjectured contributions to the central charge and conformal weights of the two factors (Ising and $\mathbf{Z}_2$-orbifolded boson with compactification radius $2R^2=14$) to the finite size amplitudes $Y = v^{(F)}_1Y_1 + v^{(F)}_2Y_2$ (for $Y=C$, $X$), see Eqs.~(\ref{C2-extra}) and (\ref{X2-extra}).} \end{table} Similarly, the data $X(L) = (L/2\pi)\left(E_n(L) - L\epsilon_\infty\right) + C(\infty)/12$ for the lowest excitations are consistent with those observed for the $(7,3,-1)$ model discussed above. 
Therefore, we conjecture that the $c=1$ factor is the $\mathbf{Z}_2$-orbifold of a boson with compactification radius $2R^2=14$. Together with our numerical data characterizing the ground state phase diagram of the generic $\mathcal{H}_{(7)}$ (\ref{hamil7_sphere}), see Fig.~\ref{fig:phasp_n7}, this indicates that the integrable points $(7,3,-1)$ and $(7,5,-1)$ are located in an extended critical phase with effective central charge $c=\frac12+1$. \begin{acknowledgments} This work has been carried out within the research unit \emph{Correlations in Integrable Quantum Many-Body Systems} (FOR2316). Financial support by the Deutsche Forschungsgemeinschaft through grant no.\ Fr\,737/8-1 is gratefully acknowledged. The numerical computations for this work were partially performed on the cluster system at Leibniz Universit\"at Hannover, Germany. \end{acknowledgments} \newpage \begin{appendix} \section{One-dimensional anyon chains from braided fusion categories} \label{app:fuscat} Algebraically, anyonic theories can be described by braided tensor categories \cite{Kita06}. A braided tensor category consists of a collection of objects $\{\psi_{i}\}_{i\in\mathcal{I}}$ (including an identity) equipped with a tensor product (fusion rules), \begin{align*} \psi_{a} \otimes \psi_{b} & \cong \bigoplus_{c} N_{ab}^{c} \psi_{c} \end{align*} where $N_{ab}^{c}$ are natural numbers (including zero). In the special case where $N_{ab}^{c}\in\{0,1\}$ the tensor category is said to be multiplicity free, a property which will be assumed for the remainder of the paper.
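As a familiar illustration of a multiplicity-free fusion algebra (quoted here purely as an example; it is not one of the $so(n)_2$ categories studied in this paper), the Ising category consists of three objects $\{\mathbf{1},\sigma,\psi\}$ with fusion rules

```latex
\[
  \sigma\otimes\sigma \cong \mathbf{1}\oplus\psi\,,\qquad
  \sigma\otimes\psi \cong \sigma\,,\qquad
  \psi\otimes\psi \cong \mathbf{1}\,,
\]
```

so that every fusion multiplicity $N_{ab}^{c}$ is either $0$ or $1$.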
Fusion can be represented graphically \begin{align*} \begin{tikzpicture}[scale=1.0] \tikzstyle{every node}=[minimum size=0pt,inner sep=0pt,style={sloped,allow upside down}] \tikzstyle{every loop}=[] \node (na) at (0.0,1.0) {$\psi_a$}; \node (nb) at (1.0,1.0) {$\psi_b$}; \node (nm) at (0.5,0.5) {}; \node (nc) at (0.5,0.0) {$\psi_c$}; \foreach \from/\to in {na/nm,nb/nm,nm/nc} \draw[middlearrow={latex}] (\from) -- (\to); \end{tikzpicture} \end{align*} provided that $\psi_c$ appears in the fusion of $\psi_a$ and $\psi_b$. We require associativity in our fusion i.e.\ \begin{align*} (\psi_{a} \otimes \psi_{b}) \otimes \psi_{c} & \cong \psi_{a} \otimes (\psi_{b} \otimes \psi_{c}), \end{align*} which is governed by $F$-moves, also referred to as generalised 6-j symbols, \begin{align*} \begin{tikzpicture}[scale=1.0] \tikzstyle{every node}=[minimum size=0pt,inner sep=0pt,style={sloped,allow upside down}] \tikzstyle{every loop}=[] \node (ns) at (0.2,0.0) {$\psi_a$}; \node (ne) at (2.8,0.0) {$\psi_e$}; \node (n1t) at (1.0,0.9) {$\psi_b$}; \node (n2t) at (2.0,0.9) {$\psi_c$}; \node (n1b) at (1.0,0.0) {}; \node (n2b) at (2.0,0.0) {}; \node (l1) at (1.6,-0.3) {$\psi_d$}; \foreach \from/\to in {ns/n1b,n1b/n2b,n2b/ne} \draw[middlearrow={latex}] (\from) -- (\to); \foreach \from/\to in {n1t/n1b,n2t/n2b} \draw[middlearrow={latex}] (\from) -- (\to); \node (l2) at (4.8,0.0) {$=\sum_{d'} (F^{abc}_{e})^{d}_{d'}$}; \node (vs) at (6.9,-0.2) {$\psi_a$}; \node (ve) at (9.5,-0.2) {$\psi_e$}; \node (v1) at (7.7,0.9) {$\psi_b$}; \node (v2) at (8.7,0.9) {$\psi_c$}; \node (v12) at (8.2,0.3) {}; \node (vb) at (8.2,-0.2) {}; \node (l3) at (8.55,0.05) {$\psi_{d'}$}; \foreach \from/\to in {vs/vb,vb/ve} \draw[middlearrow={latex}] (\from) -- (\to); \foreach \from/\to in {v1/v12,v2/v12,v12/vb} \draw[middlearrow={latex}] (\from) -- (\to); \end{tikzpicture} \end{align*} For more than three objects different decompositions of the fusion can be related by distinct series of $F$-moves. 
Their consistency for an arbitrary number of anyons is guaranteed by the Pentagon equation satisfied by the $F$-moves. There also must be a mapping that braids two objects, $R: \psi_{a} \otimes \psi_{b} \rightarrow \psi_{b} \otimes \psi_{a}$. Here, however, we are not concerned with such maps. Given a consistent set of rules for the fusion we can construct a periodic one-dimensional chain of $2L$ interacting `anyons' with topological charge $\psi_j$.\footnote{Here we are only considering chains of even length, in general one may also consider chains of odd length.} A basis vector for such a model is defined graphically as \begin{align*} \begin{tikzpicture}[scale=1.0] \tikzstyle{every node}=[minimum size=0pt,inner sep=0pt,style={sloped,allow upside down}] \tikzstyle{every loop}=[] \node (ns) at (0.2,0.0) {$\dots$}; \node (ne) at (6.8,0.0) {$\dots$}; \node (nebl) at (6.8,-0.3) {$\psi_{a_{2L}}$}; \node (n1t) at (1.0,0.9) {$\psi_j$}; \node (n2t) at (2.0,0.9) {$\psi_j$}; \node (n3t) at (3.0,0.9) {$\psi_j$}; \node (n5t) at (5.0,0.9) {$\psi_j$}; \node (n6t) at (6.0,0.9) {$\psi_j$}; \node (n1b) at (1.0,0.0) {}; \node (n2b) at (2.0,0.0) {}; \node (n3b) at (3.0,0.0) {}; \node (n5b) at (5.0,0.0) {}; \node (n6b) at (6.0,0.0) {}; \node (nm1) at (3.4,0.0) {}; \node (nm2) at (3.8,0.0) {}; \node (nm3) at (4.2,0.0) {}; \node (nm4) at (4.6,0.0) {}; \node (n1bl) at (1.6,-0.3) {$\psi_{a_{1}}$}; \node (n2bl) at (2.6,-0.3) {$\psi_{a_{2}}$}; \node (n5bl) at (5.6,-0.3) {$\psi_{a_{2L-1}}$}; \foreach \from/\to in {ns/n1b,n1b/n2b,n2b/n3b,n3b/nm1,nm2/nm3,nm4/n5b,n5b/n6b,n6b/ne} \draw[middlearrow={latex}] (\from) -- (\to); \foreach \from/\to in {n1t/n1b,n2t/n2b,n3t/n3b,n5t/n5b,n6t/n6b} \draw[middlearrow={latex}] (\from) -- (\to); \node (ns) at (9.0,0.3) {$\equiv \ket{a_{1},a_{2},...,a_{2L-1},a_{2L}}.$}; \end{tikzpicture} \end{align*} As we are dealing with periodic models we always identify $a_{i}$ with $a_{i+2L}$. 
Mathematically, we define the Hilbert space of the chain of $2L$ anyons with charge $\psi_j$ to be the vector space spanned by \begin{equation} \label{anybasis} \mathcal{B}_L^{(j)} = \left\{\ket{a_{1},a_{2},...,a_{2L}} | a_{i}\in\mathcal{I}\,\, \mbox{s.t.}\,\, N_{a_{i-1}j}^{a_{i}}=1 \right\}\,. \end{equation} This Hilbert space can be further decomposed into topological sectors based on the eigenvalues of a family of charges $\{Y_b\}_{b\in\mathcal{I}}$: these charges can be measured by inserting an additional anyon of type $\psi_b$ into the system which is then moved around the chain using the $F$-moves and finally removed again \cite{FTLT07}. The corresponding topological operator $Y_b$ has matrix elements \begin{equation} \label{Ytopo} \bra{a_{1}',a_{2}',...,a_{2L}'} Y_{b} \ket{a_{1},a_{2},...,a_{2L}} = \prod_{i=1}^{2L} \left(F^{b a_{i}'j}_{a_{i+1}}\right)^{a_{i}}_{a_{i+1}'}\,, \quad b\in\mathcal{I}\,. \end{equation} The spectrum of these operators is known: their eigenvalues are given in terms of the modular $\mathcal{S}$-matrix which diagonalizes the fusion rules \cite{Kita06}. The only other operators that we consider on this space are two-site projection operators: \begin{equation} \label{eqnProjOp} p^{(b)}_{i} = \sum_{\ket{\bf{a}},\ket{\bf{a}'}\in\mathcal{B}_L^{(j)}} \left[\prod_{k\neq i} \delta_{a_{k}}^{a_{k}'}\right] \left(\bar{F}^{a_{i-1}jj}_{a_{i+1}}\right)^{b}_{a_{i}'} \left(F^{a_{i-1}jj}_{a_{i+1}}\right)^{a_{i}}_{b} \ket{\bf{a}'}\bra{\bf{a}}\,, \end{equation} where $\bar{F}$ is the inverse $F$-move. Note that the matrix elements of these operators depend on triples of neighbouring labels $a_{i-1}a_{i}a_{i+1}$ in the fusion path but only the middle one may change under the action of the $p^{(b)}_{i}$.
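To make the definition (\ref{eqnProjOp}) concrete, the following minimal sketch (ours, not part of the original analysis) implements the fusion-path basis (\ref{anybasis}) and the projectors $p^{(b)}_i$ for the simplest nontrivial multiplicity-free example, the Fibonacci category with a single nontrivial $F$-matrix. The category, gauge convention, and chain length are illustrative assumptions only; none of them is one of the $so(n)_2$ theories studied here.

```python
import numpy as np
from itertools import product

phi = (1 + np.sqrt(5)) / 2  # golden ratio

# Fibonacci fusion rules: labels 0 (vacuum) and 1 (tau); tau x tau = 0 + 1
def fuse(a, b):
    if a == 0:
        return {b}
    if b == 0:
        return {a}
    return {0, 1}

def F(a, c, e, f):
    """(F^{a,tau,tau}_c)^e_f; only a = c = tau gives a nontrivial 2x2 block."""
    if (e not in fuse(a, 1) or f not in fuse(1, 1)
            or c not in fuse(e, 1) or c not in fuse(a, f)):
        return 0.0  # inadmissible label assignment
    if a == 1 and c == 1:
        M = np.array([[1 / phi, 1 / np.sqrt(phi)],
                      [1 / np.sqrt(phi), -1 / phi]])
        return M[e, f]
    return 1.0  # all other admissible F-symbols equal 1 in this gauge

# periodic fusion-path basis: a_i must appear in the fusion a_{i-1} x tau
L2 = 6  # number of sites, i.e. 2L in the notation of the text
basis = [a for a in product([0, 1], repeat=L2)
         if all(a[i] in fuse(a[i - 1], 1) for i in range(L2))]
index = {a: n for n, a in enumerate(basis)}

def projector(b, i):
    """Two-site projector p^(b)_i; for Fibonacci, F equals its own inverse."""
    p = np.zeros((len(basis), len(basis)))
    for a in basis:
        for mid in (0, 1):  # only the middle label a_i may change
            ap = a[:i] + (mid,) + a[i + 1:]
            if ap in index:
                am, an = a[(i - 1) % L2], a[(i + 1) % L2]
                p[index[ap], index[a]] = F(am, an, mid, b) * F(am, an, a[i], b)
    return p
```

Each $p^{(b)}_i$ comes out Hermitian and idempotent, and the two fusion channels resolve the identity, $p^{(0)}_i + p^{(1)}_i = \mathbf{1}$; a nearest-neighbour Hamiltonian of the form (\ref{anyhamil}) can then be assembled as, e.g., $-\sum_i p^{(0)}_i$.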
In terms of the local projection operators the global Hamiltonian describing nearest-neighbour interactions between $\psi_j$ anyons is given by \begin{equation} \label{anyhamil} \mathcal{H}({\boldsymbol\alpha}) = \sum_{i=1}^{2L}\left[\sum_{b\in\mathcal{I}} \alpha_{b}\, p_{i}^{(b)}\right]. \end{equation} This generic description has obvious redundancies; for example, it is clear that the sum need not run over all $b\in\mathcal{I}$ but can instead be restricted to $b$ such that $\psi_{j}\otimes\psi_{j}\cong\psi_{b}\oplus\cdots$. By construction the Hamiltonian commutes with the topological charges (\ref{Ytopo}). \subsection*{Symmetries from monoidal equivalences} Suppose that there exist two sets of $F$-moves, $F$ and $\widetilde{F}$, together with the corresponding projection operators (\ref{eqnProjOp}). $F$ and $\widetilde{F}$ are monoidally equivalent if they obey the relation \begin{align*} \left(F^{abc}_{d}\right)^{e}_{f} & = \frac{u^{af}_{d}u^{bc}_{f}}{u^{ab}_{e}u^{ec}_{d}} \left(\widetilde{F}^{\nu(a)\nu(b)\nu(c)}_{\nu(d)}\right)^{\nu(e)}_{\nu(f)} \end{align*} where $u_{ab}^{c}\in\mathbb{C}$ and $\nu:\mathcal{I}\rightarrow\mathcal{I}$ is an automorphism of the fusion rules, \begin{align*} N_{ab}^{c} & = N_{\nu(a)\nu(b)}^{\nu(c)}. \end{align*} In the special case where the automorphism is trivial, i.e.\ $\nu=\mbox{id}$, $F$ and $\widetilde{F}$ are said to be gauge equivalent. In this paper we say that $\widetilde{F}$ is monoidally related to $F$ via $\nu$ (or, equivalently, $F$ is monoidally related to $\widetilde{F}$ via $\nu^{-1}$). This distinction is convenient as it allows us to keep track of permutations which arise. It is natural to ask if one can relate models constructed from the different $F$-moves.
Consider the invertible operator which maps states from the Hilbert space of anyons with topological charge $\psi_j$ to that of anyons with topological charge $\psi_{\nu(j)}$, \begin{align} \bra{\bf{a}'} U \ket{\bf{a}} & = \left[\prod_{i=1}^{2L} \frac{\delta_{\nu(a_{i})}^{a_{i}'}}{u^{a_{i}j}_{a_{i+1}}} \right] & \forall \ket{\bf{a}}\in\mathcal{B}_L^{(j)}\,,\,\, \ket{\bf{a}'}\in\mathcal{B}_L^{(\nu(j))} \label{eqn:Udef} \end{align} This map provides an equivalence between different Hamiltonians, \begin{align*} \mathcal{H}(\nu({\boldsymbol\alpha})) & = U^{-1} \widetilde{\mathcal{H}}({\boldsymbol\alpha}) U, & \mbox{where} \quad \nu(\alpha)_{a} = \alpha_{\nu(a)}, \end{align*} where $\mathcal{H}({\boldsymbol\alpha})$ is the Hamiltonian built from $F$ acting on a chain of $\psi_{j}$-anyons, while $\widetilde{\mathcal{H}}({\boldsymbol\alpha})$ is the Hamiltonian built from $\widetilde{F}$ acting on a chain of $\psi_{\nu(j)}$-anyons. If one considers a chain of $\psi_{j}$-anyons and an automorphism satisfying $\nu(j)=j$ (for that $j$), then one can equate different models via the above transformation. If it is also the case that $F=\widetilde{F}$, which amongst other things necessitates $\mathcal{H}({\boldsymbol\alpha})= \widetilde{\mathcal{H}}({\boldsymbol\alpha})$, then the monoidal equivalence implies that the parameter space of the model possesses a symmetry governed by $\nu$. For instance, if $\nu$ is of order $n$, i.e. the minimum $n$ such that $\nu^{n}=\mbox{id}$, then the model must contain a global $\mathbf{Z}_{n}$ symmetry. \section{Thermodynamic limit of the integrable chains} \label{app:thermo} To analyze the properties of the integrable $\mathbf{Z}_n$ clock models and $so(n)_2$ anyon chains in the thermodynamic limit, $L\to\infty$, the root configurations solving the Bethe Eqs.~(\ref{baeFZ}) and (\ref{baeSO}) corresponding to the ground state and low lying excitations have to be identified.
For small system sizes the Bethe roots can be obtained by direct diagonalization of the transfer matrices: in the $so(n)_2$ case they can be related to zeroes of the eigenvalues while in the $\mathbf{Z}_n$ case a general non-uniform $R$-matrix has to be considered, see Ref.~\cite{BaSt90}. Although this approach is limited by the available computational resources, we find that -- for sufficiently large $L$ -- Bethe roots can be grouped into several characteristic patterns: a group of Bethe roots $\{u_j\}$ with identical real part is said to form a pattern of type $\gamma$ if \begin{align*} u_j \simeq u^{(\gamma)} + \mu_j + \delta\,,\qquad u^{(\gamma)}\in\mathbb{R}\,,\,\, \mu_j\in S_\gamma\,. \end{align*} For the Bethe equations (\ref{baeFZ}) and (\ref{baeSO}) we have to consider the root patterns\footnote{This generalizes Albertini's conjecture for the $\ell_{FZ}=1$ clock models \cite{Albe94}.} \begin{enumerate} \item $(k,+)$-strings: $S_{(k,+)} = \left\{\left.\frac{i\pi}{2}\left(1-\frac{\ell_{FZ}}{n}\right) \left(t-\frac{k+1}{2}\right) \right| t=1\ldots k\right\}$\,, \item $(k,-)$-strings: $S_{(k,-)} = \left\{\left.\frac{i\pi}{2}+\frac{i\pi}{2}\left(1-\frac{\ell_{FZ}}{n}\right) \left(t-\frac{k+1}{2}\right) \right| t=1\ldots k\right\}$\,, \item $k$-multiplets: $S_{(k,m)} = \left\{\left. \pm\frac{i\pi}{4}+\frac{i\pi}{2}\frac{\ell_{FZ}}{n} \left(t-\frac{k+1}{2}\right) \right| t=1\ldots k\right\}$\,. \end{enumerate} Within the root density formalism \cite{YaYa69} the thermodynamic properties of the models can be studied based on this classification of Bethe roots. The solution to the Bethe equations corresponding to the ground state of the $(n,\ell_{FZ},J)$-models of length $L$ can be decomposed into $d_\gamma$ patterns of type $\gamma\in\Gamma$ extending over the entire real axis with \begin{align*} \sum_{k=1}^{n-1} k\left( d_{(k,+)} + d_{(k,-)} + 2d_{(k,m)}\right) = (n-1)\,L\,.
\end{align*} Their densities $\rho_\gamma(u)$ satisfy a system of coupled linear integral equations \begin{align} \label{igl-rho} D_{\gamma}\rho_{\gamma}(u) & = \rho^{(0)}_{\gamma}(u) - \sum_{\gamma'\in\Gamma} \int_{-\infty}^{\infty} \mathrm{d}v\, K_{\gamma,\gamma'}(u-v) \rho_{\gamma'}(v) \end{align} where $D_{(k,\pm)}=-J$, $D_{(k,m)}=-2J$, and \begin{align*} K_{\gamma,\gamma'}(u) & = \sum_{\mu\in S_{\gamma}}\sum_{\mu'\in S_{\gamma'}} a\left(u;\frac{1}{2}-\frac{\ell_{FZ}}{2n} + \frac{\mu'-\mu}{i\pi}\right)\,, \\ \rho^{(0)}_{\gamma}(u) & = \sum_{\mu\in S_{\gamma}} a\left(u;\frac{\ell_{FZ}}{4n} + \frac{\mu}{i\pi}\right) \,, \end{align*} with \begin{align*} a(u;t) = -\frac{1}{\pi} \, \frac{\sin 2\pi t}{\cosh2u - \cos2\pi t}\,. \end{align*} It is straightforward to solve the integral equations (\ref{igl-rho}) by Fourier transform. In terms of the density functions $\rho_\gamma(u)$ the leading term of the ground state energy (\ref{specFZ}), (\ref{specSO}) is \begin{align*} E_0 &= L \epsilon_\infty +o(L^0)\,, & \epsilon_\infty & = J\pi \left\{ \sum_{\gamma\in\Gamma} \sum_{\mu\in S_{\gamma}} \int_{-\infty}^{\infty} \mathrm{d}u\, \rho_{\gamma}(u) a\left(u;\frac{\ell_{FZ}}{4n}+\frac{\mu}{i\pi}\right) \right\}\,. \end{align*} Low lying excitations are described by root configurations with finite deviations $\Delta_\gamma=d_\gamma-d_\gamma^{(0)}$ from the ground state distributions. They have a linear dispersion with Fermi velocities \begin{equation*} v_{\gamma}^{(F)} = \left. \frac{J}{2} \frac{\rho_{\gamma}'(u)}{\rho_{\gamma}(u)} \right|_{u\to\infty}. \end{equation*} Using our finite-size data as well, we have been able to identify the ground state configurations listed in Tables~\ref{table:thermo357}, \ref{table:thermo9}, \ref{table:thermo11}. These proposals have been checked against the ground state energy densities computed using the \textsc{evomps} algorithm.
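As a small numerical aside (a sketch of ours, not part of the original text): the kernel building block $a(u;t)$ defined above has total weight $\int_{-\infty}^{\infty}\mathrm{d}u\, a(u;t) = 2t-1$ for $0<t<1$, which is the $\omega=0$ value of the Fourier transform entering the solution of (\ref{igl-rho}); this is easy to confirm by direct quadrature.

```python
import numpy as np

def a(u, t):
    """Kernel a(u;t) = -(1/pi) * sin(2*pi*t) / (cosh(2u) - cos(2*pi*t))."""
    return -np.sin(2 * np.pi * t) / (np.pi * (np.cosh(2 * u) - np.cos(2 * np.pi * t)))

def total_weight(t, umax=20.0, npts=400001):
    """Trapezoidal estimate of the integral of a(u;t) over the real line.

    The integrand decays like exp(-2|u|), so a cutoff umax = 20 is ample."""
    u = np.linspace(-umax, umax, npts)
    f = a(u, t)
    h = u[1] - u[0]
    return h * (f.sum() - 0.5 * (f[0] + f[-1]))
```

For instance, `total_weight(0.25)` agrees with $2t-1=-\frac12$ to within the quadrature error.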
We observe that with $p$ being the integer satisfying $p \ell_{FZ} = J\mod n$, the ground state configuration always contains at least one of the patterns $(p,\pm)$. For the model $(n,\ell_{FZ},J)=(n,1,-1)$, i.e.\ $p=n-1$, the root configuration corresponding to the ground state consists of only $(p,(-1)^{(n+1)/2})$ patterns \cite{Albe92}. The Fermi velocity in this model is $v^{(F)}=n$ and from the finite size corrections to the ground state energy we find the central charge of the low energy effective theory to be $c=2\frac{n-1}{n+2}$. \begin{table}[ht] \begin{ruledtabular} \begin{tabular}{cccccc} $(n,\ell_{FZ},J)$ & \multicolumn{4}{c}{Bethe root configuration} & central charge(s) \tabularnewline & $S_{\gamma}$ & $D_{\gamma}$ & $d_{\gamma}/L$ & $v^{(F)}$ & \tabularnewline \hline \hline $(3, 1, +1)$ & $S_{(1,+)}$ & $-1$ & $3/2$ & $3/2$ & $1$ \tabularnewline & $S_{(1,-)}$ & $-1$ & $1/2$ & $3/2$ \tabularnewline \hline %
$(3, 1, -1)$ & $S_{(2,+)}$ & $+1$ & $1$ & $3$ & $4/5$ \tabularnewline \hline\hline $(5, 1, +1)$ & $S_{(1,+)}$ & $-1$ & $5/2$ & $5/4$ & $1$ \tabularnewline & $S_{(1,-)}$ & $-1$ & $3/2$ & $5/4$ \tabularnewline \hline %
$(5, 1, -1)$ & $S_{(4,-)}$ & $+1$ & $1$ & $5$ & $8/7$ \tabularnewline \hline %
$(5, 3, +1)$ & $S_{(2,+)}$ & $-1$ & $3/2$ & $5/2$ & $3/2$ \tabularnewline & $S_{(2,-)}$ & $-1$ & $1/2$ & $5/2$ \tabularnewline \hline %
$(5, 3, -1)$ & $S_{(3,+)}$ & $+1$ & $1$ & $5$ & $1/2$ \tabularnewline & $S_{(1,m)}$ & $+2$ & $1/2$ & $5/3$ & $1$ \tabularnewline \hline\hline $(7, 1, +1)$ & $S_{(1,+)}$ & $-1$ & $7/2$ & $7/6$ & $1$ \tabularnewline & $S_{(1,-)}$ & $-1$ & $5/2$ & $7/6$ \tabularnewline \hline %
$(7, 1, -1)$ & $S_{(6,+)}$ & $+1$ & $1$ & $7$ & $4/3$ \tabularnewline \hline %
$(7, 3, +1)$ & $S_{(5,-)}$ & $-1$ & $1$ & $7$ & $4/5$ \tabularnewline & $S_{(1,+)}$ & $-1$ & $1/2$ & $7/4$ & $1$ \tabularnewline & $S_{(1,-)}$ & $-1$ & $1/2$ & $7/4$ \tabularnewline \hline %
$(7, 3, -1)$ & $S_{(2,+)}$ & $+1$ & $2$ & $7/3$ & $3/2$ \tabularnewline & $S_{(2,-)}$ &
$+1$ & $1$ & $7/3$ \tabularnewline \hline %
$(7, 5, +1)$ & $S_{(3,+)}$ & $-1$ & $3/2$ & $7/2$ & $9/5$ \tabularnewline & $S_{(3,-)}$ & $-1$ & $1/2$ & $7/2$ \tabularnewline \hline %
$(7, 5, -1)$ & $S_{(4,+)}$ & $+1$ & $1$ & $7$ & $1/2$ \tabularnewline & $S_{(1,m)}$ & $+2$ & $1$ & $7/5$ & $1$ \end{tabular} \end{ruledtabular} \caption{\label{table:thermo357} Bethe root configurations for the ground states of the integrable models with $n=3$, $5$, and $7$. Also shown are the Fermi velocities $v^{(F)}$ of the low-lying excitations and central charges of the corresponding continuum theory.} \end{table} \begin{table}[ht] \begin{ruledtabular} \begin{tabular}{ccccc} $(n,\ell_{FZ},J)$ & \multicolumn{4}{c}{Bethe root configuration} \\ &$S_{\gamma}$ & $D_{\gamma}$ & $d_{\gamma}/L$ & $v^{(F)}$\\\hline $(9, 1, +1)$ & $S_{(1,+)}$ & $-1$ & $9/2$ & $9/8$ \tabularnewline & $S_{(1,-)}$ & $-1$ & $7/2$ & $9/8$ \\\hline %
$(9, 1, -1)$ & $S_{(8,-)}$ & $+1$ & $1$ & $9$\\\hline $(9, 5, +1)$ & $S_{(2,+)}$ & $-1$ & $5/2$ & $9/4$ \tabularnewline & $S_{(2,-)}$ & $-1$ & $3/2$ & $9/4$ \tabularnewline \hline %
$(9, 5, -1)$ & $S_{(7,-)}$ & $+1$ & $1$ & $9$ \tabularnewline & $S_{(1,m)}$ & $+2$ & $1/2$ & $9/5$ \\\hline $(9, 7, +1)$ & $S_{(4,+)}$ & $-1$ & $3/2$ & $9/2$ \tabularnewline & $S_{(4,-)}$ & $-1$ & $1/2$ & $9/2$ \tabularnewline \hline %
$(9, 7, -1)$ & $S_{(5,+)}$ & $+1$ & $1$ & $9$ \tabularnewline & $S_{(1,m)}$ & $+2$ & $3/2$ & $9/7$ \end{tabular} \end{ruledtabular} \caption{\label{table:thermo9}Bethe root configurations for the ground states of some integrable models with $n=9$.
Also shown are the Fermi velocities $v^{(F)}$ of the low-lying excitations.} \end{table} \begin{table}[ht] \begin{ruledtabular} \begin{tabular}{ccccc} $(n,\ell_{FZ},J)$ & \multicolumn{4}{c}{Bethe root configuration} \\ &$S_{\gamma}$ & $D_{\gamma}$ & $d_{\gamma}/L$ & $v^{(F)}$ \\\hline\hline %
$(11, 1, +1)$ & $S_{(1,+)}$ & $-1$ & $11/2$ & $11/10$ \tabularnewline & $S_{(1,-)}$ & $-1$ & $9/2$ & $11/10$ \tabularnewline \hline %
$(11, 1, -1)$ & $S_{(10,+)}$ & $+1$ & $1$ & $11$ \tabularnewline \hline %
$(11, 3, +1)$ & $S_{(4,+)}$ & $-1$ & $1/2$ & $11/2$ \tabularnewline & $S_{(4,-)}$ & $-1$ & $3/2$ & $11/2$ \tabularnewline & $S_{(1,+)}$ & $-1$ & $1$ & $11/8$ \tabularnewline & $S_{(1,-)}$ & $-1$ & $1$ & $11/8$ \tabularnewline \hline %
$(11, 5, +1)$ & $S_{(9,+)}$ & $-1$ & $1$ & $11$ \tabularnewline & $S_{(1,+)}$ & $-1$ & $1/2$ & $11/6$ \tabularnewline & $S_{(1,-)}$ & $-1$ & $1/2$ & $11/6$ \tabularnewline \hline %
$(11, 5, -1)$ & $S_{(2,+)}$ & $+1$ & $3$ & $11/5$ \tabularnewline & $S_{(2,-)}$ & $+1$ & $2$ & $11/5$ \tabularnewline \hline %
$(11, 7, -1)$ & $S_{(3,+)}$ & $+1$ & $2$ & $11/3$ \tabularnewline & $S_{(3,-)}$ & $+1$ & $1$ & $11/3$ \tabularnewline & $S_{(1,m)}$ & $+2$ & $1/2$ & $11/7$ \tabularnewline \hline %
$(11, 9, +1)$ & $S_{(5,+)}$ & $-1$ & $3/2$ & $11/2$ \tabularnewline & $S_{(5,-)}$ & $-1$ & $1/2$ & $11/2$ \tabularnewline \hline %
$(11, 9, -1)$ & $S_{(6,+)}$ & $+1$ & $1$ & $11$ \tabularnewline & $S_{(1,m)}$ & $+2$ & $2$ & $11/9$ \\\hline\hline %
$(13, 1, +1)$ & $S_{(1,+)}$ & $-1$ & $13/2$ & $13/12$ \tabularnewline & $S_{(1,-)}$ & $-1$ & $11/2$ & $13/12$ \tabularnewline \hline %
$(13, 1, -1)$ & $S_{(12,-)}$ & $+1$ & $1$ & $13$ \tabularnewline \hline %
$(13, 3, -1)$ & $S_{(4,+)}$ & $+1$ & $1$ & $13/3$ \tabularnewline & $S_{(4,-)}$ & $+1$ & $2$ & $13/3$ \tabularnewline \hline %
$(13, 7, +1)$ & $S_{(2,+)}$ & $-1$ & $7/2$ & $13/6$ \tabularnewline & $S_{(2,-)}$ & $-1$ & $5/2$ & $13/6$ \tabularnewline \hline %
$(13, 9, +1)$ & $S_{(3,+)}$ & $-1$ & $5/2$ & $13/4$
\tabularnewline & $S_{(3,-)}$ & $-1$ & $3/2$ & $13/4$ \tabularnewline \hline %
$(13, 11, +1)$ & $S_{(6,+)}$ & $-1$ & $3/2$ & $13/2$ \tabularnewline & $S_{(6,-)}$ & $-1$ & $1/2$ & $13/2$ \end{tabular} \end{ruledtabular} \caption{\label{table:thermo11}Bethe root configurations for the ground states of some integrable models with $n=11$ and $13$. Also shown are the Fermi velocities $v^{(F)}$ of the low-lying excitations.} \end{table} \section{Rational CFTs with extended symmetries} \label{app:rCFTs} In the previous analysis \cite{FiFF14} of the integrable points of the $so(5)_2$ anyon chain $(n,\ell_{FZ},J)=(5,1,J)$ the continuum limit of the integrable chains has been found to be described by rational CFTs with extended chiral symmetry algebras (see \cite{BoSc95} and references therein) respecting the five-fold discrete symmetries of the anyon lattice model. Based on this observation, Casimir-type $\mathcal{W}$-algebras associated with the Lie-algebras $SO(n)=B_{(n-1)/2}$, $SO(2n)=D_n$, and the super Lie-algebra $OSp(1|n-1) = \mathcal{B}_{0,(n-1)/2}$ containing one fermionic generator with half-integer spin, are possible candidates for the low energy effective description of the models considered in this paper. Similarly, the scaling limit of the ferromagnetic, i.e.\ $J=-1$, FZ $n$-state clock models is known to be a $\mathbf{Z}_n$-invariant conformal field theory with parafermion currents \cite{ZaFa85,JiMO86}. As rational CFTs, $\mathbf{Z}_n$ parafermions possess an extension of the Virasoro algebra to a $\mathcal{W}A_{n-1}$-algebra. Below we list the central charges and conformal spectra of some field theories from the minimal series of these chiral symmetry algebras appearing in the low energy description of the $n$-state clock models and $so(n)_2$ anyon chains considered in this paper. \subsection{Parafermions} The $\mathbf{Z}_k$ parafermion CFT has central charge $c_k = 2(k-1)/(k+2)$.
The conformal spectrum is known to be the set \begin{align*} h_{\ell,m} &= \frac{1}{2}\frac{\ell(k-\ell)}{k(k+2)} + \frac{(\ell+m)(\ell-m)}{4k}\,, \ \ \ \ 1\leq \ell\leq k\,, \ \ -\ell\leq m\leq \ell\,, \ \ \textrm{and} \ \ \ell+m\equiv 0\ \mathrm{mod}\ 2\,, \end{align*} of conformal weights \cite{ZaFa85,GeQi87}. Therefore, we get \begin{align} \label{cft_specZ3} k=3: \quad & c = \frac45\,,\quad h \in \left\{0,\frac{1}{15},\frac25,\frac23\right\}\,,\\ \label{cft_specZ4} k=4: \quad & c = 1\,,\quad h \in \left\{0,\frac{1}{16},\frac1{12},\frac13, \frac9{16},\frac34,1\right\}\,,\\ \label{cft_specZ5} k=5: \quad & c=\frac87\,,\quad h \in \left\{0,\frac{2}{35},\frac{3}{35},\frac27, \frac{17}{35},\frac{23}{35},\frac45,\frac67,\frac65 \right\}\,,\\ \label{cft_specZ7} k=7: \quad &c=\frac43\,,\quad h\in\left\{0, \frac{1}{21}, \frac{5}{63}, \frac{2}{21}, \frac{2}{9}, \frac{8}{21}, \frac{11}{21}, \frac{41}{63}, \frac23, \frac{16}{21}, \frac67, \frac{59}{63}, \frac{25}{21}, \frac43, \frac{10}{7}, \frac{12}{7} \right\}\,. \end{align} As mentioned above, the $\mathbf{Z}_n$ parafermions appear in the minimal models of the $A$-series as $\mathcal{W}A_{n-1}(n+1,n+2)$. Note that the operator content of the $\mathbf{Z}_3$ parafermion theory is a closed subset (under the fusion rules) of the scaling fields in the Virasoro minimal model $\mathcal{M}_{(5,6)}$. The complete spectrum of this minimal model (the critical three-state Potts model) is \begin{equation} \label{cft_specPotts3} h\in\left\{0,\frac{1}{40}, \frac{1}{15}, \frac18, \frac25, \frac{21}{40}, \frac23, \frac75, \frac{13}{8},3 \right\}\,. \end{equation} However, the presence of a representation with conformal weight $h=3$ and the existence of a non-diagonal partition function for this minimal model indicate that its chiral symmetry algebra can be extended.
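The weight formula above is straightforward to evaluate with exact rational arithmetic; the following sketch (ours, not part of the original analysis) reproduces the spectra (\ref{cft_specZ3})--(\ref{cft_specZ7}).

```python
from fractions import Fraction

def parafermion_weights(k):
    """Distinct conformal weights h_{l,m} of the Z_k parafermion CFT,
    evaluated from the formula in the text with exact rationals."""
    hs = set()
    for l in range(1, k + 1):
        for m in range(-l, l + 1):
            if (l + m) % 2 == 0:  # selection rule l + m = 0 mod 2
                hs.add(Fraction(l * (k - l), 2 * k * (k + 2))
                       + Fraction((l + m) * (l - m), 4 * k))
    return hs
```

For $k=3$ this returns $\{0,\frac{1}{15},\frac25,\frac23\}$, and the $k=4$, $5$, $7$ spectra come out likewise; the accompanying central charges $2(k-1)/(k+2)$ match the values quoted in Table~\ref{table:thermo357} for the ferromagnetic points $(n,1,-1)$.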
In fact, considering $\mathcal{W}A_{2}(4,5)$ instead, which has a chiral symmetry algebra generated by the stress-energy tensor and a local chiral field of conformal weight $h=3$, we see that all representations of the minimal model with even Kac-labels drop out, so we are left with the pure $\mathbb{Z}_3$ parafermion spectrum. $\mathbf{Z}_4$ parafermions also appear, due to $\hat{A}_3\cong\hat{D}_3$, in the $D$-series as $\mathcal{W}D_3(5,6)$ and are realized by the $\mathbf{Z}_2$-orbifold of a $U(1)$ boson with compactification radius $2R^2=3$ \cite{Ginsparg88,DVVV89}. \subsection{Minimal models for $B_\ell=SO(2\ell+1)$} The $\mathcal{W}B_2(5,7)$ CFT has central charge $c=\frac87$ and conformal weights \cite{FiFF14} \begin{equation} \label{cft_specWB2-57} h\in\left\{0,\frac{1}{28},{\frac {2}{35}},{\frac {3}{35}},{\frac {3}{28}},\frac14,\frac27,{ \frac {17}{35}},{\frac {15}{28}},{\frac {17}{28}},{\frac {23}{35}},\frac45 ,\frac67,\frac65,{\frac {9}{7}},\frac74,{\frac {13}{7}},3\right\}\,. \end{equation} Note that the spectrum (\ref{cft_specZ5}) of the $\mathbf{Z}_5$ parafermion CFT is a subset of (\ref{cft_specWB2-57}). The $\mathcal{W}B_3(7,9)$ CFT has central charge $c=\frac43$ and conformal weights \cite{FiFF14} \begin{equation} \label{cft_specWB3-79} \begin{aligned} h\in&\left\{ 0,\frac{1}{24},\frac{1}{21},{\frac {5}{63}},\frac{2}{21},{\frac {7}{72}}, {\frac {5}{24}}, \frac29, \frac38,{\frac {8}{21}}, {\frac {11}{21}},{\frac {13}{24}},{\frac {43}{72}}, \right.\\ & \quad\left.{\frac {41}{63}},\frac23,{\frac {17}{24}},{\frac {16}{21}}, \frac67,{\frac {59 }{63}},{\frac {25}{21}},{\frac {11}{9}}, \frac43,{\frac {10}{7}}, \frac53,{ \frac {12}{7}},{\frac {15}{8}},\frac73,3\right\}\,. \end{aligned} \end{equation} Again, the spectrum (\ref{cft_specZ7}) of the $\mathbf{Z}_7$ parafermion CFT is contained in (\ref{cft_specWB3-79}). \subsection{Minimal models for $D_\ell=SO(2\ell)$} The $\mathcal{W}D_n(2n-1,2n)$ CFTs have central charge $c=1$. 
For $n=3$ the spectrum of conformal weights of this theory coincides with that of $\mathbf{Z}_4$ parafermions (\ref{cft_specZ4}) as discussed above. The spectra for $n=5$, $7$, and some multiples thereof are \begin{align} \label{cft_specWD5-9-10} n=5: \quad &h\in\left\{0,\frac{1}{20},\frac{1}{16}, \frac15,{\frac {9}{20}},{\frac {9}{16}},\frac45,1,\frac54\right\}\,,\\ \label{cft_specWD7-1314} n=7: \quad &h\in\left\{0, \frac{1}{28}, \frac{1}{16}, \frac{1}{7}, \frac{9}{28}, \frac{9}{16}, \frac{4}{7}, \frac{25}{28}, 1, \frac97, \frac74 \right\}\,,\\ \label{cft_specWD10} n=10: \quad &h\in\left\{0, \frac{1}{40}, \frac{1}{16}, \frac{1}{10}, {\frac {9}{40}},\frac25,{\frac {9}{16}},\frac58,{\frac {9}{ 10}},1,{\frac {49}{40}},\frac85,{\frac {81}{40}},\frac52 \right\}\,,\\ \label{cft_specWD14} n=14: \quad &h\in\left\{0, {\frac {1}{56}},\frac{1}{16},\frac{1}{14},{\frac {9}{56}},\frac27,{\frac {25}{56}},{ \frac {9}{16}},{\frac {9}{14}},{\frac {7}{8}},1,{\frac {8}{7}},{\frac {81}{56}},{\frac {25}{14}},{\frac {121}{56}},{\frac {18}{7}},{\frac { 169}{56}},\frac72 \right\}\,,\\ \label{cft_specWD21} n=21:\quad & h\in\left\{0,{\frac {1}{84}},\frac{1}{21},\frac{1}{16},{\frac {3}{28}},{\frac {4}{21}},{\frac{25}{84}},\frac37,{\frac {9}{16}},{\frac {7}{12}},{\frac {16}{21}},{\frac{27}{28}},1, \right.\\ &\nonumber\qquad\quad \left. {\frac {25}{21}},{\frac {121}{84}},{\frac {12}{7}},{\frac{169}{84}},\frac73,{\frac {75}{28}},{\frac {64}{21}},{\frac {289}{84}},{\frac{27}{7}},{\frac {361}{84}},{\frac {100}{21}},{\frac {21}{4}} \right\}\,. \end{align} We note that the spectra of these rational CFTs coincide with those of $\mathbf{Z}_2$-orbifolds of Gaussian models with compactification radii $2R^2=n$ indicating that the fields in the $\mathcal{W}D_n$ symmetry algebra are not independent. 
The orbifold models contain $n+7$ primary fields \cite{Ginsparg88,DVVV89}: \begin{enumerate} \item the identity with conformal weight $h_\mathbf{1} = 0$, \item a marginal field $\Theta$ with conformal weight $h_\Theta= 1$, \item two ``degenerate'' fields $\Phi^{1,2}$ with conformal weight $h_\Phi = \frac{n}{4}$, \item the twist fields $\sigma_{1,2}$ and $\tau_{1,2}$, with conformal weights $h_\sigma = \frac{1}{16}$ and $h_\tau = \frac{9}{16}$, and \item $(n-1)$ fields $\phi_k$, with $k = 1,2,\ldots,n-1$, with conformal weights $h_k = k^2/4n$. \end{enumerate} It is a well known fact that chiral symmetry algebras, which exist for generic values of the central charge, may collapse to smaller ones for certain values of the central charge. The best known example is the collapse of the $\mathcal{W}A_n$-algebra to the $\mathcal{W}A_2$-algebra at $c=1$. This phenomenon is called unifying $\mathcal{W}$-algebras \cite{BEHHH94}. A similar phenomenon happens with the $\mathcal{W}D_n(2n-1,2n)$-algebras at $c=1$, which all collapse down to mere $\mathcal{W}(2,4,n)$-algebras generated by the Virasoro stress-energy tensor and two further chiral primary fields of conformal dimensions $h=4$ and $h=n$, respectively. The reason is that, for special values of the central charge, several chiral primary fields become algebraically dependent due to the existence of additional null fields in the vacuum representation. Moreover, the $\mathcal{W}(2,4,n)$-algebras are in fact the maximally extended chiral symmetry algebras for $\mathbf{Z}_2$ orbifold Gaussian models with compactification radii $2R^2 = n$. It is not surprising that $\mathcal{W}$-algebras, which admit a rational CFT with finite representation content at central charge $c=1$, must shrink. The reason is that all rational CFTs at $c=1$ are known and their partition functions are classified \cite{Kiritsis88}.
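Using the orbifold field content listed above, the spectra of distinct weights are easy to tabulate; the following sketch (ours, not part of the original analysis) reproduces, e.g., the $\mathcal{W}D_5(9,10)$ and $\mathcal{W}D_{10}(19,20)$ weight lists (\ref{cft_specWD5-9-10}) and (\ref{cft_specWD10}) quoted earlier.

```python
from fractions import Fraction

def orbifold_weights(n):
    """Distinct primary weights of the Z2-orbifolded boson at 2R^2 = n."""
    hs = {Fraction(0),          # identity
          Fraction(1),          # marginal field Theta
          Fraction(n, 4),       # the two degenerate fields Phi^{1,2}
          Fraction(1, 16),      # twist fields sigma_{1,2}
          Fraction(9, 16)}      # twist fields tau_{1,2}
    hs.update(Fraction(k * k, 4 * n) for k in range(1, n))  # the fields phi_k
    return hs
```

Counting the degenerate and twist fields with their multiplicities gives the $n+7$ primaries above; as sets of distinct weights, `orbifold_weights(5)` and `orbifold_weights(10)` coincide with the corresponding $\mathcal{W}D_n$ spectra, consistent with the collapse to $\mathcal{W}(2,4,n)$.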
\subsection{Minimal models for $\mathcal{B}_{0,\ell}=OSp(1|2\ell)$} \label{app:rCFTs-WBf} For non-simply-laced Lie-algebras alternative constructions of extended chiral symmetries are possible if one allows for generators with half-integer spin. In particular \cite{LuFa90}, one can construct an alternative $\mathcal{W}$-algebra for the $B_\ell$ series, i.e.\ for $SO(2\ell + 1)$, containing precisely one fermionic generator. In general, these algebras can be realized from the Lie-superalgebras $\mathcal{B}_{0,\ell} = OSp(1|2\ell)$. Therefore we denote them as $\mathcal{WB}_{0}$-algebras below. However, we note that these $\mathcal{W}$-algebras are not super-$\mathcal{W}$-algebras. In the corresponding minimal series the $\mathcal{WB}_{0,n}(2n,2n+1)$ rational CFTs have central charge $c=1$. The spectrum of conformal weights is \begin{equation} \label{cft_specWBf2-45} \begin{aligned} h\in &\left\{0,\frac{1}{40},\frac{1}{16},\frac{1}{10}, \frac{9}{40},\frac25,\frac{9}{16},\frac{5}{8},1\right\}\, \end{aligned} \end{equation} for $n=2$, and \begin{equation} \label{cft_specWBf03-67} h \in \left\{0,\frac{1}{56},\frac{1}{16},\frac{1}{14},\frac{9}{56},\frac27, \frac{25}{56},\frac{9}{16},\frac{9}{14},\frac78, 1\right\}\, \end{equation} for $n=3$. We note that these spectra are subsets of those for the $\mathcal{W}D_{10}(19,20)$ (\ref{cft_specWD10}) and $\mathcal{W}D_{14}(27,28)$ (\ref{cft_specWD14}), respectively. Similarly, the spectra of the $\mathcal{WB}_{0,n}(2n,2n+1)$ rational CFTs for general $n>2$ are contained in those of the $\mathcal{W}D_{4n+2}(8n+3,8n+4)$ models. The representation with conformal weight $h=\frac{2n+1}{2}$ in the latter is part of the symmetry algebra in the $\mathcal{WB}_{0,n}$ case. 
This may be an indication that the additional conformal weights in the $\mathcal{WD}_{4n+2}$ series are part of larger representations under the symmetry algebra of the $\mathcal{WB}_{0,n}$ series, as discussed in section IV.B with the example of the $\mathcal{WB}_{0,2}(4,5)$ model versus the $\mathcal{W}D_{10}(19,20)$ model. The $\mathcal{W}D_{4n+2}(8n+3,8n+4)$ models presumably admit non-diagonal partition functions in which characters of two representations, whose highest weights differ by a half-integer or an integer, are combined. In fact, the representations allowed in the $\mathcal{W}D_{4n+2}(8n+3,8n+4)$ models, but not contained in the spectra of the $\mathcal{WB}_{0,n}(2n,2n+1)$ rational CFTs, can all be seen to differ by a half-integer or an integer from one of the common ones. This is easily seen in the above-mentioned example, (\ref{cft_specWBf2-45}) and (\ref{cft_specWD10}). We find that the spectra consist of the orbifold weights \begin{equation} h\in\left\{ \frac{1}{16},\frac{9}{16},1\right\} \end{equation} common to all CFTs of these series, the weights \begin{equation} h\in\left\{ 0,\frac{1}{40},\frac{1}{10},\frac{9}{40},\frac{2}{5},\frac{5}{8} \right\} \end{equation} common to both models, and finally the weights \begin{equation} h\in\left\{ \frac{9}{10}=\frac{2}{5}+\frac{1}{2}, \frac{49}{40}=\frac{9}{40}+\frac{2}{2}, \frac{8}{5}=\frac{1}{10}+\frac{3}{2}, \frac{81}{40}=\frac{1}{40}+\frac{4}{2}, \frac{5}{2}=0+\frac{5}{2} \right\}\,, \end{equation} which only appear in the $\mathcal{W}D_{10}(19,20)$ model, and differ by half-integers or integers from the common weights. The last weight, $h=\frac{5}{2}$, corresponds to a representation which is shifted by a half-integer above the vacuum representation $h=0$, and corresponds to a local chiral field which can be added to the chiral symmetry algebra. Indeed, the chiral symmetry algebra of the $\mathcal{WB}_{0,2}(4,5)$ model does feature a generator of weight $h=\frac{5}{2}$.
Analogous results hold for the spectra of the pairs $\mathcal{W}D_{4n+2}(8n+3,8n+4)$ and $\mathcal{WB}_{0,n}(2n,2n+1)$ for all $n$. Among the representations in the common subset, only the one with weight $h=\frac{2n+1}{8}$ does not have a corresponding representation in the shifted part of the $\mathcal{W}D_{4n+2}(8n+3,8n+4)$ spectrum. A further consequence of the existence of a generator of half-integer conformal weight is a two-fold degeneracy of all representations with $h\neq 0$. Let us explain this briefly: The $\mathcal{WB}_{0,n}$ model has a symmetry algebra $\mathcal{W}(2,4,\ldots,\frac{2n+1}{2})$. Let us denote the generator of conformal weight 4 as $W$ and the generator of half-integer weight $\frac{2n+1}{2}$ as $Q$. The operator product expansion of the product $QQ$ will contain the field $W$. The corresponding quantum number $w$ for the zero mode $W_0$ of $W$ must satisfy a quadratic constraint to ensure associativity of the whole symmetry algebra. This leads to a relation $w=\pm h\,f(h)$ with an algebraic function $f$. Hence, for all $h\neq 0$, there are precisely two possible $w$-values. For the smallest case $n=2$, which we encountered in section IV.B, we can compute the function $f(h)$ explicitly, but this gets increasingly difficult for larger $n$. We finally note that the associativity of the operator algebra for the $\mathcal{W}D_{4n+2}(8n+3,8n+4)$ does not yield obvious constraints of this type on the eigenvalues of the zero modes of the generators. \end{appendix} \newpage