| Package | NuGet |
|---|---|
| FluentDocker | |
| Microsoft Test | |
| XUnit Test | |
This library enables docker and docker-compose interactions using a Fluent API. It is supported on Linux, Windows, and Mac. It also has support for the legacy docker-machine interactions.
Sample Fluent API usage
```csharp
using (var container = new Builder().UseContainer()
                        .UseImage("kiasaki/alpine-postgres")
                        .ExposePort(5432)
                        .WithEnvironment("POSTGRES_PASSWORD=mysecretpassword")
                        .WaitForPort("5432/tcp", 30000 /*30s*/)
                        .Build()
                        .Start())
{
  var config = container.GetConfiguration(true);
  Assert.AreEqual(ServiceRunningState.Running, config.State.ToServiceState());
}
```
This fires up a postgres container and waits for it to be ready. To use compose, just do it like this:
:bulb: NOTE: Use AssumeComposeVersion(ComposeVersion.V2) to get the V2 behaviour; the default is still V1 (to be changed to default to V2 later this year).
```csharp
var file = Path.Combine(Directory.GetCurrentDirectory(),
  (TemplateString) "Resources/ComposeTests/WordPress/docker-compose.yml");

// @formatter:off
using (var svc = new Builder()
                  .UseContainer()
                  .UseCompose()
                  .FromFile(file)
                  .RemoveOrphans()
                  .WaitForHttp("wordpress", "http://localhost:8000/wp-admin/install.php")
                  .Build().Start())
  // @formatter:on
{
  // We now have a running WordPress with a MySql database
  var installPage = await "http://localhost:8000/wp-admin/install.php".Wget();

  Assert.IsTrue(installPage.IndexOf("https://wordpress.org/", StringComparison.Ordinal) != -1);
  Assert.AreEqual(1, svc.Hosts.Count);      // The host used by compose
  Assert.AreEqual(2, svc.Containers.Count); // We can access each individual container
  Assert.AreEqual(2, svc.Images.Count);     // And the images used.
}
```
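If you want to opt into the V2 behaviour mentioned in the note above, here is a minimal sketch of where AssumeComposeVersion fits into the builder chain, reusing the same compose file and wait condition as the example above:

```csharp
// Sketch: same compose setup as above, but explicitly assuming compose V2 behaviour.
using (var svc = new Builder()
                  .UseContainer()
                  .UseCompose()
                  .AssumeComposeVersion(ComposeVersion.V2)
                  .FromFile(file)
                  .RemoveOrphans()
                  .WaitForHttp("wordpress", "http://localhost:8000/wp-admin/install.php")
                  .Build().Start())
{
  // ... use svc.Hosts, svc.Containers, svc.Images as in the example above ...
}
```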
:bulb: Note for Linux users: Docker requires sudo by default, and the library by default expects that the executing user does not need sudo in order to talk to the docker daemon. More details can be found in the Talking to Docker Daemon chapter.
The fluent API builds up one or more services. Each service may be composite or singular. Therefore it is possible, for example, to fire up several docker-compose based services and manage each of them as a single service, or to dig in and use all the underlying services of each docker-compose service. It is also possible to use services directly, e.g.:
```csharp
var file = Path.Combine(Directory.GetCurrentDirectory(),
  (TemplateString) "Resources/ComposeTests/WordPress/docker-compose.yml");

using (var svc = new DockerComposeCompositeService(DockerHost, new DockerComposeConfig
{
  ComposeFilePath = new List<string> { file },
  ForceRecreate = true,
  RemoveOrphans = true,
  StopOnDispose = true
}))
{
  svc.Start();

  // We now have a running WordPress with a MySql database
  var installPage = await $"http://localhost:8000/wp-admin/install.php".Wget();
  Assert.IsTrue(installPage.IndexOf("https://wordpress.org/", StringComparison.Ordinal) != -1);
}
```
The above example creates a docker-compose service from a single compose file. When the service is disposed, all underlying services are automatically stopped.
The library supports the full .NET Framework 4.51 and higher, as well as .NET Standard 1.6 and 2.0. It is divided into three thin layers, each of which is accessible:
The majority of the service methods are extension methods and not hardwired into the service itself, making them lightweight and customizable. Since everything is accessible, it is easy to, for example, add an extension method for a service that uses the layer 1 commands to provide functionality, as in the hypothetical sketch below.
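As an illustration only, a custom extension method could wrap the layer 1 Version command shown later in this README; the method name and namespaces here are assumptions, not part of the library's documented surface:

```csharp
using Ductus.FluentDocker.Commands;   // layer 1 commands (assumed namespace)
using Ductus.FluentDocker.Services;   // IHostService (assumed namespace)

public static class HostServiceExtensions
{
  // Hypothetical helper: uses the Host/Certificates pair exposed by the host service
  // to invoke the layer 1 Version command and return its data as a string.
  public static string ServerVersionString(this IHostService host)
  {
    var response = host.Host.Version(host.Certificates);
    return response.Success ? response.Data.ToString() : null;
  }
}
```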
I do welcome contributions. There is no contribution guideline as of yet, but make sure to adhere to the .editorconfig when doing pull requests; otherwise the build will fail. I'll add a real guideline later this year.
All commands need a DockerUri to work with. It is the Uri of the docker daemon, either local or remote. It can be discovered or hardcoded. Discovery of the local DockerUri can be done by:
```csharp
var hosts = new Hosts().Discover();
var _docker = hosts.FirstOrDefault(x => x.IsNative) ?? hosts.FirstOrDefault(x => x.Name == "default");
```
The example snippet checks for native or docker beta "native" hosts and, if none is found, chooses the docker-machine "default" as host. If you're using docker-machine and no machine exists or it is not started, it is easy to create/start one, e.g. "test-machine".Create(1024, 20000000, 1). This creates a docker machine named "test-machine" with 1 GB of RAM, 20 GB of disk, and one CPU, as sketched below.
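Putting discovery and machine creation together, a hedged sketch of falling back to creating the machine when none is found (the sizes are the same illustrative values as above):

```csharp
var hosts = new Hosts().Discover();
var docker = hosts.FirstOrDefault(x => x.IsNative) ?? hosts.FirstOrDefault(x => x.Name == "default");

if (null == docker)
{
  // No native daemon and no "default" docker-machine: create one with
  // 1 GB RAM, 20 GB disk and a single CPU, then discover again to pick it up.
  "test-machine".Create(1024, 20000000, 1);
  docker = new Hosts().Discover().FirstOrDefault(x => x.Name == "test-machine");
}
```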
It is now possible to use the Uri to communicate using the commands. For example, to get the versions of the client and server docker binaries:
```csharp
var result = _docker.Host.Version(_docker.Certificates);
Debug.WriteLine(result.Data); // Will print the client and server versions and API versions respectively.
```
All commands return a CommandResponse<T>, so success can be checked via response.Success. Any data associated with the command is returned in the response.Data property.
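For example, a minimal sketch of checking the response of the Version command above before using its data (the Error property name is an assumption about the response type):

```csharp
var version = _docker.Host.Version(_docker.Certificates);

if (!version.Success)
{
  Debug.WriteLine("Version command failed: " + version.Error); // Error is assumed here
  return;
}

Debug.WriteLine(version.Data);
```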
It is then simple to start, stop, and delete a container using the commands. The example below starts a container, does a ps on it, and then deletes it.
```csharp
var id = _docker.Host.Run("nginx:latest", null, _docker.Certificates).Data;
var ps = _docker.Host.Ps(null, _docker.Certificates).Data;

_docker.Host.RemoveContainer(id, true, true, null, _docker.Certificates);
```
When running on Windows, one can choose to run Linux or Windows containers. Use LinuxDaemon or WindowsDaemon to control which daemon to talk to.
```csharp
_docker.LinuxDaemon(); // ensures that it talks to the linux daemon; if on the windows daemon, it will switch
```
Some commands return a stream of data, e.g. when events or logs are wanted as a continuous stream. Streams can be used in background tasks and support CancellationToken. The example below tails a log.
```csharp
using (var logs = _docker.Host.Logs(id, _docker.Certificates))
{
  while (!logs.IsFinished)
  {
    var line = logs.TryRead(5000); // Do a read with timeout
    if (null == line)
    {
      break;
    }

    Debug.WriteLine(line);
  }
}
```
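A hedged sketch of running the same tail loop on a background task and stopping it via a CancellationToken (the token is checked between reads rather than passed to the command; System.Threading and System.Threading.Tasks are assumed to be in scope):

```csharp
var cts = new CancellationTokenSource();

var tail = Task.Run(() =>
{
  using (var logs = _docker.Host.Logs(id, _docker.Certificates))
  {
    while (!logs.IsFinished && !cts.Token.IsCancellationRequested)
    {
      var line = logs.TryRead(5000); // read with timeout
      if (null == line)
      {
        break;
      }

      Debug.WriteLine(line);
    }
  }
}, cts.Token);

// ... later, request the tail to stop and wait for the task to complete ...
cts.Cancel();
await tail;
```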
Utility methods exist for the commands. They come in different flavours, such as networking etc. For example, reading a log to the end:
```csharp
using (var logs = _docker.Host.Logs(id, _docker.Certificates))
{
  foreach (var line in logs.ReadToEnd())
  {
    Debug.WriteLine(line);
  }
}
```
The highest layer of this library is the fluent API, where you can define and control machines, images, and containers. For example, setting up a load balancer with two node.js servers reading from a redis server can look like this (the node image is custom built if not found in the repository):
```csharp
var fullPath = (TemplateString) @"${TEMP}/fluentdockertest/${RND}";
var nginx = Path.Combine(fullPath, "nginx.conf");

Directory.CreateDirectory(fullPath);
typeof(NsResolver).ResourceExtract(fullPath, "index.js");

using (var services = new Builder()
  // Define custom node image to be used
  .DefineImage("mariotoffia/nodetest").ReuseIfAlreadyExists()
  .From("ubuntu")
  .Maintainer("Mario Toffia <mario.toffia@xyz.com>")
  .Run("apt-get update &&",
       "apt-get -y install curl &&",
       "curl -sL https://deb.nodesource.com/setup | sudo bash - &&",
       "apt-get -y install python build-essential nodejs")
  .Run("npm install -g nodemon")
  .Add("emb:Ductus.FluentDockerTest/Ductus.FluentDockerTest.MultiContainerTestFiles/package.txt",
       "/tmp/package.json")
  .Run("cd /tmp && npm install")
  .Run("mkdir -p /src && cp -a /tmp/node_modules /src/")
  .UseWorkDir("/src")
  .Add("index.js", "/src")
  .ExposePorts(8080)
  .Command("nodemon", "/src/index.js").Builder()
  // Redis Db Backend
  .UseContainer().WithName("redis").UseImage("redis").Builder()
  // Node server 1 & 2
  .UseContainer().WithName("node1").UseImage("mariotoffia/nodetest").Link("redis").Builder()
  .UseContainer().WithName("node2").UseImage("mariotoffia/nodetest").Link("redis").Builder()
  // Nginx as load balancer
  .UseContainer().WithName("nginx").UseImage("nginx").Link("node1", "node2")
  .CopyOnStart(nginx, "/etc/nginx/nginx.conf")
  .ExposePort(80).Builder()
  .Build().Start())
{
  Assert.AreEqual(4, services.Containers.Count);

  var ep = services.Containers.First(x => x.Name == "nginx").ToHostExposedEndpoint("80/tcp");
  Assert.IsNotNull(ep);

  var round1 = $"http://{ep.Address}:{ep.Port}".Wget();
  Assert.AreEqual("This page has been viewed 1 times!", round1);

  var round2 = $"http://{ep.Address}:{ep.Port}".Wget();
  Assert.AreEqual("This page has been viewed 2 times!", round2);
}
```
The above example defines and builds a Dockerfile for the node image. It then uses vanilla redis and nginx. If you just want to use an existing Dockerfile, it can be done like this:
```csharp
using (var services = new Builder()
  .DefineImage("mariotoffia/nodetest").ReuseIfAlreadyExists()
  .FromFile("/tmp/Dockerfile")
  .Build().Start())
{
  // The image is either built or reused if found in the registry, and started here.
}
```
The fluent API supports everything from defining a docker-machine to a set of docker instances. It has built-in support for e.g. waiting for a specific port or a process within the container before Build() completes, and thus can safely be used within a using statement. If specific management of wait timeouts etc. is needed, you can always build and start the container and use extension methods to do the waiting on the container itself.
To create a container, just omit the start. For example:
```csharp
using (var container = new Builder().UseContainer()
                        .UseImage("kiasaki/alpine-postgres")
                        .WithEnvironment("POSTGRES_PASSWORD=mysecretpassword")
                        .Build())
{
  Assert.AreEqual(ServiceRunningState.Stopped, container.State);
}
```
This example creates a container with postgres, configuring one environment variable. Within the using statement it is possible to start the IContainerService; each built container is wrapped in an IContainerService. It is also possible to use IHostService.GetContainers(...) to obtain the created, running, and exited containers. From the IHostService it is also possible to get all the images in the local repository to create containers from, as sketched below.
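A hedged sketch of listing containers and local images via the host service obtained from discovery (the exact GetContainers/GetImages signatures and the Name/State properties are assumptions based on the description above):

```csharp
var host = new Hosts().Discover().FirstOrDefault(x => x.IsNative);

// Created, running and exited containers known to the daemon.
foreach (var c in host.GetContainers())
{
  Debug.WriteLine(c.Name + " - " + c.State);
}

// Images in the local repository that containers can be created from.
foreach (var image in host.GetImages())
{
  Debug.WriteLine(image.Name);
}
```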
When you want to run a single container, use the fluent or container service start method. For example:
```csharp
using (var container = new Builder().UseContainer()
                        .UseImage("kiasaki/alpine-postgres")
                        .WithEnvironment("POSTGRES_PASSWORD=mysecretpassword")
                        .Build()
                        .Start())
{
  var config = container.GetConfiguration();

  Assert.AreEqual(ServiceRunningState.Running, container.State);
  Assert.IsTrue(config.Config.Env.Any(x => x == "POSTGRES_PASSWORD=mysecretpassword"));
}
```
By default the container is stopped and deleted when the Dispose method is run. In order to keep the container in the archive, use KeepContainer() on the fluent API; when Dispose() is invoked, it will then be stopped but not deleted. It is also possible to keep it running after dispose.
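A minimal sketch of keeping the container when the using block ends (it is stopped but not deleted on Dispose()):

```csharp
using (var container = new Builder().UseContainer()
                        .UseImage("kiasaki/alpine-postgres")
                        .WithEnvironment("POSTGRES_PASSWORD=mysecretpassword")
                        .KeepContainer() // stop, but do not delete, on Dispose()
                        .Build()
                        .Start())
{
  // work with the container; it remains (stopped) after the using block ends
}
```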
It is possible to expose ports either explicitly or randomly. Either way, it is possible to resolve the IP (in case of a machine) and the port (in case of a random port) to use in code. For example:
```csharp
using (var container = new Builder().UseContainer()
                        .UseImage("kiasaki/alpine-postgres")
                        .ExposePort(40001, 5432)
                        .WithEnvironment("POSTGRES_PASSWORD=mysecretpassword")
                        .Build()
                        .Start())
{
  var endpoint = container.ToHostExposedEndpoint("5432/tcp");
  Assert.AreEqual(40001, endpoint.Port);
}
```
Here we map the container port 5432 to host port 40001 explicitly. Note the use of container.ToHostExposedEndpoint(...): this always resolves to a working IP and port to communicate with the docker container. It is also possible to map a random port, i.e. let Docker choose an available port. For example:
```csharp
using (var container = new Builder().UseContainer()
                        .UseImage("kiasaki/alpine-postgres")
                        .ExposePort(5432)
                        .WithEnvironment("POSTGRES_PASSWORD=mysecretpassword")
                        .Build()
                        .Start())
{
  var endpoint = container.ToHostExposedEndpoint("5432/tcp");
  Assert.AreNotEqual(0, endpoint.Port);
}
```
The only difference is that a single argument is used when ExposePort(...) configures the container. The usage is otherwise the same and thus transparent to the code.
In order to know when a certain service is up and running before e.g. connecting to it, it is possible to wait for a specific port to be open. For example:
```csharp
using (var container = new Builder().UseContainer()
                        .UseImage("kiasaki/alpine-postgres")
                        .ExposePort(5432)
                        .WithEnvironment("POSTGRES_PASSWORD=mysecretpassword")
                        .WaitForPort("5432/tcp", 30000 /*30s*/)
                        .Build()
                        .Start())
{
  var config = container.GetConfiguration(true);
  Assert.AreEqual(ServiceRunningState.Running, config.State.ToServiceState());
}
```
In the above example we wait for the container port 5432 to be open within 30 seconds. If this fails, an exception is thrown and the container is disposed and removed (since we don't have any keep-container configuration).
```csharp
using (var container = new Builder().UseContainer()
                        .UseImage("kiasaki/alpine-postgres")
                        .ExposePort(5432)
                        .WithEnvironment("POSTGRES_PASSWORD=mysecretpassword")
                        .WaitForPort("5432/tcp", 30000 /*30s*/, "127.0.0.1")
                        .Build()
                        .Start())
{
  var config = container.GetConfiguration(true);
  Assert.AreEqual(ServiceRunningState.Running, config.State.ToServiceState());
}
```
Sometimes it is not possible to reach the container directly by local IP and port; instead, e.g., the container has an exposed port on the loopback interface (127.0.0.1) and that is the only way of reaching the container from the program. The above example forces the address to be 127.0.0.1 but still resolves the host port. By default, FluentDocker uses network inspect on the container to determine the network configuration.
Sometimes it is not sufficient to just wait for a port; waiting for a container process can be much more important. Therefore a wait-for-process method exists in the fluent API, as well as an extension method on the container object. For example:
```csharp
using (var container = new Builder().UseContainer()
                        .UseImage("kiasaki/alpine-postgres")
                        .ExposePort(5432)
                        .WithEnvironment("POSTGRES_PASSWORD=mysecretpassword")
                        .WaitForProcess("postgres", 30000 /*30s*/)
                        .Build()
                        .Start())
{
  var config = container.GetConfiguration(true);
  Assert.AreEqual(ServiceRunningState.Running, config.State.ToServiceState());
}
```
In the above example Build() returns control when the process "postgres" has been started within the container.
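The same wait can also be done after the container has started, via the extension method on the container object mentioned above; a hedged sketch (the exact signature of the extension is an assumption):

```csharp
using (var container = new Builder().UseContainer()
                        .UseImage("kiasaki/alpine-postgres")
                        .WithEnvironment("POSTGRES_PASSWORD=mysecretpassword")
                        .Build()
                        .Start())
{
  // Wait on the already started container instead of during Build().
  container.WaitForProcess("postgres", 30000 /*30s*/);
}
```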
In order to make use

