Failed Port Is Already Allocated Issue 1114 Docker For Mac Github


Bug reports for Docker Desktop for Mac are tracked in the docker/for-mac repository on GitHub. One reason I've come across for this error: in some versions of Docker (pre-1.2), there is a bug where, if Docker detects that a port is already assigned (perhaps even to a system program such as nginx, rather than to a docker run container), it continues to fail to reassign that port even after you stop the conflicting process, until you restart Docker. The "Logs and troubleshooting" page (estimated reading time: 16 minutes) contains information on how to diagnose and troubleshoot Docker Desktop issues, request Docker Desktop support (Pro and Team plan users only), send logs and communicate with the Docker Desktop team, use the forums and Success Center, browse and log issues on GitHub, and find workarounds for known problems.

Estimated reading time: 28 minutes

Device Mapper is a kernel-based framework that underpins many advanced volume management technologies on Linux. Docker’s devicemapper storage driver leverages the thin provisioning and snapshotting capabilities of this framework for image and container management. This article refers to the Device Mapper storage driver as devicemapper, and to the kernel framework as Device Mapper.

For the systems where it is supported, devicemapper support is included in the Linux kernel. However, specific configuration is required to use it with Docker.

The devicemapper driver uses block devices dedicated to Docker and operates at the block level, rather than the file level. These devices can be extended by adding physical storage to your Docker host, and they perform better than using a filesystem at the operating system (OS) level.


  • devicemapper is supported on Docker Engine - Community running on CentOS, Fedora, Ubuntu, or Debian.
  • devicemapper requires the lvm2 and device-mapper-persistent-data packages to be installed.
  • Changing the storage driver makes any containers you have already created inaccessible on the local system. Use docker save to save containers, and push existing images to Docker Hub or a private repository, so you do not need to recreate them later.

Configure Docker with the devicemapper storage driver

Before following these procedures, you must first meet all the prerequisites.

Configure loop-lvm mode for testing

This configuration is only appropriate for testing. The loop-lvm mode makes use of a ‘loopback’ mechanism that allows files on the local disk to be read from and written to as if they were an actual physical disk or block device. However, the addition of the loopback mechanism, and its interaction with the OS filesystem layer, means that IO operations can be slow and resource-intensive. Use of loopback devices can also introduce race conditions. Setting up loop-lvm mode can nevertheless help identify basic issues (such as missing user space packages, kernel drivers, etc.) ahead of attempting the more complex setup required to enable direct-lvm mode. loop-lvm mode should therefore only be used to perform rudimentary testing prior to configuring direct-lvm.

For production systems, seeConfigure direct-lvm mode for production.

  1. Stop Docker.

  2. Edit /etc/docker/daemon.json. If it does not yet exist, create it. Assuming that the file was empty, add the following contents.

    See all storage options for each storage driver in the daemon reference documentation.

    Docker does not start if the daemon.json file contains badly-formed JSON.

  3. Start Docker.

  4. Verify that the daemon is using the devicemapper storage driver. Use the docker info command and look for Storage Driver.

This host is running in loop-lvm mode, which is not supported on production systems. This is indicated by the fact that the Data loop file and Metadata loop file are files under /var/lib/docker/devicemapper. These are loopback-mounted sparse files. For production systems, see Configure direct-lvm mode for production.
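Pulling the steps above together, a minimal loop-lvm setup might look like the following sketch. The daemon.json is written to a local file here for illustration; on the host you would place the content in /etc/docker/daemon.json between stopping and starting Docker:

```shell
# Minimal daemon.json enabling the devicemapper driver in loop-lvm mode.
# Written to a local file for illustration; copy it to /etc/docker/daemon.json
# while the Docker daemon is stopped.
cat > daemon.json <<'EOF'
{
  "storage-driver": "devicemapper"
}
EOF
cat daemon.json
# On the host, after 'sudo systemctl start docker', verify with:
#   docker info | grep 'Storage Driver'
```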


Configure direct-lvm mode for production

Production hosts using the devicemapper storage driver must use direct-lvm mode. This mode uses block devices to create the thin pool. This is faster than using loopback devices, uses system resources more efficiently, and block devices can grow as needed. However, more setup is required than in loop-lvm mode.

After you have satisfied the prerequisites, follow the steps below to configure Docker to use the devicemapper storage driver in direct-lvm mode.

Warning: Changing the storage driver makes any containers you have already created inaccessible on the local system. Use docker save to save containers, and push existing images to Docker Hub or a private repository, so you do not need to recreate them later.

Allow Docker to configure direct-lvm mode

Docker can manage the block device for you, simplifying configuration of direct-lvm mode. This is appropriate for fresh Docker setups only. You can only use a single block device. If you need to use multiple block devices, configure direct-lvm mode manually instead. The following new configuration options are available:

  • dm.directlvm_device: The path to the block device to configure for direct-lvm. Required: Yes. Example: dm.directlvm_device='/dev/xvdf'
  • dm.thinp_percent: The percentage of space to use for storage from the passed-in block device. Required: No. Default: 95. Example: dm.thinp_percent=95
  • dm.thinp_metapercent: The percentage of space to use for metadata storage from the passed-in block device. Required: No. Default: 1. Example: dm.thinp_metapercent=1
  • dm.thinp_autoextend_threshold: The threshold for when lvm should automatically extend the thin pool, as a percentage of the total storage space. Required: No. Default: 80. Example: dm.thinp_autoextend_threshold=80
  • dm.thinp_autoextend_percent: The percentage to increase the thin pool by when an autoextend is triggered. Required: No. Default: 20. Example: dm.thinp_autoextend_percent=20
  • dm.directlvm_device_force: Whether to format the block device even if a filesystem already exists on it. If set to false and a filesystem is present, an error is logged and the filesystem is left intact. Required: No. Default: false. Example: dm.directlvm_device_force=true

Edit the daemon.json file and set the appropriate options, then restart Docker for the changes to take effect. The following daemon.json configuration sets all of the options in the table above.
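As a sketch, a daemon.json setting all six options might look like this, using the example device /dev/xvdf from the table (the file is written locally here for illustration; on the host it belongs at /etc/docker/daemon.json):

```shell
# daemon.json sketch with every direct-lvm option from the table above.
# /dev/xvdf is the example device name; substitute your own block device.
cat > daemon.json <<'EOF'
{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.directlvm_device=/dev/xvdf",
    "dm.thinp_percent=95",
    "dm.thinp_metapercent=1",
    "dm.thinp_autoextend_threshold=80",
    "dm.thinp_autoextend_percent=20",
    "dm.directlvm_device_force=false"
  ]
}
EOF
cat daemon.json
```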


See all storage options for each storage driver in the daemon reference documentation.


Restart Docker for the changes to take effect. Docker invokes the commands to configure the block device for you.


Warning: Changing these values after Docker has prepared the block device for you is not supported and causes an error.

You still need to perform periodic maintenance tasks.

Configure direct-lvm mode manually

The procedure below creates a logical volume configured as a thin pool to use as backing for the storage pool. It assumes that you have a spare block device at /dev/xvdf with enough free space to complete the task. The device identifier and volume sizes may be different in your environment and you should substitute your own values throughout the procedure. The procedure also assumes that the Docker daemon is in the stopped state.


  1. Identify the block device you want to use. The device is located under /dev/ (such as /dev/xvdf) and needs enough free space to store the images and container layers for the workloads that host runs. A solid state drive is ideal.

  2. Stop Docker.

  3. Install the following packages:

    • RHEL / CentOS: device-mapper-persistent-data, lvm2, and all dependencies

    • Ubuntu / Debian: thin-provisioning-tools, lvm2, and all dependencies

  4. Create a physical volume on your block device from step 1, using the pvcreate command. Substitute your device name for /dev/xvdf.

    Warning: The next few steps are destructive, so be sure that you have specified the correct device!

  5. Create a docker volume group on the same device, using the vgcreate command.

  6. Create two logical volumes named thinpool and thinpoolmeta using the lvcreate command. The last parameter specifies the amount of free space to allow for automatic expanding of the data or metadata if space runs low, as a temporary stop-gap. These are the recommended values.

  7. Convert the volumes to a thin pool and a storage location for metadata for the thin pool, using the lvconvert command.

  8. Configure autoextension of thin pools via an lvm profile.

  9. Specify thin_pool_autoextend_threshold and thin_pool_autoextend_percent values.

    thin_pool_autoextend_threshold is the percentage of space used before lvm attempts to autoextend the available space (100 = disabled, not recommended).

    thin_pool_autoextend_percent is the amount of space to add to the device when automatically extending (0 = disabled).

    The example below adds 20% more capacity when the disk usage reaches 80%.

    Save the file.

  10. Apply the LVM profile, using the lvchange command.

  11. Ensure monitoring of the logical volume is enabled.

    If the output in the Monitor column reports, as above, that the volume is not monitored, then monitoring needs to be explicitly enabled. Without this step, automatic extension of the logical volume will not occur, regardless of any settings in the applied profile.

    Double check that monitoring is now enabled by running the sudo lvs -o+seg_monitor command a second time. The Monitor column should now report that the logical volume is being monitored.

  12. If you have ever run Docker on this host before, or if /var/lib/docker/ exists, move it out of the way so that Docker can use the new LVM pool to store the contents of images and containers.

    If any of the following steps fail and you need to restore, you can remove /var/lib/docker and replace it with /var/lib/docker.bk.

  13. Edit /etc/docker/daemon.json and configure the options needed for the devicemapper storage driver. If the file was previously empty, it should now contain the following contents:

  14. Start Docker.



  15. Verify that Docker is using the new configuration using docker info.

    If Docker is configured correctly, the Data file and Metadata file are blank, and the pool name is docker-thinpool.

  16. After you have verified that the configuration is correct, you can remove the /var/lib/docker.bk directory which contains the previous configuration.
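Collected into one place, the commands for the steps above might look like the following sketch. The device name /dev/xvdf, the volume group name docker, and the 95%VG / 1%VG sizes are the example values from this procedure; adapt them to your host before running anything, since the early steps are destructive:

```shell
# 4-5. Physical volume and volume group on the spare device (destructive!).
sudo pvcreate /dev/xvdf
sudo vgcreate docker /dev/xvdf

# 6. Data and metadata logical volumes, using the recommended sizes.
sudo lvcreate --wipesignatures y -n thinpool docker -l 95%VG
sudo lvcreate --wipesignatures y -n thinpoolmeta docker -l 1%VG

# 7. Convert the pair into a thin pool.
sudo lvconvert -y --zero n -c 512K \
  --thinpool docker/thinpool --poolmetadata docker/thinpoolmeta

# 8-9. LVM profile for autoextension (80% threshold, 20% increments).
sudo tee /etc/lvm/profile/docker-thinpool.profile <<'EOF'
activation {
  thin_pool_autoextend_threshold=80
  thin_pool_autoextend_percent=20
}
EOF

# 10-11. Apply the profile and confirm monitoring is enabled.
sudo lvchange --metadataprofile docker-thinpool docker/thinpool
sudo lvs -o+seg_monitor

# 12. Preserve any previous Docker state.
sudo mkdir /var/lib/docker.bk
sudo mv /var/lib/docker/* /var/lib/docker.bk

# 13. Point Docker at the thin pool.
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.thinpooldev=/dev/mapper/docker-thinpool",
    "dm.use_deferred_removal=true",
    "dm.use_deferred_deletion=true"
  ]
}
EOF

# 14-15. Start Docker and verify the new configuration.
sudo systemctl start docker
docker info | grep -A 2 'Storage Driver'
```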

Manage devicemapper

Monitor the thin pool

Do not rely on LVM auto-extension alone. The volume group automatically extends, but the volume can still fill up. You can monitor free space on the volume using lvs or lvs -a. Consider using a monitoring tool at the OS level, such as Nagios.

To view the LVM logs, you can use journalctl:
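For example (dm-event is the daemon that monitors thin pools; the exact unit name may differ by distribution):

```shell
# Follow the logs of the LVM monitoring daemon.
sudo journalctl -fu dm-event.service
```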

If you run into repeated problems with the thin pool, you can set the storage option dm.min_free_space to a value (representing a percentage) in /etc/docker/daemon.json. For instance, setting it to 10 ensures that operations fail with a warning when the free space is at or near 10%. See the storage driver options in the Engine daemon reference.
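A sketch of such a daemon.json fragment, written to a local file here for illustration (merge the storage-opts entry into your real /etc/docker/daemon.json):

```shell
# daemon.json fragment reserving a 10% minimum of free space in the thin pool.
cat > daemon.json <<'EOF'
{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.min_free_space=10%"
  ]
}
EOF
cat daemon.json
```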

Increase capacity on a running device

You can increase the capacity of the pool on a running thin-pool device. This is useful if the data’s logical volume is full and the volume group is at full capacity. The specific procedure depends on whether you are using a loop-lvm thin pool or a direct-lvm thin pool.

Resize a loop-lvm thin pool

The easiest way to resize a loop-lvm thin pool is to use the device_tool utility, but you can use operating system utilities instead.

Use the device_tool utility

A community-contributed script called device_tool.go is available in the moby/moby GitHub repository. You can use this tool to resize a loop-lvm thin pool, avoiding the long process above. This tool is not guaranteed to work, but you should only be using loop-lvm on non-production systems.

If you do not want to use device_tool, you can resize the thin pool manually instead.

  1. To use the tool, clone the GitHub repository, change to the contrib/docker-device-tool directory, and follow the instructions in the README.md to compile the tool.

  2. Use the tool. The following example resizes the thin pool to 200GB.
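As a sketch of the two steps, assuming a Go toolchain is installed (the build command is an assumption here; follow the README.md for the authoritative steps):

```shell
# Fetch and build the community device_tool, then resize the pool to 200GB.
git clone https://github.com/moby/moby.git
cd moby/contrib/docker-device-tool
go build -o device_tool device_tool.go   # assumption; see README.md
sudo ./device_tool resize 200GB
```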

Use operating system utilities

If you do not want to use the device-tool utility, you can resize a loop-lvm thin pool manually using the following procedure.

In loop-lvm mode, a loopback device is used to store the data, and another to store the metadata. loop-lvm mode is only supported for testing, because it has significant performance and stability drawbacks.

If you are using loop-lvm mode, the output of docker info shows file paths for Data loop file and Metadata loop file:
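For instance, on a loop-lvm host you might check with:

```shell
# Report the backing sparse files of a loop-lvm thin pool, if any.
docker info | grep 'loop file'
```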

Follow these steps to increase the size of the thin pool. In this example, the thin pool is 100 GB, and is increased to 200 GB.

  1. List the sizes of the devices.

  2. Increase the size of the data file to 200 G using the truncate command, which is used to increase or decrease the size of a file. Note that decreasing the size is a destructive operation.

  3. Verify the file size changed.

  4. The loopback file has changed on disk but not in memory. List the size of the loopback device in memory, in GB. Reload it, then list the size again. After the reload, the size is 200 GB.

  5. Reload the devicemapper thin pool.

    a. Get the pool name first. The pool name is the first field, delimited by ':'. This command extracts it.

    b. Dump the device mapper table for the thin pool.

    c. Calculate the total sectors of the thin pool using the second field of the output. The number is expressed in 512-byte sectors. A 100G file has 209715200 512-byte sectors. If you double this number to 200G, you get 419430400 512-byte sectors.

    d. Reload the thin pool with the new sector number, using the following three dmsetup commands.
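The truncate step and the sector arithmetic can be sketched as follows. The scratch file stands in for the data loop file (which lives under /var/lib/docker on a real host), and docker-8:1-123141-pool is a made-up pool name; yours comes from step 5a:

```shell
# Stand-in for the data loop file; sparse, so it uses almost no real disk.
truncate -s 100G pool-data
truncate -s 200G pool-data            # grow to 200 GB; shrinking would destroy data
stat -c %s pool-data                  # 214748364800 bytes
echo $(( $(stat -c %s pool-data) / 512 ))   # 419430400 512-byte sectors

# On the real host, the thin pool is then reloaded with the new sector count
# (pool name and table fields below are illustrative):
#   sudo dmsetup suspend docker-8:1-123141-pool
#   sudo dmsetup reload docker-8:1-123141-pool \
#     --table '0 419430400 thin-pool 7:1 7:0 128 32768 1 skip_block_zeroing'
#   sudo dmsetup resume docker-8:1-123141-pool
```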

Resize a direct-lvm thin pool

To extend a direct-lvm thin pool, you need to first attach a new block device to the Docker host, and make note of the name assigned to it by the kernel. In this example, the new block device is /dev/xvdg.

Follow this procedure to extend a direct-lvm thin pool, substituting your block device and other parameters to suit your situation.

  1. Gather information about your volume group.

    Use the pvdisplay command to find the physical block devices currently in use by your thin pool, and the volume group’s name.

    In the following steps, substitute your block device or volume group name as appropriate.

  2. Extend the volume group, using the vgextend command with the VG Name from the previous step, and the name of your new block device.

  3. Extend the docker/thinpool logical volume. This command uses 100% of the volume right away, without auto-extend. To extend the metadata thinpool instead, use docker/thinpool_tmeta.

  4. Verify the new thin pool size using the Data Space Available field in the output of docker info. If you extended the docker/thinpool_tmeta logical volume instead, look for Metadata Space Available.
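The four steps above can be sketched as follows, using the docker volume group and /dev/xvdg device names from this example:

```shell
# 1. Find the devices and volume group backing the thin pool.
sudo pvdisplay | grep -E 'PV Name|VG Name'

# 2. Add the new block device to the volume group.
sudo vgextend docker /dev/xvdg

# 3. Grow the data volume into all of the newly added space, without auto-extend.
sudo lvextend -l+100%FREE -n docker/thinpool

# 4. Confirm that the extra capacity is visible to Docker.
docker info | grep 'Data Space Available'
```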

Activate the devicemapper after reboot

If you reboot the host and find that the docker service failed to start, look for the error, “Non existing device”. You need to re-activate the logical volumes with this command:
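A sketch of the re-activation, using the docker/thinpool names from the setup procedure above:

```shell
# Re-activate the Docker thin pool volumes after a reboot.
sudo lvchange -ay docker/thinpool
```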

How the devicemapper storage driver works

Warning: Do not directly manipulate any files or directories within/var/lib/docker/. These files and directories are managed by Docker.

Use the lsblk command to see the devices and their pools, from the operating system’s point of view:

Use the mount command to see the mount-point Docker is using:
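For example (output varies by host; on a machine not using devicemapper, the grep finds nothing):

```shell
# Devices and their pools from the OS point of view.
lsblk

# The mount point Docker is using for the devicemapper driver, if any.
mount | grep devicemapper || true
```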

When you use devicemapper, Docker stores image and layer contents in the thinpool, and exposes them to containers by mounting them under subdirectories of /var/lib/docker/devicemapper/.

Image and container layers on-disk

The /var/lib/docker/devicemapper/metadata/ directory contains metadata about the Devicemapper configuration itself and about each image and container layer that exists. The devicemapper storage driver uses snapshots, and this metadata includes information about those snapshots. These files are in JSON format.

The /var/lib/docker/devicemapper/mnt/ directory contains a mount point for each image and container layer that exists. Image layer mount points are empty, but a container’s mount point shows the container’s filesystem as it appears from within the container.

Image layering and sharing

The devicemapper storage driver uses dedicated block devices rather than formatted filesystems, and operates on files at the block level for maximum performance during copy-on-write (CoW) operations.


Another feature of devicemapper is its use of snapshots (also sometimes called thin devices or virtual devices), which store the differences introduced in each layer as very small, lightweight thin pools. Snapshots provide many benefits:

  • Layers which are shared in common between containers are only stored on disk once, unless they are writable. For instance, if you have 10 different images which are all based on alpine, the alpine image and all its parent images are only stored once each on disk.

  • Snapshots are an implementation of a copy-on-write (CoW) strategy. This means that a given file or directory is only copied to the container’s writable layer when it is modified or deleted by that container.

  • Because devicemapper operates at the block level, multiple blocks in a writable layer can be modified simultaneously.

  • Snapshots can be backed up using standard OS-level backup utilities. Just make a copy of /var/lib/docker/devicemapper/.

Devicemapper workflow

When you start Docker with the devicemapper storage driver, all objects related to image and container layers are stored in /var/lib/docker/devicemapper/, which is backed by one or more block-level devices, either loopback devices (testing only) or physical disks.

  • The base device is the lowest-level object. This is the thin pool itself. You can examine it using docker info. It contains a filesystem. This base device is the starting point for every image and container layer. The base device is a Device Mapper implementation detail, rather than a Docker layer.

  • Metadata about the base device and each image or container layer is stored in /var/lib/docker/devicemapper/metadata/ in JSON format. These layers are copy-on-write snapshots, which means that they are empty until they diverge from their parent layers.

  • Each container’s writable layer is mounted on a mountpoint in /var/lib/docker/devicemapper/mnt/. An empty directory exists for each read-only image layer and each stopped container.

Each image layer is a snapshot of the layer below it. The lowest layer of each image is a snapshot of the base device that exists in the pool. When you run a container, it is a snapshot of the image the container is based on. The following example shows a Docker host with two running containers. The first is a ubuntu container and the second is a busybox container.

How container reads and writes work with devicemapper

Reading files

With devicemapper, reads happen at the block level. The diagram below shows the high-level process for reading a single block (0x44f) in an example container.

An application makes a read request for block 0x44f in the container. Because the container is a thin snapshot of an image, it doesn’t have the block, but it has a pointer to the block on the nearest parent image where it does exist, and it reads the block from there. The block now exists in the container’s memory.

Writing files

Writing a new file: With the devicemapper driver, writing new data to a container is accomplished by an allocate-on-demand operation. Each block of the new file is allocated in the container’s writable layer and the block is written there.

Updating an existing file: The relevant block of the file is read from the nearest layer where it exists. When the container writes the file, only the modified blocks are written to the container’s writable layer.

Deleting a file or directory: When you delete a file or directory in a container’s writable layer, or when an image layer deletes a file that exists in its parent layer, the devicemapper storage driver intercepts further read attempts on that file or directory and responds that the file or directory does not exist.

Writing and then deleting a file: If a container writes to a file and later deletes the file, all of those operations happen in the container’s writable layer. In that case, if you are using direct-lvm, the blocks are freed. If you use loop-lvm, the blocks may not be freed. This is another reason not to use loop-lvm in production.

Device Mapper and Docker performance

  • Allocate-on-demand performance impact:

    The devicemapper storage driver uses an allocate-on-demand operation to allocate new blocks from the thin pool into a container’s writable layer. Each block is 64KB, so this is the minimum amount of space that is used for a write.

  • Copy-on-write performance impact: The first time a container modifies a specific block, that block is written to the container’s writable layer. Because these writes happen at the level of the block rather than the file, performance impact is minimized. However, writing a large number of blocks can still negatively impact performance, and the devicemapper storage driver may actually perform worse than other storage drivers in this scenario. For write-heavy workloads, you should use data volumes, which bypass the storage driver completely.

Performance best practices

Keep these things in mind to maximize performance when using the devicemapper storage driver.

  • Use direct-lvm: The loop-lvm mode is not performant and should never be used in production.

  • Use fast storage: Solid-state drives (SSDs) provide faster reads and writes than spinning disks.

  • Memory usage: devicemapper uses more memory than some other storage drivers. Each launched container loads one or more copies of its files into memory, depending on how many blocks of the same file are being modified at the same time. Due to the memory pressure, the devicemapper storage driver may not be the right choice for certain workloads in high-density use cases.

  • Use volumes for write-heavy workloads: Volumes provide the best and most predictable performance for write-heavy workloads. This is because they bypass the storage driver and do not incur any of the potential overheads introduced by thin provisioning and copy-on-write. Volumes have other benefits, such as allowing you to share data among containers and persisting even when no running container is using them.

  • Note: when using devicemapper and the json-file log driver, the log files generated by a container are still stored in Docker’s data root directory, by default /var/lib/docker. If your containers generate lots of log messages, this may lead to increased disk usage or the inability to manage your system due to a full disk. You can configure a log driver to store your container logs externally.

I recently had an issue with my server after an apt upgrade/update yielded some errors. I sorted those errors, but it left me with 513 errors saying that my server was not reachable.
I rebooted and the 513 errors went away. Most apps work: Plex, the *arr apps, and nzbget all work fine (which is a relief); however, xteve and portainer do not load, producing 404 errors.
When I try sudo pg I am told that Traefik is not deployed properly. When I deploy, I get this error:
TASK [Deploying portainer] ******************************************************************************************************************
Monday 02 March 2020 19:10:17 +0100 (0:00:00.027) 0:00:03.486 **********
fatal: []: FAILED! => {'changed': false, 'msg': 'Error starting container 43ae2c9669740fa52943241acc5f3ff38494e2cc4bfd74bce38d9a3be14fc361: 500 Server Error: Internal Server Error ('driver failed programming external connectivity on endpoint portainer (ee18df8139460a0a00617648b7d66e5390d7fb5d030ad1b919b5b118915f980e): Bind for failed: port is already allocated')'}
to retry, use: --limit @/opt/coreapps/apps/portainer.retry

Portainer is set as the top-level domain app. Nothing else has changed since the failed update caused everything to crash and burn. Any idea how to free up the port, other than rebooting again and hoping it's free?
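One way to find and free the allocated port without a reboot, sketched here assuming Portainer's usual port 9000 (substitute the port from your setup, and your container name):

```shell
# Which host process is listening on the port?
sudo lsof -iTCP:9000 -sTCP:LISTEN

# Is a (possibly stale) container already publishing it?
docker ps -a --filter "publish=9000"

# Remove the stale container, then redeploy.
docker rm -f portainer

# If the bind still fails with nothing listening, restart the daemon to clear
# a stale docker-proxy binding (the pre-1.2 bug described at the top of this page).
sudo systemctl restart docker
```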
