
CouchDB replication keeps reverting from "New Remote Database" to "Local Database"


    PS C:\Users\jj2> docker ps -a
    CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                                                  NAMES
    aacbb0c8f189        couchdb:2.1.1       "tini -- /docker-ent…"   15 seconds ago      Up 12 seconds       4369/tcp, 9100/tcp, 0.0.0.0:15984->5984/tcp, 0.0.0.0:15986->5986/tcp   jj2_server-1_1
    b00138d9c030        couchdb:2.1.1       "tini -- /docker-ent…"   16 seconds ago      Up 12 seconds       4369/tcp, 9100/tcp, 0.0.0.0:25984->5984/tcp, 0.0.0.0:25986->5986/tcp   jj2_server-2_1
    e4c984413ac1        couchdb:2.1.1       "tini -- /docker-ent…"   16 seconds ago      Up 12 seconds       0.0.0.0:5984->5984/tcp, 4369/tcp, 9100/tcp, 0.0.0.0:5986->5986/tcp     jj2_server-0_1
    

    I can bring up Fauxton for each instance like this:

    http://127.0.0.1:5984/
    http://127.0.0.1:15984/
    http://127.0.0.1:25984/
    

    Now I'm trying to set up replication on the main container, but I must be messing up the value for the replication target. Here are the values I specified:

    Replication Source:    Local Database
    Source Name:   widgets
    Replication Target:    New Remote Database
    New Database: http://127.0.0.1:15984/widgets
    Replication Type:  Continuous
    

    The raw replication configuration JSON looks like this:

    {
      "_id": "310ab1c7a68d4ae4aba039d2fa00320f",
      "_rev": "2-cf1a3abced5f09ceebd9d54f42ebd65d",
      "user_ctx": {
        "name": "couchdb",
        "roles": [
          "_admin",
          "_reader",
          "_writer"
        ]
      },
      "source": {
        "headers": {
          "Authorization": "Basic Y291Y2hkYjpwYXNzd29yZA=="
        },
        "url": "http://127.0.0.1:5984/widgets"
      },
      "target": {
        "headers": {
          "Authorization": "Basic Y291Y2hkYjpwYXNzd29yZA=="
        },
        "url": "http://127.0.0.1:15984/widgets"
      },
      "create_target": true,
      "continuous": true,
      "owner": "couchdb"
    }
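
    For reference, a minimal sketch of creating the same replication from the command line by POSTing a document to the _replicator database. This assumes the _replicator database exists and that couchdb/password are placeholders for the real admin credentials; the quoting of the JSON body is written for a bash-style shell and may need adjusting in PowerShell:

    curl -X POST "http://127.0.0.1:5984/_replicator" \
         --user couchdb \
         -H "Content-Type: application/json" \
         -d '{"source": "http://couchdb:password@127.0.0.1:5984/widgets",
              "target": "http://couchdb:password@127.0.0.1:15984/widgets",
              "create_target": true,
              "continuous": true}'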
    

    Any suggestions would be greatly appreciated.

    One thing I'll add is that the setup wizard has not been run on the other two nodes yet. Meaning, I created the cluster, but when I open the web app on those nodes it prompts me to create either a single node or a cluster. Do I have to set up each node as a single node before replication will work?
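
    For what it's worth, each node's setup-wizard state can also be checked over HTTP via the _cluster_setup endpoint (CouchDB 2.x). A minimal sketch, reusing the couchdb admin user from the replication document above (bash-style quoting):

    curl -X GET "http://127.0.0.1:5984/_cluster_setup" --user couchdb     # server-0
    curl -X GET "http://127.0.0.1:15984/_cluster_setup" --user couchdb    # server-1
    # once all nodes are joined, the wizard is normally finished with:
    curl -X POST "http://127.0.0.1:5984/_cluster_setup" --user couchdb -H "Content-Type: application/json" -d '{"action":"finish_cluster"}'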

    Also, this is how I originally created the cluster/containers: https://github.com/apache/couchdb-docker/issues/74 . I used the docker-compose.yml file from that issue.
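
    For context, this is roughly the kind of compose file that produces the containers shown above. It is a sketch reconstructed from the docker ps and docker network inspect output in this question, not the exact file from that issue; the environment variables in particular are assumptions:

    version: "3"
    services:
      server-0:
        image: couchdb:2.1.1
        hostname: couchdb-0
        environment:
          - NODENAME=couchdb-0        # assumed; matches node name couchdb@couchdb-0
          - COUCHDB_USER=couchdb
          - COUCHDB_PASSWORD=password # placeholder
        ports:
          - "5984:5984"
          - "5986:5986"
        networks:
          - network
      server-1:
        image: couchdb:2.1.1
        hostname: couchdb-1
        environment:
          - NODENAME=couchdb-1
          - COUCHDB_USER=couchdb
          - COUCHDB_PASSWORD=password
        ports:
          - "15984:5984"
          - "15986:5986"
        networks:
          - network
      # server-2 follows the same pattern with ports 25984/25986
    networks:
      network: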

    EDIT 2


    As far as the cluster goes, using Fauxton running at 127.0.0.1:5984 (server-0), I added the two other nodes like this:

    couchdb-2:5984 with bind address 0.0.0.0

    Then when I do this (note the port):

     http://127.0.0.1:15984/_node/couchdb@couchdb-1/_config
    

    I get a legitimate JSON response back, indicating that something is running under the name "couchdb-1". However, I realize I am still going through the host to get a view into the couchdb-1 server (server-1).
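
    One way to look at couchdb-1 from inside the Docker network, rather than through the host's port mapping, is to run a request from one of the containers. A sketch, assuming curl is available inside the couchdb:2.1.1 image and that couchdb/password are placeholders:

    PS C:\Users\jj2> docker exec jj2_server-0_1 curl --user couchdb:password "http://couchdb-1:5984/_membership"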

    From the command line I confirmed I have the nodes, like this:

    PS C:\Users\jj2> curl -X GET "http://127.0.0.1:5984/_membership" --user couchdb
    Enter host password for user 'couchdb':
    {"all_nodes":["couchdb@couchdb-0"],"cluster_nodes":["couchdb@couchdb-0","couchdb@couchdb-1","couchdb@couchdb-2"]}
    PS C:\Users\jj2>

    Finally, I thought maybe I could use the container IP addresses that Docker assigned, but none of them are pingable from the host. They are all 172.x.x.x addresses.

    EDIT 3

    In case it helps:

    PS C:\Users\jj2> docker network inspect jj2_network
    [
        {
            "Name": "jj2_network",
            "Id": "a0a799f7069ff49306438d9cb7884399a66470a7f0e9ac5364600c462153f53c",
            "Created": "2020-01-30T21:18:55.5841557Z",
            "Scope": "local",
            "Driver": "bridge",
            "EnableIPv6": false,
            "IPAM": {
                "Driver": "default",
                "Options": null,
                "Config": [
                    {
                        "Subnet": "172.19.0.0/16",
                        "Gateway": "172.19.0.1"
                    }
                ]
            },
            "Internal": false,
            "Attachable": true,
            "Ingress": false,
            "ConfigFrom": {
                "Network": ""
            },
            "ConfigOnly": false,
            "Containers": {
                "006b6d02cd4e962f3df9d6584d58b36b67864872446f2d00209001ec58d3cd52": {
                    "Name": "jj2_server-1_1",
                    "EndpointID": "91260368a2d5014743b41c9ab863a2acbfe0a8c7f0a18ea7ad35a3c16efb4445",
                    "MacAddress": "02:42:ac:13:00:03",
                    "IPv4Address": "172.19.0.3/16",
                    "IPv6Address": ""
                },
                "15b261831c46fb89cdc83f9deb638ada0d9d8a89ece0bc065e0a45818e9b4ce3": {
                    "Name": "jj2_server-2_1",
                    "EndpointID": "cf072d0bbd95ab86308ac4c15b71b47223b09484506e07e5233d526f46baca1e",
                    "MacAddress": "02:42:ac:13:00:04",
                    "IPv4Address": "172.19.0.4/16",
                    "IPv6Address": ""
                },
                "aeaf74cf591cffa8e7463e82b75e9ca57ebbcfd1a84d3f893ea5dcae324dbd1e": {
                    "Name": "jj2_server-0_1",
                    "EndpointID": "0a6d66b95bf973f0432b9ae88c61709e63f9e51c6bbf92e35ddf6eab5f694cc1",
                    "MacAddress": "02:42:ac:13:00:02",
                    "IPv4Address": "172.19.0.2/16",
                    "IPv6Address": ""
                }
            },
            "Options": {},
            "Labels": {
                "com.docker.compose.network": "network",
                "com.docker.compose.project": "jj2",
                "com.docker.compose.version": "1.24.1"
            }
        }
    ]
    